
Data Pump


Attach and Detach

While a Data Pump job is running, you can press [Ctrl]-C (or the equivalent on your client) to "detach" from the job. Messages stop appearing on your client, but the job keeps running inside the database. Your client is placed in interactive mode (an Export> prompt for expdp, an Import> prompt for impdp). Type STATUS at the prompt to check the job's progress. To attach to an already running job, run:

expdp system/mypassword attach=<JobName>

expdp system/******** attach=xSCOTT

Use the JOB_NAME parameter for all your Data Pump jobs so you can easily identify and manage them.
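
A minimal example of setting it (the datapump directory object and scott schema are assumptions for illustration):

expdp system DIRECTORY=datapump DUMPFILE=scott.dmp SCHEMAS=scott JOB_NAME=xSCOTT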

To see what Data Pump jobs are running you can use this:

SELECT j.owner_name, j.job_name, j.job_mode, j.state, s.session_type, s.saddr
FROM dba_datapump_jobs j, dba_datapump_sessions s
WHERE UPPER(j.job_name) = UPPER(s.job_name);

Compression

In Oracle 11g and later you can use the COMPRESSION option. The values are: ALL, METADATA_ONLY (the default), DATA_ONLY, and NONE. Import works the same whether the dump file is compressed or uncompressed.

New 12c Option

 COMPRESSION_ALGORITHM [BASIC | LOW | MEDIUM | HIGH]
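
A minimal export sketch combining both parameters (the directory object, dump file, and schema names are assumptions):

expdp system DIRECTORY=datapump DUMPFILE=scott.dmp SCHEMAS=scott COMPRESSION=ALL COMPRESSION_ALGORITHM=MEDIUM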

Comparison exporting a 6 GB schema. The schema in this test contained significant BLOB data. COMPRESSION=ALL was used for the compressed test.

Type       Elapsed   File Size
---------  --------  ---------
NONE        5 min.   1.5 GB
ALL         7 min.   258 KB
gzip (OS)  13 min.   222 KB

DDL

You can use the SQLFILE parameter on an import to extract the DDL commands needed to create all the objects in your dump file; nothing is actually imported, only the SQL file is written. Make sure to comment out any mode parameter (FULL, SCHEMAS, TABLES, etc.) in the parameter file.

Example .par File:

# impdp system/mypassword PARFILE=impDDL.scott.par
DIRECTORY=datapump
DUMPFILE=scott.dmp
JOB_NAME=getDDL
LOGFILE=impSchemaDDL.log

#SCHEMAS=scott
SQLFILE=scott.ddl.txt

The same method can be used on an import (.dmp) to get the DDL commands for just the desired object types:

SQLFILE=scott.procedures.txt
INCLUDE=PROCEDURE:"IN (SELECT object_name from dba_procedures where owner = 'SCOTT')"

Exclude and Include

The EXCLUDE parameter enables you to filter the metadata that is imported by specifying database objects that you want to exclude from the import job. For the given mode of import, all objects contained within the source, and all of their dependent objects, are included except those specified in an EXCLUDE statement.

  • If an object is excluded, then all of its dependent objects are also excluded.
  • More than one EXCLUDE statement can be specified in a parameter file.
  • If you use INCLUDE on a schema export, then only the objects named in the INCLUDE are exported.

To see which objects can be filtered, you can perform queries on the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS.

EXCLUDE=object_type[:name_clause] [, ...]
INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=constraint, index, materialized_view, ref_constraint, statistics
EXCLUDE=GRANT
EXCLUDE=SCHEMA:"= 'HR'"
EXCLUDE=INDEX:"='PK_EMPLOYEE'"
EXCLUDE=INDEX:"LIKE 'EMP%' "
EXCLUDE=FUNCTION:"='REFRESH'", EXCLUDE=VIEW:"='EMP_ALL_VIEW'"

INCLUDE=TABLE:"LIKE 'LOOKUP_%'"

Flashback

To get a time-consistent export, use the FLASHBACK_TIME parameter as shown:

FLASHBACK_TIME=systimestamp
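
FLASHBACK_TIME also accepts a specific point in time via TO_TIMESTAMP; a parameter-file sketch (the date shown is purely illustrative):

FLASHBACK_TIME="TO_TIMESTAMP('2020-01-15 08:00:00','YYYY-MM-DD HH24:MI:SS')"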

Log Time (12c)

Options: LOGTIME = [NONE | STATUS | LOGFILE | ALL]

  • NONE: No timestamps are displayed (the default).
  • STATUS: Timestamps are displayed on status messages only.
  • LOGFILE: Timestamps are displayed on log file messages only.
  • ALL: A combination of STATUS and LOGFILE.
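
For example, to timestamp both console and log file messages (the other parameters are assumptions):

expdp system DIRECTORY=datapump DUMPFILE=scott.dmp SCHEMAS=scott LOGTIME=ALL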

Query

The QUERY parameter enables you to filter the data that is exported by specifying a WHERE clause that is applied to all tables in the export job or to a specific table. On Unix systems, all special characters on the command line need to be escaped; a parameter file avoids this, as sketched after the examples below.

QUERY=[schema.][table_name:]query_clause
QUERY="WHERE state='FL' "
QUERY=hr.emp:"WHERE job = 'ENGINEER' OR sal >= 5000"
QUERY=hr.emp:"WHERE name LIKE 'S%'"

Pause & Resume Job

Pause

 ^C -- Press the [Ctrl]+C keys.
 DataPump> stop_job=immediate
 Are you sure you wish to stop this job ([yes]/no): yes

Alternatively, attach from another session. Example: impdp system attach=impSchema

Resume

 impdp system attach=<JobName>
 DataPump> start_job

Improving Performance

Disable Archive Logging (12c)

Reduces the logging associated with tables and indexes during import.

  • TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
  • TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE   (tables only)
  • TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX   (indexes only)
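
A minimal import sketch using the transform (the directory, dump file, and schema names are assumptions):

impdp system DIRECTORY=datapump DUMPFILE=scott.dmp SCHEMAS=scott TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y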

Caution Using disable_archive_logging

  • An aborted import cannot be recovered from the archive logs; you will need to restore the affected tables.
  • This option has no effect if the database is running in FORCE LOGGING mode (required for a Data Guard system, for instance).

PARALLEL

If you have the Enterprise Edition of Oracle Database, then Data Pump performance can be improved by using the PARALLEL parameter. This can be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read simultaneously.

Example Usage:

DIRECTORY=datapump
DUMPFILE=scott.%U.dmp
FILESIZE=10g
JOB_NAME=ExportSchema
LOGFILE=ExportSchema.scott.log
PARALLEL=4
SCHEMAS=scott
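
The export can then be invoked with the parameter file (the file name is an assumption):

expdp system PARFILE=ExportSchema.scott.par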

According to Oracle, there is no advantage to having a degree of parallelism larger than the number of dump files.

If using RAC, also set the following when using PARALLEL (see RAC Best Practices with Data Pump below):
CLUSTER=N

Exclude Statistics

EXCLUDE=STATISTICS

Some have reported that using the new 12c LOGTIME parameter with a value of STATUS causes a performance hit.

Post Import: Check INVALID Objects and Recompile

In many cases after an import you may need to recompile any INVALID objects.

  1. Check Current INVALID Objects
    SELECT count(*) FROM dba_objects WHERE status='INVALID';
  2. Compile any INVALID Objects
    @$ORACLE_HOME/rdbms/admin/utlrp.sql
  3. Check Current INVALID Objects after compiling.
    SELECT count(*) FROM dba_objects WHERE status='INVALID';
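
To see which objects remain invalid rather than just a count, a query along these lines can help:

SELECT owner, object_type, object_name
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_type, object_name;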

RAC Best Practices with Data Pump

If PARALLEL is used with Data Pump on RAC, processes will be started on all available nodes, which is not ideal!

Why?
It can lead to corruption of the dumpfile as export pieces for the same dumpfile can be written locally to all nodes of the RAC. This can also result in errors reading from the dumpfile during import because pieces of the same dumpfile reside on different nodes of the RAC.

To prevent this from happening add the following parameter to your Data Pump command:
CLUSTER=N

Remap

The remap parameters allow you to import data into a different schema, tablespace, or table.

Remap Schema

REMAP_SCHEMA=source_schema:target_schema
expdp SYSTEM/pw DIRECTORY=exports DUMPFILE=hr.dmp SCHEMAS=hr

impdp SYSTEM/pw DIRECTORY=exports DUMPFILE=hr.dmp REMAP_SCHEMA=hr:dev

Remap Tablespace

REMAP_TABLESPACE=source_tablespace:target_tablespace
impdp system/pw DIRECTORY=datapump DUMPFILE=emp.dmp REMAP_TABLESPACE=hr:dev

Remap Table

REMAP_TABLE=Existing_Table_Name:New_Table_Name
impdp system/pw DIRECTORY=datapump DUMPFILE=emp.dmp TABLES=hr.emp 
                REMAP_TABLE=hr.employees:employees2

The REMAP_TABLE parameter will not work if the table being remapped has named constraints in the same schema and the constraints need to be created when the table is created. One way around this is to rename the original table before the import (ex: ALTER TABLE hr.employees RENAME TO employees_orig;).

Create Tablespace Resize Commands for Destination DB

  • Run this on the source database to get the commands, then resize the destination database as required.
  • It is assumed all the tablespaces on the destination are bigfile tablespaces (BFTs).
SELECT 
 'ALTER TABLESPACE ' || df.tablespace_name || ' RESIZE ' || df.totalspace || 'm;' "Cmd"
FROM
   (select tablespace_name,round(sum(bytes) / 1048576) TotalSpace        
    from dba_data_files group by tablespace_name) df,
   (select round(sum(bytes)/(1024*1024)) totalusedspace, tablespace_name 
    from dba_segments   group by tablespace_name) tu
WHERE df.tablespace_name = tu.tablespace_name
AND df.tablespace_name NOT IN ('SYSTEM','SYSAUX')
AND df.tablespace_name NOT LIKE 'UNDO%'
ORDER BY df.tablespace_name ASC;

It is always better to pre-size your destination tablespaces rather than let the import rely on AUTOEXTEND, which adds excessive time to the import operation.

Overwrite Export File (11g and later)

REUSE_DUMPFILES=YES
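
For example, to overwrite an existing dump file of the same name (the directory, dump file, and schema are assumptions):

expdp system DIRECTORY=datapump DUMPFILE=scott.dmp SCHEMAS=scott REUSE_DUMPFILES=YES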

Windows UNC Path Exports

Exporting to a remote Windows system via a UNC path requires the following:

Local System

  • A dedicated OS account for Oracle. For this example: oradba.
  • The oracle user (oradba) is a member of the (local) administrators and ora_dba groups.
  • The oracle user (oradba) is used to start both the database service and listener service.

Remote System

  • A local account exists with the same name as the local Oracle user (oradba).
  • The oracle user (oradba) is a member of the administrators group.
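
With the OS accounts in place, a directory object can point directly at the UNC path; a sketch (the host, share, and grantee are hypothetical):

CREATE OR REPLACE DIRECTORY datapump_unc AS '\\RemoteHost\exports';
GRANT READ, WRITE ON DIRECTORY datapump_unc TO scott;

The export can then reference it with DIRECTORY=datapump_unc.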
