Data Pump
- Data Pump Concepts
- Export/Import: Database | Schema | Table
Related Topics
- Attach-Detach
- Compression
- DDL
- Exclude-Include
- Flashback (time consistent)
- Job: Status | Kill
- Log Time
- Query
- Moving a Database Using Data Pump
- Network Link Usage
- Packages, Importing
- Pause & Resume Job
- Performance
- Post Import: Check INVALID Objects and Recompile
- Quiescing for Import
- RAC, Best Practices with Data Pump
- Remap
- Resize Messages, Excessive (bug)
- Resize, Create Tablespace Resize Commands for Destination DB
- Overwrite Export File
- "ORA-01555: Snapshot too old" on Exports
- Transportable Tablespaces (TTS)
- UserName & Password in .par File
- Variables Usage
- Version
- Windows UNC Path Exports
Attach and Detach
While a Data Pump job is running, you can press [Ctrl]-C (or the equivalent on your client) to "detach" from the job. The log messages stop coming to your client, but the job keeps running inside the database. Your client is placed in interactive mode (with an Export> or Import> prompt), where you can type STATUS to check on the current job. To attach to an already running job from a new session, run:
expdp system/mypassword attach=<JobName>
expdp system/******** attach=xSCOTT
To see which Data Pump jobs are running, you can use this:
SELECT j.owner_name, j.job_name, j.job_mode, j.state, s.session_type, s.saddr
FROM   dba_datapump_jobs j, dba_datapump_sessions s
WHERE  UPPER(j.job_name) = UPPER(s.job_name);
Compression
In Oracle 11g and later you can use the COMPRESSION option. Valid values are ALL, METADATA_ONLY (the default), DATA_ONLY, and NONE. Import handles compressed and uncompressed dump files the same way; no extra parameters are needed.
New 12c Option
COMPRESSION_ALGORITHM [BASIC | LOW | MEDIUM | HIGH]
- BASIC: Offers a good combination of compression ratio and speed; the algorithm is the same as in previous versions of Oracle Data Pump.
- LOW: Least impact on export throughput; suited for environments where CPU resources are the limiting factor.
- MEDIUM: Recommended for most environments. Like BASIC, it provides a good combination of compression ratio and speed, but it uses a different algorithm than BASIC.
- HIGH: Best suited for situations in which dump files will be copied over slower networks, where the limiting factor is network speed.
Comparison exporting a 6 GB schema containing significant BLOB data. COMPRESSION=ALL was used for the test.
Type       Elapsed   File size
---------  --------  ---------
NONE       5 min.    1.5 gb
ALL        7 min.    258 kb
gzip (OS)  13 min.   222 kb
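As a sketch, a schema export using these options might look like the .par file below; the directory, schema, and file names are placeholders, COMPRESSION=ALL generally requires the Advanced Compression option, and COMPRESSION_ALGORITHM is available in 12c and later only:
# expdp system PARFILE=expScottCompressed.par
DIRECTORY=datapump
DUMPFILE=scott.dmp
LOGFILE=expScottCompressed.log
SCHEMAS=scott
COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM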
DDL
You can get the DDL for most any schema object from a corresponding export file via the SQLFILE and INCLUDE parameters.
- Using SQLFILE causes the DDL creation commands to go to an output file rather than into the database.
- The INCLUDE parameter is used to help define which objects to write to the SQLFILE.
You can use an export of your database as shown here to effectively version-control your DDL.
Example .par File:
# impdp system/mypassword PARFILE=getDDL.scott.par
DIRECTORY=datapump
DUMPFILE=scott.dmp
JOB_NAME=getDDL
LOGFILE=getDDL.log
SCHEMAS=scott
SQLFILE=scott.ddl.txt
The above writes all DDL for the given schema to the SQLFILE, since no INCLUDE parameter is used.
INCLUDE Usage
Adding the INCLUDE parameter, as the following methods show, filters the SQLFILE down to just what you need.
INCLUDE object types you can use:
FUNCTION, PACKAGE, PROCEDURE, SEQUENCE, TRIGGER, TYPE.
You can also use:
DIRECTORY, INDEX, SYNONYM, TABLE, VIEW.
Method 1: Get All DDL From a Class of Objects
SQLFILE=scott.procedures.sql
INCLUDE=PROCEDURE:"IN (SELECT object_name from dba_procedures where owner = 'SCOTT')"
Method 2: Get Just One Object's DDL
SQLFILE=scott.my_proc.sql
INCLUDE=PROCEDURE:" IN('MY_PROC')"
Your IN clause can also have multiple objects: INCLUDE=PACKAGE:" IN('AAA','BBB','CCC')"
Exclude and Include
The EXCLUDE parameter enables you to filter the metadata that is imported by specifying database objects which you want to exclude from the import job. For the given mode of import, all the objects contained within the source and all their dependent objects are included, except for those specified in an EXCLUDE statement.
- If an object is excluded, then all of its dependent objects are also excluded.
- More than one EXCLUDE statement can be specified in a parameter file.
- If you use INCLUDE on a schema export, then only the objects named in the INCLUDE are exported.
To see which objects can be filtered, you can perform queries on the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS.
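For example, one quick way to list the object paths usable in a schema-mode job (the LIKE filter is only an illustration):
SELECT object_path, comments
FROM   schema_export_objects
WHERE  object_path LIKE '%TABLE%'
ORDER  BY object_path;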
EXCLUDE=object_type[:name_clause] [, ...]
INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=constraint, index, materialized_view, ref_constraint, statistics
EXCLUDE=GRANT
EXCLUDE=SCHEMA:"= 'HR'"
EXCLUDE=INDEX:"='PK_EMPLOYEE'"
EXCLUDE=INDEX:"LIKE 'EMP%' "
EXCLUDE=FUNCTION:"='REFRESH'"
EXCLUDE=VIEW:"='EMP_ALL_VIEW'"
INCLUDE=TABLE:"LIKE 'LOOKUP_%'"
Packages, Importing
-- Drop Packages to Be Imported (if they exist)
DROP PACKAGE HR.AAA;
DROP PACKAGE HR.BBB;
DROP PACKAGE HR.CCC;

-- Example .par File
# impdp system/******** PARFILE=impPackages.par
DIRECTORY=U03
DUMPFILE=FullExport.dmp
LOGFILE=impPackages.log
SCHEMAS=HR
INCLUDE=PACKAGE:" IN('AAA','BBB','CCC')"
- You can use this script to create an import .par file from your INVALID database packages.
- Importing a package as shown will also import the package bodies.
- TABLE_EXISTS_ACTION=REPLACE does not work for packages so you must DROP them if they already exist.
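As an illustration only (not the linked script), a query along these lines could generate the DROP commands for a schema's invalid packages; the HR owner is just a placeholder:
SELECT 'DROP PACKAGE ' || owner || '.' || object_name || ';' AS cmd
FROM   dba_objects
WHERE  owner = 'HR'
AND    object_type = 'PACKAGE'
AND    status = 'INVALID';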
Flashback
To get a time-consistent export, use the FLASHBACK_TIME parameter as shown.
FLASHBACK_TIME=systimestamp
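A minimal sketch of a .par file using it (directory, schema, and file names are placeholders):
# expdp system PARFILE=expScottConsistent.par
DIRECTORY=datapump
DUMPFILE=scott.dmp
LOGFILE=expScottConsistent.log
SCHEMAS=scott
FLASHBACK_TIME=systimestamp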
Log Time
Options: LOGTIME = [NONE | STATUS | LOGFILE | ALL]
- NONE: No timestamp information is displayed (default).
- STATUS: Timestamps are displayed on status messages.
- LOGFILE: Same as STATUS, but timestamps appear only on messages written to the log file.
- ALL: A combination of STATUS and LOGFILE.
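For example, adding it to an export command (directory, schema, and file names are placeholders):
expdp system DIRECTORY=datapump DUMPFILE=scott.dmp SCHEMAS=scott LOGTIME=ALL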
Status
Master Job Status
SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state
COL owner.object FORMAT a50

SELECT owner_name, job_name,
       rtrim(operation) "OPERATION",
       rtrim(job_mode) "JOB_MODE",
       state, attached_sessions
FROM   dba_datapump_jobs
WHERE  job_name NOT LIKE 'BIN$%'
ORDER  BY 1, 2;
Child Jobs
SET lines 140
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE owner_name
COL job_mode LIKE owner_name

SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs;
Query
Enables you to filter the data that is exported by specifying a WHERE clause, which is applied either to all tables in the export job or to a specific table. On Unix systems, special characters in the clause need to be escaped.
QUERY=[schema.][table_name:]query_clause
QUERY="WHERE state='FL' " QUERY=hr.emp:"WHERE job = 'ENGINEER' OR sal >= 5000" QUERY=hr.emp:"WHERE name LIKE 'S%'"
Pause & Resume Job
Pause
^C    -- press the CTRL+C keys
Import> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes
Alternatively attach from another session. Example: impdp system attach=impSchema
Resume
impdp system attach=<JobName>
Import> continue_client
As another option you can use: start_job
Improving Performance
Disable Archive Logging
Reduces the logging associated with tables and indexes during import (see the example after the caution notes below).
- TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
- TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE (tables only)
- TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX (indexes only)
Caution Using disable_archive_logging
- If the import aborts, the unlogged data cannot be recovered from redo, so you may need to re-import or restore the affected tables.
- This option has no effect if the database is running in FORCE LOGGING mode (which is required for a Data Guard system, for instance).
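A sketch of an import using this transform (directory, schema, and file names are placeholders):
impdp system DIRECTORY=datapump DUMPFILE=scott.dmp LOGFILE=impScott.log SCHEMAS=scott TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y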
PARALLEL
If you have the Enterprise edition of Oracle database, then Data Pump performance can be improved by using the PARALLEL parameter. This can be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read simultaneously.
PARALLEL Fun Facts!
- Set PARALLEL to two times the number of CPUs, then tune from there.
- Get the number of CPUs:
- SQL> show parameter cpu_count
- OS> nproc
- There is no advantage to having a larger PARALLEL value than there are dmp files.
- While only one Data Pump worker creates all the indexes, it can use parallel execution (PX) processes up to the PARALLEL value.
- Because of the above, at times you may only see one or two Oracle worker processes being utilized.
- To view worker threads from OS: ps -ef|grep dw
- See: gv$px_session tips.
- MOS Doc: Parallel Capabilities of Oracle Data Pump (Doc ID 365459.1).
Example Usage:
DIRECTORY=datapump
DUMPFILE=scott.%U.dmp
FILESIZE=10g
JOB_NAME=ExportSchema
LOGFILE=ExportSchema.scott.log
PARALLEL=4
SCHEMAS=scott
If using RAC, set CLUSTER=N when using PARALLEL.
See RAC, Best Practices with Data Pump.
Parallelism in Running Job
You can change the PARALLEL setting while a job is running.
Example
expdp system/MyPassword attach=MyJobName
Export> PARALLEL=16
Change Parallelism from Command-Line (or script)
echo exit|$ORACLE_HOME/bin/impdp system/mypw attach=IMP01 PARALLEL=8
Exclude Statistics
EXCLUDE=STATISTICS
Some have reported that using the new 12c LOGTIME parameter with a value of STATUS causes a performance hit.
Post Import: Check INVALID Objects and Recompile
In many cases after an import you may need to recompile any INVALID objects.
- Check current INVALID objects:
  SELECT count(*) FROM dba_objects WHERE status='INVALID';
- Compile any INVALID objects:
  @$ORACLE_HOME/rdbms/admin/utlrp.sql
- Check current INVALID objects again after compiling:
  SELECT count(*) FROM dba_objects WHERE status='INVALID';
- Do you need to set ARCHIVELOG mode back on?
- If you did not include statistics in your import, you may wish to gather them on the required schemas.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SCOTT',
    degree           => 12,
    cascade          => DBMS_STATS.AUTO_CASCADE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
Quiescing for Import
To disable remote TCP connections to a database, you can add one of the parameters below to the sqlnet.ora. Only local password or OS authentication is then allowed; all connections coming through the listener will fail with ORA-01017.
sqlnet.authentication_services=(BEQ)    <-- allows local OS or password authentication
sqlnet.authentication_services=(NONE)   <-- allows only local password-authenticated connections
The default for this parameter is equivalent to sqlnet.authentication_services=(ALL), which allows all types of connections to the DB.
For more on sqlnet.ora parameters, see the Oracle Net Services reference.
RAC Best Practices with Data Pump
If PARALLEL is used with Data Pump on RAC, worker processes will be started on all available nodes, which is not ideal!
Why?
It can lead to corruption of the dump file, because export pieces of the same dump file can be written locally on different nodes of the RAC. It can also cause errors reading the dump file during import, since those pieces reside on different nodes.
To prevent this from happening add the following parameter to your Data Pump command: CLUSTER=N
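A sketch of an export run this way on RAC (directory, schema, and file names are placeholders):
expdp system DIRECTORY=datapump DUMPFILE=scott.%U.dmp SCHEMAS=scott PARALLEL=4 CLUSTER=N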
Remap
Remap allows you to import data into a different schema, tablespace, or table.
Remap Schema
REMAP_SCHEMA=source_schema:target_schema
expdp SYSTEM/pw DIRECTORY=exports DUMPFILE=hr.dmp SCHEMAS=hr
impdp SYSTEM/pw DIRECTORY=exports DUMPFILE=hr.dmp REMAP_SCHEMA=hr:dev
Remap Tablespace
REMAP_TABLESPACE=source_tablespace:target_tablespace
impdp system/pw DIRECTORY=datapump DUMPFILE=emp.dmp REMAP_TABLESPACE=hr:dev
Remap Table
REMAP_TABLE=Existing_Table_Name:New_Table_Name
impdp system/pw DIRECTORY=datapump DUMPFILE=emp.dmp TABLES=hr.employees REMAP_TABLE=hr.employees:employees2
The REMAP_TABLE parameter will not work if the table being remapped has named constraints in the same schema and the constraints need to be created when the table is created. One way around this is to rename the original table before the import (e.g., alter table hr.employees rename to employees_orig;).
Resize Operation Messages, Excessive (bug)
If your alert log gets flooded with file resize messages such as the one below:
Resize operation completed for file# 201, old size 127641600K, new size 127644800K
See Doc ID 1982901.1.
Disable Message
alter system set "_disable_file_resize_logging"=TRUE;
Reset
alter system set "_disable_file_resize_logging"=FALSE;
Create Tablespace Resize Commands for Destination DB
- Run this on the source database to generate the commands, then resize the destination database's tablespaces as required.
- It is assumed all the tablespaces on the destination are bigfile tablespaces (BFTs).
SELECT 'ALTER TABLESPACE ' || df.tablespace_name || ' RESIZE ' || df.totalspace || 'm;' "Cmd"
FROM   (SELECT tablespace_name, round(sum(bytes) / 1048576) totalspace
        FROM   dba_data_files
        GROUP  BY tablespace_name) df,
       (SELECT round(sum(bytes) / (1024 * 1024)) totalusedspace, tablespace_name
        FROM   dba_segments
        GROUP  BY tablespace_name) tu
WHERE  df.tablespace_name = tu.tablespace_name
AND    df.tablespace_name NOT IN ('SYSTEM', 'SYSAUX')
AND    df.tablespace_name NOT LIKE 'UNDO%'
ORDER  BY df.tablespace_name ASC;
It is always better to pre-size your destination tablespaces than to let the import rely on autoextend, which can add excessive time to the import operation.
Overwrite Export File (11g and later)
REUSE_DUMPFILES=YES
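For example (directory, schema, and file names are placeholders):
expdp system DIRECTORY=datapump DUMPFILE=scott.dmp SCHEMAS=scott REUSE_DUMPFILES=YES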
UserName and Password in .par File
You can put the username, password, and connection string in the .par file using the USERID parameter on the first line.
userid=scott/tiger@MyDB
Then to use it all you need to do is this:
expdp parfile=/Path2File/expThis.par
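A sketch of such a .par file; the credentials, TNS alias, and other names are placeholders:
userid=scott/tiger@MyDB
DIRECTORY=datapump
DUMPFILE=scott.dmp
LOGFILE=expScott.log
SCHEMAS=scott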
Variables Usage
Available for export operations:
%d, %D: Day in DD format.
%m, %M: Month in MM format.
%y, %Y: Year in YYYY format.
%t, %T: Date in YYYYMMDD format.
Available for both export and import operations:
%U: A two-digit number incremented from 01 to 99. Fixed length.
%l, %L: Starts as a two-digit number between 01 and 99 and increases up to 2147483646. The file name will therefore not be fixed length.
Example using: DUMPFILE=xDB.MyDB.%U.dmp
Configured as shown above, your dump files are created like so:
xDB.MyDB.01.dmp
xDB.MyDB.02.dmp
xDB.MyDB.03.dmp
...
Furthermore, on import you do not have to configure DUMPFILE statically. You can use the %U variable again (DUMPFILE=xDB.MyDB.%U.dmp).
Version
You can export and import between Oracle versions using the VERSION parameter.
Example scenario: if you need to export from an Oracle 12.2 database and import into an Oracle 12.1 database, include this in your export: VERSION=12.1. The import side does not need the parameter (it is already 12.1).
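A sketch of such an export command (directory, schema, and file names are placeholders):
expdp system DIRECTORY=datapump DUMPFILE=scott_121.dmp SCHEMAS=scott VERSION=12.1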
If you neglect to do this your import will get errors similar to this:
ORA-39000: bad dump file specification
ORA-39142: incompatible version number 5.1 in dump file "/u02/exports/MySchema.dmp"
Windows UNC Path Exports
Exporting to a remote Windows system via a UNC path requires the following:
Local System
- A dedicated OS account for Oracle. For this example: oradba.
- The oracle user (oradba) is a member of the (local) administrators and ora_dba groups.
- The oracle user (oradba) is used to start both the database service and listener service.
Remote System
- A local account exists with the same name as the local Oracle user (oradba).
- The oracle user (oradba) is a member of the administrators group.
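With those accounts in place, the Data Pump directory object can point at the UNC path; a sketch (the server, share, directory, and user names are placeholders):
CREATE OR REPLACE DIRECTORY datapump_unc AS '\\RemoteServer\ExportShare';
GRANT READ, WRITE ON DIRECTORY datapump_unc TO scott;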