- Data Pump Concepts
- Export/Import: Database | Schema | Table
- Flashback (time consistent)
- Killing a Data Pump Job
- Log Time
- Moving a Database Using Data Pump
- Network Link Usage
- Pause & Resume Job
- Post Import: Check INVALID Objects and Recompile
- RAC, Best Practices with Data Pump
- Resize, Create Tablespace Resize Commands for Destination DB
- Overwrite Export File
- "ORA-01555: Snapshot too old" on Exports
- Transportable Tablespaces (TTS)
- Windows UNC Path Exports
Attach and Detach
While a Data Pump job is running, you can press [Ctrl]-C (or the equivalent on your client) to "detach" from the job. Log messages stop appearing on your client, but the job keeps running inside the database. Your client is placed in interactive mode (with an Export> prompt); type STATUS to check on the job. To attach to an already running job, run:
expdp system/mypassword attach=<JobName>
expdp system/******** attach=xSCOTT
To see what Data Pump jobs are running you can use this:
SELECT j.owner_name, j.job_name, j.job_mode, j.state,
       s.session_type, s.saddr
FROM   dba_datapump_jobs j,
       dba_datapump_sessions s
WHERE  UPPER(j.job_name) = UPPER(s.job_name);
Compression
In Oracle 11g and later you can use the COMPRESSION option. The values are: ALL, METADATA_ONLY (the default), DATA_ONLY and NONE. Note that ALL and DATA_ONLY require the Advanced Compression option. Import handles compressed and uncompressed dump files identically; no extra parameter is needed.
New 12c Option
COMPRESSION_ALGORITHM = [BASIC | LOW | MEDIUM | HIGH]
- BASIC: Offers a good combination of compression ratio and speed; the algorithm used is the same as in previous versions of Oracle Data Pump.
- LOW: Least impact on export throughput; suited to environments where CPU resources are the limiting factor.
- MEDIUM: Recommended for most environments. Like BASIC, it provides a good combination of compression ratio and speed, but it uses a different algorithm than BASIC.
- HIGH: Best suited for situations in which dump files will be copied over slower networks, where the limiting factor is network speed.
Comparison exporting a 6 GB schema. The schema in the test contained significant BLOB data; COMPRESSION=ALL was used in the test.

Type        Elapsed   Filesize
---------   -------   --------
NONE        5 min.    1.5 gb
ALL         7 min.    258 kb
gzip (OS)   13 min.   222 kb
Get DDL From a Dump File (SQLFILE)
You can use an import dump file to extract the DDL commands used to create all the objects it contains via the SQLFILE parameter; no objects are actually imported. Make sure to comment out any mode parameter (FULL, SCHEMAS, TABLES, etc.) in the parameter file.
Example .par File:
# impdp system/mypassword PARFILE=impDDL.scott.par
DIRECTORY=datapump
DUMPFILE=scott.dmp
JOB_NAME=getDDL
LOGFILE=impSchemaDDL.log
#SCHEMAS=scott
SQLFILE=scott.ddl.txt
This method can also be used with an import (.dmp) to get the DDL commands for just a desired object type:

SQLFILE=scott.procedures.txt
INCLUDE=PROCEDURE:"IN (SELECT object_name FROM dba_procedures WHERE owner = 'SCOTT')"
Exclude and Include
The EXCLUDE parameter enables you to filter the metadata that is imported by specifying the database objects that you want to exclude from the import job. For the given mode of import, all objects contained within the source, and all their dependent objects, are included except those specified in an EXCLUDE statement.
- If an object is excluded, then all of its dependent objects are also excluded.
- More than one EXCLUDE statement can be specified in a parameter file.
- If you use INCLUDE on a schema export, only the objects named in the INCLUDE are exported.
To see which objects can be filtered, you can perform queries on the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS.
EXCLUDE=object_type[:name_clause] [, ...]
INCLUDE=object_type[:name_clause] [, ...]

Examples:

EXCLUDE=constraint, index, materialized_view, ref_constraint, statistics
EXCLUDE=GRANT
EXCLUDE=SCHEMA:"= 'HR'"
EXCLUDE=INDEX:"= 'PK_EMPLOYEE'"
EXCLUDE=INDEX:"LIKE 'EMP%'"
EXCLUDE=FUNCTION:"= 'REFRESH'"
EXCLUDE=VIEW:"= 'EMP_ALL_VIEW'"
INCLUDE=TABLE:"LIKE 'LOOKUP_%'"
Flashback (time consistent)
To get a time-consistent export, use the FLASHBACK_TIME parameter.
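A minimal sketch, assuming the datapump directory object and scott schema used elsewhere in these notes; FLASHBACK_TIME=SYSTIMESTAMP makes the export consistent as of the moment the job starts:

```shell
expdp system/mypassword SCHEMAS=scott DIRECTORY=datapump DUMPFILE=scott.dmp FLASHBACK_TIME=SYSTIMESTAMP
```

A fixed point in time can also be supplied, e.g. FLASHBACK_TIME="TO_TIMESTAMP('2024-01-15 09:00:00','YYYY-MM-DD HH24:MI:SS')"; because of shell quoting, that form is usually easier to place in a parameter file.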
Log Time (12c)
Options: LOGTIME = [NONE | STATUS | LOGFILE | ALL]
- NONE: No timestamp information is displayed (the default).
- STATUS: Timestamps are displayed on status messages.
- LOGFILE: Same as STATUS, but timestamps appear only on log file messages.
- ALL: A combination of STATUS and LOGFILE.
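For example (placeholder credentials and file names), to timestamp both console and log file output:

```shell
expdp system/mypassword SCHEMAS=scott DIRECTORY=datapump DUMPFILE=scott.dmp LOGFILE=scott.log LOGTIME=ALL
```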
Query
Enables you to filter the data that is exported by specifying a WHERE clause that is applied to all tables in the export job or to a specific table. On Unix systems, special characters in the clause need to be escaped (using a parameter file avoids most escaping issues).

QUERY=[schema.][table_name:]query_clause

QUERY="WHERE state = 'FL'"
QUERY=hr.emp:"WHERE job = 'ENGINEER' OR sal >= 5000"
QUERY=hr.emp:"WHERE name LIKE 'S%'"
Pause & Resume Job
^C  -- Press the [Ctrl]-C keys.
DataPump> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes
Alternatively attach from another session. Example:
impdp system attach=impSchema
impdp system attach=<JobName>
DataPump> continue_client
As another option you can use: start_job
Disable Archive Logging (12c)
Reduces the logging associated with tables and indexes during import.
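The parameter itself is the TRANSFORM option DISABLE_ARCHIVE_LOGGING. A sketch with placeholder names; note that it has no effect if the database is in FORCE LOGGING mode (common with Data Guard):

```shell
impdp system/mypassword DIRECTORY=datapump DUMPFILE=scott.dmp TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
```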
If you have the Enterprise Edition of Oracle Database, Data Pump performance can be improved by using the PARALLEL parameter. Use it in conjunction with the %U wildcard in the DUMPFILE parameter so that multiple dump files can be created or read simultaneously.
DIRECTORY=datapump
DUMPFILE=scott.%U.dmp
FILESIZE=10g
JOB_NAME=ExportSchema
LOGFILE=ExportSchema.scott.log
PARALLEL=4
SCHEMAS=scott
According to Oracle there is no advantage to having a larger degree of parallelism than there are dumpfiles.
If using RAC, also set CLUSTER=N when using PARALLEL. See RAC, Best Practices with Data Pump below.
Parallelism in Running Job
You can change the PARALLEL setting while a job is running.
expdp system/MyPassword attach=MyJobName
Export> PARALLEL=16
Post Import: Check INVALID Objects and Recompile
In many cases after an import you may need to recompile any INVALID objects.
- Check Current INVALID Objects
SELECT count(*) FROM dba_objects WHERE status='INVALID';
- Compile any INVALID Objects
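The compile command itself is not shown in these notes; a common approach (run as SYSDBA; adjust to your own standards) is the UTL_RECOMP package or the bundled utlrp.sql script:

```sql
-- Recompile all INVALID objects serially:
EXEC UTL_RECOMP.recomp_serial();

-- Or use the bundled script, which recompiles in parallel:
-- @?/rdbms/admin/utlrp.sql
```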
- Check Current INVALID Objects after compiling.
SELECT count(*) FROM dba_objects WHERE status='INVALID';
RAC Best Practices with Data Pump
If PARALLEL is used on RAC, Data Pump worker processes will be started on all available nodes, which is not ideal!
It can lead to corruption of the dump file, because pieces of the same dump file can be written locally on every node of the RAC. This can also cause errors reading the dump file during import, because pieces of the same dump file reside on different nodes.
To prevent this from happening add the following parameter to your Data Pump command:
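The parameter referred to is CLUSTER=N (available from 11.2), which restricts the job to the instance it was started on; a sketch with placeholder credentials:

```shell
expdp system/mypassword SCHEMAS=scott DIRECTORY=datapump DUMPFILE=scott.%U.dmp PARALLEL=4 CLUSTER=N
```

The directory object must, of course, be accessible from the node where the job runs.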
Remap
Remap allows you to import data into a different schema, tablespace, or table.
expdp SYSTEM/pw DIRECTORY=exports DUMPFILE=hr.dmp SCHEMAS=hr
impdp SYSTEM/pw DIRECTORY=exports DUMPFILE=hr.dmp REMAP_SCHEMA=hr:dev
impdp system/pw DIRECTORY=datapump DUMPFILE=emp.dmp REMAP_TABLESPACE='hr':'dev'
impdp system/pw DIRECTORY=datapump DUMPFILE=emp.dmp TABLES=hr.emp REMAP_TABLE=hr.employees:employees2
The REMAP_TABLE parameter will not work if the table being remapped has named constraints in the same schema and the constraints need to be created when the table is created. One way around this is to rename the original table before the import (ex: alter table hr.employees rename to hr.employees_orig;).
Create Tablespace Resize Commands for Destination DB
- Run this on the source database to get commands. Then resize the destination database as required.
- It is assumed all the tablespaces on the destination are bigfile tablespaces (BFTS).
SELECT 'ALTER TABLESPACE ' || df.tablespace_name || ' RESIZE ' || df.totalspace || 'm;' "Cmd"
FROM   (SELECT tablespace_name, ROUND(SUM(bytes) / 1048576) totalspace
        FROM   dba_data_files
        GROUP  BY tablespace_name) df,
       (SELECT ROUND(SUM(bytes) / (1024 * 1024)) totalusedspace, tablespace_name
        FROM   dba_segments
        GROUP  BY tablespace_name) tu
WHERE  df.tablespace_name = tu.tablespace_name
AND    df.tablespace_name NOT IN ('SYSTEM', 'SYSAUX')
AND    df.tablespace_name NOT LIKE 'UNDO%'
ORDER  BY df.tablespace_name ASC;
Overwrite Export File (11g and later)
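The 11g parameter for this is REUSE_DUMPFILES, which lets expdp overwrite an existing dump file instead of failing because the file already exists. A sketch with placeholder names:

```shell
expdp system/mypassword SCHEMAS=scott DIRECTORY=datapump DUMPFILE=scott.dmp REUSE_DUMPFILES=YES
```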
Export/Import Between Versions
You can export and import between Oracle versions using the VERSION parameter.
For example, if you need to export from an Oracle 12.2 database for import into an Oracle 12.1 database, include VERSION=12.1 in your export. The import side does not need the parameter (it is already 12.1).
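A sketch of the export side (placeholder credentials and file names):

```shell
expdp system/mypassword SCHEMAS=scott DIRECTORY=datapump DUMPFILE=scott_121.dmp VERSION=12.1
```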
If you neglect to do this your import will get errors similar to this:
ORA-39000: bad dump file specification
ORA-39142: incompatible version number 5.1 in dump file "/u02/exports/MySchema.dmp"
Windows UNC Path Exports
To export to a remote Windows system via a UNC path, the following is required.
On the database server:
- A dedicated OS account for Oracle. For this example: oradba.
- The oracle user (oradba) is a member of the (local) Administrators and ora_dba groups.
- The oracle user (oradba) is used to start both the database service and the listener service.
On the remote system:
- A local account exists with the same name as the Oracle user on the database server (oradba).
- That account is a member of the local Administrators group.
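With the accounts in place, the directory object simply points at the UNC path; the server and share names here are placeholders:

```sql
CREATE OR REPLACE DIRECTORY remote_dp AS '\\fileserver\exports';
GRANT READ, WRITE ON DIRECTORY remote_dp TO system;
```

The export is then run as usual with DIRECTORY=remote_dp.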