Data Pump Performance Tuning

I was tasked with moving an entire small database from one server to another and finishing the job ASAP, because the site would be down until the task was completed. A quick test of exporting, copying the files to the remote server, then importing gave the following numbers:

Export (expdp) with parallel=16, generating a single file: 13 minutes
Copying 12 GB to the remote server: 10 minutes
Import (impdp) with parallel=16: 45 minutes

Total time: 1 hour and 8 minutes.

Way too much for a downtime window in our environment.


Here are the commands we initially used:

expdp username/password directory=dump_dir dumpfile=full.dmp logfile=full.log full=y parallel=16

impdp username/password directory=dump_dir dumpfile=full.dmp logfile=full.log parallel=16

In our attempt to tune this process, we switched to exporting in parallel to multiple files, and the results were amazing:

expdp username/password directory=dump_dir filesize=1G dumpfile=full%U.dmp logfile=full.log parallel=16

Execution time: about 2 minutes

We figured the import would improve significantly as well, but we were a bit disappointed:

impdp username/password directory=dump_dir dumpfile=full%U.dmp logfile=full.log parallel=16

Execution time: about 30 minutes. Better, but not the improvement we expected.

We noticed that during the import, the process spends a lot of time creating indexes, and the first thing that came to mind was to increase pga_aggregate_target to a big number.

After increasing pga_aggregate_target to 6 GB, performance of import came down to 14 minutes.
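For reference, this is roughly the change described above; a minimal sketch, assuming a spfile is in use (the 6 GB value is from our test — size it to your server's memory):

```sql
-- Raise the PGA target so large index-creation sorts can happen in memory
-- (requires a spfile; adjust the value to your available memory)
ALTER SYSTEM SET pga_aggregate_target = 6G SCOPE=BOTH;
```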

Total time for the entire operation: 26 minutes, down from 1:08 hours. Not bad.


We could still improve by copying the files between servers in parallel, but this wasn't needed.
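Had it been needed, one way to copy the dump file chunks in parallel is a sketch like the following (the host name and paths are hypothetical; `xargs -P` controls the number of concurrent copies):

```shell
# Copy all dump file chunks to the remote server, 4 at a time
# (hypothetical host and paths; tune -P to your network bandwidth)
ls /u01/dump_dir/full*.dmp | xargs -n1 -P4 -I{} scp {} oracle@remote:/u01/dump_dir/
```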

Hazem Ameen
Senior Oracle DBA

(added Jun 26, 2011)

Tip: Here is another tip that will improve import time. When importing with Data Pump, exclude statistics, then use the dbms_stats package to gather statistics in parallel after Data Pump finishes. Data Pump imports statistics serially (one segment after another, not in parallel); even when you use the parallel option, Data Pump still has only one thread working on statistics.

Here is an example:

impdp username/password EXCLUDE=STATISTICS other options …
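After the import completes, statistics can then be gathered in parallel. A minimal sketch (the schema name HR is a placeholder, and degree => 16 is just chosen to match our parallel setting — match it to your CPU count):

```sql
-- Gather statistics for a schema in parallel after the import finishes
-- (HR is a placeholder schema name)
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR', degree => 16, cascade => TRUE);
END;
/
```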


19 thoughts on “Data Pump Performance Tuning”

  1. Hi there,

I’m curious to know how you came up with the idea of exporting in parallel to multiple files. It’s a big improvement.

Roni, through research, experimentation, and previous experience with exp/imp in older releases, though back then we had to script it ourselves.

    Appreciate your comments


  3. Hazem,

You have some interesting articles. Nice of you to share them. I liked the streams heartbeat one, the Hindi number storing one, and the one on local culture.

On importing, I used to exclude the indexes from exporting/importing, then create the indexes in parallel manually to speed things up.


  4. Hello Hazem :

    I have some question about the process with expdp and impdp.

1. Can I use expdp and impdp when the tables I need to import into already exist but in different tablespaces, and the structure of those tables is different (the tables have more fields)? I can do that with exp and imp, but the imp process takes 36 hours.

    I appreciate so much your help


  5. Jacky,

    I’m not sure I understood your question.

You use exp/imp instead of expdp/impdp because the structure of the tables is different, which Data Pump doesn’t support. The import process (imp) takes 36 hours, and your question is: how can you improve the import speed? Can you please confirm my understanding?


  6. HI,

Is there any way to change the number of parallel workers while an impdp is in progress? My impdp has been running for more than 6 hours and I tried changing it to parallel=5 (I don’t have any logic behind why I chose 5; if there are any parameters to consider before choosing the number of workers, please let me know), but the status still shows only one worker on the job.

    Please let me know if any idea on this.

    Madasamy M.
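For what it’s worth, the worker count of a running job can be changed from Data Pump’s interactive mode by attaching to the job. A sketch (the job name SYS_IMPORT_FULL_01 is an example — look yours up in DBA_DATAPUMP_JOBS):

```shell
# Attach to the running import job (example job name)
impdp username/password attach=SYS_IMPORT_FULL_01

# Then, at the interactive Import> prompt:
#   PARALLEL=5
#   STATUS
```

Note that a higher PARALLEL only helps if the dump set consists of multiple files and there is work that can actually be split across workers.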

Very useful post; I’m also working on improving expdp/impdp time.
    I also have one question: expdp can estimate the size of the dump file,
    but the dump file is way smaller than the original database.
    Any idea why, and how can I estimate the size it will take when imported into a new database?
    My original DB size is 500 GB; the dump file is 30 GB.
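On the estimation question above, expdp can report an estimate without writing a dump file at all. A sketch (ESTIMATE=BLOCKS is the default; ESTIMATE=STATISTICS can be more accurate when statistics are fresh):

```shell
# Report the estimated dump size without actually exporting anything
expdp username/password directory=dump_dir full=y estimate_only=y estimate=blocks logfile=estimate.log
```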

Another way to speed up the import process: import everything except indexes, then just recreate all the indexes after completion.
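A sketch of that approach: EXCLUDE=INDEX skips the indexes on the data import, and SQLFILE extracts the index DDL into a script that can be edited (e.g. to add a PARALLEL clause) and run manually afterwards. File names here are examples:

```shell
# 1. Import the data without indexes
impdp username/password directory=dump_dir dumpfile=full%U.dmp exclude=index parallel=16

# 2. Extract the index DDL into a script instead of executing it
impdp username/password directory=dump_dir dumpfile=full%U.dmp include=index sqlfile=indexes.sql

# 3. Edit indexes.sql (e.g. add PARALLEL), then run it with sqlplus
```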

Hi Hazem, thanks for sharing. I would like to know whether EXCLUDE=statistics is a good option for expdp or not. What are the implications of excluding them, and what is the default behaviour? I could not find a clear explanation in the Oracle docs. Thanks in advance.
