The "Vimal Mohan" Blog

Oracle, Unix basics & tricks for all those interested in discovering…


Datapump Import ORA-04030 Oracle 11gR2

Posted by Vimal Mohan on July 18, 2011

Problem Description:
Datapump Import in Oracle 11gR2 (11.2.0.2.0) is failing.
The import fails while processing the schema user password history from the dump file.
Large amounts of kuxLpxAlloc memory accumulate in the "session heap" of the DW worker process.
The import process uses a lot of PGA memory, at times peaking at 4 GB during this import.

Error:
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW00" prematurely terminated
ORA-31671: Worker process DW00 had an unhandled exception.
ORA-04030: out of process memory when trying to allocate 120048 bytes (session heap,kuxLpxAlloc)
ORA-06512: at "SYS.KUPW$WORKER", line 1751
ORA-06512: at line 2

Code:
The Oracle import procedure uses the Datapump API DBMS_DATAPUMP.

Cause:
This appears to be an Oracle bug. I haven't seen a MetaLink note confirming it for the PASSWORD_HISTORY module, though similar issues are documented for a few other import dump modules in 10g and 11gR1 (and fixed in 11gR2). So we are going to have Oracle Support check this and confirm whether it is a new or a known bug in 11gR2.

Workaround/Solution:
If you don't need to import the password history into the new schema, run the impdp with:

EXCLUDE=PASSWORD_HISTORY
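
For illustration only, a command-line impdp run with this exclusion might look like the line below; the schema name, directory object, dump file and log file names are just placeholders, not from the original environment.

# Schema-mode import that skips the password history objects
impdp system schemas=ABC directory=DP_DIR dumpfile=abc_schema.dmp logfile=imp_abc.log EXCLUDE=PASSWORD_HISTORY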

For the Datapump API DBMS_DATAPUMP, apply the same exclusion with an EXCLUDE_PATH_EXPR filter using the METADATA_FILTER call below:

DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle, name => 'EXCLUDE_PATH_EXPR', value => 'like ''%SCHEMA_EXPORT/PASSWORD_HISTORY''');
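
For context, here is a minimal sketch of how that filter might sit inside a complete DBMS_DATAPUMP import job, driven here through sqlplus with OS authentication; the job name, directory object, dump file and log file names are placeholders and not taken from the original procedure.

# Schema-mode Datapump import that skips PASSWORD_HISTORY
sqlplus -s / <<'EOF'
DECLARE
  v_dp_handle NUMBER;
  v_job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode import job (the job name is just an example)
  v_dp_handle := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA',
                                    job_name  => 'IMP_NO_PWD_HIST');

  -- Dump file and log file in an existing directory object (placeholder names)
  DBMS_DATAPUMP.ADD_FILE(handle => v_dp_handle, filename => 'abc_schema.dmp', directory => 'DP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle => v_dp_handle, filename => 'imp_abc.log', directory => 'DP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Exclude the password history path that triggers the ORA-04030
  DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle, name => 'EXCLUDE_PATH_EXPR',
                                value => 'like ''%SCHEMA_EXPORT/PASSWORD_HISTORY''');

  -- Start the job and wait for it to finish before detaching
  DBMS_DATAPUMP.START_JOB(handle => v_dp_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle => v_dp_handle, job_state => v_job_state);
  DBMS_DATAPUMP.DETACH(handle => v_dp_handle);
END;
/
EOF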


Export and Import using split compressed dumps

Posted by Vimal Mohan on December 9, 2010

I see this as one of the most extensively used processes, and thought of sharing it with you all.

We may all encounter a scenario where export/import is needed for a database/schema/object refresh, but the underlying infrastructure doesn't let you complete it without a fight: the data has to be transferred from the source server to the target (with the servers possibly on either side of the ocean), and the source/target filesystem imposes a file size limit of around 2 GB. The Export and Import process below helps you create compressed dumps on Unix, each no larger than 2 GB.

Export –

# Create Pipes for Export
ExpFile=ExpName.dmp.Z.
Pipe=/tmp/pipe.$$
rm -f $Pipe
mknod $Pipe p

# Run the compress and split in the background (split reads the pipe via stdin "-")
(compress <$Pipe | split -b 2048m - $ExpFile) &

# Run the export in the foreground
exp / owner=abc file=$Pipe >Exp.Log 2>&1 <&-

This will create compressed export dump pieces, each up to 2 GB in size, named –

ExpName.dmp.Z.aa
ExpName.dmp.Z.ab
ExpName.dmp.Z.ac
and so on…
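
Once the background compress/split pipeline has finished, it is worth sanity-checking the piece sizes before shipping them across. A small follow-up sketch using the same variable names; the target hostname and remote path are only examples:

# Wait for the background compress | split pipeline to finish
wait

# Confirm each piece is within the 2 GB limit
ls -l ${ExpFile}*

# Copy the pieces to the target server (hostname and path are placeholders)
scp ${ExpFile}* oracle@targethost:/u01/dumps/

# The named pipe is no longer needed
rm -f $Pipe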

Import –

# Create Pipes for Import
ImpFile=ExpName.dmp.Z.
Pipe=/tmp/Pipe.$$
rm -f $Pipe
mknod $Pipe p

# Uncompress the concatenated pieces into the pipe in the background
(cat ${ImpFile}* | uncompress >$Pipe) &

# Run the import in the foreground
imp / fromuser=abc touser=abc file=$Pipe >Imp.Log 2>&1 <&-
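
The same housekeeping applies on the import side once imp returns; a short follow-up using the same variable names:

# Wait for the background cat | uncompress pipeline to exit
wait

# Remove the named pipe and check the import log for errors
rm -f $Pipe
grep -i "ora-" Imp.Log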
