The "Vimal Mohan" Blog

Oracle, Unix basics & tricks for all those interested in discovering…

Datapump Import ORA-04030 Oracle 11gR2

Posted by Vimal Mohan on July 18, 2011

Problem Description:
Datapump Import in Oracle 11gR2 (11.2.0.2.0) is failing.
The import fails while loading the schema user password history from the dumpfile.
Lots of kuxLpxAlloc memory shows up in the "session heap" of the DW process.
The import process uses a lot of PGA memory, sometimes peaking at 4 GB during the import.

Error:
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW00" prematurely terminated
ORA-31671: Worker process DW00 had an unhandled exception.
ORA-04030: out of process memory when trying to allocate 120048 bytes (session heap,kuxLpxAlloc)
ORA-06512: at "SYS.KUPW$WORKER", line 1751
ORA-06512: at line 2

Code:
The Oracle import procedure uses the Datapump API DBMS_DATAPUMP.

Cause:
This seems to be an Oracle bug. I haven't seen a Metalink note confirming it for the PASSWORD_HISTORY path, but similar issues are documented for a few other import paths in 10g and 11gR1 (fixed in 11gR2). So we are having Oracle Support check and confirm whether this is a new or a known bug in 11gR2.

Workaround/Solution:
If you don't need to import the password history into the new schema, run impdp with:

EXCLUDE=PASSWORD_HISTORY
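For example, a command-line import might look like this (the directory object, dump file, and schema names are placeholders, not from the original run):

impdp system schemas=ABC directory=DP_DIR dumpfile=abc_schema.dmp logfile=abc_imp.log exclude=PASSWORD_HISTORY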

For the Datapump API DBMS_DATAPUMP, apply an EXCLUDE_PATH_EXPR metadata filter as shown below:

DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle, name => 'EXCLUDE_PATH_EXPR', value => 'like ''%SCHEMA_EXPORT/PASSWORD_HISTORY''');
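For context, a minimal sketch of the whole import job with this filter might look as follows (the job name, directory object, and dump file name are placeholders, not from the original post):

DECLARE
  v_dp_handle  NUMBER;
  v_job_state  VARCHAR2(30);
BEGIN
  -- Open a schema-mode import job
  v_dp_handle := DBMS_DATAPUMP.OPEN(operation => 'IMPORT',
                                    job_mode  => 'SCHEMA',
                                    job_name  => 'IMP_NO_PWD_HIST');

  -- Dump file to read; DP_DIR is a directory object pointing at the dump location
  DBMS_DATAPUMP.ADD_FILE(handle    => v_dp_handle,
                         filename  => 'abc_schema.dmp',
                         directory => 'DP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

  -- Skip the password history objects that trigger the ORA-04030
  DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle,
                                name   => 'EXCLUDE_PATH_EXPR',
                                value  => 'like ''%SCHEMA_EXPORT/PASSWORD_HISTORY''');

  DBMS_DATAPUMP.START_JOB(v_dp_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(v_dp_handle, v_job_state);
END;
/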

Posted in Oracle

Datapump Import ORA-31627 & ORA-31655 in 11gR2

Posted by Vimal Mohan on June 27, 2011

Problem Description:
Datapump Import with name_expr in Oracle 11gR2 (11.2.0.2.0) is failing.
The same code worked fine in Oracle 10gR2 (10.2.0.4).
It stopped working after the database was upgraded from 10gR2 to 11gR2.

Code:
The Oracle import procedure uses the Datapump API DBMS_DATAPUMP with a NAME_EXPR metadata filter to list the tables to import.
DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle, name => 'NAME_EXPR', value => v_tab_exp);
where v_tab_exp is an IN-list expression of the table names.

Error:
ORA-31627: API call succeeded but more information is available
ORA-31655: no data or metadata objects selected for job

Cause:
This is intended behavior resulting from the fix for bug 6831823, which went into 11.2 and 10.2.0.5. ORA-31655 is now signaled because of that fix: in earlier versions, if a privileged user performed a table import from a full export dump and the specified table name existed in multiple schemas, all of those tables were imported, whereas only the table belonging to the importing user should have been.

Solution:
Keep the NAME_EXPR filter, but call and assign a SCHEMA_EXPR filter before it, so Datapump knows which source schema owns the objects in the dump file.
DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle, name => 'SCHEMA_EXPR', value => v_src_sch_exp);
where v_src_sch_exp is an IN-list expression of the source schema name(s).
DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle, name => 'NAME_EXPR', value => v_tab_exp);
where v_tab_exp is an IN-list expression of the table names.
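Put together, with hypothetical schema and table names in place of the variables, the two filters might look like this:

-- Source schema(s) owning the tables in the dump file (example names)
DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle,
                              name   => 'SCHEMA_EXPR',
                              value  => 'IN (''APP_OWNER'')');

-- Tables to pick from that schema
DBMS_DATAPUMP.METADATA_FILTER(handle => v_dp_handle,
                              name   => 'NAME_EXPR',
                              value  => 'IN (''EMPLOYEES'', ''DEPARTMENTS'')');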

Posted in Oracle

To remove newline characters from a file in Unix

Posted by Vimal Mohan on May 30, 2011

perl -e 'while (<>) { chomp; print; }' input_file > output_file
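If Perl is not available, tr should do the same job (it strips every newline from the file):

tr -d '\n' < input_file > output_file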

Posted in Unix

Oracle Join Methods

Posted by Vimal Mohan on May 14, 2011

These are the methods Oracle uses to join row sources (tables) with each other.

Hash Joins

Hash joins are used for joining large data sets. The optimizer uses the smaller of the two tables or data sources to build a hash table, based on the join key, in memory. It then scans the larger table, and performs the same hashing algorithm on the join column(s). It then probes the previously built hash table for each value and if they match, it returns a row.
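As a quick sketch with the classic EMP/DEPT demo tables (not part of the original post), you can request a hash join with hints and confirm it in the execution plan:

-- DEPT (the smaller row source) is listed first, so it becomes the build table
SELECT /*+ LEADING(d) USE_HASH(e) */ d.dname, e.ename
FROM   dept d
JOIN   emp  e ON e.deptno = d.deptno;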

Nested Loops Joins

Nested loops joins are useful when small subsets of data are being joined and if there is an efficient way of accessing the second table (for example an index look up). For every row in the first table (the outer table), Oracle accesses all the rows in the second table (the inner table). Consider it like two embedded FOR loops. In Oracle Database 11g the internal implementation for nested loop joins changed to reduce overall latency for physical I/O, so it is possible you will see two NESTED LOOPS joins in the operations column of the plan, where you previously only saw one in earlier versions of Oracle.
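A minimal sketch with the same demo tables, assuming an index exists on EMP(DEPTNO):

-- For each qualifying DEPT row, EMP is probed through the index on DEPTNO
SELECT /*+ LEADING(d) USE_NL(e) */ d.dname, e.ename
FROM   dept d
JOIN   emp  e ON e.deptno = d.deptno
WHERE  d.deptno = 10;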

Sort Merge Joins

Sort merge joins are useful when the join condition between two tables is an inequality condition, such as <, <=, >, or >=. Sort merge joins can perform better than nested loop joins for large data sets.

The join consists of two steps: 1) Sort join operation: Both the inputs are sorted on the join key. 2) Merge join operation: The sorted lists are merged together.

A sort merge join is more likely to be chosen if there is an index on one of the tables that will eliminate one of the sorts.
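A sketch of a sort merge join on an inequality condition, again using the demo tables:

-- Both row sources are sorted on DEPTNO and then merged
SELECT /*+ USE_MERGE(d e) */ d.dname, e.ename
FROM   dept d
JOIN   emp  e ON e.deptno >= d.deptno;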

Cartesian Joins

The optimizer joins every row from one data source with every row from the other data source, creating a Cartesian product of the two sets. Typically this is chosen only if the tables involved are small, or if one or more of the tables has no join condition to any other table in the statement. Cartesian joins are not common, so if one is selected for any other reason it can be a sign of a problem with the cardinality estimates. Strictly speaking, a Cartesian product is not a join.
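For completeness, a Cartesian product written out explicitly:

-- Every DEPT row is paired with every EMP row (no join condition)
SELECT d.dname, e.ename
FROM   dept d
CROSS JOIN emp e;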

Posted in Oracle

Misconception around hot backup

Posted by Vimal Mohan on December 14, 2010

Many DBAs around here hold quite different ideas about how Oracle handles its datafiles while they are in hot backup mode. At the end of the day we all need to understand and agree on one of them, i.e. the way Oracle really does it. So I thought it appropriate to explain here how Oracle behaves during a hot backup.

Let's see what these different concepts, or misconceptions, are before we get to the Oracle way of doing things.

Wrong Concept 1: Oracle datafiles are “frozen” during backup mode.
Wrong Concept 2: While the datafiles are "frozen", changes to the data within them are written somewhere else (the SGA, undo, redo, etc.) and moved into the datafiles after the backup completes.
Wrong Concept 3: A few even support wrong concept 2 with the argument that the extra redo generated during a hot backup consists of all the changes made during the backup, which are then moved into the datafile after the backup.

Stop right here, no more Wrong Concepts…

Let me now get on to how Oracle actually works its way through these misconceptions. There are three milestones for a datafile during its hot backup phase:

  • The corresponding tablespace is checkpointed by DBWn.
  • The Checkpoint SCN marker in the datafile headers ceases to increment with checkpoints; the Hot Backup checkpoint SCN field is updated instead.
  • LGWR begins logging full images of changed database blocks to the redologs.

By freezing the Checkpoint SCN, any subsequent recovery on that backup copy of the file knows that it must commence at that SCN. Having an old SCN in the file header tells recovery that the file is an old one, that it should look for the archivelog containing that SCN, and that it should apply recovery starting there. Note that the datafile header is not completely frozen: the Checkpoint SCN is not allowed to change, but the Hot Backup checkpoint SCN field still gets its updates from CKPT. In the same way, the data section of the datafile still receives its writes from DBWn just as it does during normal operation (while not in hot backup mode). So, to be crystal clear: DBWn does not stop writing to datafiles during hot backup mode; it continues exactly as before.
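You can see this for yourself; a rough sketch, assuming a USERS tablespace you are allowed to put into backup mode:

ALTER TABLESPACE users BEGIN BACKUP;

-- STATUS shows ACTIVE and CHANGE# holds the SCN at which backup mode began
SELECT file#, status, change# FROM v$backup;

-- The header checkpoint SCN for these files stops advancing until END BACKUP
SELECT file#, checkpoint_change# FROM v$datafile_header;

ALTER TABLESPACE users END BACKUP;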

By checkpointing the datafile as the first step of the hot backup phase, and by writing a full block image to the redo for the first change to each block, Oracle ensures that all changes made to the datafile after the start of the hot backup are also available in the redolog for recovery. This is why more redo is generated while a database is in hot backup mode. Normally, for every change Oracle writes only a change vector to the redologs, but in hot backup mode Oracle writes the complete block (not just the change vector) to the redolog. Note, however, that Oracle does not write the complete block image every time a block is changed in backup mode; it does so only for the first change to a block after the datafile was put into hot backup mode. All subsequent changes to that block are logged like any other redo change, i.e. as change vectors.

Why does Oracle write a complete first block image? To understand that, we need to know what a "split block" is. As you know, Oracle reads and writes in terms of Oracle blocks (8 KB or a multiple of it), which are in turn multiples of OS blocks. While DBWn is writing one Oracle block in a datafile that is in hot backup mode, the backup process may be at the stage where it is copying that very block to another disk or tape. Since the copy is done using OS block sizes, some OS blocks inside the backed-up Oracle block may contain data from before the database write and some from after it. This situation is called a "split block". By logging the full image of the block on its first change, Oracle guarantees that in the event of a recovery, any split blocks that might be in the backup copy of the datafile will be resolved by overlaying them with the legitimate full image of the block from the archivelogs.

RMAN backups don't suffer from split blocks. RMAN first reads the blocks into its input buffers and, while copying the data from the input buffers to the output buffers, it checks for split blocks. If it encounters one, RMAN simply rereads that Oracle block.

Posted in Oracle

Export and Import using split compressed dumps

Posted by Vimal Mohan on December 9, 2010

This is one of the most extensively used processes around, so I thought I would share it with you all.

We may all run into a scenario where export/import is needed for a database, schema, or object refresh, but the underlying infrastructure makes it a fight to complete: the data has to be transferred from the source server to the target (servers on either side of the ocean), and the source/target filesystem imposes a file size limit of around 2 GB. The export and import process below creates compressed dumps on Unix, each no larger than 2 GB.

Export –

# Create Pipes for Export
ExpFile=ExpName.dmp.Z.
Pipe=/tmp/pipe.$$
rm -f $Pipe
mknod $Pipe p

# Run the compress in the background
(compress <$Pipe | split -b 2048m - $ExpFile) &

# Run the export in the foreground
exp / owner=abc file=$Pipe >Exp.Log 2>&1 <&-

This will create export dump pieces of up to 2 GB each, named:

ExpName.dmp.Z.aa
ExpName.dmp.Z.ab
ExpName.dmp.Z.ac
and so on…

Import –

# Create Pipes for Import
ImpFile=ExpName.dmp.Z.
Pipe=/tmp/Pipe.$$
rm -f $Pipe
mknod $Pipe p

# Run the uncompress of the files into the pipe in the background
(cat ${ImpFile}* | uncompress >$Pipe) &

# Run the import in the foreground
imp / fromuser=abc touser=abc file=$Pipe >Imp.Log 2>&1 <&-

Posted in Oracle

11.1.0.7 – Partitioned Tables – ORA-00600: internal error code

Posted by Vimal Mohan on December 8, 2010

I would like to detail the ORA-00600 error we encountered recently and the steps taken to resolve it.

The database is 11.1.0.7, and the application was recently migrated to it from a 9.2.0.8 database. One fine day, after a couple of weeks of successful post-migration production running, the application reported numerous ORA-00600 errors like:

ORA-00600: internal error code, arguments: [kcbz_find_bpid_3], [6], [], [], [], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [25026], [268242928], [266543073], [], [], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [25026], [33700653], [11676161], [], [], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [25027], [11], [41992705], [], [], [], [], [], [], [], [], []

We were a bit puzzled, as the process had been tested and ran fine during the UAT and parallel phases of the migration. From the package that was running, we could identify that the process was hitting the error while trying to insert data into one specific table. Luckily, this gave us a direction.

The table is partitioned, and the partitioned indexes on this table were created with a "parallel x" (x > 1) clause during the final production migration (not during the UAT or parallel migrations) to reduce the overall migration time.

Since this was production and we didn't have time to wait for a reply from Oracle Support, we decided to rebuild the indexes online with a "parallel 1" clause (as was done during the UAT and parallel phases). This WORKED, and the application was up and running as usual within a couple of hours.
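For reference, a sketch of how the rebuild statements can be generated (the owner name is a placeholder; local index partitions are rebuilt one at a time):

-- Generate one ONLINE rebuild per index partition, forcing PARALLEL 1
SELECT 'ALTER INDEX ' || index_owner || '.' || index_name ||
       ' REBUILD PARTITION ' || partition_name || ' ONLINE PARALLEL 1;'
FROM   dba_ind_partitions
WHERE  index_owner = 'APP_OWNER'
ORDER  BY index_name, partition_name;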

Meanwhile, Oracle Support responded, identifying the error as a bug:

CAUSE DETERMINATION
It appears that you are encountering bug #9399991:
Details: Various dumps and / or ORA-600 errors can occur if a partitioned table has lots of dependent references in the shared pool.

PROPOSED SOLUTION(S)
1. Upgrade to database release 11.2.0.2 (already released) or to db release 12.1 (not yet available).
OR
2.  Apply the backport patch #9399991 which is available for: 11.1.0.7.0, 11.1.0.7.2, 11.1.0.7.3.

We then decided to apply the backport patch later in the month, after testing it in the DEV/UAT databases.

Posted in Oracle

A backdoor entry to the Oracle Instance

Posted by Vimal Mohan on December 7, 2010

Many of us are not aware of a SQL*Plus option, "-prelim". This backdoor entry to the instance attaches to the SGA but doesn't open a session. Such a backdoor entry lets us reach the instance when it is totally frozen, for example when the number of sessions has maxed out or when the instance is hung (i.e. even SYSDBA connections hang).

sqlplus -prelim
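A typical use, for example collecting a hang analysis dump when normal connections are stuck (the exact oradebug commands depend on what you want to capture):

sqlplus -prelim "/ as sysdba"
SQL> oradebug setmypid
SQL> oradebug hanganalyze 3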

Posted in Oracle

Hello world!

Posted by Vimal Mohan on December 6, 2010

Hello Everyone!!!

Posted in General