Saturday, November 20, 2010

Removing orphan DataPump jobs and filtering included tables with a query

Say you define a DataPump job through the API (DBMS_DATAPUMP); you may end up with orphan jobs in status NOT RUNNING until you get the definition right. Check whether you have such jobs with:

select job_name,state
from user_datapump_jobs;

Then you may try to remove it by attaching to the job and stopping it:

declare
  l_h number;
begin
  l_h := dbms_datapump.attach(job_name => 'YOUR_JOB', job_owner => user);
  dbms_datapump.stop_job(l_h, immediate => 1, keep_master => 0);
end;
/

If the query above still reports the job with the same status, it can be removed by dropping the master table (which has the same name as the job), according to MOS Note 336014.1:

drop table YOUR_JOB;

If the database is using a recycle bin, the dropped table is renamed to a name starting with 'BIN$'. If you rerun the query above, the job shows up under this new name; purging the recycle bin finally removes the job:

purge recyclebin;
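Before purging, you can verify that the dropped master table ended up in the recycle bin (RECYCLEBIN is the standard synonym for USER_RECYCLEBIN):

```sql
-- list dropped objects still held in the recycle bin
select object_name, original_name, type
from   recyclebin;
```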

If your PL/SQL code fails during job definition before you have done a detach, you cannot attach a second time from the same session. Since you have no reference to the job, the only solution I'm aware of is to reconnect (i.e. create another database session). Better is to define an exception on ORA-39001, retrieve the error message with dbms_datapump.get_status, and call dbms_datapump.detach(l_h) in the exception handler:

invalid_arg exception;
pragma exception_init(invalid_arg, -39001);
...
exception
  when invalid_arg then ...
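Putting the pieces together, a minimal sketch of such a definition block might look like this (the job parameters are placeholders; a real job needs more calls between open and detach):

```sql
declare
  l_h         number;
  invalid_arg exception;
  pragma exception_init(invalid_arg, -39001);
begin
  l_h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
  -- ... calls that define the job go here ...
  dbms_datapump.detach(l_h);
exception
  when invalid_arg then
    -- inspect the error with dbms_datapump.get_status if needed,
    -- then release the handle so the session can attach again
    dbms_datapump.detach(l_h);
    raise;
end;
/
```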

If you drop the master table of a job that is in the DEFINING state, you will not be able to attach to the job, but the attempt to attach removes the job from user_datapump_jobs.

If you add a metadata filter of type NAME_EXPR with a subquery in the expression, make sure that any table name in the subquery is prefixed with the table owner, or else you may receive ORA-00942 in the logfile:

dbms_datapump.metadata_filter(handle => l_h, name => 'NAME_EXPR', value =>'IN (SELECT TABLE_NAME FROM SCOTT.CHANGE_TABLE)' );
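For context, a sketch of a complete table-mode export definition using such a filter might look like the following (the job name, dump file name and directory object are assumptions):

```sql
declare
  l_h number;
begin
  l_h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE',
                            job_name  => 'EXP_CHANGED');           -- assumed name
  dbms_datapump.add_file(handle => l_h, filename => 'exp_changed.dmp',
                         directory => 'DATA_PUMP_DIR');            -- assumed directory
  -- note the owner prefix inside the subquery
  dbms_datapump.metadata_filter(handle => l_h, name => 'NAME_EXPR',
      value => 'IN (SELECT TABLE_NAME FROM SCOTT.CHANGE_TABLE)');
  dbms_datapump.start_job(l_h);
  dbms_datapump.detach(l_h);
end;
/
```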

Typical error in the logfile when the schema prefix is missing:

Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6307
----- PL/SQL Call Stack -----
object      line    object
handle      number  name
0x7bfb80b0  15032   package body SYS.KUPW$WORKER

Tuesday, November 16, 2010

Slow performance with TDE and CHAR data type

There is a bug in the database when using Transparent Data Encryption (TDE) on a column with the CHAR datatype. In the following test a table is created where the primary key is defined as CHAR(11) and then encrypted with NO SALT. A simple lookup on this PK works as expected by using the respective index. But after the database is patched (see below), a trace on event 10053 shows that an INTERNAL_FUNCTION is wrapped around the PK column and therefore impedes use of the index.

The following test was performed on an OVM guest created from a downloaded template. It contains Oracle Server EE running on Oracle Enterprise Linux 5, 64-bit. Configuration of TDE and the wallet is not shown.

create table foo (
  id  char(11) encrypt no salt primary key,
  msg varchar2(1000)
);

insert into foo
select * from (
  with generator as (
    select rownum rn
    from   dual
    connect by rownum <= 100
  )
  select dbms_random.string('x',11)   id,
         dbms_random.string('x',1000) msg
  from   generator g1,
         generator g2
)
where rownum <= 10000;
exec dbms_stats.gather_table_stats(user,'FOO', cascade=>true,method_opt=>'for all indexed columns size 1',estimate_percent=>100)
variable lid char(11)
exec :lid:='abc123def45';
alter session set events='10053 trace name context forever, level 1';
select msg from foo where id=:lid;

The query returns immediately since the index is used and the trace file shows the good plan (a simple explain plan will show the same):

| Id | Operation                   | Name        | Rows | Bytes | Cost | Time     |
|  0 | SELECT STATEMENT            |             |      |       |    2 |          |
|  1 | TABLE ACCESS BY INDEX ROWID | FOO         |    1 |   514 |    2 | 00:00:01 |
|  2 | INDEX UNIQUE SCAN           | SYS_C007236 |    1 |       |    1 | 00:00:01 |
Predicate Information:
2 - access("ID"=:LID)

Now the database is patched (patch 8202632). There are no invalid objects in the database after the upgrade. Repeating the last query from above together with the trace on event 10053, the query now takes a long time and the trace file shows this plan:

| Id | Operation         | Name | Rows | Bytes | Cost | Time     |
|  0 | SELECT STATEMENT  |      |      |       |  388 |          |
|  1 | TABLE ACCESS FULL | FOO  |  100 |   50K |  388 | 00:00:05 |
Predicate Information:

The INTERNAL_FUNCTION is used, among other situations, when Oracle has to perform an implicit data type conversion. This may happen when the data type of a bind variable differs from the data type used in the table.

This kind of mismatch may happen during upgrades of JDBC drivers where the type mapping may change; Java had support for Timestamp before it was introduced in the Oracle database, and when Oracle later added the TIMESTAMP data type the mapping changed, leading to this kind of error. There are other reasons for such mismatches; a trace on event 10053 reveals the data type of the bind variable:

Peeked values of the binds in SQL statement
oacdty=96 mxl=32(11) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000000 frm=01 csi=31 siz=32 off=0
kxsbbbfp=2b19acc082a0 bln=32 avl=11 flg=05

The part "oacdty=96" shows the type is CHAR or NCHAR. This did not change between the tests, so there has to be another reason why the INTERNAL_FUNCTION is used. Also, the only change in this case is the patch; even the compatible parameter remained the same. The tests were performed from sqlplus on the database server.

This is bug 10296606 on MOS.

Update: It was discovered that if you use a bind variable of VARCHAR2 type, the INTERNAL_FUNCTION is not applied, which suggests that columns of CHAR type are converted to VARCHAR2 after encryption, while USER_TAB_COLUMNS continues to report CHAR for the column.
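Based on that observation, a workaround sketch (assuming the FOO table from the test above) is to bind the value as VARCHAR2 instead of CHAR:

```sql
-- bind as VARCHAR2; since the stored values are exactly 11 characters,
-- non-padded comparison against the CHAR column still matches
variable lid2 varchar2(11)
exec :lid2 := 'abc123def45';
select msg from foo where id = :lid2;
```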