Wednesday, February 15, 2023

♨️A complete review of Export Pump features and capabilities

Oracle Data Pump abilities


Do you think you know all of the Export Data Pump options?


Many people are unaware of several powerful capabilities available in Data Pump (expdp/impdp). Most use these tools only to export and import data (in other words, only to move data) and never notice that they can also be used, for example, to:

  • Data Masking
  • Build a Metadata Repository
  • Create version control of database objects
  • Clone users (create a new user using an existing user as a template)
  • Create smaller copies of production
  • Create your database in a different file structure
  • Move all objects from one tablespace to another
  • Move an object to a different schema (a simple example: change a table's owner)

Now let’s see how each capability mentioned above can be used in real life.

1) Data Masking/Remapping 

The goal of this section is to provide an example on how to use the REMAP_DATA command line option with EXPDP and IMPDP.

The REMAP_DATA command line option allows you to specify a remap function that takes as a source the original value of the designated column and returns a remapped value that will replace the original value in the dump file. 

A common use of this option is to mask data when moving from a production system to a test/uat/dev system.


The following example shows how to use the REMAP_DATA command line option to convert the case of the letters in the first 2 columns of a table when exporting the data. Similar functionality can also be applied to the importing process.

Example 1:
The example will start with the creation of a DATAPUMP_TEST user in SQL*Plus and granting the proper privileges to that user to be able to export and import data:


connect / as sysdba


SQL> create user datapump_test identified by test default tablespace users temporary tablespace temp quota 10m on users;

SQL> grant create session, create table to datapump_test;


After this, create a remap function (also in SQL*Plus) in the schema of the user running EXPDP which will be called from the REMAP_DATA command line option:


connect system/<password>


create or replace package datapump_remap_test

as

  function toggle_case(p_value varchar2) return varchar2;

end;

/


create or replace package body datapump_remap_test
as
  function toggle_case(p_value varchar2) return varchar2 is
  begin
    return translate(p_value,
      'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz',
      'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ');
  end toggle_case;
end;
/


Note: you can use any method to generate a remapped or hashed value, for example the DBMS_RANDOM package.


The function TOGGLE_CASE replaces uppercase letters with their lowercase equivalents and vice versa.
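
As a quick sanity check before using it in an export (the sample value is purely illustrative), you can call the function directly while still connected as SYSTEM:

SQL> select datapump_remap_test.toggle_case('AbCdEf') as remapped from dual;

REMAPPED
------------------------------
aBcDeF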

Now the test data will be created in the DATAPUMP_TEST schema by using the following SQL*Plus commands:


connect datapump_test/test


create table test 

(

   col_01 varchar2(6) primary key,

   col_02 varchar2(6),

   col_03 varchar2(6),

   col_04 varchar2(6)

);


insert into test values ('AAAAAA', 'AAAAAA', 'AAAAAA', 'AAAAAA');

insert into test values ('BBBBBB', 'BBBBBB', 'BBBBBB', 'BBBBBB' );

insert into test values ('CCCCCC', 'CCCCCC', 'CCCCCC', 'CCCCCC');

commit;


Now the export can be started which uses the REMAP_DATA command line option:

#> expdp system/<password> dumpfile=remap.dmp schemas=datapump_test parfile=par1


The parameter file par1 contains:

INCLUDE=table:"= 'TEST'"

REMAP_DATA=datapump_test.test.col_01:datapump_remap_test.toggle_case

REMAP_DATA=datapump_test.test.col_02:datapump_remap_test.toggle_case


Now import the data in a new table called TEST_NEW_1:

#> impdp system/<password> dumpfile=remap.dmp schemas=datapump_test remap_table=datapump_test.test:test_new_1 exclude=constraint


Finally, check the newly created table TEST_NEW_1 in the DATAPUMP_TEST schema:

connect datapump_test/test


select * from test_new_1;


COL_01 COL_02 COL_03 COL_04

------ ------ ------ ------

aaaaaa aaaaaa AAAAAA AAAAAA

bbbbbb bbbbbb BBBBBB BBBBBB

cccccc cccccc CCCCCC CCCCCC


In this case only COL_01 and COL_02 were changed from uppercase to lowercase.

NOTE:

The remap function referenced by the REMAP_DATA command line option must be available to the schema that the Data Pump client connects as; otherwise, qualify it with the name of the owning schema.


Example 2:

Using REMAP_DATA  for hashing values on tables:


In many organizations, DBAs are obliged, for security and compliance purposes, to mask all sensitive information that leaves the production environment, for example when refreshing or creating a UAT, test, or development environment. To address this kind of requirement, you could use the Enterprise Manager Data Masking Pack (remember it is an extra pack, so it requires an additional license), or, as an alternative, use the REMAP_DATA parameter available in Data Pump.


Let’s use the classic SSN (Social Security Number) example to illustrate how it works:

a) First let’s create the table for the test and load some data on it.

SQL> CREATE TABLE HR.EMPLOYEE
     ( EMP_ID    NUMBER(10) NOT NULL,
       EMP_NAME  VARCHAR2(30),
       EMP_SSN   VARCHAR2(9),
       EMP_DOB   DATE
     );

SQL> insert into hr.employee values (101, 'Alireza Kamrani', '123456789', '30-DEC-13');
SQL> insert into hr.employee values (102, 'my name', '459000231', '17-JUL-13');
SQL> insert into hr.employee values (103, 'cepop', '246810119', '02-OCT-13');
SQL> commit;


b) The second step will be to create the remap function:


SQL> create or replace package pkg_masking
     as
       function mask_ssn (p_in varchar2) return varchar2;
     end;
     /

SQL> create or replace package body pkg_masking
     as
       function mask_ssn (p_in varchar2) return varchar2
       is
       begin
         return lpad(round(dbms_random.value(100000000, 999999999)), 9, '0');
       end;
     end;
     /


This function takes a VARCHAR2 argument and returns a random nine-character string. We will use it to mask all SSN values in our EMPLOYEE table.
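
As a quick check (the input and the random output shown are purely illustrative; the result will differ on every call), you can invoke the function directly from the schema that owns the package:

SQL> select pkg_masking.mask_ssn('123456789') as masked_ssn from dual;

MASKED_SSN
------------------------------
482913570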


SQL> desc employee

 Name          Null?     Type
 ------------- --------- -------------
 EMP_ID        NOT NULL  NUMBER(10)
 EMP_NAME                VARCHAR2(30)
 EMP_SSN                 VARCHAR2(9)
 EMP_DOB                 DATE


SQL> select * from employee;

    EMP_ID EMP_NAME          EMP_SSN    EMP_DOB
---------- ----------------- ---------- ---------
       101 Alireza Kamrani   123456789  30-DEC-13
       102 my name           459000231  17-JUL-13
       103 cepop             246810119  02-OCT-13


For this example, all you want to mask is the column EMP_SSN, which contains the SSN of each employee.


c) Now we export the EMPLOYEE table using the expdp tool and, while exporting, use the REMAP_DATA parameter so that the data is masked in the dump file by the function we previously created.


$ expdp hr/hr tables=hr.employee dumpfile=mask_ssn.dmp directory=datapump remap_data=hr.employee.emp_ssn:pkg_masking.mask_ssn


Note: By default, the REMAP_DATA parameter assumes the user performing the export owns the remap function; if the function is owned by a different schema, you need to qualify it with the owner, as in the following command:


$ expdp hr/hr tables=hr.employee dumpfile=mask_ssn.dmp directory=datapump remap_data=hr.employee.emp_ssn:owner.pkg_masking.mask_ssn


d) Now all we need to do is import mask_ssn.dmp into our QA/test or development database, and the masked values will magically be there.


SQL> select * from employee;

    EMP_ID EMP_NAME          EMP_SSN    EMP_DOB
---------- ----------------- ---------- ---------
       101 Alireza Kamrani   398091209  30-DEC-13
       102 my name           120945517  17-JUL-13
       103 cepop             908017356  02-OCT-13


Note: you can also use the REMAP_DATA option with the impdp tool if you already have a normal export, and remember that you can use it to mask almost anything; just take your application requirements and data integrity requirements into consideration when using it!
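
For example, if you already have an unmasked dump file, a hypothetical import that applies the same masking function on the way in could look like this (the dump file name is illustrative, and pkg_masking must exist in the schema the import connects as, or be qualified with its owner):

$ impdp hr/hr tables=hr.employee directory=datapump dumpfile=unmasked.dmp remap_data=hr.employee.emp_ssn:pkg_masking.mask_ssn table_exists_action=replace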


2) Metadata Repository and Version Control

As a DBA, I am always looking for proactive ways to be prepared in case disaster strikes or an emergency release rollback is required. For these reasons, having a metadata repository, with version control on top of it, is always useful.

But how can I easily create it?

Easy: first take a full metadata-only export of your database using Data Pump.


$ expdp user/password content=metadata_only full=y directory=datapump dumpfile=metadata_02032013.dmp


Note: If you want a repository only for objects such as procedures, packages, and triggers, all you need to do is add one or more INCLUDE parameters (for example INCLUDE=PROCEDURE INCLUDE=PACKAGE INCLUDE=TRIGGER) to your expdp command. I usually include the date in the dump file name for reference purposes, as a best practice.
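
A sketch of such a code-objects-only export (the directory object and file name are just examples):

$ expdp user/password content=metadata_only full=y directory=datapump include=PROCEDURE include=PACKAGE include=TRIGGER dumpfile=code_objects_02032013.dmp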

Then use the impdp tool to create the SQL file that will allow you to create all objects in your Database. It will be something like this:


$ impdp user/password directory=datapump dumpfile=metadata_02032013.dmp sqlfile=metadata_02032013.sql


This simple technique lets you create your metadata repository easily and also keep a version history of your database objects. As an extra, once you have created your repository database, if you want to refresh a single object definition (for example the table EMP in the SCOTT schema), all you need to do is export the new table definition from your source database and then import it into your target database (your repository), as shown below:


$ expdp user/password content=metadata_only tables=scott.emp directory=datapump dumpfile=refresh_of_table_emp_02032013.dmp


$ impdp user/password table_exists_action=replace directory=datapump dumpfile=refresh_of_table_emp_02032013.dmp


3) Clone a User

Suppose you want to create a user Z exactly like an existing user A. To achieve this, first export the definition of schema A, then import it again, telling Data Pump to change schema A to the new schema Z by using the REMAP_SCHEMA parameter of impdp.


$ expdp user/password schemas=A content=metadata_only directory=datapump dumpfile=A_02032013.dmp


$ impdp user/password remap_schema=A:Z directory=datapump dumpfile=A_02032013.dmp


And your new user Z is now created just like your existing user A. That easy!
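
As a rough verification (assuming you have access to the DBA views), you can compare the object counts of the two schemas:

SQL> select owner, object_type, count(*)
     from   dba_objects
     where  owner in ('A', 'Z')
     group  by owner, object_type
     order  by 1, 2;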


4) Create smaller copies of production

This is a very common task for a DBA: you are asked to create a copy of your database (for development or test purposes), but the destination server does not have enough space for a full copy. This can easily be solved with Data Pump. For this example, let's say you only have space for 70% of your production database. To decide how to proceed, we need to know whether the copy will contain metadata only (no data/rows) or data as well. Let's see how to do it each way.


We use the TRANSFORM option of impdp; this parameter accepts the following keywords (see the example after the list below):


TRANSFORM

Metadata transform to apply to applicable objects.

Valid keywords are:

DISABLE_ARCHIVE_LOGGING, INMEMORY, INMEMORY_CLAUSE, LOB_STORAGE, OID, PCTSPACE, SEGMENT_ATTRIBUTES, SEGMENT_CREATION, STORAGE, TABLE_COMPRESSION_CLAUSE.
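
For instance, a hypothetical import that strips all storage and tablespace attributes, so that objects pick up the target database defaults, could look like this:

$ impdp user/password directory=datapump dumpfile=metadata_02032013.dmp transform=segment_attributes:n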


a) Metadata Only

First do a full export of your source database.


$ expdp user/password content=metadata_only full=y directory=datapump dumpfile=metadata_02032013.dmp


Then import the metadata, telling Data Pump to reduce the size of extents to 70%. You can do this with the TRANSFORM parameter of impdp; the PCTSPACE value is a percentage multiplier used to alter extent allocations and datafile sizes.


$ impdp user/password transform=pctspace:70 directory=datapump dumpfile=metadata_02032013.dmp


Let’s do a test to see whether this is really true. First, export a table from the test database (metadata only) and generate the SQL file to see its normal storage clause.


$ expdp user/password content=metadata_only tables=user.test_table directory=datapump dumpfile=example_02032013.dmp


$impdp user/password content=metadata_only directory=datapump dumpfile=example_02032013.dmp sqlfile=check_02032013.sql


CREATE TABLE "USER"."TEST_TABLE"

      ( "EMP_NO" NUMBER,

        "Age" NUMBER,

        "Address" VARCHAR2(4000),

        "Description" VARCHAR2(100),

        "STATUS" VARCHAR2(2000)

      ) PCTFREE 10 PCTUSED 0 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING

STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645

PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)

TABLESPACE "TEST_TBS" ;


Above is the SQL generated by Data Pump; you can see that the table would be created with an INITIAL extent of 65536 and a NEXT extent of 1048576. Now let's generate it again, but using the TRANSFORM parameter to reduce the allocations to 70% of the original size.


$ impdp user/password transform=pctspace:70 content=metadata_only directory=datapump dumpfile=example_02032013.dmp sqlfile=check2_02032013.sql


CREATE TABLE "USER"."TEST_TABLE"

      ( "EMP_NO" NUMBER,

        "Age" NUMBER,

        "Address" VARCHAR2(4000),

        "Description" VARCHAR2(100),

        "STATUS" VARCHAR2(2000)

      ) PCTFREE 10 PCTUSED 0 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING

STORAGE(INITIAL 45875 NEXT 734003 MINEXTENTS 1 MAXEXTENTS 2147483645

PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)

TABLESPACE "TEST_TBS" ;


Above is the SQL generated by Data Pump this time; the table would now be created with an INITIAL extent of 45875 and a NEXT extent of 734003 (65536 x 0.70 is roughly 45875, and 1048576 x 0.70 is roughly 734003), clearly 30% smaller than the original. In other words, it works.


Please refer to the Oracle documentation for more ways to use the TRANSFORM parameter; you will not regret it.


b) Metadata and data

First, do a full export of your source database using the SAMPLE export parameter; it specifies the percentage of data rows to be sampled and unloaded from the source database. In this case, let's use 70%.


$ expdp user/password sample=70 full=y directory=datapump dumpfile=expdp_02032013.dmp


Then, as in the previous example, all you need to do is import it while telling Data Pump to reduce the size of extents to 70%, and that’s it!


$ impdp user/password transform=pctspace:70 directory=datapump dumpfile=expdp_02032013.dmp
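
Note that SAMPLE can also be restricted to a single table instead of the whole export. A sketch, assuming a SCOTT.EMP table exists in the source database:

$ expdp user/password tables=scott.emp sample=scott.emp:70 directory=datapump dumpfile=emp_sample_02032013.dmp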


5) Create your database in a different path

This is very easy to achieve; all you need to do is use the REMAP_DATAFILE parameter in your import command, as in the example below:


$ impdp user/password directory=datapump dumpfile=example_02032013.dmp remap_datafile='/u01/app/oracle/oradata/datafile_01.dbf':'/u02/datafile_01.dbf'


6) Move all objects from one tablespace to another

This is very easy to do; as in the previous example, all you need is the REMAP_TABLESPACE parameter in your import command, as in the example below:


$ impdp user/password directory=datapump dumpfile=example_02032013.dmp remap_tablespace=OLD_TBS:NEW_TBS
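
The parameter can also be repeated if several tablespaces need to be consolidated into one; the tablespace names below are purely illustrative:

$ impdp user/password directory=datapump dumpfile=example_02032013.dmp remap_tablespace=OLD_DATA:NEW_TBS remap_tablespace=OLD_INDEX:NEW_TBS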


7) Move an object to a different schema

All you need to do is use the REMAP_SCHEMA parameter when importing, as in the example below.


$ expdp user/password tables=user.table_name directory=datapump dumpfile=table_name_02032013.dmp


$ impdp user/password directory=datapump dumpfile=table_name_02032013.dmp remap_schema=old_schema:new_schema


8)  Exporting using Data Pump API (DBMS_DATAPUMP package)


declare
 handle number;
begin
 -- open a schema-mode export job
 handle := dbms_datapump.open('EXPORT', 'SCHEMA');
 -- write the dump file to the directory object EXTDIR
 dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
 -- export only the SCOTT schema
 dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', '=''SCOTT''');
 -- use up to 4 parallel worker processes
 dbms_datapump.set_parallel(handle, 4);
 dbms_datapump.start_job(handle);
 -- detach; the job keeps running in the background
 dbms_datapump.detach(handle);
exception
 when others then
  dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/

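Because the block detaches from the job, the export keeps running in the background; you can monitor it from another session, for example by querying the DBA_DATAPUMP_JOBS view:

SQL> select owner_name, job_name, operation, job_mode, state
     from   dba_datapump_jobs;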


9) Export from a higher version and import into a lower version


If the dump file needs to be imported into a lower-version target database, use the Export Data Pump parameter VERSION to match the compatibility level of the target database (or, alternatively, set the COMPATIBLE parameter of both databases to the same value).

In a downgrade scenario, perform the Export Data Pump job with the same version of the Export Data Pump client as the source database (recommended), and set the VERSION parameter to the lower compatibility level of the target database.


VERSION={COMPATIBLE | LATEST | version_string}


COMPATIBLE = The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2 or higher. This is the default value.


LATEST = The version of the metadata corresponds to the database release version.


version_string = A specific database version (e.g.: 12.2.0).


Import Data Pump can always import dump file sets created by older database versions. For example, an Export Data Pump dump file set created on 10.1.0.2.0 can be imported into any higher-release database.

Import Data Pump cannot import dump file sets with a higher compatibility level than the target database. For example, an Export Data Pump dump file created with VERSION=COMPATIBLE (the default) from an 11.1.0.6.0 source database with COMPATIBLE=11.1.0 can never be imported into a 10.2.0.4.0 target database.


Note: it is possible to import any dump file from a main release into any patched version of that main release (e.g., a 10.2.0.4.0 Export Data Pump dump file set can be imported into a 10.2.0.1.0 target database without specifying VERSION=10.2.0.1.0 for the Export Data Pump job).
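
A sketch of a downgrade export, assuming a 19c source database and a 12.2 target (schema, directory, and file names are illustrative):

$ expdp user/password schemas=scott version=12.2 directory=datapump dumpfile=scott_for_122.dmp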


10) Export from a read-only database


If the source database is a read-only database, then an Export Data Pump job started on that database will fail because Data Pump cannot create its so-called master table. To work around this, treat the read-only source database as a remote database and start the Export Data Pump job on a different local database that has a database link to the remote read-only source database. Specify the NETWORK_LINK parameter when connecting with the Export Data Pump client to that local database. The data from the remote read-only source database instance is transferred over the database link and written to a dump file set on the connected (local) database instance.


The local database can be on the same (or a different) server, but its version must respect the normal version rules for expdp. I would always suggest using the exact same version as the database you want to extract from.


--on local db:

SQL> create public database link test connect to user identified by pass using 'tns alias of readonly db';


This assumes, of course, that the user already exists in your read-only database.


Now that the database link is in place, we run Data Pump in the normal way:


expdp user/pass network_link=test dumpfile=test.dmp full=y


11) How to obtain the version of an Export Data Pump dump file


Every export dumpfile starts with a header (usually 4 kb in size) which contains details about the dumpfile. Starting with Oracle10g Release 2 (10.2.0.1.0), this header information can be extracted from the dumpfile by calling the procedure DBMS_DATAPUMP.GET_DUMPFILE_INFO. 


For example, with a helper procedure such as SHOW_DUMPFILE_INFO installed (a sample wrapper around DBMS_DATAPUMP.GET_DUMPFILE_INFO published by Oracle Support as a script), you can run:

SET serveroutput on SIZE 1000000

exec show_dumpfile_info(p_dir => 'my_dir', p_file => 'expdb_sample.dmp');
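
If you do not have such a wrapper installed, a minimal sketch that calls DBMS_DATAPUMP.GET_DUMPFILE_INFO directly (the directory object and file name are illustrative) could look like this:

set serveroutput on
declare
  v_info     sys.ku$_dumpfile_info;   -- nested table of (item_code, value) pairs
  v_filetype number;                  -- dump file type code
begin
  dbms_datapump.get_dumpfile_info(
    filename   => 'expdb_sample.dmp',
    directory  => 'MY_DIR',
    info_table => v_info,
    filetype   => v_filetype);
  dbms_output.put_line('File type code: ' || v_filetype);
  for i in 1 .. v_info.count loop
    dbms_output.put_line('Item ' || v_info(i).item_code || ': ' || v_info(i).value);
  end loop;
end;
/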


12) Compression options

Import the table from the dump file, using the TRANSFORM parameter to compress the table using the TABLE_COMPRESSION_CLAUSE and the indexes using the INDEX_COMPRESSION_CLAUSE.


$ impdp testuser1/testuser1@//localhost:1521/pdb1 \

    tables=t2 \

    directory=test_dir \

    dumpfile=test.dmp \

    logfile=impdp_test.log \

    transform=table_compression_clause:\"compress basic\" \

    transform=index_compression_clause:\"compress advanced low\"


conn testuser1/testuser1@//localhost:1521/pdb1


select compression from user_tables

where  table_name = 'T2';


COMPRESS

--------

ENABLED



column index_name format a12

select index_name, compression

from   user_indexes

where  table_name = 'T2'

order by 1;


INDEX_NAME   COMPRESSION

------------ -------------

T2_COL1_IDX  ADVANCED LOW

T2_COL2_IDX  ADVANCED LOW

SQL>

We can see both the table and indexes have now been compressed.


13) Editions: SOURCE_EDITION and TARGET_EDITION

First, I would like to explain what an edition is. The simple definition: the same object name can be stored multiple times, once per edition. This is useful during upgrades or for version control of DDL.

  

For example, if you are not yet sure whether an object needs an extra structure, you can create the same object name in a different edition. Once you have finalized the change, you can drop the unused copy.


Editions are nonschema objects; as such, they do not have owners. 

Editions are created in a single namespace, and multiple editions can coexist in the database.

The database must have at least one edition. Every newly created or upgraded Oracle Database starts with one edition named ora$base.


Edition-based redefinition enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time.


To upgrade an application while it is in use, you copy the database objects that comprise the application and redefine the copied objects in isolation. Your changes do not affect users of the application—they continue to run the unchanged application. When you are sure that your changes are correct, you make the upgraded application available to all users.

Using edition-based redefinition means using one or more of its component features. The features you use, and the down time, depend on these factors:

·  What kind of database objects you redefine

·  How available the database objects must be to users while you are redefining them

·  Whether you make the upgraded application available to some users while others continue to use the older version of the application

You always use the edition feature to copy the database objects and redefine the copied objects in isolation; that is why the procedure that this chapter describes for upgrading applications online is called edition-based redefinition.

If the object type of every object you will redefine is editionable, the edition is the only feature you use.

Table is not an editionable type. If you change the structure of one or more tables, you also use the editioning view feature.

If other users must be able to change data in the tables while you are changing their structure, you also use forward crossedition triggers. If the pre- and post-upgrade applications will be in ordinary use at the same time (hot rollover), you also use reverse crossedition triggers.  Crossedition triggers are not a permanent part of the application—you drop them when all users are using the post-upgrade application.

An editioned object is a schema object that has both an editionable type and an editions-enabled owner. 

(A schema object that has an editionable type but not an editions-enabled owner is potentially editioned.) An edition can have its own copy of an editioned object, in which case only the copy is visible to the edition.

A noneditioned object is a schema object that has a noneditionable type. An edition cannot have its own copy of a noneditioned object. A noneditioned object is identical in, and visible to, all editions.

Editionable object types are limited to objects that do not have storage (that is, objects without segments).

 Editionable and Noneditionable Schema Object Types

These schema objects types are editionable:

  • SYNONYM
  • VIEW
  • All PL/SQL object types:
    • FUNCTION
    • LIBRARY
    • PACKAGE and PACKAGE BODY
    • PROCEDURE
    • TRIGGER
    • TYPE and TYPE BODY

All other schema object types are noneditionable. Table is an example of a noneditionable type.

A schema object of an editionable type is editioned if its owner is editions-enabled; otherwise, it is potentially editioned.

A schema object of a noneditionable type is always noneditioned, even if its owner is editions-enabled. A table is an example of a noneditioned object.

STEP1: Enabling Editions for a User


To enable editions for a user, use the ENABLE EDITIONS clause of either the  CREATE  USER  or  ALTER  USER statement. The EDITIONS_ENABLED column of the static data dictionary view  DBA_USERS  or USER_USERS shows which users have editions enabled.


[oracle@ ~]$ sqlplus

Enter user-name: /as sysdba


SQL> GRANT CREATE ANY EDITION, DROP ANY EDITION to dpusr;


SQL> alter user dpusr enable editions;


SQL> grant create any view to dpusr;

SQL> select USERNAME,EDITIONS_ENABLED from dba_users where USERNAME='DPUSR';


USERNAME                       E

-------------------  ---------

DPUSR                          Y


STEP2:  Check the current & new edition in DPUSR 


[oracle@inas ~]$ sqlplus dpusr/dpusr


SQL> create edition e2;     

--creating new edition


SQL> SELECT SYS_CONTEXT('userenv', 'current_edition_name') FROM DUAL;    


-- check the current edition


SYS_CONTEXT('USERENV','CURRENT_EDITION_NAME')

---------------------

ORA$BASE


SQL> create editioning view ed_emp_view as select  EMPNO,ENAME  from emp;


View created.


SQL> desc ed_emp_view
 Name          Null?    Type
 ------------- -------- -------------
 EMPNO                  NUMBER(4)
 ENAME                  VARCHAR2(10)


SQL> alter session set edition=e2;    

---moving to new edition


SQL> SELECT SYS_CONTEXT('userenv', 'current_edition_name') as env FROM DUAL;

Env

--------

E2


SQL> CREATE or replace EDITIONING VIEW ed_emp_view as select EMPNO, ENAME, SAL from emp;


SQL> desc ed_emp_view
 Name          Null?    Type
 ------------- -------- -------------
 EMPNO                  NUMBER(4)
 ENAME                  VARCHAR2(10)
 SAL                    NUMBER(7,2)

  

STEP3: Take the export 

[oracle@inas ~]$ expdp dpusr/dpusr dumpfile=d4.dmp schemas=dpusr include=view directory=dps source_edition=e2


STEP4: Drop the view from the ORA$BASE edition


[oracle@inas ~]$ sqlplus dpusr/dpusr

SQL> alter session set edition=ORA$BASE;


SQL> desc ed_emp_view
 Name          Null?    Type
 ------------- -------- -------------
 EMPNO                  NUMBER(4)
 ENAME                  VARCHAR2(10)

SQL> drop view ed_emp_view;


Now the object is gone from the ORA$BASE edition, but we still have the dump file backup, which was taken from the new edition.


Step5: IMPORT THE DUMPFILE

[oracle@inas ~]$ impdp dumpfile=d4.dmp directory=dps target_edition=ORA\$BASE


STEP 6: Verify it

[oracle@inas ~]$ sqlplus dpusr/dpusr


SQL> ALTER SESSION SET EDITION =ORA$BASE;


SQL> desc ed_emp_view
 Name          Null?    Type
 ------------- -------- -------------
 EMPNO                  NUMBER(4)
 ENAME                  VARCHAR2(10)
 SAL                    NUMBER(7,2)


Note: you have now recovered the object from the dump file into the target edition; the source edition and target edition objects are now the same.

--------------------


I have reviewed some useful options of the Data Pump utility, and I hope this helps you.


Best Regards,


Alireza Kamrani

Senior RDBMS Consultant.


  Fleet Patching & Provisioning (FPP) Concepts and cloning methods for Oracle/Grid Home:                           ♠️Alireza Kamrani♠️  ...