Wednesday, April 12, 2023

Import option in Oracle 19c


Tips & tricks 🧶


As a DBA, you sometimes face challenges when loading large volumes of data into a target database, especially when there are many relationships between tables or large partitioned tables.

With the DATA_OPTIONS parameter described below, you can control how constraint violations and other special data-handling cases are treated during import operations.


DATA_OPTIONS

The Oracle Data Pump Import command-line mode DATA_OPTIONS parameter designates how you want certain types of data to be handled during import operations.


Default

There is no default. If this parameter is not used, then the special data handling options it provides simply do not take effect.


Purpose

The DATA_OPTIONS parameter designates how you want certain types of data to be handled during import operations.


Syntax and Description


DATA_OPTIONS = [DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS |
                REJECT_ROWS_WITH_REPL_CHAR | GROUP_PARTITION_TABLE_DATA |
                TRUST_EXISTING_TABLE_PARTITIONS | VALIDATE_TABLE_DATA |
                ENABLE_NETWORK_COMPRESSION | CONTINUE_LOAD_ON_FORMAT_ERROR]


DISABLE_APPEND_HINT: Specifies that you do not want the import operation to use the APPEND hint while loading the data object; when this option is not set, the default is to use the APPEND hint. Disabling the APPEND hint can be useful to address duplicate data. For example, you can use DISABLE_APPEND_HINT when there is a small set of data objects to load that already exists in the database, and some other application could be concurrently accessing one or more of those data objects.
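
For instance, a minimal invocation (reusing the hr/employees names from the examples at the end of this post) might look like this:

$impdp hr TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=DISABLE_APPEND_HINT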


GROUP_PARTITION_TABLE_DATA: Tells Oracle Data Pump to import the table data in all partitions of a table as one operation. The default behavior is to import each table partition as a separate operation. If you know that the data for a partition will not move, then choose this parameter to accelerate the import of partitioned table data. There are cases when Oracle Data Pump attempts to load only one partition at a time. It does this when the table already exists, or when there is a risk that the data for one partition might be moved to another partition.


♨️Tells Data Pump to load the data from a partitioned table in a single operation, instead of loading the partitions separately.

The partitioning definition of the table is not taken into account during the data load, which allows the task to be carried out much faster.

It is important to clarify that the structure of the table remains unchanged; this option only speeds up the export and import operation.


We execute the expdp command with the GROUP_PARTITION_TABLE_DATA option and observe how it exports the data of all partitions of the table as a single operation.


$expdp system@ORCL DUMPFILE=SH_SALES.dmp LOGFILE=exp_SH_SALES.log DIRECTORY=EXP_DIR TABLES=SH.SALES DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA


REJECT_ROWS_WITH_REPL_CHAR: Specifies that you want the import operation to reject any rows that experience data loss because the default replacement character was used during character set conversion.
If REJECT_ROWS_WITH_REPL_CHAR is not set, then the default behavior is to load the converted rows with replacement characters.
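
A hypothetical invocation (the table and dump file names are illustrative) could be:

$impdp hr TABLES=employees DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=REJECT_ROWS_WITH_REPL_CHAR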


SKIP_CONSTRAINT_ERRORS: Affects how non-deferred constraint violations are handled while a data object (table, partition, or subpartition) is being loaded.
This option specifies that you want the import operation to proceed even if non-deferred constraint violations are encountered. It logs any rows that cause non-deferred constraint violations, but does not stop the load for the data object experiencing the violation, so the entire data object is not rolled back.
If SKIP_CONSTRAINT_ERRORS is not set, then the default behavior is to roll back the entire load of the data object on which non-deferred constraint violations are encountered.
Note that SKIP_CONSTRAINT_ERRORS has no effect on deferred constraint violations: they always cause the entire load to be rolled back.


TRUST_EXISTING_TABLE_PARTITIONS: Tells Data Pump to load partition data in parallel into existing tables.
Use this option when you are using Data Pump to create the table from the definition in the export database before the table data import is started. Typically, you use this parameter as part of a migration when the metadata is static and can be moved before the databases are taken offline to migrate the data. Moving the metadata separately minimizes downtime. If you use this option, and if other attributes of the database are the same (for example, character set), then the data from the export database goes to the same partitions in the import database.
You can also create the table outside of Oracle Data Pump; however, if you do, then the partition attributes and partition names must be identical to those in the export database.


The TRUST_EXISTING_TABLE_PARTITIONS option tells Data Pump that the structure of a partitioned table already exists in the destination database and is identical to that in the source database. In this way, Data Pump can import data in parallel into the different partitions.

If the structure is not the same, a loading error such as the following can occur:


ORA-31693: Table data object "SH"."SALES":"SALES_Q3_2001" failed to load/unload and is being skipped due to error:

ORA-29913: error in executing ODCIEXTTABLEFETCH callout

ORA-14401: inserted partition key is outside specified partition


To explain it better, we review the following example.  


♨️When we perform export and import operations on a partitioned table, the operation is performed serially, even if parallelism is defined.

📍Parallelism helps across different tables, but not across the partitions of a single table.


We can see this in the following export task with the option METRICS=Y (an import task works in the same way)

expdp system@ORCL DUMPFILE=SH_SALES3_%U.dmp LOGFILE=exp_SH_SALES3.log DIRECTORY=EXP_DIR TABLES=SH.SALES METRICS=Y PARALLEL=2


Now we run the import again in the database, but with the TRUST_EXISTING_TABLE_PARTITIONS option; we also use the TABLE_EXISTS_ACTION=TRUNCATE option.


$impdp system@PRD DUMPFILE=SH_SALES_%U.dmp LOGFILE=imp_SH_SALES.log DIRECTORY=IMP_DIR TABLES=SH.SALES METRICS=Y PARALLEL=2 TABLE_EXISTS_ACTION=TRUNCATE DATA_OPTIONS=TRUST_EXISTING_TABLE_PARTITIONS



VALIDATE_TABLE_DATA: Directs Oracle Data Pump to validate the number and date data types in table data columns.
If the import encounters invalid data, then an ORA-39376 error is written to the log file. The error text includes the column name. The default is to do no validation. Use this option if the source of the Oracle Data Pump dump file is not trusted.
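
A hypothetical invocation for an untrusted dump file (names are illustrative) could be:

$impdp hr TABLES=employees DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=VALIDATE_TABLE_DATA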


ENABLE_NETWORK_COMPRESSION: Used for network imports in which the Oracle Data Pump ACCESS_METHOD parameter is set to DIRECT_PATH to load remote table data.
When ENABLE_NETWORK_COMPRESSION is specified, Oracle Data Pump compresses data on the remote node before it is sent over the network to the target database, where it is decompressed. This option is useful if the network connection between the remote and local database is slow, because it reduces the amount of data sent over the network.
Setting ACCESS_METHOD=AUTOMATIC enables Oracle Data Pump to set ENABLE_NETWORK_COMPRESSION automatically during the import if Oracle Data Pump uses DIRECT_PATH for a network import.
The ENABLE_NETWORK_COMPRESSION option is ignored if Oracle Data Pump is importing data from a dump file, if the remote database is earlier than Oracle Database 12c Release 2 (12.2), or if an INSERT_AS_SELECT statement is being used to load data from the remote database.
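
Because this option applies only to network imports, a hypothetical invocation (the database link name source_db is illustrative) could be:

$impdp hr TABLES=employees NETWORK_LINK=source_db ACCESS_METHOD=DIRECT_PATH DATA_OPTIONS=ENABLE_NETWORK_COMPRESSION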


CONTINUE_LOAD_ON_FORMAT_ERROR: Directs Oracle Data Pump to skip forward to the start of the next granule when a stream format error is encountered while loading table data. 
Stream format errors typically are the result of corrupt dump files. If Oracle Data Pump encounters a stream format error, and the original export database is not available to export the table data again, then you can use CONTINUE_LOAD_ON_FORMAT_ERROR. If Oracle Data Pump skips over data, then not all data from the source database is imported, which potentially skips hundreds or thousands of rows.
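
A hypothetical invocation against a partially corrupted dump file (names are illustrative) could be:

$impdp hr TABLES=employees DUMPFILE=dpump_dir1:corrupt.dmp DATA_OPTIONS=CONTINUE_LOAD_ON_FORMAT_ERROR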

Restrictions

  • If you use DISABLE_APPEND_HINT, then it can take longer for data objects to load.
  • If you use SKIP_CONSTRAINT_ERRORS, and if a data object has unique indexes or constraints defined on it at the time of the load, then the APPEND hint is not used for loading that data object. Therefore, loading such data objects can take longer when the SKIP_CONSTRAINT_ERRORS option is used.
  • Even if SKIP_CONSTRAINT_ERRORS is specified, it is not used unless a data object is being loaded using the external table access method.


Example

This example shows a data-only table mode import with SKIP_CONSTRAINT_ERRORS enabled:


$impdp hr TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=skip_constraint_errors


If any non-deferred constraint violations are encountered during this import operation, then they are logged. The import continues on to completion.


Another example:


$impdp hr/pass dumpfile=unique.dmp directory=EXPDIR table_exists_action=APPEND data_options=SKIP_CONSTRAINT_ERRORS


Best regards,

Alireza Kamrani

Wednesday, April 5, 2023

Oracle database patching techniques

 Out-of-Place Oracle Database Patching and Provisioning Golden Images


Visit more in my professional group on LinkedIn:

https://www.linkedin.com/groups/8151826


There are some traditional methods for upgrading and patching Oracle databases, both offline and online.

DBAs always face the challenge of doing this while minimizing downtime.

A good option is to use Oracle GoldenGate for online upgrades, which can be the better choice in some situations; but ultimately, selecting the best method depends on other factors such as the current database version, SLA contracts, RTO/RPO, workload, resource limitations, the type of applications using the database, database size, staff experience, and so on.


Oracle keeps simplifying this process, and in newer versions we see the following features.


Also, in previous posts I explained some useful features such as Oracle Fleet Patching and Provisioning, which makes this easy in multi-instance and cloud-based environments.


Here, I demonstrate another useful concept and solution for doing that.


These features are available for Multitenant databases from 12c onward, and they have improved in newer releases such as 19c and 21c; with newer versions, manual intervention by DBAs is reduced to a minimum, along with the total upgrade time.

Let's go.


Introduction:

Oracle Database is a widely used database management system that requires regular patching to ensure security, stability, and performance improvements. 

Out-of-place patching is a recommended approach, and creating golden images helps streamline the process. This essay will explain the steps involved in out-of-place Oracle database patching, creating golden images for Oracle Home and Grid Home, and provisioning Oracle features using the -apply_ru option.


1. Out-of-Place Oracle Database Patching:


Out-of-place patching involves creating a new Oracle Home and applying patches to it rather than modifying the existing installation. This approach minimizes downtime and allows easy rollback if any issues arise. Here's a step-by-step example (a command-line sketch follows the steps):


   a. Obtain the required patch set from Oracle's support website.

   b. Unzip the patch set files to a temporary directory.

   c. Create a new Oracle Home directory where the patched database will reside.

   d. Install Oracle software binaries into the new Oracle Home.

   e. Use the Oracle Universal Installer (OUI) to apply the patch set to the newly created Oracle Home.

   f. Update the database's Oracle Inventory (oraInventory) to reflect the new Oracle Home.
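
As a rough illustration, a minimal out-of-place patching flow might look like the following sketch. The paths, file names, and the Release Update patch number are hypothetical and must be adjusted for your environment:

# Hypothetical paths and patch number; adjust for your environment.
export NEW_HOME=/u01/app/oracle/product/19.19.0/dbhome_1
mkdir -p $NEW_HOME

# Extract the base 19c software into the new home, then install it and
# apply the Release Update in a single pass with -applyRU (18c and later).
unzip -q /stage/LINUX.X64_193000_db_home.zip -d $NEW_HOME
cd $NEW_HOME
./runInstaller -silent -applyRU /stage/35042068 \
  -responseFile $NEW_HOME/install/response/db_install.rsp

# After switching the database to the new home, apply the SQL-level changes.
$NEW_HOME/OPatch/datapatch -verbose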


2. Creating Golden Images for Oracle Home and Grid Home:


Creating golden images involves cloning an existing Oracle Home and Grid Home installation, which serves as a template for future installations. This approach ensures consistency and reduces manual effort. Here's an example of creating a golden image (a command sketch follows the steps):


   a. Stop all relevant services and processes related to the Oracle Home or Grid Home.

   b. Create a backup of the existing Oracle Home or Grid Home using tools like Oracle Recovery Manager (RMAN) or file system-level backups.

   c. Clone the backup to a new location or server, preserving the directory structure and permissions.

   d. Update configuration files, such as listener.ora and tnsnames.ora, to reflect the new environment.

   e. Start the services and verify the cloned Oracle Home or Grid Home.
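
As one possible sketch: in 19c, the database installer itself can create a gold image zip of an existing home with the -createGoldImage option. The paths and image file name below are hypothetical:

# On the source server: create a gold image zip of the existing home.
$ORACLE_HOME/runInstaller -silent -createGoldImage \
  -destinationLocation /stage/images

# On the target server: unzip the image into a new home and run the
# installer to configure and register it in the central inventory.
export NEW_HOME=/u01/app/oracle/product/19.0.0/dbhome_gold
mkdir -p $NEW_HOME
unzip -q /stage/images/db_home_2023.zip -d $NEW_HOME
cd $NEW_HOME
./runInstaller -silent -responseFile /stage/db_install.rsp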


3. Patching and Provisioning Oracle Features:


After creating the golden image, you can use the installer's `-applyRU` option to apply Release Updates and provision additional Oracle features. Here's a brief example (a command sketch follows the steps):


   a. Obtain the required patch set or feature pack from Oracle's support website.

   b. Unzip the patch set or feature pack files to a temporary directory.

   c. Run the installer, specifying the Oracle Home or Grid Home to be patched, along with the `-applyRU` option.

   d. Follow the prompts and provide any necessary information during the patching process.

   e. Once the patching is complete, verify the success of the patch installation by checking the logs and running appropriate tests.
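
For a Grid Infrastructure home, the equivalent sketch uses gridSetup.sh; again, the paths and patch number are hypothetical:

# Hypothetical example: extract a Grid Infrastructure gold image and
# apply a Release Update during the same installation with -applyRU.
export GRID_HOME=/u01/app/19.0.0/grid
mkdir -p $GRID_HOME
unzip -q /stage/grid_home_image.zip -d $GRID_HOME
cd $GRID_HOME
./gridSetup.sh -silent -applyRU /stage/35037840 \
  -responseFile /stage/grid_install.rsp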


Conclusion:

Out-of-place Oracle database patching, creating golden images for Oracle Home and Grid Home, and provisioning Oracle features are crucial steps to ensure a secure and stable database environment. By following the provided example and best practices, organizations can minimize downtime, maintain consistency, and apply patches and features efficiently.


Do you know better solutions for upgrading databases?



Regards,

Alireza Kamrani

Senior RDBMS Consultant.
