Saturday, November 22, 2025

How to Use Oracle Data Pump with Oracle RAC

How to Use Oracle Data Pump with Oracle RAC 📚


Using Oracle Data Pump in an Oracle Real Application Clusters (Oracle RAC) environment requires you to perform a few checks to ensure that you are making cluster member nodes available.

📙 To use Oracle Data Pump or external tables in an Oracle RAC configuration, you must ensure that the directory object path is on a cluster-wide file system.

The directory object must point to shared physical storage that is visible to, and accessible from, all instances where Oracle Data Pump or external tables processes (or both) can run.
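For example, the directory object can be created on a cluster file system mount that every node can see. As a sketch (the /acfs01/datapump path and the hr grantee are illustrative):

    CREATE DIRECTORY dpump_dir1 AS '/acfs01/datapump';
    GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;

If /acfs01 is an Oracle ACFS (or other cluster-wide) mount, Data Pump child processes on any instance can read and write the dump file set.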


📘 The default Oracle Data Pump behavior is that child processes can run on any instance in an Oracle RAC configuration.


Therefore, child processes on those Oracle RAC instances must have physical access to the location defined by the directory object, such as shared storage media. 


If the configuration does not have shared storage for this purpose, but you still require parallelism, then you can use the CLUSTER=NO parameter to constrain all child processes to the instance where the Oracle Data Pump job was started.


📗 Under certain circumstances, Oracle Data Pump uses parallel query child processes to load or unload data.

In an Oracle RAC environment, Data Pump does not control where these child processes run. 


Therefore, these child processes can run on other cluster member nodes in the cluster, regardless of which instance is specified for CLUSTER and SERVICE_NAME for the Oracle Data Pump job. 


Controls for parallel query operations are independent of Oracle Data Pump. 

When parallel query child processes run on other instances as part of an Oracle Data Pump job, they also require access to the physical storage of the dump file set.

How do you limit Data Pump processes to a specific node in a RAC environment?

Consider using these options:

- Use a database service for the Data Pump job

- Set the PARALLEL_INSTANCE_GROUP initialization parameter

- Use CLUSTER=NO with impdp/expdp

- ALTER SYSTEM SET parallel_force_local=TRUE SID='ORCL1';

- Adjust Database Resource Manager plans
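As a hedged sketch, the service-based and local-only approaches look like this (the service name dp_svc and the database/instance names orcl/orcl1 are illustrative):

    $ srvctl add service -db orcl -service dp_svc -preferred orcl1
    $ srvctl start service -db orcl -service dp_svc
    $ expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SERVICE_NAME=dp_svc

    $ impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CLUSTER=NO

With SERVICE_NAME set (and CLUSTER left at its default of YES), child processes run only on instances offering that service; with CLUSTER=NO, all child processes stay on the instance where the job was started.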

Oracle Data Pump Option

 DATA_OPTIONS:

The Oracle Data Pump Import command-line mode DATA_OPTIONS parameter designates how you want certain types of data to be handled during import operations.

Default:
There is no default. If this parameter is not used, then the special data handling options it provides simply do not take effect.

Purpose:
The DATA_OPTIONS parameter designates how you want certain types of data to be handled during import operations.

Syntax and Description:

✴️ DATA_OPTIONS = [CONTINUE_LOAD_ON_FORMAT_ERROR | DISABLE_APPEND_HINT | DISABLE_STATS_GATHERING | ENABLE_NETWORK_COMPRESSION | GROUP_PARTITION_TABLE_DATA | REJECT_ROWS_WITH_REPL_CHAR | SKIP_CONSTRAINT_ERRORS | TRUST_EXISTING_TABLE_PARTITIONS | VALIDATE_TABLE_DATA]

CONTINUE_LOAD_ON_FORMAT_ERROR: Directs Oracle Data Pump to skip forward to the start of the next granule when a stream format error is encountered while loading table data.
Stream format errors typically are the result of corrupt dump files. If Oracle Data Pump encounters a stream format error, and the original export database is not available to export the table data again, then you can use CONTINUE_LOAD_ON_FORMAT_ERROR.
If Oracle Data Pump skips over data, then not all data from the source database is imported, which potentially skips hundreds or thousands of rows.

DISABLE_APPEND_HINT: Specifies that you do not want the import operation to use the APPEND hint while loading the data object. Disabling the APPEND hint can be useful to address duplicate data. For example, you can use DISABLE_APPEND_HINT when there is a small set of data objects to load that exists already in the database, and some other application can be concurrently accessing one or more of the data objects.

DISABLE_APPEND_HINT: Changes the default behavior, so that the APPEND hint is not used for loading data objects. When not set, the default is to use the APPEND hint for loading data objects.
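As a sketch, a data-only load with the APPEND hint disabled (table and dump file names are illustrative):

    $ impdp hr TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=DISABLE_APPEND_HINT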

DISABLE_STATS_GATHERING: Oracle Data Pump does not gather statistics on load by default. However, some environments, such as Oracle Autonomous Database, do gather statistics on load by default. When this parameter is used, statistics gathering is suspended during the import job.
The tradeoff for gathering statistics while loading data is incurring potentially significant overhead for the short-term benefit of having basic statistics gathered until you can gather full statistics on the table.

ENABLE_NETWORK_COMPRESSION: Used for network imports in which the Oracle Data Pump ACCESS_METHOD parameter is set to DIRECT_PATH to load remote table data.
When ENABLE_NETWORK_COMPRESSION is specified, Oracle Data Pump compresses data on the remote node before it is sent over the network to the target database, where it is decompressed.

This option is useful if the network connection between the remote and local database is slow, because it reduces the amount of data sent over the network.
Setting ACCESS_METHOD=AUTOMATIC enables Oracle Data Pump to set ENABLE_NETWORK_COMPRESSION automatically during the import if Oracle Data Pump uses DIRECT_PATH for a network import.

The ENABLE_NETWORK_COMPRESSION option is ignored if Oracle Data Pump is importing data from a dump file, if the remote database is earlier than Oracle Database 12c Release 2 (12.2), or if an INSERT_AS_SELECT statement is being used to load data from the remote database.
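A sketch of a network import using this option (the database link name src_db is illustrative):

    $ impdp hr TABLES=employees NETWORK_LINK=src_db ACCESS_METHOD=DIRECT_PATH DATA_OPTIONS=ENABLE_NETWORK_COMPRESSION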

GROUP_PARTITION_TABLE_DATA: Tells Oracle Data Pump to import the table data in all partitions of a table as one operation. The default behavior is to import each table partition as a separate operation. If you know that the data for a partition will not move, then choose this parameter to accelerate the import of partitioned table data. There are cases when Oracle Data Pump attempts to load only one partition at a time. It does this when the table already exists, or when there is a risk that the data for one partition might be moved to another partition.

REJECT_ROWS_WITH_REPL_CHAR: Specifies that you want the import operation to reject any rows that experience data loss because the default replacement character was used during character set conversion.
If REJECT_ROWS_WITH_REPL_CHAR is not set, then the default behavior is to load the converted rows with replacement characters.

SKIP_CONSTRAINT_ERRORS: Affects how non-deferred constraint violations are handled while a data object (table, partition, or subpartition) is being loaded.
If deferred constraint violations are encountered, then SKIP_CONSTRAINT_ERRORS has no effect on the load. Deferred constraint violations always cause the entire load to be rolled back.
The SKIP_CONSTRAINT_ERRORS option specifies that you want the import operation to proceed even if non-deferred constraint violations are encountered. It logs any rows that cause non-deferred constraint violations, but does not stop the load for the data object experiencing the violation.

SKIP_CONSTRAINT_ERRORS: Prevents roll back of the entire data object when non-deferred constraint violations are encountered.
If SKIP_CONSTRAINT_ERRORS is not set, then the default behavior is to roll back the entire load of the data object on which non-deferred constraint violations are encountered.


TRUST_EXISTING_TABLE_PARTITIONS: Tells Data Pump to load partition data in parallel into existing tables.
Use this option when you are using Data Pump to create the table from the definition in the export database before the table data import is started. Typically, you use this parameter as part of a migration when the metadata is static, and you can move it before the databases are taken off line to migrate the data. Moving the metadata separately minimizes downtime. If you use this option, and if other attributes of the database are the same (for example, character set), then the data from the export database goes to the same partitions in the import database.

You can create the table outside of Oracle Data Pump. However, if you create tables as a separate option from using Oracle Data Pump, then the partition attributes and partition names must be identical to the export database.
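A sketch of a data-only import into precreated partitioned tables (table and dump file names are illustrative):

    $ impdp hr TABLES=hr.sales CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:sales.dmp DATA_OPTIONS=TRUST_EXISTING_TABLE_PARTITIONS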

Note:
This option can be used for import regardless of the source version of the export.

VALIDATE_TABLE_DATA: Directs Oracle Data Pump to validate the number and date data types in table data columns.
If the import encounters invalid data, then an ORA-39376 error is written to the .log file. The error text includes the column name. The default is to do no validation. Use this option if the source of the Oracle Data Pump dump file is not trusted.
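A sketch of an import that validates number and date columns in an untrusted dump file (names are illustrative):

    $ impdp hr TABLES=employees DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=VALIDATE_TABLE_DATA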

Restrictions:
• If you use DISABLE_APPEND_HINT, then it can take longer for data objects to load.
• If you use SKIP_CONSTRAINT_ERRORS, and if a data object has unique indexes or constraints defined on it at the time of the load, then the APPEND hint is not used for loading that data object. Therefore, loading such data objects can take longer when the SKIP_CONSTRAINT_ERRORS option is used.
• Even if SKIP_CONSTRAINT_ERRORS is specified, it is not used unless a data object is being loaded using the external table access method.

Example:
This example shows a data-only table mode import with SKIP_CONSTRAINT_ERRORS enabled:

$ impdp hr TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=skip_constraint_errors

If any non-deferred constraint violations are encountered during this import operation, then they are logged. The import continues on to completion.


For Export (expdp) :

DATA_OPTIONS=
[GROUP_PARTITION_TABLE_DATA | VERIFY_STREAM_FORMAT]


GROUP_PARTITION_TABLE_DATA: Tells Oracle Data Pump to unload all table data in one operation rather than unload each table partition as a separate operation. As a result, the definition of the table will not matter at import time because Import will see one partition of data that will be loaded into the entire table.
VERIFY_STREAM_FORMAT: Validates the format of a data stream before it is written to the Oracle Data Pump dump file. The verification checks for a valid format for the stream after it is generated but before it is written to disk. This assures that there are no errors when the dump file is created, which in turn helps to assure that there will not be errors when the stream is read at import time.
Example:
This example shows an export operation in which data for all partitions of a table are unloaded together instead of the default behavior of unloading the data for each partition separately.
$ expdp hr TABLES=hr.tab1 DIRECTORY=dpump_dir1
DUMPFILE=hr.dmp
VERSION=11.2
DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA


Monday, November 17, 2025

Oracle Hybrid Partitioned Tables

🚀 Oracle Hybrid Partitioned Tables — The Power of Internal + External Data in One Table


Hybrid Partitioned Tables (HPT) in Oracle Database bring a game-changing capability:


Seamlessly combine internal table partitions with external partitions stored in Hadoop, Object Storage, Kafka, NoSQL, and more, all inside a single logical table.


This feature helps DBAs and architects build cost-optimized, high-scale, and flexible data architectures without sacrificing Oracle’s powerful partitioning features.


🔹 What Are Hybrid Partitioned Tables?


Hybrid partitioned tables merge:

Internal partitions → stored traditionally inside Oracle tablespaces

External partitions → stored outside the DB (HDFS, Object Storage, Parquet, ORC, CSV, HBase, Kafka, Oracle NoSQL, AWS S3, Azure, OCI, etc.)


This hybrid model is ideal when you want:

•Hot data inside Oracle (fast access, indexed, DML-capable)

•Cold/archived data in cheaper external storage

•A unified SQL interface over both
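As a hedged sketch, a range-partitioned hybrid table with one external (cold) and one internal (hot) partition might look like this (the table, directory, and file names are illustrative):

    CREATE TABLE sales_hybrid
      (sale_id NUMBER, sale_date DATE, amount NUMBER)
      EXTERNAL PARTITION ATTRIBUTES (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY sales_dir
        ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
        REJECT LIMIT UNLIMITED)
      PARTITION BY RANGE (sale_date)
      (PARTITION sales_2023 VALUES LESS THAN (DATE '2024-01-01')
         EXTERNAL LOCATION ('sales_2023.csv'),
       PARTITION sales_cur VALUES LESS THAN (MAXVALUE));

Queries against sales_hybrid transparently span the CSV-backed cold partition and the internal hot partition.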


🔹 Access Drivers Supported


Hybrid partitions work with all external table types using drivers like:

ORACLE_LOADER

ORACLE_DATAPUMP

ORACLE_HDFS

ORACLE_HIVE

ORACLE_BIGDATA


Users must have:

READ on data directories

WRITE on log/bad directories

EXECUTE on pre-processor directories.


🔹 Constraints & Query Rewrite


Because Oracle cannot enforce integrity on external data:


Primary/foreign/unique key constraints cannot be enforced


Only RELY DISABLE constraints are allowed


Enable optimizations by setting:

ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

or

STALE_TOLERATED;
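For example, a RELY constraint on a hybrid partitioned table might be declared like this (the table and constraint names are illustrative):

    ALTER TABLE sales_hybrid
      ADD CONSTRAINT sales_hybrid_pk PRIMARY KEY (sale_id)
      RELY DISABLE NOVALIDATE;

The constraint is not enforced, but with QUERY_REWRITE_INTEGRITY set to TRUSTED the optimizer can still use it for query rewrite.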


🔹 Performance Optimizations Available


Oracle still applies partition-based optimizations across internal + external partitions:

✔ Static Partition Pruning

✔ Dynamic Partition Pruning

✔ Bloom Pruning


This ensures queries only scan relevant partitions—even if some reside in object storage.


🔹 What You Can Do with HPT


Hybrid tables support many powerful operations:


Create RANGE and LIST hybrid partitions


ALTER TABLE operations: ADD / DROP / RENAME partitions


Change external data source locations


Convert internal tables → hybrid tables


Exchange partitions between:

internal ↔ internal

external ↔ external

internal ↔ external

Create:

Global partial non-unique indexes

Materialized views (with restrictions)

Full partition-wise refresh on external partitions
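A couple of these maintenance operations, as a hedged sketch (table, partition, and file names are illustrative):

    ALTER TABLE sales_hybrid ADD PARTITION sales_2024
      VALUES LESS THAN (DATE '2025-01-01')
      EXTERNAL LOCATION ('sales_2024.csv');

    ALTER TABLE sales_hybrid
      EXCHANGE PARTITION sales_2023 WITH TABLE sales_2023_stage;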


🔹 Restrictions You Must Know


Some limitations apply:

No unique or global unique indexes

No clustering clause

DML allowed only on internal partitions (external = read-only)

No LOB, LONG, ADT types

No column defaults, no invisible columns

No in-memory for external partitions

No SPLIT, MERGE, MOVE on external partitions

Only RELY constraints allowed


🎯 Why Hybrid Partitioned Tables Matter


HPT enables Oracle customers to build:

Tiered storage architectures

Cost-effective archiving

Unified access to data lakes and databases

Elastic, cloud-integrated data platforms


All while maintaining Oracle SQL performance features where they matter.


Friday, November 14, 2025

Introduction to Blocker Resolver in Oracle 26ai

Introduction to Blocker Resolver in Oracle 26ai

Blocker Resolver is an Oracle Real Application Clusters (Oracle RAC) environment feature that autonomously resolves delays and keeps the resources available.

Enabled by default, Blocker Resolver:

  • Reliably detects database delays and deadlocks
  • Autonomously resolves database delays and deadlocks
  • Logs all detections and resolutions
  • Provides SQL interface to configure sensitivity (Normal/High) and trace file sizes

A database delay occurs when one session blocks a chain of one or more other sessions. The blocking session holds a resource such as a lock or latch that prevents the blocked sessions from progressing. The chain of sessions has a root or final blocker session, which blocks all the other sessions in the chain.


Blocker Resolver resolves these issues autonomously by detecting and resolving the delays.

  • Blocker Resolver Architecture
    Blocker Resolver autonomously runs as a DIA0 task within the database.
  • Optional Configuration for Blocker Resolver
    You can adjust the sensitivity, and control the size and number of the log files used by Blocker Resolver.
  • Blocker Resolver Diagnostics and Logging
    Blocker Resolver autonomously resolves delays and continuously logs the resolutions in the database alert logs and the diagnostics in the trace files.

Blocker Resolver Architecture

Blocker Resolver autonomously runs as a DIA0 task within the database.


Blocker Resolver works in the following three phases:

  • Detect: In this phase, Blocker Resolver collects the data on all the nodes and detects the sessions that are waiting for the resources held by another session. 
  • Analyze: In this phase, Blocker Resolver analyzes the sessions detected in the Detect phase to determine if the sessions are part of a potential delay. If the sessions are suspected as delayed, Blocker Resolver then waits for a certain threshold time period to ensure that the sessions are delayed.
  • Verify: In this phase, after the threshold time period is up, Blocker Resolver verifies that the sessions are delayed and selects a session that's causing the delay.

After selecting the session that's causing the delay, Blocker Resolver applies resolution methods on that session. 

If the chain of sessions or the delay resolves automatically, then Blocker Resolver does not apply delay resolution methods. 

However, if the delay does not resolve by itself, then Blocker Resolver resolves the delay by terminating the session that's causing the delay. If terminating the session fails, then Blocker Resolver terminates the process of the session. This entire process is autonomous and does not block resources for a long period and does not affect the performance.


For example, if a high rank session is included in the chain of delayed sessions, then Blocker Resolver expedites the termination of the session that's causing the delay. 

Termination of the session that's causing the delay prevents the high rank session from waiting too long and helps to maintain performance objective of the high rank session.


Optional Configuration for Blocker Resolver

You can adjust the sensitivity, and control the size and number of the log files used by Blocker Resolver.


Note:

The DBMS_HANG_MANAGER package is deprecated in Oracle AI Database 26ai. Use DBMS_BLOCKER_RESOLVER instead. The DBMS_HANG_MANAGER package provides a method of changing some configuration parameters and constraints to address session issues.

This package is being replaced by DBMS_BLOCKER_RESOLVER; DBMS_HANG_MANAGER can be removed in a future release.


Sensitivity:

If Blocker Resolver detects a delay, then Blocker Resolver waits for a certain threshold time period to ensure that the sessions are delayed. Change the threshold time period by using DBMS_BLOCKER_RESOLVER to set the sensitivity parameter to either Normal or High. If the sensitivity parameter is set to Normal, then Blocker Resolver waits for the default time period. However, if the sensitivity is set to High, then the time period is reduced by 50%.

By default, the sensitivity parameter is set to Normal. To set Blocker Resolver sensitivity, run the following commands in SQL*Plus as the SYS user:

  • To set the sensitivity parameter to Normal:

    exec dbms_blocker_resolver.set(dbms_blocker_resolver.sensitivity, dbms_blocker_resolver.sensitivity_normal);

  • To set the sensitivity parameter to High:

    exec dbms_blocker_resolver.set(dbms_blocker_resolver.sensitivity, dbms_blocker_resolver.sensitivity_high);

Size of the Trace Log File:

The Blocker Resolver logs detailed diagnostics of the delays in the trace files with _base_ in the file name. Change the size of the trace files in bytes with the base_file_size_limit parameter. Run the following command in SQL*Plus, for example, to set the trace file size limit to 100 MB:

exec dbms_blocker_resolver.set(dbms_blocker_resolver.base_file_size_limit, 104857600);


Number of Trace Log Files

The base Blocker Resolver trace files are part of a trace file set. Change the number of trace files in the trace file set with the base_file_set_count parameter. Run the following command in SQL*Plus, for example, to set the number of trace files in the trace file set to 6:


exec dbms_blocker_resolver.set(dbms_blocker_resolver.base_file_set_count, 6);

By default, the base_file_set_count parameter is set to 5.


Blocker Resolver Diagnostics and Logging

Blocker Resolver autonomously resolves delays and continuously logs the resolutions in the database alert logs and the diagnostics in the trace files.

Blocker Resolver logs the resolutions in the database alert logs as Automatic Diagnostic Repository (ADR) incidents with incident code ORA–32701. 

You also get detailed diagnostics about the delay detection in the trace files. 


Trace files and alert logs have file names starting with the database instance name followed by _dia0_.

  • The trace files are stored in the $ADR_BASE/diag/rdbms/database name/database instance/incident/incdir_xxxxxx directory
  • The alert logs are stored in the $ADR_BASE/diag/rdbms/database name/database instance/trace directory


Example: Blocker Resolver Trace File for a Local Instance

This example shows the output you see for Blocker Resolver for the local database instance:


Trace Log File .../oracle/log/diag/rdbms/hm1/hm11/incident/incdir_111/hm11_dia0_11111_i111.trc

Oracle Database 19c Enterprise 64bit 

...

*** 2025-05-16T12:39:02.715475-07:00

HM: Hang Statistics - only statistics with non-zero values are listed


current number of active sessions 3

current number of hung sessions 1

instance health (in terms of hung sessions) 66.67%

number of cluster-wide active sessions 9

number of cluster-wide hung sessions 5

cluster health (in terms of hung sessions) 44.45%


*** 2025-05-16T12:39:02.715681-07:00

Resolvable Hangs in the System

                     Root       Chain Total               Hang
  Hang Hang          Inst Root  #hung #hung  Hang   Hang  Resolution
   ID  Type Status   Num  Sess   Sess  Sess  Conf   Span  Action
 ----- ---- -------- ---- ----- ----- ----- ------ ------ -----------------
     1 HANG RSLNPEND    3    44     3     5   HIGH GLOBAL Terminate Process

Hang Resolution Reason: Although hangs of this root type are typically
self-resolving, the previously ignored hang was automatically resolved.


Example: Error Message in the Alert Log Indicating a Delayed Session

This example shows a Blocker Resolver alert log on the primary instance:


2025-07-16T12:39:02.616573-07:00

Errors in file .../oracle/log/diag/rdbms/hm1/hm1/trace/hm1_dia0_i1111.trc  (incident=1111):

ORA-32701: Possible hangs up to hang ID=1 detected

Incident details in: .../oracle/log/diag/rdbms/hm1/hm1/incident/incdir_1111/hm1_dia0_11111_i1111.trc

2025-07-16T12:39:02.674061-07:00

DIA0 requesting termination of session sid:44 with serial # 23456 (ospid:34569) on instance 3

due to a GLOBAL, HIGH confidence hang with ID=1.

Hang Resolution Reason: Although hangs of this root type are typically self-resolving, the previously ignored hang was automatically resolved.

DIA0: Examine the alert log on instance 3 for session termination status of hang with ID=1.


Example: Error Message in the Alert Log Showing a Session Delay Resolved by Blocker Resolver

This example shows a Blocker Resolver alert log on the local instance for resolved delays:


2025-07-16T12:39:02.707822-07:00

Errors in file .../oracle/log/diag/rdbms/hm1/hm11/trace/hm11_dia0_11111.trc  (incident=169):

ORA-32701: Possible hangs up to hang ID=1 detected

Incident details in: .../oracle/log/diag/rdbms/hm1/hm11/incident/incdir_169/hm11_dia0_30676_i169.trc

2025-07-16T12:39:05.086593-07:00

DIA0 terminating blocker (ospid: 30872 sid: 44 ser#: 23456) of hang with ID = 1

requested by master DIA0 process on instance 1

Hang Resolution Reason: Although hangs of this root type are typically
self-resolving, the previously ignored hang was automatically resolved.

by terminating session sid:44 with serial # 23456 (ospid:34569)

...

DIA0 successfully terminated session sid:44 with serial # 23456 (ospid:34569) with status 0.


---------------------

Alireza Kamrani 

RDBMS Solution Architect

