Monday, November 24, 2025

The ORACLE_DATAPUMP Access Driver

The ORACLE_DATAPUMP Access Driver: Abilities and Applications in External Tables


The ORACLE_DATAPUMP access driver is a specialized component within Oracle's External Tables framework, enabling the use of Oracle Data Pump dump files as the data source or target for external tables. This driver is unique in its dual capability for both unloading (exporting) and loading (importing) data, offering a high-performance, parallelizable, and feature-rich mechanism for data movement within the Oracle Database ecosystem.
 
1. Core Applications: Unloading and Loading Data
The primary function of the ORACLE_DATAPUMP access driver is to facilitate the movement of data between database tables or between databases and Data Pump dump files via external tables.
 
1.1 Unloading Data (Export)
🔷️ Data is unloaded from a database table into a Data Pump dump file when an external table is created with the ORACLE_DATAPUMP driver as part of a SQL CREATE TABLE AS SELECT statement: the access driver writes the selected rows to the dump file, and the new external table points to that file.
The data in the file is written in a binary format that can be read only by the ORACLE_DATAPUMP access driver. Once the dump file is created, it cannot be modified (that is, no data manipulation language (DML) operations can be performed on it). However, the file can be read any number of times and used as the dump file for another external table in the same database or in a different database.

Example 1: Unloading Data and Creating an External Table
 
This example demonstrates creating an external table and unloading data from the inventories table into a dump file named inv_xt.dmp.
 
CREATE TABLE inventories_xt
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY def_dir1
    LOCATION ('inv_xt.dmp')
)
AS SELECT * FROM inventories;
 
-- Table created.
 
The resulting external table can be queried like any standard table.
 
Example 2: Querying and Verifying Unloaded Data
 
SQL> DESCRIBE inventories
Name
--------------------------------------------------------------
PRODUCT_ID
WAREHOUSE_ID
QUANTITY_ON_HAND
 
SQL> DESCRIBE inventories_xt
Name
-------------------------------------------------------------
PRODUCT_ID
WAREHOUSE_ID
QUANTITY_ON_HAND
 
SQL> SELECT COUNT(*) FROM inventories_xt;
 
COUNT(*)
----------
1112
 
SQL> SELECT * FROM inventories MINUS SELECT * FROM inventories_xt;
 
no rows selected
 
1.2 Loading Data (Import)
To load data, an external table is created to point to an existing Data Pump dump file, without the AS SELECT clause. This external table then acts as the source for loading data into a new database table using a SQL INSERT INTO...SELECT statement.

Example 3: Creating an External Table with an Existing Dump File
The dump file created for the external table can now be moved and used as the dump file for another external table in the same or a different database. Note that when you create an external table that uses an existing file, there is no AS SELECT clause in the CREATE TABLE statement.

CREATE TABLE inventories_xt2
(
    product_id NUMBER(6),
    warehouse_id NUMBER(3),
    quantity_on_hand NUMBER(8)
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY def_dir1
    LOCATION ('inv_xt.dmp')
);
 
-- Table created.
 
Example 4: Loading Data into a New Database Table
 
CREATE TABLE inventories_new
(
    product_id NUMBER(6),
    warehouse_id NUMBER(3),
    quantity_on_hand NUMBER(8)
);
 
-- Table created.
 
INSERT INTO inventories_new SELECT * FROM inventories_xt2; -- or use a database link to load from a remote database.
 
-- 1112 rows created.
 
SQL> SELECT COUNT(*) FROM inventories_new;
 
COUNT(*)
----------
1112

1.3 Parallelism and Combining Dump Files
The driver supports parallel operations, where the degree of parallelism for unloading is specified by the PARALLEL clause, and the number of dump files created equals the degree of parallelism. For loading, the degree of parallelism is determined by the number of dump files listed in the LOCATION clause.
 
Example 5: Parallel Unloading with Multiple Dump Files
 
CREATE TABLE inventories_xt3
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY def_dir1
    LOCATION ('inv_xt_1.dmp', 'inv_xt_2.dmp', 'inv_xt_3.dmp')
)
PARALLEL 3
AS SELECT * FROM inventories;
 
-- Table created.
 
Example 6: Combining Multiple Dump Files for Loading
 
CREATE TABLE inventories_xt7
(
    product_id NUMBER(6),
    warehouse_id NUMBER(3),
    quantity_on_hand NUMBER(8)
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY def_dir1
    LOCATION ('inv_xt_1.dmp', 'inv_xt_2.dmp', 'inv_xt_3.dmp')
);

2. Access Parameters (access_parameters Clause)

When you create an external table that uses the ORACLE_DATAPUMP access driver, you can specify certain parameters in an access_parameters clause. This clause is optional, as are its individual parameters; for example, you can specify LOGFILE but not VERSION, or vice versa.

Note
These access parameters are collectively referred to as the opaque_format_spec in the SQL CREATE TABLE...ORGANIZATION EXTERNAL statement.

The access_parameters clause allows fine-grained control over how the dump file is created and accessed. The individual parameters are described below; a combined example follows the list.
 
• Comments
The ORACLE_DATAPUMP access driver comments access parameter enables you to place comments with external tables
• COMPRESSION
The ORACLE_DATAPUMP access driver compression access parameter specifies whether and how data is compressed before the external table data is written to the dump file set.
• ENCRYPTION
The ORACLE_DATAPUMP access driver encryption access parameter specifies whether to encrypt data before it is written to the dump file set.
• LOGFILE | NOLOGFILE
The ORACLE_DATAPUMP access driver LOGFILE|NOLOGFILE access parameter specifies the name of the log file that contains any messages generated while the dump file was being accessed.
• VERSION Clause
The ORACLE_DATAPUMP access driver version clause access parameter enables you to specify generating a dump file that can be read with an earlier Oracle Database release.
• HADOOP_TRAILERS Clause
The ORACLE_DATAPUMP access driver provides a HADOOP_TRAILERS clause that specifies whether to write Hadoop trailers to the dump file.
• Effects of Using the SQL ENCRYPT Clause
Review the requirements and guidelines for external tables when you encrypt columns using the ORACLE_DATAPUMP access driver ENCRYPT clause.
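
As a combined illustration, here is a minimal sketch only: it assumes the def_dir1 directory object from the earlier examples plus a hypothetical employees source table and emp_xt.dmp file name; check your release's Utilities guide for the exact option spellings.

CREATE TABLE employees_xt
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY def_dir1
    ACCESS PARAMETERS
    (
        -- compress the data, log access messages, and target 12.2 readers
        COMPRESSION ENABLED
        LOGFILE emp_xt.log
        VERSION '12.2'
    )
    LOCATION ('emp_xt.dmp')
)
AS SELECT * FROM employees;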

2.1 The ENCRYPTION Access Parameter
Default: DISABLED
Purpose:
Specifies whether to encrypt data before it is written to the dump file set.

If ENABLED is specified, then all data is written to the dump file set in encrypted format.

If DISABLED is specified, then no data is written to the dump file set in encrypted format.

Restrictions:
This parameter is used only for export operations.

Example:
In the following example, the ENCRYPTION parameter is set to ENABLED. Therefore, all data written to the dept.dmp file will be in encrypted format.

CREATE TABLE deptXTec3
ORGANIZATION EXTERNAL (TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY def_dir1
ACCESS PARAMETERS (ENCRYPTION ENABLED) LOCATION ('dept.dmp'));

3. Data Type Support and Workarounds

3.1. Supported Data Types
The ORACLE_DATAPUMP access driver resolves many data types automatically during loads and unloads.
When you use external tables to move data between databases, you may encounter the following situations:
• The database character set and the database national character set may be different between the two platforms.
• The endianness of the platforms for the two databases may be different.

The ORACLE_DATAPUMP access driver automatically resolves some of these situations.

The following data types are automatically converted during loads and unloads:
• Character (CHAR, NCHAR, VARCHAR2, NVARCHAR2)
• RAW
• NUMBER
• Date/Time
• BLOB
• CLOB and NCLOB
• ROWID and UROWID
If you attempt to use a data type that is not supported for external tables, then you receive an error. This is demonstrated in the following example, in which the unsupported data type, LONG, is used:

SQL> CREATE TABLE bad_datatype_xt
  2  (
  3    product_id             NUMBER(6),
  4    language_id            VARCHAR2(3),
  5    translated_name        NVARCHAR2(50),
  6    translated_description LONG
  7  )
  8  ORGANIZATION EXTERNAL
  9  (
 10    TYPE ORACLE_DATAPUMP
 11    DEFAULT DIRECTORY def_dir1
 12    LOCATION ('proddesc.dmp')
 13  );
  translated_description LONG
                         *
ERROR at line 6:
ORA-30656: column type not supported on external organized table

3.2. Unsupported Data Types
You can use the ORACLE_DATAPUMP access driver to unload and reload data for some of the unsupported data types.
An external table supports a subset of all possible data types for columns. In particular, it supports character data types (except LONG), the RAW data type, all numeric data types, and all date, timestamp, and interval data types.

The unsupported data types for which you can use the ORACLE_DATAPUMP access driver to unload and reload data include the following:
• BFILE
• LONG and LONG RAW
• Final object types
• Tables of final object types
• Unloading and Loading BFILE Data Types
The BFILE data type has two pieces of information stored in it: the directory object for the file and the name of the file within that directory object.
• Unloading LONG and LONG RAW Data Types
The ORACLE_DATAPUMP access driver can be used to unload LONG and LONG RAW columns, but that data can only be loaded back into LOB fields (see the sketch after this list).
• Unloading and Loading Columns Containing Final Object Types
Final column objects are populated into an external table by moving each attribute in the object type into a column in the external table.
• Tables of Final Object Types
Object tables have an object identifier that uniquely identifies every row in the table.
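
As a hedged sketch of the LONG workaround (hypothetical names: a long_tab table with a LONG column text_col), the unload converts the LONG column to a CLOB with TO_LOB, so the dump file carries LOB data that can then be reloaded into a CLOB column:

-- Unload: convert the LONG column to a CLOB while writing the dump file.
CREATE TABLE long_tab_xt
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY def_dir1
    LOCATION ('long_tab.dmp')
)
AS SELECT id, TO_LOB(text_col) text_col FROM long_tab;

-- Reload: the target table declares the column as CLOB, not LONG.
CREATE TABLE long_tab_new (id NUMBER, text_col CLOB);
INSERT INTO long_tab_new SELECT * FROM long_tab_xt;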

4. Performance Hints and Restrictions

4.1 Performance Hints
To maximize performance when using the ORACLE_DATAPUMP access driver:
 
•  Parallelism: Use the PARALLEL clause and ensure the degree of parallelism matches the number of dump files in the LOCATION clause.
•  Compression: Utilize the COMPRESSION clause to reduce dump file size, which is particularly beneficial over slower networks.
•  Logging: Use LOGFILE for monitoring or NOLOGFILE to eliminate logging overhead if not needed.
 
When you monitor performance, the most important measurement is the elapsed time for a load. Other important measurements are CPU usage, memory usage, and I/O rates.

You can alter performance by increasing or decreasing the degree of parallelism. The degree of parallelism indicates the number of access drivers that can be started to process the data files. The degree of parallelism enables you to choose on a scale between slower load with little resource usage and faster load with all resources utilized. The access driver cannot automatically tune itself, because it cannot determine how many resources you want to dedicate to the access driver.
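
For instance (a sketch reusing the inventories_xt3 table from Example 5), the degree of parallelism of an existing external table can be adjusted with ALTER TABLE, trading resource usage against elapsed load time:

-- Scale down to minimize resource usage (slower load).
ALTER TABLE inventories_xt3 PARALLEL 1;

-- Scale back up when resources are available (faster load).
ALTER TABLE inventories_xt3 PARALLEL 3;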

An additional consideration is that the access drivers use large I/O buffers for better performance. On databases with shared servers, all memory used by the access drivers comes out of the system global area (SGA). For this reason, you should be careful when using external tables on shared servers.

4.2 Restrictions

The ORACLE_DATAPUMP driver is also subject to several restrictions. Most visibly, DML against the external table itself is not supported:

Example 7: Attempting DML on an External Table (Error)
 
SQL> DELETE FROM inventories_xt WHERE warehouse_id = 1;
DELETE FROM inventories_xt WHERE warehouse_id = 1
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table
Other restrictions include:
•  Dump File Format: The dump file must be in Oracle Data Pump format.
•  Directory Object: The directory object specified in DEFAULT DIRECTORY must exist, and the user must have READ and WRITE privileges on it.
•  No DDL on the Dump File: Data Definition Language (DDL) statements (ALTER, DROP) cannot be applied to the dump file itself.
•  No AS SELECT for Existing Files: When creating an external table that points to an existing dump file, the AS SELECT clause is prohibited.

 
🟥 Practical Scenario: Combining Dump Files in a Data Warehouse Environment

🔷️ Dump files populated by different external tables can all be specified in the LOCATION clause of another external table. For example, data from different production databases can be unloaded into separate files, and those files can then all be included in an external table defined in a data warehouse. This provides an easy way of aggregating data from multiple sources.

💠 The only restriction is that the metadata for all of the external tables must be exactly the same: the character set, time zone, schema name, table name, and column names must all match. The columns must also be defined in the same order, and their data types must be identical.

In the example below, this means that after you create the first external table you must drop it, so that the same table name can be reused for the second external table. The metadata recorded in the two dump files is then the same, and the files can be combined to build a single external table.

SQL> CREATE TABLE inv_part_1_xt
    ORGANIZATION EXTERNAL
    (
     TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY def_dir1
     LOCATION ('inv_p1_xt.dmp')
    )
   AS SELECT * FROM oe.inventories WHERE warehouse_id < 5;

SQL> DROP TABLE inv_part_1_xt;
-- we need only dumpfile.

SQL> CREATE TABLE inv_part_1_xt
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY def_dir1
      LOCATION ('inv_p2_xt.dmp')
    )
   AS SELECT * FROM oe.inventories WHERE warehouse_id >= 5;

-- Use both dump files from the previous steps to populate the inv_part_all_xt table:

SQL> CREATE TABLE inv_part_all_xt
    (
      PRODUCT_ID          NUMBER(6),
      WAREHOUSE_ID        NUMBER(3),
      QUANTITY_ON_HAND    NUMBER(8)
    )
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY def_dir1
      LOCATION ('inv_p1_xt.dmp','inv_p2_xt.dmp')
    );



SQL> SELECT * FROM inv_part_all_xt MINUS SELECT * FROM oe.inventories;

no rows selected

So two separate dump files can easily be merged into one full table in another database.

-------------------------------
Alireza Kamrani 
Oracle senior Solution Architect 

Saturday, November 22, 2025

How to Use Oracle Data Pump with Oracle RAC

How to Use Oracle Data Pump with Oracle RAC📚


Using Oracle Data Pump in an Oracle Real Application Clusters (Oracle RAC) environment requires you to perform a few checks to ensure that you are making cluster member nodes available.

📙To use Oracle Data Pump or external tables in an Oracle RAC configuration, you must ensure that the directory object path is on a cluster-wide file system.

The directory object must point to shared physical storage that is visible to, and accessible from, all instances where Oracle Data Pump or external tables processes (or both) can run.


📘The default Oracle Data Pump behavior is that child processes can run on any instance in an Oracle RAC configuration. 


Therefore, child processes on those Oracle RAC instances must have physical access to the location defined by the directory object, such as shared storage media. 


If the configuration does not have shared storage for this purpose, but you still require parallelism, then you can use the CLUSTER=NO parameter to constrain all child processes to the instance where the Oracle Data Pump job was started.
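
For example (a sketch with hypothetical schema and file names):

$ expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr%U.dmp PARALLEL=4 CLUSTER=NO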


📗Under certain circumstances, Oracle Data Pump uses parallel query child processes to load or unload data. 

In an Oracle RAC environment, Data Pump does not control where these child processes run. 


Therefore, these child processes can run on other cluster member nodes in the cluster, regardless of which instance is specified for CLUSTER and SERVICE_NAME for the Oracle Data Pump job. 


Controls for parallel query operations are independent of Oracle Data Pump. 

When parallel query child processes run on other instances as part of an Oracle Data Pump job, they also require access to the physical storage of the dump file set.

How do you limit Data Pump processes to a specific node in a RAC environment?

Consider using these options (a combined sketch follows the list):

- Use a dedicated database service for Data Pump

- Set PARALLEL_INSTANCE_GROUP

- Run impdp/expdp with CLUSTER=NO

- ALTER SYSTEM SET PARALLEL_FORCE_LOCAL=TRUE SID='ORCL1';

- Adjust Resource Manager
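
As a hedged sketch (assuming a database named ORCL, an instance ORCL1, and a hypothetical service name dp_svc), the service-based and parameter-based approaches might look like this:

# Create and start a service whose only preferred instance is ORCL1.
$ srvctl add service -db ORCL -service dp_svc -preferred ORCL1
$ srvctl start service -db ORCL -service dp_svc

# Pin the Data Pump job's child processes to the instances offering dp_svc.
$ expdp system SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr%U.dmp PARALLEL=4 CLUSTER=YES SERVICE_NAME=dp_svc

-- Keep parallel query slaves local to the instance they start on (SQL*Plus).
SQL> ALTER SYSTEM SET PARALLEL_FORCE_LOCAL=TRUE SID='ORCL1';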

Oracle Data Pump Option

DATA_OPTIONS:

The Oracle Data Pump Import command-line mode DATA_OPTIONS parameter designates how you want certain types of data to be handled during import operations.

Default:
There is no default. If this parameter is not used, then the special data handling options it provides simply do not take effect.

Purpose:
The DATA_OPTIONS parameter designates how you want certain types of data to be handled during import operations.

Syntax and Description:

✴️ DATA_OPTIONS = [DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS | REJECT_ROWS_WITH_REPL_CHAR | GROUP_PARTITION_TABLE_DATA | TRUST_EXISTING_TABLE_PARTITIONS | VALIDATE_TABLE_DATA | ENABLE_NETWORK_COMPRESSION | CONTINUE_LOAD_ON_FORMAT_ERROR]

CONTINUE_LOAD_ON_FORMAT_ERROR: Directs Oracle Data Pump to skip forward to the start of the next granule when a stream format error is encountered while loading table data.
Stream format errors typically are the result of corrupt dump files. If Oracle Data Pump encounters a stream format error, and the original export database is not available to export the table data again, then you can use CONTINUE_LOAD_ON_FORMAT_ERROR.
If Oracle Data Pump skips over data, then not all data from the source database is imported, which potentially skips hundreds or thousands of rows.
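
For example (a sketch reusing the hr table and dump file names from the examples later in this post):

$ impdp hr TABLES=employees DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=CONTINUE_LOAD_ON_FORMAT_ERROR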

DISABLE_APPEND_HINT: Specifies that you do not want the import operation to use the APPEND hint while loading the data object; when this option is not set, the default is to load data objects with the APPEND hint. Disabling the hint can be useful for addressing duplicate data, for example when there is a small set of data objects to load that already exist in the database and some other application might be concurrently accessing one or more of them.

DISABLE_STATS_GATHERING: Oracle Data Pump does not gather statistics on load by default. However, some environments, such as Oracle Autonomous Database, do gather statistics on load by default. When this parameter is used, statistics gathering is suspended during the import job.
The tradeoff for gathering statistics while loading data is incurring potentially significant overhead for the short-term benefit of having basic statistics gathered until you can gather full statistics on the table.

ENABLE_NETWORK_COMPRESSION: Used for network imports in which the Oracle Data Pump ACCESS_METHOD parameter is set to DIRECT_PATH to load remote table data.
When ENABLE_NETWORK_COMPRESSION is specified, Oracle Data Pump compresses data on the remote node before it is sent over the network to the target database, where it is decompressed.

This option is useful if the network connection between the remote and local database is slow, because it reduces the amount of data sent over the network.
Setting ACCESS_METHOD=AUTOMATIC enables Oracle Data Pump to set ENABLE_NETWORK_COMPRESSION automatically during the import if Oracle Data Pump uses DIRECT_PATH for a network import.

The ENABLE_NETWORK_COMPRESSION option is ignored if Oracle Data Pump is importing data from a dump file, if the remote database is earlier than Oracle Database 12c Release 2 (12.2), or if an INSERT_AS_SELECT statement is being used to load data from the remote database.
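
A network-import sketch (assuming a hypothetical database link src_link and the dpump_dir1 directory object):

$ impdp hr TABLES=employees NETWORK_LINK=src_link DIRECTORY=dpump_dir1 ACCESS_METHOD=DIRECT_PATH DATA_OPTIONS=ENABLE_NETWORK_COMPRESSION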

GROUP_PARTITION_TABLE_DATA: Tells Oracle Data Pump to import the table data in all partitions of a table as one operation. The default behavior is to import each table partition as a separate operation. If you know that the data for a partition will not move, then choose this parameter to accelerate the import of partitioned table data. There are cases when Oracle Data Pump attempts to load only one partition at a time. It does this when the table already exists, or when there is a risk that the data for one partition might be moved to another partition.
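
An import sketch (hypothetical dump file name):

$ impdp hr TABLES=hr.tab1 DUMPFILE=dpump_dir1:hr.dmp DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA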

REJECT_ROWS_WITH_REPL_CHAR: Specifies that you want the import operation to reject any rows that experience data loss because the default replacement character was used during character set conversion.
If REJECT_ROWS_WITH_REPL_CHAR is not set, then the default behavior is to load the converted rows with replacement characters.

SKIP_CONSTRAINT_ERRORS: Affects how non-deferred constraint violations are handled while a data object (table, partition, or subpartition) is being loaded. The option specifies that the import operation should proceed even if non-deferred constraint violations are encountered: it logs any rows that cause such violations, but it does not stop the load, and it prevents the entire data object from being rolled back. If SKIP_CONSTRAINT_ERRORS is not set, then the default behavior is to roll back the entire load of a data object on which non-deferred constraint violations are encountered.
Deferred constraint violations are unaffected by this option; they always cause the entire load to be rolled back.


TRUST_EXISTING_TABLE_PARTITIONS: Tells Data Pump to load partition data in parallel into existing tables.
Use this option when you are using Data Pump to create the table from the definition in the export database before the table data import is started. Typically, you use this parameter as part of a migration when the metadata is static, and you can move it before the databases are taken off line to migrate the data. Moving the metadata separately minimizes downtime. If you use this option, and if other attributes of the database are the same (for example, character set), then the data from the export database goes to the same partitions in the import database.

You can create the table outside of Oracle Data Pump. However, if you create tables as a separate option from using Oracle Data Pump, then the partition attributes and partition names must be identical to the export database.

Note:
This option can be used for import no matter the source version of the export.
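
A sketch of the typical migration flow (hypothetical table and dump file names; the partitioned table was pre-created from the export database's definition before the data-only import):

$ impdp hr TABLES=hr.sales CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:sales%U.dmp PARALLEL=8 DATA_OPTIONS=TRUST_EXISTING_TABLE_PARTITIONS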

VALIDATE_TABLE_DATA: Directs Oracle Data Pump to validate the number and date data types in table data columns.
If the import encounters invalid data, then an ORA-39376 error is written to the .log file. The error text includes the column name. The default is to do no validation. Use this option if the source of the Oracle Data Pump dump file is not trusted.

Restrictions:
• If you use DISABLE_APPEND_HINT, then it can take longer for data objects to load.
• If you use SKIP_CONSTRAINT_ERRORS, and if a data object has unique indexes or constraints defined on it at the time of the load, then the APPEND hint is not used for loading that data object. Therefore, loading such data objects can take longer when the SKIP_CONSTRAINT_ERRORS option is used.
• Even if SKIP_CONSTRAINT_ERRORS is specified, it is not used unless a data object is being loaded using the external table access method.

Example:
This example shows a data-only table mode import with SKIP_CONSTRAINT_ERRORS enabled:

$ impdp hr TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=skip_constraint_errors

If any non-deferred constraint violations are encountered during this import operation, then they are logged. The import continues on to completion.


For Export (expdp):

DATA_OPTIONS=
[GROUP_PARTITION_TABLE_DATA | VERIFY_STREAM_FORMAT]


GROUP_PARTITION_TABLE_DATA: Tells Oracle Data Pump to unload all table data in one operation rather than unload each table partition as a separate operation. As a result, the definition of the table will not matter at import time because Import will see one partition of data that will be loaded into the entire table.
VERIFY_STREAM_FORMAT: Validates the format of a data stream before it is written to the Oracle Data Pump dump file. The verification checks for a valid format for the stream after it is generated but before it is written to disk. This assures that there are no errors when the dump file is created, which in turn helps to assure that there will not be errors when the stream is read at import time.
Example:
This example shows an export operation in which data for all partitions of a table are unloaded together instead of the default behavior of unloading the data for each partition separately.
$ expdp hr TABLES=hr.tab1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp VERSION=11.2 DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA


Monday, November 17, 2025

Oracle Hybrid Partitioned Tables

🚀 Oracle Hybrid Partitioned Tables — The Power of Internal + External Data in One Table


Hybrid Partitioned Tables (HPT) in Oracle Database bring a game-changing capability:


Seamlessly combine internal table partitions with external partitions stored in Hadoop, Object Storage, Kafka, NoSQL, and more, all inside a single logical table.


This feature helps DBAs and architects build cost-optimized, high-scale, and flexible data architectures without sacrificing Oracle’s powerful partitioning features.


🔹 What Are Hybrid Partitioned Tables?


Hybrid partitioned tables merge:

Internal partitions → stored traditionally inside Oracle tablespaces

External partitions → stored outside the DB (HDFS, Object Storage, Parquet, ORC, CSV, HBase, Kafka, Oracle NoSQL, AWS S3, Azure, OCI, etc.)


This hybrid model is ideal when you want:

•Hot data inside Oracle (fast access, indexed, DML-capable)

•Cold/archived data in cheaper external storage

•A unified SQL interface over both
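
To make this concrete, here is a minimal sketch of a hybrid partitioned table (hypothetical names throughout; one cold partition reads a CSV file through the ORACLE_LOADER driver from an assumed sales_dir directory object, while the hot partition lives in a normal tablespace):

CREATE TABLE sales_hpt
(
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
EXTERNAL PARTITION ATTRIBUTES
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY sales_dir
  ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
  REJECT LIMIT UNLIMITED
)
PARTITION BY RANGE (sale_date)
(
  -- Cold data: read-only external partition backed by a CSV file.
  PARTITION p_2023 VALUES LESS THAN (DATE '2024-01-01')
    EXTERNAL LOCATION ('sales_2023.csv'),
  -- Hot data: regular internal partition, fully DML-capable.
  PARTITION p_2024 VALUES LESS THAN (DATE '2025-01-01')
);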


🔹 Access Drivers Supported


Hybrid partitions work with all external table types using drivers like:

ORACLE_LOADER

ORACLE_DATAPUMP

ORACLE_HDFS

ORACLE_HIVE

ORACLE_BIGDATA


Users must have:

READ on data directories

WRITE on log/bad directories

EXECUTE on pre-processor directories.


🔹 Constraints & Query Rewrite


Because Oracle cannot enforce integrity on external data:


Primary/foreign/unique key constraints cannot be enforced


Only RELY DISABLE constraints are allowed


Enable optimizations by setting:

ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

or

ALTER SESSION SET QUERY_REWRITE_INTEGRITY = STALE_TOLERATED;

🔹 Performance Optimizations Available


Oracle still applies partition-based optimizations across internal + external partitions:

✔ Static Partition Pruning

✔ Dynamic Partition Pruning

✔ Bloom Pruning


This ensures queries only scan relevant partitions—even if some reside in object storage.
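
For example (reusing the hypothetical sales_hpt table sketched earlier), a predicate on the partition key lets the optimizer prune the external partition entirely:

-- Only the internal p_2024 partition is scanned; the external
-- p_2023 partition is pruned away by the range predicate.
SELECT SUM(amount)
FROM   sales_hpt
WHERE  sale_date >= DATE '2024-06-01';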


🔹 What You Can Do with HPT


Hybrid tables support many powerful operations:


Create RANGE and LIST hybrid partitions


ALTER TABLE operations: ADD / DROP / RENAME partitions


Change external data source locations


Convert internal tables → hybrid tables


Exchange partitions between:

internal ↔ internal

external ↔ external

internal ↔ external

Create:

Global partial non-unique indexes

Materialized views (with restrictions)

Full partition-wise refresh on external partitions


🔹 Restrictions You Must Know


Some limitations apply:

No unique or global unique indexes

No clustering clause

DML allowed only on internal partitions (external = read-only)

No LOB, LONG, ADT types

No column defaults, no invisible columns

No in-memory for external partitions

No SPLIT, MERGE, MOVE on external partitions

Only RELY constraints allowed


🎯 Why Hybrid Partitioned Tables Matter


HPT enables Oracle customers to build:

Tiered storage architectures

Cost-effective archiving

Unified access to data lakes and databases

Elastic, cloud-integrated data platforms


All while maintaining Oracle SQL performance features where they matter.

