How To Set Up & Use DBMS_CLOUD for a non-Autonomous Database (Customer managed) in two basic models:
non-CDB & Pluggable databases
There are two traditional methods to take a dump, plus a newer one:
1- The expdp command
2- The DBMS_DATAPUMP package
3- New: the DBMS_CLOUD package
In this topic, I introduce an amazing package that is available on Oracle Cloud, and I demonstrate how to use it to take dumps in an on-premises environment, in both multitenant and non-CDB architectures.
DBMS_CLOUD has several main subprograms, but here I focus on taking Data Pump exports.
📬You know that from 21c onward the non-CDB architecture is no longer available; because many of us still work on 19c in both non-CDB and multitenant architectures, I review the installation steps for both.
DBMS_CLOUD Overview
Using DBMS_CLOUD to work with data in Object Storage, you can do the following:
- Manage your credentials for accessing Object Store (a credential-creation sketch follows this list).
- Copy data as-is: External data is loaded into the database in a single one-step operation, without the need to create any objects in the database. The load operation is monitored and tracked inside the database.
- Manage the information about your load operations.
- Validate the content of external data prior to consumption: verify the content and identify rows that do not conform to the expected external data format.
- Create an external table. Create the necessary object in the database for flexible and continuous consumption.
- Work with objects in object stores and directories.
- 📤Take an export from tables as an unloading method (in this post I focus on this ability)
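For the credential-management ability above, a minimal sketch looks like the following (the credential name, user name, and token are illustrative placeholders, not values from this post):
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',       -- illustrative name
    username        => 'oci_user@example.com', -- your cloud user
    password        => '<auth_token>'          -- an auth token, not the console password
  );
END;
/
You then pass this credential name to DBMS_CLOUD subprograms whenever they need to authenticate against the object store.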
Installing the DBMS_CLOUD Package
The DBMS_CLOUD package provides comprehensive support for working with data in Object Storage starting with Oracle Database 19c and later releases. However, this package is not pre-installed with Oracle Database. You need to install the DBMS_CLOUD package manually and also configure users or roles to use it.
For more information on installing the DBMS_CLOUD package and configuring users/roles, wallets, and so on, see the MOS note with Doc ID 2748362.1, or follow the main installation steps in this document.
This document covers the manual setup and configuration of DBMS_CLOUD and the proper configuration of users and roles for using DBMS_CLOUD.
🌠DBMS_CLOUD is pre-installed, configured and maintained in Oracle Autonomous Database.
🎆Important Note: DBMS_CLOUD package functionality may differ between Autonomous Database (Oracle managed) and non-Autonomous Database (Customer managed), and also between database versions.
For more information, see the non-Autonomous Database documentation for the database version you are using.
Manual installation steps
The following steps are required to manually install and enable DBMS_CLOUD. This package is supported in Oracle Database 19c beginning with 19.9 and in Oracle Database 21c beginning with 21.3, and in Oracle Database 23ai.
For Oracle Database 23ai Release 23.7 and above, please refer to the official documentation for how to install and configure the family of DBMS_CLOUD packages. Refer to
Installing DBMS_CLOUD in:
https://docs.oracle.com/en/database/oracle/oracle-database/23/sutil/dbms_cloud-family-packages.html
More info (details for 19c, 21c, 23ai, and 26ai):
📘The following is a step-by-step guide to installing DBMS_CLOUD on a non-Autonomous Database (Customer managed).
Steps:
- Create a schema that owns the DBMS_CLOUD package, and install the DBMS_CLOUD code in the CDB and all PDBs, or in your non-CDB, respectively
- Create the wallet containing the necessary certificates for accessing HTTP URIs and Object Stores
- Configure your Oracle environment to use the new SSL wallet
- Configure your database with ACEs for DBMS_CLOUD
- Verify the configuration of DBMS_CLOUD
Configuring users or roles to use DBMS_CLOUD
- Grant the minimal privileges to a user or role for using DBMS_CLOUD
- Configure ACEs for a user or role to use DBMS_CLOUD
- Verify the proper setup of your user or role for using DBMS_CLOUD (a combined sketch of these three steps follows this list)
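A minimal sketch of these steps might look like the following. The user name CLOUDUSER, its password, and the object store host are illustrative assumptions; adapt them to your environment (in a CDB, run the user part inside the target PDB):
-- 1) Minimal privileges for a user to call DBMS_CLOUD
create user clouduser identified by "MyStr0ngPassw0rd#1";
grant create session, create table, unlimited tablespace to clouduser;
grant read, write on directory DATA_PUMP_DIR to clouduser;
grant execute on C##CLOUD$SERVICE.DBMS_CLOUD to clouduser;
-- 2) ACE so the user can reach the object store endpoint over HTTPS
begin
  dbms_network_acl_admin.append_host_ace(
    host       => 'objectstorage.us-phoenix-1.oraclecloud.com', -- assumed endpoint
    lower_port => 443,
    upper_port => 443,
    ace        => xs$ace_type(privilege_list => xs$name_list('http'),
                              principal_name => 'CLOUDUSER',
                              principal_type => xs_acl.ptype_db));
end;
/
-- 3) Verify: connect as CLOUDUSER and, for example, list a bucket:
-- select * from table(dbms_cloud.list_objects('OBJ_STORE_CRED', 'https://...'));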
DBMS_CLOUD Considerations:
DBMS_CLOUD is owned by a separate C##CLOUD$SERVICE schema. This step creates the user and grants the appropriate privileges. The user is locked by default so that no direct connections can be made as this user.
- When you update to a newer RU that happens to contain a new DBMS_CLOUD deployment, you have to re-run the installation procedure on top of your existing installation. The installation is written to be idempotent, so you do not have to uninstall and re-install.
- Oracle is planning to maintain DBMS_CLOUD automatically in future patch sets, so it is mandatory to install DBMS_CLOUD into the schema C##CLOUD$SERVICE to avoid conflicts and problems in the future.
- Beginning with 19.15 (for 19c) and 21.3 (for 21c) respectively, you can also install DBMS_CLOUD in non-CDB environments.
Installation of DBMS_CLOUD
To ensure proper installation of DBMS_CLOUD into any existing and future pluggable databases (PDBs), you need to use catcon.pl. Wrap the following commands into a SQL script - e.g. dbms_cloud_install.sql - and execute it in your multitenant environment using catcon.pl as shown below to ensure the proper setup. For non-CDB environments, execute the same script in your non-CDB environment as SYS.
@$ORACLE_HOME/rdbms/admin/sqlsessstart.sql
set verify off
-- you must not change the owner of the functionality to avoid future issues
define username='C##CLOUD$SERVICE'
create user &username no authentication account lock;
REM Grant Common User Privileges
grant INHERIT PRIVILEGES on user &username to sys;
grant INHERIT PRIVILEGES on user sys to &username;
grant RESOURCE, UNLIMITED TABLESPACE, SELECT_CATALOG_ROLE to &username;
grant CREATE ANY TABLE, DROP ANY TABLE, INSERT ANY TABLE, SELECT ANY TABLE,
CREATE ANY CREDENTIAL, CREATE PUBLIC SYNONYM, CREATE PROCEDURE, ALTER SESSION, CREATE JOB to &username;
grant CREATE SESSION, SET CONTAINER to &username;
grant SELECT on SYS.V_$MYSTAT to &username;
grant SELECT on SYS.SERVICE$ to &username;
grant SELECT on SYS.V_$ENCRYPTION_WALLET to &username;
grant read, write on directory DATA_PUMP_DIR to &username;
grant EXECUTE on SYS.DBMS_PRIV_CAPTURE to &username;
grant EXECUTE on SYS.DBMS_PDB_LIB to &username;
grant EXECUTE on SYS.DBMS_CRYPTO to &username;
grant EXECUTE on SYS.DBMS_SYS_ERROR to &username;
grant EXECUTE ON SYS.DBMS_ISCHED to &username;
grant EXECUTE on SYS.DBMS_PDB to &username;
grant EXECUTE on SYS.DBMS_SERVICE to &username;
grant EXECUTE on SYS.CONFIGURE_DV to &username;
grant EXECUTE on SYS.DBMS_CREDENTIAL to &username;
grant EXECUTE on SYS.DBMS_RANDOM to &username;
grant EXECUTE on SYS.DBMS_SYS_SQL to &username;
grant EXECUTE on SYS.DBMS_LOCK to &username;
grant EXECUTE on SYS.DBMS_AQADM to &username;
grant EXECUTE on SYS.DBMS_AQ to &username;
grant EXECUTE on SYS.DBMS_SYSTEM to &username;
grant EXECUTE on SYS.SCHED$_LOG_ON_ERRORS_CLASS to &username;
grant SELECT on SYS.DBA_DATA_FILES to &username;
grant SELECT on SYS.DBA_EXTENTS to &username;
grant SELECT on SYS.DBA_CREDENTIALS to &username;
grant SELECT on SYS.AUDIT_UNIFIED_ENABLED_POLICIES to &username;
grant SELECT on SYS.DBA_ROLES to &username;
grant SELECT on SYS.V_$ENCRYPTION_KEYS to &username;
grant SELECT on SYS.DBA_DIRECTORIES to &username;
grant SELECT on SYS.DBA_USERS to &username;
grant SELECT on SYS.DBA_OBJECTS to &username;
grant SELECT on SYS.V_$PDBS to &username;
grant SELECT on SYS.V_$SESSION to &username;
grant SELECT on SYS.GV_$SESSION to &username;
grant SELECT on SYS.DBA_REGISTRY to &username;
grant SELECT on SYS.DBA_DV_STATUS to &username;
alter session set current_schema=&username;
REM Create the Catalog objects
@$ORACLE_HOME/rdbms/admin/dbms_cloud_task_catalog.sql
@$ORACLE_HOME/rdbms/admin/dbms_cloud_task_views.sql
@$ORACLE_HOME/rdbms/admin/dbms_cloud_catalog.sql
@$ORACLE_HOME/rdbms/admin/dbms_cloud_types.sql
REM Create the Package Spec
@$ORACLE_HOME/rdbms/admin/prvt_cloud_core.plb
@$ORACLE_HOME/rdbms/admin/prvt_cloud_task.plb
@$ORACLE_HOME/rdbms/admin/dbms_cloud_capability.sql
@$ORACLE_HOME/rdbms/admin/prvt_cloud_request.plb
@$ORACLE_HOME/rdbms/admin/prvt_cloud_internal.plb
@$ORACLE_HOME/rdbms/admin/dbms_cloud.sql
@$ORACLE_HOME/rdbms/admin/prvt_cloud_admin_int.plb
REM Create the Package Body
@$ORACLE_HOME/rdbms/admin/prvt_cloud_core_body.plb
@$ORACLE_HOME/rdbms/admin/prvt_cloud_task_body.plb
@$ORACLE_HOME/rdbms/admin/prvt_cloud_capability_body.plb
@$ORACLE_HOME/rdbms/admin/prvt_cloud_request_body.plb
@$ORACLE_HOME/rdbms/admin/prvt_cloud_internal_body.plb
@$ORACLE_HOME/rdbms/admin/prvt_cloud_body.plb
@$ORACLE_HOME/rdbms/admin/prvt_cloud_admin_int_body.plb
-- Create the metadata
@$ORACLE_HOME/rdbms/admin/dbms_cloud_metadata.sql
alter session set current_schema=sys;
@$ORACLE_HOME/rdbms/admin/sqlsessend.sql
Assuming you created a script named dbms_cloud_install.sql in a working directory dbc under the home directory /home/oracle, the command to install it in your CDB environment would look as follows:
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u sys/<your_sys_password> --force_pdb_mode 'READ WRITE' -b dbms_cloud_install -d /home/oracle/dbc -l /home/oracle/dbc dbms_cloud_install.sql
Assuming you created a script named dbms_cloud_install.sql in a working directory dbc under the home directory /home/oracle, the command to install it in your non-CDB environment would look as follows:
sqlplus sys/<your_sys_password> as sysdba @/home/oracle/dbc/dbms_cloud_install.sql
Please check the log files for any errors. As a quick check, you should see the package DBMS_CLOUD created and valid in both the ROOT and any PDB:
REM from within ROOT to see all containers
select con_id, owner, object_name, status, sharing, oracle_maintained from cdb_objects where object_name = 'DBMS_CLOUD' order by con_id;
REM within an individual container only
select owner, object_name, status, sharing, oracle_maintained from dba_objects where object_name = 'DBMS_CLOUD';
For non-CDB environments, use only the individual-container query above.
Note that the installation forces all pluggable databases to be opened for the installation of DBMS_CLOUD but retains the prior state of each PDB after installation, so the queries above will only show results for open pluggable databases.
If the installation logs show any errors and/or you have any invalid objects owned by C##CLOUD$SERVICE, you need to analyze and correct those.
Note: The DBMS_CLOUD installation relies on the default installation of Oracle, including existing public privileges. If you have modified the standard installation of Oracle, you might have to grant the following privileges explicitly to C##CLOUD$SERVICE:
grant EXECUTE on SYS.UTL_HTTP to &username;
grant EXECUTE on SYS.UTL_TCP to &username;
grant EXECUTE on SYS.UTL_FILE to &username;
Use this link for the Oracle 26ai install:
https://docs.oracle.com/en/database/oracle/oracle-database/26/sutil/dbms_cloud-family-packages.html
Create SSL Wallet with Certificates
In order to safely access HTTP URIs and Object Stores, a wallet is required with appropriate certificates for the object stores.
Currently, Oracle does not ship the certificates as part of RUs.
You can download the necessary certificates from https://objectstorage.us-phoenix-1.oraclecloud.com/p/KB63IAuDCGhz_azOVQ07Qa_mxL3bGrFh1dtsltreRJPbmb-VwsH2aQ4Pur2ADBMA/n/adwcdemo/b/CERTS/o/dbc_certs.tar
All Object Storage access is done through HTTPS and requires the database to have the certificates of all trusted locations. You need to create a security wallet containing the certificates.
- The wallet has to be created with auto login capabilities.
- On RAC installations, the wallet must either be accessible for all nodes centrally or you have to create the wallet on all nodes for local wallet storage.
The wallet can be stored at any location that is accessible by the OS user you installed the software as - typically oracle. It is recommended to manage the security wallet similarly to the TDE wallet provided by default for Cloud installations in Oracle OCI: by default, TDE wallets in Oracle Database Cloud installations are stored at /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME.
It is recommended to store the SSL wallet in an equivalent location, for example /opt/oracle/dcs/commonstore/wallets/ssl. Assuming you choose this location and have unpacked the certs in /home/oracle/dbc, you have to issue the following commands to create the necessary wallet.
If you already have a wallet for SSL certificates, you do not have to create a new wallet; rather, add the required certs to the existing wallet.
cd /opt/oracle/dcs/commonstore/wallets/ssl
orapki wallet create -wallet . -pwd <your_chosen_wallet_pw> -auto_login
#! /bin/bash
# add every certificate from the unpacked archive to the wallet
for i in <location of cert files>/*.cer
do
orapki wallet add -wallet . -trusted_cert -cert $i -pwd <SSL Wallet password>
done
You can check the proper addition of the certificates as follows:
cd /opt/oracle/dcs/commonstore/wallets/ssl
orapki wallet display -wallet .
Sample output for a wallet containing only the three certs required for dbms_cloud:
[oracle@hb19cee ssl]$ orapki wallet display -wallet .
Oracle PKI Tool Release 21.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2020, Oracle and/or its affiliates. All rights reserved.
Requested Certificates:
User Certificates:
Trusted Certificates:
Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G5,OU=(c) 2006 VeriSign\, Inc. - For authorized use only,OU=VeriSign Trust Network,O=VeriSign\, Inc.,C=US
Subject: CN=Baltimore CyberTrust Root,OU=CyberTrust,O=Baltimore,C=IE
Subject: CN=DigiCert Global Root CA,OU=www.digicert.com,O=DigiCert Inc,C=US
Configure your Oracle environment to use the new SSL wallet
To have your SSL wallet take effect, you need to point your Oracle installation to the newly created SSL wallet by adding it to your sqlnet.ora on the server side. If you are on a RAC installation, then you have to adjust this on all nodes.
For cloud installations without Grid infrastructure, the default location of this file is $ORACLE_HOME/network/admin.
For cloud installations with Grid infrastructure, the default location of this file is $GRID_HOME/network/admin.
If you already had a wallet for SSL certificates and added the certificates to the existing one then this step is not necessary.
Assuming you stored your wallet at the suggested location mentioned in the wallet setup section, your entry in sqlnet.ora would look as follows:
WALLET_LOCATION=
(SOURCE=(METHOD=FILE)(METHOD_DATA=
(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/ssl)))
There is no need to restart the listener or the database.
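As a quick sanity check, you can verify that the database now completes an HTTPS handshake using the wallet. This is a sketch, assuming the ACEs for the querying user are already in place and using an example endpoint:
select utl_http.request(
         url         => 'https://objectstorage.us-phoenix-1.oraclecloud.com/',
         wallet_path => 'file:/opt/oracle/dcs/commonstore/wallets/ssl')
from dual;
If the certificates are missing from the wallet, this call fails with a certificate validation error instead of returning a response body.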
A full installation of this package involves additional steps (such as ACL definition); you can find them on the official website.
Also see the MOS note with Doc ID 2778782.1 for troubleshooting this package.
📒One of the most useful abilities in DBMS_CLOUD is the EXPORT_DATA subprogram, which can take an export of your tables. In this section you will see how to apply this feature.
Take an Export Dump Using DBMS_CLOUD
Solution 1: Using Export pump API
To perform an export using DBMS_DATAPUMP, you must first create a directory object in your database, then use the DBMS_DATAPUMP.OPEN procedure to start an export job with the desired parameters, followed by DBMS_DATAPUMP.ADD_FILE to specify the dump file, and finally DBMS_DATAPUMP.START_JOB to run the job. Because this is the legacy approach, I show only a brief sketch below before moving on.
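A minimal sketch of the legacy API, exporting a single schema to DATA_PUMP_DIR (the schema name SCOTT and the job name are illustrative):
DECLARE
  h NUMBER;
BEGIN
  -- open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'LEGACY_EXP_JOB');
  -- write the dump file into an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'legacy_exp.dmp',
                         directory => 'DATA_PUMP_DIR');
  -- restrict the job to one schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => q'[IN ('SCOTT')]');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/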
Solution 2: Using DBMS_CLOUD.EXPORT_DATA
Export Data to a Directory as Oracle Data Pump Files
You can export data to a directory as Oracle Data Pump dump files by specifying a query.
With this export method you use the DBMS_CLOUD.EXPORT_DATA procedure to specify a query to select the data to export as a dump file to a directory.
Connect to your Autonomous AI Database instance.
Note: all of the following also works fine in Oracle 19c.
Create a directory.
For example:
CREATE DIRECTORY export_dir AS 'export_dir';
Export data from Autonomous AI Database to your directory as Oracle Data Pump dump file(s) with DBMS_CLOUD.EXPORT_DATA and specify the format parameter type as datapump.
For example:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.dmp',
format => json_object('type' value 'datapump'),
query => 'SELECT * FROM sales'
);
END;
/
Example to export data as multiple Data Pump files to a directory:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales1.dmp, export_dir:sales2.dmp',
format => json_object('type' value 'datapump'),
query => 'SELECT * FROM sales'
);
END;
/
The parameters are:
file_uri_list: a comma-delimited list of the export file(s). Use of wildcard and substitution characters is not supported in the file_uri_list.
format: specifies the required type parameter. The valid values are datapump, json, xml, csv and parquet, and it also optionally defines the options you can specify for the export with the ORACLE_DATAPUMP Access Driver (see the sketch after this list).
query: specifies a SELECT statement so that only the required data is exported. The query determines the contents of the dump file(s).
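The format value can be supplied with json_object, as in the examples above, or as a plain JSON string. For instance, a CSV export with a custom delimiter and a header row might use the fragment below (delimiter and header are documented DBMS_CLOUD CSV format options; check the documentation for your version):
format => '{"type" : "csv", "delimiter" : "|", "header" : true}',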
Note:
The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) that you specify in the file_uri_list.
Notes for exporting data with DBMS_CLOUD.EXPORT_DATA:
- The provided directory must exist, and you must be logged in as the ADMIN user or have WRITE access to the directory.
- The procedure does not overwrite files. If a dump file in the file_uri_list exists, DBMS_CLOUD.EXPORT_DATA reports an error such as:
ORA-31641: unable to create dump file "/u02/exports/123.dmp" or ORA-27038: created file already exists
- DBMS_CLOUD.EXPORT_DATA does not create directories.
- The directory name is case-sensitive when enclosed in double quotes.
- The number of dump files that DBMS_CLOUD.EXPORT_DATA generates is determined when the procedure runs. The number of dump files that are generated depends on the number of file names you provide in the file_uri_list parameter, as well as on the number of ECPUs available to the instance, the service level, and the size of the data.
For example, if you use a 2 ECPU Autonomous AI Database instance or the low service, then a single dump file is exported with no parallelism, even if you provide multiple file names. If you use an 8 ECPU Autonomous AI Database instance with the medium or high service, then the jobs can run in parallel and multiple dump files are exported if you provide multiple file names.
- The query parameter value that you supply can be an advanced query, if required, such as a query that includes joins or subqueries.
- The dump files you create with DBMS_CLOUD.EXPORT_DATA cannot be imported using Oracle Data Pump impdp. Depending on the database, you can use these files as follows:
- On an Autonomous AI Database, you can use the dump files with the DBMS_CLOUD procedures that support the format parameter type with the value 'datapump'. You can import the dump files using DBMS_CLOUD.COPY_DATA or you can call DBMS_CLOUD.CREATE_EXTERNAL_TABLE to create an external table.
- 🧩On any other Oracle Database, such as Oracle Database 19c on-premises, you can import the dump files created with the procedure DBMS_CLOUD.EXPORT_DATA using the ORACLE_DATAPUMP access driver, as sketched below. To see more info about the abilities of export access drivers in Oracle, you can see my previous post in the "Database Box" group on LinkedIn.
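A sketch of reading such a dump file on 19c through an external table; the column list must match the exported query (the columns here are assumptions for the sales example above):
CREATE TABLE sales_ext (
  prod_id NUMBER,
  amount  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY export_dir
  LOCATION ('sales.dmp')
);
-- then query it like a regular table: select * from sales_ext;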
Use the link below to find out more about how to take exports from tables in Oracle, such as Export Data as CSV:
https://docs.oracle.com/en/cloud/paas/autonomous-database/dedicated/ndosu/
More samples of taking exports:
Sample 1: take an export from the emp table
Create a directory for the output:
$ mkdir -p /exports/csv_dump
create directory dir1 as '/exports/csv_dump';
grant read,write on directory dir1 to public;
begin
dbms_cloud.export_data(
file_uri_list => 'dir1:emp.csv',
format => json_object('type' value 'csv'),
query => 'select * from emp where id < 50'
);
end;
/
Sample 2: take an export from a large table
When dealing with tables containing millions of rows, exporting everything at once can cause memory issues or create unwieldy files.
Solution:
Export in chunks using FETCH and OFFSET:
declare
v_chunk_size constant pls_integer := 1000; -- 1000 rows per file
v_offset number := 0;
v_batch number := 1;
v_total_rows number;
begin
-- Calculate total rows to determine loop iterations
select count(*) into v_total_rows from emp;
while v_offset < v_total_rows loop
dbms_cloud.export_data(
credential_name => null,
file_uri_list => 'dir1:emp_' || to_char(v_batch, 'fm000') || '.csv',
format => json_object('type' value 'csv'),
-- an ORDER BY keeps OFFSET/FETCH chunks deterministic and non-overlapping
query => 'select * from emp order by id offset ' || v_offset || ' rows fetch next ' || v_chunk_size || ' rows only');
v_offset := v_offset + v_chunk_size;
v_batch := v_batch + 1;
end loop;
end;
/
This approach creates multiple files (emp_001.csv, emp_002.csv, etc.), each containing up to the chunk size of 1,000 rows. You can adjust the chunk size based on your needs.
If you are on 19c, you can consume files like these through external tables: the ORACLE_LOADER access driver for CSV files, or the ORACLE_DATAPUMP access driver for dump files. A sketch follows.
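A sketch of an external table over the chunked CSV files from Sample 2; the emp column list is an assumption, and LOCATION should list as many chunk files as were produced:
create table emp_chunks_ext (
  id   number,
  name varchar2(100)
)
organization external (
  type oracle_loader
  default directory dir1
  access parameters (
    records delimited by newline
    fields terminated by ',' optionally enclosed by '"'
  )
  location ('emp_001.csv', 'emp_002.csv')
)
reject limit unlimited;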
Sample 3: adding compression
For even better efficiency, especially when dealing with text-heavy data or transferring files over networks, add gzip compression.
Add the following to your format parameter:
format => json_object('type' value 'csv',
'compression' value 'gzip'),
Compression is useful:
- For tables with millions of rows
- When file size limitations exist
- To enable parallel processing of exported data
- To minimize memory usage during export
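Putting it together, a complete compressed CSV export of the emp table would look like this sketch (following Sample 1, with the compression option added; the output file name is illustrative):
begin
  dbms_cloud.export_data(
    file_uri_list => 'dir1:emp.csv',
    format => json_object('type' value 'csv',
                          'compression' value 'gzip'),
    query => 'select * from emp'
  );
end;
/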