Sunday, September 28, 2025

Partitioning and LOB Data in Oracle


Partitioned Tables

Most tables can be partitioned.

Any table can be partitioned, into up to a million separate partitions, except tables containing columns with LONG or LONG RAW data types. Tables containing columns with CLOB or BLOB data types can, however, be partitioned.

Note: To reduce disk and memory usage (specifically, the buffer cache), you can store tables and partitions of a partitioned table in a compressed format inside the database. This often improves scaleup for read-only operations. Table compression can also speed up query execution. There is, however, a slight cost in CPU overhead.
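As a sketch of the compression note above (table, column, and partition names are hypothetical; basic table compression via the COMPRESS keyword is included with the database, while advanced compression is separately licensed):

```sql
-- Hypothetical sketch: a range-partitioned table stored in compressed format.
-- Basic table compression (COMPRESS) suits data that is bulk-loaded and
-- rarely updated.
CREATE TABLE sales_history (
   sale_id   NUMBER,
   sale_date DATE,
   amount    NUMBER(10,2)
)
COMPRESS
PARTITION BY RANGE (sale_date) (
   PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01'),
   PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```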

When to Partition a Table

Consider partitioning a table in the following situations:

  • Tables that are greater than 2 GB.

These tables should always be considered as candidates for partitioning.

  • Tables that contain historical data, in which new data is added into the newest partition.

A typical example is a historical table where only the current month's data is updatable and the other 11 months are read only.

  • Tables whose contents must be distributed across different types of storage devices.
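The historical-data and tiered-storage cases above could be sketched as follows (table, partition, and tablespace names are hypothetical; the READ ONLY partition attribute assumes Oracle Database 12.2 or later):

```sql
-- Hypothetical sketch: monthly range partitions, with the closed month on
-- cheaper archival storage and marked read-only, and the current month on
-- fast storage and updatable.
CREATE TABLE orders_hist (
   order_id   NUMBER,
   order_date DATE,
   amount     NUMBER(10,2)
)
PARTITION BY RANGE (order_date) (
   PARTITION p_2025_08 VALUES LESS THAN (DATE '2025-09-01')
      TABLESPACE tbs_archive READ ONLY,
   PARTITION p_2025_09 VALUES LESS THAN (DATE '2025-10-01')
      TABLESPACE tbs_fast
);
```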


Benefits of Partitioning

Partitioning can provide tremendous benefit to a wide variety of applications by improving performance, manageability, and availability.

It is not unusual for partitioning to greatly improve the performance of certain queries or maintenance operations. Moreover, partitioning can greatly simplify common administration tasks.

Partitioning also enables database designers and administrators to solve some difficult problems posed by cutting-edge applications. Partitioning is a key tool for building multi-terabyte systems or systems with extremely high availability requirements.

Partitioning and LOB Data in Oracle

Unstructured data, such as images, videos, and documents, is often stored in LOB (Large Object) columns within an Oracle database. Oracle not only supports partitioning for relational (structured) data but also provides mechanisms to partition and manage LOB data more effectively. This capability helps organizations improve performance, manageability, and storage costs when dealing with large volumes of binary or multimedia content.

Storing LOBs in Separate Tablespaces

When a table is partitioned in Oracle, all of its columns typically reside in the same tablespace for that partition. However, LOB columns have the unique ability to be stored in separate tablespaces, independent of the main row data.

This separation becomes highly beneficial in cases where tables contain large LOBs. For example:

  • Main row data (employee details such as name, address, and department) may be updated frequently.
  • LOB data (such as an employee’s photo) is far less likely to change.

By storing the photo (BLOB) in a separate tablespace, updates to the main employee record remain lightweight and efficient, avoiding unnecessary I/O overhead from the larger LOB data.

Additionally, this strategy enables tiered storage optimization:

  • Fast, expensive storage can be allocated to the transactional row data that changes frequently.
  • Cheaper, slower storage can be used for LOB data that is accessed less often.

This separation reduces costs while maintaining high performance for critical workloads.

Oracle SecureFiles and Partitioned LOBs

Modern Oracle implementations recommend the use of SecureFiles over BasicFiles for managing LOBs. SecureFiles not only provide performance improvements but also include advanced features such as:

  • Compression – Reduces storage requirements.
  • Deduplication – Prevents redundant LOB storage.
  • Encryption – Secures sensitive unstructured data.
  • Partitioned LOBs – Distributes LOBs across multiple partitions for scalability.

Partitioned SecureFiles enable administrators to organize large object data more intelligently, aligning storage with business and performance needs.
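A minimal DDL sketch of these SecureFiles options (table and column names are hypothetical; COMPRESS and DEDUPLICATE require the Advanced Compression option, and ENCRYPT requires Transparent Data Encryption to be configured):

```sql
CREATE TABLE doc_store (
   doc_id  NUMBER,
   content BLOB
)
LOB (content) STORE AS SECUREFILE (
   COMPRESS MEDIUM      -- reduce storage footprint
   DEDUPLICATE          -- store identical LOB values only once
   -- ENCRYPT           -- uncomment once a wallet/TDE is configured
);
```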

 

LOB Partitioning Strategies


Oracle supports multiple partitioning methods for tables containing LOB columns, which can also extend to the LOB data itself. Choosing the right partitioning strategy depends on the data distribution and query patterns:

  1. Range Partitioning
    • LOBs are distributed based on a range of values (e.g., partitioning employee photos by hire date).
    • Useful for time-based queries and archiving older data.
  2. List Partitioning
    • LOBs are grouped into partitions based on discrete values (e.g., file type: images, PDFs, or audio).
    • Provides logical grouping for specific categories of data.
  3. Hash Partitioning
    • LOBs are evenly distributed across partitions using a hashing algorithm.
    • Prevents data skew and balances I/O workload across partitions.
  4. Composite Partitioning
    • Combines methods (e.g., range-hash) for greater control.
    • Example: Partition employee documents by year (range) and distribute them evenly across storage (hash).
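The composite (range-hash) example in item 4 could be sketched as follows (all names are hypothetical):

```sql
-- Hypothetical sketch: employee documents partitioned by year (range),
-- with each year's rows hashed across 4 subpartitions to spread I/O.
CREATE TABLE emp_documents (
   emp_id   NUMBER,
   doc_date DATE,
   doc      BLOB
)
LOB (doc) STORE AS SECUREFILE
PARTITION BY RANGE (doc_date)
SUBPARTITION BY HASH (emp_id) SUBPARTITIONS 4 (
   PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
   PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01'),
   PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```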

Benefits of Partitioning LOB Data

  • Performance Optimization – Queries scan only relevant partitions instead of the entire dataset.
  • Parallelism – Partitioned LOBs enable Oracle to perform operations across multiple partitions in parallel.
  • Improved Manageability – Partitions can be archived, backed up, or purged independently.
  • Cost Efficiency – Large, less frequently accessed LOB partitions can be placed on cheaper storage tiers.
  • Reduced Contention – Partition-level operations minimize locking conflicts during updates.

Use Case Example

Consider an HR application:

  • Employee details (address, job title, department) change frequently.
  • Employee documents (contracts, scanned IDs, certificates) are stored as BLOBs but rarely updated.

By storing the structured data in a partitioned table and separating the LOB columns into their own partitioned SecureFiles storage, the application achieves:

  • Faster updates to personnel details.
  • Efficient storage for documents.
  • The ability to move older, rarely accessed documents into archival storage tiers without disrupting the main HR system.

1. LOB Storage in Separate Tablespaces

  • When you create or alter a partitioned table, you can specify the LOB storage clause (STORE AS) separately.
  • You can assign each LOB (or partitioned LOB) to its own tablespace, which can differ from the tablespace of the main row data.

2. Compression for LOBs

  • Oracle BasicFiles LOBs (the older LOB storage format) do not support compression, deduplication, or encryption.
  • Oracle SecureFiles LOBs (introduced in 11g) support:
    • COMPRESS – Reduces storage footprint.
    • DEDUPLICATE – Avoids redundant copies.
    • ENCRYPT – Provides data security.
  • Compression operates at the LOB segment level, not on the row data.

3. Example: Partitioned Table with LOBs in Compressed Tablespace

CREATE TABLE employee_docs
(
   emp_id        NUMBER,
   emp_name      VARCHAR2(100),
   hire_date     DATE,
   resume        BLOB
)
-- LOB storage clause: resume BLOBs go to their own compressed tablespace
LOB (resume) STORE AS SECUREFILE resume_seg
(
   TABLESPACE tbs_lob_compressed
   ENABLE STORAGE IN ROW
   COMPRESS HIGH
   CHUNK 8192
   RETENTION AUTO
)
-- Row data is range-partitioned by hire date on fast storage
PARTITION BY RANGE (hire_date)
(
   PARTITION p1 VALUES LESS THAN (DATE '2020-01-01') TABLESPACE tbs_fast,
   PARTITION p2 VALUES LESS THAN (DATE '2023-01-01') TABLESPACE tbs_fast,
   PARTITION p3 VALUES LESS THAN (MAXVALUE) TABLESPACE tbs_fast
);

4. Benefits

  • Performance isolation: OLTP operations on main table data are not slowed by large LOB I/O.
  • Storage optimization: Compressed LOBs in a dedicated tablespace reduce storage usage.
  • Flexibility: You can move, archive, or back up LOB tablespaces separately.

Conclusion

Unstructured data, such as images and documents, stored in LOB columns in the database can also be partitioned.

When a table is partitioned, all of the columns reside in the tablespace for that partition, except LOB columns, which can be stored in their own tablespace.

Oracle’s support for partitioning LOB data represents a powerful solution for enterprises managing large volumes of unstructured data. By separating LOBs into dedicated partitions and leveraging SecureFiles features such as compression, deduplication, and encryption, organizations can achieve better scalability, performance, and cost optimization.

As multimedia and unstructured data continue to grow in enterprise systems, LOB partitioning combined with SecureFiles provides the flexibility to balance fast transactional performance with efficient long-term data storage, ensuring both technical efficiency and business value.

Wednesday, September 24, 2025

Oracle Global Data Services (GDS)

Oracle Global Data Services (GDS): Automated Workload Management for Replicated Databases (Video Included)

1. Introduction to Global Data Services (GDS)

Oracle Global Data Services (GDS) provides Automated Workload Management for Replicated Databases. It is designed to extend the concept of services to database replicas.

Key GDS Capabilities:

  • Automatic and transparent client workload management across replicas.
  • Workload routing based on load, locality, or replication lag.
  • Inter-database Service failover across replicas.
  • Load balancing, including connect-time and run-time capabilities.
  • Centralized service management across replicas.

GDS databases must be Oracle Database EE 12.1+. They can be Single Instance or RAC, CDB or Non-CDB, and run on engineered systems (like Oracle Exadata) or commodity hardware. Importantly, GDS requires licensing for either Oracle Active Data Guard or Oracle GoldenGate.

2. Benefits of Oracle GDS

Implementing GDS helps maximize your application performance and mitigate downtime during planned and unplanned outages.

  • High Availability & Downtime Mitigation – GDS provides automatic failover of client workload to another datacenter to mitigate unplanned outages, enabling continuous availability. For planned maintenance, GDS allows transparent workload movement, achieving zero downtime (for example, during patching, as noted in a case study). GDS supports Global Service failovers for both Oracle GoldenGate and Active Data Guard.

  • Performance & Load Balancing – Applications use GDS to maximize performance. It routes workload based on load, locality, or lag. For Active/Active Oracle GoldenGate configurations, GDS can route all workloads to the nearest and best database in the client's region (Region Affinity). GDS provides Connect-time Load Balancing (CLB) and Run-time Load Balancing (RLB) advisories to clients, supporting intelligent load balancing even across asymmetrical database servers.

  • Resource Utilization – GDS solves the challenge of unbalanced replicas by providing automatic and transparent client workload management. It enables better resource utilization of replicas, allowing workloads to be balanced on reader farms (e.g., an Active Data Guard or GoldenGate reader farm) for improved scalability.

  • Manageability & Cost – GDS allows administrators to manage the resources of replicas through one interface, providing centralized management of database services across replicas. GDS is a cost-effective solution because it is included with Active Data Guard and Oracle GoldenGate.

 

3. Workload Management Challenges Addressed by GDS

Before GDS, managing workloads across replicas presented significant challenges:

  • Workload Balance – Replicated environments often suffer from no automated load balancing, leading to unbalanced resources and sub-optimal resource utilization across data centers.

  • Service Failover – Lack of Global Service Failover results in application outages when replicas fail, meaning there is no Service HA (High Availability).

  • General HA Deployment – Challenges associated with deploying highly available systems include the risk of failure, lack of skills, and high cost and complexity.

 

 

4. High-Level GDS Deployment and Usage

GDS is managed using the GDSCTL CLI or Enterprise Manager DB Plug-in.

A. GDS Deployment Steps (High Level):

  1. Install GSM software on GSM servers (minimum of 1, recommended 3 GSMs per region).
  2. Pre-create the GDS catalog database.
  3. Setup GDS Administrator accounts & privileges (e.g., granting gsmadmin_role to the admin user on the GDS Catalog database).
  4. Configure GDS using GDSCTL commands such as create catalog, add gsm, add gdspool, and add database. (Note: For Data Guard environments, add brokerconfig should be used instead of add database).
  5. Define and start Global Services using GDSCTL (e.g., add service -service sales_qry_srvc -gdspool sales -preferred db01 -available db02).
    • Services can be configured with attributes based on placement (-preferred, -available, -preferred_all), role (-role PRIMARY or PHYSICAL_STANDBY), lag tolerance (-lag 180), locality (LOCAL_ONLY, ANYWHERE), and load balancing goals (-clbgoal, -rlbgoal).
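A hypothetical GDSCTL session sketching the steps above (connect strings, host names, pool, region, and database names are all placeholders; exact option syntax varies by release):

```
GDSCTL> create catalog -database cathost:1521:gdscat -user mygdsadmin -region siteA,siteB
GDSCTL> add gsm -gsm gsm1 -catalog cathost:1521:gdscat -region siteA
GDSCTL> start gsm -gsm gsm1
GDSCTL> add gdspool -gdspool sales
GDSCTL> add brokerconfig -connect primhost:1521/db01 -gdspool sales -region siteA
GDSCTL> add service -service sales_qry_srvc -gdspool sales -preferred db01 -available db02
GDSCTL> start service -service sales_qry_srvc -gdspool sales
```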

 

B. Client Connectivity and Application Requirements:

Applications must be GDS-Ready by adhering to specific configuration requirements:

  • Use Oracle Integrated Connection Pools/Drivers (OCI, JDBC, ODP.NET, WebLogic).
  • TNS entries must include the GSM Listener end points and specific parameters such as CONNECT_TIMEOUT, RETRY_COUNT, RETRY_DELAY, and TRANSPORT_CONNECT_TIMEOUT.
  • For locality-based routing, specify the client’s REGION in the connection URL or TNS entry.
  • Applications must subscribe to FAN events (published by GDS via ONS) by enabling Fast Connection Failover (FCF) to proactively handle instance events.
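A hypothetical GDS-ready tnsnames.ora entry illustrating these requirements (hosts, ports, timeout values, and region name are placeholders; global service names take the form service.pool.oradbcloud):

```
SALES_QRY_SRVC =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 90)(RETRY_COUNT = 30)(RETRY_DELAY = 3)
    (TRANSPORT_CONNECT_TIMEOUT = 3)
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsm1-siteA)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsm2-siteA)(PORT = 1522)))
    (CONNECT_DATA =
      (SERVICE_NAME = sales_qry_srvc.sales.oradbcloud)
      (REGION = siteA)))
```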

 

 

What is a GDS pool?

A Global Data Services pool (GDS pool) is a fundamental component of the Oracle Global Data Services (GDS) architecture, serving to group a collection of databases for administrative and service management purposes.

Here is a comprehensive breakdown of what a GDS pool is, based on the sources:

Definition and Purpose

  • A GDS pool is a named subset of databases located within a larger GDS configuration.
  • The databases in a GDS pool provide a unique set of global services.
  • A GDS pool belongs to the same administrative domain.

Role in Management and Security

  • Partitioning the databases in a GDS configuration into pools simplifies service management.
  • GDS pools provide higher security by allowing each pool to be administered by a different administrator.
  • When configuring GDS, the GDSCTL command-line utility is used to add a GDS pool (e.g., add gdspool -gdspool sales).

Rules and Constraints

  • A database can only belong to a single Global Data Services pool.
  • While all databases in a pool do not necessarily need to provide the same set of global services, all databases that do provide the same global service must belong to the same pool.
  • A Global Data Services region—a set of databases and clients sharing network proximity—can contain multiple GDS pools, and these pools can span multiple regions.
  • For configurations involving Oracle Data Guard broker, only an entire broker configuration can be included in (or deleted from) a GDS pool, and a broker configuration cannot span multiple pools.
    • A pool that contains a Data Guard configuration cannot have databases that do not belong to that configuration.
    • The set of databases in a GDS pool must be either the set of databases that belong to a single broker configuration or a set of databases that do not belong to a broker configuration.

Usage Examples

The sources frequently use GDS pools (such as a "Sales GDS Pool" or "HR GDS Pool") in command examples to define where global services should run:

  • When defining a global service, you must specify the pool it belongs to, along with its preferred and available databases. For example:

GDSCTL> add service -service sales_qry_srvc -gdspool sales -preferred db01 -available db02

  • One large company utilizes approximately 260 GDS Pools within their consolidated Oracle database environment.

List three GDS configuration topics.

The following three topics are central to configuring and managing an Oracle Global Data Services (GDS) environment:

  1. Defining the GDS Configuration Components (Pools and Regions): A GDS configuration requires administrators to logically organize databases based on function and physical location. This involves defining:
    • Global Data Services Pools: A named subset of databases within the configuration that provides a unique set of global services and belongs to the same administrative domain. All databases providing the same global service must belong to the same pool. Configuration involves using GDSCTL commands like add gdspool.
    • Global Data Services Regions: A named subset of databases and clients that share network proximity (low network latency), usually corresponding to a local area network. A region may contain multiple GDS pools. Configuration involves adding regions (e.g., create catalog -region siteA, siteB).
  2. Centralized Configuration Metadata (The GDS Catalog): The GDS environment relies on a central repository to store its definitions. The Global Data Services catalog is the repository that stores configuration data for the GDS configuration and all global services it provides.
    • A catalog is associated with only one GDS configuration.
    • The catalog must reside in an Oracle Database 12c or later database.
    • Configuration steps include pre-creating the GDS catalog database and then using GDSCTL to create catalog.
  3. Defining Global Services and Their Attributes: A Global Data Services configuration is primarily defined by the global services it offers, which are functionally similar to local database services but span multiple databases. Configuring these services dictates workload management behavior:
    • Global services have a set of attributes that control starting the services, load balancing connections, and failing over those connections.
    • Attributes unique to global services include preferred or available databases, replication lag tolerance, and region affinity.
    • Configuration involves defining and starting these services using GDSCTL, specifying the service name, the GDS pool, and placement (e.g., -preferred db01 -available db02).

 

What Oracle database versions are supported?

Based on the sources, Oracle Global Data Services (GDS) is supported on specific Oracle Database versions:

  • The databases participating in the GDS configuration (the GDS databases) must be Oracle Database EE 12.1+ (Enterprise Edition 12.1 or later).
  • More broadly, database versions earlier than Oracle Database 12c can provide local services, but only Oracle Database 12c, and later, can provide global services.
  • The Global Data Services catalog—the repository that stores configuration data—must reside in an Oracle Database 12c or later database.

Additionally, the sources mention Oracle Database 21c in the context of architecture:

  • The multitenant container database (CDB) is the only supported architecture in Oracle Database 21c.

The GDS databases can be Single Instance or Oracle Real Application Clusters (Oracle RAC), and they can be CDB (Container Database) or Non-CDB.

  

List three unique global service attributes.

The sources explicitly identify the attributes that are unique to global services compared to traditional local database services.

Here are three unique global service attributes:

  1. Preferred or Available Databases: Global services span the instances of multiple databases. Administrators must specify which databases support the global service, known as preferred databases. If a preferred database fails, the Global Service Manager (GSM) maintains the service cardinality by relocating the service to an available database. This attribute defines service placement across the entire Global Data Services (GDS) pool.
    • Example options include: -preferred dbname_list, -available dbname_list, or -preferred_all (implying all databases in the pool are preferred).
  2. Replication Lag (Lag Tolerance): This attribute establishes an application's tolerance for non-current data. When configured, GDS uses this setting to route requests only to replica databases that are not lagging behind the primary database by longer than the specified maximum acceptable lag value (in seconds).
    • For applications that require real-time data, the lag time can be set to zero.
    • This specification is supported for Active Data Guard configurations.
    • Example option: -lag {lag_value | ANY} (e.g., -lag 180).
  3. Region Affinity (Locality): This attribute controls where the service operates relative to the clients, enabling geographical affinity between clients and databases. The Global Service Manager (GSM) uses region affinity to route client connections based on network proximity.
    • Any-Region Affinity (the default) routes a client request to the best database regardless of region, giving preference to a local region if databases are equally loaded.
    • Affinity to a Local Region (LOCAL_ONLY) ensures that GDS will not route to databases in other regions, regardless of load.
    • Affinity to a Local Region with Interregion Failover (LOCAL_ONLY -region_failover) attempts to use the local region first, but if all databases in the local region fail, the request is forwarded to a database in another region instead of being denied.
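The locality attributes above map roughly onto GDSCTL add service options, as in this sketch (service and pool names are hypothetical):

```
GDSCTL> add service -service sales_read_srvc -gdspool sales -preferred_all -locality LOCAL_ONLY -region_failover
```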

 



Tuesday, September 23, 2025

Dynamic Statistics vs. Dynamic Sampling

 


Effect of changing the OPTIMIZER_DYNAMIC_SAMPLING parameter in Oracle

When is it needed, and what are the challenges?

Controlling Dynamic Statistics

By default, when optimizer statistics are missing, stale, or insufficient, dynamic statistics automatically run recursive SQL during parsing to scan a small random sample of table blocks.

About Dynamic Statistics Levels

The dynamic statistics level controls both when the database gathers dynamic statistics, and the size of the sample that the optimizer uses to gather the statistics.

Set the dynamic statistics level using either the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter or a statement hint.

Note:

Dynamic statistics were called dynamic sampling in releases earlier than Oracle Database 12c Release 1 (12.1).

The following table describes the levels for dynamic statistics. Note the following:

  • If dynamic statistics are not disabled, then the database may choose to use dynamic statistics when a SQL statement uses parallel execution.
  • If the OPTIMIZER_ADAPTIVE_FEATURES initialization parameter is true, then the optimizer uses dynamic statistics when relevant SQL plan directives exist.

The database maintains the resulting statistics in the server result cache, making them available to other queries.

Table: Dynamic Statistics Levels

  • Level 0 – Do not use dynamic statistics. Sample size: n/a.

  • Level 1 – Use dynamic statistics for all tables that do not have statistics, but only if all of the following criteria are met: at least one nonpartitioned table in the query does not have statistics; that table has no indexes; and that table has more blocks than the number of blocks that would be used for dynamic statistics of the table. Sample size: 32 blocks.

  • Level 2 – Use dynamic statistics if at least one table in the statement has no statistics. This is the default value. Sample size: 64 blocks.

  • Level 3 – Use dynamic statistics if at least one table in the statement has no statistics, or the statement has one or more expressions used in the WHERE clause predicates (for example, WHERE SUBSTR(CUSTLASTNAME,1,3)). Sample size: 64 blocks.

  • Level 4 – Use dynamic statistics if at least one table in the statement has no statistics, the statement has one or more expressions used in the WHERE clause predicates (for example, WHERE SUBSTR(CUSTLASTNAME,1,3)), or the statement uses complex predicates (an OR or AND operator between multiple predicates on the same table). Sample size: 64 blocks.

  • Levels 5 through 10 – The criteria are identical to level 4, but the database uses larger sample sizes: 128, 256, 512, 1024, and 4086 blocks for levels 5–9 respectively, and all blocks at level 10.

  • Level 11 – The database uses adaptive dynamic sampling automatically when the optimizer deems it necessary. Sample size: automatically determined.

 

Setting Dynamic Statistics Levels Manually

Determining a database-level setting that would be beneficial to all SQL statements can be difficult.

When setting the level for dynamic statistics, Oracle recommends setting the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter at the session level.

 

Assumptions

This tutorial assumes the following:

  • You want correct selectivity estimates for the following query, which has WHERE clause predicates on two correlated columns:

SELECT *
FROM   sh.customers
WHERE  cust_city = 'Los Angeles'
AND    cust_state_province = 'CA';

  • The preceding query uses serial processing.
  • The sh.customers table contains 932 rows that meet the conditions in the query.
  • You have gathered statistics on the sh.customers table.
  • You created an index on the cust_city and cust_state_province columns.
  • The OPTIMIZER_DYNAMIC_SAMPLING initialization parameter is set to the default level of 2.

To set the dynamic statistics level manually:

  1. Connect SQL*Plus to the database with the appropriate privileges, and then explain the execution plan as follows:

EXPLAIN PLAN FOR

  SELECT * FROM   sh.customers  WHERE  cust_city='Los Angeles'

 AND    cust_state_province='CA';

  2. Query the plan as follows:

SET LINESIZE 130

SET PAGESIZE 0

SELECT * FROM   TABLE(DBMS_XPLAN.DISPLAY);

The output appears below (the example has been reformatted to fit on the page):

-------------------------------------------------------------------------------------------
| Id | Operation                    | Name                | Rows | Bytes | Cost  | Time     |
-------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |                     |   53 |  9593 | 53(0) | 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID | CUSTOMERS           |   53 |  9593 | 53(0) | 00:00:01 |
|* 2 |   INDEX RANGE SCAN           | CUST_CITY_STATE_IND |   53 |  9593 |  3(0) | 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

    2 - access("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA')

 

The columns in the WHERE clause have a real-world correlation, but the optimizer is not aware that Los Angeles is in California and assumes both predicates reduce the number of rows returned.

Thus, the table contains 932 rows that meet the conditions, but the optimizer estimates only 53, as shown in the Rows column of the plan.

If the database had used dynamic statistics for this plan, then the Note section of the plan output would have indicated this fact.

The optimizer did not use dynamic statistics because the statement executed serially, standard statistics exist, and the parameter OPTIMIZER_DYNAMIC_SAMPLING is set to the default of 2.

  3. Set the dynamic statistics level to 4 in the session using the following statement:

ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING=4;

  4. Explain the plan again:

EXPLAIN PLAN FOR

 SELECT * FROM   sh.customers

 WHERE  cust_city='Los Angeles'  AND    cust_state_province='CA';

The new plan shows a more accurate cardinality estimate, 932 rows:

PLAN_TABLE_OUTPUT

---------------------------------------------------------------------------

Plan hash value: 2008213504

 ---------------------------------------------------------------------------

| Id  | Operation         | Name      |Rows | Bytes |Cost (%CPU)|Time     |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT  |           | 932 |   271K|   406 (1)| 00:00:05 |

|*  1 |  TABLE ACCESS FULL| CUSTOMERS | 932 |   271K|   406 (1)| 00:00:05 |

---------------------------------------------------------------------------

 Predicate Information (identified by operation id):

---------------------------------------------------

    1 - filter("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA')

 Note

   - dynamic statistics used for this statement (level=4)

The note at the bottom of the plan indicates that the sampling level is 4.

The additional dynamic statistics made the optimizer aware of the real-world relationship between the cust_city and cust_state_province columns, thereby enabling it to produce a more accurate estimate for the number of rows: 932 rather than 53.

 

When might a DBA change OPTIMIZER_DYNAMIC_SAMPLING (to tune or override)?

A DBA usually considers changing OPTIMIZER_DYNAMIC_SAMPLING only in specific scenarios, not as a general practice:

Common Reasons to Increase It

  1. Missing or stale statistics
    • If tables don’t have stats and DBMS_STATS cannot be run (e.g., transient staging tables, external tables).
    • Example: ETL jobs loading into temp tables, then queried immediately.
  2. Complex predicates not well estimated by normal stats
    • Multi-column correlated predicates
    • Complex functions or expressions in WHERE clause that lack function-based indexes.
  3. Star schema or data warehouse queries
    • Joins between large fact and dimension tables where stats are often not enough.
  4. Temporary tables in batch jobs
    • Since stats are often not gathered, dynamic sampling gives better plans.

 

Common Reasons to Decrease It

  1. High parse-time overhead
    • Dynamic sampling runs extra queries behind the scenes, which increases parse time.
    • This is bad for OLTP systems with many short-running queries.
  2. Stable environment with good stats
    • If you already have well-maintained statistics, dynamic sampling is usually unnecessary and just adds overhead.
  3. Predictability of plans
    • Dynamic sampling can cause query plans to change at parse time depending on sample results, which reduces plan stability.

 

Best Practices for DBAs

  • Don’t change it globally unless you really need to.
    Instead, use hints, SQL Profiles, or SQL Plan Management (SPM) for specific queries:
    • The DYNAMIC_SAMPLING(n) hint in SQL.
  • Keep statistics up to date with DBMS_STATS so you don’t need high dynamic sampling levels.
  • Use higher levels (like 4, 6, 11) only for queries with uncertain stats (DW, ad-hoc queries).
  • For OLTP, stick with defaults (0–2).
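As a statement-level alternative to the parameter, the DYNAMIC_SAMPLING hint mentioned in the best practices above can be applied per query; a sketch using the sh.customers query from the earlier tutorial:

```sql
-- Request level-4 dynamic sampling for this statement only, leaving the
-- session and instance settings untouched.
SELECT /*+ DYNAMIC_SAMPLING(4) */ *
FROM   sh.customers
WHERE  cust_city = 'Los Angeles'
AND    cust_state_province = 'CA';
```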

 
