Wednesday, February 18, 2026

Oracle 26ai: Automatic Storage Compression

Oracle Compression in 26ai:

Are you tired of choosing between fast direct loads and the massive space savings of Hybrid Columnar Compression (HCC)? 

Oracle Database 26ai introduces a game-changer: Automatic Storage Compression (ASC).


https://www.linkedin.com/feed/update/urn:li:groupPost:8151826-7429626918938791936


ASC solves the classic dilemma: 

HCC’s compression overhead can slow down high-volume direct loads. 

With ASC, you get the speed of uncompressed direct loads first, and the space efficiency of HCC later, all automatically in the background.


ASC allows you to benefit from two traditionally opposing advantages simultaneously:


  1. The World of Speed (Fast Direct Loads): When you load a large amount of data into a database, you want the process to be as fast as possible. Compression, especially a high-ratio compression like Hybrid Columnar Compression (HCC), adds overhead that can slow down the initial data loading process. ASC initially loads the data uncompressed to achieve the fastest possible direct load performance.
  2. The World of Efficiency (Massive Space Savings): Once the data is loaded, you want it to take up the least amount of disk space and be optimized for analytical queries. HCC provides superior space savings and fast analytics. ASC automatically and gradually compresses the data into the HCC format in the background after the initial load is complete and the data is no longer being actively modified.


How it Works: The Intelligent Background Process

Instead of forcing compression during the load, ASC uses a smart, two-phase approach:


Phase 1: Speed: Data is direct-loaded into an uncompressed format, ensuring maximum load performance.


Phase 2: Space: After a user-specified DML inactivity threshold is met, a background Automatic Compression AutoTask gradually moves and HCC-compresses the data.
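You can see how recently a segment was written, which is what the inactivity threshold is presumably evaluated against, by querying the Heat Map data that drives the AutoTask. A minimal sketch, assuming Heat Map is enabled and using the SCOTT.MYTAB table from the example below:

SELECT object_name, segment_write_time, segment_read_time
FROM dba_heat_map_segment
WHERE owner = 'SCOTT' AND object_name = 'MYTAB';
-- A segment_write_time older than the inactivity threshold makes the
-- segment a candidate for background HCC compression.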


This is a huge improvement over the manual ILM process, which often required full segment rebuilds and complex policy management.


Prerequisites for Activation

To enable this feature, ensure your environment meets these simple requirements:


  • Set `HEAT_MAP=ON` in your Pluggable Database (PDB).


  • The table must be specified for HCC and reside on a tablespace using `SEGMENT SPACE MANAGEMENT AUTO` and `AUTOALLOCATE`.
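A minimal setup sketch covering both prerequisites; the tablespace name, datafile path, and table definition are illustrative assumptions, not part of the original example:

-- Enable Heat Map in the PDB
ALTER SYSTEM SET heat_map = ON SCOPE = BOTH;

-- Tablespace with ASSM and AUTOALLOCATE extent management
CREATE TABLESPACE data_ts
  DATAFILE '/u01/oradata/pdb1/data_ts01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

-- Table specified for HCC (requires supported storage, e.g. Exadata)
CREATE TABLE scott.mytab (
  id   NUMBER,
  data VARCHAR2(100)
) TABLESPACE data_ts
  COMPRESS FOR QUERY LOW;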


Step-by-Step Usage Example

Here’s how easy it is to enable and monitor this feature:


1. Enable Automatic Storage Compression at the PDB level:


SQL> exec dbms_ilm_admin.enable_auto_optimize


2. Check the initial compression state (Uncompressed = 1):


SELECT UNIQUE dbms_compression.get_compression_type('SCOTT', 'MYTAB', rowid)

FROM scott.mytab;

-- Result: 1 (Uncompressed)

3. Monitor the uncompressed size before compression starts:


SELECT bytes/1024/1024 MB

FROM dba_segments

WHERE owner = 'SCOTT' AND segment_name = 'MYTAB';

-- Result: 5.625 (MB)

4. Monitor the background compression progress:


The `v$sysstat` view tracks the data moved and compressed by the background task.


SELECT name, value

FROM v$sysstat

WHERE name LIKE 'Auto compression data%';


Sample statistics before and after the background task runs:

Name                                       Value (initial)   Value (after compression)
Auto compression data movement success     0                 0
Auto compression data movement failure     0                 0
Auto compression data moved                0                 6 (MB)


5. Check the final compression state (HCC Query Low = 8) and size:


SELECT UNIQUE dbms_compression.get_compression_type('SCOTT', 'MYTAB', rowid)

FROM scott.mytab;

-- Result: 8 (HCC Query Low)


SELECT bytes/1024/1024 MB

FROM dba_segments

WHERE owner = 'SCOTT' AND segment_name = 'MYTAB';

-- Result: .3125 (MB)


From 5.625 MB uncompressed to 0.3125 MB compressed, an 18x reduction: a massive space saving, achieved without sacrificing load performance!


Using this Feature in OLTP vs. OLAP Environments


The Automatic Storage Compression (ASC) feature in Oracle 26ai is primarily designed for OLAP (Online Analytical Processing) environments, or more specifically, Data Warehousing environments, which are characterized by:


  • High-volume, bulk data loading (Direct Loads): This is where the initial speed benefit of ASC comes into play.
  • Large, mostly static data sets: This is where the massive space savings and analytical performance benefits of Hybrid Columnar Compression (HCC) are most valuable.


Why it’s best for OLAP/Data Warehousing

ASC is a direct solution to a problem common in data warehouses: the trade-off between fast data loading and HCC’s compression overhead.


  • Fast Loading: Data is loaded quickly in an uncompressed state.
  • Space/Query Efficiency: The data is then automatically converted to HCC, which is optimized for read-heavy analytical queries and provides superior compression ratios.
  • DML Inactivity Threshold: The compression only happens after a period of DML (Data Manipulation Language) inactivity, which is typical for historical or read-only partitions in a data warehouse.


Why it’s less suited for OLTP

OLTP (Online Transaction Processing) environments are characterized by frequent, small, and concurrent transactions (inserts, updates, deletes).


  • Compression Type: OLTP environments typically use Advanced Row Compression (`ROW STORE COMPRESS ADVANCED`, the successor to the older `COMPRESS FOR ALL OPERATIONS` syntax), which is optimized for transactional workloads and allows DML on compressed blocks. HCC, which ASC uses, is generally not recommended for tables with high DML activity because it can lead to block splits and reduced performance. (A short DDL sketch contrasting the two follows this list.)


  • Inactivity Requirement: ASC’s core mechanism relies on a “DML inactivity threshold” before compression begins. This threshold is rarely met in a high-activity OLTP system, meaning the data would likely remain uncompressed, defeating the purpose of the feature.
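For contrast, here is a minimal, illustrative sketch of the two table-level options discussed above (the table names and columns are hypothetical):

-- OLTP-friendly: Advanced Row Compression supports conventional DML
-- on compressed blocks
CREATE TABLE orders_oltp (
  order_id NUMBER,
  status   VARCHAR2(20)
) ROW STORE COMPRESS ADVANCED;

-- Warehouse-friendly: HCC favors direct loads and analytics, not
-- frequent single-row DML (requires supported storage, e.g. Exadata)
CREATE TABLE orders_hist (
  order_id NUMBER,
  status   VARCHAR2(20)
) COMPRESS FOR QUERY LOW;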


Summary

Automatic Storage Compression in Oracle 26ai is a significant step forward for data warehousing and large-scale data ingestion. 


It ensures you get the best of both worlds: blazing-fast direct loads and the superior space efficiency of Hybrid Columnar Compression.


ASC is a powerful automation feature that maximizes the benefits of Hybrid Columnar Compression, making it a highly useful and relevant feature for OLAP and Data Warehousing environments.

*********************************************

Sunday, February 15, 2026

Oracle RAC 26ai New Feature


Joining Inaccessible Nodes After a Forced Cluster Upgrade (Oracle Grid Infrastructure 26ai)

With Oracle Grid Infrastructure 26ai, cluster lifecycle management becomes more flexible, especially when dealing with partially upgraded or temporarily inaccessible nodes.

In earlier releases, if a node was unreachable during a cluster upgrade, the only practical option was often to delete and reconfigure the node. In 26ai, Oracle introduces a cleaner and safer mechanism to rejoin inaccessible nodes after a forced cluster upgrade.

 

Scenario: Forced Cluster Upgrade Completed

You performed a force cluster upgrade, and one or more nodes were inaccessible during the process.

Instead of:

  • Deleting the node
  • Cleaning OCR references
  • Re-adding the node from scratch

You can now rejoin the node directly, provided that:

Oracle Grid Infrastructure 26ai software is already installed on the node.

 

🛠 Procedure: Join an Inaccessible Node

Step 1 – Log in as root

On the node that was inaccessible:

ssh root@inaccessible_node

 

Step 2 – Change to Grid Infrastructure Home

cd $GRID_HOME

Example:

cd /u01/app/26.0.0/grid

 

Step 3 – Run the Join Command

Use the following syntax:

./rootupgrade.sh -join -existingnode upgraded_node

Where:

  • upgraded_node = A cluster node that was successfully upgraded
  • The script synchronizes cluster metadata and configuration

 

Example:

./rootupgrade.sh -join -existingnode node1

This command:

  • Reintegrates the node into the cluster
  • Syncs OCR configuration
  • Aligns voting disk and cluster registry metadata
  • Avoids full node reconfiguration

 

Changing the First Node for Installation or Upgrade

Cluster installation/upgrade designates a first node that initializes cluster configuration.

But what if the first node becomes inaccessible?


During Installation

If root.sh fails on the first node:

Run this on another node:

root.sh -force -first

This forces the new node to assume the role of the first node for installation.

 

During Upgrade

If the first node fails during upgrade:

rootupgrade.sh -force -first

This command:

  • Overrides first-node designation
  • Continues upgrade process from another node
  • Prevents rollback or cluster restart requirement

 

Architecture Impact

In large RAC environments:

  • Rolling upgrades are common
  • Network partitions can occur
  • Temporary node failures are realistic

 

With 26ai:

  • No need to delete/recreate nodes
  • Less downtime risk
  • Better operational continuity
  • Simplified recovery from partial upgrades

This is particularly valuable in:

  • Exadata environments
  • Extended clusters
  • Multi-site RAC with Data Guard

 

 Important Notes

  • The node must already have 26ai Grid binaries installed
  • Ensure cluster interconnect and voting disks are reachable
  • Verify CRS status after join:

crsctl check cluster -all

  • Always validate cluster health post-operation:

olsnodes -n

crsctl stat res -t

 

 Summary

Oracle Grid Infrastructure 26ai significantly improves cluster resilience by allowing:

  • Rejoining inaccessible nodes after forced upgrades
  • Forcing a new first node during install or upgrade

This eliminates the painful delete-and-readd cycle from previous releases and reduces operational complexity in production RAC environments.

 

Tuesday, February 10, 2026

Logging Options in Oracle for Data Guard



When preparing a primary Oracle Database for use with Oracle Data Guard, enabling the correct logging mode is critical to ensure data integrity and that all changes are shipped to standby databases correctly. The default logging behavior in a standalone database is not suitable for a Data Guard configuration because certain data-load operations can be performed without writing to redo logs, which causes missing data on the standby.  


Why Logging Mode Matters


In Oracle, redo logs are the source of truth for Data Guard replication. Without proper logging:

  • NOLOGGING operations on the primary may not produce redo records.
  • Standbys will miss these changes unless corrected manually.
  • This can compromise failover and recovery operations.


To avoid this, Oracle provides specific logging modes that control how changes are recorded.  
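As a quick diagnostic for the first point, V$DATAFILE records the SCN and time of the most recent unlogged (unrecoverable) operation per datafile, so you can spot changes that never produced redo. Run on the primary:

SELECT file#, unrecoverable_change#, unrecoverable_time
FROM v$datafile
WHERE unrecoverable_time IS NOT NULL;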


Available Logging Modes for Data Guard


You must set the logging mode on the primary database to one of the following, depending on your Data Guard strategy:


1. FORCE LOGGING


  • Ensures all database changes are logged, regardless of whether an operation uses the NOLOGGING attribute.
  • Prevents unlogged operations that could lead to missing redo.
  • Ideal for maximum data protection environments.
  • Can impact performance for bulk loads, since redo is always generated.


ALTER DATABASE FORCE LOGGING;


The database must be at least mounted, and this operation can take time as it waits for all unlogged direct writes to finish.  


2. STANDBY NOLOGGING FOR DATA AVAILABILITY


A special logging mode where:

  • Loaded data is sent synchronously to all connected standbys.
  • Commit waits until the standby(s) have applied the data.
  • Useful when you want to balance load performance with data protection.


ALTER DATABASE SET STANDBY NOLOGGING FOR DATA AVAILABILITY;


This mode optimizes throughput while still guaranteeing data is shipped to standbys before commit.  


3. STANDBY NOLOGGING FOR LOAD PERFORMANCE


This mode prioritizes load performance, allowing:

  • Fast load operations with asynchronous delivery to standbys.
  • Standbys may temporarily miss some data but will fetch it automatically later during managed recovery.


ALTER DATABASE SET STANDBY NOLOGGING FOR LOAD PERFORMANCE;


This is ideal for high-volume data loads where network bandwidth or latency might be a bottleneck, and delayed propagation is acceptable.  
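To confirm which of these modes is currently in effect, query V$DATABASE; the FORCE_LOGGING column reports the standby nologging modes as well:

SELECT force_logging FROM v$database;
-- Expected values: NO, YES,
-- STANDBY NOLOGGING FOR DATA AVAILABILITY, or
-- STANDBY NOLOGGING FOR LOAD PERFORMANCE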


Notes and Considerations

  • Database State: All the above commands require the database to be at least mounted (it can also be open).
  • I/O Delay: Changing the logging mode may take significant time, as Oracle flushes unlogged direct I/O.
  • Active Data Guard: When using STANDBY NOLOGGING modes, standby databases employing multi-instance redo apply can stop with ORA-10892. You may need to restart redo apply to proceed past the NOLOGGING period.
  • Recovery Behavior: The choice of logging mode directly impacts how consistent your standby will be during failover or switchover operations.


                  *************************

Scenario:

Primary DB:

  • Some tables behave like queues (high-frequency INSERT / DELETE / UPDATE)

Result:

  • Very high redo rate
  • Archive log pressure
  • Potential transport / apply lag on standby


Which logging mode is best?


What NOT to use

FORCE LOGGING


Technically safe, operationally painful here.


Why it’s bad for queue-heavy workloads:

  • Forces redo for everything
  • Amplifies redo generation
  • Increases LGWR pressure, ARCH backlog, network traffic, and standby apply lag


Use only if zero data loss is more important than performance (e.g. financial systems).



Use Carefully


STANDBY NOLOGGING FOR DATA AVAILABILITY


Characteristics:

  • Commit waits for standby acknowledgment
  • Still generates redo
  • Safer than load-performance mode


Why it’s risky for queue tables:

  • Queue workloads are commit-intensive
  • You’ll add commit latency
  • TPS may drop noticeably


Acceptable only if:

  • Queues are business-critical
  • Commit latency is acceptable
  • Network latency is very low



BEST CHOICE for this Case


STANDBY NOLOGGING FOR LOAD PERFORMANCE


This is the recommended model when:

  • Tables behave like queues
  • Redo rate is extremely high
  • Data can be eventually consistent on standby


Why this works best


✔ Primary performance stays high

✔ Commit does not wait for standby

✔ Reduces redo transport pressure

✔ Standby will self-heal missing blocks during recovery


This mode is designed exactly for this pattern.


Important Architecture Insight


For queue-like tables:


Standby consistency ≠ business correctness


Ask yourself:

  • Are the queues replayable?
  • Are they rebuildable?
  • Are they ephemeral?


If yes → you don’t need synchronous protection for them.


Many mature architectures:

  • Exclude queue tables from DR correctness guarantees
  • Rebuild queues after failover
  • Protect core business tables only


Extra Recommendations (Advanced, but powerful)


1️⃣ Isolate redo impact


If possible:

  • Separate queue tablespaces
  • Separate I/O
  • Monitor v$log_history (see the sketch below)
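A simple way to quantify the redo pressure is to count log switches per hour from v$log_history, for example:

SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS log_switches
FROM v$log_history
GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER BY 1;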


2️⃣ Standby tuning

  • Increase PARALLEL_EXECUTION_MESSAGE_SIZE
  • Use real-time apply (see the sketch below)
  • Tune APPLY_PARALLELISM
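For the real-time apply item, a minimal sketch to run on the standby (this assumes standby redo logs are configured; with them in place, real-time apply is the default behavior in recent releases):

-- Restart managed recovery; with standby redo logs, redo is applied
-- as it arrives rather than at log switch
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;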


3️⃣ Consider Far Sync


If network latency is an issue:

  • Far Sync + async redo
  • Keeps the primary fast, improves DR reliability




Decision Matrix

Workload Type             Recommended Mode
Queue / event tables      Load Performance
Bulk ETL                  Load Performance
Financial / OLTP core     Force Logging
Mixed workload            Data Availability


Bottom line:


For queue-style tables with massive archive generation:


STANDBY NOLOGGING FOR LOAD PERFORMANCE is the right model


It’s not a compromise; it’s the correct architectural choice.


Summary:


Logging Mode                                 Writes All Redo   Standby Sync   Performance Impact
FORCE LOGGING                                Yes               N/A            Moderate
STANDBY NOLOGGING FOR DATA AVAILABILITY      Conditional       Yes            Moderate–Low
STANDBY NOLOGGING FOR LOAD PERFORMANCE       Conditional       Async          Low

Choosing the right logging mode ensures:


✔ Reliable redo shipping to standby

✔ Correct application of changes on standby

✔ Reduced risk of data loss during failover



Alireza Kamrani 
