Thursday, February 19, 2026

Oracle GoldenGate Veridata 26ai / 26c

Key features & architecture updates

Oracle GoldenGate Veridata 26ai:
Continuous data trust for hybrid and lakehouse architectures

With the GoldenGate 26ai platform release, Oracle has significantly evolved Veridata 26c to deliver continuous data trust across heterogeneous databases, NoSQL, and modern data platforms.

The focus of this release is clear: automated validation + repair + governance across multi-cloud and hybrid architectures.

Below is a technical breakdown of the most relevant capabilities.

What Veridata actually does
Oracle GoldenGate Veridata is a high-speed data comparison and repair engine that validates consistency between source and target systems during replication, migrations, or coexistence architectures. It compares billions of rows without impacting production workloads and can automatically repair discrepancies.

It can run with or without GoldenGate replication, making it a standalone data-trust layer.
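
As a rough illustration, a standalone comparison can be launched from the Veridata command-line client (vericom). The path, port, and flag names below are assumptions based on earlier Veridata releases, so treat this as a sketch and verify against your 26ai installation:

# Run an existing compare job against the Veridata server (path and flags assumed)
cd $VERIDATA_HOME/bin
./vericom.sh -wul https://veridata-host:8830/veridata -u vd_admin -run ORDERS_SRC_VS_TGT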

Key new features in Veridata 26ai / 26c

1. Expanded heterogeneous and lakehouse coverage
Veridata 26c dramatically expands platform support to align with hybrid and lakehouse architectures:

New compare & repair support includes:

• MongoDB (Atlas, DocumentDB, Cosmos DB for Mongo)
• MariaDB
• SingleStore
• Databricks Delta tables
• LDAP/Active Directory
• Additional NoSQL/document databases

This enables cross-platform validation between:

• OLTP databases
• NoSQL stores
• lakehouse targets
• distributed microservices data stores

This is critical for organizations running CDC pipelines into lakehouses or event platforms and needing end-to-end validation.

2. Built-in scheduler and operational automation
Veridata now includes a native job scheduler in the UI.

Capabilities:
• Schedule validation jobs directly from the UI
• Run periodic integrity checks without external schedulers
• Manage server configuration from the web UI
• Modify logging levels dynamically

This reduces reliance on external orchestration and enables continuous validation pipelines.
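
For context, this is the kind of external plumbing the native scheduler replaces. Previously you might have wrapped the command-line client in cron, something like the hypothetical crontab entry below (paths and flags assumed); with 26ai the same nightly check can be defined directly in the UI scheduler:

# Hypothetical crontab entry: nightly integrity check via the Veridata command-line client
0 2 * * * /u01/app/veridata/bin/vericom.sh -wul https://veridata-host:8830/veridata -u vd_admin -run NIGHTLY_COMPARE >> /var/log/veridata_nightly.log 2>&1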

3. Compare-and-repair for NoSQL and modern data stores
New compare/repair support now covers document and distributed databases:

• MongoDB-compatible systems
• SingleStore
• heterogeneous comparisons across relational + NoSQL

The engine can:
• Compare by row count or full data
• Generate repair SQL
• Execute automated repairs

This positions Veridata as a data trust layer for polyglot persistence environments.
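
Conceptually, a generated repair for a single drifted row on a relational target could look like the statement below. This is purely illustrative: the table, columns, and values are hypothetical, and in practice the engine generates and (optionally) executes the repair SQL for you:

-- Hypothetical repair statement for one out-of-sync row on the target
UPDATE target_app.customers
   SET status     = 'ACTIVE',
       updated_at = TIMESTAMP '2026-02-18 10:15:00'
 WHERE customer_id = 1042;
COMMIT;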

4. OCI-managed Veridata (Veridata as a Service)
Veridata is now available as a fully managed OCI service.

Benefits:
• No infrastructure management
• Cloud-scale scheduling and automation
• Unified GoldenGate control plane
• Hybrid/multi-cloud validation workflows

This aligns with GoldenGate’s broader move toward “data integration as a service”.

5. Modernized architecture & DevOps readiness
The 26c generation introduced major platform modernization:
• Microservices-based architecture
• Embedded MySQL repository
• REST APIs for automation
• Simplified installation
• Improved UI and reporting

This makes Veridata far easier to integrate into CI/CD and data-platform pipelines.
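
As a purely illustrative sketch of that kind of integration (the endpoint path and payload here are hypothetical placeholders, not the documented Veridata REST API), a CI/CD stage could trigger a comparison over REST roughly like this:

# Hypothetical REST call from a CI/CD stage -- endpoint and payload are placeholders
curl -k -u vd_admin:"$VD_PASSWORD" \
  -H "Content-Type: application/json" \
  -X POST "https://veridata-host:8830/veridata/api/jobs/ORDERS_SRC_VS_TGT/run" \
  -d '{"waitForCompletion": true}'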

6. Unified governance across hybrid & multi-cloud
Veridata is positioned as a governance layer for:
• long-running migrations
• zero-downtime upgrades
• active-active replication
• multi-cloud replication
• lakehouse synchronization

It ensures no data is missing or out of sync across distributed systems.

Why Veridata 26ai matters architecturally
1. Continuous validation becomes mandatory
In modern architectures:
• CDC pipelines
• streaming ingestion
• lakehouse replication
• microservices data duplication

Replication ≠ correctness

Veridata provides continuous verification, not just movement.

2. Data trust for AI and analytics pipelines
GoldenGate 26ai emphasizes AI-ready data pipelines.

Veridata complements this by ensuring:
• feature store correctness
• training data integrity
• cross-system consistency

Without validation, AI pipelines risk training on inconsistent data.

3. Lakehouse + database coexistence
Many enterprises now run:

OLTP → CDC → Kafka → Lakehouse → serving DB

Veridata 26c can validate:
• relational → NoSQL
• relational → lakehouse
• cross-cloud

This makes it a critical trust layer for hybrid data platforms.

Typical architecture with Veridata 26ai


Source DB (Oracle/Postgres/etc)
      │
GoldenGate replication
      │
Target DB / Lakehouse
      │
Veridata continuous validation
      │
Auto repair / governance

Now extended to:


Relational ↔ NoSQL ↔ Lakehouse ↔ Multi-cloud

Veridata’s role is shifting: from migration validator → continuous trust layer

Traditionally, Veridata was used during:
• zero-downtime migrations
• platform upgrades
• active-active replication validation

You’d run compares, fix drift, and move on.

With Oracle GoldenGate Veridata 26ai, Oracle is clearly repositioning it as a continuous data trust service that runs permanently alongside replication and data pipelines.

My perspective
The most important shift in Veridata 26ai is not a single feature.

It’s the positioning.

Oracle is building:
Replication + observability + validation + repair
= continuous data trust platform

As architectures get more distributed, this becomes mandatory.
Most organizations still validate data only during migration.
That model doesn’t work anymore.

✨️Oracle GoldenGate Veridata 26c is also being released as container images available through Oracle Container Registry (OCR).
This provides customers with a modern, standardized way to deploy Veridata in containerized and cloud-native environments.
By delivering Veridata as container images and aligning Bundle Patch updates with a quarterly cadence, Oracle simplifies lifecycle management and keeps containerized deployments consistently aligned with supported fixes and enhancements. This approach supports customers adopting Kubernetes and other container platforms while maintaining the same enterprise-grade validation capabilities Veridata is known for.
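
A minimal sketch of pulling and starting such an image, assuming a repository path under Oracle Container Registry (the exact image name, tag, and exposed port should be confirmed on container-registry.oracle.com):

# Sign in to Oracle Container Registry, then pull and start the Veridata image
# (repository path, tag, and port below are assumptions)
docker login container-registry.oracle.com
docker pull container-registry.oracle.com/goldengate/goldengate-veridata:latest
docker run -d --name veridata -p 8830:8830 container-registry.oracle.com/goldengate/goldengate-veridata:latest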

When to use Veridata 26ai
You should seriously consider it if you run:
• long-term GoldenGate replication
• active-active databases
• cross-cloud replication
• CDC into lakehouses
• zero-downtime migrations
• database modernization programs
It’s especially valuable in regulated environments where data correctness must be provable.

Supported Databases for Repair
Oracle GoldenGate Veridata supports the following databases for repair functionality:
• HPE NonStop
• IBM Db2
• MariaDB
• Microsoft SQL Server
• MySQL
• Oracle Database
• PostgreSQL
• Snowflake
• Sybase Adaptive Server Enterprise
• Teradata Vantage
• MongoDB Database
• SingleStore Database

For the latest information about Oracle GoldenGate Veridata release, including the list of certified database versions and operating systems, go to My Oracle Support at http://support.oracle.com.

Final thoughts
GoldenGate Veridata 26ai is evolving into a core component of modern data platforms, not just a migration tool.
If GoldenGate moves data,
Veridata proves the data is correct.
And in distributed architectures,
that distinction matters more than ever.

GoldenGate Veridata 26ai is no longer just a migration validation tool.
It’s becoming a continuous data trust platform for:
• hybrid databases
• lakehouses
• streaming pipelines
• AI data platforms
In architectures where data moves across many systems, trust must be continuously verified — not assumed.
Veridata 26ai is Oracle’s answer to that problem.


Reference:
https://docs.oracle.com/en/database/goldengate/veridata/26/index.html

Wednesday, February 18, 2026

Oracle 26ai: Automatic Storage Compression

Oracle Compression in 26ai:

Are you tired of choosing between fast direct loads and the massive space savings of Hybrid Columnar Compression (HCC)? 

Oracle Database 26ai introduces a game-changer: Automatic Storage Compression (ASC).


https://www.linkedin.com/feed/update/urn:li:groupPost:8151826-7429626918938791936


ASC solves the classic dilemma: 

HCC’s compression overhead can slow down high-volume direct loads. 

With ASC, you get the speed of uncompressed direct loads first, and the space efficiency of HCC later, all automatically in the background.


ASC allows you to benefit from two traditionally opposing advantages simultaneously:


  1. The World of Speed (Fast Direct Loads): When you load a large amount of data into a database, you want the process to be as fast as possible. Compression, especially a high-ratio compression like Hybrid Columnar Compression (HCC), adds overhead that can slow down the initial data loading process. ASC initially loads the data uncompressed to achieve the fastest possible direct load performance.
  2. The World of Efficiency (Massive Space Savings): Once the data is loaded, you want it to take up the least amount of disk space and be optimized for analytical queries. HCC provides superior space savings and fast analytics. ASC automatically and gradually compresses the data into the HCC format in the background after the initial load is complete and the data is no longer being actively modified.
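
For reference, the "fast direct load" in question is a direct-path insert such as the one below (a generic example; the table and staging source are placeholders). With ASC, the rows land uncompressed and are picked up by the background task later:

-- Direct-path load into the HCC-enabled table; ASC leaves the new data uncompressed at first
INSERT /*+ APPEND */ INTO scott.mytab
SELECT * FROM scott.mytab_staging;
COMMIT;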


How it Works: The Intelligent Background Process

Instead of forcing compression during the load, ASC uses a smart, two-phase approach:


Phase 1: Speed: Data is direct-loaded into an uncompressed format, ensuring maximum load performance.


Phase 2: Space: After a user-specified DML inactivity threshold is met, a background Automatic Compression AutoTask gradually moves and HCC-compresses the data.


This is a huge improvement over the manual ILM process, which often required full segment rebuilds and complex policy management.
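
For comparison, the manual approach typically meant attaching an ADO/ILM policy to each table, along these lines (standard ADO syntax; the table name and threshold are just examples):

-- Manual ILM/ADO alternative: a per-table policy that ASC now makes unnecessary
ALTER TABLE scott.mytab ILM ADD POLICY
  COLUMN STORE COMPRESS FOR QUERY LOW SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;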


Prerequisites for Activation

To enable this feature, ensure your environment meets these simple requirements:


  • Set `HEAT_MAP=ON` in your Pluggable Database (PDB).


  • The table must be specified for HCC and reside on a tablespace using `SEGMENT SPACE MANAGEMENT AUTO` and `AUTOALLOCATE`.
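
A minimal setup that satisfies these prerequisites could look like this (names, sizes, and the ASM disk group are examples; run the ALTER SYSTEM inside the PDB):

-- In the PDB: enable Heat Map (required for the compression AutoTask)
ALTER SYSTEM SET heat_map = ON;

-- Locally managed tablespace with AUTOALLOCATE and automatic segment space management
CREATE TABLESPACE asc_data
  DATAFILE '+DATA' SIZE 1G AUTOEXTEND ON
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

-- Table created with an HCC clause, ready for automatic background compression
CREATE TABLE scott.mytab
  TABLESPACE asc_data
  COLUMN STORE COMPRESS FOR QUERY LOW
  AS SELECT * FROM scott.source_tab WHERE 1 = 0;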


Step-by-Step Usage Example

Here’s how easy it is to enable and monitor this feature:


1. Enable Automatic Storage Compression at the PDB level:


SQL> exec dbms_ilm_admin.enable_auto_optimize


2. Check the initial compression state (Uncompressed = 1):


SELECT UNIQUE dbms_compression.get_compression_type('SCOTT', 'MYTAB', rowid)
FROM scott.mytab;

-- Result: 1 (Uncompressed)

3. Monitor the uncompressed size before compression starts:


SELECT bytes/1024/1024 MB
FROM dba_segments
WHERE owner = 'SCOTT' AND segment_name = 'MYTAB';

-- Result: 5.625 (MB)

4. Monitor the background compression progress:


The `v$sysstat` view tracks the data moved and compressed by the background task.


SELECT name, value
FROM v$sysstat
WHERE name LIKE 'Auto compression data%';


Name                                        Value (initial)    Value after compression
Auto compression data movement success      0                  0
Auto compression data movement failure      0                  0
Auto compression data moved                 0                  6 (MB)


5. Check the final compression state (HCC Query Low = 8) and size:


SELECT UNIQUE dbms_compression.get_compression_type('SCOTT', 'MYTAB', rowid)
FROM scott.mytab;

-- Result: 8 (HCC Query Low)


SELECT bytes/1024/1024 MB
FROM dba_segments
WHERE owner = 'SCOTT' AND segment_name = 'MYTAB';

-- Result: 0.3125 (MB)


From 5.625MB uncompressed to 0.3125MB compressed—that’s a massive space saving, achieved without sacrificing load performance!


Using this feature in OLTP vs. OLAP environments


The Automatic Storage Compression (ASC) feature in Oracle 26ai is primarily designed to be useful for OLAP (Online Analytical Processing) environments, or more specifically, Data Warehousing environments, which are characterized by:


  • High-volume, bulk data loading (Direct Loads): This is where the initial speed benefit of ASC comes into play.
  • Large, mostly static data sets: This is where the massive space savings and analytical performance benefits of Hybrid Columnar Compression (HCC) are most valuable.


Why it’s best for OLAP/Data Warehousing

ASC is a direct solution to a problem common in data warehouses: the trade-off between fast data loading and HCC’s compression overhead.


  • Fast Loading: Data is loaded quickly in an uncompressed state.
  • Space/Query Efficiency: The data is then automatically converted to HCC, which is optimized for read-heavy analytical queries and provides superior compression ratios.
  • DML Inactivity Threshold: The compression only happens after a period of DML (Data Manipulation Language) inactivity, which is typical for historical or read-only partitions in a data warehouse.


Why it’s less suited for OLTP

OLTP (Online Transaction Processing) environments are characterized by frequent, small, and concurrent transactions (inserts, updates, deletes).


  • Compression Type: OLTP environments typically use basic compression (like `COMPRESS FOR ALL OPERATIONS`), which is optimized for transactional workloads and allows for DML on compressed blocks. HCC, which ASC uses, is generally not recommended for tables with high DML activity because it can lead to block splits and reduced performance.


  • Inactivity Requirement: ASC’s core mechanism relies on a “DML inactivity threshold” before compression begins. This threshold is rarely met in a high-activity OLTP system, meaning the data would likely remain uncompressed, defeating the purpose of the feature.


Summary

Automatic Storage Compression in Oracle 26ai is a significant step forward for data warehousing and large-scale data ingestion. 


It ensures you get the best of both worlds: blazing-fast direct loads and the superior space efficiency of Hybrid Columnar Compression.


ASC is a powerful automation feature that maximizes the benefits of Hybrid Columnar Compression, making it a highly useful and relevant feature for OLAP and Data Warehousing environments.

*********************************************

Sunday, February 15, 2026

Oracle RAC 26ai New Feature


Joining Inaccessible Nodes After a Forced Cluster Upgrade (Oracle Grid Infrastructure 26ai)

With Oracle Grid Infrastructure 26ai, cluster lifecycle management becomes more flexible, especially when dealing with partially upgraded or temporarily inaccessible nodes.

In earlier releases, if a node was unreachable during a cluster upgrade, the only practical option was often to delete and reconfigure the node. In 26ai, Oracle introduces a cleaner and safer mechanism to rejoin inaccessible nodes after a forced cluster upgrade.

 

Scenario: Forced Cluster Upgrade Completed

You performed a force cluster upgrade, and one or more nodes were inaccessible during the process.

Instead of:

  • Deleting the node
  • Cleaning OCR references
  • Re-adding the node from scratch

You can now rejoin the node directly, provided that:

Oracle Grid Infrastructure 26ai software is already installed on the node.

 

🛠 Procedure: Join an Inaccessible Node

Step 1 – Log in as root

On the node that was inaccessible:

ssh root@inaccessible_node

 

Step 2 – Change to Grid Infrastructure Home

cd $GRID_HOME

Example:

cd /u01/app/26.0.0/grid

 

Step 3 – Run the Join Command

Use the following syntax:

./rootupgrade.sh -join -existingnode upgraded_node

Where:

  • upgraded_node = A cluster node that was successfully upgraded
  • The script synchronizes cluster metadata and configuration

 

Example:

./rootupgrade.sh -join -existingnode node1

This command:

  • Reintegrates the node into the cluster
  • Syncs OCR configuration
  • Aligns voting disk and cluster registry metadata
  • Avoids full node reconfiguration

 

Changing the First Node for Installation or Upgrade

Cluster installation/upgrade designates a first node that initializes cluster configuration.

But what if the first node becomes inaccessible?


During Installation

If root.sh fails on the first node:

Run this on another node:

root.sh -force -first

This forces the new node to assume the role of the first node for installation.

 

During Upgrade

If the first node fails during upgrade:

rootupgrade.sh -force -first

This command:

  • Overrides first-node designation
  • Continues upgrade process from another node
  • Prevents rollback or cluster restart requirement

 

Architecture Impact

In large RAC environments:

  • Rolling upgrades are common
  • Network partitions can occur
  • Temporary node failures are realistic

 

With 26ai:

  • No need to delete/recreate nodes
  • Less downtime risk
  • Better operational continuity
  • Simplified recovery from partial upgrades

This is particularly valuable in:

  • Exadata environments
  • Extended clusters
  • Multi-site RAC with Data Guard

 

 Important Notes

  • The node must already have 26ai Grid binaries installed
  • Ensure cluster interconnect and voting disks are reachable
  • Verify CRS status after join:

crsctl check cluster -all

  • Always validate cluster health post-operation:

olsnodes -n

crsctl stat res -t
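
A quick way to confirm the rejoined node is running the expected Grid Infrastructure release is to compare the cluster's active version with the software version on that node (standard crsctl queries; replace node2 with the rejoined node name):

# Active version of the cluster vs. software version installed on the rejoined node
crsctl query crs activeversion
crsctl query crs softwareversion node2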

 

 Summary

Oracle Grid Infrastructure 26ai significantly improves cluster resilience by allowing:

  • Rejoining inaccessible nodes after forced upgrades
  • Forcing a new first node during install or upgrade

This eliminates the painful delete-and-readd cycle from previous releases and reduces operational complexity in production RAC environments.

 
