Monday, February 9, 2026

NVMe Over Fabrics (NVMe-oF) with Oracle ASM – Oracle AI Database 26ai


With Oracle AI Database 26ai, Oracle introduces native support for NVMe over Fabrics (NVMe-oF) over TCP/IP, enabling Oracle ASM to use remote NVMe storage with near-local performance.

This is an important evolution in Oracle storage architecture — and a key point to highlight:

The network fabric is Ethernet (TCP/IP), NOT Fibre Channel.



The Oracle Grid Infrastructure server works as an initiator that connects to an NVMe-oF storage target created using Linux Kernel nvmet_tcp module.

You can use NVMe-oF storage devices to create Oracle ASM disk groups. These disk groups can store Oracle AI Database data files. 

This configuration extends Oracle ASM capabilities to the NVMe-oF storage devices. 

Direct access to NVMe-oF storage targets from Oracle AI Database servers offers lower latency and greater throughput for I/O operations.

Oracle Grid Infrastructure 26ai works only with NVMe over Fabrics storage targets running Linux kernel 5.4 or later. 

Use an Ethernet Network Interface Card (NIC) on your Oracle Grid Infrastructure server to connect to an NVMe-oF storage target. 

Oracle recommends that you use a 25 GbE or higher Ethernet NIC to minimize latency.

Note:
The performance of databases stored on the NVMe-oF devices depends on the performance of the network connection between the Oracle Grid Infrastructure servers and the NVMe-oF storage targets. 

For optimal performance, Oracle recommends that you connect the Oracle Grid Infrastructure servers to the NVMe-oF storage targets using private dedicated network connections.

Architecture Overview

  • Oracle Grid Infrastructure 26ai acts as the NVMe-oF initiator

  • Connects to NVMe-oF targets created using the Linux kernel nvmet_tcp module

  • Communication happens over standard Ethernet using TCP/IP

  • Oracle ASM disk groups can be created directly on NVMe-oF devices

  • These disk groups store Oracle AI Database data files

Key Capabilities

  • Direct user-space NVMe-oF access from Oracle processes

  • Low latency & high throughput compared to traditional network storage

  • Extends ASM beyond local disks to remote NVMe storage

  • No FC HBAs, no FC switches, no SAN zoning

Technical Requirements

  • Oracle Grid Infrastructure 26ai

  • Linux kernel 5.4 or later

  • Ethernet-based NVMe-oF over TCP

  • Recommended 25 GbE or higher NIC

  • Dedicated network fabric for storage traffic

Important Note on Networking

Although NVMe-oF is a “fabric” technology:

  •  It is not Fibre Channel

  •  It does not use NVMe-FC

  •  It uses Ethernet + TCP/IP

Performance depends heavily on network quality.
For optimal results, Oracle recommends:

  • Private, dedicated Ethernet connections

  • Low-latency switching

  • No congestion or oversubscription

When properly designed, NVMe-oF over TCP + Oracle ASM delivers scalable, high-performance storage without the complexity of Fibre Channel — ideal for modern Oracle AI workloads.


 Do you need specific RPMs on Oracle Linux?

No extra or special RPMs are required beyond what comes with:

  • Oracle Linux 8/9

  • Oracle Grid Infrastructure 26ai

Why?

  • Oracle uses its own native user-space NVMe-oF initiator

  • The initiator is embedded in Grid Infrastructure 26ai

  • You do NOT use:

    • nvme-cli

    • nvme-tcp kernel initiator

    • Any external NVMe user tools

What is required on the OS?

  • Linux kernel 5.4+ (already standard on OL8/OL9)

  • Standard networking stack

  • Ethernet NIC drivers (25 GbE+ recommended)
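Since the only OS-side prerequisite on the database server is the kernel version, a quick pre-check can be sketched as below; the version-parsing helper is illustrative, not an Oracle-supplied tool:

```shell
# Minimal pre-check on the Oracle Grid Infrastructure (initiator) server:
# confirm the running kernel meets the 5.4 minimum for NVMe-oF over TCP.
check_kernel() {
  # $1 is a kernel release string such as "5.4.17-2136.el8uek.x86_64"
  major=${1%%.*}              # text before the first dot
  rest=${1#*.}
  minor=${rest%%.*}           # text between the first and second dots
  if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 4 ]; }; then
    echo "OK"
  else
    echo "TOO OLD"
  fi
}

check_kernel "$(uname -r)"
```

On Oracle Linux 8/9 with UEK this should report OK out of the box.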

 The NVMe-oF target (storage side) does require:

  • nvmet

  • nvmet-tcp kernel modules
    (but that’s on the storage server, not the Oracle DB server)
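On the storage server, a target built on those kernel modules can be sketched through the configfs interface; the NQN, backing device, and IP address below are illustrative placeholders, so adjust them for your environment:

```shell
# Storage server (target side), run as root. Load the NVMe target modules.
modprobe nvmet nvmet-tcp

# Create a subsystem; the NQN is an example name.
SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2026-02.com.example:asm-data
mkdir "$SUBSYS"
echo 1 > "$SUBSYS/attr_allow_any_host"

# Expose a local NVMe device (example: /dev/nvme0n1) as namespace 1.
mkdir "$SUBSYS/namespaces/1"
echo -n /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1 > "$SUBSYS/namespaces/1/enable"

# Create a TCP port on the dedicated storage network (example IP).
PORT=/sys/kernel/config/nvmet/ports/1
mkdir "$PORT"
echo tcp           > "$PORT/addr_trtype"
echo ipv4          > "$PORT/addr_adrfam"
echo 192.168.10.10 > "$PORT/addr_traddr"
echo 4420          > "$PORT/addr_trsvcid"

# Link the subsystem to the port to start serving it.
ln -s "$SUBSYS" "$PORT/subsystems/"
```

Grid Infrastructure 26ai then connects to this target with its own embedded initiator; no nvme-cli connect step is needed on the database server.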

 Is there any new configuration at the ASM layer?

No new ASM concepts, no new disk types

From an ASM point of view:

  • NVMe-oF disks look like regular ASM disks

  • Disk groups are created exactly the same way

  • ASM redundancy, rebalance, failure groups → unchanged

What does change behind the scenes?

  • The path to the storage

  • ASM disks are backed by remote NVMe-oF namespaces

  • Access is managed internally by Grid Infrastructure

Typical ASM workflow (unchanged)

asmcmd afd_state
asmcmd afd_label NVME01 /dev/oracle_nvme0

CREATE DISKGROUP DATA_NVME EXTERNAL REDUNDANCY
  DISK 'AFD:NVME01';

  • No new ASM parameters

  • No ASM patches

  • No ASM driver changes

 What is new operationally?

 Grid Infrastructure layer

  • GI handles:

    • NVMe-oF session lifecycle

    • Namespace discovery

    • Error handling and reconnects

 Network becomes critical

  • Latency-sensitive

  • Requires:

    • Dedicated Ethernet fabric

    • Stable MTU (jumbo frames if supported)

    • No packet loss
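Those fabric requirements can be validated from the initiator side with standard tools; the interface name and target IP below are illustrative:

```shell
# Enable jumbo frames on the dedicated storage NIC, if the fabric supports them.
ip link set dev ens4f0 mtu 9000
ip link show ens4f0 | grep -o 'mtu [0-9]*'

# Verify the 9000-byte MTU end-to-end against the storage target:
# 8972 = 9000 minus 28 bytes of IP + ICMP headers; -M do forbids
# fragmentation, so the ping fails if any hop has a smaller MTU.
ping -c 3 -M do -s 8972 192.168.10.10

# The same ping output reports packet loss, which should be 0%.
```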

#OracleDatabase #Oracle26ai #NVMeoF #OracleASM #GridInfrastructure #Ethernet #TCPIP #HighPerformanceStorage #DatabaseArchitecture
