NVMe Over Fabrics (NVMe-oF) with Oracle ASM – Oracle AI Database 26ai
With Oracle AI Database 26ai, Oracle introduces native support for NVMe over Fabrics (NVMe-oF) over TCP/IP, enabling Oracle ASM to use remote NVMe storage with near-local performance.
This is an important evolution in Oracle storage architecture — and a key point to highlight:
The network fabric is Ethernet (TCP/IP), NOT Fibre Channel.
Architecture Overview
- Oracle Grid Infrastructure 26ai acts as the NVMe-oF initiator
- Connects to NVMe-oF targets created using the Linux kernel nvmet_tcp module (see the target-side sketch after this list)
- Communication happens over standard Ethernet using TCP/IP
- Oracle ASM disk groups can be created directly on NVMe-oF devices
- These disk groups store Oracle AI Database data files
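For context, the target side referenced above is plain Linux kernel functionality. Below is a minimal sketch, not an Oracle-supplied procedure, of how such an nvmet_tcp target could be built through the kernel's configfs interface on the storage server; the NQN, backing device, IP address, and port are placeholder values, and the nvmet and nvmet_tcp modules are assumed to be already loaded.

```python
#!/usr/bin/env python3
"""Minimal sketch: expose a local NVMe namespace as an NVMe-oF/TCP target via the
Linux nvmet configfs interface. Run as root on the STORAGE server, after
`modprobe nvmet nvmet_tcp`. All names below (NQN, device, IP, port) are
illustrative placeholders, not Oracle-mandated values."""
from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2025-01.com.example:asm-data01"   # placeholder subsystem NQN
BACKING_DEV = "/dev/nvme0n1"                 # local device to export
TRADDR, TRSVCID = "192.168.10.20", "4420"    # storage-network IP and NVMe/TCP port

def write(path: Path, value: str) -> None:
    path.write_text(value + "\n")

# 1. Create the subsystem and allow any host (restrict via allowed_hosts in production).
subsys = NVMET / "subsystems" / NQN
subsys.mkdir(parents=True, exist_ok=True)
write(subsys / "attr_allow_any_host", "1")

# 2. Attach namespace 1, backed by the local NVMe device, and enable it.
ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True, exist_ok=True)
write(ns / "device_path", BACKING_DEV)
write(ns / "enable", "1")

# 3. Create a TCP port on the dedicated storage network.
port = NVMET / "ports" / "1"
port.mkdir(parents=True, exist_ok=True)
write(port / "addr_adrfam", "ipv4")
write(port / "addr_trtype", "tcp")
write(port / "addr_traddr", TRADDR)
write(port / "addr_trsvcid", TRSVCID)

# 4. Bind the subsystem to the port so initiators can discover it.
link = port / "subsystems" / NQN
if not link.exists():
    link.symlink_to(subsys)

print(f"Exported {BACKING_DEV} as {NQN} on {TRADDR}:{TRSVCID} (NVMe/TCP)")
```

Once the subsystem is bound to the TCP port, the Grid Infrastructure initiator on the database hosts can discover and attach the exported namespace over the storage network.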
Key Capabilities
- Direct user-space NVMe-oF access from Oracle processes
- Low latency and high throughput compared to traditional network storage
- Extends ASM beyond local disks to remote NVMe storage
- No FC HBAs, no FC switches, no SAN zoning
Technical Requirements
- Oracle Grid Infrastructure 26ai
- Linux kernel 5.4 or later
- Ethernet-based NVMe-oF over TCP
- 25 GbE or faster NICs recommended
- Dedicated network fabric for storage traffic
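As a quick illustration, the sketch below checks two of these requirements on a database host: the running kernel version and the link speed of the storage NIC. The interface name ens5 is an assumption; substitute the NIC that carries storage traffic.

```python
#!/usr/bin/env python3
"""Minimal sketch: sanity-check two NVMe-oF/TCP prerequisites on an Oracle Linux
host. The storage interface name (ens5) is a placeholder."""
import platform
from pathlib import Path

STORAGE_NIC = "ens5"  # assumed name of the dedicated storage-network interface

# Kernel 5.4 or later (the UEK kernels shipped with OL8/OL9 already satisfy this).
major, minor = (int(x) for x in platform.release().split(".")[:2])
print(f"Kernel {platform.release()}: {'OK' if (major, minor) >= (5, 4) else 'too old (< 5.4)'}")

# NIC speed: 25 GbE or faster is recommended (sysfs reports the speed in Mb/s).
speed = int((Path("/sys/class/net") / STORAGE_NIC / "speed").read_text().strip())
print(f"{STORAGE_NIC} speed {speed} Mb/s: {'OK' if speed >= 25000 else 'below the 25 GbE recommendation'}")
```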
Important Note on Networking
Although NVMe-oF is a “fabric” technology:
- It is not Fibre Channel
- It does not use NVMe-FC
- It uses Ethernet + TCP/IP
Performance depends heavily on network quality.
For optimal results, Oracle recommends:
- Private, dedicated Ethernet connections
- Low-latency switching
- No congestion or oversubscription
When properly designed, NVMe-oF over TCP + Oracle ASM delivers scalable, high-performance storage without the complexity of Fibre Channel — ideal for modern Oracle AI workloads.
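Because network quality matters so much, a coarse health check can be useful before running real I/O benchmarks. The sketch below measures TCP connect round-trip times from a database host to the target's NVMe/TCP port; the address 192.168.10.20 and port 4420 are placeholders, and this gauges only fabric latency, not NVMe performance.

```python
#!/usr/bin/env python3
"""Minimal sketch: coarse TCP round-trip check against the NVMe/TCP port of the
storage target, useful for spotting an overloaded or congested storage fabric.
The target address and port (192.168.10.20:4420) are placeholders."""
import socket
import statistics
import time

TARGET, PORT, SAMPLES = "192.168.10.20", 4420, 20

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((TARGET, PORT), timeout=2):
        pass                                  # connect/teardown only; no NVMe traffic
    rtts.append((time.perf_counter() - start) * 1000)
    time.sleep(0.1)

print(f"TCP connect RTT to {TARGET}:{PORT} over {SAMPLES} samples")
print(f"  min {min(rtts):.2f} ms  median {statistics.median(rtts):.2f} ms  max {max(rtts):.2f} ms")
```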
Do you need specific RPMs on Oracle Linux?
No extra or special RPMs are required beyond what comes with:
- Oracle Linux 8/9
- Oracle Grid Infrastructure 26ai
Why?
- Oracle uses its own native user-space NVMe-oF initiator
- The initiator is embedded in Grid Infrastructure 26ai
- You do NOT use:
  - nvme-cli
  - The nvme-tcp kernel initiator
  - Any external NVMe user-space tools
What is required on the OS?
- Linux kernel 5.4+ (already standard on OL8/OL9)
- Standard networking stack
- Ethernet NIC drivers (25 GbE+ recommended)
The NVMe-oF target (storage side) does require:
- The nvmet and nvmet_tcp kernel modules
(but that's on the storage server, not the Oracle DB server)
Is there any new configuration at the ASM layer?
No new ASM concepts, no new disk types
From an ASM point of view:
- NVMe-oF disks look like regular ASM disks
- Disk groups are created exactly the same way
- ASM redundancy, rebalance, and failure groups are unchanged
What does change behind the scenes?
- The path to the storage
- ASM disks are backed by remote NVMe-oF namespaces
- Access is managed internally by Grid Infrastructure
Typical ASM workflow (unchanged)
- No new ASM parameters
- No ASM patches
- No ASM driver changes
What is new operationally?
Grid Infrastructure layer
- GI handles:
  - NVMe-oF session lifecycle
  - Namespace discovery
  - Error handling and reconnects
- The network becomes critical and latency-sensitive
- Requires:
  - A dedicated Ethernet fabric
  - A stable MTU (jumbo frames if supported)
  - No packet loss (see the interface check sketch after this list)
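The sketch below illustrates one way to watch those last two points on the interface carrying NVMe-oF traffic: it prints the MTU and the kernel's drop/error counters, where non-zero or growing values usually point to congestion or an MTU mismatch on the fabric. The interface name ens5 is again a placeholder.

```python
#!/usr/bin/env python3
"""Minimal sketch: report MTU and drop/error counters for the interface carrying
NVMe-oF traffic. The interface name (ens5) is a placeholder."""
from pathlib import Path

STORAGE_NIC = "ens5"  # assumed dedicated storage interface
nic = Path("/sys/class/net") / STORAGE_NIC

print(f"{STORAGE_NIC} MTU: {(nic / 'mtu').read_text().strip()}")
for counter in ("rx_dropped", "tx_dropped", "rx_errors", "tx_errors"):
    value = (nic / "statistics" / counter).read_text().strip()
    print(f"{STORAGE_NIC} {counter}: {value}")
```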
#OracleDatabase #Oracle26ai #NVMeoF #OracleASM #GridInfrastructure #Ethernet #TCPIP #HighPerformanceStorage #DatabaseArchitecture
