Execution Plan & Statistics in Oracle SQL
An in-depth look at SQL query diagnosis and tuning
Execution Plan
The Execution Plan shows the query execution path chosen by the SQL optimizer. Execution Plan output is generated using EXPLAIN PLAN and DBMS_XPLAN.
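A minimal example of producing this output with EXPLAIN PLAN and DBMS_XPLAN (a sketch using the sample HR schema's EMPLOYEES table; substitute your own table):

```sql
-- Populate PLAN_TABLE with the optimizer's chosen plan (no execution)
EXPLAIN PLAN FOR
SELECT e.last_name, e.salary
FROM   employees e
WHERE  e.salary > 12000;

-- Format and display the most recent plan from PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```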
The statistics are recorded by the server when your statement executes and indicate the system resources required to execute your statement. The results include the following statistics.
Database Statistic Name | Description
recursive calls | Number of recursive calls generated at both the user and system level. Oracle Database maintains tables used for internal processing. When Oracle Database needs to change these tables, it internally generates a SQL statement, which in turn generates a recursive call.
db block gets | Number of times a CURRENT block was requested.
consistent gets | Number of times a consistent read was requested for a block.
physical reads | Total number of data blocks read from disk. This number equals the value of "physical reads direct" plus all reads into the buffer cache.
redo size | Total amount of redo generated, in bytes.
bytes sent through Oracle Net Services to client | Total number of bytes sent to the client from the foreground processes.
bytes received through Oracle Net Services from client | Total number of bytes received from the client over Oracle Net.
Oracle Net Services round-trips to/from client | Total number of Oracle Net messages sent to and received from the client.
sorts (memory) | Number of sort operations performed completely in memory, without any disk writes.
sorts (disk) | Number of sort operations that required at least one disk write.
rows processed | Number of rows processed during the operation.
The client referred to in the statistics is SQL*Plus. Oracle Net refers to the generic process communication between SQL*Plus and the server, regardless of whether Oracle Net is installed. You cannot change the default format of the statistics report.
In Oracle, recursive calls are internal SQL statements that the database generates automatically to perform its own housekeeping tasks. They are not issued directly by the user, but by Oracle itself when it needs to maintain data dictionary tables or perform background operations.
Examples of when recursive calls happen:
- Creating or modifying database objects (tables, indexes, views, etc.)
- Managing space in segments (e.g., extending a table or index)
- Maintaining internal data dictionary tables
- Executing PL/SQL blocks that internally issue SQL queries
Why it matters:
- A normal amount of recursive calls is expected in any Oracle workload.
- Excessive recursive calls often indicate:
  - Inefficient application code (too many DDLs or too much dynamic SQL)
  - Lack of bind variables, leading to many hard parses
  - Repeated queries against data dictionary views (like ALL_TABLES, DBA_USERS)
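The bind-variable point can be illustrated in SQL*Plus. With a literal, each distinct value forces a hard parse; with a bind variable, one shared cursor is reused (EMPLOYEES is the sample HR schema table; substitute your own):

```sql
-- Literal: a new hard parse for every distinct value
SELECT last_name FROM employees WHERE employee_id = 101;

-- Bind variable: the cursor is parsed once and reused across values
VARIABLE emp_id NUMBER
EXEC :emp_id := 101
SELECT last_name FROM employees WHERE employee_id = :emp_id;
```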
DBA Action: if recursive calls are unusually high, check the Top SQL report in AWR/Statspack to identify the source, then tune the SQL or reduce unnecessary data dictionary access.
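One way to see where recursive calls accumulate is to read the statistic per session from V$SESSTAT (a sketch; requires SELECT privilege on the V$ views):

```sql
-- Recursive calls for the current session
SELECT sn.name, ss.value
FROM   v$statname sn
       JOIN v$sesstat ss ON ss.statistic# = sn.statistic#
WHERE  sn.name = 'recursive calls'
AND    ss.sid  = SYS_CONTEXT('USERENV', 'SID');
```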
What makes "db block gets" too high?
- A large number of full table scans combined with updates/deletes.
- Poor index usage, causing row lookups with excessive block visits.
- Hot blocks (many sessions contending for the same block).
- Inefficient application logic updating many rows repeatedly.
How to optimize / tune:
- SQL tuning
  - Ensure queries use the right indexes (avoid unnecessary full scans).
  - Rewrite SQL to minimize row-by-row updates (use bulk operations where possible).
- Reduce hot block contention
  - Spread inserts/updates across multiple blocks (use reverse key indexes or hash partitioning if appropriate).
  - Use sequence caching, or multiple sequences, to avoid all inserts hitting the same index block.
- Monitor the ratio
  - Compare db block gets vs. consistent gets:
    - In OLTP, consistent gets should normally be higher.
    - If db block gets dominates, review the workload for excessive modifications.
Rule of thumb for DBA attention:
- If db block gets is high relative to consistent gets, investigate SQL execution and potential hot block contention.
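The two statistics can be compared system-wide from V$SYSSTAT; a sketch:

```sql
-- Cumulative values since instance startup
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('db block gets', 'consistent gets');
```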
Consistent Gets vs. Physical Reads
Consistent Gets
- Definition: the number of times Oracle accessed a block in consistent mode (read-only, possibly using undo data to provide a read-consistent image).
- Happens in memory (buffer cache).
- Reflects how often queries are satisfied without going to disk.
- Higher is usually better relative to physical reads: it means most queries are resolved from cache.
Optimization for consistent gets:
- Tune SQL to use indexes efficiently (fewer block visits).
- Reduce unnecessary large scans (use partitions, filters).
- Ensure the buffer cache is large enough that frequently accessed blocks remain cached.
- Rewrite queries to reduce repeated lookups of the same data.
Physical Reads
- Definition: the number of blocks read from disk into the buffer cache because they were not already cached.
- Much slower than consistent gets, since disk I/O is involved.
- High values usually indicate insufficient memory, poor SQL access paths, or large scans.
Optimization for physical reads:
- Increase DB_CACHE_SIZE (more memory for the buffer cache).
- Use the KEEP pool for frequently accessed small lookup tables.
- Tune SQL to avoid unnecessary full table scans.
- Use indexes to minimize disk I/O.
- Consider partitioning very large tables to reduce the number of scanned blocks.
- On Exadata, enable Smart Scan to offload reads.
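The KEEP-pool suggestion above can be sketched as follows (COUNTRY_LOOKUP is a hypothetical table name; the KEEP pool must first be sized via DB_KEEP_CACHE_SIZE):

```sql
-- Assign a small, hot lookup table to the KEEP buffer pool
ALTER TABLE country_lookup STORAGE (BUFFER_POOL KEEP);
```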
Relationship
- Ideally, Oracle should satisfy most reads via consistent gets (memory) and minimize physical reads (disk).
- A high consistent gets : physical reads ratio is a good sign of cache efficiency.
- Example: if consistent gets = 1,000,000 and physical reads = 10,000, the ratio is 100:1 (good).
- If the ratio is low (< 10:1), check memory sizing and SQL plans.
DBA rule of thumb:
- Consistent gets high → good (memory hit rate is high).
- Physical reads high → needs tuning (SQL or memory).
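The ratio in the example above can be computed directly from V$SYSSTAT; a sketch:

```sql
-- Consistent gets per physical read (higher is better)
SELECT ROUND(cg.value / NULLIF(pr.value, 0)) AS gets_per_physical_read
FROM   (SELECT value FROM v$sysstat WHERE name = 'consistent gets') cg,
       (SELECT value FROM v$sysstat WHERE name = 'physical reads')  pr;
```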
What is Redo Size?
- It represents the total amount of redo entries (in bytes) generated in the redo log buffer and written to the redo log files.
- Redo is required for data recovery, so every change to a data block (INSERT, UPDATE, DELETE, DDL) must generate redo.
- Even some SELECTs (those that create temporary segments) may generate redo.
Why redo size matters:
- High redo generation increases:
  - Redo log I/O (LGWR writes more frequently).
  - Archiving overhead (ARCH or Data Guard).
  - Network traffic (in RAC or Data Guard).
- Excessive redo means the system is performing too many changes, or performing them inefficiently.
Causes of high redo size:
- Large batch DML operations (bulk UPDATE/DELETE/INSERT).
- Indexes on heavily updated tables (each index update generates redo).
- Unnecessary commits in loops (frequent commits add redo header overhead).
- Hot tables receiving massive concurrent DML.
- Data loads using the conventional path (INSERT … VALUES / SELECT).
- Logging mode left at the default (redo is always generated).
How to control / minimize redo size:
- Use direct-path inserts
  - INSERT /*+ APPEND */ or SQL*Loader direct path to bypass redo (in NOLOGGING mode).
  - Best for bulk loads where recovery of intermediate steps isn't critical.
- Use NOLOGGING / UNRECOVERABLE options (with caution!)
  - For objects like indexes, materialized views, and temporary tables.
  - Reduces redo, but means you cannot recover that object via redo after a failure.
- Minimize unnecessary indexes
  - Each index on a table adds extra redo during DML. Keep only the required indexes.
- Batch and commit strategy
  - Avoid committing row by row; use larger batch commits.
  - Example: commit every 10,000 rows instead of every row.
- Use global temporary tables (GTTs)
  - Operations on a GTT generate little or no redo (only undo).
- Partitioning
  - Reduce redo during data maintenance by operating on smaller partitions.
- Application tuning
  - Avoid unnecessary updates (e.g., updating a column to the same value).
  - Use MERGE instead of separate INSERT/UPDATE statements when applicable.
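The direct-path technique above can be sketched as follows (SALES_STAGE and SALES_SOURCE are hypothetical tables; remember that NOLOGGING data is unrecoverable via redo, so back up afterwards):

```sql
ALTER TABLE sales_stage NOLOGGING;

-- Direct-path insert: loads above the high-water mark,
-- generating minimal redo while the table is in NOLOGGING mode
INSERT /*+ APPEND */ INTO sales_stage
SELECT * FROM sales_source;
COMMIT;

ALTER TABLE sales_stage LOGGING;
```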
DBA rule of thumb:
- Redo size should scale with business workload.
- Sudden spikes in redo size are usually due to a bad SQL statement or a massive DML job.
- Regularly check "Redo size per transaction" in AWR/Statspack reports to detect inefficient application logic.
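A quick instance-wide approximation of redo per transaction can be taken from V$SYSSTAT (AWR gives the per-interval figure; this sketch is cumulative since startup):

```sql
SELECT rs.value / NULLIF(uc.value, 0) AS redo_bytes_per_commit
FROM   (SELECT value FROM v$sysstat WHERE name = 'redo size')    rs,
       (SELECT value FROM v$sysstat WHERE name = 'user commits') uc;
```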
Sorts (Memory)
- A sort operation that completes entirely in the PGA (Program Global Area).
- Fast, because the data is kept in memory.
- Examples: ORDER BY, GROUP BY, DISTINCT, UNION, MERGE JOIN, index builds.
- Tracked in Oracle statistics as sorts (memory).
Good sign: most sorts should complete in memory.
Sorts (Disk)
- When a sort cannot fit into the available PGA memory (workarea), Oracle spills data to the temporary tablespace (TEMP).
- Disk sorts are much slower due to I/O.
- Tracked as sorts (disk) in Oracle statistics.
Bad sign if frequent: indicates insufficient memory or badly written queries.
Causes of high sorts (disk):
- Insufficient PGA memory (PGA_AGGREGATE_TARGET or PGA_AGGREGATE_LIMIT set too small).
- Poor SQL design requiring unnecessary or large sorts.
- Lack of indexes, forcing large sorting operations.
- Too many concurrent sorts exhausting memory.
- Large reporting/analytic queries with ORDER BY, GROUP BY, DISTINCT.
How to minimize disk sorts:
- Increase PGA memory
  - Use PGA_AGGREGATE_TARGET (or MEMORY_TARGET/MEMORY_MAX_TARGET) to allocate more memory.
  - In Oracle 12c and later, PGA_AGGREGATE_LIMIT prevents runaway usage, so tune it accordingly.
- Use WORKAREA_SIZE_POLICY = AUTO
  - Let Oracle automatically size sort/hash areas based on workload.
- Optimize SQL
  - Avoid unnecessary ORDER BY or DISTINCT.
  - Use indexes to avoid large sorts.
  - Rewrite queries with analytic functions to reduce temporary sorting.
- Use the TEMP tablespace efficiently
  - Ensure TEMP has enough space to handle the required disk sorts.
  - Monitor V$TEMPSEG_USAGE for active temp usage.
- Batch processing
  - Break very large sorts into smaller chunks where possible.
- Parallel query caution
  - Parallel operations can multiply sort memory requirements and push more work to disk.
DBA rule of thumb:
- Sorts (memory) >> sorts (disk) → healthy system.
- If sorts (disk) is high:
  - Check PGA_AGGREGATE_TARGET sizing.
  - Review top SQL for bad query design.
  - Check TEMP usage (it shouldn't grow excessively unless queries are poorly tuned).
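The checks above can be sketched as:

```sql
-- Memory vs. disk sorts since instance startup
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');

-- Current PGA target setting
SELECT value FROM v$parameter WHERE name = 'pga_aggregate_target';

-- Active temporary segment usage
SELECT session_num, tablespace, segtype, blocks
FROM   v$tempseg_usage;
```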
Sorting in PGA (Memory Sorts)
- If the sort fits entirely in the PGA (workarea), it does not generate redo, because it is private to the session.
- Example: a query with ORDER BY that completes in memory.
- Undo may still be generated if Oracle needs to construct read-consistent versions of blocks, but redo is not generated for in-memory sorting.
Sorting in the TEMP Tablespace (Disk Sorts)
- When a sort spills to TEMP, Oracle writes to tempfiles.
- Tempfiles are not redo-logged (no redo is generated).
- This is by design: tempfiles can be recreated, and their contents are not needed for recovery.
But Sorting Can Generate Redo in Some Cases
- Index creation / rebuild
  - Oracle sorts the keys before writing them to the index structure.
  - The index blocks being created do generate redo (unless you use NOLOGGING).
- Global temporary tables with ON COMMIT PRESERVE ROWS
  - Depending on settings, some redo may be generated for metadata/undo, but not for the bulk sort data.
- Operations involving permanent segments
  - If the sort result is materialized into a permanent table (e.g., CREATE TABLE AS SELECT … ORDER BY), redo will be generated for populating the table.
Rule of thumb:
- Normal SQL sorts (memory or TEMP) → no redo.
- DDL operations (index build, CTAS, MV refresh) → redo is generated (unless NOLOGGING).
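The NOLOGGING index build mentioned above can be sketched as follows (EMP_LAST_NAME_IX is a hypothetical index name; back up afterwards, since the index cannot be recovered via redo):

```sql
-- Build the index with minimal redo; the sort itself uses PGA/TEMP
CREATE INDEX emp_last_name_ix ON employees (last_name) NOLOGGING;
```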
Tracing Statements for Performance Statistics and Query Execution Path
If the SQL buffer contains the following statement:

SELECT E.LAST_NAME, E.SALARY, J.JOB_TITLE
FROM EMPLOYEES E, JOBS J
WHERE E.JOB_ID = J.JOB_ID AND E.SALARY > 12000;

the statement can be automatically traced when it is run:

SET AUTOTRACE ON
/
Example: Tracing Statements Without Displaying Query Data
To trace the same statement without displaying the query data, enter:

SET AUTOTRACE TRACEONLY
/
About Collecting Timing Statistics
Use the SQL*Plus TIMING command to collect and display data on the amount of computer resources used to run one or more commands or blocks. TIMING collects data for an elapsed period of time, saving the data on commands run during the period in a timer.
To delete all timers, enter CLEAR TIMING.
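A minimal TIMING session might look like this (the timer name is arbitrary, and EMPLOYEES is the sample HR schema table):

```sql
TIMING START query_timer
SELECT COUNT(*) FROM employees;
TIMING SHOW
TIMING STOP
CLEAR TIMING
```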
Tracing Parallel and Distributed Queries
When you trace a statement in a parallel or distributed query, the Execution Plan output depends on the statement you use.
Example: Tracing Statements With the Parallel Query Option
To trace a parallel query running with the parallel query option:

CREATE TABLE d2_t1 (unique1 NUMBER) PARALLEL (DEGREE 6);
CREATE TABLE d2_t2 (unique1 NUMBER) PARALLEL (DEGREE 6);
CREATE UNIQUE INDEX d2_i_unique1 ON d2_t1 (unique1);

SET LONG 500 LONGCHUNKSIZE 500
SET AUTOTRACE ON EXPLAIN

SELECT /*+ INDEX(B,D2_I_UNIQUE1) USE_NL(B) ORDERED */ COUNT(A.UNIQUE1)
FROM D2_T2 A, D2_T1 B
WHERE A.UNIQUE1 = B.UNIQUE1;
Execution Plan
----------------------------------------------------------
Plan hash value: 107954098

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |    1 |    26 |     1   (0)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE            |          |    1 |    26 |            |          |        |      |            |
|   2 |   PX COORDINATOR           |          |      |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)     | :TQ10001 |    1 |    26 |            |          |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE         |          |    1 |    26 |            |          |  Q1,01 | PCWP |            |
|   5 |      NESTED LOOPS          |          |    1 |    26 |     1   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   6 |       PX RECEIVE           |          |      |       |            |          |  Q1,01 | PCWP |            |
|   7 |        PX SEND BROADCAST   | :TQ10000 |      |       |            |          |  Q1,00 | P->P | BROADCAST  |
|   8 |         PX BLOCK ITERATOR  |          |    1 |    13 |     0   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL | D2_T2    |    1 |    13 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  10 |       PX BLOCK ITERATOR    |          |    1 |    13 |     2   (0)| 00:00:01 |  Q1,01 | PCWC |            |
|* 11 |        TABLE ACCESS FULL   | D2_T1    |    1 |    13 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  11 - filter("A"."UNIQUE1"="B"."UNIQUE1")

Note
-----
   - dynamic sampling used for this statement
Example: To monitor disk reads and buffer gets, enter:

SET AUTOTRACE TRACEONLY STATISTICS

The following shows typical results:

Statistics
----------------------------------------------------------
        467  recursive calls
         27  db block gets
        147  consistent gets
         20  physical reads
       4548  redo size
        502  bytes sent via Oracle Net Services to client
        496  bytes received via Oracle Net Services from client
          2  Oracle Net Services roundtrips to/from client
         14  sorts (memory)
          0  sorts (disk)
          1  rows processed
If consistent gets or physical reads are high relative to the amount of data returned, the query is expensive and should be reviewed for optimization. For example, if you expect fewer than 1,000 rows back while consistent gets is 1,000,000 and physical reads is 10,000, further optimization is needed.
Note: you can also monitor disk reads and buffer gets using V$SQL or TKPROF.
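For example, the most expensive cursors by buffer gets can be pulled from V$SQL; a sketch (ROWNUM is used for portability across versions):

```sql
SELECT *
FROM  (SELECT sql_id, executions, disk_reads, buffer_gets
       FROM   v$sql
       ORDER  BY buffer_gets DESC)
WHERE  ROWNUM <= 10;
```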
Execution Plan Output in Earlier Databases
Execution Plan output from Oracle Database 9i Release 2 (9.2) or earlier is different.
Each line of the Execution Plan has a sequential line number. SQL*Plus also displays the line number of the parent operation.
The Execution Plan consists of four columns, displayed in the following order:
Column Name | Description
ID_PLUS_EXP | Shows the line number of each execution step.
PARENT_ID_PLUS_EXP | Shows the relationship between each step and its parent. This column is useful for large reports.
PLAN_PLUS_EXP | Shows each step of the report.
OBJECT_NODE_PLUS_EXP | Shows the database links or parallel query servers used.
The format of the columns may be altered with the COLUMN command. For example, to stop the PARENT_ID_PLUS_EXP column being displayed, enter:

COLUMN PARENT_ID_PLUS_EXP NOPRINT
The Execution Plan output is generated using the EXPLAIN PLAN command.
When you trace a statement in a parallel or distributed query, the Execution Plan shows the cost-based optimizer's estimates of the number of rows (the cardinality). In general, the cost, cardinality, and bytes at each node represent cumulative results. For example, the cost of a join node accounts not only for the cost of completing the join operation, but also for the entire cost of accessing the relations in that join.
Lines marked with an asterisk (*) denote a parallel or remote operation. Each such operation is explained in the second part of the report.
The second section of this report consists of three columns, displayed in the following order:

Column Name | Description
ID_PLUS_EXP | Shows the line number of each execution step.
OTHER_TAG_PLUS_EXP | Describes the function of the SQL statement in the OTHER_PLUS_EXP column.
OTHER_PLUS_EXP | Shows the text of the query for the parallel server or remote database.
The format of the columns may be altered with the COLUMN command.