This document explains what
end-to-end application tracing is, and how to generate and read trace files.
Overview of End-to-End Application Tracing
End-to-end
application tracing can identify the source of an excessive database workload,
such as a high load SQL statement, by client identifier, service, module,
action, session, instance, or an entire database.
In multitier
environments, the middle tier routes a request from an end client to different
database sessions, making it difficult to track a client across database
sessions.
End-to-end
application tracing is an infrastructure that uses a client ID to uniquely
trace a specific end-client through all tiers to the database and provides
information about the operation that an end client is performing in the
database.
Purpose of End-to-End Application Tracing
End-to-end
application tracing simplifies diagnosing performance problems in multitier
environments.
For example,
you can identify the source of an excessive database workload, such as a
high-load SQL statement, and contact the user responsible.
Also, a user having problems can contact you. You can then identify what this user session is doing at the PDB level.
End-to-end
application tracing also simplifies management of application workloads by
tracking specific modules and actions in a service.
The module
and action names are set by the application developer.
For example,
you would use the SET_MODULE and SET_ACTION procedures in
the DBMS_APPLICATION_INFO package to set these values in a
PL/SQL program.
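As a sketch, a PL/SQL application might register its module and action as follows (the module and action names here are illustrative, not part of this document's examples):

```sql
BEGIN
  -- Register the functional block the application is entering
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'PAYROLL',        -- illustrative module name
    action_name => 'INSERT ITEM');   -- illustrative action name

  -- ... application work for this action ...

  -- Change only the action when the module stays the same
  DBMS_APPLICATION_INFO.SET_ACTION(action_name => 'UPDATE ITEM');
END;
/
```

The registered values then appear in the MODULE and ACTION columns of V$SESSION and can be used as tracing and statistics-gathering criteria.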
End-to-end
application tracing can identify workload problems in a database for:
- Client identifier
Specifies an end user based on the
logon ID, such as HR.HR
- Service
Specifies a group of applications
with common attributes, service level thresholds, and priorities; or a single
application, such as ACCTG for an accounting application
- Module
Specifies a functional block, such as
Accounts Receivable or General Ledger, of an application
- Action
Specifies an action, such as
an INSERT or UPDATE operation, in a module
- Session
Specifies a session based on a given
database session identifier (SID), on the local instance
- Instance
Specifies a given database instance
based on the instance name
- Container
Specifies the container
End-to-End Application Tracing for PDBs
V$ views
enable read access to trace files that are specific to a container.
The primary
use cases are as follows:
- CDB administrators must view
trace files from a specific PDB.
The V$DIAG_TRACE_FILE view lists all trace files in
the ADR trace directory that contain trace data from a specific
PDB. V$DIAG_TRACE_FILE_CONTENTS displays the contents of the trace
files.
- PDB administrators must view
trace files from a specific PDB.
You can use SQL Trace to collect diagnostic data for the SQL
statements executing in a PDB application.
The trace data includes SQL tracing (event 10046) and
optimizer tracing (event 10053). Using V$ views, developers
can access only SQL or optimizer trace records without accessing the entire
trace file.
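As a sketch, a CDB administrator might locate and read trace data for one PDB with queries along these lines (the PDB name HRPDB and trace file name are illustrative; verify column names against the V$ view reference for your release):

```sql
-- List trace files that contain data from a specific PDB
SELECT trace_filename
FROM   v$diag_trace_file
WHERE  con_id = (SELECT con_id FROM v$pdbs WHERE name = 'HRPDB');

-- Display the contents of one of those trace files
SELECT payload
FROM   v$diag_trace_file_contents
WHERE  trace_filename = 'hrpdb_ora_1234.trc'   -- illustrative file name
ORDER  BY line_number;
```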
To enable
users and tools to determine which PDB is associated with a file or a part of a
file, PDB annotations exist in trace files, incident dumps, and log files.
The PDB
information is part of the structured metadata that is stored in the .trm file
for each trace file. Each record captures the following attributes:
- CON_ID, which is the ID of the
container associated with the data
- CON_UID, which is the unique ID of the
container
- NAME, which is the name of the
container
Tools for End-to-End Application Tracing
The SQL
Trace facility and TKPROF are two basic performance diagnostic tools
that can help you accurately assess the efficiency of the SQL statements an
application runs.
For best
results, use these tools with EXPLAIN PLAN rather than
using EXPLAIN PLAN alone. After tracing information is written to
files, you can consolidate this data with the TRCSESS utility, and then
diagnose it with TKPROF or SQL Trace.
The recommended interface for end-to-end application tracing is Oracle Enterprise Manager Cloud Control (Cloud Control).
Using Cloud
Control, you can view the top consumers for each consumer type, and enable or
disable statistics gathering and SQL tracing for specific consumers. If Cloud
Control is unavailable, then you can manage this feature using the DBMS_MONITOR APIs.
Overview of the SQL Trace Facility
The SQL
Trace facility provides performance information on individual SQL statements.
SQL Trace
generates the following statistics for each statement:
- Parse, execute, and fetch counts
- CPU and elapsed times
- Physical reads and logical reads
- Number of rows processed
- Misses on the library cache
- User name under which each parse
occurred
- Each commit and rollback
- Wait event data for each SQL
statement, and a summary for each trace file
If the
cursor for the SQL statement is closed, then SQL Trace also provides row source
information that includes:
- Row operations showing the
actual execution plan of each SQL statement
- Number of rows, number of
consistent reads, number of physical reads, number of physical writes, and
time elapsed for each operation on a row
Although you can enable the SQL Trace facility for a
session or an instance, Oracle recommends that you use the DBMS_SESSION or DBMS_MONITOR packages
instead.
When the SQL
Trace facility is enabled for a session or for an instance, performance
statistics for all SQL statements executed in a user session or in the instance
are placed into trace files.
Using the
SQL Trace facility can affect performance and may result in increased system
overhead, excessive CPU usage, and inadequate disk space.
The TRCSESS
command-line utility consolidates tracing information from several trace files
based on specific criteria, such as session or client ID.
TRCSESS
The TRCSESS
utility consolidates trace output from selected trace files based on
user-specified criteria.
After TRCSESS
merges the trace information into a single output file, TKPROF can
process the output file.
TRCSESS is
useful for consolidating the tracing of a particular session for performance or
debugging purposes.
Tracing a
specific session is usually not a problem in the dedicated server model because
one process serves a session during its lifetime.
You can see
the trace information for the session from the trace file belonging to the
server process.
However, in
a shared server configuration, a user session is serviced by different
processes over time.
The trace
for the user session is scattered across different trace files belonging to
different processes, which makes it difficult to get a complete picture of the
life cycle of a session.
You must
specify one of the session, clientid, service, action,
or module options.
If you
specify multiple options, then TRCSESS consolidates all trace files that
satisfy the specified criteria into the output file.
Syntax
trcsess [output=output_file_name]
[session=session_id]
[clientid=client_id]
[service=service_name]
[action=action_name]
[module=module_name]
[trace_files]
TRCSESS
supports a number of command-line options.
Argument |
Description |
output |
Specifies the file where the output is generated.
If this option is not specified, then the utility writes to standard output. |
session |
Consolidates the trace information for the session
specified. The session identifier is a combination of session index and
session serial number, such as 21.2371. You can locate these values in
the V$SESSION view. |
clientid |
Consolidates the trace information for the
specified client ID. |
service |
Consolidates the trace information for the
specified service name. |
action |
Consolidates the trace information for the
specified action name. |
module |
Consolidates the trace information for the
specified module name. |
trace_files |
Lists the trace file names, separated by spaces, in
which TRCSESS should look for trace information. You can use the wildcard
character (*) to specify the trace file names. If you do not specify trace
files, then TRCSESS uses all files in the current directory as input. |
This section
demonstrates common TRCSESS use cases.
Example: Tracing a Single Session
This sample output of TRCSESS shows the consolidation of traces for a particular session. In this example, the session index and serial number is 21.2371. All files in the current directory are taken as input.

trcsess session=21.2371
Example: Specifying Multiple Trace Files
The
following example specifies two trace files:
trcsess session=21.2371 main_12359.trc main_12995.trc
The sample
output is similar to the following:
[PROCESS ID = 12359]
*** 2025-04-02 09:48:28.376
PARSING IN CURSOR #1 len=17 dep=0 uid=27 oct=3 lid=27 tim=868373970961 hv=887450622 ad='22683fb4'
select * from cat
END OF STMT
PARSE #1:c=0,e=339,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=868373970944
EXEC #1:c=0,e=221,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=868373971411
FETCH #1:c=0,e=791,p=0,cr=7,cu=0,mis=0,r=1,dep=0,og=4,tim=868373972435
FETCH #1:c=0,e=1486,p=0,cr=20,cu=0,mis=0,r=6,dep=0,og=4,tim=868373986238
*** 2014-04-02 10:03:58.058
XCTEND rlbk=0, rd_only=1
STAT #1 id=1 cnt=7 pid=0 pos=1 obj=0 op='FILTER '
STAT #1 id=2 cnt=7 pid=1 pos=1 obj=18 op='TABLE ACCESS BY INDEX ROWID OBJ$ '
STAT #1 id=3 cnt=7 pid=2 pos=1 obj=37 op='INDEX RANGE SCAN I_OBJ2 '
STAT #1 id=4 cnt=0 pid=1 pos=2 obj=4 op='TABLE ACCESS CLUSTER TAB$J2 '
STAT #1 id=5 cnt=6 pid=4 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# '
[PROCESS ID=12995]
*** 2014-04-02 10:04:32.738
Archiving is disabled
TKPROF
To format
the contents of the trace file and place the output into a readable output
file, run the TKPROF program.
TKPROF can
also do the following:
- Create a SQL script that stores
the statistics in the database
- Determine the execution plans of
SQL statements
TKPROF
reports each statement executed with the resources it has consumed, the number
of times it was called, and the number of rows which it processed.
This
information enables you to locate those statements that are using the greatest
resource. With baselines available, you can assess whether the resources used
are reasonable given the work done.
Enabling Statistics Gathering for End-to-End Tracing
To gather
the appropriate statistics using PL/SQL, you must enable statistics gathering
for client identifier, service, module, or action using procedures
in DBMS_MONITOR.
The default level is session-level statistics gathering. Statistics gathering is global for the database and continues after a database instance is restarted.
Enabling Statistics Gathering for a Client ID
The
procedure CLIENT_ID_STAT_ENABLE enables statistics gathering for a
given client ID, whereas the
procedure CLIENT_ID_STAT_DISABLE disables it.
You can view
client identifiers in the CLIENT_IDENTIFIER column in V$SESSION.
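An application typically tags its sessions with a client identifier before that identifier can be used as a tracing or statistics criterion. A minimal sketch, using the standard DBMS_SESSION procedure:

```sql
-- Set the client identifier for the current session
BEGIN
  DBMS_SESSION.SET_IDENTIFIER('OE.OE');
END;
/

-- Verify which sessions carry that identifier
SELECT sid, serial#, client_identifier
FROM   v$session
WHERE  client_identifier = 'OE.OE';
```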
Assumptions
This
tutorial assumes that you want to enable and then disable statistics gathering
for the client with the ID oe.oe.
To enable
and disable statistics gathering for a client identifier:
- Start SQL*Plus, and then connect
to the database with the appropriate privileges.
- Enable statistics gathering
for oe.oe.
For example,
run the following command:
EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_ENABLE(client_id => 'OE.OE');
- Disable statistics gathering
for oe.oe.
For example,
run the following command:
EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_DISABLE(client_id => 'OE.OE');
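While statistics gathering is enabled, the accumulated client-level statistics can be inspected in dictionary and V$ views; a sketch (view names per the database reference, worth verifying for your release):

```sql
-- Confirm which aggregations are currently enabled
SELECT aggregation_type, primary_id
FROM   dba_enabled_aggregations;

-- View statistics accumulated for the client identifier
SELECT stat_name, value
FROM   v$client_stats
WHERE  client_identifier = 'OE.OE';
```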
Enabling Statistics Gathering for Services, Modules, and Actions
The procedure SERV_MOD_ACT_STAT_ENABLE enables statistics gathering for a combination of service, module, and action, whereas the procedure SERV_MOD_ACT_STAT_DISABLE disables it.
When you
change the module or action using the
preceding DBMS_MONITOR procedures, the change takes effect when the
next user call is executed in the session.
For example,
if a module is set to module1 in a session, and if the module is
reset to module2 in a user call in the session, then the module
remains module1 during this user call. The module is changed
to module2 in the next user call in the session.
Assumptions
This
tutorial assumes that you want to gather statistics as follows:
- For the ACCTG service
- For all actions in
the PAYROLL module
- For the INSERT
ITEM action within the GLEDGER module
To enable
and disable statistics gathering for a service, module, and action:
- Start SQL*Plus, and then connect
to the database with the appropriate privileges.
- Enable statistics gathering for
the desired service, module, and action.
For example,
run the following commands:
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(
    service_name => 'ACCTG',
    module_name  => 'PAYROLL');
END;

BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(
    service_name => 'ACCTG',
    module_name  => 'GLEDGER',
    action_name  => 'INSERT ITEM');
END;
- Disable statistics gathering for the previously specified combination of service, module, and action.
For example,
run the following command:
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_STAT_DISABLE(
    service_name => 'ACCTG',
    module_name  => 'GLEDGER',
    action_name  => 'INSERT ITEM');
END;
Enabling End-to-End Application Tracing
To enable tracing for a client identifier, service, module, action, session, instance, or database, execute the appropriate procedures in the DBMS_MONITOR package.
With the
criteria that you provide, specific trace information is captured in a set of
trace files and combined into a single output trace file.
You can enable tracing for specific diagnosis and workload management by the criteria described in the following sections.
Enabling Tracing for a Client Identifier
To enable
tracing globally for the database for a specified client identifier, use
the DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE procedure.
The CLIENT_ID_TRACE_DISABLE procedure
disables tracing globally for the database for a given client identifier.
Assumptions
This
tutorial assumes the following:
- OE.OE is the client
identifier for which SQL tracing is to be enabled.
- You want to include wait
information in the trace.
- You want to exclude bind
information from the trace.
To enable
and disable tracing for a client identifier:
- Start SQL*Plus, and then connect
to the database with the appropriate privileges.
- Enable tracing for the client.
For example,
execute the following program:
BEGIN
  DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(
    client_id => 'OE.OE',
    waits     => true,
    binds     => false);
END;
- Disable tracing for the client.
For example,
execute the following command:
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => 'OE.OE');
Enabling Tracing for a Service, Module, and Action
The DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE procedure
enables SQL tracing for a specified combination of service name, module, and
action globally for a database, unless the procedure specifies a database
instance name.
The SERV_MOD_ACT_TRACE_DISABLE procedure
disables the trace at all enabled instances for a given combination of service
name, module, and action name globally.
Assumptions
This
tutorial assumes the following:
- You want to enable tracing for
the service ACCTG.
- You want to enable tracing for
all actions for the combination of the ACCTG service and
the PAYROLL module.
- You want to include wait
information in the trace.
- You want to exclude bind
information from the trace.
- You want to enable tracing only
for the inst1 instance.
To enable
and disable tracing for a service, module, and action:
- Start SQL*Plus, and then connect
to the database with the appropriate privileges.
- Enable tracing for the service,
module, and actions.
For example,
execute the following command:
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name  => 'ACCTG',
    module_name   => 'PAYROLL',
    waits         => true,
    binds         => false,
    instance_name => 'inst1');
END;
- Disable tracing for the service,
module, and actions.
For example,
execute the following command:
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(
    service_name  => 'ACCTG',
    module_name   => 'PAYROLL',
    instance_name => 'inst1');
END;
Enabling Tracing for a Session
The SESSION_TRACE_ENABLE procedure
enables the trace for a given database session identifier (SID) on the local
instance.
Whereas
the DBMS_MONITOR package can only be invoked by a user with the DBA
role, users can also enable SQL tracing for their own session by invoking
the DBMS_SESSION.SESSION_TRACE_ENABLE procedure, as in the following
example:
BEGIN
  DBMS_SESSION.SESSION_TRACE_ENABLE(waits => true, binds => false);
END;
Assumptions
This
tutorial assumes the following:
- You want to log in to the
database with administrator privileges.
- User OE has one active
session.
- You want to temporarily enable
tracing for the OE session.
- You want to include wait
information in the trace.
- You want to exclude bind
information from the trace.
To enable
and disable tracing for a session:
- Start SQL*Plus, and then log in to the database with administrator privileges.
- Determine the session ID and
serial number values for the session to trace.
For example,
query V$SESSION as follows:
SELECT SID, SERIAL#, USERNAME
FROM V$SESSION
WHERE USERNAME = 'OE';

       SID    SERIAL# USERNAME
---------- ---------- ------------------------------
        27         60 OE
- Use the values from the
preceding step to enable tracing for a specific session.
For example,
execute the following program to enable tracing for the OE session,
where the true argument includes wait information in the trace and
the false argument excludes bind information from the trace:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 27,
    serial_num => 60,
    waits      => true,
    binds      => false);
END;
- Disable tracing for the session.
The SESSION_TRACE_DISABLE procedure
disables the trace for a given database session identifier (SID) and serial
number. For example:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 27, serial_num => 60);
END;
Generating Output Files Using SQL Trace and TKPROF
The
procedure for generating output files is as follows:
- Set initialization parameters
for trace file management.
- Enable the SQL Trace facility
for the desired session, and run the application. This step produces a
trace file containing statistics for the SQL statements issued by the
application.
- Run TKPROF to translate the
trace file created in Step 2 into a readable output file. This step can
optionally create a SQL script that you can use to store the statistics in
a database.
- Optionally, run the SQL script
produced in Step 3 to store the statistics in the database.
Step 1: Setting Initialization Parameters for Trace File Management
To enable
trace files, you must ensure that specific initialization parameters are set.
When the SQL
Trace facility is enabled for a session, Oracle Database generates a trace file
containing statistics for traced SQL statements for that session.
When the SQL
Trace facility is enabled for an instance, Oracle Database creates a separate
trace file for each process.
To set
initialization parameters for trace file management:
- Check the settings of
the TIMED_STATISTICS, MAX_DUMP_FILE_SIZE,
and DIAGNOSTIC_DEST initialization parameters, as indicated in
below:
Parameter |
Description |
DIAGNOSTIC_DEST |
Specifies the location of the Automatic Diagnostic
Repository (ADR) Home. The diagnostic files for each database instance are
located in this dedicated directory. |
MAX_DUMP_FILE_SIZE |
When the SQL Trace facility is enabled at the
database instance level, every call to the database writes a text line in a
file in the operating system's file format. The maximum size of these files
in operating system blocks is limited by this initialization parameter. The
default is 32M on Oracle Database Free, and 1G on all other Oracle Database
offerings. |
TIMED_STATISTICS |
Enables and disables the collection of timed
statistics, such as CPU and elapsed times, by the SQL Trace facility, and the
collection of various statistics in the V$ views. If STATISTICS_LEVEL is set
to TYPICAL or ALL, then the default value
of TIMED_STATISTICS is true. If STATISTICS_LEVEL is
set to BASIC, then the default value
of TIMED_STATISTICS is false. Enabling timing causes extra timing calls for
low-level operations. This is a dynamic parameter. It is also a session
parameter. |
- Devise a way of recognizing the resulting trace file.
Be sure you know how to distinguish the trace files by name. You can tag trace files by including in your programs a statement such as SELECT 'program_name' FROM DUAL. You can then trace each file back to the process that created it.
You can also
set the TRACEFILE_IDENTIFIER initialization parameter to specify a
custom identifier that becomes part of the trace file name. For example, you
can add my_trace_id to subsequent trace file names for easy
identification with the following:
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'my_trace_id';
- If the operating system retains
multiple versions of files, then ensure that the version limit is high
enough to accommodate the number of trace files you expect the SQL Trace
facility to generate.
- If the generated trace files can
be owned by an operating system user other than yourself, then ensure that
you have the necessary permissions to use TKPROF to format them.
Step 2: Enabling the SQL Trace Facility
You can
enable the SQL Trace facility at the instance or session level.
The package
to use depends on the level:
- Database instance
Use DBMS_MONITOR.DATABASE_TRACE_ENABLE procedure
to enable tracing, and DBMS_MONITOR.DATABASE_TRACE_DISABLE procedure
to disable tracing.
- Database session
Use DBMS_SESSION.SET_SQL_TRACE procedure
to enable tracing (true) or disable tracing (false).
Note
Because
running the SQL Trace facility increases system overhead, enable it only when
tuning SQL statements, and disable it when you are finished.
To enable
and disable tracing at the database instance level:
- Start SQL*Plus, and connect to
the database with administrator privileges.
- Enable tracing at the database
instance level.
The
following example enables tracing for the orcl instance:
EXEC DBMS_MONITOR.DATABASE_TRACE_ENABLE(INSTANCE_NAME => 'orcl');
- Execute the statements to be
traced.
- Disable tracing for the database
instance.
The
following example disables tracing for the orcl instance:
EXEC DBMS_MONITOR.DATABASE_TRACE_DISABLE(INSTANCE_NAME => 'orcl');
To enable
and disable tracing at the session level:
- Start SQL*Plus, and connect to
the database with the desired credentials.
- Enable tracing for the current
session.
The
following example enables tracing for the current session:
EXEC DBMS_SESSION.SET_SQL_TRACE(sql_trace => true);
- Execute the statements to be
traced.
- Disable tracing for the current
session.
The
following example disables tracing for the current session:
EXEC DBMS_SESSION.SET_SQL_TRACE(sql_trace => false);
Step 3: Generating Output Files with TKPROF
TKPROF
accepts as input a trace file produced by the SQL Trace facility, and it
produces a formatted output file. TKPROF can also generate execution plans.
After the
SQL Trace facility has generated trace files, you can:
- Run TKPROF on each individual
trace file, producing several formatted output files, one for each
session.
- Concatenate the trace files, and
then run TKPROF on the result to produce a formatted output file for the
entire instance.
- Run the TRCSESS command-line
utility to consolidate tracing information from several trace files, then
run TKPROF on the result.
TKPROF does
not report COMMIT and ROLLBACK statements recorded in the
trace file.
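A sample TKPROF invocation may help; the trace and output file names below are illustrative, while EXPLAIN, SYS, and SORT are standard TKPROF command-line parameters (connect credentials for EXPLAIN are hypothetical placeholders):

```shell
# Format a trace file, suppress recursive SYS statements,
# sort statements by elapsed parse/execute/fetch time,
# and generate execution plans by connecting as HR
tkprof orcl_ora_12345.trc output.prf \
  EXPLAIN=hr/hr_password SYS=NO SORT='(PRSELA,EXEELA,FCHELA)'
```

The formatted report is then read from output.prf.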
Note
The
following SQL statements are truncated to 25 characters in the SQL Trace file:
SET ROLE
GRANT
ALTER USER
ALTER ROLE
CREATE USER
CREATE ROLE
Example: TKPROF Output
SELECT * FROM emp, dept WHERE emp.deptno = dept.deptno;

call     count  cpu   elapsed  disk  query  current  rows
-------  -----  ----  -------  ----  -----  -------  ----
Parse        1  0.16     0.29     3     13        0     0
Execute      1  0.00     0.00     0      0        0     0
Fetch        1  0.03     0.26     2      2        4    14

Misses in library cache during parse: 1
Parsing user id: (8) SCOTT

Rows     Execution Plan
-------  ---------------------------------------------------
     14  MERGE JOIN
      4    SORT JOIN
      4      TABLE ACCESS (FULL) OF 'DEPT'
     14    SORT JOIN
     14      TABLE ACCESS (FULL) OF 'EMP'
For this
statement, TKPROF output includes the following information:
- The text of the SQL statement
- The SQL Trace statistics in
tabular form
- The number of library cache
misses for the parsing and execution of the statement.
- The user initially parsing the
statement.
- The execution plan generated
by EXPLAIN PLAN.
TKPROF also
provides a summary of user level statements and recursive SQL calls for the
trace file.
Step 4: Storing SQL Trace Facility Statistics
You might
want to keep a history of the statistics generated by the SQL Trace facility
for an application, and compare them over time.
TKPROF can
generate a SQL script that creates a table and inserts rows of statistics into
it. This script contains the following:
- A CREATE
TABLE statement that creates an output table named TKPROF_TABLE.
- INSERT statements that add
rows of statistics, one for each traced SQL statement,
to TKPROF_TABLE.
After
running TKPROF, run this script to store the statistics in the database.
Generating the TKPROF Output SQL Script
When you
run TKPROF, use the INSERT parameter to specify the name of the
generated SQL script.
If you omit
the INSERT parameter, then TKPROF does not generate a
script.
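For instance (file names here are illustrative), the INSERT parameter is supplied on the TKPROF command line:

```shell
# Format the trace file and also generate a SQL script
# that creates TKPROF_TABLE and inserts the statistics
tkprof orcl_ora_12345.trc output.prf INSERT=tkprof_stats.sql
```

Running the generated tkprof_stats.sql script in SQL*Plus then stores the statistics in the database.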
Editing the TKPROF Output SQL Script
After TKPROF has
created the SQL script, you might want to edit the script before running it.
If you have
created an output table for previously collected statistics, and if you want to
add new statistics to this table, then remove the CREATE
TABLE statement from the script.
The script
then inserts the new rows into the existing table. If you have created multiple
output tables, perhaps to store statistics from different databases in
different tables, then edit the CREATE
TABLE and INSERT statements to change the name of the output
table.
After you
have created the output table, query using a SELECT statement.
The
following CREATE TABLE statement creates the TKPROF_TABLE:
CREATE TABLE
TKPROF_TABLE (
DATE_OF_INSERT DATE,
CURSOR_NUM NUMBER,
DEPTH NUMBER,
USER_ID NUMBER,
PARSE_CNT NUMBER,
PARSE_CPU NUMBER,
PARSE_ELAP NUMBER,
PARSE_DISK NUMBER,
PARSE_QUERY NUMBER,
PARSE_CURRENT NUMBER,
PARSE_MISS NUMBER,
EXE_COUNT NUMBER,
EXE_CPU NUMBER,
EXE_ELAP NUMBER,
EXE_DISK NUMBER,
EXE_QUERY NUMBER,
EXE_CURRENT NUMBER,
EXE_MISS NUMBER,
EXE_ROWS NUMBER,
FETCH_COUNT NUMBER,
FETCH_CPU NUMBER,
FETCH_ELAP NUMBER,
FETCH_DISK NUMBER,
FETCH_QUERY NUMBER,
FETCH_CURRENT NUMBER,
FETCH_ROWS NUMBER,
CLOCK_TICKS NUMBER,
SQL_STATEMENT LONG);
Most output
table columns correspond directly to the statistics that appear in the
formatted output file. For example, the PARSE_CNT column value
corresponds to the count statistic for the parse step in the output file.
The columns in the following table help you identify a row of statistics.
TKPROF_TABLE Columns for Identifying a Row of Statistics
Column |
Description |
SQL_STATEMENT |
This is the SQL statement for which the SQL Trace
facility collected the row of statistics. Because this column has data
type LONG, you cannot use it in expressions or WHERE clause
conditions. |
DATE_OF_INSERT |
This is the date and time when the row was inserted
into the table. This value is different from the time when the SQL Trace
facility collected the statistics. |
DEPTH |
This indicates the level of recursion at which the
SQL statement was issued. For example, a value of 0 indicates that a user
issued the statement. A value of 1 indicates that Oracle Database generated
the statement as a recursive call to process a statement with a value of 0 (a
statement issued by a user). A value of n indicates that
Oracle Database generated the statement as a recursive call to process a
statement with a value of n-1. |
USER_ID |
This identifies the user issuing the statement.
This value also appears in the formatted output file. |
CURSOR_NUM |
Oracle Database uses this column value to keep
track of the cursor to which each SQL statement was assigned. |
The output
table does not store the statement's execution plan. The following query
returns the statistics from the output table.
SELECT * FROM TKPROF_TABLE;
Sample
output appears as follows:
DATE_OF_INSERT CURSOR_NUM DEPTH USER_ID PARSE_CNT PARSE_CPU PARSE_ELAP
-------------- ---------- ----- ------- --------- --------- ----------
21-DEC-2017             1     0       8         1        16         22

PARSE_DISK PARSE_QUERY PARSE_CURRENT PARSE_MISS EXE_COUNT EXE_CPU
---------- ----------- ------------- ---------- --------- -------
         3          11             0          1         1       0

EXE_ELAP EXE_DISK EXE_QUERY EXE_CURRENT EXE_MISS EXE_ROWS FETCH_COUNT
-------- -------- --------- ----------- -------- -------- -----------
       0        0         0           0        0        0           1

FETCH_CPU FETCH_ELAP FETCH_DISK FETCH_QUERY FETCH_CURRENT FETCH_ROWS
--------- ---------- ---------- ----------- ------------- ----------
        2         20          2           2             4         10

SQL_STATEMENT
---------------------------------------------------------------------
SELECT * FROM EMP, DEPT WHERE EMP.DEPTNO = DEPT.DEPTNO
Guidelines for Interpreting TKPROF Output
While TKPROF provides
a useful analysis, the most accurate measure of efficiency is the performance
of the application. At the end of the TKPROF output is a summary of
the work that the process performed during the period that the trace was
running.
Guideline for Interpreting the Resolution of Statistics
Timing
statistics have a resolution of one hundredth of a second. Therefore, any
operation on a cursor that takes a hundredth of a second or less might not be
timed accurately.
Keep the
time limitation in mind when interpreting statistics. In particular, be careful
when interpreting the results from simple queries that execute very quickly.
Guideline for Recursive SQL Statements
Recursive
SQL is additional SQL that Oracle Database must issue to execute a SQL
statement issued by a user.
Conceptually,
recursive SQL is “side-effect SQL.” For example, if a session inserts a row
into a table that has insufficient space to hold that row, then the database
makes recursive SQL calls to allocate the space dynamically.
The database
also generates recursive calls when data dictionary information is not
available in memory and so must be retrieved from disk.
If recursive
calls occur while the SQL Trace facility is enabled,
then TKPROF produces statistics for the recursive SQL statements and
marks them clearly as recursive SQL statements in the output file.
You can suppress the listing of Oracle
Database internal recursive calls (for example, space management) in the output
file by setting the SYS command-line parameter to NO.
The
statistics for a recursive SQL statement are included in the listing for that
statement, not in the listing for the SQL statement that caused the recursive
call.
So, when you
are calculating the total resources required to process a SQL statement,
consider the statistics for that statement and those for recursive calls caused
by that statement.
Note: Recursive SQL statistics are not
included for SQL-level operations.
Guideline for Deciding Which Statements to Tune
You must
determine which SQL statements use the most CPU or disk resource.
If
the TIMED_STATISTICS parameter is enabled, then you can find high CPU
activity in the CPU column. If TIMED_STATISTICS is not
enabled, then check the QUERY and CURRENT columns.
With the
exception of locking problems and inefficient PL/SQL loops, neither the CPU
time nor the elapsed time is necessary to find problem statements.
The key is
the number of block visits, both query (that is, subject to read consistency)
and current (that is, not subject to read consistency).
Segment
headers and blocks that are going to be updated are acquired in current mode,
but all query and subquery processing requests the data in query mode.
These are
precisely the same measures as the instance statistics CONSISTENT
GETS and DB BLOCK GETS. You can find high disk activity in
the disk column.
The following listing shows TKPROF output for one SQL statement as it appears in the output file:

SELECT * FROM emp, dept
WHERE emp.deptno = dept.deptno;

call     count      cpu    elapsed     disk    query  current     rows
-------  ------  -------  ---------  -------  -------  -------  ------
Parse        11     0.08       0.18        0        0        0       0
Execute      11     0.23       0.66        0        3        6       0
Fetch        35     6.70       6.83      100    12326        2     824
-------  ------  -------  ---------  -------  -------  -------  ------
total        57     7.01       7.67      100    12329        8     826

Misses in library cache during parse: 0
If it is
acceptable to have 7.01 CPU seconds and to retrieve 824 rows, then you need not
look any further at this trace output. In fact, a major use
of TKPROF reports in a tuning exercise is to eliminate processes from
the detailed tuning phase.
The output indicates that 10 unnecessary parse calls were made (because 11 parse calls exist for this single statement) and that array fetch operations were performed: more rows were fetched than fetches were performed.
A large gap between CPU and elapsed timings indicates time spent waiting, typically on physical I/O.
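The three observations above can be expressed as simple arithmetic on the figures in the listing (a sketch only; the thresholds and variable names are illustrative):

```python
# Reading the TKPROF listing programmatically: the same checks the
# text applies by eye, using the numbers from the output above.

parse_count, execute_count = 11, 11
fetch_count, rows_fetched = 35, 824
total_cpu, total_elapsed = 7.01, 7.67

# 11 parses for 11 executions: every execution reparsed the statement,
# so 10 of the parse calls were avoidable (for example, via cursor reuse).
unnecessary_parses = parse_count - 1
print(unnecessary_parses)           # 10

# More rows than fetch calls means array (batched) fetching was used.
rows_per_fetch = rows_fetched / fetch_count
print(rows_per_fetch > 1)           # True: roughly 23.5 rows per fetch

# The CPU-to-elapsed gap is time spent waiting, typically on physical I/O.
wait_time = total_elapsed - total_cpu
print(round(wait_time, 2))          # 0.66
```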
Guidelines for Avoiding Traps in TKPROF Interpretation
When
interpreting TKPROF output, it helps to be aware of common traps.
Guideline for Avoiding the Argument Trap
If you are
not aware of the values being bound at run time, then it is possible to fall
into the argument trap.
EXPLAIN
PLAN cannot determine the type of a bind variable from the text of SQL
statements, and it always assumes that the type is VARCHAR.
If the bind variable is actually a number or a date, then the database may perform implicit data conversions, which can cause inefficient plans to be executed. To avoid this situation, experiment with different data types in the query, and perform the conversion yourself.
Guideline for Avoiding the Read Consistency Trap
Without
knowing that an uncommitted transaction had made a series of updates to a
column, it is difficult to see why so many block visits would be incurred.
Such cases
are not normally repeatable. If the process were run again, it is unlikely that
another transaction would interact with it in the same way.
SELECT name_id
FROM cq_names
WHERE name = 'FLOOR';

call     count      cpu    elapsed     disk    query  current     rows
-------  ------  -------  ---------  -------  -------  -------  ------
Parse         1     0.10       0.18        0        0        0       0
Execute       1     0.00       0.00        0        0        0       0
Fetch         1     0.11       0.21        2      101        0       1

Misses in library cache during parse: 1
Parsing user id: 01 (USER1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT
      1    TABLE ACCESS (BY ROWID) OF 'CQ_NAMES'
      2      INDEX (RANGE SCAN) OF 'CQ_NAMES_NAME' (NON-UNIQUE)
Guideline for Avoiding the Schema Trap
In some
cases, an apparently straightforward indexed query looks at many database
blocks and accesses them in current mode.
The
following example shows an extreme (and thus easily detected) example of the
schema trap:
SELECT name_id
FROM cq_names
WHERE name = 'FLOOR';

call     count      cpu    elapsed     disk    query  current     rows
-------  ------  -------  ---------  -------  -------  -------  ------
Parse         1     0.06       0.10        0        0        0       0
Execute       1     0.02       0.02        0        0        0       0
Fetch         1     0.23       0.30       31       31        3       1

Misses in library cache during parse: 0
Parsing user id: 02 (USER2)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT
   2340    TABLE ACCESS (BY ROWID) OF 'CQ_NAMES'
      0      INDEX (RANGE SCAN) OF 'CQ_NAMES_NAME' (NON-UNIQUE)
Two
statistics suggest that the query might have been executed with a full table
scan: the current mode block visits, and the number of rows originating from
the Table Access row source in the plan.
The
explanation is that the required index was built after the trace file had been
produced, but before TKPROF had been run. Generating a new trace file
gives the following data:
SELECT name_id
FROM cq_names
WHERE name = 'FLOOR';

call     count      cpu    elapsed     disk    query  current     rows
-------  ------  -------  ---------  -------  -------  -------  ------
Parse         1     0.01       0.02        0        0        0       0
Execute       1     0.00       0.00        0        0        0       0
Fetch         1     0.00       0.00        0        2        0       1

Misses in library cache during parse: 0
Parsing user id: 02 (USER2)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT
      1    TABLE ACCESS (BY ROWID) OF 'CQ_NAMES'
      2      INDEX (RANGE SCAN) OF 'CQ_NAMES_NAME' (NON-UNIQUE)
In the
correct version, the parse call took 10 milliseconds of CPU time and 20
milliseconds of elapsed time, but the query apparently took no time to execute
and perform the fetch.
These
anomalies arise because the clock tick of 10 milliseconds is too long relative
to the time taken to execute and fetch the data.
In such cases, it is important to get many
executions of the statements, so that you have statistically valid numbers.
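The quantization effect described above can be sketched as follows (an illustrative model, not the actual trace facility; the 10-millisecond tick matches the example above, and the 2 ms operation time is made up):

```python
# Sketch of the timing anomaly: statistics are recorded in whole clock
# ticks, so operations shorter than one tick report zero time, and only
# aggregating many executions yields statistically valid numbers.

TICK = 0.01  # 10-millisecond clock tick

def recorded(seconds):
    """Time as the trace facility would record it: whole ticks only."""
    return int(seconds / TICK) * TICK

# A 2 ms fetch appears to take no time at all:
print(recorded(0.002))                              # 0.0

# 1000 such fetches, each timed individually, still sum to zero...
print(sum(recorded(0.002) for _ in range(1000)))    # 0.0

# ...but timing all 1000 executions together recovers the real cost:
print(round(recorded(0.002 * 1000), 2))             # 2.0
```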
Guideline for Avoiding the Time Trap
In some
cases, a query takes an inexplicably long time.
For example,
the following update of 7 rows executes in 19 seconds:
UPDATE cq_names
SET attributes = lower(attributes)
WHERE attributes = :att;

call     count      cpu    elapsed     disk    query  current     rows
-------  ------  -------  ---------  -------  -------  -------  ------
Parse         1     0.06       0.24        0        0        0       0
Execute       1     0.62      19.62       22      526       12       7
Fetch         0     0.00       0.00        0        0        0       0

Misses in library cache during parse: 1
Parsing user id: 02 (USER2)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT
   2519    TABLE ACCESS (FULL) OF 'CQ_NAMES'
The
explanation is interference from another transaction. In this case, another
transaction held a shared lock on the table cq_names for several
seconds before and after the update was issued.
It takes experience to diagnose that
interference effects are occurring.
Comparative data is essential when the interference is contributing only a short delay (or a small increase in block visits, as in the previous example). However, if the interference adds only modest overhead, and the statement is essentially efficient, then its statistics may not require analysis.