Did you know that Oracle (all versions) has a useful feature for pre-warming the library cache before switchover and failover operations, which reduces the time the database needs to prepare for service?
Alireza Kamrani
12/03/2024
After a failover or switchover, the Oracle database needs a certain amount of time to repopulate the library cache, and how long this takes depends on the database workload and the rate of incoming queries.
This feature lets you load the required information into the library cache in advance, reducing the time the database needs before it is ready for service.
Overview of Warming the Library Cache
The library cache includes the shared SQL areas, private SQL areas, PL/SQL procedures and packages, and control structures such as locks and library cache handles.

A shared SQL area contains the parse tree and execution plan for a single SQL statement or for similar SQL statements. Oracle saves memory by using one shared SQL area for multiple similar DML statements, particularly when many users execute the same application. A private SQL area contains data such as bind information and runtime buffers.

Oracle processes PL/SQL program units (procedures, functions, packages, anonymous blocks, and database triggers) much the same way it processes individual SQL statements. Oracle allocates a shared area to hold the parsed, compiled form of a program unit. Oracle allocates a private area to hold values specific to the session that executes the program unit, including local, global, and package variables and buffers for executing SQL.
Maintaining information about frequently executed SQL and PL/SQL statements in the library cache improves the performance of the Oracle database server. In an Oracle9i Real Application Clusters primary/secondary configuration, the library cache associated with the primary instance contains up-to-date information. If failover occurs, then the benefit of that information is lost unless the library cache on the secondary instance is populated before failover.
You can use the DBMS_LIBCACHE package to transfer the information in the library cache of the primary instance to the library cache of the secondary instance. This process is called warming the library cache. It improves performance immediately after failover because the new primary library cache does not need to be populated with parsed SQL statements and compiled PL/SQL units.
The following figure shows the library cache being compiled at the secondary instance, using SQL statements and PL/SQL units extracted from the primary instance.
Warming the Library Cache of the Secondary Instance
DBMS_LIBCACHE captures and compiles the part of selected cursors that can be shared. It selects cursors based on the amount of shared memory that is used and the frequency with which the associated SQL statements are used. It then populates the library cache on the secondary instance with the compiled cursors.
Execute the DBMS_LIBCACHE package on the secondary instance:
- As a regularly scheduled job
- Before executing a manual failover or switchover
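As a sketch of the scheduled-job option, the warming call can be wrapped in a DBMS_SCHEDULER job on the secondary instance. The job name, repeat interval, and database link name (PRIMARY_DB_LINK) below are illustrative assumptions, not part of the DBMS_LIBCACHE package:

```sql
-- Minimal sketch: run library cache warming hourly on the secondary
-- instance, assuming a database link PRIMARY_DB_LINK already exists
-- and points to the primary instance.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'WARM_LIBRARY_CACHE',          -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
                            DBMS_LIBCACHE.COMPILE_FROM_REMOTE (
                              p_db_link => 'PRIMARY_DB_LINK');
                          END;]',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=HOURLY;INTERVAL=1',      -- tune to your workload
    enabled         => TRUE);
END;
/
```

Scheduling the job regularly keeps the secondary's cache close to the primary's, so an unplanned failover also benefits, not only a planned switchover.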
DBMS_LIBCACHE
The DBMS_LIBCACHE package consists of one subprogram that prepares the library cache on an Oracle instance by extracting SQL and PL/SQL from a remote instance and compiling this SQL locally without execution.
Compiling the cache of an instance in advance prepares, before failover or switchover occurs, the information the application requires to execute.
This chapter contains the following topics:
- Overview
- Security Model
- Summary of DBMS_LIBCACHE Subprograms
DBMS_LIBCACHE Overview
Compiling a shared cursor consists of open, parse, and bind operations, plus the type-checking and execution plan functions performed at the first execution. All of these steps are executed in advance by the package DBMS_LIBCACHE for SELECT statements.
The open and parse functions are executed in advance for PL/SQL and DML. For PL/SQL, executing the parse phase has the effect of loading all library cache heaps other than the MCODE.
DBMS_LIBCACHE Security Model
To execute DBMS_LIBCACHE, you must be able to access directly the same objects that the SQL statements reference. The simplest way to accomplish this is to connect with the same user ID on the remote system as on the original system.
When there are multiple schema users, DBMS_LIBCACHE should be called for each.
Alternatively, DBMS_LIBCACHE may be called with the generic user PARSER. However, this user cannot parse SQL that uses objects with access granted through roles. This is a standard PL/SQL security limitation.
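When multiple schema users parse SQL on the primary, the call can be repeated once per schema, as the text above suggests. A minimal sketch follows; the link name (PRIMARY_DB_LINK) and schema names (APPS, GL) are illustrative assumptions:

```sql
-- Sketch: warm the local library cache once per parsing schema.
-- Each iteration extracts only the SQL parsed by that schema
-- on the source instance.
BEGIN
  FOR u IN (SELECT column_value AS schema_name
            FROM TABLE(sys.odcivarchar2list('APPS', 'GL'))) LOOP
    DBMS_LIBCACHE.COMPILE_FROM_REMOTE (
      p_db_link  => 'PRIMARY_DB_LINK',
      p_username => u.schema_name);
  END LOOP;
END;
/
```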
Summary of DBMS_LIBCACHE Subprograms
The DBMS_LIBCACHE package includes a single subprogram, the COMPILE_FROM_REMOTE procedure.
DBMS_LIBCACHE Package Subprograms
| Subprogram | Description |
| --- | --- |
| COMPILE_FROM_REMOTE Procedure | Extracts SQL in batch from the source instance and compiles the SQL at the target instance |
COMPILE_FROM_REMOTE Procedure
This procedure extracts SQL in batch from the source instance and compiles the SQL at the target instance.
Syntax
DBMS_LIBCACHE.COMPILE_FROM_REMOTE (
p_db_link IN dbms_libcache$def.db_link%type,
p_username IN VARCHAR2 default null,
p_threshold_executions IN NATURAL default 3,
p_threshold_sharable_mem IN NATURAL default 1000,
p_parallel_degree IN NATURAL default 1);
Parameters:
COMPILE_FROM_REMOTE Procedure
| Parameter | Description |
| --- | --- |
| p_db_link | Database link to the source instance (mandatory). The database link points to the instance from which the SQL statements are extracted. The user must have the role SELECT_ON_CATALOG at the source instance. For improved security, the connection may use a password file or LDAP authentication. The database link is mandatory only for releases with dbms_libcache$def.ACCESS_METHOD = DB_LINK_METHOD. |
| p_instance_name | (Reserved for future use.) The name of the instance from which the SQL statements are extracted. The instance name must be unique among all instances excluding the local instance. The name is not case sensitive. |
| p_username | Source username (default is all users). The username used to select the SQL statements for extraction. This optional parameter ensures that the parsing user ID is the same as the one on the source instance. For an application where users connect as a single user ID, for example APPS, APPS is the parsing user ID recorded in the shared pool. To select only SQL statements parsed by APPS, enter the string 'APPS' in this field. To also select statements executed by batch, repeat the procedure with the schema owner, for example GL. If a username is supplied, it must be valid. The name is not case sensitive. |
| p_threshold_executions | The lower bound for the number of executions; a SQL statement executed fewer times than this is not selected for parsing. This optional parameter lets the application extract and compile only statements executed, for example, more than 3 times. The signature above shows a default of 3, so statements that have never executed, including invalid SQL statements, are not extracted. |
| p_threshold_sharable_mem | The lower bound for the shared memory consumed by a cursor on the source instance; below this value a SQL statement is not selected for parsing. This optional parameter lets the application extract and compile only statements with shared memory greater than, for example, 10000 bytes. |
| p_parallel_degree | The number of parallel jobs that execute to complete the parse operation. These tasks are spawned as parallel jobs against a sub-range of the SQL statements selected for parsing. This parameter is reserved for parallel compile jobs, which are currently not implemented. |
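Putting the parameters together, a worked call on the secondary might look like the sketch below. The link name, TNS alias, credentials, and schema name are illustrative assumptions; only the procedure and parameter names come from the package:

```sql
-- Sketch: create a link to the primary instance, then warm the
-- local library cache with frequently executed APPS statements.
CREATE DATABASE LINK primary_db_link
  CONNECT TO libcache_user IDENTIFIED BY "ChangeMe_1"  -- illustrative credentials
  USING 'PRIMARY_TNS_ALIAS';                           -- illustrative TNS alias

BEGIN
  DBMS_LIBCACHE.COMPILE_FROM_REMOTE (
    p_db_link                => 'PRIMARY_DB_LINK',
    p_username               => 'APPS',    -- only SQL parsed by APPS
    p_threshold_executions   => 3,         -- skip rarely executed statements
    p_threshold_sharable_mem => 10000,     -- skip cursors under 10000 bytes
    p_parallel_degree        => 1);        -- parallel compile not implemented
END;
/
```

Raising the two thresholds trades completeness for speed: the warming run finishes faster but leaves infrequent or small cursors to be parsed on first use after failover.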
This feature works on both RAC cluster nodes and single-instance databases.
Best regards,
Alireza Kamrani