Tuesday, November 21, 2023

CPU limitation per database instance


About Creating Cgroups on Linux Systems

Resource management is one of the best tools a DBA has to control the consumption of databases that live together on the same server.

Resource consumption of databases and PDBs can be limited for I/O, memory, and CPU using traditional techniques or newer Oracle features, such as capping PGA and SGA with PGA_AGGREGATE_TARGET, SGA_TARGET, and SGA_MIN_SIZE, and capping I/O with MAX_IOPS.

Using the DBMS_RESOURCE_MANAGER package, we can apply plan directives to handle CPU utilization.


In this post I demonstrate another way to control CPU usage for databases that live side by side on the same machine.


Cgroups, or control groups, improve database performance by associating a dedicated set of CPUs with a database instance. Each database instance can only use the resources in its cgroup.

When consolidating on a large server, you may want to restrict the database to a specific subset of the CPU and memory. This feature makes it easy to enable CPU and memory restrictions for an Oracle Database instance.

Use the setup_processor_group.sh script to create cgroups. 

Download this script from note 1585184.1 on the Oracle Support website.


Using PROCESSOR_GROUP_NAME to bind a database instance to CPUs or NUMA nodes on Linux (Doc ID 1585184.1)


PURPOSE

This document provides a step-by-step guide for binding a database instance to a subset of a server's CPUs and memory, using Linux cgroups.  Cgroups provide a way to create a named set of CPUs and memory.  

A database instance that is associated with this cgroup can only use its CPUs and memory.  


Using Linux cgroups, a DBA who is consolidating multiple database instances on a single server can:

-Physically isolate database instances onto different CPUs.

-Bind instances to specific NUMA nodes to improve performance on NUMA-based systems.

 

Step 1 - Configuring the Linux cgroup

Use the available script, setup_processor_group.sh, to create and modify Linux cgroups.  

You must run this script as root.


First, check the number of CPUs, the NUMA configuration, and the existing cgroups (if any) for your system:


setup_processor_group.sh -show

Next, prepare the system to use cgroups (this command can be repeated):


setup_processor_group.sh -prepare


To check if the system is indeed ready:


setup_processor_group.sh -check


To create a new cgroup "mycg" for user "oracle" in group "dba" with CPUs 0 and 1, use the "-create" option.  With the "-cpus" option, you can provide either a comma-separated list or a range, e.g. "-cpus 0-7,16-23".


setup_processor_group.sh -create -name mycg -cpus 0,1 -u:g oracle:dba
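The "-cpus" list syntax can mix single CPUs and ranges. As a quick illustration, the small shell helper below (my own sketch, not part of the MOS script) expands such a list into the individual CPU numbers the cgroup will contain:

```shell
# expand_cpus: expand a "-cpus"-style list such as "0-7,16-23" into
# individual CPU numbers (a hypothetical helper, not part of the MOS script)
expand_cpus() {
  echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    # a bare number like "5" has no "hi" part, so fall back to "lo"
    seq "$lo" "${hi:-$lo}"
  done | xargs
}

expand_cpus "0-7,16-23"
# 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
```

This makes it easy to double-check, before running "-create", exactly which CPUs a given list covers.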


Or, create a new cgroup "mycg" for user "oracle" in group "dba" with NUMA nodes 1 and 2, using the "-numa_nodes" option.  


You cannot use the "-create" option with both "-cpus" and "-numa_nodes".


setup_processor_group.sh -create -name mycg -numa_nodes 1,2 -u:g oracle:dba


To update an existing cgroup "mycg" with new values:

setup_processor_group.sh -update -name mycg -cpus 2,3 -u:g oracle:dba


To delete the cgroup "mycg": 

setup_processor_group.sh -delete -name mycg


Step 2 - Configuring the Database


To bind a database instance to a cgroup, set the Oracle initialization parameter, PROCESSOR_GROUP_NAME, to the name of the cgroup.  

The cgroup was named through the "setup_processor_group.sh -name" option.  PROCESSOR_GROUP_NAME is a static parameter.  

Therefore, the database instance must be restarted in order for the parameter to take effect.

To verify that the database instance has successfully bound itself to the cgroup, check for this message in the alert log: 
"Instance started in processor group mycg"

When a database instance is successfully running in a cgroup, the default value of the Oracle initialization parameter, CPU_COUNT, is set to the number of CPUs in the cgroup. 


show parameter cpu_count

If you have explicitly set CPU_COUNT, you should consider clearing it so that CPU_COUNT is set to the cgroup's value:


alter system set cpu_count = 0;


Use this Linux command to verify that a particular database process is running in the cgroup (substitute <pid> with the process id).  


In the output, you should see the cgroup name after the string "cpuset:/".

cat /proc/<pid>/cgroup
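For example, given a cpuset entry in the cgroup v1 format (the hierarchy number 6 and the name "mycg" below are illustrative), the cgroup name can be peeled off with a shell parameter expansion:

```shell
# A sample cpuset entry as it appears in /proc/<pid>/cgroup (cgroup v1 layout);
# "6" is an arbitrary hierarchy number and "mycg" is the example cgroup name
line="6:cpuset:/mycg"

# Strip everything up to and including "cpuset:/" to get the cgroup name
cgname=${line#*cpuset:/}
echo "$cgname"
# mycg
```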

Use this Linux command to see all processes that are running in the cgroup (substitute <mycg> with your cgroup name).

cat /mnt/cgroup/<mycg>/tasks


Best Practices

(1) Note that Linux cgroups allow databases and applications that are not associated with the cgroup to use its CPU and memory.


(2) For processors with hyper-threading (e.g. x86), build cgroups from whole CPU cores, using the "-cpus" option to select all the threads of the minimum number of cores and sockets. Do not assign the CPU threads of one core to more than one cgroup.

These best practices enable much better isolation and performance since CPUs on a core share many common resources, such as parts of the execution pipeline, caches, and TLBs.  

For a list of the CPU threads, cores, and sockets, use the following commands:


cat /proc/cpuinfo | grep processor
(lists CPU threads)


cat /proc/cpuinfo | grep "physical id" | sort | uniq
(lists the CPU sockets)


cat /proc/cpuinfo | egrep "core id|physical id"
(lists CPU cores and sockets)
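To make the relationship between these fields concrete, here is a small sketch that counts threads, cores, and sockets from the "processor", "physical id", and "core id" fields. It runs against a made-up one-socket, two-core, four-thread sample rather than the live file:

```shell
# Made-up /proc/cpuinfo excerpt: 1 socket, 2 cores, 2 threads per core.
# On a real system, replace "$sample" with "$(cat /proc/cpuinfo)".
sample='processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 1
processor : 2
physical id : 0
core id : 0
processor : 3
physical id : 0
core id : 1'

# One "processor" line per CPU thread
threads=$(printf '%s\n' "$sample" | grep -c '^processor')

# Distinct "physical id" values = sockets
sockets=$(printf '%s\n' "$sample" | grep '^physical id' | sort -u | wc -l | tr -d ' ')

# Distinct (physical id, core id) pairs = cores
cores=$(printf '%s\n' "$sample" |
  awk -F': ' '/^physical id/{p=$2} /^core id/{print p "-" $2}' | sort -u | wc -l | tr -d ' ')

echo "$threads threads, $cores cores, $sockets socket(s)"
# 4 threads, 2 cores, 1 socket(s)
```

In this sample, threads 0 and 2 share core 0 and threads 1 and 3 share core 1, so per best practice (2) they should not be split across different cgroups.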


• Create cgroups with at least 2 CPU cores.

• For NUMA systems (e.g. Exadata X4-8), configure the cgroup from NUMA nodes, using the "-numa_nodes" option.


The "-numa_nodes" option will ensure that the database instance allocates local memory for both SGA and PGA, resulting in improved database performance.


• When consolidating a large number of databases, consider creating a few cgroups and binding multiple database instances to each cgroup.


For example:

◦Create one cgroup per NUMA node.  Bind multiple database instances to each NUMA node's cgroup.

◦Create 2 cgroups, one for test databases and one for standby databases.

Linux Cgroups vs Virtualization

Linux cgroups and virtual machines (VMs) are both effective tools for consolidating multiple databases on a server.  

Both tools provide resource isolation by dedicating CPU and memory to a database instance.  Database instances on VMs are isolated in much the same way as database instances on separate physical servers.  While this isolation offers many obvious advantages, it also has the following disadvantages:


◦Oracle Clusterware must be installed for each virtual machine.  Databases in cgroups can share one instance of the Oracle Clusterware.


◦Exadata does not currently support virtualization. 


Using Linux cgroups does not reduce Oracle database licensing costs.  The license is based on the number of CPUs on the server, not the database's cgroup size.


Regards,

Alireza Kamrani 

Senior RDBMS Consultant 


Sunday, November 5, 2023

Memory limitations per PDB



In the earlier Oracle Database 12.1.0.2 release, we could limit CPU utilization as well as parallel server allocation at the PDB level via Resource Plans.

Now in 12c Release 2, we can not only regulate CPU and parallelism at the pluggable database level, but we can also restrict the amount of memory that each PDB hosted by a Container Database (CDB) uses.

Further, we can also limit the number of I/O operations that each PDB performs, so we now have a far improved Resource Manager ensuring that no PDB hogs all the CPU or I/O because of, say, a runaway query, thereby impacting the other PDBs hosted in the same CDB.

We can now limit the amount of SGA or PGA that an individual PDB can use, as well as ensure that certain PDBs are always guaranteed a minimum amount of both SGA and PGA memory.

For example, we can now issue SQL statements like these while connected to an individual PDB.

SQL> ALTER SYSTEM SET SGA_TARGET = 2500M SCOPE = BOTH;

SQL> ALTER SYSTEM SET SGA_MIN_SIZE = 800M SCOPE = BOTH;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 2048M SCOPE = BOTH;

SQL> ALTER SYSTEM SET MAX_IOPS = 20000 SCOPE = BOTH;


Another 12c Release 2 New Feature related to Multitenancy is Performance Profiles.

With Performance Profiles we can manage resources for large numbers of PDBs by specifying Resource Manager directives for profiles instead for each individual PDB.

These profiles are then allocated to the PDB via the initialization parameter 


DB_PERFORMANCE_PROFILE

Let us look at a worked example of Performance Profiles.

In this example I have three PDBs (PDB1, PDB2 and PDB3) hosted in the container database CDB1. The PDB1 pluggable database hosts some mission-critical applications, and we need to ensure that PDB1 gets a higher share of memory, I/O, and CPU resources compared to PDB2 and PDB3.

So we will be enforcing this resource allocation via two Performance Profiles, which we call TIER1 and TIER2.

Here are the steps:

Create a Pending Area

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

PL/SQL procedure successfully completed.


Create a CDB Resource Plan 

SQL> BEGIN

 DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(

   plan   => 'profile_plan',

   comment => 'Performance Profile Plan allocating highest share of resources to PDB1');

END;

/

PL/SQL procedure successfully completed.


Create the CDB resource plan directives for the PDBs

The Tier 1 performance profile guarantees PDB1 3 shares of the available CPU and parallel server resources (3 of the 7 total shares once PDB2 and PDB3 each receive 2, i.e. roughly 43%), with no upper limit on CPU utilization or parallel server execution. In addition, it guarantees a minimum allocation of at least 50% of available memory.

SQL> BEGIN

 DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(

   plan                 => 'profile_plan', 

   profile              => 'Tier1', 

   shares               => 3, 

   memory_min           => 50);

END;

/  

PL/SQL procedure successfully completed.


The Tier 2 performance profile is more restrictive: it has fewer shares than Tier 1, limits CPU/parallel server usage to 40%, and caps memory usage at the PDB level at a maximum of 25% of available memory.

SQL> BEGIN

 DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(

   plan                 => 'profile_plan', 

   profile              => 'Tier2', 

   shares               => 2, 

   utilization_limit    => 40,

   memory_limit          => 25);

END;

/   

PL/SQL procedure successfully completed.


Validate and Submit the Pending Area 


SQL> exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

PL/SQL procedure successfully completed.

SQL> exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

PL/SQL procedure successfully completed.


Allocate Performance Profiles to PDBs


TIER1 Performance Profile is allocated to PDB1 and TIER2 Performance Profile is allocated to PDB2 and PDB3.

SQL> alter session set container=pdb1;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER1' scope=spfile;

System altered.

SQL> alter session set container=pdb2;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile;

System altered.

SQL> alter session set container=pdb3;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile;

System altered.


Set the Resource Plan at the CDB level 

SQL> conn / as sysdba

Connected.

SQL> alter system set resource_manager_plan='PROFILE_PLAN' scope=both;

System altered.


Restart the PDBs so the Performance Profiles take effect

SQL> alter pluggable database all close immediate;

Pluggable database altered.

SQL> alter pluggable database all open;

Pluggable database altered.


Monitor memory utilization at PDB level 

The V$RSRCPDBMETRIC view enables us to track the amount of memory used by PDBs.

We can see that PDB1, which belongs to profile TIER1, has almost double the memory allocated to the other two PDBs in profile TIER2.
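A sketch of such a monitoring query (the column names below follow the 12.2 documentation for V$RSRCPDBMETRIC; verify them against your release before relying on this):

```sql
-- Per-PDB memory usage from Resource Manager metrics (12.2+).
-- SGA_BYTES, PGA_BYTES, BUFFER_CACHE_BYTES and SHARED_POOL_BYTES are
-- the documented columns; run this from the CDB root.
SELECT con_id,
       ROUND(sga_bytes / 1024 / 1024)          AS sga_mb,
       ROUND(pga_bytes / 1024 / 1024)          AS pga_mb,
       ROUND(buffer_cache_bytes / 1024 / 1024) AS buffer_cache_mb,
       ROUND(shared_pool_bytes / 1024 / 1024)  AS shared_pool_mb
FROM   v$rsrcpdbmetric
ORDER  BY con_id;
```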

Oracle 12.2 has a lot of exciting new features. Learn all about these at a forthcoming online training session.


Regards,


Alireza Kamrani
