Wednesday, April 2, 2025

How to evaluate and analyze the Network Optimization in Oracle Standby environment



♠️ Alireza Kamrani ♠️
        03/April/2025

A key consideration when designing the network architecture between the primary and standby databases is network bandwidth, together with settings that minimize the delay in sending packets. Understanding these factors is essential for network tuning in a standby environment, and accurate information helps ensure that the standbys stay in sync even during peak hours.
Oracle Data Guard redo transport performance is directly dependent on the performance of the primary and standby systems, the network that connects them, and the I/O subsystem.

For most Oracle Data Guard configurations, you should be able to achieve zero or minimal data loss by troubleshooting and tuning redo transport.

Calculating the network bandwidth required for redo transport in a Data Guard primary database

Formula : 

Required bandwidth = ((Redo rate bytes per sec. / 0.7) * 8) / 1,000,000 = bandwidth in Mbps

Note : Assuming TCP/IP network overhead of 30%.

Calculation :

1. Run Statspack or AWR during peak intervals to measure the redo rate.
2. For RAC, calculate the redo rate of every instance and sum them.
3. Check the following SQL statement:
    SQL> select * from v$sysmetric_history
                where metric_name = 'Redo Generated Per Sec';
4. RDA output:
    Performance - AWR Report - Statistic: "redo size"

Example :

Let us assume the redo rate is 600 KB/sec (614,400 bytes/sec).

Required bandwidth =

((Redo rate bytes per sec. / 0.7) * 8) / 1,000,000 = bandwidth in Mbps

= ((614400/0.7) * 8) /1,000,000
= 7.02 Mbps
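
The same calculation can be done directly in SQL. A minimal sketch that reads the peak 'Redo Generated Per Sec' value (in bytes per second) from v$sysmetric_history, which retains roughly the last hour of samples, and applies the formula above, with 0.7 representing the assumed 30% TCP/IP overhead:

SELECT ROUND(MAX(value))                              AS peak_redo_bytes_sec,
       ROUND(((MAX(value) / 0.7) * 8) / 1000000, 2)   AS required_mbps
FROM   v$sysmetric_history
WHERE  metric_name = 'Redo Generated Per Sec';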


In this topic, I will introduce a preferred tool that can help DBAs make better decisions based on accurate evaluations.

oratcptest is an Oracle-provided tool to evaluate network bandwidth and latency for Data Guard and Oracle RAC environments. It helps determine whether the network can handle redo transport between primary and standby databases.

1. Running oratcptest to Measure Bandwidth

The tool is provided by Oracle as a standalone JAR file (oratcptest.jar, available from My Oracle Support) and requires only a Java runtime on both hosts. It runs in client-server mode to simulate redo transport.

Step 1: Start the oratcptest Server on the Standby Host

On the standby database server (or target system), start oratcptest in server mode:

java -jar oratcptest.jar -server -port=<PORT>

This opens a listening port to receive test traffic.

Step 2: Run the Test from the Primary Database

On the primary database server, run oratcptest in client mode:

java -jar oratcptest.jar <STANDBY_HOST> -port=<PORT> -mode=async -duration=60s -interval=10s


Where:

• <STANDBY_HOST>: IP/hostname of the standby server.

• -port=<PORT>: Port on which the server side is listening.

• -mode=async: Simulates ASYNC redo transport (use -mode=sync to simulate SYNC).

• -duration=60s: Duration of the test.

• -interval=10s: Reporting interval for the statistics.

Run java -jar oratcptest.jar -help for the full option list of your tool version.

2. Analyzing the Output

After running the test, you get:

• Throughput (Mbps): Maximum sustainable bandwidth.

• Latency (ms): Round-trip delay (reported for SYNC-mode tests).

• Packet loss: Not reported by oratcptest itself; if congestion is suspected, gather it from OS network statistics (for example, netstat -s).


An illustrative summary of the results:

Average Throughput: 200 Mbps
Latency: 5 ms
Packet Loss: 0.01%

This means the network can sustain about 200 Mbps, which should be sufficient as long as your redo rate stays below this threshold.


3. Adjusting for Real-World Conditions

• Enable Compression: If redo transport compression is on, the effective bandwidth may increase.

• Add Overhead: Allow for TCP/IP overhead (the sizing formula above assumes 30%; on clean, dedicated links it may be closer to 10%).

• Test During Peak Load: Network congestion can affect results.

Detailed guide on interpreting the output for Data Guard performance tuning:


Interpreting oratcptest Output for Oracle Data Guard Performance Tuning

Once you run oratcptest, the results help assess whether your network can sustain redo transport between the primary and standby databases. Here’s how to interpret the output and optimize performance.

1. Key Metrics from oratcptest Output

The output provides the following crucial performance indicators:


a) Throughput (Mbps)

• Definition: The maximum network bandwidth available for redo transport.

• Interpretation:

• If the reported throughput is higher than your redo generation rate, the network is sufficient.

• If the throughput is lower, you may experience redo lag, requiring optimizations.



Example:

Average Throughput: 200 Mbps

👉 If your redo rate is 50 MB/sec (~400 Mbps), this bandwidth is insufficient, and you may experience standby lag.

b) Latency (ms)

• Definition: Time taken for data packets to travel between primary and standby.

• Ideal Value:

• For SYNC mode, latency should be < 5ms.

• For ASYNC mode, higher latency is tolerable but may impact failover times.

Example:

Latency: 4.8 ms

👉 If using SYNC mode, this latency is acceptable. If it were >10 ms, you might need ASYNC mode or optimize network routing.

c) Packet Loss (%)

• Definition: Percentage of data packets lost during transmission.

• Ideal Value: 0% (or very close to zero).

• Impact:

• High packet loss (>0.1%) causes redo transport delays and Data Guard lag.

• If packet loss is high, check for network congestion or unstable links.



Example:

Packet Loss: 0.01%

👉 This is within an acceptable range. However, >0.1% packet loss needs troubleshooting.


2. Performance Tuning Based on Results

Depending on the results, take the following actions:

a) Low Bandwidth Issues

Problem: Reported bandwidth is lower than redo rate.
Solution:

• Enable Redo Transport Compression: ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby_db COMPRESSION=ENABLE';

• Upgrade network speed (e.g., from 1Gbps to 10Gbps).

• Use a dedicated network interface for redo transport.

• Check network hardware such as switches for correct configuration and negotiated speeds.

b) High Latency Issues

Problem: Latency > 10ms affecting SYNC mode.
Solution:

• Switch to ASYNC mode: ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby_db ASYNC';

• Reduce distance between primary and standby.

• Use a private/dedicated network instead of shared internet.


c) Packet Loss Issues

Problem: Packet loss >0.1%, causing redo transport delays.
Solution:

• Check network congestion (QoS settings, firewall, VPN issues).

• Switch to ASYNC mode if using SYNC and experiencing high packet loss.

• Use a higher-quality network provider or optimize routing.


3. Validating Optimizations

After applying fixes, rerun oratcptest to validate improvements:

java -jar oratcptest.jar <STANDBY_HOST> -port=<PORT> -mode=async -duration=60s -interval=10s

If throughput increases, latency decreases, and packet loss is near 0%, the network is tuned correctly.

Automating oratcptest for Continuous Monitoring

To ensure continuous monitoring of network performance for Oracle Data Guard, you can set up automated scripts that:


• Run oratcptest periodically (e.g., every hour).

• Log the results for trend analysis.

• Send alerts if bandwidth, latency, or packet loss exceed thresholds.


1. Creating an Automation Script

You can use a Bash script to automate the test on the primary database.

Script: oratcptest_monitor.sh

#!/bin/bash

# Standby server hostname or IP, and the port oratcptest listens on
STANDBY_HOST="<standby-host>"
PORT=<port>

# Log file path
LOG_FILE="/var/log/oratcptest.log"

# Run the oratcptest command (adjust mode, duration, and interval as needed)
java -jar oratcptest.jar $STANDBY_HOST -port=$PORT -mode=async -duration=60s -interval=10s > /tmp/oratcptest_result.txt

# Extract key metrics from the output.
# NOTE: these grep/awk patterns are illustrative and must be adapted to
# the exact output format of your oratcptest version.
THROUGHPUT=$(grep -i "throughput" /tmp/oratcptest_result.txt | tail -1 | awk '{print $3}')
LATENCY=$(grep -i "latency" /tmp/oratcptest_result.txt | tail -1 | awk '{print $3}')
# Packet loss is not reported by oratcptest itself; populate it from OS
# statistics (e.g. netstat -s) if you want to alert on it.
PACKET_LOSS=""

# Log results
echo "$(date) | Throughput: ${THROUGHPUT} Mbps | Latency: ${LATENCY} ms | Packet Loss: ${PACKET_LOSS:-n/a}" >> $LOG_FILE

# Threshold values
THRESHOLD_BANDWIDTH=200      # Minimum acceptable bandwidth (Mbps)
THRESHOLD_LATENCY=10         # Maximum acceptable latency (ms)
THRESHOLD_PACKET_LOSS=0.1    # Maximum acceptable packet loss (%)

# Check if thresholds are exceeded and trigger alerts
if [ -n "$THROUGHPUT" ] && (( $(echo "$THROUGHPUT < $THRESHOLD_BANDWIDTH" | bc -l) )); then
  echo "ALERT: Low Network Bandwidth ($THROUGHPUT Mbps)!" | mail -s "Oracle Network Alert" admin@example.com
fi

if [ -n "$LATENCY" ] && (( $(echo "$LATENCY > $THRESHOLD_LATENCY" | bc -l) )); then
  echo "ALERT: High Latency ($LATENCY ms)!" | mail -s "Oracle Network Alert" admin@example.com
fi

if [ -n "$PACKET_LOSS" ] && (( $(echo "$PACKET_LOSS > $THRESHOLD_PACKET_LOSS" | bc -l) )); then
  echo "ALERT: High Packet Loss ($PACKET_LOSS%)!" | mail -s "Oracle Network Alert" admin@example.com
fi

# Clean up
rm -f /tmp/oratcptest_result.txt

2. Scheduling the Script Using Cron

To run this script every hour, add it to the crontab:

crontab -e

Add the following line:

0 * * * * /path/to/oratcptest_monitor.sh

This ensures the script runs at the start of every hour.

3. Analyzing Results

• The script logs all results in /var/log/oratcptest.log.

• Alerts are sent via email if thresholds are exceeded.

• You can visualize trends using tools like Grafana or ELK Stack.

♠️ Alireza Kamrani ♠️

Friday, March 28, 2025

TAKE A DUMP FILE IN ORACLE

The ALTER SYSTEM DUMP command in Oracle is utilized to generate detailed dumps of various database components, such as datafiles and redo log files, for diagnostic and troubleshooting purposes. 


Alireza Kamrani 

28/March/2025


Dump commands and their usage:


1. Dumping Datafile Blocks


To inspect the contents of specific blocks within a datafile, you can use the following commands:


• Dump a Single Block:


ALTER SYSTEM DUMP DATAFILE <file_number> BLOCK <block_number>; 


Example:


ALTER SYSTEM DUMP DATAFILE 3 BLOCK 10; 


This command dumps block number 10 from datafile 3. 


• Dump a Range of Blocks:


ALTER SYSTEM DUMP DATAFILE <file_number> BLOCK MIN <start_block> BLOCK MAX <end_block>; 


Example:


ALTER SYSTEM DUMP DATAFILE 3 BLOCK MIN 10 BLOCK MAX 12; 


This command dumps blocks 10 through 12 from datafile 3. 
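
Before dumping, it is often useful to confirm which segment owns a suspect block, so that you dump the right range. A quick check against DBA_EXTENTS (using the file 3 / block 10 values from the example above):

SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = 3
AND    10 BETWEEN block_id AND block_id + blocks - 1;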




2. Dumping Redo Log Files


To analyze the contents of redo log files, you can use the following commands:


• Dump an Entire Redo Log File:


ALTER SYSTEM DUMP LOGFILE '<path_to_redo_log_file>'; 


Example:


ALTER SYSTEM DUMP LOGFILE '/u01/app/oradata/TEST/redo01.log'; 


This command dumps the contents of the specified redo log file. 


• Dump Redo Records Within a Specific SCN Range:


ALTER SYSTEM DUMP REDO SCN MIN <start_scn> SCN MAX <end_scn>; 



Example:


ALTER SYSTEM DUMP REDO SCN MIN 100000 SCN MAX 200000; 


This command dumps redo records with SCNs between 100,000 and 200,000. 



✴️Considerations:


• The output of these commands is written to trace files in the directory specified by the USER_DUMP_DEST initialization parameter (in 11g and later, the trace directory under DIAGNOSTIC_DEST).


• These commands are primarily intended for diagnostic purposes and should be used with caution, especially in production environments, as they can generate large amounts of output.


• Ensure you have the necessary administrative privileges to execute these commands.


🔻Also, one of the most applicable and useful commands for a DBA is taking a dump of the redo logs for troubleshooting and diagnostics; the following section provides more details about this solution.



🔆 ALTER SYSTEM DUMP REDO

Often customers or Oracle Support are asked to provide a dump of database redo for diagnostic purposes. 

From 11g and on, ALTER SYSTEM DUMP REDO can be used to dump a merged collection of redo records from multiple logfiles into a single trace file ordered by SCN. 

Provided the redologs / archivelogs are present in locations recorded in the database control file, this command provides an easy way to dump redo for all redologs/archives accessible to the database.


When dumping records based on timestamps, the procedure is not straightforward since human-readable timestamps must be converted to an internal database format. An internal script has typically been used for this purpose.


The procedure presented in this document will take an alert log timestamp of the form 'YYYY-MM-DDTHH24:MI:SS.<fractional_seconds>-<time_zone>' and the number of seconds forward from this timestamp that redo needs to be dumped for, and output an equivalent ALTER SYSTEM DUMP REDO statement.



Solution [19.13 and later]

NOTE: the following two statements must be run prior to running the procedure in order to have this work properly:


set serveroutput on

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';


As an example, let's say we wish to dump all of the database redo for 20 seconds starting at timestamp '2024-09-24T12:51:40.386098-04:00'. Assuming that procedure DUMP_REDO has already been created in the schema, we would run:


SQL> set serveroutput on

SQL> alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';


Session altered.


SQL> exec DUMP_REDO('2024-09-24T12:51:40.386098-04:00',20);


-- Run the following statements and upload the file produced

-- in the diagnostic_dest /trace directory:

----------------------------------------------

alter session set tracefile_identifier = 'REDO_DUMP';

alter system dump redo time min 1116247600 time max 1116247920;


PL/SQL procedure successfully completed.


The code for the DUMP_REDO procedure can be found below.



PLSQL Procedure That Outputs an ALTER SYSTEM DUMP REDO Statement Based on an Alert Log Timestamp and the Number of Seconds of Redo to Dump:


create or replace procedure DUMP_REDO (incident_date IN varchar2, seconds_forward IN number) is
  redo_day        varchar2(20);
  redo_time       varchar2(20);
  redo_date_char  varchar2(35);
  redo_date       date;
  rd_back         date;
  rd_back_char    varchar2(35);
  rd_year         number;
  rd_month        number;
  rd_day          number;
  rd_back_hour    number;
  rd_back_min     number;
  rd_back_sec     number;
  rd_forward      date;
  rd_forward_char varchar2(35);
  rd_forward_hour number;
  rd_forward_min  number;
  rd_forward_sec  number;
  time_back       number;
  time_forward    number;
  stmt1           varchar2(35) := 'alter system dump redo time min ';
  stmt2           varchar2(10) := ' time max ';
  stmt3           varchar2(1)  := ';';
  stmt            varchar2(120);
begin
  -- Parse the alert log timestamp (YYYY-MM-DDTHH24:MI:SS...)
  redo_day       := substr(incident_date, 1, 10);
  redo_time      := substr(incident_date, 12, 8);
  redo_date_char := concat(concat(redo_day, ' '), redo_time);
  rd_year        := to_number(substr(incident_date, 1, 4));
  rd_month       := to_number(substr(incident_date, 6, 2));
  rd_day         := to_number(substr(incident_date, 9, 2));
  redo_date      := to_date(redo_date_char, 'YYYY-MM-DD HH24:MI:SS');

  -- Start the dump window 5 minutes before the incident timestamp
  -- (relies on nls_date_format = 'DD-MON-YYYY HH24:MI:SS', see above)
  rd_back      := redo_date - 5/1440;
  rd_back_char := to_char(rd_back);
  rd_back_hour := to_number(substr(rd_back_char, 13, 2));
  rd_back_min  := to_number(substr(rd_back_char, 16, 2));
  rd_back_sec  := to_number(substr(rd_back_char, 19, 2));

  -- End the dump window seconds_forward after the incident timestamp
  rd_forward      := redo_date + seconds_forward/86400;
  rd_forward_char := to_char(rd_forward);
  rd_forward_hour := to_number(substr(rd_forward_char, 13, 2));
  rd_forward_min  := to_number(substr(rd_forward_char, 16, 2));
  rd_forward_sec  := to_number(substr(rd_forward_char, 19, 2));

  -- Convert both endpoints to the internal redo timestamp format
  time_back    := ((((((rd_year-1988))*12+(rd_month-1))*31+(rd_day-1))*24+(rd_back_hour))*60+(rd_back_min))*60+(rd_back_sec);
  time_forward := ((((((rd_year-1988))*12+(rd_month-1))*31+(rd_day-1))*24+(rd_forward_hour))*60+(rd_forward_min))*60+(rd_forward_sec);

  stmt := concat(concat(concat(concat(stmt1, time_back), stmt2), time_forward), stmt3);

  dbms_output.put_line(chr(10));
  dbms_output.put_line('-- Run the following statements and upload the file produced');
  dbms_output.put_line('-- in the diagnostic_dest /trace directory:');
  dbms_output.put_line('------------------------------------');
  dbms_output.put_line('alter session set tracefile_identifier = ''REDO_DUMP'';');
  dbms_output.put_line(stmt);
end;
/

***********************


To minimise the volume dumped by the “alter system dump logfile” command and make it easier to find the bit I wanted to see I used to “switch logfile” just before (and sometimes just after) the statement I was investigating.


With the advent of pluggable databases the “switch logfile” command now raises Oracle error: 

“ORA-65040: operation not allowed from within a pluggable database” so I had to change the strategy. 

This is just a brief note (echoing a footnote to an older note) of the approach I now use:


column current_scn new_value start_scn


select to_char(current_scn,'9999999999999999') current_scn from v$database;


accept X  prompt "Do something interesting in another session then press return"


column current_scn new_value end_scn


select to_char(current_scn,'9999999999999999') current_scn from v$database;


alter session set tracefile_identifier='sometextyoulike';


alter system dump redo scn min &start_scn scn max &end_scn ; 


alter session set tracefile_identifier='';


The list of options for the dump has been extended since I published the earlier note on dumping the log file and now (19.11.0.0) allows the following options (using C notation for the type of the variables you supply to each parameter):


rdba min  %d rdba max  %d tablespace_no  %d

dba min  %u  %u dba max  %u  %u

securefile_dba  %u  %u

length  %d

time min  %d

time max  %d

layer  %d

opcode  %d

scn min  %llu

scn max  %llu

xid  %d  %d  %d

objno  %u

con_id  %d

skip corruption


If you try to restrict the dump on objno (object id) or xid (transaction id) then the trace file will skip any redo records generated by private threads / in-memory undo and report the text: “Skipping IMU Redo Record: cannot be filtered by XID/OBJNO”
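
The filters can also be combined. As an illustration, the sketch below restricts a dump to one record type within an SCN range; layer 11 opcode 2 (OP 11.2, insert row piece) is used here as an assumed example, so verify the layer/opcode numbers for your own case:

alter session set tracefile_identifier='redo_11_2';

alter system dump redo scn min &start_scn scn max &end_scn layer 11 opcode 2;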


The tablespace_no option can only be used when both rdba min and rdba max (the rollback data block address range) have been specified.


The con_id option may only be legal when used to specify a PDB from the CDB.


Remember: when you dump redo you don’t just get the redo for your session. There is some scope for being selective but the starting point would be all the redo for the PDB you’re working from.


🔆 Oracle LogMiner


A built-in tool that provides SQL reconstruction of redo logs, allowing users to analyze database changes.


Used for auditing, forensic analysis, and data recovery.


Can be queried with V$LOGMNR_CONTENTS, making it much more readable and useful.


Example:


BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/path/to/redo.log', OPTIONS => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE OPERATION = 'INSERT';

EXEC DBMS_LOGMNR.END_LOGMNR;

******************************




When should we use which one?


Use ALTER SYSTEM DUMP REDO if you are troubleshooting corruption or need low-level redo log analysis.


Use LogMiner if you want to analyze and recover transactions in an understandable SQL format.



When can a DBA take a dump of redo?


This command is typically used for troubleshooting, forensic analysis, and debugging redo-related issues. Here are some scenarios where it can be helpful:


1. Analyzing Redo Log Contents


If you suspect data corruption or need to investigate what transactions were recorded in the redo logs, dumping the redo contents helps inspect the changes at a low level.



2. Debugging Flashback and Undo Issues


When troubleshooting flashback-related problems, you might need to verify if necessary undo or before-image changes exist in the redo logs.



3. Checking for Uncommitted Transactions


If an instance crashes and you need to investigate which transactions were active at the time, dumping redo logs can reveal in-progress transactions.



4. Troubleshooting Data Recovery Issues


If you suspect a problem with redo log corruption or need to manually inspect redo log records for a recovery scenario, this command provides raw redo log contents.



5. Performance Tuning and Auditing Changes


In some cases, you may want to analyze the redo logs to understand which operations are generating excessive redo and optimize accordingly.



Alireza Kamrani 



Thursday, March 20, 2025

In Oracle Database, When a SELECT statement generate redo?

In Oracle Database, Can a Select statement generate some Redo vector?


Alireza Kamrani
21/March/2025

No🙄, a pure SELECT statement does not generate redo vectors because it does not modify data. However, there are some exceptions where a SELECT can indirectly cause redo generation.

You may encounter situations where a SELECT in Oracle shows redo-related waits in some wait classification, and the first question is why a SELECT statement should generate redo at all.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries can be used for database recovery if necessary.

Why a Regular SELECT Doesn't Generate Redo?

• Redo is generated only for changes to database blocks (INSERT, UPDATE, DELETE, etc.).

• SELECT is a read-only operation, so it does not alter data or create redo records.

When Can a SELECT Generate Redo?

There are some special cases where redo is generated due to a SELECT:

1. Selecting from a Temporary Table (GLOBAL TEMPORARY TABLE - GTT)

• If a SELECT on a GTT causes a temporary undo change (e.g., sorting in the PGA spills to the TEMP tablespace), it may indirectly generate redo for the undo segments.
• The data in a GTT is written to the temporary tablespace, which is not directly protected by redo, so using a GTT improves performance by reducing redo generation. However, prior to Oracle 12c, all undo associated with DML against a GTT is written to the normal undo tablespace, which is itself protected by redo. As a result, a GTT reduces the amount of redo but does not eliminate it: direct redo generation is removed, but not the indirect redo caused by undo. (From 12c onward, temporary undo can remove even this; see the example below.)
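
From Oracle 12c onward, the indirect redo can be avoided as well by directing GTT undo to the temporary tablespace with temporary undo. A minimal sketch:

-- Per session:
ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;

-- Or database-wide:
ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;

With temporary undo enabled, DML against a GTT writes its undo to TEMP as well, so neither the GTT data nor its undo generates redo.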



2. Execution Plans that Modify Memory Structures

• A SELECT with operations like sorting (ORDER BY, GROUP BY), hash joins, or parallel execution may trigger temporary segment usage, generating redo for undo blocks.

3. Function-Based Indexes and Materialized Views

• If a SELECT triggers an on-demand refresh of a materialized view or involves function-based indexes, it could cause redo due to internal updates.

4. Read Consistency Mechanism and Multi-Version (Undo Changes)

• Oracle uses UNDO to maintain read consistency, and UNDO changes generate redo.

• If your SELECT accesses data modified by another transaction, Oracle may generate redo for undo blocks to reconstruct a consistent snapshot.

• SELECT ... FOR UPDATE: these queries generate redo to maintain the row locks.

• If auditing is enabled, auditing the SELECT could create redo.

• Delayed block cleanout (do not neglect this; see below).


When Oracle Database commits a transaction, the modified blocks are not necessarily stamped with the commit SCN immediately; in that case the work is done later, on demand, when the block is next read or updated. This is called delayed block cleanout.
Delayed block cleanout is the process whereby Oracle completes the cleanout started by the DML operation that modified the affected blocks.

There is a limit on how many blocks Oracle will automatically clean out at commit time: roughly 10% of the buffer cache. Any blocks beyond that threshold must wait for a later transaction or SELECT to finish the job.
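
A quick way to observe this is the sketch below, which uses a hypothetical table dbc_demo; the idea is to dirty more blocks than commit cleanout will handle, then watch the "redo size" statistic of the following SELECT:

CREATE TABLE dbc_demo AS SELECT * FROM all_objects;

UPDATE dbc_demo SET object_name = object_name;   -- dirty many blocks
COMMIT;                                          -- commit cleanout covers at most ~10% of the buffer cache

SET AUTOTRACE TRACEONLY STATISTICS
SELECT COUNT(*) FROM dbc_demo;                   -- delayed block cleanout generates redo here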

Use AUTOTRACE to view the redo generated by a SQL statement:

SQL> set autotrace traceonly statistics

Or


SQL> SET AUTOTRACE ON

SELECT * FROM your_table WHERE ...;

Check the "redo size" line in the statistics output.

Statistics

----------------------------------------------------------
      2274  recursive calls
     46503  db block gets
     12033  consistent gets
         7  physical reads
  66050648  redo size
       821  bytes sent via SQL*Net to client
       822  bytes received via SQL*Net from client
         3  SQL*Net roundtrips to/from client
         1  sorts (memory)
         0  sorts (disk)
     10000  rows processed

Why is this Beneficial?

Even though a simple SELECT doesn't generate redo, Oracle’s redo mechanisms ensure:

• Crash Recovery: If undo changes were made, redo for undo ensures rollback after a crash.

• Read Consistency: Redo logging helps manage undo, ensuring a stable snapshot for long-running queries.

• Better Performance with Parallel Query: Parallel queries that involve sorting, temp space usage, or function-based indexes may need redo to maintain integrity.

Conclusion

A SELECT itself does not create redo vectors, but certain indirect operations (undo for read consistency, temp tables, sorting) might cause redo generation. If you're seeing redo during selects, checking V$LOGMNR_CONTENTS or V$UNDOSTAT can help identify the cause.

Diagnosing a specific case where you're seeing redo from a SELECT

To diagnose why a SELECT statement is generating redo, follow these steps:

1. Check if Redo is Generated for the Session

Before running the SELECT, capture the current redo log usage:

SELECT VALUE FROM V$SYSSTAT WHERE NAME = 'redo size';

Then, execute the SELECT query:

SELECT * FROM your_table WHERE some_condition;

Afterward, check the redo size again:

SELECT VALUE FROM V$SYSSTAT WHERE NAME = 'redo size';

SELECT name, sid, value
FROM   v$sesstat s, v$statname n
WHERE  name IN ('redo size', 'redo log space requests')
AND    n.statistic# = s.statistic#
AND    value > 0
ORDER  BY 1, 2;

SELECT *
FROM  (SELECT s.sid, ss.value, s.username, s.program, s.type, s.module
       FROM   v$sesstat ss, v$statname sn, v$session s
       WHERE  sn.statistic# = ss.statistic#
       AND    s.sid = ss.sid
       AND    sn.name = 'redo size'
       AND    ss.value > 0
       ORDER  BY ss.value DESC)
WHERE rownum < 20;

If the value increases, the SELECT is causing redo generation.

You can also examine the per-session output of V$SESSTAT.

2. Check for Undo Activity (Read Consistency)

Redo can be generated for undo records, especially in long-running queries. Run:

SELECT BEGIN_TIME, END_TIME, UNDOBLKS, TXNCOUNT FROM V$UNDOSTAT WHERE BEGIN_TIME > SYSDATE - INTERVAL '10' MINUTE;

If undo blocks (UNDOBLKS) are increasing during the SELECT, it indicates read consistency is causing redo generation.



3. Verify If a Temporary Table (GTT) is Used

If your query accesses a Global Temporary Table (GTT), redo can be generated for UNDO segments. Check:

SELECT DISTINCT SEGMENT_NAME, SEGMENT_TYPE FROM USER_SEGMENTS WHERE SEGMENT_TYPE LIKE '%TEMP%';

If a GTT is used, and it’s set with ON COMMIT PRESERVE ROWS, undo for GTT modifications can generate redo.

4. Check Sorting, Hash Joins, and Parallel Query Effects

• If sorting occurs, redo might be generated due to PGA spills into the TEMP tablespace.

• Check for sort operations that might trigger redo:

SELECT NAME, VALUE FROM V$SYSSTAT WHERE NAME IN ('workarea executions - multipass', 'workarea executions - onepass');

If multipass executions are increasing, PGA is spilling data to TEMP, which can generate redo for undo tracking.

• Check if the query is running in parallel:

SELECT * FROM V$PX_SESSION;

Parallel execution may trigger redo if it involves temporary table modifications.

5. Analyze Redo Log Contents Using LogMiner

If you want to see exactly what redo is being generated, use LogMiner:

BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    STARTTIME => SYSDATE - INTERVAL '10' MINUTE,
    ENDTIME   => SYSDATE,
    OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT OPERATION, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE OPERATION != 'SELECT';

This will show what redo entries (if any) were triggered by the SELECT statement.


Next Steps

• If redo is caused by undo for read consistency, consider reducing long-running queries.

• If redo is from sorting, try increasing PGA memory (PGA_AGGREGATE_TARGET, PGA_AGGREGATE_LIMIT).

• If a GTT is involved, verify whether redo suppression is enabled (TEMP_UNDO_ENABLED=TRUE).

Monday, March 17, 2025

Optimizing Oracle Redo Log Buffer for Peak Performance




Alireza Kamrani
17/March/2025


In Oracle Database, the Redo Log Buffer is a critical memory structure that temporarily stores redo entries before they are written to the redo log files. Proper tuning of this buffer is essential for optimizing database performance, reducing I/O contention, and ensuring efficient transaction processing.

Why Redo Log Buffer Tuning Matters?

• Prevents excessive waits on "log buffer space" and "log file sync" events.

• Reduces disk I/O overhead by efficiently batching redo log writes.

• Enhances transaction throughput, especially in high-commit environments.

Key Tuning Considerations:

• Size Appropriately – Allocate sufficient memory to avoid frequent log buffer flushes while preventing excessive memory usage.

• Monitor Performance Metrics – Use AWR, ASH, and v$ views (e.g., V$LOG, V$LATCH) to detect bottlenecks.

• Optimize Log Writer (LGWR) Behavior – Ensure LGWR is not a bottleneck by tuning disk I/O performance and adjusting commit_write settings if necessary.

• Consider Workload Characteristics – High-frequency DML operations may require larger redo log buffers to accommodate fast-paced changes.


Start by checking the current status to identify any issue:

SQL>
select retries.value/entries.value "redo buffer retries ratio"
from v$sysstat retries,
v$sysstat entries
where retries.name='redo buffer allocation retries'
and entries.name='redo entries';

redo buffer retries ratio
--------------
      0


The redo buffer retry ratio should be below 1%; otherwise, tuning is needed.

To tune the value of LOG_BUFFER, first determine the space request ratio, i.e. how many redo entries are written for each redo log space request:

Select Round(e.value/s.value,5) "Redo Log Ratio"
From v$sysstat s, v$sysstat e
Where s.name = 'redo log space requests'
and e.name = 'redo entries';

If the ratio of 'redo entries' to 'redo log space requests' is less than 5000:1 (i.e. the query returns less than 5000), then increase the size of the redo log buffer until the space request ratio stops falling.

Alternatively, if memory is not a constraint, then try to reduce the number of times a process has to wait for the log cache to be flushed:

Select name, value from v$sysstat
Where name = 'redo log space requests';

The number of waits should ideally be zero.
If not, increase the size of LOG_BUFFER until the number returns to zero. Typically, there is no advantage in setting this beyond 1 MB.

If you want to know how long processes had to wait, as well as the number of times, try the following script instead:

Select name, value from v$sysstat
Where name in ('redo log space requests', 'redo log space wait time');

This shows the time in units of 10 milliseconds.
Beware that, because of the time granularity, 'redo log space requests' may be greater than zero while 'redo log space wait time' is zero.
This happens if none of the waits lasted 10 ms or longer. Unless you have a busy server, a 'redo log space wait time' of (or near) zero may indicate an acceptable level of tuning.





SQL> select sid, wait_time from v$session_wait;

       SID  WAIT_TIME
---------- ----------
        11         -1
         1          0
         2          0
         3          0
         6          0
         7          0
        10          0
         9          0
         4          0
         5          0
         8          0

11 rows selected.

A WAIT_TIME of -1 means the session's last wait completed in under one centisecond (1/100 second); a value of 0 means the session is currently waiting.

Tuning the redo log buffer, step by step:

1. Increase the log buffer.

2. Improve the efficiency of checkpoints.

3. Speed up archiving.

4. Reduce redo generation.

5. Add more redo log files.

Adjust these based on your workload and database status.

Increase Redo Log Buffer

The Oracle log_buffer parameter is normally no longer set directly; Oracle calculates internally how big this buffer should be, although the DBA can still change it.

SQL> show parameter log_buffer;

NAME                TYPE         VALUE
------------------- ------------ ----------------
log_buffer          integer      524288

Note: Log_buffer is not dynamic

SQL> alter system set log_buffer=104857600 scope=spfile;

System altered.

SQL> startup force;



Improve efficiency of checkpoints.

The maximum length of the checkpoint queue is determined by the parameter from the following list that provides the shortest recovery time:

• LOG_CHECKPOINT_TIMEOUT: The maximum number of seconds allowed between the checkpoint position and the last write to the redo log. No block in the buffer cache will remain dirty longer than this time.

• LOG_CHECKPOINT_INTERVAL: The number of operating system blocks in the redo log between the checkpoint position and the end of the redo log.

• FAST_START_IO_TARGET: The number of I/O operations the database should take to perform crash recovery of a single instance.

• FAST_START_MTTR_TARGET: The average number of seconds the database should take to perform crash recovery of a single instance.
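
On recent releases, FAST_START_MTTR_TARGET is the usual knob. A minimal sketch of setting it and verifying the effect against V$INSTANCE_RECOVERY:

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300 SCOPE=BOTH;

SQL> SELECT target_mttr, estimated_mttr FROM v$instance_recovery;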


Speed up archiving

SQL> show parameter log_archive_max_process

NAME                          TYPE         VALUE
----------------------------- ------------ ---------
log_archive_max_processes     integer      2

SQL> select process,status from v$archive_processes;

Based on your redo log configuration, if some of the archiver processes show a STOPPED status, you can increase the number of processes:

SQL> alter system set log_archive_max_processes=10;


Reduce redo generation

If possible, use the NOLOGGING option or the APPEND hint for INSERT operations.

SQL> alter table scott.emp nologging;


Increase the size of redo logs

Remove inactive old redo logs and add new ones with a larger size.
Be careful in standby environments: standby redo logs should be resized to match the online redo logs.

Note:

• Online redo log files should be sized so that a log switch occurs no more often than about every 15 minutes. This is not a recommendation for all databases and must be adjusted carefully.

• A high value of redo buffer allocation retries indicates that you may want to increase the size of the online redo log files.
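
To check how often log switches are actually occurring, count them per hour from V$LOG_HISTORY:

SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS log_switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;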


The redo log buffer size can be analyzed using V$SYSSTAT, particularly the "redo buffer allocation retries" statistic. This metric indicates how often Oracle couldn't allocate space in the redo log buffer immediately and had to retry.


Steps to Analyze and Tune LOG_BUFFER

1. Query Key Redo Statistics

Run the following query to check redo-related stats:

SELECT name, value FROM v$sysstat
WHERE name IN ('redo allocation retries', 'redo buffer allocation retries', 'redo entries', 'redo size');

2. Interpretation of Results

• redo allocation retries / redo buffer allocation retries:

These should ideally be close to zero. A value of zero or very low means there is no issue and no need to increase LOG_BUFFER. A high value (for example, more than 1000 per hour, or continuously increasing) confirms redo log buffer contention and suggests increasing LOG_BUFFER.

• redo size:

Helps estimate the total redo generated, useful for sizing LOG_BUFFER.

• redo entries:

Shows the number of redo records generated by transactions.

3. Possible Actions

Recommendations Based on redo buffer allocation retries:

Near Zero: This is optimal; no action is needed.

Consistently Increasing: If this value is continuously increasing, it indicates that user processes have had to wait for space in the redo log buffer. In such cases, consider increasing the size of the redo log buffer by adjusting the LOG_BUFFER parameter.


Additional Considerations:

Monitoring: Regularly monitor the redo buffer allocation retries statistic over time to detect trends or sudden increases.

System Resources: Ensure that increasing the LOG_BUFFER does not adversely affect other memory allocations within the System Global Area (SGA).


By maintaining the redo buffer allocation retries metric near zero, you can help ensure that the redo log buffer is adequately sized, reducing potential contention and improving overall database performance.

If you have a definite diagnosis of insufficient buffer space, then try starting with smaller values and then analyze the database behavior. Do not jump between multiple values all at once.

4. Setting an Optimal Value

If redo allocation retries are high, increase the buffer:

ALTER SYSTEM SET LOG_BUFFER = <new_value_in_bytes> SCOPE=SPFILE;

SHUTDOWN IMMEDIATE;
STARTUP;

Note: When processes wait on the Log Buffer Space event, the log buffer is filling up faster than what the LGWR process can write to disk. If this wait event is frequent, throughput is reduced for the entire database. To address this problem, look at increasing the LOG_BUFFER to an optimal size.

Other tuning alternatives include:

• Put log files on faster disks.

• Look at tuning checkpoint or archive processes (log files switches).

• Review application design, use NOLOGGING operations where appropriate, and avoid changing more data than required.

• Finally, check refresh methods when using Materialized Views to reduce logging activity.


The following statistics (available in V$SYSSTAT) indicate contention in the redo log buffer at some level:


Contention Statistics in the Redo Log Buffer


Redo buffer allocation retries:
Indicates that a user process has had to wait for space in the redo log buffer. Increase the size of the redo log buffer if you see this statistic causing wait issues. This might also imply an I/O problem with the online redo logs (since delays in writing to the online redo logs can delay allocating space in the redo log buffer).


Redo writer latching time:
High values over a short period of time can indicate latching issues associated with the LGWR process. This might be a result of disk contention causing LGWR writes to the online redo logs to be delayed.

Redo log space wait time:
Indicates sessions have waited for space to be allocated from the redo log buffer. High values can indicate latch contention for the redo logs. This could indicate an insufficient number of redo logs, or redo logs that are not sized properly. Additionally this can indicate IO problems with the disks that the redo logs are located on.


Note:
If you have a definite diagnosis of insufficient buffer space, start with smaller values and analyze the database behavior; do not apply several changes all at once. Also, the value Oracle computes automatically is usually sufficient.

Written by:
Alireza Kamrani



Tuesday, March 11, 2025

An Overview of Oracle Media Recovery

Oracle Datafile Media Recovery
vs.
Block Media Recovery

Alireza Kamrani
11 March 2025

Performing Block Media Recovery

This topic explains how to restore and recover individual data blocks within a data file, covering the following topics:

• Overview of Block Media Recovery

• Prerequisites for Block Media Recovery

• Recovering Individual Blocks

• Recovering All Blocks in V$DATABASE_BLOCK_CORRUPTION


Overview of Block Media Recovery

Block media recovery provides a lower mean time to recover (MTTR) by repairing only the corrupt data blocks.

This section contains the following topics:

• Purpose of Block Media Recovery

• Basic Concepts of Block Media Recovery

Purpose of Block Media Recovery

Use block media recovery to recover one or more corrupt data blocks within a data file.

Block media recovery provides the following advantages over data file media recovery:

• Lowers the mean time to recover (MTTR) because only blocks needing recovery are restored and recovered

• Enables affected data files to remain online during recovery

Without block media recovery, if even a single block is corrupt, then you must take the data file offline and restore a backup of the data file. You must apply all redo generated for the data file after the backup was created. The entire file is unavailable until media recovery completes. With block media recovery, only the blocks actually being recovered are unavailable during the recovery.

Block media recovery is most useful for physical corruption problems that involve a small, well-known number of blocks. Block-level data loss usually results from intermittent, random I/O errors that do not cause widespread data loss, and memory corruptions that are written to disk. Block media recovery is not intended for cases where the extent of data loss or corruption is unknown and the entire data file requires recovery. In such cases, data file media recovery is the best solution.

Basic Concepts of Block Media Recovery

Usually, the database marks a block as media corrupt and then writes it to disk when the corruption is first encountered. No subsequent read of the block is successful until the block is recovered. You can perform block recovery only on blocks that are marked corrupt or that fail a corruption check.

Typically, block corruption is reported in the following locations:

• Results of the LIST FAILURE, VALIDATE, or BACKUP ... VALIDATE command

• The V$DATABASE_BLOCK_CORRUPTION view

• Error messages in standard output

• The alert log

• User trace files

• Results of the SQL commands ANALYZE TABLE and ANALYZE INDEX

• Results of the DBVERIFY utility

• Third-party media management output

For example, you may discover the following messages in a user trace file:


ORA-01578: ORACLE data block corrupted (file # 7, block # 3)
ORA-01110: data file 7: '/oracle/oradata/trgt/tools01.dbf'
ORA-01578: ORACLE data block corrupted (file # 2, block # 235)
ORA-01110: data file 2: '/oracle/oradata/trgt/undotbs01.dbf'

Note:

Starting in Oracle Database 19c, the Data Recovery Advisor (DRA) feature is deprecated.

The deprecation of DRA includes deprecation of the following Oracle Recovery Manager (RMAN) commands: 
LIST FAILURE, ADVISE FAILURE, REPAIR FAILURE, and CHANGE FAILURE. 

Database administrators will no longer have access to these commands. There is no replacement feature for DRA.

About Block Recovery and Standby Databases

Block recovery behavior depends on whether the data block corruption was discovered on the primary database or the physical standby database.

If the database on which the corruption occurs is associated with a real-time query physical standby database, then the database automatically attempts to perform block media recovery.
The primary database searches for good copies of blocks on the standby database and, if they are found, repairs the blocks with no impact to the query that encountered the corrupt block.
The Oracle Database physical block corruption message (ORA-1578) is displayed only if the database cannot repair the corruption.

Whenever block corruption has been automatically detected, you can perform block media recovery manually with the RECOVER ... BLOCK command.


By default, RMAN first searches for good blocks in the real-time query physical standby database, then flashback logs and then blocks in full or level 0 incremental backups.

Note:

For block media recovery to work automatically, the physical standby database must be in real-time query mode.
An Oracle Active Data Guard license is required.

If a corrupt data block is discovered on a real-time query physical standby database, the server attempts to repair the corruption by obtaining a copy of the block from the primary database. The repair is performed in the background, enabling subsequent queries to succeed if the repair is successful.
Automatic block repair is attempted if the following database initialization parameters are configured on the standby database as described:

• The LOG_ARCHIVE_CONFIG parameter is configured with a DG_CONFIG list and a LOG_ARCHIVE_DEST_n parameter is configured for the primary database with the DB_UNIQUE_NAME attribute

or

• The FAL_SERVER parameter is configured and its value contains an Oracle Net service name for the primary database

Note:

If a corrupt block is detected during validation, such as by the RMAN VALIDATE command, then recovery is not initiated automatically.

About Identifying Corrupt Blocks

The V$DATABASE_BLOCK_CORRUPTION view displays blocks marked corrupt by database components such as RMAN, ANALYZE, and SQL queries.

The following types of corruption result in the addition of rows to this view:

• Physical corruption (sometimes called media corruption)

The database does not recognize the block: the checksum is invalid, the block contains all zeros, or the block header is corrupt.

Physical corruption checking is enabled by default.

You can turn off checksum checking by specifying the NOCHECKSUM option of the BACKUP command, but other physical consistency checks, such as checks of the block headers and footers, cannot be disabled.

• Logical corruption

The block has a valid checksum, the header and footer match, and so on, but the contents are logically inconsistent. Block media recovery may not be able to repair all logical block corruptions.

In these cases, alternate recovery methods, such as tablespace point-in-time recovery, or dropping and re-creating the affected objects, may repair the corruption.

Logical corruption checking is disabled by default.

You can turn it on by specifying the CHECK LOGICAL option of the BACKUP, RESTORE, RECOVER, and VALIDATE commands.
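
For example, the following RMAN command validates the whole database with logical checking enabled (no backup is written) and records any corruption it finds in V$DATABASE_BLOCK_CORRUPTION:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

SQL> SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;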

The database can detect some corruptions by validating relationships between blocks and segments, but cannot detect them by a check of an individual block. The V$DATABASE_BLOCK_CORRUPTION view does not record at this level of granularity.

About Missing Redo During Block Recovery

Block media recovery only requires an unbroken set of redo changes for the blocks being recovered. This is unlike data file recovery that requires an unbroken series of redo changes from the beginning of recovery to the end.

Like data file media recovery, block media recovery cannot generally survive a missing or inaccessible archived log, although it attempts restore failover when looking for usable copies of archived redo log files. Also, block media recovery cannot survive physical redo corruptions that result in checksum failure.
However, block media recovery can survive gaps in the redo stream if the missing or corrupt redo records do not affect the blocks being recovered.

Note:

Each block is recovered independently during block media recovery, so recovery may be successful for a subset of blocks.

When RMAN first detects missing or corrupt redo records during block media recovery, it does not immediately signal an error because the block undergoing recovery may create one later in the redo stream.

When a block is re-created, all previous redo for that block becomes irrelevant because the redo applies to an old incarnation of the block.

For example, the database creates a new block when users drop or truncate a table and then use the block for other data.

Assume that media recovery is performed on block 13 (the original post illustrates this scenario with a figure):

After block recovery begins, RMAN discovers that change 120 is missing from the redo stream, either because the log block is corrupt or because the log cannot be found.

RMAN continues recovery if block 13 is re-created later in the redo stream.

Assume that in change 140 a user drops the table employees stored in block 13, allocates a new table in this block, and inserts data into the new table.
At this point, the database formats block 13 as a new block.
Recovery can now proceed with this block even though some redo preceding the recreation operation was missing!

Prerequisites for Block Media Recovery

Certain prerequisites must be met before you perform block media recovery by using the RECOVER ... BLOCK command.

The prerequisites include the following:

• The target database must run in ARCHIVELOG mode and be open or mounted with a current control file.

• If the target database is a standby database, then it must be in a consistent state, recovery cannot be in session, and the backup must be older than the corrupted file.

• The backups of the data files containing the corrupt blocks must be full or level 0 backups. They cannot be proxy copies or incremental backups.

If only proxy copy backups exist, then you can restore them to a nondefault location on disk, in which case RMAN considers them data file copies and searches them for blocks during block media recovery.

• RMAN can use only archived redo logs for the recovery.

RMAN cannot use level 1 incremental backups!

Block media recovery cannot survive a missing or inaccessible archived redo log, although it can sometimes survive missing redo records.

• Flashback Database must be enabled on the target database for RMAN to search the flashback logs for good copies of corrupt blocks.

If flashback logging is enabled and contains older, uncorrupted versions of the corrupt blocks, then RMAN can use these blocks, possibly speeding up the recovery.

• The target database must be associated with a real-time query physical standby database for RMAN to search the database for good copies of corrupt blocks.

Recovering Individual Blocks

Use the RECOVER...BLOCK command to recover individual corrupt blocks in a data file.

This section contains the following topics:

• Recovering Individual Blocks Using the RECOVER...BLOCK Command

• Example: Recovering Individual Blocks Using the Data Recovery Advisor

Recovering Individual Blocks Using the RECOVER...BLOCK Command

You identify the blocks that require recovery and then use any available backup to restore and recover these blocks.

To recover specific data blocks using the RECOVER...BLOCK command:

• Obtain the data file numbers and block numbers of the corrupted blocks.

The easiest way to locate trace files and the alert log is to connect SQL*Plus to the target database and execute the following query:

SELECT NAME, VALUE FROM V$DIAG_INFO;

• Start RMAN and connect to the target database, which must be mounted or open.

• Run the SHOW ALL command to confirm that the appropriate channels are preconfigured.

• Run the RECOVER ... BLOCK command at the RMAN prompt, specifying the file and block numbers for the corrupted blocks.

The following example recovers two blocks.


RECOVER
DATAFILE 8 BLOCK 13
DATAFILE 2 BLOCK 19;

You can also specify various options to control RMAN behavior.

The following example indicates that only backups with the tag mondayam are used when searching for blocks.

You could use the FROM BACKUPSET option to restrict the type of backup that RMAN searches, or the EXCLUDE FLASHBACK LOG option to restrict RMAN from searching the flashback logs.

RECOVER
DATAFILE 8 BLOCK 13
DATAFILE 2 BLOCK 199
FROM TAG mondayam;

Example: Recovering Individual Blocks Using the Data Recovery Advisor

You can use the Data Recovery Advisor to diagnose and repair failures caused by data block corruptions. In this example, corrupt data blocks were discovered when the VALIDATE DATABASE command was run.

🔴Starting in Oracle Database 19c, the Data Recovery Advisor (DRA) feature is deprecated.

🤦‍♂️The deprecation of DRA includes deprecation of the following Oracle Recovery Manager (RMAN) commands: LIST FAILURE, ADVISE FAILURE, REPAIR FAILURE, and CHANGE FAILURE. 
Database administrators will no longer have access to these commands. There is no replacement feature for DRA.

To generate automated repair options and repair the failure using the Data Recovery Advisor:

• Start RMAN and connect to the target database.

% rman TARGET /
% rman TARGET user@prod NOCATALOG

• List the failures recorded by the Data Recovery Advisor using the following command:

LIST FAILURE;

Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
5720       HIGH     OPEN      24-APR-14     Datafile 14:
'/home1/oracle/dbs/tbs_32.f' contains one or more corrupt blocks


• Generate repair options for the failure listed in Step 2.

The following command generates repair options and creates a repair script to perform the automated repair tasks.

ADVISE FAILURE;

Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
5720       HIGH     OPEN      24-APR-14     Datafile 14:
'/home1/oracle/dbs/tbs_32.f' contains one or more corrupt blocks

analyzing automatic repair options; this may take some time
using channel ORA_DISK_1
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
no manual actions available

Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Perform block media recovery of block 20 in file 14 
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /home1/oracle/log/diag/rdbms/db12/hm/reco_287949467.hm

• Perform the automated repairs recommended by Data Recovery Advisor.

RMAN uses the repair script generated by the ADVISE FAILURE command to perform the required repairs.

REPAIR FAILURE;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /home1/oracle/log/diag/rdbms/db12/hm/reco_287949467.hm

contents of repair script:
   # block media recovery
   recover datafile 14 block 20;

Do you really want to execute the above repair (enter YES or NO)? yes

executing repair script

Starting recover at 24-APR-14
using channel ORA_DISK_1
channel ORA_DISK_1: restoring block(s)
channel ORA_DISK_1: specifying block(s) to restore from backup set
restoring blocks of datafile 00014
channel ORA_DISK_1: reading from backup piece /backups/DB121/backupset/2024_04_24/o1_mf_nnndf_TAG20240424T213309_9omsd7vb_.bkp
channel ORA_DISK_1: piece handle=/backups/DB121/backupset/2024_04_24/o1_mf_nnndf_TAG20240424T213309_9omsd7vb_.bkp tag=TAG20240424T213309
channel ORA_DISK_1: restored block(s) from backup piece 1
channel ORA_DISK_1: block restore complete, elapsed time: 00:00:01

starting media recovery
media recovery complete, elapsed time: 00:00:03

Finished recover at 24-APR-14
repair failure complete

When the LIST FAILURE command displays more than one failure, you can perform repair actions for a particular failure only.

Use the option number displayed in the Automated Repair Options section of the ADVISE FAILURE command output to perform specific repair actions.

The following command performs only the repair actions listed under Option 2 of the Automated Repair Options section.

REPAIR FAILURE USING ADVISE OPTION 2;

Recovering All Blocks in V$DATABASE_BLOCK_CORRUPTION

RMAN can automatically recover all blocks listed in the V$DATABASE_BLOCK_CORRUPTION view.

To recover all blocks logged in V$DATABASE_BLOCK_CORRUPTION:

• Start SQL*Plus and connect to the target database.

• Query V$DATABASE_BLOCK_CORRUPTION to determine whether corrupt blocks exist. For example, execute the following statement:

SQL> SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;

• Start RMAN and connect to the target database, as described in "Making Database Connections with RMAN".

• Recover all blocks marked corrupt in V$DATABASE_BLOCK_CORRUPTION.

The following command repairs all physically corrupted blocks recorded in the view:

RMAN> RECOVER CORRUPTION LIST;

After the blocks are recovered, the database removes them from V$DATABASE_BLOCK_CORRUPTION.

  Alireza Kamrani
♠️ ORACLE ACE ♠️


