Monday, April 7, 2025

Using an ORM like Hibernate on Oracle Database: Challenges and Recommendations

 🎭 Recommendations for working with ORMs that automatically generate long, verbose SQL statements, and the challenges they pose for achieving a preferred database execution plan


                             Alireza Kamrani
                                                04/07/2025

🔻As a database administrator, have you ever been in a situation where you had to convert ORM-generated queries to lazy loading or native SQL?

🎗When dealing with complex SQL queries generated by ORMs like Hibernate, which often produce long aliases and suboptimal execution plans, you can apply the following standard techniques to optimize them without necessarily forcing developers to switch to lazy loading.

1. Analyze Execution Plan and Identify Bottlenecks

• Use EXPLAIN (ANALYZE) (PostgreSQL), EXPLAIN PLAN (Oracle), or SET STATISTICS IO, TIME ON (SQL Server) to analyze the query execution plan.

• Look for issues like full table scans, index scans vs. index seeks, sorts, hash joins, etc.

• Identify whether inefficient joins, unnecessary columns, or incorrect index usage is causing slow performance.

2. Apply Query Rewriting & Optimization

• Use CTEs or materialized views: If queries are deeply nested, consider Common Table Expressions (CTEs) or materialized views to precompute parts of the query.

• Reduce unnecessary columns: Hibernate queries often select more columns than needed. Encourage the development team to use projection queries instead of SELECT *.

• Optimize JOIN order: Ensure the most selective tables are joined first.

• Limit the result set: If possible, use LIMIT or TOP to reduce result size.
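
As a sketch of the CTE approach, a deeply nested ORM-generated statement can often be broken up with a WITH clause. The tables (`orders`, `customers`) and columns below are hypothetical, purely for illustration:

```sql
-- Precompute an aggregate once in a CTE instead of repeating
-- the subquery inside a nested ORM-generated statement.
WITH recent_orders AS (
    SELECT customer_id, SUM(amount) AS total_amount
    FROM   orders
    WHERE  order_date >= SYSDATE - 30
    GROUP  BY customer_id
)
SELECT c.customer_id, c.name, r.total_amount
FROM   customers c
JOIN   recent_orders r ON r.customer_id = c.customer_id
ORDER  BY r.total_amount DESC
FETCH  FIRST 100 ROWS ONLY;
```

The FETCH FIRST clause also illustrates the "limit the result set" point above (Oracle 12c+ syntax).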

3. Use Indexing Strategies

• Ensure proper indexing: Verify that the query filters and joins utilize indexes efficiently.

• Use covering indexes: If specific columns are frequently queried together, create a covering index.

• Partitioning: If the dataset is large, partitioning can help improve performance.
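
A minimal sketch of a covering index, again using the hypothetical `orders` table: if queries routinely filter on `status` and `order_date` and return only `customer_id`, a composite index can answer the query without visiting the table at all:

```sql
-- Covering index: filter columns first, then the projected column,
-- so the query can be satisfied entirely from the index.
CREATE INDEX orders_status_date_ix
    ON orders (status, order_date, customer_id);
```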

4. Force a Better Execution Plan (DB-Specific)

• SQL Server: Use query hints like OPTION (FORCE ORDER, RECOMPILE, OPTIMIZE FOR, HASH JOIN, MERGE JOIN).

• PostgreSQL: Use SET enable_nestloop = OFF, SET enable_hashjoin = OFF for fine-tuning execution.

• Oracle: Use optimizer hints like /*+ INDEX(table index_name) */,  /*+ LEADING(table) */.

5. Use Hibernate-Specific Optimizations

• Enable query caching: Hibernate has second-level cache and query caching mechanisms to reduce redundant executions.

• Fine-tune fetch strategy: Instead of changing to Lazy loading globally, use:

• @BatchSize(size = X) to optimize IN clause fetches.

• JOIN FETCH to prefetch related entities efficiently.

6. Consider SQL Plan Baselines (Oracle) or Query Store (SQL Server)

• SQL Server Query Store: Helps force a good execution plan when the optimizer picks a bad one.

• Oracle SQL Plan Baselines: Store an optimal plan and prevent regressions.

• PostgreSQL pg_hint_plan: Allows manual hints to guide the optimizer.

7. Encourage Better Query Practices in Development

• Avoid SELECT N+1 problem: Use batch fetching strategies.

• Encourage use of DTOs: Instead of fetching entire entity objects, fetch only necessary fields.

• Optimize Hibernate mappings: Tune fetch, batch size, lazy/eager loading, and caching.

8. Monitor and Profile Regularly

• Use AWR Reports (Oracle), Query Store (SQL Server), or pg_stat_statements (PostgreSQL) to track slow queries.

• Set up automated performance alerts when query execution time exceeds a threshold.
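
On Oracle, for instance, a quick way to spot the most expensive statements without waiting for an AWR report is to query V$SQL directly. This is a sketch; adjust the ordering metric (elapsed time, buffer gets, executions) to your workload:

```sql
-- Top 10 statements by average elapsed time per execution.
SELECT sql_id,
       executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_sec,
       SUBSTR(sql_text, 1, 80) AS sql_text
FROM   v$sql
WHERE  executions > 0
ORDER  BY elapsed_time / executions DESC
FETCH  FIRST 10 ROWS ONLY;
```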

Final Thought

If ORM-generated queries are consistently problematic, consider a hybrid approach:

• Allow Hibernate for general queries.

• Use hand-written SQL (native queries or stored procedures) for critical performance-sensitive operations.


🔴Recommendations for Oracle Database

For Oracle, here’s how you can optimize complex SQL queries generated by Hibernate or other ORMs when the optimizer struggles to generate an efficient execution plan.

1. Analyze and Understand the Execution Plan

• Use EXPLAIN PLAN:

EXPLAIN PLAN FOR <your_query>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

• Use DBMS_XPLAN.DISPLAY_CURSOR (for live execution statistics):

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

• Look for key issues like:

• Full Table Scans (TABLE ACCESS FULL)

• Inefficient Index Usage

• Expensive Nested Loops

• High TEMP usage (implies sorting or hash joins)

• High Cardinality Estimates

2. Optimize Query Structure

• Rewrite queries using Common Table Expressions (CTEs) (WITH clause) to break down large queries.

• Reduce unnecessary joins and subqueries: Hibernate often generates redundant subqueries.

• Remove unnecessary columns: Hibernate sometimes selects all columns (SELECT *), impacting performance.

3. Force a Better Execution Plan

If the Oracle optimizer picks a suboptimal plan, try Optimizer Hints:

• Force index usage:

SELECT /*+ INDEX(my_table my_index) */ col1, col2
FROM my_table
WHERE col1 = 'value';

• Control join order:

SELECT /*+ LEADING(t1 t2) */ t1.*, t2.*
FROM table1 t1
JOIN table2 t2 ON t1.id = t2.id;

• Force a hash join:

SELECT /*+ USE_HASH(t1 t2) */ t1.*, t2.*
FROM table1 t1
JOIN table2 t2 ON t1.id = t2.id;

• Force a nested loop join:

SELECT /*+ USE_NL(t1 t2) */ t1.*, t2.*
FROM table1 t1
JOIN table2 t2 ON t1.id = t2.id;

• Parallel execution for large queries:

SELECT /*+ PARALLEL(4) */ * FROM my_large_table;


4. SQL Plan Management (SQL Baselines)

If you find a good execution plan and want Oracle to stick to it:

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- LOAD_PLANS_FROM_CURSOR_CACHE is a function returning the number of plans loaded
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'your_sql_id');
END;
/

Then mark the preferred plan as a fixed plan.
🔆 SPM can also be used to effectively substitute a better-formed SQL statement on the fly.

This prevents the optimizer from changing the plan unexpectedly.
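
A minimal sketch of fixing an accepted baseline plan; the sql_handle and plan_name values below are placeholders you would look up in DBA_SQL_PLAN_BASELINES:

```sql
-- Mark an accepted plan as FIXED so the optimizer prefers it over
-- any non-fixed plans in the baseline.
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
               sql_handle      => 'SQL_1234567890abcdef',   -- placeholder
               plan_name       => 'SQL_PLAN_example',       -- placeholder
               attribute_name  => 'FIXED',
               attribute_value => 'YES');
END;
/
```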

5. Adaptive Query Optimization (12c and Above)

Oracle 12c+ introduced Adaptive Query Optimization. If Hibernate-generated queries struggle, consider adjusting these parameters:

ALTER SESSION SET optimizer_adaptive_features = FALSE;   -- 12.1 only
ALTER SESSION SET optimizer_adaptive_statistics = FALSE; -- 12.2 and later (e.g., 19c)

These settings prevent Oracle from making unstable plan choices.

6. Hibernate-Specific Optimizations

• Enable Second-Level Cache (EhCache, OSCache, Infinispan) to reduce redundant queries.

• Use @BatchSize(size = X) to improve fetching

• Avoid JOIN FETCH when unnecessary: This loads large result sets.

• Use native queries for performance-critical reports.

Example:

@Query(value = "SELECT /*+ INDEX(t my_index) */ t.* FROM my_table t WHERE t.status = :status",
       nativeQuery = true)
List<MyEntity> findByStatus(@Param("status") String status);

7. Monitor Performance Proactively

• AWR Reports (@?/rdbms/admin/awrrpt.sql)

• SQL Monitoring:

SELECT * FROM v$sql_monitor
WHERE sql_text LIKE '%your_query_pattern%';

• Active Session History (ASH); note that V$ACTIVE_SESSION_HISTORY records the SQL_ID but not the SQL text, so join to V$SQL:

SELECT ash.sql_id, ash.sample_time, s.sql_text
FROM v$active_session_history ash
JOIN v$sql s ON s.sql_id = ash.sql_id
WHERE s.sql_text LIKE '%your_query_pattern%';

8. SQL Patch - SQL Transformation

You can use a SQL Patch to apply an internal transformation to rewrite the SQL, reducing excessive aliasing:

DECLARE
  l_patch VARCHAR2(128);
BEGIN
  -- In 12.2+, CREATE_SQL_PATCH is a function returning the patch name
  l_patch := DBMS_SQLDIAG.CREATE_SQL_PATCH(
               sql_text  => 'SELECT long_alias_format_generated_by_ORM',
               hint_text => '/*+ NO_MERGE(alias) INLINE(alias) FULL(alias) */');
END;
/

This can help Oracle simplify execution, but it won't fully rewrite the SQL into a more compact form.

9. SQL Profiles
(DBMS_SQLTUNE.IMPORT_SQL_PROFILE)

If you have a better manually rewritten query that performs well, you can apply a SQL Profile to guide the optimizer. SQL Profiles provide statistical corrections rather than full rewrites but can improve performance.
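
As a sketch of this technique: DBMS_SQLTUNE.IMPORT_SQL_PROFILE attaches a set of hints to a statement without touching the application. The statement text must match the ORM-generated SQL exactly, and the hint shown is a placeholder:

```sql
-- Attach placeholder hints to a problem statement via a SQL Profile.
BEGIN
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => 'SELECT long_alias_format_generated_by_ORM',
    profile     => sqlprof_attr('INDEX(@"SEL$1" "T"@"SEL$1" "MY_INDEX")'),
    name        => 'orm_query_profile',
    force_match => TRUE);  -- match literals like CURSOR_SHARING=FORCE
END;
/
```

force_match => TRUE is useful for ORM workloads, since it lets the profile match statements that differ only in literal values.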

10. Use a Materialized View or a SQL Macro

If your ORM frequently generates complex views with excessive joins, consider creating a Materialized View or SQL Macro (introduced in 19c) to simplify the execution path.
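
A hedged sketch of a table SQL macro (19c+), using the same hypothetical `customers`/`orders` tables: the macro encapsulates the join once, and callers simply select from it as if it were a table:

```sql
-- Table SQL macro: the optimizer expands the returned SQL text inline,
-- so there is no PL/SQL context switch at query time.
CREATE OR REPLACE FUNCTION active_customer_orders
  RETURN VARCHAR2 SQL_MACRO(TABLE)
IS
BEGIN
  RETURN q'[
    SELECT c.customer_id, c.name, o.order_id, o.amount
    FROM   customers c
    JOIN   orders o ON o.customer_id = c.customer_id
    WHERE  c.status = 'ACTIVE'
  ]';
END;
/

SELECT * FROM active_customer_orders();
```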

11. Hibernate-Level Optimization

Instead of forcing changes at the database level, tuning the Hibernate Query Generation Strategy can reduce aliasing. Consider:

• Using DTO projections instead of fetching full entities.

• Enabling query transformations in Hibernate.

• Using native SQL queries for complex reporting use cases.

Final Recommendation

If you want to force Oracle to interpret a complex ORM query in a simpler form, use a SQL Patch with a transformation hint, or SPM to substitute the statement.

For a complete rewrite, however, it is better to adjust the ORM settings or expose a simpler query structure through a database view.

• If Hibernate is causing issues, consider a hybrid approach: Use Hibernate for standard queries and PL/SQL procedures or native queries for performance-sensitive operations.

• SQL Plan Baselines can be your best friend for ensuring a stable, optimized plan over time.

• Indexing and proper join strategies should be reviewed regularly to adapt to data growth.


I hope this post was helpful to you.

Note: All recommendations reviewed in this post need to be tested and customized for your workload, structures, and design before being applied in production.


Alireza Kamrani

Saturday, April 5, 2025

Benefits of Upgrading Oracle ASMLib v3


  04/05/2025
Alireza Kamrani 

Oracle ASMLib (Automatic Storage Management Library) v3 introduces several improvements, particularly in performance, compatibility, and stability for ASM-managed storage:


• io_uring Integration: ASMLib v3 takes advantage of io_uring (if supported by the kernel) to enhance I/O performance, reducing system call overhead and improving disk access speeds.


• Better Kernel Compatibility: ASMLib v3 is designed to work with modern Linux kernels, particularly Oracle Linux 9 with UEK R7 or RHCK, ensuring long-term support and stability.


• Optimized I/O Handling: With io_uring, ASMLib can achieve lower latency and higher throughput for ASM disk operations, improving database performance.


• Multipath & Storage Improvements: Enhanced support for multipath devices ensures stable disk access, reducing the risk of ASM disk errors.


• Independent Updates: Unlike earlier versions that required kernel module recompilation, ASMLib v3 can be updated separately, simplifying maintenance.


How io_uring Benefits Oracle Databases


io_uring is a modern Linux I/O framework that optimizes database storage operations in the following ways:


• Asynchronous I/O with Lower Overhead: Reduces system call overhead, improving CPU efficiency.


• Batch Submission & Completion Queues: Supports bulk I/O processing, reducing latency for database workloads.


• Improved Log Writes & Checkpoints: Enhances performance for frequent redo log writes and data checkpointing.


• Optimized Multi-Threading: Supports non-blocking I/O, reducing contention in high-concurrency environments.


• Direct I/O Support: Works well with Direct I/O, minimizing memory copying and increasing efficiency for databases.


Considerations & Limitations


• Kernel Dependency: io_uring benefits are only available if running a supported kernel (e.g., UEK R7, OL9 RHCK).


• Data Integrity Passthrough Limitation: Certain integrity checks are not supported with io_uring due to kernel constraints, potentially affecting reliability in some cases.


Comparison of io_uring vs. libaio for Oracle Databases


Both io_uring and libaio provide asynchronous I/O (AIO) capabilities, but io_uring is a modern alternative with performance improvements. Here’s a detailed comparison:


1. Architecture & Mechanism


• System Calls


libaio: Requires multiple system calls for submission and completion.


io_uring: Uses shared ring buffers, reducing the number of system calls.


• Batching


libaio: Supports batching, but each batch still requires an extra system call.


io_uring: Supports native batching with submission and completion queues, improving efficiency.


• Polling


libaio: No direct polling support; relies on wake-ups.


io_uring: Supports polling, reducing latency and CPU overhead.


• Zero-Copy Support


libaio: No native zero-copy support.


io_uring: Can avoid unnecessary data copies, improving performance.


• Kernel Dependency


libaio: Works on older Linux kernels (2.6+).


io_uring: Requires Linux 5.1+ (or Oracle UEK R7 / OL9 RHCK).


2. Performance Considerations


• Latency


libaio: Higher due to additional system calls.


io_uring: Lower latency because it minimizes syscall overhead.


• Throughput


libaio: Good performance but suffers under high concurrency.


io_uring: Higher throughput due to reduced context switching.


• CPU Utilization


libaio: Higher CPU usage due to frequent context switches.


io_uring: Lower CPU overhead thanks to optimized batching and async handling.


• I/O Concurrency


libaio: Scales well but has syscall-related bottlenecks.


io_uring: Scales better, allowing higher parallelism in workloads.


3. Oracle Database Use Cases


• Redo Log Writes


libaio: Supported, but suffers from syscall overhead.


io_uring: More efficient due to polling and batching capabilities.


• Checkpoints & Background Writes


libaio: Works well but can experience delays due to context switching.


io_uring: Improves handling by reducing latency and CPU load.


• Datafile I/O (Direct I/O)


libaio: Compatible with ASM and filesystem I/O.


io_uring: More efficient for ASM-managed storage.


• Backup & Restore Operations


libaio: Uses traditional AIO methods, which can be slower.


io_uring: Faster due to reduced system call overhead.


Which One to Use?


• If using Oracle ASM with ASMLib v3 and a supported kernel (UEK R7, OL9 RHCK) → io_uring is the better choice for improved performance.


• If running on an older kernel or using direct database I/O without ASM → libaio is more stable and officially supported.



Configuring ASM I/O Filtering


oracleasm-support includes an ASM I/O filtering feature that depends on BPF infrastructure support in the kernel. This feature is available in UEK R7 or Oracle Linux 9 with RHCK. When enabled, the I/O filter feature rejects any write operations that aren't started by ASM and prevents writes to ASM disks by admin commands such as dd after disks have been added to the ASM system.


• Run the configuration utility to enable or disable I/O filtering.


By default, the I/O filter feature is enabled. Use the oracleasm configure command to disable or enable the I/O filter feature.


• Disable the I/O filter.


sudo oracleasm configure --iofilter n


• Enable the I/O filter.


sudo oracleasm configure --iofilter y


• Run the configuration utility to set the maximum number of disk devices that ASMLIB can use with I/O filtering.


I/O filtering requires a mapping of the maximum number of disk devices that ASMLIB can use. The default value is 2048, but this value can be changed to any value, such as 4096, by running:


sudo oracleasm configure --maxdevs 4096


Checking ASMLIB Configuration Status


Use the oracleasm status command to show the status of ASMLIB configuration. This command can help identify issues and can show which features are enabled.


• Run oracleasm status to view the current configuration status.


sudo oracleasm status


The following example output is taken from a system running UEK R7:

Checking if the oracleasm kernel module is loaded: no (Not required with kernel 5.15.0)
Checking if /dev/oracleasm is mounted: no (Not required with kernel 5.15.0)
Checking which I/O Interface is in use: io_uring (KABI_V3)
Checking if io_uring is enabled: yes
Checking if ASM disks have the correct ownership and permissions: yes
Checking if ASM I/O filter is set up: yes

The following checks are performed:


• Check if the oracleasm kernel module is loaded: The kernel module is required for earlier kernels that don't include io_uring.


• Check if the /dev/oracleasm is mounted: When the oracleasm kernel module is used, a device node is configured and mounted. This action isn't required with kernels that include io_uring.


• Check which I/O interface is being used: in the case of a kernel that's using KABI_V3 the io_uring interface is used, while a kernel using KABI_V2 uses the oracleasm driver interface.


Note that the following checks are only performed when KABI_V3 is detected:


• Check if io_uring is enabled: On a kernel that includes io_uring, the io_uring feature must be enabled to use ASMLIB.


• Check if ASM disks have correct ownership and permissions: Checks that any disk devices that are labeled for ASM use are owned by the user and group configured for ASM, and set when you initialized the configuration. 


• Check if ASM I/O filter is enabled and configured: On kernels that include the required BPF functionality, I/O filtering can be enabled and configured to protect ASM disks from accidental overwrites. 

Wednesday, April 2, 2025

How to evaluate and analyze the Network Optimization in Oracle Standby environment



♠️ Alireza Kamrani ♠️
        03/April/2025

When designing the network architecture between a standby and its primary database, it is essential to consider network bandwidth and to use settings that minimize packet-delivery delay. Understanding these factors is key to network tuning in a standby environment, and accurate measurements help ensure the standbys stay in sync even during peak hours.
Oracle Data Guard redo transport performance is directly dependent on the performance of the primary and standby systems, the network that connects them, and the I/O subsystem.

For most Oracle Data Guard configurations, you should be able to achieve zero or minimal data loss by troubleshooting and tuning redo transport.

To calculate the network bandwidth required for redo transport in a Data Guard environment, measure the redo rate on the primary database:

Formula : 

Required bandwidth = ((Redo rate bytes per sec. / 0.7) * 8) / 1,000,000 = bandwidth in Mbps

Note : Assuming TCP/IP network overhead of 30%.

Calculation :

1. Run Statspack or AWR during peak intervals to measure the redo rate.
2. For RAC, sum the redo rates across all instances.
3. Check the following SQL statement:
    SQL> SELECT * FROM v$sysmetric_history
         WHERE metric_name = 'Redo Generated Per Sec';
4. RDA output:
    Performance - AWR Report - Statistic: "redo size"

Example :

Let us assume the redo rate is 600 KB/sec (614,400 bytes/sec).

Required bandwidth =

((Redo rate bytes per sec. / 0.7) * 8) / 1,000,000 = bandwidth in Mbps

= ((614400/0.7) * 8) /1,000,000
= 7.02 Mbps
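
The same calculation can be run directly against the instance, deriving the peak observed redo rate from V$SYSMETRIC_HISTORY (keeping the 30% TCP/IP overhead assumption from the formula above):

```sql
-- Estimated Data Guard bandwidth requirement (Mbps) from the peak
-- redo rate; the metric's unit is bytes per second.
SELECT ROUND(((MAX(value) / 0.7) * 8) / 1000000, 2) AS required_mbps
FROM   v$sysmetric_history
WHERE  metric_name = 'Redo Generated Per Sec';
```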


In this topic, I will introduce a useful tool that helps DBAs make better decisions based on accurate measurements.

oratcptest is an Oracle-provided tool to evaluate network bandwidth and latency for Data Guard and Oracle RAC environments. It helps determine whether the network can handle redo transport between primary and standby databases.

1. Running oratcptest to Measure Bandwidth

The tool is available in Oracle 12c and later. It runs in client-server mode to simulate redo transport.

Step 1: Start the Listener on the Standby Server

On the standby database (or target system), start oratcptest in server mode:

oratcptest -server -host <STANDBY_HOST>

This opens a listener to receive test traffic.

Step 2: Run the Test from the Primary Database

On the primary database, run oratcptest in client mode:

oratcptest -client -host <STANDBY_HOST> -dir /tmp -time 60 -speed 100


Where:

• -client: Runs the test as a client.

• -host <STANDBY_HOST>: IP/hostname of the standby server.

• -dir /tmp: Location for temporary test files.

• -time 60: Duration of the test in seconds.

• -speed 100: Maximum speed in Mbps.

2. Analyzing the Output

After running the test, you get:

• Throughput (Mbps): Maximum sustainable bandwidth.

• Latency (ms): Round-trip delay.

• Packet loss: If network congestion occurs.


A sample output might look like:

Average Throughput: 200 Mbps
Latency: 5 ms
Packet Loss: 0.01%

This means your network can handle 200 Mbps, which should be sufficient if your redo rate is below this threshold.


3. Adjusting for Real-World Conditions

• Enable Compression: If redo transport compression is on, effective bandwidth may increase.

• Add Overhead: TCP/IP overhead (~10%) should be considered.

• Test During Peak Load: Network congestion can affect results.

Detailed guide on interpreting the output for Data Guard performance tuning:


Interpreting oratcptest Output for Oracle Data Guard Performance Tuning

Once you run oratcptest, the results help assess whether your network can sustain redo transport between the primary and standby databases. Here’s how to interpret the output and optimize performance.

1. Key Metrics from oratcptest Output

The output provides the following crucial performance indicators:


a) Throughput (Mbps)

• Definition: The maximum network bandwidth available for redo transport.

• Interpretation:

• If the reported throughput is higher than your redo generation rate, the network is sufficient.

• If the throughput is lower, you may experience redo lag, requiring optimizations.



Example:

Average Throughput: 200 Mbps

👉 If your redo rate is 50 MB/sec (~400 Mbps), this bandwidth is insufficient, and you may experience standby lag.

b) Latency (ms)

• Definition: Time taken for data packets to travel between primary and standby.

• Ideal Value:

• For SYNC mode, latency should be < 5ms.

• For ASYNC mode, higher latency is tolerable but may impact failover times.

Example:

Latency: 4.8 ms

👉 If using SYNC mode, this latency is acceptable. If it were >10 ms, you might need ASYNC mode or optimize network routing.

c) Packet Loss (%)

• Definition: Percentage of data packets lost during transmission.

• Ideal Value: 0% (or very close to zero).

• Impact:

• High packet loss (>0.1%) causes redo transport delays and Data Guard lag.

• If packet loss is high, check for network congestion or unstable links.



Example:

Packet Loss: 0.01%

👉 This is within an acceptable range. However, >0.1% packet loss needs troubleshooting.


2. Performance Tuning Based on Results

Depending on the results, take the following actions:

a) Low Bandwidth Issues

Problem: Reported bandwidth is lower than redo rate.
Solution:

• Enable redo transport compression:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby_db COMPRESSION=ENABLE';

• Upgrade network speed (e.g., from 1Gbps to 10Gbps).

• Use a dedicated network interface for redo transport.

• Check hardware devices such as network switches for correct configuration and speeds.

b) High Latency Issues

Problem: Latency > 10ms affecting SYNC mode.
Solution:

• Switch to ASYNC mode:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby_db ASYNC';

• Reduce distance between primary and standby.

• Use a private/dedicated network instead of shared internet.


c) Packet Loss Issues

Problem: Packet loss >0.1%, causing redo transport delays.
Solution:

• Check network congestion (QoS settings, firewall, VPN issues).

• Switch to ASYNC mode if using SYNC and experiencing high packet loss.

• Use a higher-quality network provider or optimize routing.


3. Validating Optimizations

After applying fixes, rerun oratcptest to validate improvements:

oratcptest -client -host <STANDBY_HOST> -dir /tmp -time 60 -speed 500

If throughput increases, latency decreases, and packet loss is near 0%, the network is tuned correctly.

Automating oratcptest for Continuous Monitoring

To ensure continuous monitoring of network performance for Oracle Data Guard, you can set up automated scripts that:


• Run oratcptest periodically (e.g., every hour).

• Log the results for trend analysis.

• Send alerts if bandwidth, latency, or packet loss exceed thresholds.


1. Creating an Automation Script

You can use a Bash script to automate the test on the primary database.

Script: oratcptest_monitor.sh

#!/bin/bash
# Set standby server hostname or IP
STANDBY_HOST="<standby-host>"
# Log file path
LOG_FILE="/var/log/oratcptest.log"

# Run the oratcptest command (adjust speed & duration as needed)
oratcptest -client -host "$STANDBY_HOST" -dir /tmp -time 60 -speed 500 > /tmp/oratcptest_result.txt

# Extract key metrics from the output
THROUGHPUT=$(grep "Throughput" /tmp/oratcptest_result.txt | awk '{print $3}')
LATENCY=$(grep "Latency" /tmp/oratcptest_result.txt | awk '{print $2}')
PACKET_LOSS=$(grep "Packet Loss" /tmp/oratcptest_result.txt | awk '{print $3}' | tr -d '%')

# Log results
echo "$(date) | Throughput: ${THROUGHPUT} Mbps | Latency: ${LATENCY} ms | Packet Loss: ${PACKET_LOSS}%" >> "$LOG_FILE"

# Define threshold values
THRESHOLD_BANDWIDTH=200     # Minimum acceptable bandwidth (Mbps)
THRESHOLD_LATENCY=10        # Maximum acceptable latency (ms)
THRESHOLD_PACKET_LOSS=0.1   # Maximum acceptable packet loss (%)

# Check if thresholds are exceeded and trigger alerts
if (( $(echo "$THROUGHPUT < $THRESHOLD_BANDWIDTH" | bc -l) )); then
  echo "ALERT: Low Network Bandwidth ($THROUGHPUT Mbps)!" | mail -s "Oracle Network Alert" admin@example.com
fi
if (( $(echo "$LATENCY > $THRESHOLD_LATENCY" | bc -l) )); then
  echo "ALERT: High Latency ($LATENCY ms)!" | mail -s "Oracle Network Alert" admin@example.com
fi
if (( $(echo "$PACKET_LOSS > $THRESHOLD_PACKET_LOSS" | bc -l) )); then
  echo "ALERT: High Packet Loss ($PACKET_LOSS%)!" | mail -s "Oracle Network Alert" admin@example.com
fi

# Clean up
rm -f /tmp/oratcptest_result.txt

2. Scheduling the Script Using Cron

To run this script every hour, add it to the crontab:

crontab -e

Add the following line:

0 * * * * /path/to/oratcptest_monitor.sh

This ensures the script runs at the start of every hour.

3. Analyzing Results

• The script logs all results in /var/log/oratcptest.log.

• Alerts are sent via email if thresholds are exceeded.

• You can visualize trends using tools like Grafana or ELK Stack.

♠️ Alireza Kamrani ♠️
