SQL Cursors Invalidated

v$sql INVALIDATIONS Column and Preventing SQL Cursor Invalidation

Reducing the number of hard parses reduces CPU usage, so avoiding hard parses is important for performance, especially in OLTP systems. Hard parsing is mostly reduced by using bind variables (instead of literals) or by modifying the CURSOR_SHARING parameter. For more information on identifying SQL statements that use literals, see Connor McDonald’s post “SQL statements using literals”. The main idea is to detect SQL statements sharing the same FORCE_MATCHING_SIGNATURE and to introduce bind variables in those statements.
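
As a quick sketch of that idea (not Connor’s exact query; the threshold of 10 similar cursors is an arbitrary illustration), you can group on FORCE_MATCHING_SIGNATURE to surface literal-heavy statements:

SQL> SELECT force_matching_signature,
            COUNT(*) AS similar_cursors,
            MIN(sql_id) AS sample_sql_id
     FROM   gv$sql
     WHERE  force_matching_signature > 0
     GROUP  BY force_matching_signature
     HAVING COUNT(*) > 10
     ORDER  BY COUNT(*) DESC;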

In this specific case, the high number of hard parses was due to SQL cursor invalidations, not the use of literals.

When a cursor is invalidated, a previously compiled and cached SQL cursor (the parsed statement together with its execution plan) becomes invalid and must be recompiled. This can happen for several reasons.

Reason 1 – Schema Changes: Changes to the database schema, such as altering tables, adding or dropping indexes, modifying columns, or changing a package body, can invalidate cursors that depend on those objects. The CURSOR_INVALIDATION parameter controls whether deferred or immediate cursor invalidation is used for DDL statements by default. Deferred invalidation reduces the number of cursor invalidations happening at the same time and spreads the recompilation workload over time. Note that when the recompilation workload is spread over time, a cursor may run with a sub-optimal plan until it is recompiled and may incur a small execution-time overhead. The default value of CURSOR_INVALIDATION is IMMEDIATE, which means schema changes invalidate dependent cursors immediately.
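
A minimal sketch, assuming 12.2 or later where deferred invalidation is available (the index name is illustrative; verify the exact clause placement in the SQL Language Reference):

SQL> ALTER SESSION SET cursor_invalidation = DEFERRED;

-- or opt in for a single DDL statement:
SQL> ALTER INDEX stage_orders_ix1 REBUILD ONLINE DEFERRED INVALIDATION;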

Reason 2 – Statistics Updates: When statistics are modified by DBMS_STATS, new cursors not yet cached in the shared pool will use them to build execution plans. Existing cursors cached in the shared pool, however, cannot update their execution plans. Instead, such cursors are invalidated and new versions (child cursors) are created, which get execution plans based on the updated statistics. This involves a hard parse, which consumes more resources than a soft parse that simply reuses a cached cursor.
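
This behavior is driven by the NO_INVALIDATE parameter of DBMS_STATS; a sketch with illustrative schema and table names:

SQL> BEGIN
       DBMS_STATS.GATHER_TABLE_STATS(
         ownname       => 'APP_OWNER',                  -- illustrative
         tabname       => 'ORDERS',                     -- illustrative
         no_invalidate => DBMS_STATS.AUTO_INVALIDATE);  -- roll invalidations over time (default)
     END;
     /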

Reason 3 – Parameter Changes: Changes to system or session parameters that affect SQL execution can cause cursor invalidation.

First, I started by detecting the most invalidated SQL statements, using the query below.

SQL> SELECT sql_id,
            SUM(invalidations)
     FROM   gv$sql
     WHERE  invalidations > 0
     GROUP  BY sql_id
     ORDER  BY SUM(invalidations) DESC
     FETCH  FIRST 20 ROWS ONLY;

Most of the top 20 were related to daily-loaded tables fed from a different unit through SSIS (SQL Server Integration Services) jobs, and the invalidations were caused by schema changes: these tables were being truncated and bulk-loaded periodically. We replaced most of the TRUNCATE operations with DELETE FROM statements, being mindful of the high-water mark, and moved the tables online during very long periodic maintenance windows to control segment size, roughly as sketched below.
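
A minimal sketch of the change (the table name is illustrative; MOVE ONLINE assumes 12.2 or later, and being DDL it still invalidates cursors, which is why we run it only in maintenance windows):

-- Before: TRUNCATE is DDL and invalidates every dependent cursor immediately
SQL> TRUNCATE TABLE stage_orders;

-- After: DELETE is DML, so dependent cursors stay valid
-- (but the high-water mark is not reset)
SQL> DELETE FROM stage_orders;
SQL> COMMIT;

-- Periodically, during a maintenance window, reclaim the space online
SQL> ALTER TABLE stage_orders MOVE ONLINE;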

You can use the commands shown below to identify what is invalidating the cursor of a specific SQL ID; in my example, I demonstrate this for the statement with sql_id ‘7uqjqjgmv9umc’. From 18c onwards, the _kgl_debug event can be set at debug level 99088 so that each invalidation of the specified cursor is reported to the alert log together with the full trace file name; you no longer need to check for invalidations and search for the trace file. More information is available in “How To Trace Specific Cursor Invalidation Using Library Cache Debugging (_KGL_DEBUG) Event (Doc ID 2745080.1)” and “How To Trace Overall Library Cache Objects Invalidation Happening At Particular Period (Doc ID 2746493.1)”.
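
A sketch of what this looks like, based on my reading of Doc ID 2745080.1 (verify the exact syntax there; _kgl_debug takes the cursor’s 32-character full hash value rather than the sql_id):

-- Find the full hash value of the cursor for sql_id '7uqjqjgmv9umc'
SQL> SELECT full_hash_value
     FROM   v$db_object_cache
     WHERE  hash_value = (SELECT DISTINCT hash_value
                          FROM   v$sql
                          WHERE  sql_id = '7uqjqjgmv9umc');

-- Report each invalidation of that cursor to the alert log (level 99088)
SQL> ALTER SYSTEM SET "_kgl_debug" = "hash='<full_hash_value>' debug=99088";

-- Switch the tracing off when finished
SQL> ALTER SYSTEM SET "_kgl_debug" = "";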

We also detected that when statistics for a highly popular object are gathered by the Automatic Statistics Gathering Job, the related SQL statements get invalidated. This behavior can be controlled with the DBMS_STATS preference shown below: DBMS_STATS.SET_GLOBAL_PREFS lets you choose whether invalidation happens immediately or is spread out over a time period long enough to prevent noticeable hard-parse spikes. The default time period is 300 minutes and is controlled by the “_optimizer_invalidation_period” parameter. More information is available in “Rolling Cursor Invalidations with DBMS_STATS.AUTO_INVALIDATE (Doc ID 557661.1)”.
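
For example (a sketch; 'DBMS_STATS.AUTO_INVALIDATE' spreads invalidations over the rolling window, 'TRUE' skips invalidating dependent cursors altogether):

-- Check the current global preference
SQL> SELECT DBMS_STATS.GET_PREFS('NO_INVALIDATE') FROM dual;

-- Spread cursor invalidations over the rolling window (the default)
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('NO_INVALIDATE', 'DBMS_STATS.AUTO_INVALIDATE')

-- Or do not invalidate dependent cursors at all
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('NO_INVALIDATE', 'TRUE')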

You may choose to fix (lock) the statistics for these highly popular objects, or simply accept the situation as we did, since in our case the primary root cause of the high invalidation count was not statistics gathering but the command used to refresh materialized views.

I tried to trace the overall invalidation in the SQL AREA for a specified interval on one instance, with the command shown below.
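
I am paraphrasing the command from memory, so treat this as a hedged sketch and verify the exact event syntax in Doc ID 2746493.1; the idea is the same _kgl_debug mechanism, scoped to the whole SQL AREA namespace instead of a single cursor:

SQL> ALTER SYSTEM SET "_kgl_debug" = "namespace='SQL AREA' debug=99088";

-- ... wait for the period of interest, then disable the tracing
SQL> ALTER SYSTEM SET "_kgl_debug" = "";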

What drew my attention was that although only one materialized view was refreshed, numerous dependent objects were invalidated. Moreover, many more SQL statements referring to these objects get invalidated every time we refresh a materialized view with atomic_refresh => FALSE. The example here involves one of the most critical and heavily used objects in that database.

So, I tried to measure the impact by determining how many SQL statements get invalidated by a single refresh of the worst offender.
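
One simple way to measure it (a sketch; the MV name is illustrative, and a reasonably quiet system is assumed so that the delta is attributable to the refresh and no cursors age out in between):

-- Snapshot total invalidations before the refresh
SQL> SELECT SUM(invalidations) FROM gv$sql;

SQL> EXEC DBMS_MVIEW.REFRESH('APP_OWNER.BIG_MV', atomic_refresh => FALSE)

-- Snapshot again; the delta is the number of invalidations caused by the refresh
SQL> SELECT SUM(invalidations) FROM gv$sql;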

4834 SQL statements got invalidated by a single refresh. That was sad but true.

Once I identified the issue, I continued investigating by simplifying the case to a single statement that queries the synonym referencing the materialized view, to determine whether it gets invalidated after a refresh.
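
A sketch of the test (the synonym and MV names are illustrative):

-- 1. Run a tagged statement against the synonym so it is cached
SQL> SELECT /* inv_test */ COUNT(*) FROM big_mv_syn;

-- 2. Note its invalidation count
SQL> SELECT sql_id, invalidations, loads
     FROM   gv$sql
     WHERE  sql_text LIKE 'SELECT /* inv_test */%';

-- 3. Refresh the underlying materialized view non-atomically
SQL> EXEC DBMS_MVIEW.REFRESH('APP_OWNER.BIG_MV', atomic_refresh => FALSE)

-- 4. Run the tagged statement and the check again:
--    INVALIDATIONS (and LOADS) increment after the refresh
SQL> SELECT sql_id, invalidations, loads
     FROM   gv$sql
     WHERE  sql_text LIKE 'SELECT /* inv_test */%';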

What solutions do we have?

Option 1: We could use materialized view logs and fast refresh. However, we eliminated this idea because the materialized view query depends on many different tables, and it would require creating and maintaining logs on a large number of tables.
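
For reference, fast refresh would have meant creating and maintaining a log on every base table referenced by the MV query, along these lines (table and column names are illustrative):

SQL> CREATE MATERIALIZED VIEW LOG ON orders
     WITH ROWID, SEQUENCE (order_id, customer_id, amount)
     INCLUDING NEW VALUES;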

Option 2: Refresh materialized views with the atomic_refresh => TRUE option and fix (lock) the materialized view table statistics. We monitored the refresh times to see whether they took longer. Only a small number of materialized views showed extended refresh times, and for those we rewrote and tuned their queries.
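
A sketch of the change (names are illustrative; locking the statistics is one way to “fix” them so that refreshes do not trigger statistics-driven invalidations either):

-- Atomic refresh: DELETE + INSERT in one transaction, no DDL, cursors stay valid
SQL> EXEC DBMS_MVIEW.REFRESH('APP_OWNER.BIG_MV', atomic_refresh => TRUE)

-- Fix (lock) the MV container table's statistics at known-good values
SQL> EXEC DBMS_STATS.LOCK_TABLE_STATS('APP_OWNER', 'BIG_MV')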

The root cause of the high number of hard parses was SQL cursor invalidation triggered by the materialized view refreshes, not literals. By switching to atomic refresh (atomic_refresh => TRUE) for the materialized views and fixing (locking) the table statistics, we significantly reduced SQL cursor invalidations, leading to fewer hard parses and, consequently, lower CPU usage. This improves performance anywhere, but it is particularly crucial in OLTP systems.

Hope it helps.

