Choosing the Right Option for Materialized View Refresh to Reduce Hard Parses
Reducing the number of hard parses reduces CPU usage, so avoiding hard parses is important for good performance, especially in OLTP systems. Hard parsing is usually reduced by using bind variables instead of literals, or by adjusting the CURSOR_SHARING parameter. For more information on identifying SQL statements that use literals, see Connor McDonald’s post “SQL statements using literals”. The main idea is to detect SQL statements that share the same FORCE_MATCHING_SIGNATURE and introduce bind variables into them.
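As a starting point, a query along these lines (a minimal sketch, not Connor McDonald’s exact query) groups literal-only statements by FORCE_MATCHING_SIGNATURE so the best bind-variable candidates surface first:
-- Statements sharing a FORCE_MATCHING_SIGNATURE differ only in their literals,
-- so a high count marks a good candidate for bind variables (threshold is arbitrary).
SQL> SELECT force_matching_signature,
            COUNT(*)    AS similar_statements,
            MIN(sql_id) AS sample_sql_id
     FROM   gv$sql
     WHERE  force_matching_signature > 0
     GROUP  BY force_matching_signature
     HAVING COUNT(*) > 10
     ORDER  BY COUNT(*) DESC
     FETCH FIRST 20 ROWS ONLY;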
In this specific case, the high number of hard parses was due to SQL cursor invalidations, not the use of literals.
What is SQL Cursor Invalidation?
A SQL cursor invalidation occurs when a previously compiled and cached cursor – the parsed statement together with its execution plan – becomes invalid and has to be recompiled (hard parsed) on its next execution. This can happen for several reasons.
Reason 1 – Schema Changes: Changes to the database schema – such as altering tables, adding or dropping indexes, modifying columns, or changing a package body – can invalidate cursors that depend on those objects. The CURSOR_INVALIDATION parameter controls whether DDL statements use deferred or immediate cursor invalidation by default. Deferred invalidation reduces the number of cursors invalidated at the same time and spreads the recompilation workload over time. Note that when the recompilation workload is spread out, a cursor may keep running with a sub-optimal plan until it is recompiled and may incur a small execution-time overhead. The default value of CURSOR_INVALIDATION is IMMEDIATE, which means schema changes invalidate the related cursors immediately.
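As an illustration (the index and column names below are made up), on 12.2 and later the default can be changed at session or system level, or deferred invalidation can be requested for a single DDL statement:
-- Check the current default (IMMEDIATE unless changed).
SQL> show parameter cursor_invalidation
-- Switch the session default to deferred invalidation.
SQL> alter session set cursor_invalidation = DEFERRED;
-- Or request it for one DDL statement with the DEFERRED INVALIDATION clause.
SQL> create index teams.ix_match_results_dt on teams.match_results (match_date) deferred invalidation;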
Reason 2 – Statistics Updates: When statistics are modified by DBMS_STATS, new cursors that are not yet in the shared pool will use them to build execution plans. Existing cursors cached in the shared pool, however, cannot update their execution plans. Instead, such cursors are invalidated and new versions (child cursors) are created with execution plans based on the updated statistics. This requires a hard parse, which consumes far more resources than a soft parse that simply reuses a cached cursor.
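Whether a gather itself invalidates dependent cursors is controlled by the NO_INVALIDATE argument of the gathering procedures; for example (object names are illustrative):
-- AUTO_INVALIDATE (the default) spreads invalidations over a time window;
-- FALSE invalidates dependent cursors immediately; TRUE leaves them untouched.
SQL> exec DBMS_STATS.GATHER_TABLE_STATS(ownname => 'TEAMS', tabname => 'MATCH_RESULTS', no_invalidate => DBMS_STATS.AUTO_INVALIDATE);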
Reason 3 – Parameter Changes: Changes to system or session parameters that affect SQL execution might cause cursor invalidation.
First, I started by finding the most frequently invalidated SQL statements, using the query below.
SQL> SELECT sql_id,
            sum(invalidations)
     FROM   gv$sql
     WHERE  invalidations > 0
     GROUP  BY sql_id
     ORDER  BY sum(invalidations) DESC
     FETCH  FIRST 20 ROWS ONLY;
Most of the top 20 were related to tables loaded daily by a different unit that provides data through SSIS (SQL Server Integration Services) jobs, and the invalidations were caused by schema changes: these tables were being truncated and bulk-loaded periodically. We replaced most of the truncate operations with DELETE FROM statements, keeping an eye on the high water mark, and moved the segments online during the very long periodic maintenance windows to control segment size; a rough sketch of that approach follows. After the sketch, you will find the commands used to identify which SQL statement causes the invalidation for a specific SQL ID, demonstrated here for the SQL statement with sql_id ‘7uqjqjgmv9umc’.
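The sketch below is illustrative only (the table name is borrowed from the trace shown later, and whether SHRINK SPACE or MOVE ONLINE is appropriate depends on your tablespace type and database version):
-- DML instead of DDL: a DELETE does not invalidate dependent cursors,
-- but it leaves the high water mark where it is.
SQL> DELETE FROM teams.match_results;
SQL> COMMIT;
-- Reclaim space below the high water mark during a maintenance window.
SQL> ALTER TABLE teams.match_results ENABLE ROW MOVEMENT;
SQL> ALTER TABLE teams.match_results SHRINK SPACE;
-- or, on 12.2 and later, rebuild the segment without blocking DML:
-- ALTER TABLE teams.match_results MOVE ONLINE;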
--First identify the Full Hash Value (KGLNAHSV) of the SQL statement to be traced.
SQL> select kglnahsv from sys.x$kglob where kglobt03='7uqjqjgmv9umc';
KGLNAHSV
217cf1e0bd5f0cf37d5a368be7b4ea6c
-- Enable the invalidation trace for the SQL statement using kglnahsv value.
SQL> alter system set "_kgl_debug"="hash= '217cf1e0bd5f0cf37d5a368be7b4ea6c' debug=33552";
-- Check the invalidation for the debugged cursor.
SQL> select kglnahsv,kglhdivc from sys.x$kglob where kglobt03='7uqjqjgmv9umc';
KGLNAHSV KGLHDIVC
-------------------------------- ----------
217cf1e0bd5f0cf37d5a368be7b4ea6c 32
-- Wait some time and check again.
-- If the KGLHDIVC value increases, the SQL cursor has been invalidated.
-- Disable the trace.
SQL> alter system set "_kgl_debug"="hash= '217cf1e0bd5f0cf37d5a368be7b4ea6c' debug=0";
-- Find the related trace file.
[oracle@blt01 trace]$ grep -l "^<KGLTRACE>" *.trc | xargs ls -lsthr
4.0K -rw-r----- 1 oracle dba 2.4K Jun 19 13:31 bltdb1_ora_296230.trc
-- In it you will find the SQL statement causing the cursor invalidation.
...
<CurrentSQL>truncate table teams.match_results</CurrentSQL>
...
From 18c onwards, _kgl_debug can be used with the debug level below (99088) so that any invalidation of the specified cursor is reported to the alert log together with the full trace file name; you no longer need to poll for invalidations and hunt for the trace file. More information is available in “How To Trace Specific Cursor Invalidation Using Library Cache Debugging (_KGL_DEBUG) Event (Doc ID 2745080.1)” and “How To Trace Overall Library Cache Objects Invalidation Happening At Particular Period (Doc ID 2746493.1)”.
SQL> alter system set "_kgl_debug"="hash= '217cf1e0bd5f0cf37d5a368be7b4ea6c' debug=99088";
We also detected that when statistics for a highly popular object are gathered by the Automatic Statistics Gathering Job, the related SQL statements get invalidated. This behavior can be controlled with the preference shown below: DBMS_STATS.SET_GLOBAL_PREFS determines whether invalidation happens immediately or is spread out over a time window long enough to prevent noticeable hard-parse spikes. The default window is 300 minutes and is controlled by the “_optimizer_invalidation_period” parameter. More information is available in “Rolling Cursor Invalidations with DBMS_STATS.AUTO_INVALIDATE (Doc ID 557661.1)”.
SQL> select DBMS_STATS.GET_PREFS ('NO_INVALIDATE') as global_default_invalidation from dual;
GLOBAL_DEFAULT_INVALIDATION
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE
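To change it, the same preference can be set globally or per table (a sketch; keeping the AUTO_INVALIDATE default is usually what you want, and the table name is illustrative):
-- Keep the rolling-invalidation default globally ...
SQL> exec DBMS_STATS.SET_GLOBAL_PREFS('NO_INVALIDATE', 'DBMS_STATS.AUTO_INVALIDATE');
-- ... or stop statistics gathering from invalidating cursors for a single hot table.
SQL> exec DBMS_STATS.SET_TABLE_PREFS('TEAMS', 'MV_FOOTBALL_PLAYERS', 'NO_INVALIDATE', 'TRUE');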
You may choose to fix (lock) the statistics of these highly popular objects, or simply accept the situation as we did, since in our case the primary root cause of the high invalidation count was not the statistics but the command used for refreshing the materialized views.
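Fixing the statistics can be as simple as locking them after a representative gather (again a sketch with illustrative object names):
-- Locked statistics are skipped by the Automatic Statistics Gathering Job,
-- so its runs no longer invalidate the dependent cursors.
SQL> exec DBMS_STATS.LOCK_TABLE_STATS('TEAMS', 'MV_FOOTBALL_PLAYERS');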
I then traced the overall invalidations in the SQL area for a period of time on one instance with the command below.
SQL> alter system set "_trace_kqlidp"=true;
The following entries appeared in the alert.log file.
...
INVALIDATION performed by ospid=129614. Please see tracefile.
2023-06-19T17:10:34.224220+03:00
INVALIDATION performed by ospid=271830. Please see tracefile.
2023-06-19T17:10:35.754104+03:00
INVALIDATION performed by ospid=271852. Please see tracefile.
2023-06-19T17:10:35.788458+03:00
INVALIDATION performed by ospid=271825. Please see tracefile.
2023-06-19T17:10:36.312270+03:00
INVALIDATION performed by ospid=271865. Please see tracefile.
2023-06-19T17:10:36.338241+03:00
INVALIDATION performed by ospid=271835. Please see tracefile.
2023-06-19T17:10:36.371669+03:00
INVALIDATION performed by ospid=271840. Please see tracefile.
2023-06-19T17:10:36.406036+03:00
INVALIDATION performed by ospid=271859. Please see tracefile.
2023-06-19T17:10:36.794529+03:00
INVALIDATION performed by ospid=270509. Please see tracefile.
2023-06-19T17:10:39.212258+03:00
INVALIDATION performed by ospid=270507. Please see tracefile.
2023-06-19T17:10:39.616963+03:00
INVALIDATION performed by ospid=271877. Please see tracefile.
...
[oracle@blt01 ~]$ vi $ORACLE_BASE/diag/rdbms/bltdb/bltdb1/trace/bltdb1_ora_129614.trc
...
INVALIDATION: Current SQL follows
*** 2023-06-19T17:09:23.112912+03:00
BEGIN
DBMS_SNAPSHOT.refresh ('TEAMS.MV_FOOTBALL_PLAYERS',
'C',
atomic_refresh => FALSE);
END;
...
----- End of Abridged Call Stack Trace -----
Partial short call stack signature: 0x589054cbd5152378
kqlidp0: 4261649 (CURSOR .) (Parent: 0) [ROOT]
kqlidp0:. 1585497 (SYNONYM PUBLIC.S_FOOTBALL_PLAYERS) (Parent:4261649) [ADDED TO QUEUE]
kqlidp0:. 1585497 (SYNONYM PUBLIC.S_FOOTBALL_PLAYERS) (Parent:4261649) [REMOVE:ONLY-SUMM]
kqlidp0:. 2449896 (SUMMARY TEAMS.S_PUNISHES) (Parent:4261649) [ADDED TO QUEUE]
kqlidp0:. 2449896 (SUMMARY TEAMS.MV_PUNISHES) (Parent:4261649) [SKIP:ALREADY INVALID]
....
INVALIDATION: Current SQL follows
/* MV_REFRESH (DEL) */ truncate table "TEAMS"."MV_FOOTBALL_PLAYERS" purge snapshot log
...
What drew my attention was that although only one materialized view was refreshed, numerous dependent objects became invalid, and many more SQL statements referring to those objects were invalidated every time we refreshed a materialized view with the ‘atomic_refresh => FALSE’ flag. The example above is one of the most critical and heavily used objects in that database.
So I measured the impact by counting how many SQL statements get invalidated by a single refresh of the most brutal one.
SQL> SELECT COUNT(DISTINCT sql_id)
     FROM   gv$sql
     WHERE  object_status = 'INVALID_UNAUTH'
       AND  parsing_schema_name IN (SELECT username FROM dba_users WHERE oracle_maintained = 'N')
       AND  (upper(sql_fulltext) LIKE upper('%S_FOOTBALL_PLAYERS%')
             OR upper(sql_fulltext) LIKE upper('%MV_FOOTBALL_PLAYERS%'));
4834 SQL statements got invalidated by a single refresh. Sad but true.
Having identified the issue, I narrowed it down to a single statement that queries the synonym referencing the materialized view, to check whether it gets invalidated after a refresh.
SYS@bltdb1> set feedback on sql_id
SYS@bltdb1> var p_author_id number;
SYS@bltdb1> exec :p_author_id := 1463128;
PL/SQL procedure successfully completed.
SYS@bltdb1> select * FROM S_FOOTBALL_PLAYERS SFP, S_APP_AUTHORITY SAA WHERE UPPER_AUTHORITY_ID = :p_author_id AND BTS_TRH > SYSDATE AND SAA.PER_ID=SFP.PER_ID;
SQL_ID: adcx5wznvnjv2
SYS@bltdb1> select object_status, invalidations from gv$sql where sql_id='adcx5wznvnjv2';
OBJECT_STATUS INVALIDATIONS
VALID 3
-- Atomic refresh false (using truncate)
SYS@bltdb1> BEGIN
              DBMS_SNAPSHOT.refresh ('TEAMS.MV_FOOTBALL_PLAYERS',
                                     'C',
                                     atomic_refresh => FALSE);
            END;
            /
-- Sql statement invalidated after refresh with atomic_refresh false.
SYS@bltdb1> select object_status, invalidations from gv$sql where sql_id='adcx5wznvnjv2';
OBJECT_STATUS INVALIDATIONS
INVALID_UNAUTH 4
-- Atomic refresh true (using delete)
SYS@bltdb1> BEGIN
              DBMS_SNAPSHOT.refresh ('TEAMS.MV_FOOTBALL_PLAYERS',
                                     'C',
                                     atomic_refresh => TRUE);
            END;
            /
-- Sql statement not invalidated with atomic_refresh true.
SYS@bltdb1> select object_status, invalidations from gv$sql where sql_id='adcx5wznvnjv2';
OBJECT_STATUS INVALIDATIONS
VALID 4
What solutions do we have?
Option 1: We could use materialized view logs. However, we eliminated this idea because the materialized view query depends on many different tables, and it would require creating and maintaining a log on each of them (see the sketch after the options).
Option 2: Refresh the materialized views with the atomic_refresh => TRUE option and fix (lock) the materialized view table statistics. We monitored the refresh times to see whether they took longer; only a small number of materialized views showed longer refresh times, and for those we rewrote and tuned their queries.
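For reference, Option 1 would have required something like the following for every base table of the materialized view (a sketch; teams.football_players is an assumed base-table name, and whether fast refresh is actually possible depends on the materialized view query):
-- One materialized view log per base table referenced by the materialized view query.
SQL> CREATE MATERIALIZED VIEW LOG ON teams.football_players WITH PRIMARY KEY, ROWID;
-- ... repeated for every other base table, after which a fast refresh could be attempted:
SQL> exec DBMS_SNAPSHOT.refresh('TEAMS.MV_FOOTBALL_PLAYERS', 'F');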
Conclusion
The root cause of the high number of hard parses was SQL cursor invalidation triggered by the materialized view refreshes, not literal usage. By switching to atomic refresh (atomic_refresh => TRUE) for the materialized views and fixing their table statistics, we significantly reduced SQL cursor invalidations, which meant fewer hard parses and, consequently, lower CPU usage. The gain matters anywhere, but it is especially important in OLTP systems.
Hope it helps.

