In one of our mission-critical databases, I discovered excessive memory utilization by SQL statements querying DBA_SEGMENTS without bind variables. The total sharable memory for SQL statements sharing the same plan_hash_value was up to 7 GB across 4 instances. Unfortunately, the culprit was an internal query issued by the FBDA (Flashback Data Archiver) background process.
Before I dive deep into the issue, I want to explain what a hard parse is.
What is hard parse?
When an application or user issues a SQL statement, a parse call is made to prepare the statement for execution. The parse call opens a cursor, which is a handle for the session-specific private SQL area that holds a parsed SQL statement and other processing information. During parsing, the database performs a syntax check, a semantic check, and a shared pool check. In the shared pool check phase, the text of the SQL statement is hashed. If there is no matching hash value, the SQL statement does not currently exist in the shared pool, and a hard parse is performed.
If there is a matching hash value for an existing SQL statement in the shared pool, the text of the matched statement is compared to the text of the submitted statement to verify that they are identical. Assuming the CURSOR_SHARING parameter is set to its default of EXACT, the texts of the SQL statements or PL/SQL blocks must match character for character, including spaces, case, and comments, to be considered identical. Consequently, SQL statements that differ only in literals cannot use the same shared SQL area, and a hard parse is performed for each of them.
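As a small illustration (using a hypothetical EMPLOYEES table, not one from this database), two statements that differ only in a literal each get their own shared SQL area under EXACT cursor sharing, while the bind-variable form can be shared:

```sql
-- With CURSOR_SHARING = EXACT, these two statements differ only in a literal,
-- so each one is hard parsed into its own parent cursor:
SELECT last_name FROM employees WHERE employee_id = 100;
SELECT last_name FROM employees WHERE employee_id = 101;

-- The bind-variable form is textually identical on every execution,
-- so all executions can share one cursor:
SELECT last_name FROM employees WHERE employee_id = :emp_id;
```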
A hard parse is a resource-intensive process because, during it, the database accesses the library cache and the data dictionary cache numerous times to check the data dictionary. When the database accesses these areas, it uses a serialization device called a latch on the required objects so that their definitions do not change. Latch contention increases statement execution time and decreases concurrency.
Reducing the number of hard parses results in reduced CPU usage.
So it is important to avoid hard parses for better performance, especially in OLTP systems. One of a DBA's primary jobs is detecting and pointing out these kinds of queries.
Hard parsing is mostly reduced by using bind variables (instead of literals) or by modifying the CURSOR_SHARING parameter. For more information on identifying SQL statements that use literals, see Connor McDonald's post "SQL statements using literals".
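One common way to spot such statements, in the spirit of the referenced post (a sketch relying on the FORCE_MATCHING_SIGNATURE column of GV$SQL), is to group cursors that would collapse into one under CURSOR_SHARING=FORCE:

```sql
-- Statements that differ only in literals share a FORCE_MATCHING_SIGNATURE;
-- a high cursor count per signature points at literal-heavy SQL.
SELECT force_matching_signature,
       COUNT(*)      AS cursor_count,
       MIN(sql_text) AS sample_sql_text
  FROM gv$sql
 WHERE force_matching_signature > 0
 GROUP BY force_matching_signature
HAVING COUNT(*) > 10
 ORDER BY cursor_count DESC;
```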
Excessive memory utilization from FBDA process
I will briefly show how we detected the issue. We used a query that finds the top 20 plans whose cursors consume the most memory from the shared pool.
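The exact query is not preserved here; a sketch along the following lines (summing the SHARABLE_MEM column per plan in GV$SQL) surfaces the top 20 plans:

```sql
-- Top 20 plans by total sharable memory consumed by their cursors
SELECT *
  FROM (SELECT plan_hash_value,
               COUNT(*)          AS cursor_count,
               SUM(sharable_mem) AS total_sharable_mem
          FROM gv$sql
         WHERE plan_hash_value <> 0
         GROUP BY plan_hash_value
         ORDER BY total_sharable_mem DESC)
 WHERE ROWNUM <= 20;
```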
The most memory-consuming plan_hash_value was 44909958. When I queried the SQL statements using that plan, I found that across the four instances a total of 23,910 SQL statements were sharing the same plan and differed from each other only in their literal values. Below is a sample of these SQL statements.
```sql
SQL> SELECT sql_id, sql_text FROM gv$sql WHERE plan_hash_value = 44909958;

SQL_ID        SQL_TEXT
------------- ----------------------------------------------------------------------------------------------------------------------------------
...
6uzsj5a6b007x select sum(BLOCKS) from DBA_SEGMENTS where segment_name='SYS_FBA_DDL_COLMAP_85287' and owner='XXXX' and TABLESPACE_NAME='USERS_XXX'
193rca1sdh0n0 select sum(BLOCKS) from DBA_SEGMENTS where segment_name='SYS_FBA_HIST_1584811' and owner='YYYY' and TABLESPACE_NAME='USERS_XXX'
4xnnvvkkcn1m3 select sum(BLOCKS) from DBA_SEGMENTS where segment_name='SYS_FBA_TCRV_IDX_84869' and owner='ZZZZ' and TABLESPACE_NAME='USERS_XXX'
gxskkznf5s057 select sum(BLOCKS) from DBA_SEGMENTS where segment_name='SYS_FBA_TCRV_1584668' and owner='TTTT' and TABLESPACE_NAME='USERS_XXX'
79k6129w542gr select sum(BLOCKS) from DBA_SEGMENTS where segment_name='SYS_FBA_TCRV_IDX_83598' and owner='XXXX' and TABLESPACE_NAME='USERS_XXX'
...
```
These SQL statements were not issued from a user/developer session or an application; they actually came from SYS internal sessions, namely the FBDA background process. We diagnosed that with the SQL statement below.
```sql
SQL> SELECT
         sq.sql_id,
         sq.parsing_schema_name AS schema_name,
         sq.service,
         ses.program,
         ses.sample_time
     FROM
         v$sql sq,
         v$active_session_history ses
     WHERE
         sq.plan_hash_value = 44909958
         AND sq.sql_id = ses.sql_id
         AND ROWNUM < 3;

SQL_ID        SCHEMA_NAME SERVICE        PROGRAM                         SAMPLE_TIME
------------- ----------- -------------- ------------------------------- -----------------------------
bq0cg50m60nzd SYS         SYS$BACKGROUND oracle@blt01.localdomain (FBDA) 11/01/2023 21:51:53,035000000
8m0bvrr2vssyp SYS         SYS$BACKGROUND oracle@blt01.localdomain (FBDA) 11/01/2023 20:51:51,100000000
```
Actually, the problematic query is a very simple one. In my humble opinion, the FBDA process periodically (hourly, I guess) checks the sum of blocks in DBA_SEGMENTS for every object with a segment (tables, indexes, etc.) to calculate the storage allocated from the Flashback Data Archive tablespace.
The solution for the problematic query is also straightforward: JUST USE BIND VARIABLES.
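For illustration only (this is internal Oracle code we cannot change ourselves, and the bind names here are made up), the bind-variable form of the query would look like this:

```sql
-- Hypothetical bind-variable form of the FBDA query; one shared cursor
-- would then serve every tracked segment instead of thousands of cursors:
SELECT SUM(blocks)
  FROM dba_segments
 WHERE segment_name    = :segment_name
   AND owner           = :owner
   AND tablespace_name = :tablespace_name;
```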
I searched My Oracle Support for a bug or an enhancement request, but I could not find anything really helpful.
I did find Bug 15931756 – ORA-4031 / Queries against SYS_FBA_TRACKEDTABLES not shared (do not use binds) (Doc ID 15931756.8), which is fixed in the 19.11 Release Update. This bug is also about queries coming from the FBDA process that do not use bind variables; those queries are even shown in the examples of Connor McDonald's post "SQL statements using literals".
Interestingly enough, on another database that also makes active use of the Flashback Time Travel feature (formerly known as Flashback Data Archive), there were no signs of excessive memory utilization from the FBDA process: the library cache contained no SQL statements similar to our problematic query.
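A quick check for this symptom on any database is to search the library cache for the literal pattern, assuming the statement text begins exactly as in the sample listing above:

```sql
-- Look for FBDA's per-segment size queries in the library cache
SELECT sql_id, sql_text
  FROM gv$sql
 WHERE sql_text LIKE 'select sum(BLOCKS) from DBA_SEGMENTS%';
```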
Here is the test case for the issue.
```sql
CREATE TABLESPACE USERS_TEST
    DATAFILE '+DATA' SIZE 12G;

CREATE FLASHBACK ARCHIVE FDA_TEST
    TABLESPACE USERS_TEST QUOTA 512M RETENTION 1 DAY;

ALTER USER HR QUOTA 10G ON USERS_TEST;

-- Create 100 test tables
BEGIN
    FOR C1 IN 1..100 LOOP
        EXECUTE IMMEDIATE 'CREATE TABLE HR.TEST_TABLE'
            || TO_CHAR(C1)
            || ' (COL1 NUMBER, COL2 NUMBER) TABLESPACE USERS_TEST';
    END LOOP;
END;
/

-- Enable Flashback Data Archive on each table
BEGIN
    FOR C1 IN 1..100 LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE HR.TEST_TABLE'
            || TO_CHAR(C1)
            || ' FLASHBACK ARCHIVE FDA_TEST';
    END LOOP;
END;
/

-- Generate some DML so FBDA creates history segments
BEGIN
    FOR C1 IN 1..100 LOOP
        EXECUTE IMMEDIATE 'INSERT INTO HR.TEST_TABLE'
            || TO_CHAR(C1)
            || ' VALUES (2, 20)';
    END LOOP;
    COMMIT;
END;
/
```
As of the 19.16 Release Update, the problem still exists.
However, there is a workaround for the issue. These SQL statements look like they were written to calculate the storage allocated from the Flashback Data Archive tablespace: the FBDA process periodically checks whether the QUOTA has been exceeded.
To eliminate the excessive memory usage from the FBDA process and avoid ORA-4031, REMOVE THE QUOTA CLAUSE FROM YOUR FLASHBACK DATA ARCHIVES with the statement below, and create your new flashback data archives without a quota clause.
```sql
-- Remove the quota from a flashback data archive by specifying no QUOTA clause
SQL> ALTER FLASHBACK ARCHIVE FDA_TEST MODIFY TABLESPACE USERS_TEST;
```
No more checks for segment sizes.
Hope it helps.