Using Cursordump to Capture SQL Parse Failures
Parse failures are not stored in the data dictionary, so they cannot be identified by querying it.
As of Oracle 10g, event 10035 can be used to report on all failed parses.
Until explicitly disabled, it will track SQL statements with parsing errors.

SQL> alter system set events '10035 trace name context forever, level 1';

Levels:
level 1+  Print out user SQL statements
level >3  Print out recursive SQL
level >5  Print out row cache cached cursor stats

Turning off:

SQL> alter system set events '10035 trace name context off';
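As a quick sanity check, the event can also be set at the session level and exercised with a deliberately broken statement; a minimal sketch (the statement below intentionally omits the FROM clause and fails with ORA-00923, matching the error=923 PARSE ERROR entry shown further down):

SQL> alter session set events '10035 trace name context forever, level 1';
SQL> select 1923;
SQL> alter session set events '10035 trace name context off';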
With the 19.16 Release Update, Oracle no longer records the SQL text in the alert.log. Any statement that fails at the parsing stage is recorded only with the error number and the process OSPID. Setting the hidden "_kks_parse_error_warning" parameter to 1 does not help in reporting the failed SQL statements either.
I tried changing the tracing levels (1-128), but I was not able to detect the failed statements.
2023-05-21T17:18:03.998348+03:00
PARSE ERROR: ospid=28682, error=923 for statement:
Additional information: hd=0x6ae210f0 phd=0x8bdc7d18 flg=0x28 cisid=0 sid=0 ciuid=0 uid=0 sqlid=d431n9ajscp24
...Current username=SYS
...Application: sqlplus@blt01.localdomain (TNS V1-V3) Action:
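If you prefer searching from SQL instead of grepping the alert.log on the OS, the same entries are exposed through the v$diag_alert_ext view; a minimal sketch:

SQL> select originating_timestamp, message_text
  from v$diag_alert_ext
 where message_text like '%PARSE ERROR%'
 order by originating_timestamp desc;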
I searched the x$kglob table, which is the primary library cache structure underlying the v$sql view.
SQL> select kglhdpar, kglhdadr, kglobt03, kglnaobj from sys.x$kglob where kglobt03='d431n9ajscp24';

KGLHDPAR         KGLHDADR         KGLOBT03      KGLNAOBJ
---------------- ---------------- ------------- --------------------------------------------------
000000008BDC7D18 000000006AE210F0 d431n9ajscp24 select 1923
000000008BDC7D18 000000008BDC7D18 d431n9ajscp24 select 1923
The kglnaobj column of x$kglob can be used to detect these failing SQL statements, but its data type is VARCHAR2(1000); statements longer than 1000 characters cannot be fully recovered with this method.
The best way to detect these statements is to dump the cursor with the command below, then open the trace file (.trc) and search for the "sql=" string. The trace file name is typically formatted as instancename_ora_ospid.trc; in my example it is bltdb1_ora_28682.trc. Trace file information is also available in the v$process view.
SQL> alter system set events '10035 cursordump(1)';
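To locate the trace file without guessing its name, the trace directory and the per-process trace file can be looked up from the database; a minimal sketch (the OSPID 28682 comes from the PARSE ERROR line above):

SQL> select value from v$diag_info where name = 'Diag Trace';
SQL> select tracefile from v$process where spid = '28682';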
Using Database Triggers to Capture Parse Failures
Ideally, error logging should be handled by the application, but in practice most applications do not catch all database errors. You can also use an AFTER SERVERERROR ON DATABASE trigger to catch them. I have provided a sample below.
SQL> create sequence error_log_seq;
SQL> create table logged_errors
(
  log_id           number not null,
  log_timestamp    date not null,
  system_event     varchar2(128),
  user_name        varchar2(128),
  error_message    varchar2(4000),
  session_id       number,
  client_host      varchar2(256),
  client_ip        varchar2(45), -- wide enough for IPv6 addresses
  executing_module varchar2(4000),
  server_name      varchar2(256),
  query_text       clob,
  constraint errors_pk primary key(log_id)
);
SQL> create or replace trigger capture_error_trg
after servererror on database
declare
  -- Autonomous transaction lets us COMMIT inside the trigger
  -- (a plain trigger raises ORA-04092 on COMMIT) and keeps the
  -- log row even when the failing transaction rolls back.
  pragma autonomous_transaction;
  query_buffer ora_name_list_t;
  full_query   clob;
  chunk_count  number;
begin
  -- ora_sql_txt fills query_buffer with up-to-64-byte chunks and
  -- returns the chunk count, not the statement's character length.
  chunk_count := ora_sql_txt(query_buffer);
  if chunk_count is not null then
    for idx in 1 .. chunk_count loop
      full_query := full_query || query_buffer(idx);
    end loop;
  end if;
  -- Consider adding filters here to exclude common, non-critical errors.
  insert into logged_errors
  values
  (
    error_log_seq.nextval,
    sysdate,
    ora_sysevent,
    ora_login_user,
    ora_server_error(1), -- topmost error number on the error stack
    sys_context('USERENV','SID'),
    sys_context('USERENV','HOST'),
    sys_context('USERENV','IP_ADDRESS'),
    sys_context('USERENV','MODULE'),
    sys_context('USERENV','SERVER_HOST'),
    full_query
  );
  commit;
-- Avoid raising exceptions from this trigger to prevent infinite error loops.
exception when others then
  rollback;
end;
/
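To verify the trigger, force a parse failure from any session and then inspect the log table; a quick sketch:

SQL> -- a statement that fails at parse time (no FROM clause)
SQL> select 1923;
SQL> -- the trigger should have captured it
SQL> select log_id, user_name, error_message, query_text
  from logged_errors
 order by log_id desc;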
Database-level triggers come with specific challenges that require careful consideration. If a query containing sensitive information, such as social security numbers, fails, that data will end up in the error log, so it is crucial to control access to the log table and assess the risks involved. You may also see far more errors than expected; treat this constructively, focusing on system improvements rather than individual accountability. Thorough testing is essential, as poorly designed triggers, especially those related to user logins, can cause major disruptions, and even error-handling triggers can behave unpredictably. Finally, while handling errors typically does not impact performance significantly, monitor the error log to prevent excessive growth, which can lead to performance problems or a potential denial-of-service scenario.
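To keep the log table from growing unbounded, a scheduled purge can help; a minimal sketch (the 30-day retention and the job name PURGE_LOGGED_ERRORS_JOB are arbitrary choices):

SQL> begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_LOGGED_ERRORS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'delete from logged_errors where log_timestamp < sysdate - 30; commit;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => true);
end;
/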
Edited on November 16th, 2024:
According to Bug 35888571 – Diagnostic Improvement To Record SQL That Hits High Number Of Parse Errors (Doc ID 35888571.8), systems with fix 35888571 in their inventory will have SQL statements recorded in the alert.log file again. This fix is included in the 19.24 DBRU.
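Whether the fix is present can be checked against the applied patch history in the database; a minimal sketch using dba_registry_sqlpatch (checking the OPatch inventory with opatch lsinventory works as well):

SQL> select patch_id, description, status, action_time
  from dba_registry_sqlpatch
 order by action_time desc;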
Hope it helps.

