Scripts for trace 10053

By Brian Fitzgerald

Trace 10053 tells you “how” the optimizer selected its plan — the reasons for the choices it has made. The main fact to remember when you are collecting trace 10053 is that you must force a hard parse each time you run your statement. Although forcing a hard parse can be done by manually varying the statement’s SQL text, this blog post offers a way to automate some of the steps, in case you need to quickly make a high number of trace 10053 runs.

For the sake of this blog article, we’ll use trace 10053 to examine the effect of statistics on the execution plan.

Preliminary: To run trace 10053, the user must be granted alter session.

SQL> grant alter session to scott;

Grant succeeded.

Trace 10053 with manual steps

In order to analyze the effect of gathering statistics on the execution plan, we’ll run a statement twice: once without statistics and once with statistics. We’ll cycle the pluggable database to guarantee at least one hard parse to start with. Here is file demo.softparse.10053.sql:

set pagesize 50000
conn / as sysdba
alter pluggable database pdba close;
alter pluggable database pdba open;
@ conn.pdba.scott.sql

alter session set tracefile_identifier = a;
exec dbms_stats.delete_schema_stats(user);
alter session set events='10053 trace name context forever, level 1';
select e.ename, e.deptno
from emp e
;
alter session set events '10053 trace name context off';
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

alter session set tracefile_identifier = b;
exec dbms_stats.gather_schema_stats(user);
alter session set events='10053 trace name context forever, level 1';
select e.ename, e.deptno
from emp e
;
alter session set events '10053 trace name context off';
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

The output:

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.softparse.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:25:35 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
Pluggable database altered.
Pluggable database altered.
Connected.
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_A.trc
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_B.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ ls -l /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_{A,B}.trc
ls: cannot access /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_B.trc: No such file or directory
-rw-r----- 1 oracle asmadmin 99753 Aug 7 21:25 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_A.trc

Whoops, where is my second trace file? The second trace file is not there because I did not force a hard parse on the second statement execution. The SQL text was exactly the same in both runs:

select e.ename, e.deptno
from emp e
;

We failed to collect a trace 10053 for the second execution.
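One way to confirm that the second run was only soft parsed (a sketch, assuming the schema can query v$sql; this was not part of the demo run above) is to look at the cursor in the shared pool. A single child with one load and two executions means the second run reused the existing plan, so no new optimizer trace was written:

select sql_id, child_number, loads, parse_calls, executions
from v$sql
where sql_text like 'select e.ename, e.deptno%';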

I could work around this by manually varying the white space or capitalization, or by adding a comment. There are other ways to force a hard parse that sometimes work, such as issuing a grant or flushing the statement from the shared pool, if you have the privilege.
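For completeness, here is a sketch of the shared pool approach. It assumes execute privilege on dbms_shared_pool and access to v$sqlarea, and the purge_sql_id substitution variable is a placeholder; it is not part of the demo scripts in this post.

-- Purge one cursor from the shared pool so its next execution is hard parsed
declare
  l_name varchar2(200);
begin
  -- dbms_shared_pool.purge expects the cursor name as 'address,hash_value'
  select address || ',' || hash_value
  into l_name
  from v$sqlarea
  where sql_id = '&&purge_sql_id';     -- placeholder: the cursor to purge
  dbms_shared_pool.purge(l_name, 'C'); -- 'C' marks the object as a cursor
end;
/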

In the next example, the two statements are logically identical, but SELECT is in caps in the first and FROM is in caps in the second, so their SQL texts differ. These variations were made manually with an editor. Here is how demo.hardparse.10053.sql differs from the original:

[oracle@stormking cdb12201 sql]$ diff demo.softparse.10053.sql demo.hardparse.10053.sql
10c10
< select e.ename, e.deptno
---
> SELECT e.ename, e.deptno
22c22
< from emp e
---
> FROM emp e

The output:

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.hardparse.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:34:36 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
Pluggable database altered.
Pluggable database altered.
Connected.
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ ls -l /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_{A,B}.trc
-rw-r----- 1 oracle asmadmin 99909 Aug 7 21:35 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
-rw-r----- 1 oracle asmadmin 98628 Aug 7 21:35 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc

Notice that there are two 10053 trace files this time, one for each statement. The statements are unique, so each one was hard parsed.

Notice also the trace file continuation message from file A to file B:

[oracle@stormking cdb12201 sql]$ tail -2 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
*** TRACE CONTINUES IN FILE /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc ***

Issuing alter session set tracefile_identifier = b forced a switch to a new trace file.

That’s the basic idea: force a hard parse and switch trace files on each execution. I want to automate the process, though, because I plan to make a high number of trace 10053 runs in the future.

Scripts for trace 10053

My demo queries are in directory /home/oracle/brian/sql/sql, but I plan to reuse the trace scripts in the future, so I set SQLPATH and put the reusable scripts there.

[oracle@stormking cdb12201 sqlplus]$ echo $SQLPATH
/home/oracle/brian/sql/sqlplus

Here are the scripts:

conn.pdba.scott.sql

A one-liner for connecting to the schema. I will reuse this script many times.

conn scott/tiger@stormking:1521/pdba

trace.10053.on.sql

Here is a one-line script for starting the trace 10053. Using this script is a lot easier than remembering the exact syntax, and it makes for an uncluttered main script.

alter session set events='10053 trace name context forever, level 1';

trace.10053.off.sql

Here is the script for stopping trace 10053. If you do not explicitly turn off trace, you will not get your trace file.

alter session set events '10053 trace name context off';

tracefilename.sql

This script displays the trace file name. It’s a better approach than searching for the file manually.

set echo off
column tracefilename format a100
set trimspool on
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

randident.sql

I want to use a random identifier in two places: as the trace file identifier, and somewhere in the text of my statement.

-- set sqlplus substitution variable randident
-- to a random character string
set termout off
column randident new_value randident format a6
select dbms_random.string('u',6) randident from dual;
set termout on

randtfi.sql

This script sets the tracefile identifier to a random string. Notice that each time you call randtfi.sql, Oracle switches to a new trace file. That’s handy if you want to break up the trace for a single sqlplus session into multiple parts.

-- Change to a new, randomly named trace file on each call
@ randident.sql
alter session set tracefile_identifier = &&randident;
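A quick usage check (a sketch, not part of the demo run): call randtfi.sql and then tracefilename.sql; the reported trace file name should now end in a fresh six-character suffix.

@ randtfi.sql
@ tracefilename.sql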

This is my collection of reusable scripts. They go in /home/oracle/brian/sql/sqlplus.

Demonstrating the scripts

The demonstration is a throwaway example, so the script does not belong in SQLPATH. I have put it in /home/oracle/brian/sql/sql. Here is demo.trace.10053.sql:

set pagesize 50000
@ conn.pdba.scott.sql

exec dbms_stats.delete_schema_stats(user);
@ randtfi.sql
@ trace.10053.on.sql
@ randident.sql
set echo on
set verify on
select e.ename &&randident, e.deptno
from emp e
;
set echo off
@ trace.10053.off.sql
@ tracefilename.sql

exec dbms_stats.gather_schema_stats(user);
@ randtfi.sql
@ trace.10053.on.sql
@ randident.sql
set echo on
set verify on
select e.ename &&randident, e.deptno
from emp e
;
set echo off
@ trace.10053.off.sql
@ tracefilename.sql

Running demo.trace.10053.sql:

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.trace.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:50:53 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
PL/SQL procedure successfully completed.
Session altered.
Session altered.
SQL> set verify on
SQL> select e.ename &&randident, e.deptno
 2 from emp e
 3 ;
old 1: select e.ename &&randident, e.deptno
new 1: select e.ename AWQVDZ, e.deptno
AWQVDZ DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
SQL> set echo off
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc
PL/SQL procedure successfully completed.
Session altered.
Session altered.
SQL> set verify on
SQL> select e.ename &&randident, e.deptno
 2 from emp e
 3 ;
old 1: select e.ename &&randident, e.deptno
new 1: select e.ename ZRQGGE, e.deptno
ZRQGGE DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
SQL> set echo off
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ grep '^sql_id' /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc:sql_id=6z7nrbdts160v plan_hash_value=-338806364 problem_type=3
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc:sql_id=5ub8qv1cugvr4 plan_hash_value=-338806364 problem_type=3
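While the two cursors are still cached, they can also be cross-checked in the shared pool (a sketch; I did not run this as part of the demo). Note that v$sql reports plan_hash_value as an unsigned number, whereas the trace prints it as a signed 32-bit value.

select sql_id, child_number, plan_hash_value, executions
from v$sql
where sql_id in ('6z7nrbdts160v', '5ub8qv1cugvr4');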

Again, there are two trace files. The first trace file ends with a continuation message pointing to the second:

[oracle@stormking cdb12201 sql]$ tail -2 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc
*** TRACE CONTINUES IN FILE /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc ***

I have varied the output column name:

new 1: select e.ename AWQVDZ, e.deptno

vs.

new 1: select e.ename ZRQGGE, e.deptno

Because the SQL text is different each time, a hard parse is forced on each execution. I could also have forced a hard parse by placing the random string in a comment:

/* &&randident */
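For example, the demo statement could keep its column list unchanged and carry the randomness in a comment instead (a variation on the demo script; it is not what produced the trace output above):

select /* &&randident */ e.ename, e.deptno
from emp e
;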

Now, on to the analysis and comparison of the two trace files.

In the first execution, statistics had been deleted. The trace 10053 was collected in file cdb12201_ora_22121_RYNDLO.trc. The plan is:

============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3 | |
| 1 | TABLE ACCESS FULL | EMP | 14 | 280 | 3 | 00:00:01 |
-------------------------------------+-----------------------------------+

Trace cdb12201_ora_22121_XGLYME.trc, with statistics, shows this plan:

============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3 | |
| 1 | TABLE ACCESS FULL | EMP | 14 | 126 | 3 | 00:00:01 |
-------------------------------------+-----------------------------------+

The plan hash value in both cases is -338806364. The rows and cost estimates are the same, but the bytes estimates differ (280 versus 126).
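While the two cursors are still in the shared pool, the same plan comparison can be made with a dictionary query (a sketch; the analysis in this post relies only on the trace files):

select sql_id, id, operation, options, object_name,
       cardinality, bytes, cost
from v$sql_plan
where sql_id in ('6z7nrbdts160v', '5ub8qv1cugvr4')
order by sql_id, id;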

Trace cdb12201_ora_22121_RYNDLO.trc contains a dynamic sampling analysis:

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 2).
** Dynamic sampling updated index stats.: PK_EMP, blocks=1
** Dynamic sampling updated table stats.: blocks=5

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Generated dynamic sampling query:
 query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE RESULT_CACHE(SNAPSHOT=3600) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ NO_PARALLEL("E") FULL("E") NO_PARALLEL_INDEX("E") */ 1 AS C1, 1 AS C2 FROM "SCOTT"."EMP" "E") SAMPLESUB

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Executed dynamic sampling query:
 level : 2
 sample pct. : 100.000000
 actual sample size : 14
 filtered sample card. : 14
 orig. card. : 409
 block cnt. table stat. : 5
 block cnt. for sampling: 5
 max. sample block cnt. : 64
 sample block cnt. : 5
 min. sel. est. : -1.00000000

In the second execution, statistics had been collected. The trace is cdb12201_ora_22121_XGLYME.trc. No dynamic sampling appears.
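To see the table-level statistics that this second hard parse had available (a sketch; the demo script does not run this query), the dictionary can be checked directly:

select table_name, num_rows, blocks, avg_row_len, last_analyzed
from user_tables
where table_name = 'EMP';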

The analysis of the bytes difference follows. For the case of gathered stats, the row length is:

SQL> select sum(avg_col_len) from user_tab_columns 
where table_name = 'EMP' 
and column_name in ('ENAME','DEPTNO');

SUM(AVG_COL_LEN)
----------------
 9

9 bytes per row x 14 rows = 126 bytes, which is the Bytes value in the plan for the TABLE ACCESS FULL step (Id 1). For deleted stats, avg_col_len is null, and the dynamic sampling query only finds the row count. The simplified dynamic sampling query is:

SQL> SELECT NVL(SUM(C1),0) FROM (SELECT /*+ FULL("E") */ 1 AS C1 FROM "SCOTT"."EMP" "E");
NVL(SUM(C1),0)
--------------
 14

Dynamic sampling tells us nothing about the number of bytes per row. The row length would have to come from the catalog.

SQL> select column_name, data_length, data_type from user_tab_columns where table_name = 'EMP' and column_name in ('ENAME','DEPTNO');

COLUMN_NAME DATA_LENGTH DATA_TYPE
------------------------------ ----------- ------------------------------
ENAME 10 VARCHAR2
DEPTNO 22 NUMBER

Allowing for the way NUMBER values pack digits into bytes, each row would come to about 21 bytes (that’s how I figure it). The optimizer estimated 20 bytes per row, or 280 bytes in total.
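As a quick sanity check on both bytes figures (my own arithmetic, not taken from the trace files):

-- gathered stats: sum(avg_col_len) = 9, so 14 rows x 9 bytes = 126 bytes
-- deleted stats: roughly 20 bytes per row assumed, so 14 x 20 = 280 bytes
select 14 * 9 as bytes_with_stats, 14 * 20 as bytes_without_stats from dual;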

In conclusion, whether dynamic sampling or gathered statistics were used, the resulting plan was the same. The estimates of cardinality, CPU, and time were the same, but the estimates of bytes returned differed. Trace 10053 helps to explain the similarities and differences.

This was a simple example to demonstrate the use of the trace 10053 scripts to find out the reason for the optimizer’s predicted costs.

Conclusion

This blog post made the following points about the trace 10053 technique:

  • Trace 10053 can be used to find out why the optimizer made its decisions.
  • A user must have been granted alter session before running trace 10053.
  • To get trace 10053 output, it is necessary to force a hard parse.
  • To get a trace file split into pieces, change the trace file identifier.
  • To get a random identifier in sqlplus, use dbms_random and column new_value.
  • One way to force a hard parse is to vary the SQL with a random identifier.
  • Force a trace file switch by using a random tracefile identifier.

The scripts in this blog post can let you efficiently, systematically, and quickly collect trace 10053 files on a SQL statement. The collected files can be reviewed, analyzed, and compared.