AUTOTRACE Problem

I’ve commented before about how AUTOTRACE is a favoured tool of mine for a quick performance check of a query. So, whenever a developer takes an interest in it I’m glad to help where I can.

Recently a developer told me that AUTOTRACE wasn’t working for them. They sent across their script, which enabled AUTOTRACE with:

SET AUTOTRACE TRACEONLY

and the output they were getting. The output was very strange (this is from an 11.2.0.3 database):

Statistics
----------------------------------------------------------
          0  user rollbacks
          0  global enqueue releases
          0  physical read requests optimized
          0  physical write total multi block requests
          0  hot buffers moved to head of LRU
          0  commit wait performed
          0  global undo segment hints helped
          0  global undo segment hints were stale
          0  IMU commits
          0  IMU Flushes
        244  rows processed

Running their script in my own account revealed nothing wrong; the AUTOTRACE output was present and correct. Looking into the roles and privileges on the developer’s account showed nothing amiss either… but their account did show as having an expired password. A change of their password and the problem was resolved.

Why did resetting a password resolve an AUTOTRACE problem? What might not be widely understood about AUTOTRACE is that it creates a second session in order to do its profiling. With the password expired I’d guess that AUTOTRACE couldn’t create that session and so generated the strange output above. It’s not ideal that it didn’t report some sort of login problem, but it didn’t take long to resolve. I’ve simply made a mental note (and a blog post to remind me) to check for expired passwords the next time someone tells me that AUTOTRACE isn’t working for them.
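If you suspect the same issue, a quick query against DBA_USERS (assuming your account has the privileges to see it) will show any accounts that aren’t fully open:

-- List accounts that are expired, locked or otherwise not OPEN.
-- Requires SELECT access on DBA_USERS.
SELECT username
,      account_status
,      expiry_date
FROM   dba_users
WHERE  account_status != 'OPEN'
ORDER  BY
       username
/

An account in the middle of its grace period shows as EXPIRED(GRACE), so it’s worth checking for that status too before ruling out a password problem.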


CURSOR_SHARING for bulk inserts

You won’t have been developing in Oracle for long before you find the need to populate data in your application tables. Moving across the various environments from development to production means that you will want the ability to create data easily, so manually typing data into a GUI is out of the question. Besides, how many errors do you think you’d make trying to type in a list of postcodes and suburbs?

For relatively small amounts of data the simple INSERT statement is more than adequate for loading data. Creating scripts for reference tables can be tedious though. Thankfully, GUI development tools such as Dell (Quest) TOAD and Oracle SQL Developer have the ability to export data as INSERT statements. Once a table has been loaded in the development environment it can easily be exported as INSERTs for inclusion in the application deployment scripts.

Things begin to go wrong when developers decide this is a good strategy for large tables too. The same GUI tools that make it easy to export small reference tables as INSERTs also make it easy to export large tables, running into the tens of thousands of rows. While there are better ways to load data, it can be too late to do anything about it by the time the script reaches the DBA for deployment. Not only can the script take a while to run but it will flood the SGA with one-off SQL statements.
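As an aside, one of those better ways is a single PL/SQL block using FORALL, which inserts the rows through one shared cursor instead of thousands of one-off statements. The sketch below assumes the INS_TEST table used in the example later in this post, and generates its own padding values rather than carrying real data:

-- A sketch of a bulk load via FORALL: one cursor, one array bind,
-- instead of 10,000 individual INSERT statements.
DECLARE
   TYPE t_padding IS TABLE OF VARCHAR2(30);
   l_padding t_padding := t_padding();
BEGIN
   l_padding.EXTEND(10000);
   FOR i IN 1 .. 10000
   LOOP
      l_padding(i) := DBMS_RANDOM.string('x', 30);
   END LOOP;
   FORALL i IN 1 .. 10000
      INSERT INTO ins_test (id, padding)
      VALUES (i, l_padding(i));
   COMMIT;
END;
/

Of course, this only helps if the data can be generated or read from somewhere; exported reference data stuck inside a script of literal INSERTs is exactly the situation this post is about.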

Both problems can be mitigated with the CURSOR_SHARING parameter though. Somewhat simplistically, changing the parameter from its default value of EXACT to FORCE will replace the literal values in the insert statements with bind variables. Since CURSOR_SHARING is a session modifiable setting it can be changed with:

ALTER SESSION SET cursor_sharing = FORCE

So how effective is CURSOR_SHARING? Let’s run a test using 10,000 inserts into an empty table, first with CURSOR_SHARING set to EXACT, i.e. no forced bind variable substitution. The inserts are all along the lines of:

Insert into INS_TEST (ID,PADDING) values (1,'ZLwgVwYBfDopIlMbvjQjtWNfVfFrKP');

The full scripts can be downloaded using the links at the end of this post.

SQL>SET ECHO ON
SQL>VARIABLE start_time NUMBER
SQL>ALTER SYSTEM FLUSH SHARED_POOL
  2  /

System altered.

SQL>TRUNCATE TABLE ins_test
  2  /

Table truncated.

SQL>ALTER SESSION SET cursor_sharing = EXACT
  2  /

Session altered.

SQL>EXEC :start_time := dbms_utility.get_cpu_time

PL/SQL procedure successfully completed.

SQL>SET TERMOUT OFF
-- 10,000 insert statements executed here but not shown
SQL>EXEC dbms_output.put_line ('Insert duration (cursor_sharing=exact): ' || TO_CHAR(dbms_utility.get_cpu_time - :start_time))
Insert duration (cursor_sharing=exact): 1288

PL/SQL procedure successfully completed.

So, our inserts took 12.9 seconds (DBMS_UTILITY.GET_CPU_TIME reports hundredths of a second), which is quite good for my small notebook, albeit one equipped with a fast processor and an SSD.

Now let’s see what happens if we set CURSOR_SHARING to FORCE, which substitutes literal values with bind values in our insert statements:

SQL>TRUNCATE TABLE ins_test
  2  /

Table truncated.

SQL>ALTER SESSION SET cursor_sharing = FORCE
  2  /

Session altered.

SQL>EXEC :start_time := dbms_utility.get_cpu_time

PL/SQL procedure successfully completed.

SQL>SET TERMOUT OFF
-- 10,000 insert statements executed here but not shown
SQL>EXEC dbms_output.put_line ('Insert duration (cursor_sharing=force): ' || TO_CHAR(dbms_utility.get_cpu_time - :start_time))
Insert duration (cursor_sharing=force): 626

PL/SQL procedure successfully completed.

SQL>
SQL>ALTER SESSION SET cursor_sharing = EXACT
  2  /

Session altered.

Using FORCE for CURSOR_SHARING halved our execution time: from 12.9 seconds down to 6.3. A very nice benefit from a single line change to the deployment script!

So, what about our SGA? Running the following query after each test:

SELECT AVG(executions)
FROM   v$sql
WHERE  sql_text LIKE 'Insert into INS\_TEST%' ESCAPE '\'

gives us an average of 1 when CURSOR_SHARING is EXACT and 10,000 when it’s FORCE, showing that CURSOR_SHARING is behaving exactly as we want: 10,000 one-off cursors in the first case and a single cursor executed 10,000 times in the second.
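You can also see the substitution itself in v$sql. With FORCE the literals are replaced by system-generated bind variables named SYS_B_n, which is why all 10,000 statements collapse into a single cursor:

-- Inspect the statement text as stored in the shared pool.
SELECT sql_text
,      executions
FROM   v$sql
WHERE  sql_text LIKE 'Insert into INS\_TEST%' ESCAPE '\'
/

-- Typical sql_text with CURSOR_SHARING = FORCE:
--   Insert into INS_TEST (ID,PADDING) values (:"SYS_B_0",:"SYS_B_1")

With EXACT the same query would return 10,000 rows, one per literal statement, each with a single execution.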

One last point to note: always change CURSOR_SHARING back to the default value immediately at the end of the script! Leaving it set to FORCE would affect every subsequent statement run in the session, not just the bulk inserts, so restoring the default is simply good practice.


Download the scripts for this post.