In the previous part of this post we briefly looked at autotrace: what it is, how to access it and what it can tell us. In this part we’ll use autotrace to make some observations about simple queries to illustrate its usefulness.
The examples that follow were performed on an Oracle 11.2.0.2 64-bit database running on Windows 7 64-bit.
We’ll start off by creating a test table with three columns and a primary key on one of them, and populate it with 1 million rows. Like any good Oracle developer, we gather statistics on the table once we’ve populated it.
SET ECHO ON
SET DEFINE OFF

CREATE TABLE test_tab
   (id       NUMBER(10) NOT NULL
   ,val      NUMBER(10) NOT NULL
   ,padding  CHAR(50)   NOT NULL)
/

INSERT INTO test_tab
SELECT rownum
,      MOD(rownum,1000)
,      'X'
FROM   dual
CONNECT BY rownum <= 1000000
/

COMMIT
/

CREATE UNIQUE INDEX test_tab_pk
   ON test_tab (id)
/

ALTER TABLE test_tab
   ADD CONSTRAINT test_tab_pk PRIMARY KEY (id)
/

EXEC dbms_stats.gather_table_stats('','TEST_TAB')
First up, let’s run a simple query with autotrace enabled in its default state to look up a single row from the table. A word of warning though: the script flushes the buffer cache and shared pool first, so please don’t try this on your production system! The script we’ll execute is:
SET ECHO ON

ALTER SYSTEM FLUSH BUFFER_CACHE
/

ALTER SYSTEM FLUSH SHARED_POOL
/

SET AUTOTRACE ON

SELECT *
FROM   test_tab
WHERE  id = 123456
/

SET AUTOTRACE OFF
Which gives us the following output:
SQL> SET ECHO ON
SQL>
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE
  2  /

System altered.

SQL>
SQL> ALTER SYSTEM FLUSH SHARED_POOL
  2  /

System altered.

SQL>
SQL> SET AUTOTRACE ON
SQL>
SQL> SELECT *
  2  FROM   test_tab
  3  WHERE  id = 123456
  4  /

        ID        VAL PADDING
---------- ---------- --------------------------------------------------
    123456        456 X

Execution Plan
----------------------------------------------------------
Plan hash value: 752309145

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |    60 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TAB    |     1 |    60 |     3   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | TEST_TAB_PK |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=123456)

Statistics
----------------------------------------------------------
         73  recursive calls
          0  db block gets
         77  consistent gets
         45  physical reads
          0  redo size
        579  bytes sent via SQL*Net to client
        509  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
SQL> SET AUTOTRACE OFF
As expected, the execution plan section of the output shows that Oracle obtained the relevant row from the table via the primary key index. The plan shown is the same as would be obtained from EXPLAIN PLAN FOR … followed by DBMS_XPLAN.DISPLAY. The Statistics section is where autotrace adds value beyond the execution plan.
The statistics from the initial query tell us (keeping in mind that the library and buffer caches had just been cleared):
- Oracle needed to make 73 recursive calls to run the query. These are internal queries against the data dictionary to determine that the table exists, which columns it has, whether the user has access to it, and so on.
- The 0 db block gets tells us that Oracle did not access any data in “current” mode, which would be required if Oracle were updating the data.
- 77 consistent gets is the number of logical I/O operations Oracle made against its buffer cache. Many of these would have resulted from the recursive queries Oracle ran.
- Oracle had to make 45 physical reads against disk to obtain the data needed for the query. This too includes data needed for the recursive calls, not just blocks from the TEST_TAB table that the query ran against.
- Since the query did not modify any data, the redo size is 0, as would be expected.
- The next two entries tell us how much data was transferred between the database and the SQL*Plus client. The roundtrips entry shows that all the data was retrieved in a single round trip; Oracle didn’t need to stream data to SQL*Plus across multiple trips.
- 6 in-memory sorts were required to complete the query. Given that the execution plan contains no sorting operation, we can assume the sorts were performed by the recursive calls.
- Finally, the query returned 1 row to the client.
If we were to execute the exact same query again without flushing the library and buffer caches, the autotrace statistics would change:
SQL> SET ECHO ON
SQL> SET AUTOTRACE ON
SQL>
SQL> SELECT *
  2  FROM   developer.test_tab
  3  WHERE  id = 123456
  4  /

        ID        VAL PADDING
---------- ---------- --------------------------------------------------
    123456        456 X

Execution Plan
----------------------------------------------------------
Plan hash value: 752309145

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |    60 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TAB    |     1 |    60 |     3   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | TEST_TAB_PK |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=123456)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        579  bytes sent via SQL*Net to client
        509  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
SQL> SET AUTOTRACE OFF
SQL>
SQL> SELECT blevel
  2  FROM   user_indexes
  3  WHERE  index_name = 'TEST_TAB_PK'
  4  /

    BLEVEL
----------
         2
The execution plan section shows that we executed a query identical to the previous one. However, since Oracle now has the statement cached in the library cache and the table data in the buffer cache, the recursive calls and sorts have disappeared from the statistics section. More importantly, logical I/O (consistent gets) has dropped dramatically to just 4. We can account for this value from the execution plan and the size of the index: Oracle “walked” the index to find where the relevant row was in the table and then jumped straight to the row. Querying the data dictionary shows the index has 2 branch levels (BLEVEL from the USER_INDEXES view), which accounts for 2 consistent gets; accessing the index leaf block adds 1 more, and reading the row from the table block adds another 1, giving the total of 4 that autotrace reports.
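The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is just a sketch of the reasoning in the paragraph, assuming a unique index scan touches exactly one leaf block and one table block, which holds for this single-row lookup:

```python
# Estimate consistent gets for a single-row lookup via a unique index.
# BLEVEL branch blocks are read walking from the root down towards the
# leaf, then one leaf block, then one table block for the row itself.
def unique_scan_gets(blevel):
    branch_gets = blevel   # root and intermediate branch blocks
    leaf_gets = 1          # the index leaf block holding the key
    table_gets = 1         # the table block holding the row
    return branch_gets + leaf_gets + table_gets

print(unique_scan_gets(2))  # BLEVEL = 2 gives 4, matching autotrace
```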
As might be anticipated, the number of bytes sent to and from the SQL*Plus client remains the same, as does the number of rows processed.
The last autotrace example illustrates the impact of the client fetch size, the number of rows the client retrieves per round trip, on the processing of a query. In this example we retrieve 1,000 rows from our table: the first query fetches 10 rows at a time and the second fetches 500 rows at a time.
Our script is:
SET ECHO ON

VARIABLE l_start NUMBER
VARIABLE l_end   NUMBER

EXEC :l_start := 123000
EXEC :l_end   := 123999

SET ARRAYSIZE 10

SET AUTOTRACE TRACEONLY

SELECT *
FROM   test_tab
WHERE  id BETWEEN :l_start AND :l_end
/

SET ARRAYSIZE 500

SELECT *
FROM   test_tab
WHERE  id BETWEEN :l_start AND :l_end
/

SET AUTOTRACE OFF
and the output is:
SQL> SET ECHO ON
SQL>
SQL> VARIABLE l_start NUMBER
SQL> VARIABLE l_end   NUMBER
SQL>
SQL> EXEC :l_start := 123000

PL/SQL procedure successfully completed.

SQL> EXEC :l_end := 123999

PL/SQL procedure successfully completed.

SQL>
SQL> SET ARRAYSIZE 10
SQL>
SQL> SET AUTOTRACE TRACEONLY
SQL>
SQL> SELECT *
  2  FROM   test_tab
  3  WHERE  id BETWEEN :l_start AND :l_end
  4  /

1000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1954378606

--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |  2500 |   146K|    53   (0)| 00:00:01 |
|*  1 |  FILTER                      |             |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| TEST_TAB    |  2500 |   146K|    53   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | TEST_TAB_PK |  4500 |       |    12   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_NUMBER(:L_START)<=TO_NUMBER(:L_END))
   3 - access("ID">=TO_NUMBER(:L_START) AND "ID"<=TO_NUMBER(:L_END))

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        214  consistent gets
          0  physical reads
          0  redo size
      32791  bytes sent via SQL*Net to client
       1609  bytes received via SQL*Net from client
        101  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1000  rows processed

SQL>
SQL> SET ARRAYSIZE 500
SQL>
SQL> SELECT *
  2  FROM   test_tab
  3  WHERE  id BETWEEN :l_start AND :l_end
  4  /

1000 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 1954378606

--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |  2500 |   146K|    53   (0)| 00:00:01 |
|*  1 |  FILTER                      |             |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| TEST_TAB    |  2500 |   146K|    53   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | TEST_TAB_PK |  4500 |       |    12   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_NUMBER(:L_START)<=TO_NUMBER(:L_END))
   3 - access("ID">=TO_NUMBER(:L_START) AND "ID"<=TO_NUMBER(:L_END))

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         19  consistent gets
          0  physical reads
          0  redo size
      14857  bytes sent via SQL*Net to client
        531  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1000  rows processed

SQL>
SQL> SET AUTOTRACE OFF
From the autotrace output we can see that the execution plan is identical for both queries, which is expected since the query itself hasn’t changed. It is also fairly obvious that the number of network round trips and the amount of data sent to the SQL*Plus client are greater with the smaller batch size: we made 101 round trips totalling 32,791 bytes with the 10-row fetch size, as opposed to 3 round trips and 14,857 bytes with the 500-row fetch size. The larger number of round trips will be especially noticeable in situations where network latency is a concern.
As a side note: why were there 101 round trips to fetch 1,000 rows when the fetch size was 10 (1,000 / 10 = 100, not 101), and 3 round trips with the 500-row fetch size? The answer is that after all the rows have been delivered, the client makes one more fetch request and is told “that’s it”, which accounts for the extra round trip.
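The round-trip arithmetic can be expressed as a tiny sketch (this assumes the client fetches full batches and makes one final empty fetch, which matches the autotrace figures above):

```python
import math

# Round trips = one fetch per batch of ARRAYSIZE rows, plus a final
# fetch that comes back empty and signals end-of-data to the client.
def round_trips(rows, arraysize):
    return math.ceil(rows / arraysize) + 1

print(round_trips(1000, 10))   # 101, as autotrace reported
print(round_trips(1000, 500))  # 3
```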
Autotrace also alerts us to another aspect that may not be so obvious: the number of consistent gets is greater when the fetch size is 10 than when it is 500; 214 as compared to 19, a difference of 195. The reason is that after each batch is fetched the database has to dive back into the data at the point where it left off. These extra visits result in more logical I/O, since Oracle re-reads the same index and table blocks multiple times: nearly 100 extra fetches, each re-visiting an index block and a table block, accounts for the 195 extra consistent gets.
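A rough model of that overhead can be sketched as follows. Note this is an approximation I am assuming for illustration, not an exact Oracle formula; it assumes each extra fetch restart re-reads roughly one index leaf block and one table block:

```python
import math

# Approximate the extra consistent gets caused by a small fetch size:
# every additional fetch must re-visit the index leaf block and the
# table block where the previous fetch left off (~2 gets per restart).
def extra_gets(rows, small_arraysize, large_arraysize, gets_per_restart=2):
    extra_fetches = (math.ceil(rows / small_arraysize)
                     - math.ceil(rows / large_arraysize))
    return extra_fetches * gets_per_restart

print(extra_gets(1000, 10, 500))  # ~196, close to the 195 autotrace showed
```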
This last example clearly shows the extra overhead involved in having a small fetch size. Since the fetch size is controlled by the client, it is worth confirming with the client application developers that it has been sized appropriately. Of course, this example assumes that all 1,000 rows were actually required; the most efficient strategy is to filter the data inside the database and retrieve only the rows the application actually needs.
This concludes our initial look at autotrace. It’s a quick and simple tool for obtaining highly useful information about the performance of SQL statements. The combination of execution plan, simple statistics and easy accessibility makes it particularly attractive for quick investigations when things go wrong, or for evaluating different strategies when composing SQL.