Constants and package state revisited

I have previously noted that numeric constants declared at the package level don’t create package state, and that strings and (weirdly) negative numbers do create package state. Happily, under Oracle 19c and above, no package level constants create package state.

We can test this using a very basic package (this test was done using XE 21c):

CREATE OR REPLACE PACKAGE pkg_constants
AS

   FUNCTION get_string
   RETURN VARCHAR2;

   FUNCTION get_number
   RETURN NUMBER;

   FUNCTION get_negative_number
   RETURN NUMBER;

END pkg_constants;
/

CREATE OR REPLACE PACKAGE BODY pkg_constants
AS

   c_string            CONSTANT VARCHAR2(20) := 'abc';
   c_number            CONSTANT NUMBER := 100;
   c_negative_number   CONSTANT NUMBER := -100;

   FUNCTION get_string
   RETURN VARCHAR2
   AS
   BEGIN
      RETURN c_string;
   END get_string;

   FUNCTION get_number
   RETURN NUMBER
   AS
   BEGIN
      RETURN c_number;
   END get_number;

   FUNCTION get_negative_number
   RETURN NUMBER
   AS
   BEGIN
      RETURN c_negative_number;
   END get_negative_number;

END pkg_constants;
/


Using a simple query:

select pkg_constants.get_string as string_value
,      pkg_constants.get_number as number_value
,      pkg_constants.get_negative_number as negative_number
from   dual;

we can list the package level values:

STRING_VALUE    NUMBER_VALUE NEGATIVE_NUMBER
--------------- ------------ ---------------
abc                      100            -100

In another session if we now recompile the package body with new values for the constants:

CREATE OR REPLACE PACKAGE BODY pkg_constants
AS

   c_string            CONSTANT VARCHAR2(20) := 'xyz';
   c_number            CONSTANT NUMBER := 99;
   c_negative_number   CONSTANT NUMBER := -99;

   FUNCTION get_string
   RETURN VARCHAR2
   AS
   BEGIN
      RETURN c_string;
   END get_string;

   FUNCTION get_number
   RETURN NUMBER
   AS
   BEGIN
      RETURN c_number;
   END get_number;

   FUNCTION get_negative_number
   RETURN NUMBER
   AS
   BEGIN
      RETURN c_negative_number;
   END get_negative_number;

END pkg_constants;
/

When we rerun the query in our previous session, we get the new values and not ORA-04068:

select pkg_constants.get_string as string_value
,      pkg_constants.get_number as number_value
,      pkg_constants.get_negative_number as negative_number
from   dual;

STRING_VALUE    NUMBER_VALUE NEGATIVE_NUMBER
--------------- ------------ ---------------
xyz                       99             -99

If we remove the CONSTANT keywords in the package body, i.e. turn the constants into package level variables, then rerunning the query after the second compilation of the package body results in the familiar ORA-04068.
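
For completeness, here’s a sketch of that package-state version of the body, i.e. the original body with the CONSTANT keyword removed from the declarations:

CREATE OR REPLACE PACKAGE BODY pkg_constants
AS

   -- package level variables rather than constants: these do create package state
   c_string            VARCHAR2(20) := 'abc';
   c_number            NUMBER := 100;
   c_negative_number   NUMBER := -100;

   FUNCTION get_string
   RETURN VARCHAR2
   AS
   BEGIN
      RETURN c_string;
   END get_string;

   FUNCTION get_number
   RETURN NUMBER
   AS
   BEGIN
      RETURN c_number;
   END get_number;

   FUNCTION get_negative_number
   RETURN NUMBER
   AS
   BEGIN
      RETURN c_negative_number;
   END get_negative_number;

END pkg_constants;
/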

Global indexes and query performance

A question came to me recently about whether a global index on a partitioned table can provide a performance benefit over a similar index on a non-partitioned table. As with many query-related things, the answer is “it depends”, but the simple answer is “yes, it can if your query filters on the table’s partitioning key”. Let’s look at what I mean using a simple example…

We’ll use two tables for this demo, run under Oracle XE 21c:

  • NONPART_TABLE – the non-partitioned table
  • PART_TABLE – the partitioned table

Each table will have the following columns:

  • ID – unique value designed to represent the primary key, which we won’t actually use for this demo
  • CAT_1 – column with 7 distinct values. This will be the list partitioning key for table PART_TABLE
  • CAT_2 – column with 1,000 distinct values. This will be the indexed column.

A simple b-tree index will be created on the CAT_2 column. The index on the partitioned table, PART_TABLE, will be global.

Each table will be populated with 1,000,000 rows:

  • ID – a unique integer
  • CAT_1 – values VAL_0 through VAL_6, distributed evenly
  • CAT_2 – values VAL_0 through VAL_999, distributed evenly

The values in columns CAT_1 and CAT_2 are set such that there is an even distribution between the combinations of (CAT_1, CAT_2).

The script for the above is:

create table nonpart_table
  (id     number not null
  ,cat_1  varchar2(10) not null
  ,cat_2  varchar2(10) not null)
/


create table part_table
  (id     number not null
  ,cat_1  varchar2(10) not null
  ,cat_2  varchar2(10) not null)
partition by list (cat_1) automatic
  (partition p1 values ('VAL_0'))
/

insert into nonpart_table
  (id, cat_1, cat_2)
with rowgen as (select rownum as rn from dual connect by level <= 1000)  
select rownum
,      'VAL_' || trim(mod(rownum,7)) as cat_1
,      'VAL_' || trim(mod(rownum,1000)) as cat_2
from   rowgen r1
,      rowgen r2;

insert into part_table
select *
from   nonpart_table;

commit;

create index nonpart_table_idx1
  on nonpart_table (cat_2);

create index part_table_idx1
  on part_table (cat_2);

exec dbms_stats.gather_table_stats ('','nonpart_table');
exec dbms_stats.gather_table_stats ('','part_table');
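
Since PART_TABLE was created with automatic list partitioning, it’s worth confirming that the inserts generated one partition per CAT_1 value (we’d expect seven). A quick check of the dictionary does this; the exact HIGH_VALUE formatting will vary:

select partition_name
,      high_value
from   user_tab_partitions
where  table_name = 'PART_TABLE'
order  by partition_position;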

For our test, we’ll look at the following simple query, hinted to ensure we access the table via the index:

select /*+ index (nonpart_table nonpart_table_idx1) */ *
from   nonpart_table
where  cat_1 = 'VAL_5'
and    cat_2 = 'VAL_42';

Running this query with AUTOTRACE enabled, we get the following:

143 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3717371333

----------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                    |   143 |  2717 |  1006   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NONPART_TABLE      |   143 |  2717 |  1006   (1)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | NONPART_TABLE_IDX1 |  1000 |       |     5   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CAT_1"='VAL_5')
   2 - access("CAT_2"='VAL_42')


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1016  consistent gets
          0  physical reads
          0  redo size
       4004  bytes sent via SQL*Net to client
        151  bytes received via SQL*Net from client
         11  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        143  rows processed

So, against our non-partitioned table, we retrieved the 143 rows with 1,016 consistent gets. Now we repeat the test using the partitioned table and obtain the AUTOTRACE output of that:

143 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2417870909

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name            | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                 |   143 |  2717 |   897   (0)| 00:00:01 |       |       |
|*  1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| PART_TABLE      |   143 |  2717 |   897   (0)| 00:00:01 |     4 |     4 |
|*  2 |   INDEX RANGE SCAN                         | PART_TABLE_IDX1 |   143 |       |     6   (0)| 00:00:01 |       |       |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CAT_1"='VAL_5')
   2 - access("CAT_2"='VAL_42')


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        144  consistent gets
          0  physical reads
          0  redo size
       4004  bytes sent via SQL*Net to client
        151  bytes received via SQL*Net from client
         11  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        143  rows processed

This time we can see the 143 rows were retrieved with just 144 consistent gets, which is about one seventh of the 1,016 consistent gets for the non-partitioned table query. Clearly the query against the partitioned table is better from a performance perspective.

If we look at the execution plan for the partitioned table query we can see why it performs better. Line 1 of the plan, operation TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED, has PSTART and PSTOP set to 4, i.e. only one partition is being accessed. What Oracle is doing is an index range scan of the global index, satisfying the CAT_2 = ‘VAL_42’ criterion of the query. This yields the table ROWIDs of the rows that satisfy that criterion, but Oracle doesn’t need to visit the table rows to evaluate the CAT_1 = ‘VAL_5’ criterion. Since the index is global, each table ROWID identifies the partition its row resides in, and that alone is enough for Oracle to determine whether the referenced row can satisfy CAT_1 = ‘VAL_5’ or not.

If we contrast that with the query against the non-partitioned table, we see that Oracle had to visit the table row to evaluate the CAT_1 = ‘VAL_5’ criterion, and in doing so incurred an extra logical IO for every row where CAT_2 = ‘VAL_42’. This is why its consistent gets figure is roughly seven times that of the query against the partitioned table.
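
We can also see that a rowid carries the identity of its partition by pulling the data object number out of it. The following illustrative query, using the supplied DBMS_ROWID package, returns a distinct data object number for each CAT_1 value, i.e. one per partition:

select cat_1
,      dbms_rowid.rowid_object(rowid) as data_object_id
,      count(*) as row_count
from   part_table
group  by cat_1, dbms_rowid.rowid_object(rowid)
order  by cat_1;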

You might be wondering: if our query contains predicates on both CAT_1 and CAT_2, why not make the index local? Well, in most systems there are many different queries, and we might have some that do not specify the CAT_1 partitioning column. Queries that filter on just CAT_2 will benefit more from a global index than from a local one.
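
For comparison, if we dropped the global index and recreated it as a local one, the DDL would be (a sketch):

drop index part_table_idx1;

create index part_table_idx1
  on part_table (cat_2) local;

A query filtering on CAT_2 alone would then have to probe the index partition in every one of the seven table partitions, rather than performing a single range scan of one global index.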

In summary, despite our index being global, we can get a performance benefit from having our table partitioned for those queries that filter on the partitioning key of the table in addition to filters that would use the index key.

Mutating table error (ORA-04091) but no trigger?!?

I encountered an interesting scenario recently… a “mutating table” error but the reported table had no trigger on it.

The Oracle documentation has this to say about ORA-04091:

ORA-04091: table string.string is mutating, trigger/function may not see it
Cause: A trigger (or a user defined PL/SQL function that is referenced in this statement) attempted to look at (or modify) a table that was in the middle of being modified by the statement which fired it.

Action: Rewrite the trigger (or function) so it does not read that table.

Basically, if we have a row level trigger on a table then our trigger cannot read or write the same table that the trigger belongs to. The following illustrates this:

SQL> create table src_tab
  2    (id     number(10) not null
  3    ,val    char(10) not null
  4    ,constraint src_tab_pk primary key (id));

Table created.

SQL> insert /*+ append */ into src_tab
  2    (id, val)
  3  with row_gen as (select rownum as rn from dual connect by level <= 1000)
  4  select rownum
  5  ,      'x'
  6  from   row_gen r1
  7  ,      row_gen r2;

1000000 rows created.

SQL> commit;

SQL> create or replace trigger src_tab_aiu
  2  after insert or update
  3  on src_tab
  4  for each row
  5  declare
  6    l_temp number;
  7  begin
  8  
  9    select count(*)
 10    into   l_temp
 11    from   src_tab;
 12  
 13  end src_tab_aiu;
 14  /

Trigger created.

SQL> update src_tab
  2  set val = 'y'
  3  where id = 1234;
update src_tab
       *
ERROR at line 1:
ORA-04091: table DEVELOPER.SRC_TAB is mutating, trigger/function may not see it 
ORA-06512: at "DEVELOPER.SRC_TAB_AIU", line 5 
ORA-04088: error during execution of trigger 'DEVELOPER.SRC_TAB_AIU'

In the above we created a table, populated it with a million rows, added a row level trigger that counts the number of rows in the table and then attempted to update a row. The update caused the trigger to fire, which then failed with our ORA-04091 mutating table error.

In the scenario I encountered recently the trigger in the error stack was valid, but the table named in the error was not the table on which the trigger was defined. The trigger in question did insert into the reported table and had been operating successfully for a number of years, so the usual cause of the ORA-04091 exception was puzzling…

… until one of the DBAs reported that they had been doing some online table maintenance. The cause of the ORA-04091 scenario was then easy to replicate.

We’ll start with a source table and define a trigger that inserts the data changes into a log table, mimicking what might be an audit or change capture process. We’ll then run a series of updates against the source table and at the same time perform an online move of the log table (this was run on a 21c Express Edition database):

SQL> create table src_tab
  2    (id     number(10) not null
  3    ,val    char(10) not null
  4    ,constraint src_tab_pk primary key (id));

Table created.

SQL> insert /*+ append */ into src_tab
  2    (id, val)
  3  with row_gen as (select rownum as rn from dual connect by level <= 1000)
  4  select rownum
  5  ,      'x'
  6  from   row_gen r1
  7  ,      row_gen r2;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create table src_tab_log
  2    (id     number(10) not null
  3    ,val    char(10) not null
  4    ,update_timestamp timestamp not null);

Table created.

SQL> insert /*+ append */ into src_tab_log
  2    (id, val, update_timestamp)
  3  with row_gen as (select rownum as rn from dual connect by level <= 1000)
  4  select rownum
  5  ,      'x'
  6  ,      systimestamp
  7  from   row_gen r1
  8  ,      row_gen r2;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create or replace trigger src_tab_aiu
  2  after insert or update
  3  on src_tab
  4  for each row
  5  declare
  6  begin
  7  
  8    insert into src_tab_log
  9      (id
 10      ,val
 11      ,update_timestamp)
 12    values
 13      (:NEW.id
 14      ,:NEW.val
 15      ,systimestamp);
 16  
 17    dbms_session.sleep (0.01); -- just to slow down the processing
 18  
 19  end src_tab_aiu;
 20  /

Trigger created.

SQL> update src_tab -- to show that the trigger does work
  2  set val = 'y'
  3  where id = 1234;

1 row updated.

SQL> commit;

Commit complete.

SQL> begin -- run lots of updates and in another session run an online move of src_tab_log
  2    for i in 1..1000
  3    loop
  4      update src_tab
  5      set val = 'y'
  6      where id = i*100;
  7      commit;
  8    end loop;
  9  end;
 10  /
begin
*
ERROR at line 1:
ORA-04091: table DEVELOPER.SRC_TAB_LOG is mutating, trigger/function may not 
see it 
ORA-06512: at "DEVELOPER.SRC_TAB_AIU", line 4 
ORA-04088: error during execution of trigger 'DEVELOPER.SRC_TAB_AIU' 
ORA-06512: at line 4 

Whilst the last block updating the table rows was running, the following was kicked off in another session:

SQL> alter table src_tab_log move online;

which caused the reported ORA-04091.

Online partitioning of a table has the same impact. It seems versions 19c and 21c are affected.

So, our online table maintenance operation can interfere with the usual operation of the database when the table is subject to DML invoked from a trigger. I’m kinda inclined to call “bug” on this but I guess Oracle might like to describe it as an “unavoidable feature”. Regardless, it’s something to be aware of: avoid online table reorganisations on tables that are subject to trigger-sourced DML.

DBMS_XPLAN hint report

After a few years working with SQL Server I’ve recently been doing some performance improvements on an Oracle system running 19c. After clearing out the Oracle cobwebs in my mind I set about tuning some queries and started to look at execution plans. It wasn’t long before I noticed additional information in the output of dbms_xplan that I hadn’t seen before; the hint report section.

Previously, one of the problems with applying hints was that the database was silent on whether a hint was being applied or not. It may be that the hint is not correctly specified or it may not be a valid option for the query. Now the hint report section of the explain plan provides information on hints and why they are not being applied.

Let’s take a quick look at what dbms_xplan shows us, using an Oracle XE 21c database. We’ll start with a simple table with no data:

create table t
    (id   number(6) not null);

exec dbms_stats.gather_table_stats ('','t')

Next we’ll generate the execution plan for a query that specifies an index access for the table, which is invalid as the table has no indexes:

explain plan
for
select /*+ index(t) */
       *
from   t;

select t.*
from   table(dbms_xplan.display()) t;

We get the following output from dbms_xplan:

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    13 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T    |     1 |    13 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
---------------------------------------------------------------------------

   1 -  SEL$1 / "T"@"SEL$1"
         U -  index(t)

The bottom section of the output shows that there is one hint that is being ignored by the database.

Specifying a table reference that doesn’t exist generates an “unresolved” warning:

explain plan
for
select /*+ index(x) */
       *
from   t;

select t.*
from   table(dbms_xplan.display()) t;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    13 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T    |     1 |    13 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (N - Unresolved (1))

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
---------------------------------------------------------------------------

   1 -  SEL$1
         N -  index(x)

If we try something that’s completely invalid, which might happen if our hint contains a typo, then we get a syntax error:

explain plan
for
select /*+ no_such_hint(t) */
       *
from   t;

select t.*
from   table(dbms_xplan.display()) t;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    13 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T    |     1 |    13 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
---------------------------------------------------------------------------

   1 -  SEL$1
         E -  no_such_hint

I’m not sure in which specific version of Oracle the hint report first appeared in the dbms_xplan output. I don’t recall it in the versions of 12c I worked with a few years ago but it’s present in 19c. It’s a very useful addition!

PL/SQL Conditional Compilation

I’ve been going through the motions of database upgrades at work. From Oracle 12.2 onwards, Oracle made changes to the call interface for a number of their in-built packages that involve encrypted external communication (utl_tcp, utl_http, utl_smtp, etc). For example, the call to the routine utl_http.begin_request has gone from:

UTL_HTTP.BEGIN_REQUEST (
   url               IN  VARCHAR2,
   method            IN  VARCHAR2 DEFAULT 'GET',
   http_version      IN  VARCHAR2 DEFAULT NULL,
   request_context   IN  request_context_key DEFAULT NULL)
RETURN req;

to:

UTL_HTTP.BEGIN_REQUEST (
   url               IN  VARCHAR2,
   method            IN  VARCHAR2 DEFAULT 'GET',
   http_version      IN  VARCHAR2 DEFAULT NULL,
   request_context   IN  request_context_key DEFAULT NULL,
   https_host        IN  VARCHAR2 DEFAULT NULL)
RETURN req;

When creating an https connection there’s a new parameter that needs to be supplied. The change is quite trivial but how should you handle this change from within your source code when you will have databases of different versions? You are probably faced with the scenario that your development and test databases get upgraded some time before your production databases. Development does not stop whilst this is going on. Furthermore, you don’t really want to remember to deploy new versions of code at the point in time of the upgrade as that’s yet another thing to remember within an already complicated process. Conditional compilation offers us a solution.

Using conditional compilation we can support different code sections at the same time and have Oracle use the one that is appropriate to the database version that is executing the code. For example, I use the above utl_http routines to post to Slack and pre and post 12.2 database versions can be supported using the following:

-- for Oracle 12.1 and earlier
$IF (DBMS_DB_VERSION.VERSION = 12 AND DBMS_DB_VERSION.RELEASE = 1)
    OR (DBMS_DB_VERSION.VERSION < 12)
$THEN
   l_http_request := utl_http.begin_request (url          => l_slack_webhook_url
                                             ,method       => 'POST'
                                             ,http_version => 'HTTP/1.1');
-- for Oracle 12.2 and above...
$ELSE
   l_http_request := utl_http.begin_request (url          => l_slack_webhook_url
                                             ,method       => 'POST'
                                             ,http_version => 'HTTP/1.1'
                                             ,https_host   => 'slack.com');
$END

It’s quite easy to see what’s going on in the above code block. The lines that commence with $ are the conditional compilation directives. In this case I have a $IF … $THEN … $ELSE … $END block that evaluates the database version and compiles the version of the utl_http call appropriate to that database.
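
If you want to confirm which branch actually ended up in a compiled unit, dbms_preprocessor can display the post-processed source. A sketch, assuming the call above lives in a package body named PKG_SLACK in the current schema (the name is made up for illustration):

set serveroutput on

begin
   -- prints the source as it looks after conditional compilation has been applied
   dbms_preprocessor.print_post_processed_source (object_type => 'PACKAGE BODY'
                                                 ,schema_name => user
                                                 ,object_name => 'PKG_SLACK');
end;
/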

Conditional compilation is another tool for the PL/SQL toolbox that’s quite handy in certain scenarios. The documentation has further suggested uses and the full breadth of functionality available.

ROLLUP Function

“The database seems a bit slow today…” was the comment I received upon my arrival at work recently. It wasn’t too hard to see why either, with a rather large, convoluted reporting-style query chugging along, much to the annoyance of the other activity in the database. Peering into the depths of the query I noticed that at its core it comprised a UNION ALL of three queries. What struck me as strange was that the three components were nearly identical to each other:

SELECT col_1, col_2, col3, col_4
FROM <tables>
WHERE <selection criteria>
GROUP BY
      col_1, col_2, col3, col_4
UNION ALL
SELECT col_1, col_2, col3, 'TOTAL'
FROM <tables>
WHERE <selection criteria>
GROUP BY
      col_1, col_2, col3, 'TOTAL'
UNION ALL
SELECT col_1, col_2, 'TOTAL', 'TOTAL'
FROM <tables>
WHERE <selection criteria>
GROUP BY
      col_1, col_2, 'TOTAL', 'TOTAL'

The execution details of the query showed that Oracle spent 10 minutes executing each component. The tables and the WHERE clause were identical in each of the query components. This meant that the second component was nothing more than an aggregation of the first, and the third an aggregation of the second. Surely there’s a better way of obtaining the aggregate rows without rerunning the entire query? And there is…

Lurking in the depths of the Database Data Warehousing Guide, and not the SQL Language Reference, is a chapter titled SQL for Aggregation in Data Warehouses. The particular function we want for our query is ROLLUP. The documentation has this to say about ROLLUP:

ROLLUP enables a SELECT statement to calculate multiple levels of subtotals across a specified group of dimensions. It also calculates a grand total. ROLLUP is a simple extension to the GROUP BY clause, so its syntax is extremely easy to use. The ROLLUP extension is highly efficient, adding minimal overhead to a query.

By way of simple example we create a table with 4 columns: 3 of them dimension type attributes and one numeric metric, along with some test data:

CREATE TABLE rollup_test
	(col_a 		VARCHAR2(1)	NOT NULL
	,col_b		VARCHAR2(1)	NOT NULL
	,col_c		VARCHAR2(1)	NOT NULL
	,val		NUMBER(3)	NOT NULL);

INSERT INTO rollup_test
VALUES ('A','A','A',1);
INSERT INTO rollup_test
VALUES ('A','A','B',1);
INSERT INTO rollup_test
VALUES ('A','A','C',1);

INSERT INTO rollup_test
VALUES ('A','B','A',1);
INSERT INTO rollup_test
VALUES ('A','B','B',1);
INSERT INTO rollup_test
VALUES ('A','B','C',1);

INSERT INTO rollup_test
VALUES ('A','C','A',1);
INSERT INTO rollup_test
VALUES ('A','C','B',1);
INSERT INTO rollup_test
VALUES ('A','C','C',1);

INSERT INTO rollup_test
VALUES ('B','A','A',1);
INSERT INTO rollup_test
VALUES ('B','A','B',1);
INSERT INTO rollup_test
VALUES ('B','A','C',1);

INSERT INTO rollup_test
VALUES ('B','B','A',1);
INSERT INTO rollup_test
VALUES ('B','B','B',1);
INSERT INTO rollup_test
VALUES ('B','B','C',1);

INSERT INTO rollup_test
VALUES ('B','C','A',1);
INSERT INTO rollup_test
VALUES ('B','C','B',1);
INSERT INTO rollup_test
VALUES ('B','C','C',1);

COMMIT;

and we can see the impact that ROLLUP has using the following query:

SQL> SELECT col_a
  2  ,      NVL(col_b,'TOTAL') AS b
  3  ,      NVL(col_c,'TOTAL') AS c
  4  ,      SUM(val)           AS sum_val
  5  FROM   rollup_test
  6  GROUP  BY
  7         col_a
  8  ,      ROLLUP(col_b, col_c)
  9  ORDER  BY
 10         col_a
 11  ,      NVL(col_b,'TOTAL')
 12  ,      NVL(col_c,'TOTAL');

COL_A  B      C       SUM_VAL
------ ------ ------ --------
A      A      A             1
A      A      B             1
A      A      C             1
A      A      TOTAL         3
A      B      A             1
A      B      B             1
A      B      C             1
A      B      TOTAL         3
A      C      A             1
A      C      B             1
A      C      C             1
A      C      TOTAL         3
A      TOTAL  TOTAL         9
B      A      A             1
B      A      B             1
B      A      C             1
B      A      TOTAL         3
B      B      A             1
B      B      B             1
B      B      C             1
B      B      TOTAL         3
B      C      A             1
B      C      B             1
B      C      C             1
B      C      TOTAL         3
B      TOTAL  TOTAL         9

26 rows selected.
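
One refinement worth noting: the NVL approach above assumes the dimension columns never contain real NULLs (true here, since COL_B and COL_C are declared NOT NULL, but not necessarily true in a real reporting query). Where they could be NULL, the GROUPING function labels only the rollup-generated rows; a sketch of the same query using it:

SELECT col_a
,      DECODE(GROUPING(col_b), 1, 'TOTAL', col_b) AS b
,      DECODE(GROUPING(col_c), 1, 'TOTAL', col_c) AS c
,      SUM(val)                                   AS sum_val
FROM   rollup_test
GROUP  BY
       col_a
,      ROLLUP(col_b, col_c)
ORDER  BY
       col_a, b, c;

GROUPING(expr) returns 1 for rows where the column has been aggregated away by ROLLUP and 0 otherwise.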

Applying it to our reporting query was as simple as replacing the UNION ALL with:

SELECT col_1, col_2, NVL(col3,'TOTAL'), NVL(col_4,'TOTAL')
FROM <tables>
WHERE <selection criteria>
GROUP BY
      col_1, col_2, ROLLUP(col3, col_4)

No more UNION ALL, no more execution of (nearly) the same query three times and an immediate saving of 20 minutes of wasted execution time… and all before my morning coffee.

Purging AQ can be slow

So, you’ve constructed an awesome multi consumer, event-based processing system built around Oracle AQ. The system’s humming along, with messages flying all over the place, when something goes wrong, maybe a network outage or an application fault, and it all grinds to a halt. Within the database things keep going but the messages are piling up.

“No problem” you say, “I’ve set an expiration time on the messages so they’ll get moved onto the exception queue if they hang around for too long.” Sure enough, the next morning when you get into work and everything is running smoothly again, you see that the exception queue has a couple of million entries in it. After a bit of checking you decide the expired messages are no longer necessary and can be removed. “Easy, I’ll just run a purge of the exception queue using the dbms_aqadm.purge_queue_table routine” you think. Something like:

DECLARE
   l_purge_options dbms_aqadm.aq$_purge_options_t;
BEGIN
   l_purge_options.block := false;
   l_purge_options.delivery_mode := dbms_aqadm.persistent;

   dbms_aqadm.purge_queue_table (queue_table     => 'TEST_Q_TAB'
                                ,purge_condition => 'qtview.queue = ''AQ$_TEST_Q_TAB_E'' '
                                ,purge_options   => l_purge_options);
END;
/

Lunchtime comes around and the purge is still running. “It’s taking a while…”, you muse. A few hours later and you’re thinking about leaving for the day but the purge is still running. By now you’re beginning to wonder what’s going on and whether you should kill the purge and try it again tomorrow.

Well, what’s going on is that using dbms_aqadm.purge_queue_table against a multi consumer queue and specifying a purge condition is slow. Real slow. Here are the results of removing 100,000 messages on a 12.2 database running on my laptop:

  • Multi consumer queue: 20 mins, 57.11 secs
  • Single consumer queue: 4.87 secs
  • Dequeue operation: 34.49 secs

Yep, the purge operation against a multi consumer queue is around 250 times slower than the same operation against a single consumer queue. Interestingly, it is much faster to dequeue the messages than to purge them. I raised this with Oracle Support but it appears that this is due to the work that the purge needs to do, i.e. it’s working as designed.

In summary, when it comes to removing messages from a multi-consumer queue you may like to check how many messages there are and dequeue those messages rather than using the dbms_aqadm.purge_queue_table routine.
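
As a rough sketch of the dequeue approach for the demo exception queue used above (the payload type test_payload_t is an assumption; substitute whatever type your queue table was created with):

DECLARE
   l_dequeue_options     dbms_aq.dequeue_options_t;
   l_message_properties  dbms_aq.message_properties_t;
   l_msgid               RAW(16);
   l_payload             test_payload_t;  -- assumed payload type of the queue table
   e_queue_empty         EXCEPTION;
   PRAGMA EXCEPTION_INIT (e_queue_empty, -25228);
BEGIN
   -- exception queues are created disabled for dequeue, so enable that first
   dbms_aqadm.start_queue (queue_name => 'AQ$_TEST_Q_TAB_E'
                          ,enqueue    => FALSE
                          ,dequeue    => TRUE);

   l_dequeue_options.wait         := dbms_aq.no_wait;
   l_dequeue_options.navigation   := dbms_aq.first_message;
   l_dequeue_options.dequeue_mode := dbms_aq.remove_nodata;  -- we don't need the payload back

   LOOP
      dbms_aq.dequeue (queue_name         => 'AQ$_TEST_Q_TAB_E'
                      ,dequeue_options    => l_dequeue_options
                      ,message_properties => l_message_properties
                      ,payload            => l_payload
                      ,msgid              => l_msgid);
      COMMIT;
   END LOOP;
EXCEPTION
   WHEN e_queue_empty THEN
      COMMIT;  -- ORA-25228 with NO_WAIT simply means there is nothing left to dequeue
END;
/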

The test scripts used to create the results shown above are available.

Database Views: the good, the bad and the ugly

Views are good as they can be used to hide complexity.

Views are bad as they can hide complexity.

It’s when views are used for the first point above without consideration of the second point that things can turn ugly. Here’s a recently encountered example…

A developer decided to modify a process so that it would determine when it was being run from within the internal database job scheduler. The following simple query was added to the code:

SELECT job_name
INTO   l_result
FROM   user_scheduler_running_jobs
where  session_id = l_sid;

What the developer overlooked was that user_scheduler_running_jobs is a view and even though it’s an Oracle data dictionary view the underlying table structure might not be as simple as the query would suggest. The following is the execution plan that Oracle selected:

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                             |       |       |     6 (100)|          |        |      |            |
|   1 |  VIEW                                | USER_SCHEDULER_RUNNING_JOBS |     3 |   198 |     6  (34)| 00:00:01 |        |      |            |
|   2 |   UNION-ALL                          |                             |       |       |            |          |        |      |            |
|   3 |    MERGE JOIN CARTESIAN              |                             |     1 |   190 |     5  (20)| 00:00:01 |        |      |            |
|*  4 |     HASH JOIN OUTER                  |                             |     1 |   190 |     5  (20)| 00:00:01 |        |      |            |
|   5 |      NESTED LOOPS                    |                             |     1 |   171 |     4   (0)| 00:00:01 |        |      |            |
|   6 |       NESTED LOOPS                   |                             |     1 |   171 |     4   (0)| 00:00:01 |        |      |            |
|*  7 |        HASH JOIN OUTER               |                             |     1 |   138 |     1   (0)| 00:00:01 |        |      |            |
|   8 |         NESTED LOOPS                 |                             |     1 |    99 |     1   (0)| 00:00:01 |        |      |            |
|   9 |          NESTED LOOPS                |                             |     1 |    99 |     1   (0)| 00:00:01 |        |      |            |
|  10 |           MERGE JOIN CARTESIAN       |                             |     1 |    73 |     0   (0)|          |        |      |            |
|* 11 |            FIXED TABLE FULL          | X$KCCDI                     |     1 |    15 |     0   (0)|          |        |      |            |
|  12 |            BUFFER SORT               |                             |     1 |    58 |     0   (0)|          |        |      |            |
|* 13 |             PX COORDINATOR           |                             |       |       |            |          |        |      |            |
|  14 |              PX SEND QC (RANDOM)     | :TQ10000                    |     1 |    26 |     0   (0)|          |  Q1,00 | P->S | QC (RAND)  |
|* 15 |               VIEW                   | GV$SCHEDULER_RUNNING_JOBS   |       |       |            |          |  Q1,00 | PCWP |            |
|* 16 |                FIXED TABLE FULL      | X$JSKSLV                    |     1 |    26 |     0   (0)|          |  Q1,00 | PCWP |            |
|* 17 |           INDEX UNIQUE SCAN          | SCHEDULER$_JOB_PK           |     1 |       |     0   (0)|          |        |      |            |
|* 18 |          TABLE ACCESS BY INDEX ROWID | SCHEDULER$_JOB              |     1 |    26 |     1   (0)| 00:00:01 |        |      |            |
|* 19 |         PX COORDINATOR               |                             |       |       |            |          |        |      |            |
|  20 |          PX SEND QC (RANDOM)         | :TQ20000                    |     1 |    28 |     0   (0)|          |  Q2,00 | P->S | QC (RAND)  |
|* 21 |           VIEW                       | GV$SESSION                  |       |       |            |          |  Q2,00 | PCWP |            |
|  22 |            MERGE JOIN CARTESIAN      |                             |     1 |    28 |     0   (0)|          |  Q2,00 | PCWP |            |
|  23 |             NESTED LOOPS             |                             |     1 |    12 |     0   (0)|          |  Q2,00 | PCWP |            |
|* 24 |              FIXED TABLE FIXED INDEX | X$KSLWT (ind:1)             |     1 |     8 |     0   (0)|          |  Q2,00 | PCWP |            |
|* 25 |              FIXED TABLE FIXED INDEX | X$KSLED (ind:2)             |     1 |     4 |     0   (0)|          |  Q2,00 | PCWP |            |
|  26 |             BUFFER SORT              |                             |     1 |    16 |     0   (0)|          |  Q2,00 | PCWP |            |
|* 27 |              FIXED TABLE FIXED INDEX | X$KSUSE (ind:1)             |     1 |    16 |     0   (0)|          |  Q2,00 | PCWP |            |
|* 28 |        INDEX RANGE SCAN              | I_OBJ1                      |     1 |       |     2   (0)| 00:00:01 |        |      |            |
|  29 |       TABLE ACCESS BY INDEX ROWID    | OBJ$                        |     1 |    33 |     3   (0)| 00:00:01 |        |      |            |
|  30 |      PX COORDINATOR                  |                             |       |       |            |          |        |      |            |
|  31 |       PX SEND QC (RANDOM)            | :TQ30000                    |   173 |  2595 |     1 (100)| 00:00:01 |  Q3,00 | P->S | QC (RAND)  |
|  32 |        VIEW                          | GV$PROCESS                  |       |       |            |          |  Q3,00 | PCWP |            |
|* 33 |         FIXED TABLE FULL             | X$KSUPR                     |   173 |  2595 |     1 (100)| 00:00:01 |  Q3,00 | PCWP |            |
|  34 |     BUFFER SORT                      |                             |     1 |       |     5  (20)| 00:00:01 |        |      |            |
|  35 |      FIXED TABLE FULL                | X$KCCDI2                    |     1 |       |     0   (0)|          |        |      |            |
|* 36 |    HASH JOIN OUTER                   |                             |     1 |   286 |     1 (100)| 00:00:01 |        |      |            |
|* 37 |     HASH JOIN OUTER                  |                             |     1 |   267 |     0   (0)|          |        |      |            |
|* 38 |      FILTER                          |                             |       |       |            |          |        |      |            |
|* 39 |       HASH JOIN OUTER                |                             |     1 |   228 |     0   (0)|          |        |      |            |
|  40 |        NESTED LOOPS                  |                             |     1 |   189 |     0   (0)|          |        |      |            |
|  41 |         NESTED LOOPS                 |                             |     1 |   189 |     0   (0)|          |        |      |            |
|  42 |          NESTED LOOPS                |                             |     1 |    97 |     0   (0)|          |        |      |            |
|* 43 |           PX COORDINATOR             |                             |       |       |            |          |        |      |            |
|  44 |            PX SEND QC (RANDOM)       | :TQ40000                    |     1 |    26 |     0   (0)|          |  Q4,00 | P->S | QC (RAND)  |
|* 45 |             VIEW                     | GV$SCHEDULER_RUNNING_JOBS   |       |       |            |          |  Q4,00 | PCWP |            |
|* 46 |              FIXED TABLE FULL        | X$JSKSLV                    |     1 |    26 |     0   (0)|          |  Q4,00 | PCWP |            |
|  47 |           TABLE ACCESS BY INDEX ROWID| SCHEDULER$_LIGHTWEIGHT_JOB  |     1 |    39 |     0   (0)|          |        |      |            |
|* 48 |            INDEX UNIQUE SCAN         | SCHEDULER$_LWJOB_PK         |     1 |       |     0   (0)|          |        |      |            |
|* 49 |          INDEX RANGE SCAN            | SCHEDULER$_LOBJ_UK          |     1 |       |     0   (0)|          |        |      |            |
|* 50 |         TABLE ACCESS BY INDEX ROWID  | SCHEDULER$_LWJOB_OBJ        |     1 |    92 |     0   (0)|          |        |      |            |
|* 51 |        PX COORDINATOR                |                             |       |       |            |          |        |      |            |
|  52 |         PX SEND QC (RANDOM)          | :TQ50000                    |     1 |    39 |            |          |  Q5,00 | P->S | QC (RAND)  |
|* 53 |          VIEW                        | GV$SCHEDULER_INMEM_RTINFO   |       |       |            |          |  Q5,00 | PCWP |            |
|* 54 |           FIXED TABLE FULL           | X$JSKMIMRT                  |     1 |    39 |            |          |  Q5,00 | PCWP |            |
|* 55 |      PX COORDINATOR                  |                             |       |       |            |          |        |      |            |
|  56 |       PX SEND QC (RANDOM)            | :TQ60000                    |     1 |    28 |     0   (0)|          |  Q6,00 | P->S | QC (RAND)  |
|* 57 |        VIEW                          | GV$SESSION                  |       |       |            |          |  Q6,00 | PCWP |            |
|  58 |         MERGE JOIN CARTESIAN         |                             |     1 |    28 |     0   (0)|          |  Q6,00 | PCWP |            |
|  59 |          NESTED LOOPS                |                             |     1 |    12 |     0   (0)|          |  Q6,00 | PCWP |            |
|* 60 |           FIXED TABLE FIXED INDEX    | X$KSLWT (ind:1)             |     1 |     8 |     0   (0)|          |  Q6,00 | PCWP |            |
|* 61 |           FIXED TABLE FIXED INDEX    | X$KSLED (ind:2)             |     1 |     4 |     0   (0)|          |  Q6,00 | PCWP |            |
|  62 |          BUFFER SORT                 |                             |     1 |    16 |     0   (0)|          |  Q6,00 | PCWP |            |
|* 63 |           FIXED TABLE FIXED INDEX    | X$KSUSE (ind:1)             |     1 |    16 |     0   (0)|          |  Q6,00 | PCWP |            |
|  64 |     PX COORDINATOR                   |                             |       |       |            |          |        |      |            |
|  65 |      PX SEND QC (RANDOM)             | :TQ70000                    |   173 |  2595 |     1 (100)| 00:00:01 |  Q7,00 | P->S | QC (RAND)  |
|  66 |       VIEW                           | GV$PROCESS                  |       |       |            |          |  Q7,00 | PCWP |            |
|* 67 |        FIXED TABLE FULL              | X$KSUPR                     |   173 |  2595 |     1 (100)| 00:00:01 |  Q7,00 | PCWP |            |
|* 68 |    HASH JOIN OUTER                   |                             |     1 |   202 |     1 (100)| 00:00:01 |        |      |            |
|  69 |     MERGE JOIN CARTESIAN             |                             |     1 |   183 |     0   (0)|          |        |      |            |
|* 70 |      HASH JOIN OUTER                 |                             |     1 |   179 |     0   (0)|          |        |      |            |
|* 71 |       HASH JOIN                      |                             |     1 |   140 |     0   (0)|          |        |      |            |
|* 72 |        HASH JOIN                     |                             |     1 |    82 |     0   (0)|          |        |      |            |
|* 73 |         PX COORDINATOR               |                             |       |       |            |          |        |      |            |
|  74 |          PX SEND QC (RANDOM)         | :TQ80000                    |     1 |    69 |            |          |  Q8,00 | P->S | QC (RAND)  |
|* 75 |           VIEW                       | GV$SCHEDULER_INMEM_RTINFO   |       |       |            |          |  Q8,00 | PCWP |            |
|* 76 |            FIXED TABLE FULL          | X$JSKMIMRT                  |     1 |    69 |            |          |  Q8,00 | PCWP |            |
|  77 |         PX COORDINATOR               |                             |       |       |            |          |        |      |            |
|  78 |          PX SEND QC (RANDOM)         | :TQ90000                    |     1 |    13 |            |          |  Q9,00 | P->S | QC (RAND)  |
|  79 |           VIEW                       | GV$SCHEDULER_INMEM_MDINFO   |       |       |            |          |  Q9,00 | PCWP |            |
|  80 |            FIXED TABLE FULL          | X$JSKMIMMD                  |     1 |    13 |            |          |  Q9,00 | PCWP |            |
|* 81 |        PX COORDINATOR                |                             |       |       |            |          |        |      |            |
|  82 |         PX SEND QC (RANDOM)          | :TQ100000                   |     1 |    26 |     0   (0)|          |  10,00 | P->S | QC (RAND)  |
|* 83 |          VIEW                        | GV$SCHEDULER_RUNNING_JOBS   |       |       |            |          |  10,00 | PCWP |            |
|* 84 |           FIXED TABLE FULL           | X$JSKSLV                    |     1 |    26 |     0   (0)|          |  10,00 | PCWP |            |
|* 85 |       PX COORDINATOR                 |                             |       |       |            |          |        |      |            |
|  86 |        PX SEND QC (RANDOM)           | :TQ110000                   |     1 |    28 |     0   (0)|          |  11,00 | P->S | QC (RAND)  |
|* 87 |         VIEW                         | GV$SESSION                  |       |       |            |          |  11,00 | PCWP |            |
|  88 |          MERGE JOIN CARTESIAN        |                             |     1 |    28 |     0   (0)|          |  11,00 | PCWP |            |
|  89 |           NESTED LOOPS               |                             |     1 |    12 |     0   (0)|          |  11,00 | PCWP |            |
|* 90 |            FIXED TABLE FIXED INDEX   | X$KSLWT (ind:1)             |     1 |     8 |     0   (0)|          |  11,00 | PCWP |            |
|* 91 |            FIXED TABLE FIXED INDEX   | X$KSLED (ind:2)             |     1 |     4 |     0   (0)|          |  11,00 | PCWP |            |
|  92 |           BUFFER SORT                |                             |     1 |    16 |     0   (0)|          |  11,00 | PCWP |            |
|* 93 |            FIXED TABLE FIXED INDEX   | X$KSUSE (ind:1)             |     1 |    16 |     0   (0)|          |  11,00 | PCWP |            |
|  94 |      BUFFER SORT                     |                             |     1 |     4 |     0   (0)|          |        |      |            |
|  95 |       TABLE ACCESS CLUSTER           | USER$                       |     1 |     4 |     0   (0)|          |        |      |            |
|* 96 |        INDEX UNIQUE SCAN             | I_USER#                     |     1 |       |     0   (0)|          |        |      |            |
|  97 |     PX COORDINATOR                   |                             |       |       |            |          |        |      |            |
|  98 |      PX SEND QC (RANDOM)             | :TQ120000                   |   173 |  2595 |     1 (100)| 00:00:01 |  12,00 | P->S | QC (RAND)  |
|  99 |       VIEW                           | GV$PROCESS                  |       |       |            |          |  12,00 | PCWP |            |
|*100 |        FIXED TABLE FULL              | X$KSUPR                     |   173 |  2595 |     1 (100)| 00:00:01 |  12,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------------------

Yep, to top it all off, that’s a parallel query execution too.

The ugly part occurred when this query got deployed and the queue-based processing that contained the modified code went from processing 500+ messages per second down to just 2. Ouch!

Thankfully the problem was trivial to diagnose and easily fixed so no damage was done.
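
The lesson is cheap to apply: before coding against a dictionary view (or any unfamiliar view), take a quick look at what it is actually built on. For example, something as simple as the following shows the full view definition (TEXT is a LONG column, hence the SET LONG):

set long 20000
set pagesize 0

select text
from   all_views
where  view_name = 'USER_SCHEDULER_RUNNING_JOBS';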

CHECK constraint evaluation

A developer approached me with the question “Are check constraints only evaluated when the columns that they apply to are modified and not some other column on the row?”. An unusual question perhaps but as it turned out they were creating a number of check constraints and wanted to assess the overhead this might introduce.

My response was “yes, a constraint is only evaluated when the column(s) associated with the constraint are modified”… but then I had to think about how to prove this. After a minute or two I came up with the following.

Let’s take a table with two columns and insert a single row:

SQL> CREATE TABLE chk_test
  2     (col_1     NUMBER(2) NOT NULL
  3     ,col_2     NUMBER(2) NOT NULL)
  4  /

Table created.

SQL> INSERT INTO chk_test
  2  VALUES (1, -1)
  3  /

1 row created.

SQL> COMMIT
  2  /

Commit complete.

Onto this table we’ll add a CHECK constraint such that COL_2 must be greater than 0. However, because our table already has a value in COL_2 that violates this constraint, we’ll create the constraint in an ENABLE NOVALIDATE state, meaning that existing values do not need to abide by it but any new data changes must:

SQL> ALTER TABLE chk_test
  2     ADD CONSTRAINT chk_test_ch1
  3     CHECK (col_2 > 0)
  4     ENABLE NOVALIDATE
  5  /

Table altered.

We can verify the constraint is working by trying to update COL_2:

SQL> UPDATE chk_test
  2  SET col_2 = 0
  3  /
UPDATE chk_test
*
ERROR at line 1:
ORA-02290: check constraint (DEVELOPER.CHK_TEST_CH1) violated

However, we are allowed to modify COL_1 even though the existing value of COL_2 violates the constraint:

SQL> UPDATE chk_test
  2  SET col_1 = 0
  3  /

1 row updated.

So, a change to the row that did not modify COL_2 did not trigger the evaluation of the CHECK constraint… proof that CHECK constraints are only evaluated when the column(s) they relate to are manipulated.

Edition-based Redefinition to avoid ORA-04068

I have previously written about how to avoid package state to facilitate on-line deployments, so that PL/SQL code can be modified in a live environment without impacting existing sessions. The main drawback to the methods outlined is that they assume the call interface, or contract, for routines won’t change. When dealing with complex deployments, Oracle 11g Release 2 and later provides a versioning capability in the form of edition-based redefinition.

Let’s take a quick look at how edition-based redefinition could be used to avoid ORA-04068. We’ll use the initial package code from the previous article that makes no effort to avoid package state. However, first we need to do a little bit of preparation. From an appropriately authorised account we will set up a new EDITION_USER account and create a couple of editions within the database:

CREATE USER edition_user
   IDENTIFIED BY edition_user;

ALTER USER edition_user ENABLE EDITIONS;
GRANT CONNECT TO edition_user;
GRANT CREATE PROCEDURE TO edition_user;

CREATE EDITION edition_1 AS CHILD OF ora$base;
CREATE EDITION edition_2 AS CHILD OF edition_1;

GRANT USE ON EDITION edition_1 TO edition_user;
GRANT USE ON EDITION edition_2 TO edition_user;

We’ve now got a new account and two new editions that the account has access to. The above assumes no previous editions have been defined, i.e. the default ORA$BASE edition was still in effect.
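
We can verify the setup from the EDITION_USER session with a couple of simple queries; ALL_EDITIONS shows the editions visible to us and SYS_CONTEXT reports the edition the session is currently using:

select edition_name
,      parent_edition_name
from   all_editions;

select sys_context ('USERENV', 'CURRENT_EDITION_NAME') as current_edition
from   dual;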

From a new session logged onto the new EDITION_USER account we’ll create our package under EDITION_1:

ALTER SESSION SET EDITION = edition_1;

CREATE OR REPLACE EDITIONABLE PACKAGE pkg_state
AS
   PROCEDURE set_variable (p_number IN NUMBER);
   FUNCTION get_variable
   RETURN NUMBER;
END pkg_state;
/

CREATE OR REPLACE EDITIONABLE PACKAGE BODY pkg_state
AS
   g_pkg_variable   NUMBER(10);

   PROCEDURE set_variable (p_number IN NUMBER)
   AS
   BEGIN   
      g_pkg_variable := p_number;
   END set_variable;
   
   FUNCTION get_variable
   RETURN NUMBER
   AS
   BEGIN
      RETURN g_pkg_variable;
   END get_variable;

END pkg_state;
/

Running the packaged routines within the session we get:

SQL> EXEC pkg_state.set_variable (5)

PL/SQL procedure successfully completed.

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 5

PL/SQL procedure successfully completed.

Everything working as expected. Our session has now got package state for PKG_STATE. In a new session we’re going to modify the package, doubling the value we output in the GET_VARIABLE routine, but we’ll do it under EDITION_2:

ALTER SESSION SET EDITION = edition_2;

CREATE OR REPLACE EDITIONABLE PACKAGE pkg_state
AS
   PROCEDURE set_variable (p_number IN NUMBER);
   FUNCTION get_variable
   RETURN NUMBER;
END pkg_state;
/

CREATE OR REPLACE EDITIONABLE PACKAGE BODY pkg_state
AS
   g_pkg_variable   NUMBER(10);

   PROCEDURE set_variable (p_number IN NUMBER)
   AS
   BEGIN   
      g_pkg_variable := p_number;
   END set_variable;
   
   FUNCTION get_variable
   RETURN NUMBER
   AS
   BEGIN
      RETURN g_pkg_variable * 2;
   END get_variable;

END pkg_state;
/

Executing the code from within the EDITION_2 session we see that all is working as it should:

SQL> EXEC pkg_state.set_variable (5)

PL/SQL procedure successfully completed.

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 10

PL/SQL procedure successfully completed.

Let’s go back to our initial session and see what happens if we execute the GET_VARIABLE packaged routine again:

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 5

PL/SQL procedure successfully completed.

We can see that it’s still executing the initial version of the code and that it’s still got its package state. The compilation of the code under EDITION_2 did not impact our first session, which is running under EDITION_1. If we change our first session to EDITION_2 and run the routines, everything works fine too:

SQL> ALTER SESSION SET EDITION = edition_2;

Session altered.

SQL> EXEC pkg_state.set_variable (2)

PL/SQL procedure successfully completed.

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 4

PL/SQL procedure successfully completed.

We can see that now we’re executing the modified code under EDITION_2. Switching back to EDITION_1 we find that our package state from the EDITION_1 execution has been preserved:

SQL> ALTER SESSION SET EDITION = edition_1;

Session altered.

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 5

PL/SQL procedure successfully completed.

So, edition-based redefinition allows us to have multiple versions of our package code in the database at the same time. Each version has its own package state so compiling a new version of the code into a different edition does not invalidate the package state associated with sessions using a different edition.

A word of caution, however: edition-based redefinition is a very powerful feature and one that needs careful planning before use. I would strongly recommend reading the on-line documentation before implementing anything. Editions are database-wide and, as of Oracle 12.1.0.2, limited to a simple single parent-child sequence. For manageability reasons, care should be taken not to create too many editions with various changes in each edition. If at all possible, I would recommend there be just two active editions: a “current” one and a “future” one. Once the functionality of the future edition has been confirmed and all applications have been moved over to using it, that becomes the current edition and the previous current edition is removed. Trying to track a dozen or more active editions, all with their own specific changes, is likely to lead to trouble.
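
As a sketch of that current/future cycle, once everything has been verified under EDITION_2 the cut-over might look like the following, run from a suitably privileged account (the DROP is left commented out as the eventual retirement step):

-- make the new edition the database default so that new sessions pick it up
alter database default edition = edition_2;

-- individual sessions can still switch explicitly at any time
alter session set edition = edition_2;

-- old editions are eventually retired from the end of the chain once nothing
-- uses them; CASCADE also drops the actual objects they contain
-- drop edition ora$base cascade;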