AUSOUG OTN Day 2015

I attended the Perth all-day AUSOUG OTN Day 2015 event yesterday, which was a mini-conference involving a number of excellent speakers on all things Oracle. I have noted previously how valuable I find events such as these to be and this time was no different. As per usual I came away with a better understanding of various subjects and fired up with renewed enthusiasm!

As always, a very big thank-you to the speakers (in no particular order):

Of course, an equally big thank-you must go to AUSOUG for organising the event, and to Oracle Corporation itself.

Changing primary key index

While investigating a recent performance issue I arrived at the conclusion that an index comprising the primary key column plus one other column would be required. I have previously noted that a unique constraint may be enforced using a non-unique index.

Having another index on the table that mirrored the primary key plus another column seemed wasteful, so I set about removing the existing primary key index and replacing it with the new index. Thankfully Oracle provides the ability to alter an existing primary key constraint so that it is enforced through a different index, without the need to drop and recreate the constraint, as I’ll illustrate using the following set-up:

SQL>CREATE TABLE tab
  2     (id     NUMBER (6) NOT NULL
  3     ,cat    NUMBER (6) NOT NULL
  4     ,val    NUMBER (6) NOT NULL)
  5  /

Table created.

SQL>ALTER TABLE tab
  2     ADD CONSTRAINT tab_pk
  3     PRIMARY KEY (id)
  4  /

Table altered.

SQL>CREATE INDEX tab_pk2
  2  ON tab (id, cat)
  3  /

Index created.

SQL>SELECT index_name
  2  ,      uniqueness
  3  FROM   user_indexes
  4  WHERE  table_name = 'TAB'
  5  /

INDEX_NAME      UNIQUENESS
--------------- ---------------
TAB_PK          UNIQUE
TAB_PK2         NONUNIQUE

In the above we have a simple table with a primary key defined on the column ID, a unique index TAB_PK on the ID column and a non-unique index TAB_PK2 on the ID and CAT columns. If we were to simply try removing the unique index that’s currently enforcing the primary key constraint then we’ll be met with:

SQL>DROP INDEX tab_pk
  2  /
DROP INDEX tab_pk
           *
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key

What we need to do first is to tell Oracle to enforce the primary key constraint using the non-unique index:

SQL>ALTER TABLE tab
  2     MODIFY CONSTRAINT tab_pk
  3     USING INDEX tab_pk2
  4  /

Table altered.

This is now where things get a bit more interesting. If we look at the indexes on our table we find:

SQL>SELECT index_name
  2  ,      uniqueness
  3  FROM   user_indexes
  4  WHERE  table_name = 'TAB'
  5  /

INDEX_NAME      UNIQUENESS
--------------- ---------------
TAB_PK2         NONUNIQUE

Hey, what happened to our unique index? It seems that Oracle has dropped it. So why did that happen? As it turns out, if the index was created automatically as part of the creation of the primary key constraint then it will be automatically removed when a new index is used to enforce the constraint. If the index was created before the constraint then this doesn’t happen, as illustrated below:

SQL>SELECT index_name
  2  ,      uniqueness
  3  FROM   user_indexes
  4  WHERE  table_name = 'TAB'
  5  /

no rows selected

SQL>CREATE UNIQUE INDEX tab_pk
  2     ON tab (id)
  3  /

Index created.

SQL>ALTER TABLE tab
  2     ADD CONSTRAINT tab_pk
  3     PRIMARY KEY (id)
  4     USING INDEX tab_pk
  5  /

Table altered.

SQL>CREATE INDEX tab_pk2
  2  ON tab (id, cat)
  3  /

Index created.

SQL>ALTER TABLE tab
  2     MODIFY CONSTRAINT tab_pk
  3     USING INDEX tab_pk2
  4  /

Table altered.

SQL>SELECT index_name
  2  ,      uniqueness
  3  FROM   user_indexes
  4  WHERE  table_name = 'TAB'
  5  /

INDEX_NAME      UNIQUENESS
--------------- ---------------
TAB_PK          UNIQUE
TAB_PK2         NONUNIQUE

In the above we created the index for enforcing the primary key before declaring the constraint. When we switched to using a new index for the constraint the original index remained.

Some simple renaming is all that remains:

SQL>ALTER INDEX tab_pk
  2     RENAME TO tab_pk_old
  3  /

Index altered.

SQL>ALTER INDEX tab_pk2
  2     RENAME TO tab_pk
  3  /

Index altered.

I must say that I prefer the old index remaining behind after the change to the primary key constraint. Leaving the original index in place allows me the opportunity to revert the change in the unlikely event that the new index causes query execution plan problems. The original index can be marked as invisible:

ALTER INDEX tab_pk_old INVISIBLE

and should problems arise it can quickly be reinstated without having to rebuild it from scratch, which is important for any reasonably large table:

ALTER INDEX tab_pk_old VISIBLE

If no problems arise after the change of primary key index then the old (now invisible) index can be dropped at some later point in time:

DROP INDEX tab_pk_old

DML within DML

As noted in my previous post, a recent code review revealed a couple of things that I wasn’t aware of… which is one of the pleasures of working in IT: learning new things. Aggregation within the RETURNING INTO clause of a DML statement was a good thing to find out about. I’m having a hard time thinking of a scenario that would legitimately justify the other feature: performing DML within DML…

To demonstrate, let’s create two simple tables, TAB_A and TAB_B, each with two columns: ID and VAL. Both tables start off with a small set of data:

CREATE TABLE tab_a
   (id    NUMBER (6) NOT NULL
   ,val   NUMBER (6) NOT NULL
   ,CONSTRAINT tab_a_pk PRIMARY KEY (id))
/

CREATE TABLE tab_b
   (id    NUMBER (6) NOT NULL
   ,val   NUMBER (6) NOT NULL
   ,CONSTRAINT tab_b_pk PRIMARY KEY (id))
/

INSERT INTO tab_a
SELECT ROWNUM
,      ROWNUM
FROM   dual
CONNECT BY ROWNUM <= 10
/

INSERT INTO tab_b
SELECT ROWNUM
,      ROWNUM
FROM   dual
CONNECT BY ROWNUM <= 10
/

COMMIT
/

Next we’ll create a package with two routines. The first routine is a function that sets the VAL value in TAB_A to a random value and returns that value. The second routine is a procedure that sets the VAL value in TAB_B using the return value of the first function:

CREATE OR REPLACE PACKAGE proc_pkg
AS
   FUNCTION val_a (p_id IN tab_a.id%TYPE)
      RETURN tab_a.val%TYPE;
   PROCEDURE update_b (p_id IN tab_b.id%TYPE);
END proc_pkg;
/

CREATE OR REPLACE PACKAGE BODY proc_pkg
AS
   FUNCTION val_a (p_id IN tab_a.id%TYPE)
      RETURN tab_a.val%TYPE
   AS
      l_return_value tab_a.val%TYPE;
   BEGIN
      UPDATE tab_a
      SET    val = TRUNC(dbms_random.value (100,200))
      WHERE  id = p_id
      RETURNING val INTO l_return_value;
      RETURN l_return_value;
   END val_a;
   
   PROCEDURE update_b  (p_id IN tab_b.id%TYPE)
   AS
   BEGIN
      UPDATE tab_b
      SET    val = val_a (id)
      WHERE  id = p_id;
   END update_b;
END proc_pkg;
/

If you look at the UPDATE_B routine, we’re updating TAB_B but, since we invoke function VAL_A, we’re also updating TAB_A. My initial thought when seeing code like this was “hey, that won’t work”… and it seems I was wrong:

SQL> EXEC proc_pkg.update_b (5)

PL/SQL procedure successfully completed.

SQL> SELECT *
  2  FROM   tab_a
  3  WHERE  id = 5
  4  /

        ID        VAL
---------- ----------
         5        163

SQL> SELECT *
  2  FROM   tab_b
  3  WHERE  id = 5
  4  /

        ID        VAL
---------- ----------
         5        163

Okay, so it works; performing DML on one table can invoke a function that performs DML on another table. We can insert, update and delete data on a table from a PL/SQL function invoked via DML on a different table. If we were to try running a simple SELECT query using a function that does DML then we get ORA-14551:

SQL> SELECT id
  2  ,      proc_pkg.val_a (id)
  3  FROM   tab_a
  4  /
,      proc_pkg.val_a (id)
       *
ERROR at line 2:
ORA-14551: cannot perform a DML operation inside a query
ORA-06512: at "DEVELOPER.PROC_PKG", line 8

Yes, you can get around this particular error by making the function an autonomous transaction… but don't do that; it's not what autonomous transactions are designed for.
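
Just to be clear about what that workaround would look like, here's a sketch of VAL_A rewritten with the pragma (again, not something I'd recommend):

FUNCTION val_a (p_id IN tab_a.id%TYPE)
   RETURN tab_a.val%TYPE
AS
   -- the pragma runs the update as its own transaction, committed
   -- independently of the statement that invoked the function
   PRAGMA AUTONOMOUS_TRANSACTION;
   l_return_value tab_a.val%TYPE;
BEGIN
   UPDATE tab_a
   SET    val = TRUNC (dbms_random.value (100,200))
   WHERE  id = p_id
   RETURNING val INTO l_return_value;
   COMMIT; -- an autonomous transaction must commit or roll back before returning
   RETURN l_return_value;
END val_a;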

My problem is that I have difficulty thinking of a scenario where doing DML from within DML would be a better idea than splitting it into separate operations. In some ways it’s similar to a row level trigger, and I dislike that approach to data processing as it leads to spaghetti code. Sure, I can understand some sort of demo where I want to log details about the function as it executes, but in normal business-type processing…? Furthermore, keep in mind that we're essentially doing row by row processing here; our TAB_A DML function will be invoked for each row modified in TAB_B, so we're not doing bulk processing on TAB_A. This is likely to be a performance inhibitor for the TAB_A changes, and it could instead be handled with separate set-based statements, as sketched below.
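
As a rough sketch only, a hypothetical UPDATE_B_SET routine operating on a range of IDs could do the same work in two set-based statements rather than one TAB_A update per TAB_B row:

PROCEDURE update_b_set (p_min_id IN tab_b.id%TYPE
                       ,p_max_id IN tab_b.id%TYPE)
AS
BEGIN
   -- assign the new random values to the relevant TAB_A rows in one statement...
   UPDATE tab_a
   SET    val = TRUNC (dbms_random.value (100,200))
   WHERE  id BETWEEN p_min_id AND p_max_id;

   -- ...then copy those values across to TAB_B in a second statement
   UPDATE tab_b b
   SET    b.val = (SELECT a.val
                   FROM   tab_a a
                   WHERE  a.id = b.id)
   WHERE  b.id BETWEEN p_min_id AND p_max_id;
END update_b_set;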

I guess this is something that I would classify as a nice feature but use only with caution when no other option exists… although I'm happy to take advice on this.

RETURNING INTO revisited

While reviewing some code I came across a feature of the RETURNING INTO clause that I’m not sure I knew about (or had long forgotten about). In any case it’s worth noting as a follow-up to my earlier RETURNING INTO posts…

As previously noted, RETURNING INTO is used to return scalar values back to PL/SQL from an SQL statement, while the RETURNING INTO BULK COLLECT option is used for returning a set of values. The feature that came up in the recent code review is that it’s possible to use aggregate functions within the RETURNING INTO clause to collapse a multi-row operation into one or more scalar values. From a quick search of other posts on the internet, this feature crept in somewhere around Oracle 10g.

A simple example should illustrate things better. First up, we’ll create a simple table with two columns and ten rows:

CREATE TABLE t1
   (id      NUMBER (6) NOT NULL
   ,val     NUMBER (4) NOT NULL)
/

INSERT INTO t1
SELECT ROWNUM
,      ROWNUM
FROM   dual
CONNECT BY ROWNUM <= 10
/

COMMIT
/

Next we’ll write a simple PL/SQL block that increments the VAL column and, using the RETURNING INTO clause, returns the SUM and AVG of the values updated:

DECLARE
   l_sum  NUMBER;
   l_avg  NUMBER;
BEGIN
   UPDATE t1
   SET    val = val + 1
   RETURNING SUM(val), AVG(val) INTO l_sum, l_avg;
 
   dbms_output.enable; 
   dbms_output.put_line ('Sum: ' || TO_CHAR(l_sum));
   dbms_output.put_line ('Avg: ' || TO_CHAR(l_avg));
END;
/   

which gives us a result of:

Sum: 65
Avg: 6.5

The sum value of 65 is (2+3+4+5+6+7+8+9+10+11), which are the values the VAL column contains after the update. Obviously the average of 6.5 is the sum of 65 divided by the 10 rows that were modified.

The same applies when deleting rows:

DECLARE
   l_sum  NUMBER;
   l_avg  NUMBER;
BEGIN
   DELETE
   FROM   t1
   WHERE  id BETWEEN 1 AND 4
   RETURNING SUM(val), AVG(val) INTO l_sum, l_avg;
 
   dbms_output.enable; 
   dbms_output.put_line ('Sum: ' || TO_CHAR(l_sum));
   dbms_output.put_line ('Avg: ' || TO_CHAR(l_avg));
END;
/   

which results in (assuming we’re operating on the original table, prior to the UPDATE above):

Sum: 10
Avg: 2.5
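
For comparison, the RETURNING … BULK COLLECT INTO form mentioned earlier returns the individual values rather than an aggregate. A minimal sketch against the same T1 table:

DECLARE
   TYPE val_tab_type IS TABLE OF t1.val%TYPE;
   l_vals  val_tab_type;
BEGIN
   UPDATE t1
   SET    val = val + 1
   RETURNING val BULK COLLECT INTO l_vals;

   dbms_output.enable;
   dbms_output.put_line ('Rows returned: ' || TO_CHAR (l_vals.COUNT));
END;
/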

Note, it’s not possible to use the DISTINCT option of the aggregate functions:

DECLARE
   l_count  NUMBER;
BEGIN
   UPDATE t1
   SET    val = val + 1
   RETURNING COUNT(DISTINCT id) INTO l_count;
 
   dbms_output.enable; 
   dbms_output.put_line ('Count: ' || TO_CHAR(l_count));
END;
/   

which results in the following error:

ERROR at line 1:
ORA-00934: group function is not allowed here
ORA-06512: at line 4

So, +1 for learning new things from code reviews. Unfortunately the developer totally spoilt it all by using something else that I didn’t know about and would definitely not recommend, but that’s a topic for a different post…

Deferrable Constraints

As someone who has done a lot of data manipulation work I strongly support anything that goes to improve the quality of data. Within the database that means I encourage the use of declarative constraints. In Oracle, these are:

  • Primary key constraints
  • Unique key constraints
  • Foreign key constraints
  • Check constraints
  • Not Null constraints

As anyone who has dealt with databases would know, data inserted or updated must abide by whatever constraints have been declared on a table. Normally this check is made at the point the data modification is made. However, deferrable constraints offer us the ability to delay the checking until the commit point.

So why might we need deferrable constraints? Depending on the data processes, it might be that data obtained early on in the process is incomplete and gets supplemented as the process progresses. Deferrable constraints allow us to gradually build up the data, knowing that at the end of the transaction the data will be complete. It’s certainly a better situation than dropping the constraints completely!

Let’s look at a very simple example of a deferrable NOT NULL constraint. We’ll create a table with two columns, COL_1 and COL_2, both with NOT NULL constraints but the COL_2 constraint will be deferrable:

CREATE TABLE t1
   (col_1   NUMBER CONSTRAINT col_1_not_null NOT NULL
   ,col_2   NUMBER CONSTRAINT col_2_not_null NOT NULL DEFERRABLE INITIALLY DEFERRED)
/

I’ve taken the liberty of naming the NOT NULL constraints in the above DDL, which is something that I would not normally do for NOT NULL constraints, as we’ll see the names in the exceptions thrown.

If we insert a row into our table with NULL for COL_1 we get:

SQL>INSERT INTO t1
  2  VALUES (NULL, 1)
  3  /
VALUES (NULL, 1)
        *
ERROR at line 2:
ORA-01400: cannot insert NULL into ("DEVELOPER"."T1"."COL_1")

However if the COL_2 value is set to NULL:

SQL>INSERT INTO t1
  2  VALUES (1, NULL)
  3  /

1 row created.

we find that the row gets created. If we were to commit at this point then the NOT NULL constraint check on COL_2 kicks in and we get:

SQL>COMMIT;
COMMIT
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (DEVELOPER.COL_2_NOT_NULL) violated

However, if we initially insert NULL into COL_2 and then provide a value prior to the commit we find all goes through without error:

SQL>INSERT INTO t1
  2  VALUES (1, NULL)
  3  /

1 row created.

SQL>
SQL>UPDATE t1
  2  SET    COL_2 = 1
  3  WHERE  COL_1 = 1
  4  /

1 row updated.

SQL>COMMIT
  2  /

Commit complete.

You’ll note that the deferrable constraint was declared with the keywords DEFERRABLE INITIALLY DEFERRED. The DEFERRABLE keyword notes that the constraint is capable of being deferred, while the INITIALLY DEFERRED part specifies that the constraint defaults to deferred checking. The alternative to this is DEFERRABLE INITIALLY IMMEDIATE, which means the constraint is checked immediately by default but the check can be deferred when required.
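
For an INITIALLY IMMEDIATE constraint, checking can be deferred for the current transaction using the SET CONSTRAINT statement. A sketch, assuming COL_2_NOT_NULL had been declared DEFERRABLE INITIALLY IMMEDIATE:

SET CONSTRAINT col_2_not_null DEFERRED
/

INSERT INTO t1
VALUES (2, NULL)
/

UPDATE t1
SET    col_2 = 2
WHERE  col_1 = 2
/

COMMIT
/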

To make all deferrable constraints deferred, issue the following:

ALTER SESSION SET CONSTRAINTS = DEFERRED

and to reverse this, i.e. turn off deferred checking:

ALTER SESSION SET CONSTRAINTS = IMMEDIATE

Using:

ALTER SESSION SET CONSTRAINTS = DEFAULT

puts deferrable constraints into DEFERRED or IMMEDIATE mode according to how they were declared. A session can change its constraint setting in the middle of a transaction without impacting the DML that has occurred prior to the change.

Note, the above commands only impact constraints that are declared to be DEFERRABLE. By default constraints are NOT DEFERRABLE, which means they can never be deferred. The only way to enable deferred checking on a constraint that has been declared as not deferrable is to drop it and recreate it with one of the deferrable options (at least as of Oracle 12.1.0.2; perhaps future versions will support this).
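
As a sketch, making the non-deferrable COL_1 constraint from the earlier table deferrable would mean dropping it and adding it back with a deferrable state, something along these lines:

ALTER TABLE t1
   DROP CONSTRAINT col_1_not_null
/

ALTER TABLE t1
   MODIFY (col_1 CONSTRAINT col_1_not_null NOT NULL DEFERRABLE INITIALLY IMMEDIATE)
/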

If you have read my earlier post on unique constraint indexes then you’ll know that by default Oracle will create a unique index to enforce the constraint. If the unique constraint is declared as DEFERRABLE then the index Oracle creates will not be unique. On consideration this makes sense: during the course of a transaction Oracle might need to hold duplicate entries in the unique column(s), and a unique index would prevent this scenario.
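
A quick check bears this out. Creating a table with a deferrable unique constraint and querying USER_INDEXES should show a system-generated non-unique index (a sketch):

CREATE TABLE t2
   (col_1   NUMBER CONSTRAINT t2_u1 UNIQUE DEFERRABLE INITIALLY DEFERRED)
/

SELECT index_name
,      uniqueness
FROM   user_indexes
WHERE  table_name = 'T2'
/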

READ Table Privilege

I arrived at work one Monday morning to find an email from our automated job that checks for long transactions in the database:

Transaction for user xxxxxx (sid 678, program SQL Developer) has been open for 44460 seconds

Hmmm… so that would be someone holding a lock in the production system for 12 hours. Thankfully the table where the lock was being held wasn’t frequently subject to DML or this might have caused trouble. I have previously noted how simply having SELECT access allows a user to lock data, and that was exactly the scenario that had occurred: a support person had logged into the database, locked a row in a table that they only had SELECT access on without realising what they had done… and then left their session active for a day.

After killing the offending session and sending out a polite email to all support staff requesting that they disconnect from a production database once they have finished whatever needed doing, I set about making some changes…

Oracle 12.1.0.2 introduces a new privilege to address this specific problem: READ. According to the documentation, the READ privilege permits an account to:

Query the table with the SELECT statement. Does not allow SELECT … FOR UPDATE.

Note: This privilege is available starting with Oracle Database 12c Release 1 (12.1.0.2).

Let’s not take Oracle’s word for it; let’s test it using the scenario from the previous post:

CREATE USER data_owner
   IDENTIFIED BY data_owner
   DEFAULT TABLESPACE users
   QUOTA UNLIMITED ON users
/

CREATE USER data_reader
   IDENTIFIED BY data_reader
/

GRANT CREATE SESSION TO data_owner
/
GRANT CREATE TABLE TO data_owner
/
GRANT CREATE SESSION TO data_reader
/

CREATE TABLE data_owner.lock_test
   (id NUMBER(10))
/

INSERT INTO data_owner.lock_test
   VALUES (1)
/

COMMIT
/

Instead of granting SELECT on the DATA_OWNER.LOCK_TEST table to DATA_READER we grant READ:

GRANT READ ON data_owner.lock_test TO data_reader
/

Now to test if we can lock the data in the DATA_OWNER account, we run the following from the DATA_READER account:

DECLARE
   CURSOR cur
   IS
   SELECT *
   FROM   data_owner.lock_test
   FOR UPDATE;
BEGIN
   OPEN cur;
END;
/

The above results in:

SQL>DECLARE
  2     CURSOR cur
  3     IS
  4     SELECT *
  5     FROM   data_owner.lock_test
  6     FOR UPDATE;
  7  BEGIN
  8     OPEN cur;
  9  END;
 10  /
   FROM   data_owner.lock_test
                     *
ERROR at line 5:
ORA-06550: line 5, column 22:
PL/SQL: ORA-01031: insufficient privileges
ORA-06550: line 4, column 4:
PL/SQL: SQL Statement ignored

Exactly what we wanted! Nice one Oracle. READ privilege can be applied to tables, views and materialized views.
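
For contrast, swapping the grant back to SELECT would let the FOR UPDATE through again, which is exactly the problem described in the earlier post. A sketch using the accounts above:

REVOKE READ ON data_owner.lock_test FROM data_reader
/

GRANT SELECT ON data_owner.lock_test TO data_reader
/

-- the FOR UPDATE cursor in the block above now opens successfully and the row
-- stays locked until the DATA_READER session commits, rolls back or disconnects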

For me, some simple changes to the roles assigned to the support users were all that was needed to resolve the unwanted locking problem.
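
Something roughly along these lines, with a hypothetical SUPPORT_READ role standing in for the real ones:

CREATE ROLE support_read
/

GRANT READ ON data_owner.lock_test TO support_read
/

GRANT support_read TO data_reader
/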

Unique Constraint Indexes

It’s generally known by developers that when creating a unique constraint Oracle will create a unique index to enforce the constraint, as illustrated below:

SQL> CREATE TABLE tab
  2     (col1     NUMBER (10) NOT NULL)
  3  /

Table created.

SQL> ALTER TABLE tab
  2     ADD CONSTRAINT tab_u1
  3     UNIQUE (col1)
  4  /

Table altered.

SQL> SELECT index_name
  2  ,      uniqueness
  3  FROM   user_indexes
  4  WHERE  table_name = 'TAB'
  5  /


INDEX_NAME           UNIQUENESS
-------------------- --------------------
TAB_U1               UNIQUE

For a unique constraint we can see what index is enforcing the constraint by querying USER_CONSTRAINTS:

SQL>SELECT constraint_name
  2  ,      index_name
  3  FROM   user_constraints
  4  WHERE  constraint_name = 'TAB_U1'
  5  /

CONSTRAINT_NAME      INDEX_NAME
-------------------- --------------------
TAB_U1               TAB_U1

When the index is created as part of adding the constraint then the index will be given the same name as the constraint, as shown in the example above. (As an aside, all these examples have been done on a 12.1.0.2 database but the details will be the same in earlier versions.)

What is less generally known about uniqueness constraints (or primary key constraints for that matter) is that the constraint may be enforced by a non-unique index. This is done by creating the index first and then specifying the index to be used when creating the constraint, as shown below:

SQL>CREATE TABLE tab
  2     (col1     NUMBER (10) NOT NULL)
  3  /

Table created.

SQL>CREATE INDEX tab_i1
  2     ON tab (col1)
  3  /

Index created.

SQL>ALTER TABLE tab
  2     ADD CONSTRAINT tab_u1
  3     UNIQUE (col1)
  4     USING INDEX tab_i1
  5  /

Table altered.

SQL>SELECT index_name
  2  ,      uniqueness
  3  FROM   user_indexes
  4  WHERE  table_name = 'TAB'
  5  /

INDEX_NAME           UNIQUENESS
-------------------- -------------------
TAB_I1               NONUNIQUE

SQL>SELECT constraint_name
  2  ,      index_name
  3  FROM   user_constraints
  4  WHERE  constraint_name = 'TAB_U1'
  5  /

CONSTRAINT_NAME      INDEX_NAME
-------------------- -----------------
TAB_U1               TAB_I1

The index selected to enforce the constraint needs to lead with the same columns as the constraint, although the columns can be specified in any order:

SQL>CREATE TABLE tab
  2     (col1     NUMBER (10) NOT NULL
  3     ,col2     NUMBER (10) NOT NULL
  4     ,col3     NUMBER (10) NOT NULL)
  5  /

Table created.

SQL>
SQL>CREATE INDEX tab_i1
  2     ON tab (col2, col1)
  3  /

Index created.

SQL>
SQL>ALTER TABLE tab
  2     ADD CONSTRAINT tab_u1
  3     UNIQUE (col1, col2)
  4     USING INDEX tab_i1
  5  /

Table altered.

Failure to specify an index that leads with all the columns of the unique constraint results in the error:

ORA-14196: Specified index cannot be used to enforce the constraint.
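
For example, attempting to enforce a unique constraint on COL3 using the TAB_I1 index from above, which leads with COL2 and COL1, fails with that error (a sketch):

ALTER TABLE tab
   ADD CONSTRAINT tab_u2
   UNIQUE (col3)
   USING INDEX tab_i1
/
-- fails with ORA-14196 as TAB_I1 does not lead with COL3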

Knowing the above gives us some opportunity to optimise our applications. We are able to create indexes that support the unique constraint but are also tweaked to support other processes. A relatively common scenario involves the need to look up a unique value in a table in order to obtain another attribute from that table. This might be obtaining the description associated with a unique ID value or, in a data warehouse scenario, looking up a dimension surrogate key from the source system unique key. Rather than create two indexes, one to support the unique constraint and another to aid our lookup, we can create just the latter and use it to enforce the unique constraint too, as the following example illustrates:

SQL>CREATE TABLE dim_x
  2     (dim_pk       NUMBER (10) NOT NULL
  3     ,source_pk    NUMBER (10) NOT NULL)
  4  /

Table created.

SQL>CREATE INDEX dim_x_i1
  2     ON dim_x (source_pk, dim_pk)
  3  /

Index created.

SQL>ALTER TABLE dim_x
  2     ADD CONSTRAINT dim_x_u1
  3     UNIQUE (source_pk)
  4     USING INDEX dim_x_i1
  5  /

Table altered.

Rather than needing to perform an index lookup followed by a ROWID access into our table Oracle is able to perform the lookup from the index alone:

SQL>INSERT INTO dim_x
  2  SELECT ROWNUM
  3  ,      ROWNUM+ 99
  4  FROM   dual
  5  CONNECT BY ROWNUM <= 100
  6  /

100 rows created.

SQL>COMMIT
  2  /

Commit complete.

SQL>
SQL>EXEC dbms_stats.gather_table_stats ('','dim_x')

PL/SQL procedure successfully completed.

SQL>EXPLAIN PLAN
  2  FOR
  3  SELECT dim_pk
  4  FROM   dim_x
  5  WHERE  source_pk = :source_pk
  6  /

Explained.

SQL>
SQL>SELECT *
  2  FROM   TABLE(dbms_xplan.display())
  3  /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------
Plan hash value: 3680186242

------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Bytes | Cost (%CPU)|
------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |     1 |     8 |     2   (0)|
|*  1 |  INDEX RANGE SCAN| DIM_X_I1 |     1 |     8 |     2   (0)|
------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("SOURCE_PK"=TO_NUMBER(:SOURCE_PK))

13 rows selected.

As with many tips, check carefully first as there are trade-offs to consider. The optimiser treats unique and non-unique indexes differently, so check that the benefit you’re expecting from the index change is apparent and that it hasn’t introduced any side-effects.