Edition-based Redefinition to avoid ORA-04068

I have previously written about how to avoid package state to facilitate on-line deployments, so that PL/SQL code can be modified in a live environment without impacting existing sessions. The main drawback to the methods outlined is that they assume the call interface, or contract, for routines won’t change. When dealing with complex deployments, Oracle 11g Release 2 and later provides a versioning capability in the form of edition-based redefinition.

Let’s take a quick look at how edition-based redefinition can be used to avoid ORA-04068. We’ll use the initial package code from the previous article, which makes no effort to avoid package state. First, however, we need to do a little bit of preparation. From an appropriately authorised account we will set up a new EDITION_USER account and create a couple of editions within the database:

CREATE USER edition_user
   IDENTIFIED BY edition_user;

ALTER USER edition_user ENABLE EDITIONS;
GRANT CONNECT TO edition_user;
GRANT CREATE PROCEDURE TO edition_user;

CREATE EDITION edition_1 AS CHILD OF ora$base;
CREATE EDITION edition_2 AS CHILD OF edition_1;

GRANT USE ON EDITION edition_1 TO edition_user;
GRANT USE ON EDITION edition_2 TO edition_user;

We’ve now got a new account and two new editions that the account has access to. The above assumes no previous editions have been defined, i.e. the default ORA$BASE edition was still in effect.
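
If you want to double-check the hierarchy, the DBA_EDITIONS view (or ALL_EDITIONS from a less privileged account) lists each edition alongside its parent:

SELECT edition_name
,      parent_edition_name
FROM   dba_editions
ORDER  BY edition_name;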

From a new session logged onto the new EDITION_USER account we’ll create our package under EDITION_1:

ALTER SESSION SET EDITION = edition_1;

CREATE OR REPLACE EDITIONABLE PACKAGE pkg_state
AS
   PROCEDURE set_variable (p_number IN NUMBER);
   FUNCTION get_variable
   RETURN NUMBER;
END pkg_state;
/

CREATE OR REPLACE EDITIONABLE PACKAGE BODY pkg_state
AS
   g_pkg_variable   NUMBER(10);

   PROCEDURE set_variable (p_number IN NUMBER)
   AS
   BEGIN   
      g_pkg_variable := p_number;
   END set_variable;
   
   FUNCTION get_variable
   RETURN NUMBER
   AS
   BEGIN
      RETURN g_pkg_variable;
   END get_variable;

END pkg_state;
/

Running the packaged routines within the session (with SERVEROUTPUT enabled so we can see the DBMS_OUTPUT calls) we get:

SQL> EXEC pkg_state.set_variable (5)

PL/SQL procedure successfully completed.

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 5

PL/SQL procedure successfully completed.

Everything is working as expected, and our session now has package state for PKG_STATE. In a new session we’re going to modify the package, doubling the value returned by the GET_VARIABLE routine, but we’ll do it under EDITION_2:

ALTER SESSION SET EDITION = edition_2;

CREATE OR REPLACE EDITIONABLE PACKAGE pkg_state
AS
   PROCEDURE set_variable (p_number IN NUMBER);
   FUNCTION get_variable
   RETURN NUMBER;
END pkg_state;
/

CREATE OR REPLACE EDITIONABLE PACKAGE BODY pkg_state
AS
   g_pkg_variable   NUMBER(10);

   PROCEDURE set_variable (p_number IN NUMBER)
   AS
   BEGIN   
      g_pkg_variable := p_number;
   END set_variable;
   
   FUNCTION get_variable
   RETURN NUMBER
   AS
   BEGIN
      RETURN g_pkg_variable * 2;
   END get_variable;

END pkg_state;
/

Executing the code from within the EDITION_2 session we see that all is working as it should:

SQL> EXEC pkg_state.set_variable (5)

PL/SQL procedure successfully completed.

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 10

PL/SQL procedure successfully completed.

Let’s go back to our initial session and see what happens if we execute the GET_VARIABLE packaged routine again:

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 5

PL/SQL procedure successfully completed.

We can see that it’s still executing the initial version of the code and that it has retained its package state. The compilation of the code under EDITION_2 did not impact our first session, which is running under EDITION_1. If we change our first session to EDITION_2 and run the routines, everything works fine too:

SQL> ALTER SESSION SET EDITION = edition_2;

Session altered.

SQL> EXEC pkg_state.set_variable (2)

PL/SQL procedure successfully completed.

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 4

PL/SQL procedure successfully completed.

We can see that now we’re executing the modified code under EDITION_2. Switching back to EDITION_1 we find that our package state from the EDITION_1 execution has been preserved:

SQL> ALTER SESSION SET EDITION = edition_1;

Session altered.

SQL> EXEC dbms_output.put_line ('get_value: ' || TO_CHAR(pkg_state.get_variable));
get_value: 5

PL/SQL procedure successfully completed.

So, edition-based redefinition allows us to have multiple versions of our package code in the database at the same time. Each version has its own package state so compiling a new version of the code into a different edition does not invalidate the package state associated with sessions using a different edition.

A word of caution however: edition-based redefinition is a very powerful feature and one that needs careful planning before use. I would strongly recommend reading the on-line documentation before implementing anything. Editions are database-wide and currently, as of Oracle 12.1.0.2, limited to a single parent-child chain. For manageability reasons, take care not to create too many editions with various changes in each. If at all possible, I would recommend having just 2 active editions: a “current” one and a “future” one. Once the functionality of the future edition has been confirmed and all applications have moved over to it, it becomes the current edition and the previous current edition is removed. Trying to track a dozen or more active editions, each with its own specific changes, is likely to lead to trouble.
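
As a sketch of that promotion step, once the future edition has been verified it can be made the database-wide default and access to the superseded edition withdrawn:

ALTER DATABASE DEFAULT EDITION = edition_2;

REVOKE USE ON EDITION edition_1 FROM edition_user;

Bear in mind that DROP EDITION can only remove an edition at an end of the chain, which is yet another reason to keep the number of editions small.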

It’s obvious…

Whilst attempting to apply a CHECK constraint to a table, Oracle threw back the following:

SQL Error: ORA-02293: cannot validate (xxxx.xxxxxxxxx) - check constraint violated
02293. 00000 - "cannot validate (%s.%s) - check constraint violated"
*Cause:    an alter table operation tried to validate a check constraint to
           populated table that had nocomplying values.
*Action:   Obvious

As a database person, to me the obvious action would be to correct the data and re-apply the constraint. To a non-database developer the obvious action may be to discard the constraint entirely.
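
For what it’s worth, the “correct the data and re-apply” path might look like the following sketch, using a hypothetical ORDERS table with a rule that quantities must be positive:

-- find the noncomplying rows
SELECT *
FROM   orders
WHERE  order_qty <= 0;

-- correct the data as the business dictates, then re-apply the constraint
ALTER TABLE orders
   ADD CONSTRAINT orders_qty_chk CHECK (order_qty > 0);

If the historical data genuinely cannot be corrected, adding the constraint with ENABLE NOVALIDATE offers a middle ground: new and modified rows are checked while the existing rows are left alone.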

It’s a matter of perspective… obviously.

Image Attributes

I’ve worked on a number of systems where images were stored in tables in the database, rather than as some sort of reference to a file system location. This approach has quite a few merits: adding and modifying images is transactional with whatever business process is updating the database, and the images are backed up along with the rest of the data.

However, quite often I see the images stored in simple BLOB data type columns. This approach works just fine until you want to know details about the image. An example that springs to mind was the need to know the format and dimensions of images so that decisions could be made about how they should be displayed on screen.

Oracle has an object type for images that exposes this type of metadata: ORDImage. You’ll find the documentation for this object type in the Multimedia User’s Guide and Multimedia Reference. One solution to the problem of determining the attributes of an image stored in a BLOB is to query the BLOB, casting it to an ORDImage type and reading the attributes that way. Let’s take a quick look at how this is done…

The image I’m going to use for this exercise is the banner image from this blog; a picture I took just outside of Queenstown in New Zealand. It’s a 1000 x 288 JPEG.

First we need to load the image into the database in a BLOB column of a table:

CREATE TABLE img_tab
   (id      NUMBER (6) NOT NULL
   ,img     BLOB)
/

CREATE DIRECTORY img_dir AS 'C:\oracle\images'
/

DECLARE
   l_blob        BLOB;
   l_bfile       BFILE := BFILENAME('IMG_DIR', 'cropped-queenstown_sml.jpg');
   l_dest_offset INTEGER := 1;
   l_src_offset  INTEGER := 1;
BEGIN

   dbms_lob.createtemporary
      (lob_loc => l_blob
      ,cache   => TRUE);
      
   dbms_lob.open (file_loc => l_bfile);
                           
   dbms_lob.loadblobfromfile
      (dest_lob    => l_blob
      ,src_bfile   => l_bfile
      ,amount      => dbms_lob.lobmaxsize
      ,dest_offset => l_dest_offset 
      ,src_offset  => l_src_offset);   
   
   INSERT INTO img_tab
      (id
      ,img)
   VALUES
      (1
      ,l_blob);

   COMMIT;
   
   dbms_lob.close (file_loc => l_bfile);
END;
/

Running the above gets our image into the IMG column of the IMG_TAB table. Now to see what the Oracle ORDImage object type can tell us about it. We can do this directly in SQL, without having to resort to PL/SQL (note: the second parameter of the ORDImage constructor, the “1”, instructs Oracle to process the image and extract the metadata):

WITH imgs AS
   (SELECT ORDImage(img, 1) AS ord_img
    FROM   img_tab)
SELECT i.ord_img.getFileFormat() AS file_format
,      i.ord_img.getContentLength() AS content_length
,      i.ord_img.getHeight() AS height
,      i.ord_img.getWidth() AS width
FROM   imgs i

which gives us an output of:

FILE_FORMAT     CONTENT_LENGTH     HEIGHT      WIDTH
--------------- -------------- ---------- ----------
JFIF                     51960        288       1000

Exactly what we wanted to know.

A word of caution though. Running a query like this against a large number of stored BLOB images is likely to be processor and IO intensive. Pick a time of day when the system has capacity.

Slow DELETEs

My lunch was interrupted yet again by a developer with a support question. “The users are complaining that it takes too long to clear out some data. There’s a delete that’s taking ages. Can you look into it please?” he asked.

“A long running delete?” I mused. “That’s interesting…”. Like so many systems these days we don’t do much in the way of deletes. Oracle could deprecate the DELETE statement and it would be only a minor inconvenience to us. :-)

Luckily for me the developer had already done an excellent job of narrowing down the problem to a specific packaged SQL statement. The statement itself was quite trivial: delete a single row from a table, identifying the row via its single column primary key. It doesn’t get much simpler than that. I didn’t bother checking for triggers on the table as the system doesn’t use them.

Since the problem had occurred within the past hour I turned to my favourite diagnostic tool: Active Session History (ASH). A quick query of ASH using the SQL_ID of the delete statement revealed the extent of the problem: 120 samples. ASH samples active sessions once per second, so the users were quite right to complain; a single row delete should not take 2 minutes to complete. Looking at the EVENT details I saw that the session had been waiting on IO; the waits were mostly “db file scattered read”, the multiblock reads typical of a full scan, with a handful of “db file sequential read”. This told me what the problem was likely to be…
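
The ASH query itself was nothing elaborate; something along these lines, substituting the SQL_ID of the problem statement:

SELECT sample_time
,      session_state
,      event
,      current_obj#
FROM   v$active_session_history
WHERE  sql_id = '&sql_id'
ORDER  BY sample_time;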

Foreign keys are constraints in the database that ensure data values used in a child table entry are present in a parent, or reference, table. This helps enforce the integrity we desire, and generally assume to be present, in the data. The presence of foreign keys also has an often overlooked impact on deletions made against the parent table. When a row is deleted from the parent table, Oracle needs to check the child table to ensure that the row being deleted is not referenced there. When the foreign key column(s) in the child table are indexed this check against the child table is quick and generally not noticeable. If the foreign key column(s) are not indexed then Oracle has no choice but to resort to a full table scan of the child table… and if the child table is fairly large then that check may take a while to complete, holding up the delete.
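
To make this concrete, below is a minimal sketch of the situation with hypothetical tables:

CREATE TABLE parent_tab
   (id   NUMBER(10) NOT NULL
   ,CONSTRAINT parent_tab_pk PRIMARY KEY (id))
/

CREATE TABLE child_tab
   (id         NUMBER(10) NOT NULL
   ,parent_id  NUMBER(10) NOT NULL
   ,CONSTRAINT child_tab_pk PRIMARY KEY (id)
   ,CONSTRAINT child_tab_fk FOREIGN KEY (parent_id)
       REFERENCES parent_tab (id))
/

-- with no index on child_tab.parent_id, Oracle must scan all of
-- child_tab to verify that no rows reference the parent row
DELETE FROM parent_tab
WHERE  id = 1;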

So, knowing about the impact of unindexed foreign keys when deleting from a parent table, and observing a lot of physical IO, the next step for me was to identify the object that the IO was being performed on to confirm my suspicion. Once again the ASH data contained the necessary details. The CURRENT_OBJ# attribute contains a reference to the object that the IO related to. A quick query against DBA_OBJECTS using the object ID from ASH revealed it to be a table and, yep, it contained an unindexed foreign key to the table that the delete was being performed on. The child table contained tens of millions of rows so a table scan would certainly explain the delay in deleting from the parent table.
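
That lookup is a one-liner, substituting the CURRENT_OBJ# value obtained from ASH:

SELECT owner
,      object_name
,      object_type
FROM   dba_objects
WHERE  object_id = &current_obj;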

Diagnosis complete. Time to resolve the problem. The solution: drop the foreign key on the child table. No, no, just kidding! The solution was to index the foreign key column in the child table, making sure to specify the ONLINE clause of the CREATE INDEX statement so that the index could be added without blocking activity on the child table.
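
Sticking with the hypothetical names from the sketch above, the fix amounted to nothing more than:

CREATE INDEX child_tab_fk_i
   ON child_tab (parent_id)
   ONLINE
/

The ONLINE keyword allows DML against CHILD_TAB to carry on while the index is built.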

Maximum NUMBER Precision

What’s the maximum precision of the NUMBER data type? For many years I believed it to be 38. After all, it’s in the documentation, under Datatype Limits in the Reference manual:

38 significant digits

We can easily test this out (the following on a 12.1.0.2 database):

SQL>CREATE TABLE table_a
  2     (val    NUMBER(38,0))
  3  /

Table created.

SQL>CREATE TABLE table_b
  2     (val    NUMBER(39,0))
  3  /
   (val    NUMBER(39,0))
                  *
ERROR at line 2:
ORA-01727: numeric precision specifier is out of range (1 to 38)

and see that we cannot create a NUMBER(39,0) column. However, what happens if we leave the precision off the data type and insert some really big numbers (38 to 42 digits) into it:

SQL>CREATE TABLE table_c
  2     (val    NUMBER)
  3  /

Table created.

SQL>INSERT INTO table_c
  2  VALUES (99999999999999999999999999999999999999)
  3  /

1 row created.

SQL>INSERT INTO table_c
  2  VALUES (999999999999999999999999999999999999999)
  3  /

1 row created.

SQL>INSERT INTO table_c
  2  VALUES (9999999999999999999999999999999999999999)
  3  /

1 row created.

SQL>INSERT INTO table_c
  2  VALUES (99999999999999999999999999999999999999999)
  3  /

1 row created.

SQL>INSERT INTO table_c
  2  VALUES (999999999999999999999999999999999999999999)
  3  /

1 row created.

SQL>COMMIT
  2  /

Commit complete.

All values were accepted by the INSERT statements so let’s now see what we’ve ended up with in our table:

SQL>COLUMN VAL FORMAT 9999999999999999999999999999999999999999999

SQL>SELECT val
  2  ,      LENGTH(TO_CHAR(val)) AS length_val
  3  FROM   table_c
  4  /

                                         VAL LENGTH_VAL
-------------------------------------------- ----------
      99999999999999999999999999999999999999         38
     999999999999999999999999999999999999999         39
    9999999999999999999999999999999999999999         40
  100000000000000000000000000000000000000000         40
 1000000000000000000000000000000000000000000         40

It would appear that we can actually get 40 digits of precision into a NUMBER data type, even though we cannot declare a column as NUMBER(40,0). Beyond 40 digits Oracle approximates the value, retaining 40 significant digits.

A careful read of the Oracle SQL Reference documentation confirms this behaviour:

NUMBER(p,s)

where:

  • p is the precision, or the maximum number of significant decimal digits, where the most significant digit is the left-most nonzero digit, and the least significant digit is the right-most known digit. Oracle guarantees the portability of numbers with precision of up to 20 base-100 digits, which is equivalent to 39 or 40 decimal digits depending on the position of the decimal point.
  • s is the scale, or the number of digits from the decimal point to the least significant digit. The scale can range from -84 to 127.
    • Positive scale is the number of significant digits to the right of the decimal point to and including the least significant digit.
    • Negative scale is the number of significant digits to the left of the decimal point, to but not including the least significant digit. For negative scale the least significant digit is on the left side of the decimal point, because the actual data is rounded to the specified number of places to the left of the decimal point. For example, a specification of (10,-2) means to round to hundreds.

Interesting to know. However, NUMBER(38) is going to be satisfactory for virtually all practical purposes… except if you need to store an IPv6 address in decimal form.
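
To expand on that remark: the largest IPv6 address is 2^128 - 1, which in decimal runs to 39 digits, one more than a NUMBER(38) column will accept (the insert below would fail against NUMBER(38) with ORA-01438) yet still comfortably within what an unconstrained NUMBER stores exactly. A quick sketch with a hypothetical table:

CREATE TABLE ipv6_tab
   (addr   NUMBER NOT NULL)
/

-- 2^128 - 1: all 39 digits are stored exactly
INSERT INTO ipv6_tab
VALUES (340282366920938463463374607431768211455);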

Multi-versioning read consistency… again

Following on from some recent questions this post goes back to basics with regard to what Oracle terms “multi-versioning read consistency”. I have posted on this subject before but as it’s so critical to how applications using Oracle will behave it’s worth another post, with emphasis on a very important point at the end…

“Multi-versioning read consistency” might appear to be just a rather grand sounding name (or another bit of techno-jargon) so here’s a brief outline of what it means:

  • The data returned by a query is based on what the underlying tables contained when the query commenced
  • By extension of the above point, the query is not impacted by changes to the table data over the time taken to execute and retrieve the query data

It’s time for an example to illustrate this. First we’ll set up a simple table, TAB, and insert a single row:

CREATE TABLE tab
   (id  NUMBER(6) NOT NULL)
/

INSERT INTO tab
VALUES (1);

COMMIT;

Next we’ll open a cursor that returns the data in TAB but we won’t actually retrieve the data yet.

VARIABLE rc1 REFCURSOR

BEGIN
   OPEN :rc1
   FOR
   SELECT *
   FROM   tab
   ORDER  BY id;
END;
/

Let’s head back to our table and make some data changes. We’ll perform these changes as autonomous transactions so as to simulate an external process coming in and making changes, separate from the session with the open cursor:

DECLARE
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   UPDATE tab
   SET id = 2;

   INSERT INTO tab
   VALUES (2);

   COMMIT;
END;
/

… and once again open a new cursor onto our table:

VARIABLE rc2 REFCURSOR

BEGIN
   OPEN :rc2
   FOR
   SELECT *
   FROM   tab
   ORDER  BY id;
END;
/

Lastly, just to reinforce things, we’ll do the whole thing yet again followed by a last update:

DECLARE
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   UPDATE tab
   SET id = 3;

   INSERT INTO tab
   VALUES (3);

   COMMIT;
END;
/

VARIABLE rc3 REFCURSOR

BEGIN
   OPEN :rc3
   FOR
   SELECT *
   FROM   tab
   ORDER  BY id;
END;
/

DECLARE
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   UPDATE tab
   SET id = 4;

   INSERT INTO tab
   VALUES (4);

   COMMIT;
END;
/

At the end of all this we have performed 4 inserts and 3 updates to our data. If we display the contents of our table as it exists at the end of the changes we get:

SQL> SELECT *
  2  FROM   tab
  3  ORDER BY id;

        ID
----------
         4
         4
         4
         4

Now let’s retrieve the data from our cursors and see what they contain:

SQL> PRINT rc1

        ID
----------
         1

SQL> PRINT rc2

        ID
----------
         2
         2

SQL> PRINT rc3

        ID
----------
         3
         3
         3

Even though the data in the table had changed, and those changes were committed after we opened the cursors, Oracle still returned the data as it was at the point in time each cursor was opened, not what the table contained when we read from the cursor. This is Oracle’s multi-versioning read consistency in action.

Now for that very important point that I mentioned at the start…

The above exercise also highlights another aspect of Oracle:

Readers and writers don’t block each other.

If you look closely at the example, whilst we were holding open cursors on the table we were still able to update existing rows as well as insert new ones. The cursors held no locks on the table data to prevent this… yet the data returned by the cursors was still as the table looked when each cursor was opened. This is fundamental to how Oracle works. If you want to build a system that performs well under high levels of concurrent activity, this is exactly the behaviour you require.

DATE Literals

I have previously noted problems that can occur with implicit type conversion involving dates. Ad-hoc scripts are a major source of the problem. Rather than wrapping date strings in TO_DATE functions, people tend to take the lazier approach and just specify the string literal. I’m not sure in which version it was introduced (Oracle 9i, I believe) but Oracle has a formal way of specifying a date literal, and that format is:

DATE 'yyyy-mm-dd'

Date literals do not permit a time component to be specified. In case this needs further elaboration, an example using the EMP table would be:

SELECT *
FROM   emp
WHERE  hiredate >= DATE '1981-07-01'

to find all those employees hired on or after 1st July 1981. This is a little bit easier than TO_DATE where you will need to specify the format parameter as well as the date string itself.
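
For comparison, the TO_DATE equivalent of the query above would be:

SELECT *
FROM   emp
WHERE  hiredate >= TO_DATE('1981-07-01', 'YYYY-MM-DD')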

Similarly, TIMESTAMP literals may be specified using:

TIMESTAMP 'yyyy-mm-dd hh24:mi:ss.ff'

or, for TIMESTAMP WITH TIME ZONE:

TIMESTAMP 'yyyy-mm-dd hh24:mi:ss.ff tzh:tzm'
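
By way of quick examples, with arbitrary values:

SELECT TIMESTAMP '1997-01-31 09:26:50.124' AS ts
,      TIMESTAMP '1997-01-31 09:26:50.124 -05:00' AS ts_tz
FROM   dual;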

A small tip perhaps but a worthwhile one.