v$transaction

I’ve previously posted articles about monitoring activity within the database. These have focused on analysing session details from internal views such as v$session. A slightly different approach is to look at things from the perspective of the currently active transactions.

Oracle conveniently provides a dynamic performance view that lists the active transactions in the database: v$transaction. According to the Oracle database documentation:

V$TRANSACTION lists the active transactions in the system.

Okay, so that sounds simple and straightforward… until you look at the details it exposes. A quick scan of the column details in the documentation shows a lot of acronyms (SCN, UBA, XID) and block number details. So, do we need intimate knowledge of the internals of Oracle in order to use this view from a monitoring perspective? Not really*.

You’ll notice that the view contains a couple of columns that are quite straightforward:

  • START_TIME is noted as the start time (wall clock) for the transaction
  • SES_ADDR is the user session object address

Interestingly, the START_TIME is stored not as a DATE or TIMESTAMP but as a VARCHAR2. The format for the column is mm/dd/yy hh24:mi:ss, giving us 1-second precision as to when the transaction started. The SES_ADDR is a RAW column that relates to the SADDR column in v$session (going the other way, the TADDR column in v$session relates to the ADDR column in v$transaction). So, using either pairing we can link v$transaction to v$session and extract all the details that we are familiar with.

The following is a simple query I put together during a system problem. Inside the database it was noted that contention was building up, with lots of sessions waiting on row level locks and user locks (from dbms_lock). In an OLTP system this was not supposed to happen, so I wanted to gather some details on the extent of the problem from the sessions with longer than normal transaction durations… where “longer than normal” pretty much meant more than one or two seconds.

SELECT (SYSDATE - TO_DATE(t.start_time,'mm/dd/yy hh24:mi:ss'))*24*60*60 AS trans_dur_secs
,      s.sid
,      s.serial#
,      s.sql_id
,      s.sql_child_number
,      s.username
,      s.program
,      s.module
,      s.action
,      s.client_identifier
,      s.blocking_session
,      s.final_blocking_session
,      s.event
,      s.seconds_in_wait
,      s.plsql_entry_object_id
,      s.plsql_entry_subprogram_id
FROM   v$transaction t
,      v$session     s
WHERE  s.taddr = t.addr
       -- we'll only look at the application account
--AND    s.username    = :l_username
ORDER  BY
       (SYSDATE - TO_DATE(t.start_time,'mm/dd/yy hh24:mi:ss'))*24*60*60 DESC

The query displays primarily session details for those sessions that have an open transaction, optionally restricted to a specific account by uncommenting the username filter (for me this was the application account I was interested in). The listing is ordered with the details of the longest transaction holder shown first.

A few points to note about the details selected:

  • The SQL_ID and SQL_CHILD_NUMBER give details of the statement currently being executed by the session, which may not be the statement(s) that obtained the locks for the transaction
  • The MODULE, ACTION and CLIENT_IDENTIFIER are used heavily within the system for instrumentation purposes. As noted in previous posts, this strategy is highly recommended.
  • The BLOCKING_SESSION gives the SID of the session currently blocking the session being displayed.
  • The FINAL_BLOCKING_SESSION gives the SID of the session that is at the end of the blocking lock chain. So, in the scenario of Session A blocking Session B, which in turn is blocking Session C, the entry for Session C will report Session B as the BLOCKING_SESSION and Session A as the FINAL_BLOCKING_SESSION. I believe FINAL_BLOCKING_SESSION is a new attribute for Oracle 11.2.
  • The PLSQL_ENTRY_OBJECT_ID and PLSQL_ENTRY_SUBPROGRAM_ID show the details of the PL/SQL call currently being made. This will only be applicable if the session is invoking PL/SQL, which was the case in the situation I was dealing with (a sketch for resolving these IDs into object names follows this list).
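
If you want to turn those two IDs into something readable, a dictionary view such as dba_procedures can resolve them. The following is just a minimal sketch, assuming you have access to dba_procedures (all_procedures would also do); the bind variables are simply placeholders for the two values reported by v$session:

SELECT p.owner
,      p.object_name
,      p.procedure_name
FROM   dba_procedures p
WHERE  p.object_id     = :plsql_entry_object_id
AND    p.subprogram_id = :plsql_entry_subprogram_id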

As it turned out, the query revealed a series of transactions for a particular process. Many of the sessions were idle but holding locks for up to 60 seconds. These idle sessions were part of a distributed transaction where another system in the transaction was having trouble. Oracle was holding the transactions open for 60 seconds before rolling them back, as it was supposed to do. The contention was being caused by repeated activity from other sessions (think “click here to retry”) that was being blocked by the waiting transactions. The solution was not to do anything with the database, as it was doing exactly what it was supposed to do, but instead to address the root cause outside of Oracle.


*: Knowledge of Oracle’s internals may not be necessary but can be extremely beneficial at times. For anyone wanting to know more about the internals of Oracle, I would recommend Oracle Core: Essential Internals for DBAs and Developers by Jonathan Lewis. A fairly slender volume compared to many computer books these days, it will take a few reads to absorb fully… and it will be time well spent.

NULL in SQL Operators & Functions

Another situation from work worth a post: how Oracle SQL operators and functions handle NULL arguments. It’s quite simple stuff to an experienced Oracle developer but perhaps not so obvious to a newcomer, as the concept of NULL seems to be frequently overlooked. I have previously commented on aggregate functions and their handling of NULL, so I’ll focus on standard operators and common in-built functions in the first part of this post and look at the NULL-specific functions in the latter part.

Note, all of the following was done on Oracle 11.2.0.3.

SQL Operators

Let’s start off by looking at how the in-built operators handle NULL. The following operations were performed using a simple:

SELECT <operation>
FROM   dual

Where the result is NULL, it’s shown in the Result column as null:

Operator              Operation    Result
--------------------  -----------  ------
Addition              1 + NULL     null
Subtraction           1 - NULL     null
Multiplication        1 * NULL     null
Division              1 / NULL     null
String Concatenation  'A' || NULL  'A'

So, with the exception of string concatenation, all the operators return NULL when one of their arguments is NULL. Hopefully there are no surprises there; it’s quite logical really… how can an expression return a valid result when the value of one component is unknown? String concatenation is the exception: NULL is ignored and so the end result will be the concatenation of the non-NULL arguments.
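
If you want to verify this yourself, a couple of rows from the table above can be reproduced with a single query against dual (the column aliases are just mine for illustration):

SELECT 1 + NULL    AS addition_result       -- returns NULL
,      'A' || NULL AS concatenation_result  -- returns 'A'
FROM   dual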

Pretty simple so far…

SQL Functions

I don’t intend to go through all functions here but will simply show the output of a handful that are not specifically designed for dealing with NULL parameters. Later I’ll go through some functions that are designed specifically to handle NULL. Similar to the previous section, the results shown here were obtained via:

SELECT <operation>
FROM   dual

Function   Operation         Result
---------  ----------------  ------
TO_NUMBER  TO_NUMBER(NULL)   null
TO_CHAR    TO_CHAR(NULL)     null
TRUNC      TRUNC(NULL)       null
SUBSTR     SUBSTR(NULL,1,2)  null
SIGN       SIGN(NULL)        null
LEAST      LEAST(1,2,NULL)   null
RPAD       RPAD(NULL,5,'X')  null
LENGTH     LENGTH(NULL)      null

Hmmm… I see a pattern emerging…

Quite simply SQL functions return NULL if an input parameter is NULL. Even functions that you might expect to return a value, such as LENGTH, return NULL.

Okay, the above statement is a bit of a generalisation as there are functions that are specifically designed to handle NULL. Let’s have a look at some of them. For these functions I’ll go into a bit more detail. The functions examined are:

  • NVL
  • NVL2
  • COALESCE
  • NULLIF
  • LNNVL
  • DECODE

NVL

You won’t have been developing with Oracle for long before encountering this function. NVL takes 2 parameters. If the first parameter is not NULL then it returns that value, otherwise it returns the second parameter. Rewritten as a CASE expression it would be:

CASE WHEN <parameter 1> IS NOT NULL 
     THEN <parameter 1>
     ELSE <parameter 2>
END

So:

NVL (5, 4)

returns the value 5, whilst:

NVL(NULL, 4)

returns the value 4. So, NVL is a simple function for assigning a non-NULL value to an expression that might be NULL. However, even with such a simple function there are a few things to keep in mind:

  • NVL will return NULL if both parameters evaluate to NULL.
  • Oracle will attempt to perform an implicit data type conversion on the second parameter if it does not match the data type of the first parameter. So, the expression NVL(5,'NULL') will raise an exception, whereas the expression NVL(5,'3') will not (see the sketch below).
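
To illustrate that second point, here’s a quick sketch of both expressions run against dual (the comments reflect what I’d expect from the implicit conversion noted above):

SELECT NVL(5, '3') AS conversion_ok     -- returns 5; '3' is implicitly converted to a number
FROM   dual;

SELECT NVL(5, 'NULL') AS conversion_bad -- raises an exception from the failed implicit conversion
FROM   dual;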

NVL2

NVL2 takes 3 parameters. If the first parameter evaluates to not NULL then it returns the second parameter, otherwise it returns the third parameter. Expressed using CASE, NVL2 would be:

CASE WHEN <parameter 1> IS NOT NULL 
     THEN <parameter 2>
     ELSE <parameter 3>
END

From the above, it can be seen that NVL2 will return NULL in the following scenarios:

  • Parameter 1 is not NULL and Parameter 2 is NULL
  • Parameters 1 and 3 are both NULL

The data type returned by NVL2 is driven by the second parameter and an implicit conversion is done on the third parameter if it does not match the data type of the second parameter.
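
A quick sketch of NVL2 against dual (the aliases are mine):

SELECT NVL2('x', 'param 2', 'param 3')  AS first_is_not_null  -- returns 'param 2'
,      NVL2(NULL, 'param 2', 'param 3') AS first_is_null      -- returns 'param 3'
FROM   dual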

COALESCE

COALESCE takes a variable number of parameters and returns the first non-NULL parameter. Expressed using CASE this would be:

CASE WHEN <parameter 1> IS NOT NULL THEN <parameter 1>
     WHEN <parameter 2> IS NOT NULL THEN <parameter 2>
     WHEN <parameter 3> IS NOT NULL THEN <parameter 3>
     ...
     WHEN <parameter n> IS NOT NULL THEN <parameter n>
END     
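
As a quick sketch of the basic behaviour before we look at limits:

SELECT COALESCE(NULL, NULL, 3, 4) AS first_non_null  -- returns 3
FROM   dual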

So, how many parameters can COALESCE accept? According to the following script, the answer is 65535:

DECLARE 
   l_no_parameters NUMBER(6) := 65535;
   l_result        NUMBER(5);
   l_sql           CLOB;
BEGIN
   dbms_lob.createtemporary(l_sql,FALSE);
   
   l_sql := 'SELECT COALESCE (';
   FOR i IN 1..l_no_parameters-1
   LOOP
       l_sql := l_sql || 'NULL,';
   END LOOP;
   l_sql := l_sql || TO_CHAR(l_no_parameters) || ') FROM dual';
   
   EXECUTE IMMEDIATE l_sql INTO l_result;
   
   dbms_output.put_line ('Result is: ' || TO_CHAR(l_result));
END;

Using 65535 as the number of parameters, we get a response of:

Result is: 65535

However, if we increment the number of parameters to 65536 we get a response of:

DECLARE
*
ERROR at line 1:
ORA-00939: too many arguments for function
ORA-06512: at line 15

Still, I would expect that 64k parameters should be enough for anyone…

NULLIF

NULLIF accepts 2 parameters and returns NULL if both parameters are the same, otherwise it returns the first parameter. Expressed using CASE this would be:

CASE WHEN <parameter 1> = <parameter 2>
     THEN NULL
     ELSE <parameter 1>
END

An interesting situation exists if the first parameter is NULL. According to the NULLIF documentation, the first parameter cannot be specified as a literal NULL but testing shows that it can be an expression that results in NULL:

SELECT NULLIF(TO_CHAR(NULL),'1')
FROM   dual

which results in NULL.

The parameters of NULLIF need to be of the same basic data type: numeric, character or date. NULLIF will return an error if they are not.
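
A quick sketch of the basic behaviour against dual:

SELECT NULLIF(1, 1) AS params_equal      -- returns NULL
,      NULLIF(1, 2) AS params_not_equal  -- returns 1
FROM   dual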

LNNVL

LNNVL is an interesting function in that it takes a single logical expression rather than the more traditional scalar parameters. If the expression evaluates to TRUE then LNNVL returns FALSE, otherwise it returns TRUE. Expressed using CASE, it would be:

CASE WHEN <logical expression>
     THEN FALSE
     ELSE TRUE
END

“Hang on” I hear you say, “isn’t that what NOT does?”, i.e.:

X > 5

could be just as easily expressed as:

NOT (X > 5)

to obtain the reverse without needing LNNVL. The answer is “No, not quite”. This is where the complexity of handling NULL comes in. Let’s take a closer look at a very simple scenario: a data set involving 3 rows with values 1, 10 and NULL. First we select the entries where the value is greater than 5, as per our previous expression:

WITH demo_data AS
   (SELECT 1 AS id, 1    AS val FROM dual UNION ALL
    SELECT 2 AS id, 10   AS val FROM dual UNION ALL
    SELECT 3 AS id, NULL AS val FROM dual)
SELECT *
FROM   demo_data
WHERE  val > 5

which gives:

        ID        VAL
---------- ----------
         2         10

Then we’ll select the entries where the value is not greater than 5:

WITH demo_data AS
   (SELECT 1 AS id, 1    AS val FROM dual UNION ALL
    SELECT 2 AS id, 10   AS val FROM dual UNION ALL
    SELECT 3 AS id, NULL AS val FROM dual)
SELECT *
FROM   demo_data
WHERE  NOT(val > 5)

which gives:

        ID        VAL
---------- ----------
         1          1

So, despite one WHERE clause being the exact opposite of the other from a simple boolean perspective, we have excluded the row with the NULL from both result-sets. Enter LNNVL. If we use LNNVL in place of the NOT operator in the second statement:

WITH demo_data AS
   (SELECT 1 AS id, 1    AS val FROM dual UNION ALL
    SELECT 2 AS id, 10   AS val FROM dual UNION ALL
    SELECT 3 AS id, NULL AS val FROM dual)
SELECT *
FROM   demo_data
WHERE  LNNVL(val > 5)

we get:

        ID        VAL
---------- ----------
         1          1
         3

Ah, so now we do get the exact opposite of the initial result-set, which includes those entries where VAL is NULL.

DECODE

I’m including DECODE in the list of NULL handling functions due to a peculiar property it has in relation to NULL. DECODE is written as:

DECODE (expression
       ,value_1, result_1
       ,value_2, result_2
       ,value_3, result_3
       ,...
       ,default_result)

So, the expression is evaluated and compared to the value_1 parameter; if they match then the result_1 parameter is returned. Otherwise a comparison is made to the value_2 parameter; if matched then result_2 is returned, and so on down the parameter list. If no match is made then the final parameter is returned, assuming one is provided; otherwise DECODE returns NULL. Using CASE, DECODE is:

CASE <expression>
     WHEN <value_1> THEN <result_1>
     WHEN <value_2> THEN <result_2>
     WHEN <value_3> THEN <result_3>
     ...
     ELSE <default_result>
END

So, what’s so special with DECODE in relation to NULLs? If the expression evaluates to NULL and a value parameter is NULL then DECODE returns the corresponding result parameter. Normally NULL is never considered equal to anything, including another NULL, but DECODE treats it as if it is. This can be best illustrated using the following:

WITH demo_data AS
   (SELECT 1 AS id, 1    AS val FROM dual UNION ALL
    SELECT 2 AS id, 10   AS val FROM dual UNION ALL
    SELECT 3 AS id, NULL AS val FROM dual)
SELECT id
,      val
,      DECODE (val, NULL, 999, -1) AS calc_val
FROM   demo_data

which gives the following result set:

        ID        VAL   CALC_VAL
---------- ---------- ----------
         1          1         -1
         2         10         -1
         3                   999

So, here DECODE mapped the NULL in the third row to a calculated value of 999. This also illustrates a bug in how I represented DECODE using CASE. If I place the CASE expression I initially showed above alongside the DECODE:

WITH demo_data AS
   (SELECT 1 AS id, 1    AS val FROM dual UNION ALL
    SELECT 2 AS id, 10   AS val FROM dual UNION ALL
    SELECT 3 AS id, NULL AS val FROM dual)
SELECT id
,      val
,      DECODE (val, NULL, 999, -1) AS calc_val_decode
,      CASE val
            WHEN NULL THEN 999
            ELSE -1
       END AS calc_val_case
FROM   demo_data

the result would be:

        ID        VAL CALC_VAL_DECODE CALC_VAL_CASE
---------- ---------- --------------- -------------
         1          1              -1            -1
         2         10              -1            -1
         3                        999            -1

Notice how the CASE expression did not consider the NULL column value to be equal to the NULL comparison value, so it defaulted to the ELSE clause instead. So, in order to mimic DECODE via CASE we need slightly more complex logic:

CASE -- resolve the scenario where the expression evaluates to NULL
     WHEN <expression> IS NULL THEN <result_null>
     -- now handle the not NULL scenarios
     ELSE CASE <expression>
          WHEN <value_1> THEN <result_1>
          WHEN <value_2> THEN <result_2>
          WHEN <value_3> THEN <result_3>
          ...
          ELSE <default_result>
     END
END

… or something similar as there are various ways this logic could be implemented.
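
For the particular DECODE used above, which only maps NULL and supplies a default, the pattern collapses to a simple searched CASE. A sketch against the same demo data, which now matches the DECODE output (999 for the NULL row, -1 otherwise):

WITH demo_data AS
   (SELECT 1 AS id, 1    AS val FROM dual UNION ALL
    SELECT 2 AS id, 10   AS val FROM dual UNION ALL
    SELECT 3 AS id, NULL AS val FROM dual)
SELECT id
,      val
,      CASE WHEN val IS NULL THEN 999
            ELSE -1
       END AS calc_val_case
FROM   demo_data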

So, in summary, I hope this post has illustrated the basics of how NULL is handled by SQL operators and functions. The handling of NULL is not overly complex but it is an area that quite regularly causes problems when overlooked in applications.

TABLE function with BULK COLLECT

A quick post about a coding scenario that cropped up at work the other day. As developers we’re quite used to performing some calculation using a variable and assigning the result back to the very same variable used in the calculation. For example, the classic case of incrementing the value of a variable in PL/SQL would be written as:

l_variable := l_variable + 1;

Many modern languages even have specific operators for handling this very scenario, such as the C approach of:

l_variable++

which will use the current value of the variable and increment it immediately afterwards.

Anyway, the situation at work involved a nested table collection that the developer wanted sorted. The developer used the following within their PL/SQL code:

SELECT t.column_value
BULK COLLECT INTO l_collection
FROM   TABLE(l_collection) t
ORDER  BY
       t.column_value;

where l_collection was a nested table of numbers. Logically the above can be read as “take the nested table and treat it as if it were a database table, sort the values of that table and assign the resulting set of values back into the original collection”.

The code compiled and executed without error but produced incorrect results. A quick test using an anonymous block shows what was happening:

CREATE OR REPLACE TYPE t_number_tab IS TABLE OF NUMBER
/

DECLARE
   l_number_tab t_number_tab := t_number_tab(2,4,5,1,3);
   
   -- outputs the collection details to dbms_output
   PROCEDURE display (p_number_tab t_number_tab)
   AS
   BEGIN
      IF (p_number_tab.COUNT = 0) THEN
         dbms_output.put_line ('Collection is empty');
      ELSE
         FOR i IN 1..p_number_tab.COUNT
         LOOP
            dbms_output.put_line ('Element ' || TO_CHAR(i) || ' is ' || TO_CHAR(p_number_tab(i)));
         END LOOP;
      END IF;
   END display;   
BEGIN

   dbms_output.put_line ('Initial collection:');
   display (l_number_tab);
   
   SELECT t.column_value
   BULK COLLECT INTO l_number_tab
   FROM TABLE (l_number_tab) t
   ORDER BY
         t.column_value;
         
   dbms_output.put_line ('Final collection:');
   display (l_number_tab);
END;
/

The result of the above script, in Oracle 11.2, is:

Initial collection:
Element 1 is 2
Element 2 is 4
Element 3 is 5
Element 4 is 1
Element 5 is 3
Final collection:
Collection is empty

So, the collection is being wiped out by the SQL statement. It seems the collection used in the TABLE expression cannot be the same as the collection being BULK COLLECTed into, presumably because the target collection is emptied at the start of the bulk fetch, leaving the TABLE expression with nothing to select from. Once this problem was identified the solution was simply to BULK COLLECT into a new collection, i.e.:

SELECT t.column_value
BULK COLLECT INTO l_sorted_collection
FROM   TABLE(l_collection) t
ORDER  BY
       t.column_value;
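
Putting it together, here’s a minimal sketch reusing the t_number_tab type created earlier; the assignment back to the original variable is optional, depending on what the surrounding code needs:

DECLARE
   l_number_tab t_number_tab := t_number_tab(2,4,5,1,3);
   l_sorted_tab t_number_tab;
BEGIN
   -- sort into a separate collection...
   SELECT t.column_value
   BULK COLLECT INTO l_sorted_tab
   FROM   TABLE(l_number_tab) t
   ORDER  BY
          t.column_value;

   -- ...then copy back if the original variable must hold the sorted values
   l_number_tab := l_sorted_tab;

   FOR i IN 1..l_number_tab.COUNT
   LOOP
      dbms_output.put_line ('Element ' || TO_CHAR(i) || ' is ' || TO_CHAR(l_number_tab(i)));
   END LOOP;
END;
/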

Problem solved and another lesson on the finer points of Oracle development learned…