Package: pkg_ddl_sql

Performing DDL from within PL/SQL is not natively supported by the language. So, the following script will generate an error:

BEGIN
   DROP TABLE blah;
END;

Yet this limitation is trivially easy to work around, as the DDL command can simply be embedded in an EXECUTE IMMEDIATE call, i.e.:

BEGIN
   EXECUTE IMMEDIATE 'DROP TABLE blah';
END;

This approach to executing SQL is termed dynamic SQL as the SQL is constructed at run time, rather than being specified at compile time. Many Oracle experts will tell you that performing DDL from within PL/SQL is not good practice and there are good reasons for this advice. Generally speaking, it’s not necessary. Database objects, such as tables, indexes and sequences, should be constructed in advance of the processes that operate on those objects. Stored procedures that operate on database objects won’t even compile if the objects are missing, thanks to the way Oracle tracks dependencies. DDL is non-transactional (each statement issues an implicit commit) and so is completely inappropriate for OLTP operations.
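
As a quick illustration of that implicit commit (the table names here are purely illustrative):

INSERT INTO my_table VALUES (1);   -- an uncommitted change

BEGIN
   EXECUTE IMMEDIATE 'TRUNCATE TABLE my_staging_table';
END;
/

ROLLBACK;

-- the ROLLBACK achieves nothing; the INSERT was committed the moment the DDL ran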

However, there are some scenarios where it may be appropriate to perform DDL from within PL/SQL, for example:

  • Automating regular maintenance activities, such as table partition management
  • ETL processes where it is necessary to do things such as disabling indexes and constraints

The approach to DDL I prefer to adopt is to split the process into the steps of:

  • generating the necessary SQL, and
  • executing that SQL.

By splitting the process I can have a simple package of reusable routines that returns a SQL statement for each DDL activity I wish to support. Another package can then take the generated DDL and execute it. Into this second package I can place whatever controls and processes I want around the DDL execution, such as logging and security checks. I can also be more selective with the latter package so that only certain activities are exposed.
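
As a rough sketch of what that second, executing package might look like (the name pkg_ddl_exec and its contents are illustrative only, not part of my library):

CREATE OR REPLACE PACKAGE pkg_ddl_exec
AS
   PROCEDURE run_ddl (p_ddl IN VARCHAR2);
END pkg_ddl_exec;
/

CREATE OR REPLACE PACKAGE BODY pkg_ddl_exec
AS
   PROCEDURE run_ddl (p_ddl IN VARCHAR2)
   AS
   BEGIN
      -- whatever controls you want around the execution go here:
      -- logging, security checks, restricting the permitted statement types, etc.
      dbms_output.put_line ('Executing: ' || p_ddl);
      EXECUTE IMMEDIATE p_ddl;
   END run_ddl;
END pkg_ddl_exec;
/

Executing a generated statement then becomes a call such as pkg_ddl_exec.run_ddl (pkg_ddl_sql.rename_table (...)).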

The package I use for creating the DDL SQL statements is called PKG_DDL_SQL. This package contains a set of functions, each one returning a DDL string for a specific action. Each function accepts parameters appropriate to the DDL action and, where appropriate, default values have been provided for those parameters. The returned value is a string, using the t_big_string type from the previously described PKG_DB_TYPE package (hey, I don’t have an active imagination so “big string” is the best I could do!).

PKG_DDL_SQL contains routines to:

  • truncate tables, partitions and subpartitions
  • disable and enable constraints
  • compile PL/SQL objects such as packages, procedures, functions and triggers
  • drop objects, such as tables, indexes and PL/SQL code objects
  • rename tables, partitions and subpartitions

So, as an example, if I wanted the DDL for renaming a table I could write:

BEGIN
   dbms_output.put_line (pkg_ddl_sql.rename_table (p_table_owner  => 'TABLE_OWNER'
                                                  ,p_table_name   => 'OLD_TABLE_NAME'
                                                  ,p_new_name     => 'NEW_TABLE_NAME'));
END;

and the output is:

ALTER TABLE TABLE_OWNER.OLD_TABLE_NAME RENAME TO NEW_TABLE_NAME

Generally speaking PKG_DDL_SQL is relatively “dumb”; it creates DDL statements for the inputs provided and does little else. However, it does add one bit of value to the overall process of executing dynamic DDL: it checks that the identifiers specified are valid names. This check is done via the function DBMS_ASSERT.SIMPLE_SQL_NAME and is designed to prevent SQL injection attacks. If a parameter for an object is not a valid name then DBMS_ASSERT.SIMPLE_SQL_NAME will raise the exception ORA-44003: invalid SQL name, which is passed back out of PKG_DDL_SQL:

BEGIN
   dbms_output.put_line (pkg_ddl_sql.rename_table (p_table_owner  => 'TABLE_OWNER'
                                                  ,p_table_name   => 'INVALID''NAME'
                                                  ,p_new_name     => 'NEW_TABLE_NAME'));
END;

which results in:

BEGIN
*
ERROR at line 1:
ORA-44003: invalid SQL name
ORA-06512: at "SYS.DBMS_ASSERT", line 160
ORA-06512: at "DEVELOPER.PKG_DDL_SQL", line 530
ORA-06512: at line 2

Note, it is deliberate that PKG_DDL_SQL simply checks the validity of the identifiers and does not confirm their existence in the database, as could be done using the DBMS_ASSERT.SQL_OBJECT_NAME function. This is to permit the package to be used to generate DDL for objects that may not exist at the point in time the DDL is generated. Of course, you are welcome to alter the package if you would rather it behave otherwise.
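
To illustrate the pattern (this is only a sketch, not the actual package code; it returns VARCHAR2 rather than the t_big_string type mentioned earlier):

FUNCTION rename_table (p_table_owner IN VARCHAR2
                      ,p_table_name  IN VARCHAR2
                      ,p_new_name    IN VARCHAR2)
RETURN VARCHAR2
AS
BEGIN
   -- SIMPLE_SQL_NAME raises ORA-44003 for anything that is not a valid simple SQL name
   RETURN 'ALTER TABLE ' || dbms_assert.simple_sql_name (p_table_owner) ||
          '.'            || dbms_assert.simple_sql_name (p_table_name)  ||
          ' RENAME TO '  || dbms_assert.simple_sql_name (p_new_name);
END rename_table;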


Download scripts shown in this post.

INTERVAL YEAR TO MONTH

I’ve already noted a couple of aspects about interval data types previously. In those examples I used the INTERVAL DAY TO SECOND data type. The other type of interval is INTERVAL YEAR TO MONTH.

Oracle handles intervals with TIMESTAMPs in a very natural way; adding an INTERVAL to, or subtracting one from, a TIMESTAMP results in a TIMESTAMP that has been incremented (or decremented) by the value of the interval. (You can read about the allowable operations in the Datetime/Interval Arithmetic section of the Oracle documentation.)

So, if we were to run the following script:

DECLARE
   l_old_timestamp TIMESTAMP := TO_TIMESTAMP('05/01/2012 8:00','dd/mm/yyyy hh24:mi');
   l_new_timestamp TIMESTAMP;
   l_interval      INTERVAL DAY TO SECOND := INTERVAL '1' DAY;
BEGIN
   l_new_timestamp := l_old_timestamp + l_interval;
   dbms_output.put_line('New timestamp: ' || TO_CHAR(l_new_timestamp,'dd/mm/yyyy hh24:mi'));
END;   

we’d get:

New timestamp: 06/01/2012 08:00

What could be more natural!

Again though, you notice that I used the data type INTERVAL DAY TO SECOND. I could just as easily have run:

DECLARE
   l_old_timestamp TIMESTAMP := TO_TIMESTAMP('05/01/2012 8:00','dd/mm/yyyy hh24:mi');
   l_new_timestamp TIMESTAMP;
   l_interval      INTERVAL YEAR TO MONTH := INTERVAL '1' MONTH;
BEGIN
   l_new_timestamp := l_old_timestamp + l_interval;
   dbms_output.put_line('New timestamp: ' || TO_CHAR(l_new_timestamp,'dd/mm/yyyy hh24:mi'));
END;   

and got an answer of:

New timestamp: 05/02/2012 08:00

but I didn’t. The reason is quite simple: as a general rule I shun the INTERVAL YEAR TO MONTH data type. If I were to modify the example above and run:

DECLARE
   l_old_timestamp TIMESTAMP := TO_TIMESTAMP('30/01/2012 8:00','dd/mm/yyyy hh24:mi');
   l_new_timestamp TIMESTAMP;
   l_interval      INTERVAL YEAR TO MONTH := INTERVAL '1' MONTH;
BEGIN
   l_new_timestamp := l_old_timestamp + l_interval;
   dbms_output.put_line('New timestamp: ' || TO_CHAR(l_new_timestamp,'dd/mm/yyyy hh24:mi'));
END;   

I would get:

ERROR at line 1:
ORA-01839: date not valid for month specified
ORA-06512: at line 6

So, adding months to a timestamp such that the result exceeds the number of days in the resulting month generates an error. This might seem quite obvious to many people, and if you’ve ever been bitten by the logic of the ADD_MONTHS function and the way it handles dates on the last day of the month you may even welcome this handling. However, the problem I have is that if I used something like this in my code then I would have to provide a workaround for when the error occurs… and if I have to do that then I might as well write the entire “month adding” logic myself and skip the use of the INTERVAL data type altogether.
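
For comparison, ADD_MONTHS silently adjusts the result to the last day of the target month rather than raising an error:

SELECT TO_CHAR(ADD_MONTHS(TO_DATE('30/01/2012','dd/mm/yyyy'),1),'dd/mm/yyyy') AS result
FROM   dual;

RESULT
----------
29/02/2012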

Worse yet, if we were to perform the calculation inside an SQL statement then we would receive the same error. So now our long running batch process falls over because one row combines a timestamp and an interval that result in an invalid date… and catching and handling exceptions like these within SQL is difficult and complicated.
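
For example, the same calculation in a query:

SELECT TO_TIMESTAMP('30/01/2012 8:00','dd/mm/yyyy hh24:mi') + INTERVAL '1' MONTH AS new_timestamp
FROM   dual;

ERROR:
ORA-01839: date not valid for month specified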

So, the solution would appear to be not to use the INTERVAL YEAR TO MONTH data type when requiring an offset from a TIMESTAMP. Unfortunately, offsetting a point in time like this is one of the core reasons for using INTERVAL data types in the first place.

Package: pkg_interval

As noted in a previous post, interval data types are used for storing an “amount of time” without specifying exactly when that time is, e.g. 10 minutes, 1 day, etc. Also as noted previously, although they first appeared in Oracle 9i, there are areas of functionality lacking in Oracle’s native support for interval data types. The last post looked at SQL aggregate functions for intervals while this post will look at expressing an interval value as a single unit of measure.

Suppose you have a long running process where the start and end times are tracked and recorded in TIMESTAMP columns of a log table. Obtaining the duration of the process is a simple matter of subtracting the start time from the end time, resulting in an interval duration, i.e.:

DECLARE
   l_start_timestamp  TIMESTAMP;
   l_end_timestamp    TIMESTAMP;
   l_process_duration INTERVAL DAY TO SECOND;
BEGIN
   l_start_timestamp := TO_TIMESTAMP('08:00:00.00','hh24:mi:ss.ff'); -- fixed value in place of SYSTIMESTAMP
   -- <the long running process goes here>
   l_end_timestamp   := TO_TIMESTAMP('09:32:51.37','hh24:mi:ss.ff'); -- fixed value in place of SYSTIMESTAMP
   
   l_process_duration := l_end_timestamp - l_start_timestamp;
   
   dbms_output.put_line ('Duration: ' || TO_CHAR(l_process_duration));
END;
/

The output of the above is:

Duration: +00 01:32:51.370000

This isn’t too bad but it could be confusing if you didn’t understand that the format was days, hours, minutes, seconds and fractions of a second. If we simply wanted to report the duration in terms of minutes then we’d have to try a different approach. Reading the SQL documentation you may think that the EXTRACT function is exactly what’s required. If we change the output statement in our script to:

   dbms_output.put_line ('Duration: ' || TO_CHAR(EXTRACT(MINUTE FROM l_process_duration)));

we get an output of:

Duration: 32

So, the EXTRACT function has simply extracted the minutes component from the overall duration, not expressed the entire duration in minutes. Anyone familiar with the way Oracle handles DATEs will probably find this rather annoying. If we were to use DATE data types we could take the difference between the start of the process and the end of the process, which Oracle returns as a number of days, and multiply by 24 x 60 (24 hours in a day and 60 minutes in an hour):

DECLARE
   l_start_timestamp  TIMESTAMP;
   l_end_timestamp    TIMESTAMP;
   l_process_duration NUMBER;
BEGIN
   l_start_timestamp := TO_TIMESTAMP('08:00:00.00','hh24:mi:ss.ff'); -- fixed value in place of SYSTIMESTAMP
   -- <the long running process goes here>
   l_end_timestamp   := TO_TIMESTAMP('09:32:51.37','hh24:mi:ss.ff'); -- fixed value in place of SYSTIMESTAMP
   
   l_process_duration := CAST(l_end_timestamp AS DATE) - CAST(l_start_timestamp AS DATE);
   
   dbms_output.put_line ('Duration: ' || TO_CHAR(l_process_duration*24*60) || ' minutes');
END;
/

Here we are casting our TIMESTAMP variables to DATEs, taking the difference and expressing it in minutes, which results in:

Duration: 92.85 minutes

Looks good, right? Certainly the whole number part is correct but on closer observation you’ll note that the decimal part is out by 0.01. The correct answer, to 2 decimal places, is 92.86. What’s happened here is that in CASTing the TIMESTAMPs to DATEs, Oracle has discarded the 0.37 fraction of a second (DATEs don’t support fractions of a second): 51.37 seconds is 0.8562 of a minute, whereas 51 seconds is exactly 0.85 of a minute. Okay, admittedly, you’re not likely to be worried about that in this scenario but it may be of concern in other situations.
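
You can see the fractional seconds being discarded by the CAST directly:

SELECT TO_CHAR(CAST(TO_TIMESTAMP('09:32:51.37','hh24:mi:ss.ff') AS DATE),'hh24:mi:ss') AS result
FROM   dual;

RESULT
--------
09:32:51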

So, what can we do to express our duration in minutes and still preserve the sub-second precision that the TIMESTAMP data type gives us? Well, looking back at the EXTRACT function, we can extract the various components from the duration, perform the necessary arithmetic to convert each to minutes, add them up and output that:

DECLARE
   l_start_timestamp  TIMESTAMP;
   l_end_timestamp    TIMESTAMP;
   l_process_duration INTERVAL DAY TO SECOND;
BEGIN
   l_start_timestamp := TO_TIMESTAMP('08:00:00.00','hh24:mi:ss.ff'); -- fixed value in place of SYSTIMESTAMP
   -- <the long running process goes here>
   l_end_timestamp   := TO_TIMESTAMP('09:32:51.37','hh24:mi:ss.ff'); -- fixed value in place of SYSTIMESTAMP
   
   l_process_duration := l_end_timestamp - l_start_timestamp;
   
   dbms_output.put_line ('Duration: ' || 
                         TO_CHAR((EXTRACT(DAY    FROM l_process_duration) * 24 * 60) +
                                 (EXTRACT(HOUR   FROM l_process_duration) * 60) +
                                 (EXTRACT(MINUTE FROM l_process_duration)) +
                                 (EXTRACT(SECOND FROM l_process_duration) / 60)) || ' minutes');
END;
/

which gives us:

Duration: 92.85616666666666666666666666666666666667 minutes

We could apply the ROUND function to round the result to a reasonable number of decimal places; the important point is that the result is accurate. You’ll also have noted that the expression to do this is rather long-winded. This type of thing is ideal for wrapping up into a utility package: pkg_interval in my utility library.

pkg_interval contains conversion routines that take an INTERVAL DAY TO SECOND parameter and express it in units of a single time measure. The name of each routine tells you what the unit of measure is, i.e.:

  • to_seconds
  • to_minutes
  • to_hours
  • to_days
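
To give a sense of what these routines do, a minimal sketch of to_minutes (not necessarily the actual implementation) would simply wrap the EXTRACT arithmetic shown earlier:

FUNCTION to_minutes (p_interval IN INTERVAL DAY TO SECOND)
RETURN NUMBER
AS
BEGIN
   -- express each component of the interval in minutes and sum them
   RETURN (EXTRACT(DAY    FROM p_interval) * 24 * 60) +
          (EXTRACT(HOUR   FROM p_interval) * 60) +
           EXTRACT(MINUTE FROM p_interval) +
          (EXTRACT(SECOND FROM p_interval) / 60);
END to_minutes;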

Using the to_minutes function in our example above, the output statement would then be:

dbms_output.put_line ('Duration: ' || TO_CHAR(pkg_interval.to_minutes(l_process_duration)) || ' minutes');

which gives us the output of:

Duration: 92.85616666666666666666666666666666666667 minutes

As noted previously, the result can be ROUNDed to however many decimal places you deem necessary.
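
For example, continuing the script above:

dbms_output.put_line ('Duration: ' || TO_CHAR(ROUND(pkg_interval.to_minutes(l_process_duration),2)) || ' minutes');

which gives:

Duration: 92.86 minutes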


Download scripts shown in this post.