Avoid SQL injection with DBMS_ASSERT

A simple internet search will reveal some serious computer system hacks that were, fundamentally, a result of SQL injection. If you plan on doing any significant amount of database development you need to be aware of what SQL injection is and how to identify and mitigate it within your system.

In Oracle the easiest way to avoid SQL injection is to simply use static SQL with bind variables. Stick to this strategy and you can’t go wrong. However, there may be some scenarios where it isn’t possible to use simple queries with binds and dynamic SQL is required. One scenario that regularly crops up in applications I’ve worked on is a generic search screen: users are presented with a screen containing a number of fields for entering search criteria and the system dynamically generates an SQL statement using only the fields that have been filled in. Once you’re dealing with dynamic SQL you’ve got a potential SQL injection problem. Thankfully Oracle has provided a package to help us: DBMS_ASSERT.
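
For context, the safe baseline looks something like the following minimal sketch (it assumes the SCOTT.DEPT table used later in this post): the PL/SQL variable is bound into the statement, so its value can never alter the statement text.

DECLARE
   l_starts_with VARCHAR2(100) := 'A';
   l_dname       scott.dept.dname%TYPE;
BEGIN
   -- static SQL: l_starts_with is a bind variable in the resulting cursor,
   -- not text concatenated into the statement
   SELECT dname
   INTO   l_dname
   FROM   scott.dept
   WHERE  dname LIKE l_starts_with || '%';

   DBMS_OUTPUT.PUT_LINE(l_dname);
END;
/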

I’ll illustrate the use of DBMS_ASSERT using a simple dynamic SQL routine. The following DO_SQL procedure takes as input parameters a schema name, a table name, a column name and a “starts with” data filter for the column requested. The routine dynamically generates an SQL statement of the form:

SELECT <column_name>
FROM   <schema_name>.<table_name>
WHERE  <column_name> LIKE '<starts_with>%';

and returns a cursor with the results from the generated statement. The routine, wrapped in a simple SQL*Plus test harness, is:

VARIABLE rc REFCURSOR

DECLARE

   l_table_owner VARCHAR2(100);
   l_table_name  VARCHAR2(100);
   l_column_name VARCHAR2(100);
   l_starts_with VARCHAR2(100);

   PROCEDURE do_sql (p_table_owner IN  VARCHAR2
                    ,p_table_name  IN  VARCHAR2
                    ,p_column_name IN  VARCHAR2
                    ,p_starts_with IN  VARCHAR2
                    ,p_rc          OUT SYS_REFCURSOR)
   AS                 
      l_sql_string VARCHAR2(32767);
   BEGIN
      l_sql_string := 'SELECT ' || p_column_name ||
                      ' FROM  ' || p_table_owner ||'.' || p_table_name ||
                      ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

      OPEN p_rc FOR l_sql_string;      
   END do_sql;
   
BEGIN   
   l_table_owner  := 'SCOTT';
   l_table_name   := 'DEPT';                  
   l_column_name  := 'DNAME';                 
   l_starts_with  := 'A';

   do_sql (p_table_owner => l_table_owner
          ,p_table_name  => l_table_name
          ,p_column_name => l_column_name
          ,p_starts_with => l_starts_with
          ,p_rc          => :rc);
END;
/

PRINT rc

Running the routine using the inputs of:

  • schema name: SCOTT
  • table name: DEPT
  • column_name: DNAME
  • starts with: A

produces the following:

DNAME
--------------
ACCOUNTING

Before we get to DBMS_ASSERT let’s now look at how we might exploit this routine using SQL injection. Let’s change the inputs to the following:

  • schema name: SCOTT
  • table name: DEPT
  • column_name: DNAME
  • starts with: A' UNION ALL SELECT username FROM all_users --

which produces the output:

DNAME
----------------------
SCOTT
APEX_APPS
OEHRAPP
--snip --
SYSDG
SYSBACKUP
SYSTEM
AUDSYS
SYS

44 rows selected.

In the above I managed to obtain a listing of the users in the database by “extending” the query to run SELECT USERNAME FROM ALL_USERS. By modifying the input parameters I managed to execute the query:

SELECT DNAME 
FROM   SCOTT.DEPT 
WHERE  DNAME LIKE 'A' 
UNION  ALL 
SELECT username 
FROM   all_users --%'

Note that the double hyphen at the end of the injected value denotes the start of a comment in SQL; it is there to neutralise the %’ that the generated SQL appends to the filter.

Let’s now look at how we can protect ourselves against this using DBMS_ASSERT. We’ll look at the following routines from the package:

  • SCHEMA_NAME
  • SQL_OBJECT_NAME
  • SIMPLE_SQL_NAME
  • QUALIFIED_SQL_NAME
  • ENQUOTE_NAME
  • ENQUOTE_LITERAL

SCHEMA_NAME

This routine can be used to ensure that a value provided for a schema is actually a schema within the database. If the value isn’t a schema name then an exception is thrown. In our example, we would use the routine as follows:

l_sql_string := 'SELECT ' || p_column_name ||
                ' FROM  ' || DBMS_ASSERT.SCHEMA_NAME(p_table_owner) ||'.' || p_table_name ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

Providing a value that is not a schema in the database results in:

ORA-44001: invalid schema

SQL_OBJECT_NAME

Similar to the SCHEMA_NAME routine, this one checks that the name provided is a valid SQL object name that exists in the database. We could use this routine to check that the table we want to query exists, i.e.:

l_sql_string := 'SELECT ' || p_column_name ||
                ' FROM  ' || DBMS_ASSERT.SQL_OBJECT_NAME(p_table_owner||'.' || p_table_name)  ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

This would be a safer test than just a test to check that the schema exists. Providing a value that is not an object in the database results in:

ORA-44002: invalid object name

SIMPLE_SQL_NAME

This routine checks that the value provided satisfies the rules for an object name without actually verifying that such an object exists in the database. In our example we would use it as follows:

l_sql_string := 'SELECT ' || p_column_name ||
                ' FROM  ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_owner) ||'.'
                          || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name) ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

If we wanted to ensure that the column reference is actually a column, as opposed to some sort of calculation, then we could apply SIMPLE_SQL_NAME to the column reference too, i.e.:

l_sql_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) ||
                ' FROM  ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_owner) ||'.'
                          || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name) ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

Providing a parameter that violates the naming rules results in:

ORA-44003: invalid SQL name

QUALIFIED_SQL_NAME

While SIMPLE_SQL_NAME can only be used to validate each component of an object name individually, the QUALIFIED_SQL_NAME routine can be used for a fully qualified object name, inclusive of a database link component if required. In our example we would use it as follows:

l_sql_string := 'SELECT ' || p_column_name ||
                ' FROM  ' || DBMS_ASSERT.QUALIFIED_SQL_NAME(p_table_owner || '.' || p_table_name) ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

Passing in an invalid name results in:

ORA-44004: invalid qualified SQL name

Once again, this routine does not verify that an SQL object exists with such a name, only that the name is valid. In our example, running the routine with a valid name that does not exist results in:

ORA-00942: table or view does not exist

ENQUOTE_NAME

Oracle will accept names using any character, including spaces and punctuation, if the name is enclosed in double quotes. For example “A silly column name!” is a legitimate name. DBMS_ASSERT.ENQUOTE_NAME wraps a name in double quotes to handle this situation. By default it will also make all alphabetic characters upper case unless the second parameter is set to FALSE. Names already enclosed in double quotes are left alone.
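
A quick way to see this behaviour is a small anonymous block (run with SERVEROUTPUT on); the comments show the values I would expect based on the description above:

BEGIN
   -- default: alphabetic characters are upper-cased before quoting -> "DNAME"
   DBMS_OUTPUT.PUT_LINE(DBMS_ASSERT.ENQUOTE_NAME('dname'));
   -- second parameter set to FALSE preserves the case supplied -> "dname"
   DBMS_OUTPUT.PUT_LINE(DBMS_ASSERT.ENQUOTE_NAME('dname', FALSE));
   -- a name already enclosed in double quotes is left as supplied
   DBMS_OUTPUT.PUT_LINE(DBMS_ASSERT.ENQUOTE_NAME('"A silly column name!"'));
END;
/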

In our example, we would use the routine as follows:

l_sql_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(DBMS_ASSERT.ENQUOTE_NAME(p_column_name)) ||
                ' FROM  ' || DBMS_ASSERT.QUALIFIED_SQL_NAME(DBMS_ASSERT.ENQUOTE_NAME(p_table_owner) || '.' 
                          || DBMS_ASSERT.ENQUOTE_NAME(p_table_name)) ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

Once a name is enclosed in double quotes it will pass the checking done within DBMS_ASSERT.SIMPLE_SQL_NAME.
An exception is thrown by DBMS_ASSERT.ENQUOTE_NAME if the string provided contains a double quote in any position other than the first and last character:

ORA-44003: invalid SQL name

ENQUOTE_LITERAL

This routine will enclose a string literal inside single quotes, making it suitable for string literal parameters within SQL. In our example we would use it as:

l_sql_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) ||
                ' FROM  ' || DBMS_ASSERT.QUALIFIED_SQL_NAME(p_table_owner || '.' || p_table_name) ||
                ' WHERE ' || p_column_name || ' LIKE ' || DBMS_ASSERT.ENQUOTE_LITERAL(p_starts_with || '%') ; 

If we were to attempt the SQL injection attack noted at the start of the post then we would get the following exception:

ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "SYS.DBMS_ASSERT", line 409
ORA-06512: at "SYS.DBMS_ASSERT", line 493
ORA-06512: at line 16
ORA-06512: at line 36

which is what DBMS_ASSERT throws if the string contains a single quote that is not doubled up with another single quote.
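
A small anonymous block (run with SERVEROUTPUT on) illustrates the behaviour described above:

BEGIN
   -- a plain value is simply wrapped in single quotes -> 'A%'
   DBMS_OUTPUT.PUT_LINE(DBMS_ASSERT.ENQUOTE_LITERAL('A%'));
   -- the injection value contains a lone single quote, so this call
   -- raises the ORA-06502 exception shown above
   DBMS_OUTPUT.PUT_LINE(DBMS_ASSERT.ENQUOTE_LITERAL('A'' UNION ALL SELECT username FROM all_users --%'));
END;
/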

Summary

SQL injection is not a theoretical security problem, it’s a very real one. In Oracle the best way to mitigate it is to use static SQL statements with bind variables. This may not always be possible though, so you will need to take steps to prevent dynamically generated SQL from being abused. Oracle provides you with DBMS_ASSERT for just this purpose. The routines contained in DBMS_ASSERT go a long way towards protecting the various parts of an SQL statement from being used for injection.
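
It is also worth remembering that, even in dynamic SQL, the data values can usually be bound rather than concatenated. A sketch of the earlier DO_SQL body reworked so the filter value is supplied as a bind, with DBMS_ASSERT reserved for the identifiers, might look like:

l_sql_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) ||
                ' FROM  ' || DBMS_ASSERT.QUALIFIED_SQL_NAME(p_table_owner || '.' || p_table_name) ||
                ' WHERE ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) || ' LIKE :starts_with';

-- the filter value is a bind so it can never change the statement text
OPEN p_rc FOR l_sql_string USING p_starts_with || '%';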

Lastly, please be aware that if you do something as silly as allowing applications to call a routine similar to:

PROCEDURE do_sql (p_sql_string IN VARCHAR2)
AS
BEGIN
    EXECUTE IMMEDIATE p_sql_string;
END do_sql;

then there is very, very little that can be done for your security.

ORA-01466: unable to read data - table definition has changed

Where I currently work a significant proportion of business is done on-line, via the web or mobile app. For the OLTP side of the systems this means we have a strong focus on performance and availability. As an example, we deliberately avoid package state in our database packages so we can deploy new versions of code without having to take the system down. Similarly, many database maintenance activities are performed on-line too… which is why I was concerned when I arrived at work one morning to find an on-line session had encountered the following exception:

ORA-01466: unable to read data - table definition has changed

The operation in question was a simple SELECT against a single interval-partitioned table. Cross-referencing the application logs with the database activity showed that the error occurred at the same time that a partition was dropped from the table. The partition drop was part of a standard job to remove old data. The DDL issued against the table was:

ALTER TABLE partition_test SET INTERVAL (INTERVAL '7' DAY)
ALTER TABLE partition_test DROP PARTITION earliest_partition

The first command moves the transition point of the interval partitioning to the last partition of the table, converting the existing interval partitions into range partitions and thereby avoiding the exception thrown if an attempt is made to drop the last range partition. This then allows old partitions to be dropped without trouble.
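
A quick way to see the effect of the SET INTERVAL command is to check the partition metadata before and after issuing it (a sketch assuming 11.2 or later, where USER_TAB_PARTITIONS exposes an INTERVAL column):

SELECT partition_name
,      high_value
,      "INTERVAL"   -- YES for interval-generated partitions, NO for range partitions
FROM   user_tab_partitions
WHERE  table_name = 'PARTITION_TEST'
ORDER  BY partition_position
/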

Testing using the packaged routines in separate sessions failed to generate ORA-01466. No matter the order in which the SELECT and DDL were executed, no problems were encountered. Reading up on ORA-01466 on-line, one scenario that can give rise to the error is the use of read-only queries, which led me to how the error occurred…

Let’s use a simple interval partitioned table populated with 50 rows to illustrate the problem:

CREATE TABLE partition_test
   (id      NUMBER(5))
PARTITION BY RANGE (id) INTERVAL (10)
(PARTITION partition_test_p1 VALUES LESS THAN (10))
/

INSERT INTO partition_test
SELECT ROWNUM
FROM   dual
CONNECT BY ROWNUM <= 50
/

COMMIT
/

If we execute our partition maintenance commands followed immediately by a simple SELECT then we get a correct result:

SQL> ALTER TABLE partition_test SET INTERVAL (10)
  2  /

Table altered.

SQL> ALTER TABLE partition_test DROP PARTITION partition_test_p1
  2  /

Table altered.

SQL> SELECT *
  2  FROM   partition_test
  3  WHERE  id = 40
  4  /

        ID
----------
        40

However if our SELECT is done within a session that is read only:

SQL> ALTER TABLE partition_test SET INTERVAL (10)
  2  /

Table altered.

SQL> ALTER TABLE partition_test DROP PARTITION partition_test_p1
  2  /

Table altered.

SQL> SET TRANSACTION READ ONLY
  2  /

Transaction set.

SQL> SELECT *
  2  FROM   partition_test
  3  WHERE  id = 40
  4  /
FROM   partition_test
       *
ERROR at line 2:
ORA-01466: unable to read data - table definition has changed

From Oracle Support information, if a read only query is issued immediately against a table that has had DDL performed on it then it will result in ORA-01466.

So, going back to the original system error, the developer of the client application confirmed that the session was put into read-only mode as it had no requirement to modify data. Sensible thinking on the part of the developer, but unfortunately it resulted in the problem. The solution was to simply run the session in standard read/write mode.

AusOUG Conference 2014

I’m on the train home after a very informative two days at the Perth AusOUG conference. It was great to hear some very useful talks by both local and international speakers.

If you ever have the opportunity to attend an AusOUG conference or hear any of these speakers at some other conference then I would highly recommend it! A big thanks to the conference organisers, AusOUG, the sponsors and Oracle.

PL/SQL functions and cursor fetch size

After reviewing some top SQL performance statistics I noticed one query with executions in the 10,000s but fetches in the millions. The query was embedded in a package routine that simply opened a ref cursor, allowing the associated application to pull the results on demand. After working out how many rows the cursor returned it was clear that the application was fetching one row at a time.

The application opening the cursor was written in C# and used ODP.Net to access the database. By default ODP.Net uses a 64k buffer to fetch data, yet for some reason it was under the impression that it could only retrieve one row at a time from the cursor. Back to the cursor…

The cursor contained a number of calculated columns and these calculations were done using PL/SQL functions, i.e. it resembled:

SELECT col1
,      col2
,      plsql_func1(...)
,      plsql_func2(...)
FROM   ...

The following package code uses the EMP and DEPT tables to illustrate the problem with the above strategy. The package contains a routine that opens a cursor containing EMP details. Some of the attributes are straight from the table, some are calculated using internal SQL functions and one uses a PL/SQL function to return the department name from the department id.

CREATE OR REPLACE PACKAGE pkg_emp
AS

TYPE t_emp_rec IS RECORD (empno          emp.empno%TYPE
                         ,formatted_name emp.ename%TYPE
                         ,hiredate       emp.hiredate%TYPE
                         ,dname          dept.dname%TYPE
                         ,mgr_flag       VARCHAR2(1));

TYPE t_emp_cur IS REF CURSOR RETURN t_emp_rec;

-- Function to return the department name from its id
FUNCTION dept_name (p_deptno  IN emp.deptno%TYPE)
RETURN dept.dname%TYPE;
   
-- Routine to return employee details   
PROCEDURE get_emps (p_emp_cur OUT t_emp_cur);

END pkg_emp;
/

CREATE OR REPLACE PACKAGE BODY pkg_emp
AS

-- Function to return the department name from its id
FUNCTION dept_name (p_deptno  IN emp.deptno%TYPE)
RETURN dept.dname%TYPE
AS
   l_dname  dept.dname%TYPE;
BEGIN
   SELECT dname
   INTO   l_dname
   FROM   dept
   WHERE  deptno = p_deptno;
   
   RETURN l_dname;
EXCEPTION 
   WHEN NO_DATA_FOUND THEN
      RETURN NULL;
END dept_name;

-- Routine to return employee details   
PROCEDURE get_emps (p_emp_cur OUT t_emp_cur)
AS
BEGIN
   OPEN p_emp_cur
   FOR
   SELECT e.empno
   ,      INITCAP(e.ename) AS formatted_name -- SQL function calculation
   ,      e.hiredate
   ,      dept_name (e.deptno) AS dname -- PL/SQL function calculation
   ,      CASE WHEN EXISTS (SELECT 'x'
                            FROM   emp e1
                            WHERE  e1.mgr = e.empno)
               THEN 'Y'
               ELSE 'N'
          END AS mgr_flag -- more complex SQL function calculation
   FROM   emp e;
END get_emps;

END pkg_emp;
/

We can get Oracle to tell us how it sees the cursor returned by the GET_EMPS routine by using DBMS_SQL.DESCRIBE_COLUMNS. The following displays the cursor columns and the maximum length each can have.

DECLARE
  l_emp_cur   pkg_emp.t_emp_cur;
  l_cur_no    NUMBER;
  l_col_cnt   INTEGER;
  l_rec_tab   DBMS_SQL.DESC_TAB;
BEGIN

  pkg_emp.get_emps (l_emp_cur);
  l_cur_no := DBMS_SQL.TO_CURSOR_NUMBER (l_emp_cur);
 
  DBMS_SQL.DESCRIBE_COLUMNS(l_cur_no, l_col_cnt, l_rec_tab);

  FOR i IN 1..l_col_cnt
  LOOP
     DBMS_OUTPUT.PUT_LINE('Column ' || l_rec_tab(i).col_name || ' max length of ' || TO_CHAR(l_rec_tab(i).col_max_len));
  END LOOP;
 
  DBMS_SQL.CLOSE_CURSOR(l_cur_no);
END;
/

The output from the above is:

Column EMPNO max length of 22
Column FORMATTED_NAME max length of 10
Column HIREDATE max length of 7
Column DNAME max length of 4000
Column MGR_FLAG max length of 1

Looking at DNAME, which is the PL/SQL function calculated attribute, we see a maximum length of 4,000. This might appear strange given that the function is declared to return a string based on the DEPT.DNAME column, which is defined as VARCHAR2(14). However, Oracle only uses the base data type and ignores the length constraint. This means Oracle will consider the PL/SQL function to be capable of returning a string of the maximum length allowed, which is 4,000 bytes. It’s worth noting that this only affects PL/SQL function calls; calls to Oracle’s internal functions do not suffer from this problem, as evidenced by the FORMATTED_NAME and MGR_FLAG columns.

Going back to the original ODP.Net application, should a cursor return 8 or more PL/SQL function calculated columns then it will exceed half of the 64k buffer and the application will be restricted to fetching one row at a time.

How do we overcome this problem? The answer is to wrap the PL/SQL function calls in a CAST to tell Oracle the actual length it can expect from the functions, i.e.:

   SELECT e.empno
   ,      INITCAP(e.ename) AS formatted_name -- SQL function calculation
   ,      e.hiredate
   ,      CAST(dept_name (e.deptno) AS VARCHAR2(14)) AS dname -- PL/SQL function calculation
   ,      CASE WHEN EXISTS (SELECT 'x'
                            FROM   emp e1
                            WHERE  e1.mgr = e.empno)
               THEN 'Y'
               ELSE 'N'
          END AS mgr_flag -- more complex SQL function calculation
   FROM   emp e;

Running our DBMS_SQL.DESCRIBE_COLUMNS routine against the query with the CAST calls gives us the result we want:

Column EMPNO max length of 22
Column FORMATTED_NAME max length of 10
Column HIREDATE max length of 7
Column DNAME max length of 14
Column MGR_FLAG max length of 1

After adding in the CAST calls the number of fetches required by the C# application dropped to a much more reasonable 2, instead of the 600+ it had been doing previously.

Point in time extracts

Extracts from a database are a common activity for any system. In our information driven world we are constantly extracting data for reporting systems, data warehouses, extracts for other systems or simple reports. Naturally, getting data out of an Oracle database is as simple as writing one or more SQL queries. Things can get a bit tricky when we need to consider the consistency of the data we’re extracting.

As an example of data extract inconsistency, one place I worked at had a requirement to extract data into CSV files for a third party on a daily basis. Primarily the data consisted of account details and transactions on those accounts. The third party started to complain that the account balances did not match up with the transactions that were reported. The cause for the difference was easily diagnosed. The overall process simply ran one query after another dumping the results to a CSV file. The transaction extract occurred after the account details one. By the time the account details extract had finished new transactions had made their way into the system and were picked up by the transaction extract. These new transactions were not reflected in the account balances of the account extract. What the extract process required was to do all the extracts as at the same point in time and Oracle provides this functionality via flashback query.

Flashback query has been part of Oracle since version 9. In its simplest form it’s a matter of specifying the point in time at which you want to query a table, e.g.:

SELECT *
FROM   transactions AS OF TIMESTAMP CAST(TRUNC(SYSDATE) AS TIMESTAMP)

which will display the details of the TRANSACTIONS table as at the start of the day. Activity on the table since the start of the day will be rolled back for the purpose of the query. Using the above approach it’s possible to construct a set of queries that are all consistent at the same point in time, which is ideal for extracts that need to be data consistent with each other.
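
If you prefer pinning the extracts to an exact point rather than a timestamp, another option is to capture a single SCN up front and use it in every query. A minimal sketch (it assumes execute privilege on DBMS_FLASHBACK and uses the EMP and DEPT tables as stand-ins for the real extract tables):

DECLARE
   l_scn   NUMBER;
   l_count NUMBER;
BEGIN
   -- capture one SCN and use it for every extract query so they are all
   -- consistent as at exactly the same moment
   l_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;

   SELECT COUNT(*) INTO l_count FROM emp AS OF SCN l_scn;
   DBMS_OUTPUT.PUT_LINE('EMP rows:  ' || l_count);

   SELECT COUNT(*) INTO l_count FROM dept AS OF SCN l_scn;
   DBMS_OUTPUT.PUT_LINE('DEPT rows: ' || l_count);
END;
/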

Using PKG_CSV, creating a data consistent set of extracts is as simple as:

DECLARE
   l_directory         VARCHAR2(30) := 'EXTRACT_DIR';
   l_extract_timestamp TIMESTAMP     := CAST(TRUNC(SYSDATE) AS TIMESTAMP);
   l_emp_cur           SYS_REFCURSOR;
   l_dept_cur          SYS_REFCURSOR;
BEGIN
   OPEN l_emp_cur
   FOR
   SELECT * FROM emp AS OF TIMESTAMP l_extract_timestamp;
   
   OPEN l_dept_cur
   FOR
   SELECT * FROM dept AS OF TIMESTAMP l_extract_timestamp;
   
   pkg_csv.to_file (p_cursor      => l_emp_cur
                   ,p_directory   => l_directory
                   ,p_filename    => 'emp_' || TO_CHAR(l_extract_timestamp,'yyyymmddhh24miss') || '.csv');
   pkg_csv.to_file (p_cursor      => l_dept_cur
                   ,p_directory   => l_directory
                   ,p_filename    => 'dept_' || TO_CHAR(l_extract_timestamp,'yyyymmddhh24miss') || '.csv');
END;

So how far back can you go with this type of query? Oracle uses the undo segments (or rollback segments) to work out what the data looked like in the past. The obvious question is then “how much undo does Oracle retain?”. Unfortunately the answer to this isn’t clear cut. Generally speaking, Oracle will retain undo for at least the duration specified by the initialisation parameter UNDO_RETENTION, which is specified in seconds. This parameter defaults to 900, or 15 minutes, although many systems will have it set to a larger value. The Oracle documentation contains this warning on the way Oracle manages the available undo:

The UNDO_RETENTION parameter can only be honored if the current undo tablespace has enough space. If an active transaction requires undo space and the undo tablespace does not have available space, then the system starts reusing unexpired undo space. This action can potentially cause some queries to fail with a “snapshot too old” message
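
To see what your database is configured for, a quick check (assuming you have the privileges to query V$PARAMETER; the value is in seconds) is:

SELECT value
FROM   v$parameter
WHERE  name = 'undo_retention'
/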

You can read more about flashback query in the Oracle documentation.

Strongly typed cursor variables

Arguably the easiest way to return a set of data back to a client application from PL/SQL is via a cursor variable. Oracle simply opens a cursor to a query and the application pulls data from the cursor on demand. Conveniently Oracle provides a generic SYS_REFCURSOR data type that can be used for any cursor variable, as shown below to retrieve employee details:

CREATE OR REPLACE PACKAGE pkg_emp
AS
   PROCEDURE get_emps (p_emp_cur OUT SYS_REFCURSOR);
END pkg_emp;
/
   
CREATE OR REPLACE PACKAGE BODY pkg_emp
AS

PROCEDURE get_emps (p_emp_cur OUT SYS_REFCURSOR)
AS
BEGIN
   OPEN p_emp_cur
   FOR
   SELECT id
   ,      first_name
   ,      last_name
   ,      dept_id
   FROM   emp;
END get_emps;

END pkg_emp;
/

While the above is very quick to code it suffers from a significant flaw: an ill-defined contract between the provider of the data (the database via the PL/SQL routine) and the consumer of the data (whatever application is calling the routine). The key problem here is that the SYS_REFCURSOR type can be used with any query. In this regard it’s like having an “any data” type in an application programming language.

If I were to modify the query inside the get_emps routine to return the department name then I could simply compile a new version as:

CREATE OR REPLACE PACKAGE BODY pkg_emp
AS

   PROCEDURE get_emps (p_emp_cur OUT SYS_REFCURSOR)
   AS
   BEGIN
      OPEN p_emp_cur
      FOR
      SELECT e.id
      ,      e.first_name
      ,      e.last_name
      ,      e.dept_id
      ,      d.dept_name
      FROM   emp e
      ,      dept d
      WHERE  d.id = e.dept_id;
   END get_emps;

END pkg_emp;
/

All done, right? Well, although this code will compile and run correctly I have probably inadvertently broken any application that was using it. The code does nothing to enforce the structure of the cursor that I am returning so any query is deemed acceptable. From Oracle’s perspective this is fine but it won’t be okay for the consumers of the cursor, which will be expecting a cursor of a particular structure.

A better way to code the above routine is to use a strongly typed cursor, which declares in advance the attributes and data types that the cursor will contain. If the cursor is opened using a query that does not match the definition then a compile time error is thrown. Sure, the definition of the strongly typed cursor could be altered to match the new query, but the error acts as a warning to the database developer that they might be about to break the application.

So, how do we create a strongly typed cursor variable? My preferred approach is to use a record type defined in the package specification as shown below:

CREATE OR REPLACE PACKAGE pkg_emp
AS
   TYPE t_emp_rec IS RECORD (id         emp.id%TYPE
                            ,first_name emp.first_name%TYPE
                            ,last_name  emp.last_name%TYPE
                            ,dept_id    emp.dept_id%TYPE);
   TYPE t_emp_cur IS REF CURSOR RETURN t_emp_rec;                          

   PROCEDURE get_emps (p_emp_cur OUT t_emp_cur);
END pkg_emp;
/
   
CREATE OR REPLACE PACKAGE BODY pkg_emp
AS

PROCEDURE get_emps (p_emp_cur OUT t_emp_cur)
AS
BEGIN
   OPEN p_emp_cur
   FOR
   SELECT id
   ,      first_name
   ,      last_name
   ,      dept_id
   FROM   emp;
END get_emps;

END pkg_emp;
/

The record type is useful for “self documenting” the code as it shows what data will be returned by the cursor without having to go into the package body and read the query. It’s a little bit of extra typing but now when I try to modify the package body to add DEPT_NAME to the query I get the following compilation error:

CREATE OR REPLACE PACKAGE BODY pkg_emp
AS

   PROCEDURE get_emps (p_emp_cur OUT t_emp_cur)
   AS
   BEGIN
      OPEN p_emp_cur
      FOR
      SELECT e.id
      ,      e.first_name
      ,      e.last_name
      ,      e.dept_id
      ,      d.dept_name
      FROM   emp e
      ,      dept d
      WHERE  d.id = e.dept_id;
   END get_emps;

END pkg_emp;
/

Warning: Package Body created with compilation errors.

SQL> show errors
Errors for PACKAGE BODY PKG_EMP:

LINE/COL ERROR
-------- ---------------------------------------------
7/7      PL/SQL: SQL Statement ignored
9/7      PLS-00382: expression is of wrong type

The “expression is of wrong type” error is telling me that my query no longer matches the record declaration in the package specification. Time for me to revise my code… or change the record type and go searching for the application code that calls this routine to ensure it will continue to work with the new cursor. In a large, complex multi-tier system, additional checks and balances like strongly typed cursors are invaluable for preventing inadvertent mistakes.

Proxy Connections

Managing development environments is a fickle business. On one hand you want to be able to grant developers whatever privileges they require to make the development process easy. On the other hand you don’t want the environment to become so disorganised that it’s hard to figure out where the hacking ends and the real application begins.

Database applications are typically structured with database objects in one or more accounts (users). Sharing around the password for the application account can lead to trouble, especially in non-development environments where there are supposed to be deployment controls in place. Once an account password has been given out it can be assumed that it is generally known by all developers. Proxy connections allow access to be given without having to compromise the application account password.

The ALTER USER command is used to configure the proxy connection details. Suppose your application account is named APP_OWNER and you want to connect to it through a developer’s account named MARK. The following is the DDL required:

ALTER USER app_owner GRANT CONNECT THROUGH mark
/

So, now the user MARK is able to connect to the APP_OWNER account as if they had the credentials for that account. Access can be granted for a particular task and then easily revoked later once the task is complete.
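
Removing the access when it is no longer needed is simply the corresponding revoke:

ALTER USER app_owner REVOKE CONNECT THROUGH mark
/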

So, once set up how do you go about logging into the database using the proxy connection details? The following describes how to do it using the SQL*Plus command line, Oracle’s SQL Developer and Dell’s (formerly Quest) TOAD.

SQL*Plus
Proxy account connections using SQL*Plus involve specifying the details of the account you intend to connect to inside square brackets as part of the connection details, e.g.:

CONNECT mark[app_owner]

This can all be done direct from the OS command prompt when invoking SQL*Plus:

> sqlplus mark[app_owner]@<<database>>

You’ll be prompted for a password. Enter the password for the MARK account and you’ll be connected to the APP_OWNER account.
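
Once connected you can confirm which account you have ended up in, and which account you came through, with a quick SYS_CONTEXT query (a sketch; the column aliases are mine):

SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') AS connected_account  -- APP_OWNER
,      SYS_CONTEXT('USERENV', 'PROXY_USER')   AS proxy_account      -- MARK
FROM   dual
/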

Oracle SQL Developer
Using Oracle’s SQL Developer, in the connection window there is a check-box near the bottom labelled “Proxy Connection”.

SQL Developer connection window

Checking this will cause a new window to open, into which you enter the proxy connection details. Using the basic set up described earlier, all that would be required in this second window is the proxy account name in the Proxy Client field.

SQL Developer proxy connection window

TOAD
Unlike SQL Developer, TOAD doesn’t have any special settings on its connection window for proxy connections. Instead the details are entered directly into the user/schema field, using the square bracket notation that SQL*Plus uses, i.e. username[proxy user] as shown below.

TOAD connection window