String constants and package state

I have previously discussed package state and avoiding the “ORA-04068: existing state of packages has been discarded” exception. Connor McDonald has pointed out that, as of Oracle 11.2, constants do not create package state unless they involve a calculation. This post expands on that by noting that string constants always create package state (thanks to my work colleague Vlad for pointing this out, after finding out the hard way…).

First up, let’s take a look at how numeric constants don’t create package state. We start with the simplest of packages compiled in session 1:

CREATE OR REPLACE PACKAGE pkg_state
AS
   c_number   CONSTANT NUMBER(5) := 123;
END pkg_state;

and in another session, session 2, we run a short program that makes reference to the package constant:

SET SERVEROUTPUT ON
BEGIN
   dbms_output.put_line ('Package value is: ' || TO_CHAR(pkg_state.c_number));
END;

Not surprisingly we get the output:

Package value is: 123

If we go back to session 1 and recompile the package changing the constant to a different value:

CREATE OR REPLACE PACKAGE pkg_state
AS
   c_number   CONSTANT NUMBER(5) := 789;
END pkg_state;

and rerun our simple output script back in session 2 we don’t get ORA-04068 but instead our session displays the new value:

Package value is: 789

Now let’s repeat the entire exercise but with a string constant. In session 1:

CREATE OR REPLACE PACKAGE pkg_state
AS
   c_string   CONSTANT VARCHAR2(5) := 'ABC';
END pkg_state;

Running this in session 2:

SET SERVEROUTPUT ON
BEGIN
   dbms_output.put_line ('Package value is: ' || pkg_state.c_string);
END;

we get:

Package value is: ABC

Compiling the package in session 1 to:

CREATE OR REPLACE PACKAGE pkg_state
AS
   c_string   CONSTANT VARCHAR2(5) := 'XYZ';
END pkg_state;

gives us the following when we rerun our display routine in session 2:

BEGIN
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "DEVELOPER.PKG_STATE" has been invalidated
ORA-04065: not executed, altered or dropped package "DEVELOPER.PKG_STATE"
ORA-06508: PL/SQL: could not find program unit being called: "DEVELOPER.PKG_STATE"
ORA-06512: at line 2

This occurs with both Oracle 11.2 and 12.1. So, while these versions have taken steps to stop constants from creating package state, Connor’s post and this one show that the optimisation doesn’t apply to all constants. Hence some care needs to be taken when planning your deployments: you wouldn’t want to think you have no package state when in reality you do. Right Vlad?
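
As an aside, and not something from the exercise above: if being able to change the value without invalidating sessions matters more than having a true constant, the literal can be pushed into the package body behind a getter function. Assuming the package holds no other state, changing the value then only requires a body recompile and sessions referencing the package are not hit with ORA-04068. A rough sketch of the idea:

CREATE OR REPLACE PACKAGE pkg_state
AS
   -- no constants or variables in the specification, so no package state
   FUNCTION c_string RETURN VARCHAR2;
END pkg_state;
/

CREATE OR REPLACE PACKAGE BODY pkg_state
AS
   FUNCTION c_string RETURN VARCHAR2
   AS
   BEGIN
      RETURN 'ABC'; -- change the value here and recompile only the body
   END c_string;
END pkg_state;
/

The obvious trade-off is that every reference to pkg_state.c_string is now a function call rather than a constant.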

ORA-01001: invalid cursor with REF CURSOR output parameter

A short post to note an Oracle database bug I encountered recently. Whilst testing some code the exception:

ORA-01001: invalid cursor

was produced. The error originated from a pipelined function. Tracking things down, it seems that Oracle will throw this exception when all of the following occur:

  • A PL/SQL routine passes out a ref cursor variable
  • The ref cursor passed out uses a pipelined function
  • The pipelined function itself has a ref cursor as an input parameter

Having replicated the problem on Linux, AIX and Windows, I believe it is not platform specific. It occurs in database versions 11.2 and 12.1. Earlier versions have not been tested and so may be affected too.

The bug appears to be similar to Bug 2968664 : ORA-1001 IN FUNCTION CALLED FROM SQL WHICH FETCHES FROM A REFCURSOR PARAMETER but that bug is noted to be fixed in version 10.1.0.1.

I created a simple test package, imaginatively named pipeline_test, for Oracle Support to replicate the problem. The package contains 3 callable routines:

  • run_test1 – a routine with an OUT ref cursor parameter that matches the criteria noted earlier. Calling this routine throws ORA-01001.
  • run_test2 – a routine that opens a ref cursor similar to run_test1 but consumes that cursor itself rather than passing it out to a caller. This routine does not generate ORA-01001.
  • run_test3 – a routine that calls run_test1 to obtain a ref cursor and then consumes it, similar to run_test2. This routine does generate ORA-01001.

The package and test script are available for download. The setup.sql file creates the package and collection object required by the pipelined function while the run.sql script is a basic SQL*Plus script to call the test routines.
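
For illustration, a package of roughly this shape matches the conditions listed above. To be clear, this is a simplified approximation and not the actual pipeline_test package; the type and routine names below are placeholders:

CREATE OR REPLACE TYPE t_num_tab AS TABLE OF NUMBER;
/

CREATE OR REPLACE PACKAGE pipeline_demo
AS
   -- pipelined function that takes a ref cursor as an input parameter
   FUNCTION pipe_rows (p_cur IN SYS_REFCURSOR)
   RETURN t_num_tab PIPELINED;

   -- passes out a ref cursor whose query uses the pipelined function
   PROCEDURE run_test1 (p_rc OUT SYS_REFCURSOR);
END pipeline_demo;
/

CREATE OR REPLACE PACKAGE BODY pipeline_demo
AS
   FUNCTION pipe_rows (p_cur IN SYS_REFCURSOR)
   RETURN t_num_tab PIPELINED
   AS
      l_val NUMBER;
   BEGIN
      LOOP
         FETCH p_cur INTO l_val;
         EXIT WHEN p_cur%NOTFOUND;
         PIPE ROW (l_val);
      END LOOP;
      CLOSE p_cur;
      RETURN;
   END pipe_rows;

   PROCEDURE run_test1 (p_rc OUT SYS_REFCURSOR)
   AS
      l_src SYS_REFCURSOR;
   BEGIN
      OPEN l_src FOR
         SELECT level FROM dual CONNECT BY level <= 5;

      -- the ref cursor passed out uses the pipelined function, which itself
      -- consumes a ref cursor: the combination described above
      OPEN p_rc FOR
         SELECT column_value
         FROM   TABLE(pipe_rows(l_src));
   END run_test1;
END pipeline_demo;
/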

For my situation I was able to work around the problem by substituting a collection for the ref cursor that the pipelined function used. I also had some luck with adding the MATERIALIZE hint to the query that used the pipelined function.

I’ll update this post with the details that I get back from Oracle Support. At the moment they have acknowledged that there is a problem and they are investigating.

Avoid SQL injection with DBMS_ASSERT

A simple internet search will reveal some serious computer system hacks that were, fundamentally, a result of SQL injection. If you plan on doing any significant amount of database development you need to be aware of what SQL injection is and how to identify and mitigate it within your system.

In Oracle the easiest way to avoid SQL injection is to simply use static SQL with bind variables. Stick to this strategy and you can’t go wrong. However, there may be some scenarios where it isn’t possible to use simple queries with binds and dynamic SQL is required. One scenario that regularly crops up in applications I’ve worked on is a generic search screen: users are presented with a screen containing a number of fields for entering search criteria and the system dynamically generates an SQL statement using only the fields that have been filled in. Once you’re dealing with dynamic SQL you’ve got a potential SQL injection problem. Thankfully Oracle has provided a package to help us: DBMS_ASSERT.
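
Before getting into DBMS_ASSERT, here is roughly what the “static SQL with bind variables” approach looks like for the kind of query the rest of this post deals with. This is a minimal sketch only, reusing the SCOTT.DEPT table that features in the examples below:

DECLARE
   TYPE t_name_tab IS TABLE OF scott.dept.dname%TYPE;
   l_starts_with scott.dept.dname%TYPE := 'A';
   l_names       t_name_tab;
BEGIN
   -- the filter value is a bind variable so it can never alter the statement text
   SELECT dname
   BULK COLLECT INTO l_names
   FROM   scott.dept
   WHERE  dname LIKE l_starts_with || '%';

   FOR i IN 1 .. l_names.COUNT
   LOOP
      dbms_output.put_line (l_names(i));
   END LOOP;
END;
/

The schema, table and column are fixed at compile time here, which is exactly what a generic search screen cannot do, and that is where DBMS_ASSERT comes in.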

I’ll illustrate the use of DBMS_ASSERT using a simple dynamic SQL routine. The following DO_SQL procedure takes as input parameters a schema name, a table name, a column name and a “starts with” data filter for the column requested. The routine dynamically generates an SQL statement of the form:

SELECT <column_name>
FROM   <schema_name>.<table_name>
WHERE  <column_name> LIKE '<starts_with>%';

and returns a cursor with the results from the generated statement. The routine, wrapped into a simple SQL*Plus test harness, is:

VARIABLE rc REFCURSOR

DECLARE

   l_table_owner VARCHAR2(100);
   l_table_name  VARCHAR2(100);
   l_column_name VARCHAR2(100);
   l_starts_with VARCHAR2(100);

   PROCEDURE do_sql (p_table_owner IN  VARCHAR2
                    ,p_table_name  IN  VARCHAR2
                    ,p_column_name IN  VARCHAR2
                    ,p_starts_with IN  VARCHAR2
                    ,p_rc          OUT SYS_REFCURSOR)
   AS                 
      l_sql_string VARCHAR2(32767);
   BEGIN
      l_sql_string := 'SELECT ' || p_column_name ||
                      ' FROM  ' || p_table_owner ||'.' || p_table_name ||
                      ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

      OPEN p_rc FOR l_sql_string;      
   END do_sql;
   
BEGIN   
   l_table_owner  := 'SCOTT';
   l_table_name   := 'DEPT';                  
   l_column_name  := 'DNAME';                 
   l_starts_with  := 'A';

   do_sql (p_table_owner => l_table_owner
          ,p_table_name  => l_table_name
          ,p_column_name => l_column_name
          ,p_starts_with => l_starts_with
          ,p_rc          => :rc);
END;
/

PRINT rc

Running the routine using the inputs of:

  • schema name: SCOTT
  • table name: DEPT
  • column_name: DNAME
  • starts with: A

produces the following:

DNAME
--------------
ACCOUNTING

Before we get to DBMS_ASSERT let’s now look at how we might exploit this routine using SQL injection. Let’s change the inputs to the following:

  • schema name: SCOTT
  • table name: DEPT
  • column_name: DNAME
  • starts with: A' UNION ALL SELECT username FROM all_users --

which produces the output:

DNAME
----------------------
SCOTT
APEX_APPS
OEHRAPP
--snip --
SYSDG
SYSBACKUP
SYSTEM
AUDSYS
SYS

44 rows selected.

In the above I managed to obtain a listing of the users in the database by “extending” the query to run SELECT USERNAME FROM ALL_USERS. Simply by modifying the input parameters I was able to execute the query:

SELECT DNAME 
FROM   SCOTT.DEPT 
WHERE  DNAME LIKE 'A' 
UNION  ALL 
SELECT username 
FROM   all_users --%'

Note the double hyphen at the end, which denotes the start of a comment in SQL: it is there to neutralise the %' that the generated SQL appends.

Let’s now look at how we can protect ourselves against this using DBMS_ASSERT. We’ll look at the following routines from the package:

  • SCHEMA_NAME
  • SQL_OBJECT_NAME
  • SIMPLE_SQL_NAME
  • QUALIFIED_SQL_NAME
  • ENQUOTE_NAME
  • ENQUOTE_LITERAL

SCHEMA_NAME

This routine can be used to ensure that a value provided for a schema is actually a schema within the database. If the value isn’t a schema name then an exception is thrown. In our example, we would use the routine as follows:

l_sql_string := 'SELECT ' || p_column_name ||
                ' FROM  ' || DBMS_ASSERT.SCHEMA_NAME(p_table_owner) ||'.' || p_table_name ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

Providing a value that is not a schema in the database results in:

ORA-44001: invalid schema

SQL_OBJECT_NAME

Similar to the SCHEMA_NAME routine, this one checks that the name provided is a valid SQL object name that exists in the database. We could use this routine to check that the table we want to query exists, i.e.:

l_sql_string := 'SELECT ' || p_column_name ||
                ' FROM  ' || DBMS_ASSERT.SQL_OBJECT_NAME(p_table_owner||'.' || p_table_name)  ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

This is a stronger check than simply verifying that the schema exists. Providing a value that is not an object in the database results in:

ORA-44002: invalid object name

SIMPLE_SQL_NAME

This routine checks that the value provided satisfies the rules for an object name without actually verifying that such an object exists in the database. In our example we would use it as follows:

l_sql_string := 'SELECT ' || p_column_name ||
                ' FROM  ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_owner) ||'.'
                          || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name) ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

If we wanted to ensure that the column reference is a simple name, as opposed to some sort of calculation or expression, then we could apply SIMPLE_SQL_NAME to the column reference too, i.e.:

l_sql_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) ||
                ' FROM  ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_owner) ||'.'
                          || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name) ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

Providing a parameter that violates the naming rules results in:

ORA-44003: invalid SQL name

QUALIFIED_SQL_NAME

While SIMPLE_SQL_NAME can only be used to validate each component of an object name individually, the QUALIFIED_SQL_NAME routine can be applied to a fully qualified object name, inclusive of a database link component if required. In our example we would use it as follows:

l_sql_string := 'SELECT ' || p_column_name ||
                ' FROM  ' || DBMS_ASSERT.QUALIFIED_SQL_NAME(p_table_owner || '.' || p_table_name) ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

Passing in an invalid name results in:

ORA-44004: invalid qualified SQL name

Once again, this routine does not verify that an SQL object with the given name exists, only that the name itself is valid. In our example, running the routine with a valid name that does not exist results in:

ORA-00942: table or view does not exist

ENQUOTE_NAME

Oracle will accept names using any character, including spaces and punctuation, if the name is enclosed in double quotes. For example “A silly column name!” is a legitimate name. DBMS_ASSERT.ENQUOTE_NAME wraps a name in double quotes to handle this situation. By default it will also make all alphabetic characters upper case unless the second parameter is set to FALSE. Names already enclosed in double quotes are left alone.

In our example, we would use the routine as follows:

l_sql_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(DBMS_ASSERT.ENQUOTE_NAME(p_column_name)) ||
                ' FROM  ' || DBMS_ASSERT.QUALIFIED_SQL_NAME(DBMS_ASSERT.ENQUOTE_NAME(p_table_owner) || '.' 
                          || DBMS_ASSERT.ENQUOTE_NAME(p_table_name)) ||
                ' WHERE ' || p_column_name || ' LIKE ''' || p_starts_with || '%'''; 

Once a name is enclosed in double quotes it will pass the checking done within DBMS_ASSERT.SIMPLE_SQL_NAME.
An exception is thrown by DBMS_ASSERT.ENQUOTE_NAME if the string provided contains a double quote in any position other than the first and last character:

ORA-44003: invalid SQL name

ENQUOTE_LITERAL

This routine will enclose a string literal inside single quotes, making it suitable for string literal parameters within SQL. In our example we would use it as:

l_sql_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) ||
                ' FROM  ' || DBMS_ASSERT.QUALIFIED_SQL_NAME(p_table_owner || '.' || p_table_name) ||
                ' WHERE ' || p_column_name || ' LIKE ' || DBMS_ASSERT.ENQUOTE_LITERAL(p_starts_with || '%') ; 

If we were to attempt the SQL injection attack noted at the start of the post then we would get the following exception:

ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "SYS.DBMS_ASSERT", line 409
ORA-06512: at "SYS.DBMS_ASSERT", line 493
ORA-06512: at line 16
ORA-06512: at line 36

which is what DBMS_ASSERT throws if the string contains a single quote that is not doubled up with another single quote.
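
As an aside, and not strictly part of DBMS_ASSERT: for the data value itself an alternative to enquoting the literal is to leave it as a bind variable in the dynamic statement. The object names still need the DBMS_ASSERT treatment, but the filter value then never becomes part of the statement text at all. A sketch of the DO_SQL routine along those lines:

l_sql_string := 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) ||
                ' FROM  ' || DBMS_ASSERT.QUALIFIED_SQL_NAME(p_table_owner || '.' || p_table_name) ||
                ' WHERE ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) || ' LIKE :starts_with';

OPEN p_rc FOR l_sql_string USING p_starts_with || '%';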

Summary

SQL injection is not a theoretical security problem, it’s a very real one. In Oracle the best way to mitigate it is to use static SQL statements with bind variables. This may not always be possible though, so you will need to take steps to prevent dynamically generated SQL from being abused. Oracle provides you with DBMS_ASSERT for just this purpose. The routines contained in DBMS_ASSERT go a long way towards protecting the various parts of an SQL statement from being used for injection.

Lastly, please be aware that if you do something as silly as allowing applications to call a routine similar to:

PROCEDURE do_sql (p_sql_string IN VARCHAR2)
AS
BEGIN
    EXECUTE IMMEDIATE p_sql_string;
END do_sql;

then there is very, very little that can be done for your security.

ORA-01466: unable to read data – table definition has changed

Where I currently work a significant proportion of business is done on-line, via the web or mobile app. For the OLTP side of the systems this means we have a strong focus on performance and availability. As an example, we deliberately avoid package state in our database packages so we can deploy new versions of code without having to take the system down. Similarly, many database maintenance activities are performed on-line too… which is why I was concerned when I arrived at work one morning to find an on-line session had encountered the following exception:

ORA-01466: unable to read data - table definition has changed

The operation in question was a simple SELECT against a single interval partitioned table. Cross-referencing the application logs with the database activity showed that the error occurred at the same time that a partition was dropped from the table. The partition drop was part of a standard job to remove old data. The DDL issued against the table was:

ALTER TABLE partition_test SET INTERVAL (INTERVAL '7' DAY)
ALTER TABLE partition_test DROP PARTITION earliest_partition

The first command moves the transition point to the highest existing partition, converting the interval partitions into range partitions; this avoids the exception that is thrown if an attempt is made to drop the last partition in the range section. Old partitions can then be dropped without trouble.

Testing the packaged routines in separate sessions failed to generate ORA-01466: no matter the order in which the SELECT and the DDL were executed, no problems were encountered. Reading up on ORA-01466, one scenario that can give rise to the error is the use of read-only transactions, which led me to how the error occurred…

Let’s use a simple interval partitioned table populated with 50 rows to illustrate the problem:

CREATE TABLE partition_test
   (id      NUMBER(5))
PARTITION BY RANGE (id) INTERVAL (10)
(PARTITION partition_test_p1 VALUES LESS THAN (10))
/

INSERT INTO partition_test
SELECT ROWNUM
FROM   dual
CONNECT BY ROWNUM <= 50
/

COMMIT
/

If we execute our partition maintenance commands followed immediately by a simple SELECT then we get a correct result:

SQL> ALTER TABLE partition_test SET INTERVAL (10)
  2  /

Table altered.

SQL> ALTER TABLE partition_test DROP PARTITION partition_test_p1
  2  /

Table altered.

SQL> SELECT *
  2  FROM   partition_test
  3  WHERE  id = 40
  4  /

        ID
----------
        40

However if our SELECT is done within a session that is read only:

SQL> ALTER TABLE partition_test SET INTERVAL (10)
  2  /

Table altered.

SQL> ALTER TABLE partition_test DROP PARTITION partition_test_p1
  2  /

Table altered.

SQL> SET TRANSACTION READ ONLY
  2  /

Transaction set.

SQL> SELECT *
  2  FROM   partition_test
  3  WHERE  id = 40
  4  /
FROM   partition_test
       *
ERROR at line 2:
ORA-01466: unable to read data - table definition has changed

From Oracle Support information, if a read-only query is issued immediately against a table that has just had DDL performed on it then it will result in ORA-01466.

So, going back to the original system error, the developer of the client application confirmed that the session was put into read-only mode as it had no requirement to modify data. Sensible thinking on the part of the developer, but unfortunately it resulted in the problem. The solution was simply to run the session in standard read/write mode.

AusOUG Conference 2014

I’m on the train home after a very informative two days at the Perth AusOUG conference. It was great to hear some very useful talks by both local and international speakers.

If you ever have the opportunity to attend an AusOUG conference or hear any of these speakers at some other conference then I would highly recommend it! A big thanks to the conference organisers, AusOUG, the sponsors and Oracle.

PL/SQL functions and cursor fetch size

After reviewing some top SQL performance statistics I noticed one query with executions in the 10,000s but fetches in the millions. The query was embedded in a package routine that simply opened a ref cursor, allowing the associated application to pull the results on demand. After working out how many rows the cursor returned it was clear that the application was fetching one row at a time.

The application opening the cursor was written in C# and used ODP.Net to access the database. By default ODP.Net uses a 64k buffer when fetching data, yet for some reason it was under the impression that only one row at a time would fit into that buffer. Back to the cursor…

The cursor contained a number of calculated columns and these calculations were done using PL/SQL functions, i.e. it resembled:

SELECT col1
,      col2
,      plsql_func1(...)
,      plsql_func2(...)
FROM   ...

The following package code uses the EMP and DEPT tables to illustrate the problem with the above strategy. The package contains a routine that opens a cursor containing EMP details. Some of the attributes are straight from the table, some are calculated using internal SQL functions and one uses a PL/SQL function to return the department name from the department id.

CREATE OR REPLACE PACKAGE pkg_emp
AS

TYPE t_emp_rec IS RECORD (empno          emp.empno%TYPE
                         ,formatted_name emp.ename%TYPE
                         ,hiredate       emp.hiredate%TYPE
                         ,dname          dept.dname%TYPE
                         ,mgr_flag       VARCHAR2(1));

TYPE t_emp_cur IS REF CURSOR RETURN t_emp_rec;

-- Function to return the department name from its id
FUNCTION dept_name (p_deptno  IN emp.deptno%TYPE)
RETURN dept.dname%TYPE;
   
-- Routine to return employee details   
PROCEDURE get_emps (p_emp_cur OUT t_emp_cur);

END pkg_emp;
/

CREATE OR REPLACE PACKAGE BODY pkg_emp
AS

-- Function to return the department name from its id
FUNCTION dept_name (p_deptno  IN emp.deptno%TYPE)
RETURN dept.dname%TYPE
AS
   l_dname  dept.dname%TYPE;
BEGIN
   SELECT dname
   INTO   l_dname
   FROM   dept
   WHERE  deptno = p_deptno;
   
   RETURN l_dname;
EXCEPTION 
   WHEN NO_DATA_FOUND THEN
      RETURN NULL;
END dept_name;

-- Routine to return employee details   
PROCEDURE get_emps (p_emp_cur OUT t_emp_cur)
AS
BEGIN
   OPEN p_emp_cur
   FOR
   SELECT e.empno
   ,      INITCAP(e.ename) AS formatted_name -- SQL function calculation
   ,      e.hiredate
   ,      dept_name (e.deptno) AS dname -- PL/SQL function calculation
   ,      CASE WHEN EXISTS (SELECT 'x'
                            FROM   emp e1
                            WHERE  e1.mgr = e.empno)
               THEN 'Y'
               ELSE 'N'
          END AS mgr_flag -- more complex SQL function calculation
   FROM   emp e;
END get_emps;

END pkg_emp;
/

We can get Oracle to tell us how it sees the cursor returned by the GET_EMPS routine by using DBMS_SQL.DESCRIBE_COLUMNS. The following displays the cursor columns and the maximum length each can have.

DECLARE
  l_emp_cur   pkg_emp.t_emp_cur;
  l_cur_no    NUMBER;
  l_col_cnt   INTEGER;
  l_rec_tab   DBMS_SQL.DESC_TAB;
BEGIN

  pkg_emp.get_emps (l_emp_cur);
  l_cur_no := DBMS_SQL.TO_CURSOR_NUMBER (l_emp_cur);
 
  DBMS_SQL.DESCRIBE_COLUMNS(l_cur_no, l_col_cnt, l_rec_tab);

  FOR i IN 1..l_col_cnt
  LOOP
     DBMS_OUTPUT.PUT_LINE('Column ' || l_rec_tab(i).col_name || ' max length of ' || TO_CHAR(l_rec_tab(i).col_max_len));
  END LOOP;
 
  DBMS_SQL.CLOSE_CURSOR(l_cur_no);
END;
/

The output from the above is:

Column EMPNO max length of 22
Column FORMATTED_NAME max length of 10
Column HIREDATE max length of 7
Column DNAME max length of 4000
Column MGR_FLAG max length of 1

Looking at the DNAME, which is the PL/SQL function calculated attribute, we see a maximum length of 4,000. This might appear strange given that the function is declared to return a string based on the DEPT.DNAME column, which is defined as VARCHAR2(14). However Oracle only uses the base data type and ignores the length constraint. This means that Oracle will consider the PL/SQL function to be capable of returning a string of the maximum length allowed, which is 4,000 bytes. It’s worth noting that this only affects PL/SQL function calls. Calls to Oracle’s internal functions do not suffer from this problem as evidenced by the FORMATTED_NAME and MGR_FLAG columns.

Going back to the original ODP.Net application, should a cursor return 8 or more PL/SQL function calculated columns then a single row will exceed half of the 64k buffer and the application will be restricted to fetching one row at a time.
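
A quick way to see the row size the client has to budget for is to sum the col_max_len values from the describe. The block below is a small, purely illustrative variation on the earlier DBMS_SQL example:

DECLARE
  l_emp_cur   pkg_emp.t_emp_cur;
  l_cur_no    NUMBER;
  l_col_cnt   INTEGER;
  l_rec_tab   DBMS_SQL.DESC_TAB;
  l_row_bytes PLS_INTEGER := 0;
BEGIN
  pkg_emp.get_emps (l_emp_cur);
  l_cur_no := DBMS_SQL.TO_CURSOR_NUMBER (l_emp_cur);

  DBMS_SQL.DESCRIBE_COLUMNS(l_cur_no, l_col_cnt, l_rec_tab);

  -- add up the maximum length Oracle reports for each column
  FOR i IN 1..l_col_cnt
  LOOP
     l_row_bytes := l_row_bytes + l_rec_tab(i).col_max_len;
  END LOOP;

  DBMS_OUTPUT.PUT_LINE('Maximum row length reported to the client: ' || TO_CHAR(l_row_bytes));

  DBMS_SQL.CLOSE_CURSOR(l_cur_no);
END;
/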

How do we overcome this problem? The answer is to wrap the PL/SQL function calls in a CAST to tell Oracle the actual length it can expect from the functions, i.e.:

   SELECT e.empno
   ,      INITCAP(e.ename) AS formatted_name -- SQL function calculation
   ,      e.hiredate
   ,      CAST(dept_name (e.deptno) AS VARCHAR2(14)) AS dname -- PL/SQL function calculation
   ,      CASE WHEN EXISTS (SELECT 'x'
                            FROM   emp e1
                            WHERE  e1.mgr = e.empno)
               THEN 'Y'
               ELSE 'N'
          END AS mgr_flag -- more complex SQL function calculation
   FROM   emp e;

Running our DBMS_SQL.DESCRIBE_COLUMNS routine against the query with the CAST gives us the result we want:

Column EMPNO max length of 22
Column FORMATTED_NAME max length of 10
Column HIREDATE max length of 7
Column DNAME max length of 14
Column MGR_FLAG max length of 1

After adding in the CAST calls the number of fetches required by the C# application dropped to a much more reasonable 2, instead of the 600+ it had been doing previously.

Point in time extracts

Extracts from a database are a common activity for any system. In our information driven world we are constantly extracting data for reporting systems, data warehouses, extracts for other systems or simple reports. Naturally, getting data out of an Oracle database is as simple as writing one or more SQL queries. Things can get a bit tricky when we need to consider the consistency of the data we’re extracting.

As an example of data extract inconsistency, one place I worked at had a requirement to extract data into CSV files for a third party on a daily basis. Primarily the data consisted of account details and transactions on those accounts. The third party started to complain that the account balances did not match up with the transactions that were reported. The cause for the difference was easily diagnosed. The overall process simply ran one query after another dumping the results to a CSV file. The transaction extract occurred after the account details one. By the time the account details extract had finished new transactions had made their way into the system and were picked up by the transaction extract. These new transactions were not reflected in the account balances of the account extract. What the extract process required was to do all the extracts as at the same point in time and Oracle provides this functionality via flashback query.

Flashback query has been part of Oracle since version 9. In its simplest form it’s a matter of specifying the point in time at which you want to query a table, e.g.:

SELECT *
FROM   transactions AS OF TIMESTAMP CAST(TRUNC(SYSDATE) AS TIMESTAMP)

which will display the details of the TRANSACTIONS table as at the start of the day. Activity on the table since the start of the day will be rolled back for the purpose of the query. Using the above approach it’s possible to construct a set of queries that are consistent at the same point in time, which is ideal for extracts that need to be data consistent with each other.

Using PKG_CSV, creating a data consistent set of extracts is as simple as:

DECLARE
   l_directory         VARCHAR2(30) := 'EXTRACT_DIR';
   l_extract_timestamp TIMESTAMP     := CAST(TRUNC(SYSDATE) AS TIMESTAMP);
   l_emp_cur           SYS_REFCURSOR;
   l_dept_cur          SYS_REFCURSOR;
BEGIN
   OPEN l_emp_cur
   FOR
   SELECT * FROM emp AS OF TIMESTAMP l_extract_timestamp;
   
   OPEN l_dept_cur
   FOR
   SELECT * FROM dept AS OF TIMESTAMP l_extract_timestamp;
   
   pkg_csv.to_file (p_cursor      => l_emp_cur
                   ,p_directory   => l_directory
                   ,p_filename    => 'emp_' || TO_CHAR(l_extract_timestamp,'yyyymmddhh24miss') || '.csv');
   pkg_csv.to_file (p_cursor      => l_dept_cur
                   ,p_directory   => l_directory
                   ,p_filename    => 'dept_' || TO_CHAR(l_extract_timestamp,'yyyymmddhh24miss') || '.csv');
END;

So how far back can you go with this type of query? Oracle uses the undo segments (or rollback) to work out what the data looked like in the past. The obvious question is then “how much undo does Oracle retain?”. Unfortunately the answer to this isn’t clear cut. Generally speaking, Oracle will retain undo for at least the duration specified by the initialisation parameter UNDO_RETENTION, which is specified in seconds. This parameter defaults to 900, or 15 minutes, although many systems will have this set to a larger value. The Oracle documentation contains this warning on the way Oracle manages the undo available:

The UNDO_RETENTION parameter can only be honored if the current undo tablespace has enough space. If an active transaction requires undo space and the undo tablespace does not have available space, then the system starts reusing unexpired undo space. This action can potentially cause some queries to fail with a “snapshot too old” message
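
Before relying on flashback query for extracts it is worth checking what your database is actually configured for. A quick look at the parameter (assuming you have access to V$PARAMETER) is enough:

SELECT value AS undo_retention_seconds
FROM   v$parameter
WHERE  name = 'undo_retention';

Keep in mind that, as the documentation quote above notes, this is a target rather than a guarantee unless retention guarantee has been enabled on the undo tablespace.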

You can read more about flashback query in the Oracle documentation.