READ Table Privilege

I arrived at work one Monday morning to find an email from our automated job that checks for long transactions in the database:

Transaction for user xxxxxx (sid 678, program SQL Developer) has been open for 44460 seconds

Hmmm… so that would be someone holding a lock in the production system for over 12 hours. Thankfully the table where the lock was held isn’t frequently subject to DML, or this might have caused real trouble. I have previously noted how simply having SELECT access allows a user to lock data, and that was exactly the scenario that had occurred: a support person had logged into the database, locked a row in a table that they only had SELECT access on without realising what they had done… and then left their session connected for the better part of a day.

After killing the offending session and sending out a polite email to all support staff, requesting that they disconnect from production databases once they have finished whatever needed doing, I set about making some changes…
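
(As an aside, the offending session was despatched in the usual fashion. A minimal sketch; the serial# shown is a placeholder that would come from the first query:)

-- look up the serial# for the session named in the alert (sid 678)
SELECT sid
,      serial#
,      username
,      program
FROM   v$session
WHERE  sid = 678
/

-- kill it, using the sid and serial# pair from the query above
ALTER SYSTEM KILL SESSION '678,12345'
/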

Oracle 12.1.0.2 introduces a new privilege to address this specific problem: READ. According to the documentation, the READ privilege permits an account to:

Query the table with the SELECT statement. Does not allow SELECT … FOR UPDATE.

Note: This privilege is available starting with Oracle Database 12c Release 1 (12.1.0.2).

Let’s not take Oracle’s word for it; let’s test it using the scenario from the previous post:

CREATE USER data_owner
   IDENTIFIED BY data_owner
   DEFAULT TABLESPACE users
   QUOTA UNLIMITED ON users
/

CREATE USER data_reader
   IDENTIFIED BY data_reader
/

GRANT CREATE SESSION TO data_owner
/
GRANT CREATE TABLE TO data_owner
/
GRANT CREATE SESSION TO data_reader
/

CREATE TABLE data_owner.lock_test
   (id NUMBER(10))
/

INSERT INTO data_owner.lock_test
   VALUES (1)
/

COMMIT
/

Instead of granting SELECT on the DATA_OWNER.LOCK_TEST table to DATA_READER we grant READ:

GRANT READ ON data_owner.lock_test TO data_reader
/

Now to test if we can lock the data in the DATA_OWNER account, we run the following from the DATA_READER account:

DECLARE
   CURSOR cur
   IS
   SELECT *
   FROM   data_owner.lock_test
   FOR UPDATE;
BEGIN
   OPEN cur;
END;
/

The above results in:

SQL>DECLARE
  2     CURSOR cur
  3     IS
  4     SELECT *
  5     FROM   data_owner.lock_test
  6     FOR UPDATE;
  7  BEGIN
  8     OPEN cur;
  9  END;
 10  /
   FROM   data_owner.lock_test
                     *
ERROR at line 5:
ORA-06550: line 5, column 22:
PL/SQL: ORA-01031: insufficient privileges
ORA-06550: line 4, column 4:
PL/SQL: SQL Statement ignored

Exactly what we wanted! Nice one, Oracle. The READ privilege can be applied to tables, views and materialized views.
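
As a quick sanity check, a plain query from the DATA_READER account still works under the READ grant; only the FOR UPDATE variant is rejected:

SELECT *
FROM   data_owner.lock_test
/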

For me, some simple changes to the roles assigned to the support users and the unwanted locking problem is resolved.

Let’s Talk Oracle: Oracle Database 12c for DBAs and Developers – Perth 9 Sept 2015

I spent today attending the “Let’s Talk Oracle: Oracle Database 12c for DBAs and Developers” event in Perth. This purely technical event was presented by three immensely talented and passionate people from Oracle.

The first half of the day saw Richard Foote covering database 12c features from a DBA perspective. After lunch, Chris gave a very informative overview of Oracle REST Data Services, followed by Connor with a summary of 12c features for developers. In just one short day I learned about a heap of 12c features and now have a rather long list of things to investigate when I get back to the office.

A very big thank-you to the presenters for their time and effort and also a big thank-you to Dennis Ward from Oracle for arranging the event. Lastly, thanks must go to Oracle Corporation for allowing these people to put on this free event. It was all very, very much appreciated!

Flashback Data Archive

I recently started work on a small APEX application. The owner of the application requested that all data changes be tracked, which was not an unreasonable request given the nature of the application. In days of yore, i.e. pre-Oracle 12c, this would have involved something like custom PL/SQL code within a data access API or a (*shudder*) trigger-based strategy. However, recalling Bjoern Rost’s presentation at the 2014 AusOUG conference, I decided to look into using Flashback Data Archive (FDA) instead… and I’m so glad I did!

Flashback Data Archive essentially involves defining a retention policy, e.g. something like “1 year”, and associating tables with that policy. Oracle then handles the rest. Let’s take a look at how easy it is to set this up. We will:

  • define a new tablespace
  • create an FDA policy (specifying the new tablespace as its storage)
  • create an FDA application (which is nothing more than a container for tables to allow easy FDA management; 12c only)
  • associate the application with the FDA policy
  • enable the FDA

An example is in order and we’ll use the Oracle standard DEPT and EMP tables for it (on a 12.1.0.2 database). Assuming you’ve already got the tables set up in the SCOTT account, we’ll create an FDA to track 2 years of data changes:

-- Create the FDA tablespace
CREATE TABLESPACE scott_fda_data 
   DATAFILE '<<path>>/scott_fda_data.dbf'
   SIZE 1M 
   AUTOEXTEND ON NEXT 1M
   MAXSIZE 200M
/

-- Grant SCOTT access to the new tablespace
ALTER USER scott 
   QUOTA UNLIMITED ON scott_fda_data
/
   
-- Create the flashback data archive policy
CREATE FLASHBACK ARCHIVE scott_fda 
   TABLESPACE scott_fda_data
   QUOTA 200M 
   RETENTION 2 YEAR
/   

-- Set up an Application so we can control FDA for all tables (Oracle 12c only)
EXEC dbms_flashback_archive.register_application('SCOTT','SCOTT_FDA')

EXEC dbms_flashback_archive.add_table_to_application ('SCOTT','DEPT','SCOTT')
EXEC dbms_flashback_archive.add_table_to_application ('SCOTT','EMP','SCOTT')

EXEC dbms_flashback_archive.enable_application ('SCOTT')
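
At this point the table associations can be verified from the data dictionary (a quick check; there is also a USER_ version of this view):

SELECT owner_name
,      table_name
,      flashback_archive_name
,      status
FROM   dba_flashback_archive_tables
WHERE  owner_name = 'SCOTT'
/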

Now that we’ve got our FDA in place, let’s make some data changes. The script below makes use of DBMS_LOCK.SLEEP to insert delays of 1 minute. This is done so that later on we can query the tables as they looked back in time using semi-realistic examples.

UPDATE emp
SET    sal = sal + 10;
COMMIT;

EXEC dbms_lock.sleep (60)

INSERT INTO EMP VALUES
(8000,'MARK','MANAGER',7839,to_date('01-01-2015','dd-mm-yyyy'),2000,NULL,40);
COMMIT;

EXEC dbms_lock.sleep (60)

INSERT INTO EMP VALUES
(8010,'JAMES','DEVELOPER',8000,to_date('15-01-2015','dd-mm-yyyy'),2500,NULL,40);
COMMIT;

EXEC dbms_lock.sleep (60)

INSERT INTO EMP VALUES
(8020,'HEATHER','DEVELOPER',8000,to_date('20-01-2015','dd-mm-yyyy'),2200,NULL,40);
COMMIT;

EXEC dbms_lock.sleep (60)

UPDATE emp
SET    sal = sal + 500
WHERE  empno = 8000;
COMMIT;

Of course, there’s little point tracking data changes if you can’t easily access them. Simple flashback query extensions to SELECT allow us to query a source table either as it looked at a specific point in time, or to show all the changes that have occurred over time.

For example, to view the data in a table at a specific point in time we use:

SELECT *
FROM   <table_name> AS OF TIMESTAMP <timestamp_expr>

So, for our example table we could run something like the following to find out what the EMP data looked like 5 minutes ago, just before we did the operations shown above:

SELECT *
FROM   emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE)
ORDER  BY
       empno;

If you want a full version history of the changes made then you can use the VERSIONS BETWEEN clause:

SELECT e.*
,      versions_starttime
,      versions_endtime
FROM   emp VERSIONS BETWEEN TIMESTAMP MINVALUE AND MAXVALUE e
WHERE  empno = 8000
ORDER  BY
       versions_starttime;

The above query includes the pseudo-columns versions_starttime and versions_endtime, which provide the date range for which a specific row version was effective.
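
Other pseudo-columns are available too. For example, adding versions_operation to the query above shows whether each row version was created by an insert, update or delete (I, U or D):

SELECT empno
,      sal
,      versions_operation
,      versions_starttime
FROM   emp VERSIONS BETWEEN TIMESTAMP MINVALUE AND MAXVALUE
WHERE  empno = 8000
ORDER  BY
       versions_starttime;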

Now for some FDA details not mentioned above:

  • Flashback Data Archive is available in all versions of Oracle from 11.2.0.4 without additional licensing. In earlier versions of 11g it required the Advanced Compression licence to utilise, but that restriction has been removed now that the archive tables are, by default, not compressed.
  • Only Oracle 12c supports the ability to create flashback applications. In 11g you associate tables directly with the FDA one by one, using ALTER TABLE <<table>> FLASHBACK ARCHIVE <<fda>>. This means the FDA tracking won’t be started/stopped at precisely the same moment in time for all tables.
  • You’ll need to disable FDA whilst making DDL changes to the tracked tables, via dbms_flashback_archive.disable_application (see the sketch after this list).
  • In order to query the tables as at an earlier time period you require FLASHBACK privilege on the table, or the FLASHBACK ANY TABLE privilege.
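
For the DDL point above, the wrapper would look something like this (a sketch; the new column is purely illustrative):

EXEC dbms_flashback_archive.disable_application ('SCOTT')

ALTER TABLE scott.emp
   ADD (new_col NUMBER(5))
/

EXEC dbms_flashback_archive.enable_application ('SCOTT')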

In summary, Flashback Data Archive provides an effective means of tracking data changes on a table. Not only is the implementation simple, but the interface for accessing the historical data, via extensions to the SELECT statement, is easy to harness and intuitive.

String constants and package state

I have previously discussed package state and avoiding the “ORA-04068: existing state of packages has been discarded” exception. Connor McDonald has pointed out that come Oracle 11.2 constants do not create package state, except when they involve a calculation. This post expands on that by noting that string constants always create package state (thanks to my work colleague Vlad for pointing this out, after finding out the hard way…).

First up, let’s take a look at how numeric constants don’t create package state. We start with the simplest of packages compiled in session 1:

CREATE OR REPLACE PACKAGE pkg_state
AS
   c_number   CONSTANT NUMBER(5) := 123;
END pkg_state;

and in another session, session 2, we run a short program that makes reference to the package constant:

SET SERVEROUTPUT ON
BEGIN
   dbms_output.put_line ('Package value is: ' || TO_CHAR(pkg_state.c_number));
END;

Not surprisingly we get the output:

Package value is: 123

If we go back to session 1 and recompile the package changing the constant to a different value:

CREATE OR REPLACE PACKAGE pkg_state
AS
   c_number   CONSTANT NUMBER(5) := 789;
END pkg_state;

and rerun our simple output script back in session 2 we don’t get ORA-04068 but instead our session displays the new value:

Package value is: 789
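
As an aside, per Connor’s findings a numeric constant defined via a calculation would still have created package state, and session 2 would then have hit ORA-04068 after the recompile. A sketch of such a definition:

CREATE OR REPLACE PACKAGE pkg_state
AS
   -- derived via a calculation, so this version DOES create package state
   c_number   CONSTANT NUMBER(5) := 123 * 2;
END pkg_state;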

Now let’s repeat the entire exercise but with a string constant. In session 1:

CREATE OR REPLACE PACKAGE pkg_state
AS
   c_string   CONSTANT VARCHAR2(5) := 'ABC';
END pkg_state;

Running this in session 2:

SET SERVEROUTPUT ON
BEGIN
   dbms_output.put_line ('Package value is: ' || pkg_state.c_string);
END;

we get:

Package value is: ABC

Compiling the package in session 1 to:

CREATE OR REPLACE PACKAGE pkg_state
AS
   c_string   CONSTANT VARCHAR2(5) := 'XYZ';
END pkg_state;

gives us the following when we rerun our display routine in session 2:

BEGIN
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "DEVELOPER.PKG_STATE" has been invalidated
ORA-04065: not executed, altered or dropped package "DEVELOPER.PKG_STATE"
ORA-06508: PL/SQL: could not find program unit being called: "DEVELOPER.PKG_STATE"
ORA-06512: at line 2

This occurs with both Oracle 11.2 and 12.1. So, while these versions have taken steps to limit constants from creating package state, Connor’s post and this one show that the optimisation doesn’t apply to all constants. Hence some care needs to be taken when planning your deployments. You wouldn’t want to be in a situation where you think you don’t have package state but in reality you do. Right, Vlad?

Identity Columns

We’ve previously looked at the use of sequence NEXTVAL as a column default in Oracle 12c and the ability to use column defaults when an inserted value is NULL. Oracle has another change in 12c to help out in the area of column defaults: identity columns.

Developers coming to Oracle from other database platforms are sometimes perplexed about the need to create sequence objects to populate simple integer primary key columns. The complaint typically is one of “Why doesn’t Oracle just do this for me?” and identity columns are directly aimed at this complaint.

In its simplest form, an identity column is declared as:

CREATE TABLE test_table
   (id      NUMBER(12) GENERATED AS IDENTITY
   ,padding CHAR(30) NOT NULL)
/

Oracle will automatically add a NOT NULL constraint to the identity column. No unique index or primary key constraint is added though, so you will need to define those yourself, assuming the column will be used as a table key.
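
For example, to use the identity column as the table’s primary key you would add the constraint in the usual manner (the constraint name here is ours):

ALTER TABLE test_table
   ADD CONSTRAINT test_table_pk
   PRIMARY KEY (id)
/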

Behind the scenes, Oracle creates a sequence and will draw from that sequence whenever an insert into the table is done, via a default on the column, as illustrated below:

SQL>SELECT sequence_name
  2  ,      min_value
  3  ,      max_value
  4  ,      cache_size
  5  FROM   user_sequences
  6  /

no rows selected

SQL>CREATE TABLE test_table
  2     (id      NUMBER(12) GENERATED AS IDENTITY
  3     ,padding CHAR(30) NOT NULL)
  4  /

Table created.

SQL>INSERT INTO test_table (padding)
  2  VALUES ('x')
  3  /

1 row created.

SQL>SELECT *
  2  FROM   test_table
  3  /

        ID PADDING
---------- ----------
         1 x

SQL>SELECT sequence_name
  2  ,      min_value
  3  ,      max_value
  4  ,      cache_size
  5  FROM   user_sequences
  6  /

SEQUENCE_NAME         MIN_VALUE  MAX_VALUE CACHE_SIZE
-------------------- ---------- ---------- ----------
ISEQ$$_91711                  1 1.0000E+28         20

SQL>SELECT column_name
  2  ,      data_default
  3  FROM   user_tab_columns
  4  WHERE  table_name = 'TEST_TABLE'
  5  AND    column_name = 'ID'
  6  /

COLUMN_NAME     DATA_DEFAULT
--------------- -----------------------------------
ID              "DEVELOPER"."ISEQ$$_91711".nextval

Like any normal sequence, the automatically created sequence can be drawn from explicitly, although this is probably not recommended:

SQL>SELECT ISEQ$$_91711.NEXTVAL FROM dual
  2  /

   NEXTVAL
----------
         2

Sequence properties can be explicitly specified when creating identity columns:

SQL>CREATE TABLE test_table
  2     (id      NUMBER(12) GENERATED AS IDENTITY
  3                         (CACHE 1000
  4                          MINVALUE 1000
  5                          MAXVALUE 999999999999)
  6     ,padding CHAR(30) NOT NULL)
  7  /

Table created.

SQL>SELECT sequence_name
  2  ,      min_value
  3  ,      max_value
  4  ,      cache_size
  5  FROM   user_sequences
  6  /

SEQUENCE_NAME         MIN_VALUE  MAX_VALUE CACHE_SIZE
-------------------- ---------- ---------- ----------
ISEQ$$_91725               1000 1.0000E+12       1000

Unlike a traditional column with a default, the identity column cannot be directly inserted into:

SQL>INSERT INTO test_table (id, padding)
  2  VALUES (999, 'x')
  3  /
INSERT INTO test_table (id, padding)
                        *
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column

However, the column can be altered to replace the “GENERATED ALWAYS” behaviour with “GENERATED BY DEFAULT”. In this case the sequence will behave just like any other sequence-allocated column default:

SQL>ALTER TABLE test_table
  2     MODIFY id GENERATED BY DEFAULT AS IDENTITY
  3  /

Table altered.

SQL>INSERT INTO test_table (id, padding)
  2  VALUES (999, 'x')
  3  /

1 row created.

Finally, if the table is dropped then the sequence used for the identity column is also dropped:

SQL>DROP TABLE test_table PURGE
  2  /

Table dropped.

SQL>SELECT sequence_name
  2  ,      min_value
  3  ,      max_value
  4  ,      cache_size
  5  FROM   user_sequences
  6  /

no rows selected

Column defaults and NULL

More than once I have encountered tables with columns defined as:

   ...
   column_x VARCHAR2(10) DEFAULT ' '
   ...

Yep, that’s a nullable column with a default of a single space! Any time I see a nullable column with a default value I immediately think it’s a logical bug. After all, why define a default for a column and still permit NULLs? I realise that there are scenarios where this combination is legitimate but, generally speaking, these are rare. The use of a space as the default is to denote “no data”, as client UIs will typically trim this down to an empty string.

A nullable column with a default of a space character will end up with a NULL if an insert specifies the column but sets the inserted value to NULL, whether by explicitly assigning NULL or via an expression that evaluates to NULL.

The problem with having a column declared this way is that when writing queries you need to handle both scenarios. As a result, queries end up either wrapping column references in NVL(column_x, ' ') or trimming the column to convert the single-space entries to NULL. Either way, the queries end up looking confusing and can easily hamper the query optimiser.
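
For example, a filter for “no data” rows ends up looking something like the hypothetical query below (the table name is illustrative):

SELECT *
FROM   some_table
WHERE  NVL(column_x, ' ') = ' '
/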

It may appear that the obvious solution to the “does the column have a value or not” problem is to perform an update translating all NULLs to spaces and then mark the column as NOT NULL. However, the problem then is how to find and modify the SQL statements that were inserting the NULLs in the first place… and here’s where the new ON NULL clause for column defaults in Oracle 12c helps us out.

In Oracle 12c, when a column default is defined with the ON NULL clause, the default is also used when the inserted value evaluates to NULL, not only when no value for the column is specified. For our problematic column, we would define it as:

   ...
   column_x VARCHAR2(10) DEFAULT ' ' ON NULL
   ...

We can easily demonstrate the impact of this clause. First we use a default without the ON NULL clause to illustrate how the column can end up with a NULL value, by creating a table and explicitly inserting two rows: one that uses the default and one that sets the column to NULL:

SQL>SET NULL <<null>>

SQL>CREATE TABLE test_table
  2     (id         NUMBER(10)
  3     ,column_x   VARCHAR2(10) DEFAULT ' ')
  4  /

Table created.

SQL>INSERT INTO test_table (id)
  2  VALUES (1)
  3  /

1 row created.

SQL>INSERT INTO test_table (id, column_x)
  2  VALUES (2, NULL)
  3  /

1 row created.

SQL>SELECT id
  2  ,      column_x
  3  ,      NVL(LENGTH(column_x),0) AS len_column_x
  4  FROM   test_table
  5  /

        ID COLUMN_X   LEN_COLUMN_X
---------- ---------- ------------
         1                       1
         2 <<null>>              0

We can see from the above how NULLs manage to get into our column even with a default in place. Now we repeat the test with the ON NULL clause:

SQL>ALTER TABLE test_table
  2     MODIFY column_x DEFAULT ON NULL ' '
  3  /

Table altered.

SQL>INSERT INTO test_table (id)
  2  VALUES (1)
  3  /

1 row created.

SQL>INSERT INTO test_table (id, column_x)
  2  VALUES (2, NULL)
  3  /

1 row created.

SQL>SELECT id
  2  ,      column_x
  3  ,      NVL(LENGTH(column_x),0) AS len_column_x
  4  FROM   test_table
  5  /

        ID COLUMN_X   LEN_COLUMN_X
---------- ---------- ------------
         1                       1
         2                       1

The above shows that the default value, a single space, is now being used when the column is not specified in the INSERT or if it is specified but the value inserted is NULL.

So, the solution to our problem of nullable columns with a default is to update the table and set all the NULLs to the default value and then change the default to make use of the ON NULL clause. No search of the codebase for SQL that generated the NULLs in the first place is required. Of course, you might have to upgrade to Oracle 12c first…

It should also be noted that Oracle has taken the liberty of defining a column with an ON NULL default to be NOT NULL:

SQL>DESCRIBE test_table
Name                  Null?    Type
--------------------- -------- ------------
ID                             NUMBER(10)
COLUMN_X              NOT NULL VARCHAR2(10)

which means that attempting to enable the ON NULL clause when NULLs exist in the column will result in an error, ORA-02296.
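
Putting it all together, the migration for our problem column is simply: clean up the existing NULLs first, then switch the default (skipping the UPDATE is exactly what produces the ORA-02296 above):

UPDATE test_table
SET    column_x = ' '
WHERE  column_x IS NULL
/

COMMIT
/

ALTER TABLE test_table
   MODIFY column_x DEFAULT ON NULL ' '
/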

Sequence triggers and column defaults

Tables with simple sequence-assigned ID columns for primary keys are quite common. Unfortunately it also seems quite common to use row-level triggers to assign those sequence values to the columns. The rationale trotted out for this strategy is that it doesn’t matter what the application layer does: the database will take care of ensuring that the ID is unique and it can’t be bypassed.

The primary problem with using a trigger to assign a sequence value to an ID column is one of performance. Let’s look at a simple test case that shows the overhead incurred in using a trigger to assign the ID column value, followed by a new feature from Oracle 12c that achieves the same end but without the overhead of a trigger.

Our test is to insert 1,000,000 rows into a table. We’ll run through the scenarios of:

  • ID value is not sequence generated – this is our baseline scenario to show how fast Oracle can copy rows
  • ID value is sequence assigned as part of the insert statement
  • ID value is sequence assigned from a trigger
  • ID value is sequence assigned from a column level default – this is the new 12c feature

The set-up is to create the table we will be testing against, TEST_TABLE, and also a source data table, SOURCE_DATA, which we’ll load with 1,000,000 rows. Each test is to copy the data from SOURCE_DATA to TEST_TABLE. The tests were all conducted on Oracle 12.1.0.1.

CREATE TABLE test_table
   (id      NUMBER(10) NOT NULL
   ,padding CHAR(20)  NOT NULL)
/

CREATE UNIQUE INDEX test_table_pk
   ON test_table (id)
/

ALTER TABLE test_table
   ADD CONSTRAINT test_table_pk
   PRIMARY KEY (id)
/   

CREATE TABLE source_data
   (id, padding)
PCTFREE 0   
AS
SELECT ROWNUM AS id
,      'X' AS padding
FROM   dual
CONNECT BY ROWNUM <= 1000000
/

COMMIT
/

First of all we time how long it takes to copy across the 1,000,000 rows in SOURCE_DATA to TEST_TABLE:

SQL>SET TIMING ON

SQL>INSERT INTO test_table
  2  SELECT *
  3  FROM   source_data
  4  /

1000000 rows created.

Elapsed: 00:00:03.06

Now we measure how long it takes to copy across the rows, assigning ID values from a sequence directly from within the INSERT statement:

SQL>TRUNCATE TABLE test_table
  2  /

Table truncated.

SQL>CREATE SEQUENCE test_table_id_seq
  2     CACHE 1000
  3  /

Sequence created.

SQL>SET TIMING ON

SQL>INSERT INTO test_table
  2  SELECT test_table_id_seq.NEXTVAL
  3  ,      padding
  4  FROM   source_data
  5  /

1000000 rows created.

Elapsed: 00:00:09.68

The insert of the rows went from 3 seconds to just under 10 seconds simply by assigning new ID values from a sequence (the cache size for the sequence was 1,000).

Let’s see what happens if we use a trigger to assign the sequence value to the ID column:

SQL>TRUNCATE TABLE test_table
  2  /

Table truncated.

SQL>CREATE OR REPLACE TRIGGER trg_test_table
  2     BEFORE INSERT ON test_table
  3     FOR EACH ROW
  4  DECLARE
  5  BEGIN
  6     :NEW.id := test_table_id_seq.NEXTVAL;
  7  END trg_test_table;
  8  /

Trigger created.

SQL>SET TIMING ON

SQL>INSERT INTO test_table
  2     (padding)
  3  SELECT padding
  4  FROM   source_data
  5  /

1000000 rows created.

Elapsed: 00:01:29.65

Yikes!!! With the trigger in place it took 1.5 minutes to insert the rows! This emphasises a really important aspect of triggers: they’re performance killers!

So, you have a system that is littered with triggers doing nothing more than assigning sequence values to ID columns… what can you do? Well, in Oracle 12c there’s a new feature that can come to your rescue: column defaults can now reference sequence NEXTVAL and CURRVAL. Let’s see how this works:

SQL>TRUNCATE TABLE test_table
  2  /

Table truncated.

SQL>DROP TRIGGER trg_test_table
  2  /

Trigger dropped.

SQL>ALTER TABLE test_table
  2    MODIFY id DEFAULT test_table_id_seq.NEXTVAL
  3  /

Table altered.

SQL>SET TIMING ON

SQL>INSERT INTO test_table
  2     (padding)
  3  SELECT padding
  4  FROM   source_data
  5  /

1000000 rows created.

Elapsed: 00:00:07.04

From the above you can see that we’ve added a default to the ID column that is the sequence NEXTVAL. Our timing for the insert is now down to 7 seconds, which is pretty much the same as we got when calling the sequence directly in the insert statement. We’ve achieved the outcome of the trigger with the performance of the direct insert!

However, if you’re thinking “Great, I can get rid of my triggers and replace them with column defaults!” then be a little bit careful. While the trigger and the default perform the same function, there is a slight difference between them: the trigger above always sets the ID column to a new value, even if one is explicitly supplied by the application code, whereas the default will only set the ID value if one is not supplied. Consider the following, first with the trigger in place:

SQL>CREATE OR REPLACE TRIGGER trg_test_table
  2     BEFORE INSERT ON test_table
  3     FOR EACH ROW
  4  DECLARE
  5  BEGIN
  6     :NEW.id := test_table_id_seq.NEXTVAL;
  7  END trg_test_table;
  8  /

Trigger created.

SQL>INSERT INTO test_table
  2  VALUES (0, 'X')
  3  /

1 row created.

SQL>INSERT INTO test_table
  2  VALUES (0, 'X')
  3  /

1 row created.

Even though the INSERT explicitly provided a value for the ID column it was overridden by the sequence value assigned by the trigger. Contrast that to what would happen with a column default:

SQL>DROP TRIGGER trg_test_table
  2  /

Trigger dropped.

SQL>ALTER TABLE test_table
  2    MODIFY id DEFAULT test_table_id_seq.NEXTVAL
  3  /

Table altered.

SQL>INSERT INTO test_table
  2  VALUES (0, 'X')
  3  /

1 row created.

SQL>INSERT INTO test_table
  2  VALUES (0, 'X')
  3  /
INSERT INTO test_table
*
ERROR at line 1:
ORA-00001: unique constraint (DEVELOPER.TEST_TABLE_PK) violated

Since an ID value was specified the column default is not used. Hopefully you do not have any code that deliberately sets a column value only to have it overridden by a trigger-assigned value, as that would be illogical and confusing. Chances are you will quickly catch this sort of problem in your test environment if you do choose to replace your triggers with column defaults.