If you’ve ever had to diagnose a production problem in Oracle you will know the huge benefit that Active Session History (commonly known as ASH) gives you. In short, ASH samples all active sessions in the database once per second. As noted previously, this data is stored in a memory buffer that is overwritten in a circular fashion, which means there’s no defined period of time for which the data is retained; a busy system will age out data more quickly than one with little activity. Clearly it would be beneficial to retain the ASH data according to a time-based policy. That way, when an application developer comes to you with a slow query problem two days after the fact, you won’t be left wondering whether the ASH data for that period is still available.
If you take a look at v$active_session_history you will note that the first column is named SAMPLE_ID. The Oracle Database Reference simply describes this as “ID of the sample”, but that’s enough to indicate that it should be unique. Armed with this knowledge it isn’t hard to regularly capture the contents of v$active_session_history into a separate table so that it’s available should you require it. Furthermore, if you range partition the table on SAMPLE_TIME then old data can be easily removed simply by dropping partitions.
CREATE TABLE ash_hist
PARTITION BY RANGE (sample_time)
INTERVAL (INTERVAL '1' DAY)
(PARTITION ash_hist_p1 VALUES LESS THAN (TO_TIMESTAMP('01/01/2015','dd/mm/yyyy')))
AS
SELECT *
FROM   v$active_session_history;
A capture routine, which can be scheduled using the database scheduler, DBMS_JOB or an external job scheduling application, can be as simple as:
DECLARE
   l_max_sample_id ash_hist.sample_id%TYPE;
BEGIN
   SELECT MAX(sample_id)
   INTO   l_max_sample_id
   FROM   ash_hist
   -- add a filter to stop scanning all partitions. just make sure
   -- the interval falls within the capture time period
   WHERE  sample_time >= CAST(SYSTIMESTAMP AS TIMESTAMP) - INTERVAL '1' DAY;

   -- use > rather than >= so we don't re-insert the rows for the
   -- last sample already captured
   INSERT INTO ash_hist
   SELECT *
   FROM   v$active_session_history
   WHERE  sample_id > NVL(l_max_sample_id,0);

   COMMIT;
END;
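For example, assuming the capture logic above has been wrapped in a stored procedure (here hypothetically named CAPTURE_ASH_HIST), a DBMS_SCHEDULER job to run it every 30 minutes might look like the following sketch:

```sql
BEGIN
   -- job and procedure names are placeholders; adjust the repeat
   -- interval to suit, keeping it well inside the one day window
   -- that the capture routine scans
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'ASH_HIST_CAPTURE',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'CAPTURE_ASH_HIST',
      repeat_interval => 'FREQ=MINUTELY;INTERVAL=30',
      enabled         => TRUE);
END;
/
```

The capture frequency simply needs to be shorter than the time it takes your system to cycle through the ASH buffer; on a busy system you may want to run it more often.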
Dropping the old partitions once they have outlived their usefulness can be done via a routine similar to the following:
DECLARE
   -- A cursor that returns the partition details for
   -- the ASH_HIST table
   CURSOR tab_partition_cur
   IS
   SELECT tp.partition_name
   ,      tp.high_value
   ,      tp.high_value_length
   FROM   user_tab_partitions tp
   WHERE  tp.table_name = 'ASH_HIST'
   -- we sort the cursor from the earliest partition to the latest;
   -- once we've hit one that we will not drop then we can stop
   -- processing the cursor
   ORDER  BY
          partition_position;

   tab_partition_rec     tab_partition_cur%ROWTYPE;

   c_retention_days      CONSTANT NUMBER(4) := 60;

   l_earliest_timestamp  ash_hist.sample_time%TYPE;
   l_partition_timestamp ash_hist.sample_time%TYPE;
   l_continue_processing BOOLEAN := TRUE;
BEGIN
   l_earliest_timestamp := CAST(SYSTIMESTAMP AS TIMESTAMP) - NUMTODSINTERVAL(c_retention_days,'DAY');

   -- reset the start of the interval partitioning so we can drop
   -- the old partitions
   EXECUTE IMMEDIATE 'ALTER TABLE ash_hist SET INTERVAL(INTERVAL ''1'' DAY)';

   OPEN tab_partition_cur;

   WHILE (l_continue_processing)
   LOOP

      FETCH tab_partition_cur
      INTO  tab_partition_rec;

      -- we stop if we run out of partitions to process
      IF (tab_partition_cur%NOTFOUND) THEN
         l_continue_processing := FALSE;
      ELSE

         -- extract the high value as a TIMESTAMP
         EXECUTE IMMEDIATE 'SELECT ' ||
                           SUBSTR (tab_partition_rec.high_value
                                  ,1
                                  ,tab_partition_rec.high_value_length) ||
                           ' FROM dual'
         INTO l_partition_timestamp;

         -- if the partition is older than our retention period we drop it
         IF (l_partition_timestamp <= l_earliest_timestamp) THEN
            EXECUTE IMMEDIATE 'ALTER TABLE ash_hist DROP PARTITION ' ||
                              tab_partition_rec.partition_name;
         ELSE
            -- we've hit a cursor entry for a partition that we won't be
            -- dropping so we can stop, as the remaining cursor entries
            -- will be the same
            l_continue_processing := FALSE;
         END IF;

      END IF;

   END LOOP;

   CLOSE tab_partition_cur;
END;
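Once populated, the history table can be queried just like v$active_session_history itself. As a hypothetical example (the timestamps and row limit are placeholders), the top SQL statements by sample count for a two hour window two days ago might be found with:

```sql
SELECT   sql_id
,        COUNT(*) AS samples -- each sample approximates one second of activity
FROM     ash_hist
WHERE    sample_time >= TO_TIMESTAMP('05/01/2015 09:00','dd/mm/yyyy hh24:mi')
AND      sample_time <  TO_TIMESTAMP('05/01/2015 11:00','dd/mm/yyyy hh24:mi')
AND      sql_id IS NOT NULL
GROUP BY sql_id
ORDER BY COUNT(*) DESC;
```

Because the table is partitioned on SAMPLE_TIME, a predicate like the one above also gives you partition pruning for free.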
Note: Active Session History requires an Enterprise Edition database and is licensed as part of the Diagnostics Pack. Please make sure you have the appropriate licenses before using this facility.
Credits: Thanks go to Connor McDonald for setting up a system like this where I currently work. I’ve found it to be the single most valuable tool for troubleshooting.