Monday 27 May 2013

Top 5 Oracle Books on Amazon

For your edification and enjoyment, below are the current top 5 most popular Oracle DBA titles on Amazon.co.uk.

1. OCA/OCP Oracle Database 11g All-in-One Exam Guide with CD-ROM by John Watson

2. Oracle PL/SQL Programming by Steven Feuerstein

3. Oracle SQL Tuning with Oracle SQLTXPLAIN by Stelios Charalambides

4. Oracle 11g For Dummies by Chris Zeis

5. Oracle Enterprise Manager 12c Administration Cookbook by D Papde

Friday 17 May 2013

EM12c - A Step Backwards?

Enterprise Manager Cloud Control 12c is mostly a better product than the 10g version we were using before.  But I am getting really frustrated with the Explain Plan functionality.

A big part of my job is performance tuning and monitoring running SQL.  And to do that you need a clear, quick, stable view of the query plan, along with all pertinent information, such as the total cost of the plan.  In EM10g, we had that.

EM12c, though, is a huge step backwards.

Specifically:

1. When using the "Real Time: 15 sec refresh" function, every refresh shows the "Graphical" version of the Plan. You then have to switch to the "Tabular" version, if that's what you prefer.

2. The "Tabular" view in EM10g included a summary top line showing the total cost, total IO cost, total CPU cost, etc. This is no longer available in EM12c.

3. Previously, each node of the Plan was (if appropriate) a link back to the object referenced by that step of the plan. So if the optimizer had chosen an index lookup, you could click on the index name in the plan and go to the index definition page. Superb for deciding whether the optimizer had chosen a "good" or "bad" index. Now, in EM12c, there's no drill-down functionality in the Plan. You need to open a separate tool to analyze table or index structures alongside the EM12c Plan page. This is a terrible step backwards.

4. Not a feature in 10g or 12c, but it would be nice to be able to set one's preferred view of the plan to "Graphical" or "Tabular" and for that view to remain, no matter what refresh setting is being used in View Data.

Unless I'm missing something obvious -- and please tell me if I am! -- Oracle have really messed up here.  I'm surprised no-one else has spotted these as issues.  Am I alone?  Or have I really missed something in the configuration?

The ability to easily view and compare SQL Plans in EM10g was great. It should have been better in EM12c. But Oracle have made it worse. It's virtually unusable now, and I have to go outside EM and use TOAD to get decent Plans.  Oracle - please re-instate the original functionality.
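In the meantime, if the statement is still in the cursor cache, DBMS_XPLAN will give you a clean tabular plan, costs included, without going near EM.  A sketch - substitute your own SQL_ID:

select *
from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));

Use 'ALLSTATS LAST' instead of 'TYPICAL' if the statement was run with statistics gathering enabled and you want actual row counts as well.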

UPDATE: Oracle have confirmed that I'm not being dim and missing something obvious.  The functionality that was there in OEM 10g and 11g has been 'designed away'!  I've asked MOS to raise an enhancement request to put it back in.

Wednesday 24 October 2012

NOT NULL vs. Check Constraints

I found Jonathan Lewis's excellent article about "incorrectly" defining NOT NULL columns some time ago.  You can read his article at http://jonathanlewis.wordpress.com/2010/09/05/not-null/

Having read it, I did some investigation into our production databases and, lo and behold, found some examples of columns with NOT NULL check constraints but without the NOT NULL column definition.

Here's the query I used to find the 'guilty' columns:
 
select o.owner, o.object_type, o.object_name, t.column_name, co.search_condition, co.validated
 from dba_objects o, sys.col$ c, dba_tab_columns t, dba_cons_columns cc, dba_constraints co
 where o.owner = '&table_owner'
 and o.object_type = 'TABLE'
 and o.object_id = c.obj#
 and o.owner = t.owner
 and o.object_name = t.table_name
 and t.column_name = c.name
 and cc.owner = o.owner
 and cc.table_name = t.table_name
 and cc.column_name = c.name
 and co.owner = cc.owner
 and co.constraint_name = cc.constraint_name
 and co.constraint_type = 'C'   -- check constraints only
 and c.null$ = 0                -- column is NULLable at the dictionary level
order by object_name, column_name;
Note that in 10g and 11g the DBA_CONSTRAINTS.SEARCH_CONDITION column is stored as a LONG, which means you can't use it in a WHERE clause to separate genuine NOT NULL check conditions (e.g. "COL1" IS NOT NULL) from any other check conditions (e.g. column_name in (0,1)).

The LONG datatype has been deprecated for at least 15 years, but is still used in the data dictionary.  Thanks Oracle!

The alternative is to make your own one-off copy of DBA_CONSTRAINTS, using TO_LOB() to convert SEARCH_CONDITION to a CLOB.
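Something along these lines should do it (a sketch - keep whichever columns you need):

create table my_dba_constraints as
select owner,
       constraint_name,
       constraint_type,
       table_name,
       to_lob(search_condition) as search_condition,   -- LONG -> CLOB
       status,
       validated
from   dba_constraints;

With that copy in place, the query can be re-written as: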
 
select o.owner, o.object_type, o.object_name, t.column_name, co.search_condition, co.validated
 from dba_objects o, sys.col$ c, dba_tab_columns t, dba_cons_columns cc, my_dba_constraints co
 where o.owner = '&table_owner'
 and o.object_type = 'TABLE'
 and o.object_id = c.obj#
 and o.owner = t.owner
 and o.object_name = t.table_name
 and t.column_name = c.name
 and cc.owner = o.owner
 and cc.table_name = t.table_name
 and cc.column_name = c.name
 and co.owner = cc.owner
 and co.constraint_name = cc.constraint_name
 and co.constraint_type = 'C'                   -- check constraints only
 and c.null$ = 0                                -- column is NULLable at the dictionary level
 and co.search_condition not like '% AND %'     -- skip compound conditions
 and co.search_condition not like '% OR %'
 and co.search_condition like '%NOT NULL%'      -- LIKE works now that it's a CLOB
order by object_name, column_name;

Monday 15 October 2012

Fisher-Yates (Knuth) Shuffle in PL/SQL

We were recently looking for a way to easily anonymize some customer data, specifically bank account numbers.

Part of our testing mechanism is to verify that a bank account and sort-code combination is valid, and to retrieve the bank details for that combination (using a commercial account lookup service).  So we couldn't just replace the sort-code and/or account number with random numbers.

Equally, replacing all account numbers with a handful of known, "safe" working values would lead to skewed data spreads, so that wouldn't work either.

A better solution is to shuffle the existing data, moving the sort-code and account number pairs to a different, unrelated client record.

We found a description of the Fisher-Yates Shuffle (one variety of which is also known as the Knuth Shuffle) which seemed like a good way to achieve our aim.  It seems to be used quite widely to do exactly what we wanted, but I couldn't find any examples of implementing the shuffle in PL/SQL.

So, here's our solution:

 
set serveroutput on size unlimited
DECLARE

    jswap NUMBER ;
    lcount NUMBER := 0 ;

    TYPE tab_ba     IS TABLE OF client_bank_accts%ROWTYPE INDEX BY BINARY_INTEGER;
    array_ba        tab_ba ;

    CURSOR r IS
    SELECT *
    FROM client_bank_accts;

    -- Swap buffers for the sort-code / account-number payload
    sav_banksortcd client_bank_accts.banksortcd%TYPE;
    sav_bankacctnbr client_bank_accts.bankacctnbr%TYPE;

BEGIN

   OPEN r;
   LOOP

      FETCH r 
      BULK COLLECT 
      INTO array_ba LIMIT 5000;

      -- Note: the shuffle happens within each 5000-row batch,
      -- not across the whole table.

      FOR i IN REVERSE 2 .. array_ba.COUNT LOOP

         -- Classic Fisher-Yates step: pick a random position in 1..i
         jswap := trunc(dbms_random.value(1, i+1));

         sav_banksortcd := array_ba(i).banksortcd;
         sav_bankacctnbr := array_ba(i).bankacctnbr;

         UPDATE client_bank_accts 
         SET    banksortcd = array_ba(jswap).banksortcd,
                bankacctnbr = array_ba(jswap).bankacctnbr
         WHERE  clientbankacctseq = array_ba(i).clientbankacctseq;

         UPDATE client_bank_accts 
         SET    banksortcd = sav_banksortcd,
                bankacctnbr = sav_bankacctnbr
         WHERE  clientbankacctseq = array_ba(jswap).clientbankacctseq;

         -- Mirror the swap in the array, so positions 1..i-1 still match
         -- what is now on those rows. Without this, later iterations
         -- overwrite earlier swaps, and account details get duplicated
         -- or lost.
         array_ba(i).banksortcd      := array_ba(jswap).banksortcd;
         array_ba(i).bankacctnbr     := array_ba(jswap).bankacctnbr;
         array_ba(jswap).banksortcd  := sav_banksortcd;
         array_ba(jswap).bankacctnbr := sav_bankacctnbr;

         lcount := lcount + 2;

      END LOOP;

      COMMIT;   -- commit each batch before testing for exit, so the
                -- final (short) batch is committed too
      EXIT WHEN r%NOTFOUND;

   END LOOP;
   CLOSE r;

   dbms_output.put_line('Completed. '||to_char(lcount)||' rows updated!');

END;
/
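A quick sanity check afterwards (a sketch against the same table): the row count and the number of distinct sort-code/account-number pairs should both be exactly the same before and after the shuffle.

select count(*) as total_rows,
       count(distinct banksortcd || '~' || bankacctnbr) as distinct_pairs
from   client_bank_accts;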

Friday 27 May 2011

Last Sunday

Our production databases are backed up with one of two schedules.  Most days our backups are sent offsite and retained for 3 months before being returned.  But on the last Sunday of each month, our backups are archived for a longer period of time.  Therefore the RMAN backups and Veritas catalogs need to be retained for up to 7 years!

We Linux DBAs love cron for scheduling jobs. These fancy software schedulers, with their dependencies and clever logic, have never really replaced a dense, difficult-to-read crontab file. And that's just the way we like it - it keeps us in our jobs!

Now, crontab doesn't have a built-in facility for deciding whether a particular day is the 'last' or 'first' of the month (or quarter, or year).  Our current backup regime relies on a separate cron job to create a lock file, earlier in the day, then each individual backup script checks for existence of that file before deciding which tape schedule and retention policy to use.

I thought there was a slightly more elegant way of achieving this.  Here's what I came up with:

We have a simple script called islastsunday which compares the current date with the date of the last Sunday in the month (from the cal command).

#!/bin/bash
# cal prints the month with Sunday as the first column, so the last entry
# in that column is the date of the last Sunday. Compare it with today.
[ "$(cal | awk '{print $1}' | tr -s '[:blank:]' '\n' | tail -1)" = "$(date +%d)" ]
In our crontab file we have two entries for our backup, as follows:
# Check if day is LAST SUNDAY of Month. If NOT, run the regular backup
30 00 * * * islastsunday || (rman-job.sh level_0_tape ORCL > level_0_tape_ORCL.log 2>&1; rman-job.sh level_0_tape ORCL2 > level_0_tape_ORCL2.log 2>&1)
# Check if day is LAST SUNDAY of Month. If it IS, run the Monthly backup
30 00 * * * islastsunday && (rman-job.sh level_0_tape_LS ORCL > level_0_tape_LS_ORCL.log 2>&1; rman-job.sh level_0_tape_LS ORCL2 > level_0_tape_LS_ORCL2.log 2>&1)
Both schedules run at the same time, but only one of them will actually execute, depending upon the day of the month. The actual backup commands are bracketed, so that they are ALL dependent upon the islastsunday script, and will all run with the same retention policy, even if the backups continue to run after midnight (i.e. into Monday!).
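As an aside, if you ever need the same test inside the database (for a DBMS_SCHEDULER job, say), the last-Sunday check is a one-liner in SQL.  A sketch - note that NEXT_DAY's day names depend on NLS_DATE_LANGUAGE:

select case
         when trunc(sysdate) = next_day(last_day(trunc(sysdate)) - 7, 'SUNDAY')
         then 'YES'
         else 'NO'
       end as is_last_sunday
from   dual;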

Friday 20 May 2011

Cold Backup RMAN Fun and Games

Here's an RMAN oddity.  If anyone could explain it, I'd be grateful.

We were moving a database to a new server.  Same OS, same version of Oracle (10.2.0.4).  The database is in Archivelog mode, but to keep things easy, we decided to do an offline, consistent backup.

So, we shut down the database, then mounted it and performed an RMAN backup to a flash_recovery_area on a shared NFS filesystem:

backup device type disk database plus archivelog include current controlfile;
The backup worked fine. So we logged on to the new server, made sure that we could see the NFS-mounted FRA, and issued the Duplicate command.

run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
allocate auxiliary channel da1 type disk;
allocate auxiliary channel da2 type disk;
allocate auxiliary channel da3 type disk;
allocate auxiliary channel da4 type disk;
duplicate target database to CATDB;
}
Oddly, it didn't work.  It returned an error saying that a backup could not be found.

Starting restore at 20-MAY-11

released channel: da1
released channel: da2
released channel: da3
released channel: da4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 20/05/2011 15:25:38
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06100: no channel to restore a backup or copy of datafile 1
RMAN-06100: no channel to restore a backup or copy of datafile 2
etc...
If we tried using automatic channel allocation for the restore, it still did not find our disk backup, but instead tried restoring older tape backups.  Not what we wanted.

So, we tried backing it up again.  Backing up just the archivelogs.  Just the database.  Nada!  The same error every time we tried to Duplicate.

In desperation, I tried opening up our original DB and running an online backup.  Luckily the database was small and the backup quick.  I then shut it down and re-mounted to prevent the users from changing anything.

This time, the Duplicate command (now with an "until sequence" clause added) worked exactly as expected!
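If you need a value for the "until sequence" clause, the latest archived sequence per thread can be found on the target with something like this (a sketch):

select thread#, max(sequence#) as last_archived_seq
from   v$archived_log
group  by thread#
order  by thread#;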

So it looks like a wee bug.  If you've got a database in Archivelog mode, but you perform an offline backup, RMAN seems to get confused and have trouble deciding which backups to restore.

We know it's not a general problem with offline backups, as we've duplicated DBs in Noarchivelog mode on this system before.

Monday 28 March 2011

When is DDL not DDL?

If you ever thought that the LAST_DDL_TIME on a table only tells you the last time that DDL was performed directly upon that table, then think again.

We recently noticed some tables which appeared to have been recently changed, yet we couldn't find any related DDL changes.

Then we remembered that we regularly rebuild a handful of indexes (once a week, on Sunday morning) for performance/plan-stability reasons.

Could it be that rebuilding an index resets the LAST_DDL_TIME for the index's table?

Let's find out.

SQL> create table poo (x date);

Table created.

SQL> create index poodex on poo(x);

Index created.

SQL> select object_name,
  2         to_char(last_ddl_time, 'DD-Mon-YYYY HH24:MI:ss')
  3  from dba_objects where object_name like 'POO%';

OBJECT_NAME TO_CHAR(LAST_DDL_TIM
----------- --------------------
POO         22-Mar-2011 10:09:51
POODEX      22-Mar-2011 10:09:51

SQL> alter index poodex rebuild online;

Index altered.

SQL> select object_name,
  2         to_char(last_ddl_time, 'DD-Mon-YYYY HH24:MI:ss')
  3  from dba_objects where object_name like 'POO%';

OBJECT_NAME TO_CHAR(LAST_DDL_TIM
----------- --------------------
POO         22-Mar-2011 10:11:31
POODEX      22-Mar-2011 10:11:31

Yup! In a surprising (to me) result, it looks as though rebuilding an index does indeed reset LAST_DDL_TIME on the table, even though we are not performing DDL on the table directly.

In my opinion, this is a misleading result.

Thanks to Matt Penny for doing all the work and investigation and telling me to publish the results!

EDIT:  Thanks to Neil below for suggestions of other events which may or may not cause this behaviour.

I have tested creating, amending, re-compiling and dropping a procedure, but this does not cause a change in LAST_DDL_TIME for the dependent table.
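For reference, the @lastddl script used below isn't shown; it is presumably something along these lines (reconstructed from the column headings, so treat it as a sketch):

select object_name,
       to_char(last_ddl_time, 'DD-Mon-YYYY HH24:MI:SS')
from   dba_objects
where  object_name like 'FOO%'
order  by object_name;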

Example as follows:

14:16:55 PETE @ ETEST1D > create table foo (foocol number(5));

Table created.

14:17:20 PETE @ ETEST1D > @lastddl

OBJECT_NAME                    TO_CHAR(LAST_DDL_TIME,'DD-MON-
------------------------------ ------------------------------
FOO                            28-Mar-2011 14:17:20

14:18:35 PETE @ ETEST1D > create procedure fooproc is
14:19:13   2  v_var varchar2(10);
14:19:26   3  begin
14:19:31   4  null;
14:19:38   5  exception
14:19:42   6  when others then null;
14:19:45   7  end;
14:19:47   8  /

Procedure created.

14:19:50 PETE @ ETEST1D > @lastddl

OBJECT_NAME                    TO_CHAR(LAST_DDL_TIME,'DD-MON-
------------------------------ ------------------------------
FOO                            28-Mar-2011 14:17:20
FOOPROC                        28-Mar-2011 14:19:50

14:20:06 PETE @ ETEST1D > create or replace procedure fooproc is
14:20:31   2  v_var varchar2(10);
14:20:34   3  begin
14:20:36   4  select to_char(foocol)
14:21:02   5  into v_var
14:21:07   6  from foo;
14:21:14   7  exception
14:21:17   8  when others then null;
14:21:18   9  end;
14:21:19  10  /

Procedure created.

14:21:20 PETE @ ETEST1D > @lastddl

OBJECT_NAME                    TO_CHAR(LAST_DDL_TIME,'DD-MON-
------------------------------ ------------------------------
FOO                            28-Mar-2011 14:17:20
FOOPROC                        28-Mar-2011 14:21:20

14:21:31 PETE @ ETEST1D > alter procedure fooproc compile;

Procedure altered.

14:22:07 PETE @ ETEST1D > @lastddl

OBJECT_NAME                    TO_CHAR(LAST_DDL_TIME,'DD-MON-
------------------------------ ------------------------------
FOO                            28-Mar-2011 14:17:20
FOOPROC                        28-Mar-2011 14:22:07

14:22:13 PETE @ ETEST1D > drop procedure fooproc;

Procedure dropped.

14:22:40 PETE @ ETEST1D > @lastddl

OBJECT_NAME                    TO_CHAR(LAST_DDL_TIME,'DD-MON-
------------------------------ ------------------------------
FOO                            28-Mar-2011 14:17:20

Despite creating and dropping the procedure, the LAST_DDL_TIME of the dependent table is not changed. Exactly as expected.

Now trying with a comment (which is a DDL command):

14:24:20 PETE @ ETEST1D > comment on table foo is 'FOO is a test table';

Comment created.

14:24:25 PETE @ ETEST1D > @lastddl

OBJECT_NAME                    TO_CHAR(LAST_DDL_TIME,'DD-MON-
------------------------------ ------------------------------
FOO                            28-Mar-2011 14:24:25

Again - as expected.  A DDL command against the table changes the LAST_DDL_TIME.

This makes it all the more surprising that running a DDL command against an index will change LAST_DDL_TIME for its parent table.

Thursday 15 July 2010

Table Comparison Signature

We've been working on a data archiving project to move old data from the production schema in one database into an archive schema in a different tablespace, on a remote server.

We copy data from a production table (let's call it PROD.FINANCE_TAB) into a pre-existing table in an archive schema (e.g. ARCH.FINANCE_TAB).  The fact that they are in a different database is actually irrelevant to the problem at hand.

If PROD.FINANCE_TAB were to change structure (due to an added column or a datatype change), the archive transaction may fail.  So we need to test whether the table structure is the same in both schemas.

I Googled to take a look at how people usually do this, and the result isn't pretty.  It usually takes three separate queries: one to find tables with the same columns but different datatypes or sizes, and two others to find tables with extra or missing columns.

This is cumbersome, but at least shows you what the differences are. In our case, though, we don't care what's different.  Just knowing that there is a difference is enough for us to stop processing the archive run and return control to the DBA.

So we came up with the following query which builds a quasi-hash value for the table structure:
select sum(
          ora_hash(COLUMN_NAME||
                   DATA_TYPE||
                   to_char(NVL(DATA_LENGTH,0))||
                   to_char(NVL(DATA_SCALE,0))||
                   to_char(NVL(DATA_PRECISION,0))
                  )
          ) t_signature
from all_tab_columns
where table_name = '&table_name'
and owner = '&owner';
           t_signature
----------------------
          141887801613
This signature is effectively unique to the table structure (ORA_HASH collisions are theoretically possible, but vanishingly unlikely in practice).  It returns a different value if a column is added or dropped, or if its datatype, size, scale or precision changes.

We based a function on the query, which we could execute for each archive table in both the PROD and ARCHIVE schemas.  If the signatures matched, we could be confident that the table structure had not changed.
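For the record, a sketch of such a function (same quasi-hash expression as the query; the name and exact shape are illustrative only):

create or replace function table_signature (
    p_owner      in varchar2,
    p_table_name in varchar2
) return number
is
    l_sig number;
begin
    -- Sum of per-column hashes, so the result is independent of column order
    select sum(ora_hash(column_name ||
                        data_type ||
                        to_char(nvl(data_length, 0)) ||
                        to_char(nvl(data_scale, 0)) ||
                        to_char(nvl(data_precision, 0))))
    into   l_sig
    from   all_tab_columns
    where  owner = p_owner
    and    table_name = p_table_name;

    return l_sig;   -- NULL if the table has no visible columns
end table_signature;
/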

Tuesday 23 December 2008

RMAN Duplicate Gotchas - Part 2: Standby DB Restores and Redo Log Groups

In my previous post I discussed identifying which media you would need to use when performing a restore in order to duplicate a database via RMAN.

In this post, I'll discuss a couple of other issues that you may come across; although, admittedly, they are fairly uncommon.

Standby DB Restores

Like a lot of other sites, we maintain a physical standby database, managed using Dataguard, ready to use as a failover instance in case of problems with the main RAC database. This database is the source for all backups, to avoid overloading the production servers. This leads to a small gotcha when performing any restores from the production database to create a duplicate or clone DB.

When backing up, we connect, as expected, to the standby database and the RMAN repository catalog, as below:

rman catalog=rman/password@catalog_db target=sys/passwd@standby_db

RMAN then backs up the file contents from the standby database location, as expected, but because the database has the same DBID as the production database, the backups are marked as having the file names of the production service.

When you come to perform the RMAN duplicate, using the standby DB as the target will not work correctly. When attempting to create the duplicate database's controlfile, RMAN will barf, saying that it cannot use a standby controlfile as the source for a duplicate. You therefore have to connect to your production server directly to run the DUPLICATE command:

rman catalog=rman/password@catalog_db target=sys/passwd@prod_db auxiliary=sys/passwd@clone_db

In addition, you will likely need to amend the parameters which are passed to your tape backup software to ensure that it knows where to find the backup files. If, like us, you have separate backup servers for production and other environments, you will need to pass in the relevant policy, backup server and original backup client names which were used to perform the backup:

allocate channel t1 type sbt;
allocate auxiliary channel ta1 type sbt;
allocate channel d1 type disk;
allocate auxiliary channel da1 type disk;
send 'NB_ORA_POLICY=PROD_POLICY,NB_ORA_SERV=prod_bkup_server,
NB_ORA_CLIENT=prod_db_server';

This also means, as seen above, that you will need to allocate channels (and auxiliary channels) manually, rather than relying on any default channels you may have defined in your clone database environment.

Redo Log Groups

Oracle allows you to create almost any number of redo log groups on a database and to number them with any numbering scheme you fancy. At our site, the production groups are numbered with a 3-digit scheme (101, 201, 301, etc.), with each group containing multiple log members.

Unfortunately, there is a 'feature' which will crash your DUPLICATE in this particular case. RMAN attempts to re-create the redo log groups from the target system, but the code obviously cannot cope with redo log group numbers which are more than two digits wide. We are therefore forced to include the LOGFILE clause with the DUPLICATE command, to re-number the groups and allow the command to complete:

duplicate target database to CLONEDB
until scn 36511896776
logfile GROUP 1 ('/u01/oradata/CLONEDB/redo01_1.dbf',
'/u02/oradata/CLONEDB/redo01_2.dbf') SIZE 500M,
GROUP 2 ('/u01/oradata/CLONEDB/redo02_1.dbf',
'/u02/oradata/CLONEDB/redo02_2.dbf') SIZE 500M,
GROUP 3 ('/u01/oradata/CLONEDB/redo03_1.dbf',
'/u02/oradata/CLONEDB/redo03_2.dbf') SIZE 500M,
GROUP 4 ('/u01/oradata/CLONEDB/redo04_1.dbf',
'/u02/oradata/CLONEDB/redo04_2.dbf') SIZE 500M REUSE;

If you forget to add the LOGFILE clause first time around (as I did), don't despair. The database restore will have already completed. Simply re-run the DUPLICATE command with the additional clause - RMAN knows that the datafiles are now in place and things should pick-up from there.
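As an aside, it's worth checking the target's group numbering before you start, so you know whether you'll hit this. A quick look at V$LOG on the target does it (a sketch):

select group#, thread#, members,
       bytes / 1024 / 1024 as size_mb
from   v$log
order  by group#;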

Monday 1 December 2008

RMAN Duplicate Gotchas - Part 1: Identifying the Media

I've recently been doing quite a few RMAN Duplicate Database jobs, in order to create dev and test systems as copies of our production server.

The production database is a 2-node RAC cluster, but each of our test systems is a single-instance database. This causes one or two odd issues which aren't always obvious until you've hit them at least once.

In this post I'll discuss the first tricky issue which I came across: Identifying the Media.

At my current job (as in most IT departments), each day the most recent backup tapes are removed from the tape robot and sent off-site to a secure storage facility. The company I work for doesn't have the resources to maintain duplexed tapes (i.e. one on-site and one off-site) and, at present, for reasons too complicated to go into here, the production system doesn't have a flash recovery area, so the backups are written to tape, and tape only.

All this means that when a test database refresh is requested, it is critical to identify the set of tapes on which the required backups reside. So, once you've decided to which point of time you wish to restore, it's necessary to identify exactly which tapes you need to recall in order to load into the robot.

1. Log into RMAN (Target and Catalog) and list the backups you're interested in, e.g.:

rman catalog=rman/password@catalog_db target

RMAN> list backup;

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
1063429 1.87G SBT_TAPE 00:03:36 26-NOV-08
BP Key: 1063435 Status: AVAILABLE Compressed: NO Tag: TAG20081126T020905
Handle: m9k0kn62_1_1 Media:

List of Archived Logs in backup set 1063429
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 58271 36469897611 25-NOV-08 36470958889 25-NOV-08
1 58272 36470958889 25-NOV-08 36472168625 25-NOV-08
1 58273 36472168625 25-NOV-08 36472947809 25-NOV-08
1 58274 36472947809 25-NOV-08 36473847271 25-NOV-08

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
1063430 1.86G SBT_TAPE 00:03:44 26-NOV-08
BP Key: 1063436 Status: AVAILABLE Compressed: NO Tag: TAG20081126T020905
Handle: mak0kn62_1_1 Media:

List of Archived Logs in backup set 1063430
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
2 61976 36471707453 25-NOV-08 36472752913 25-NOV-08
2 61977 36472752913 25-NOV-08 36473892837 25-NOV-08
2 61978 36473892837 25-NOV-08 36475824974 25-NOV-08
2 61979 36475824974 25-NOV-08 36477342218 25-NOV-08

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1063514 Full 19.00M SBT_TAPE 00:00:30 26-NOV-08
BP Key: 1063521 Status: AVAILABLE Compressed: NO Tag: TAG20081126T021254
Handle: c-2982994484-20081126-01 Media:
Standby Control File Included: Ckp SCN: 36476925621 Ckp time: 25-NOV-08

[It's interesting to note that the LIST BACKUP command in RMAN returns a label (the empty "Media:" field above) indicating that a media reference should be displayed, but I have yet to see any information listed there in any recent version of Oracle (9i or 10g, on Windows or Linux).]

You can use the COMPLETED AFTER and/or COMPLETED BEFORE clauses to narrow down the list of displayed backup sets. And don't forget to list the backups starting with the most recent full, or level-0, backup, including any incrementals.

Make a note of the backup TAGs that you wish to check. At this point it's probably also sensible to make a note of which SCN you want to restore to. I find it much easier to cross-reference an SCN with the backup report than an exact point-in-time.

2. Now, log into the RMAN catalog as the RMAN user:

sqlplus rman/password@catalog_db

3. Run the following SQL command. Enter a comma-separated, single-quoted list of tags as identified in step 1 (as many as you like - I've used two here, for simplicity):

select tag, bs_key, bp_key, start_time, media
from bp
where tag in(&list_of_tags.);

Enter value for list_of_tags: 'TAG20081126T020905','TAG20081126T021254'
old 3: where tag in(&list_of_tags.)
new 3: where tag in('TAG20081126T020905','TAG20081126T021254')

TAG BS_KEY BP_KEY START_TIM MEDIA
------------------ ------- ------- --------- ----------
TAG20081126T020905 1063426 1063432 26-NOV-08 000304
TAG20081126T020905 1063427 1063433 26-NOV-08 000310
TAG20081126T020905 1063428 1063434 26-NOV-08 000304
TAG20081126T020905 1063429 1063435 26-NOV-08 000304
TAG20081126T020905 1063430 1063436 26-NOV-08 000304
TAG20081126T021254 1063514 1063521 26-NOV-08 000304

Output is as above. The media column shows the label of the tape that the backup piece is stored on.
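Incidentally, if you'd rather not query the BP base table directly, the documented catalog view RC_BACKUP_PIECE appears to expose the same information (a sketch - check the view in your catalog version):

select tag, bs_key, bp_key, handle, media
from   rc_backup_piece
where  tag in ('TAG20081126T020905', 'TAG20081126T021254')
order  by bs_key;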

Now you know which tapes are going to be required for your restore, you can recall them from the storage company.

Next post I will consider the next "gotcha" on my list.