Archive for the ‘Backup and Recovery’ Category

Recover an Oracle Database with Missing Archived Logs

December 22, 2012

Scenario: Recover an Oracle database from a backup when some of the archived logs are missing. When we attempt recovery from such a state, the recovery process stops at some point and asks for the missing archived logs.

The assumption here is that we have exhausted all possible locations to find another good and valid copy or backup of the archivelog that we are looking for.

If the archivelog cannot be found in any of these locations, then the strategy for recovering and opening the database depends on the SCN (System Change Number) of the datafiles, as well as whether the log sequence# required for the recovery is still available in the online redo logs.

For the SCN of the datafiles, it is important to know the mode of the database when the datafiles were backed up; that is, whether the database was open, mounted, or cleanly shut down when the backup was taken.

If the datafiles are restored from an online or hot backup, which means that the database was open when the backup was taken, then we must apply at least the archivelog(s) or redolog(s) whose log sequence#s were generated between the beginning and the completion of the backup that was used to restore the datafiles.

However, if the datafiles are restored from an offline or cold backup taken after a clean shutdown (that is, the database was not open, or was only in nomount or mount mode, when the backup was taken), then the datafiles are already synchronized in terms of their SCN. In this situation, we can immediately open the database without applying any archivelogs, because the datafiles are already in a consistent state, unless there is a requirement to roll the database forward to a point in time after the backup was taken.

The key here is to ensure that all of the online datafiles are synchronized in terms of their SCN before we can normally open the database. Run the following SQL statement to determine whether the datafiles are synchronized. Take note that we query V$DATAFILE_HEADER, because we want the SCN recorded in the header of the physical datafile, and not V$DATAFILE, which derives its information from the controlfile.

select status, checkpoint_change#,
       to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
       count(*)
from v$datafile_header
group by status, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;

The query above must return one and only one row for the online datafiles, which means that they are already synchronized in terms of their SCN. If it returns more than one row for the online datafiles, then the datafiles are not yet synchronized, and we need to apply archivelog(s) or redolog(s) to synchronize them. By the way, take note of the CHECKPOINT_TIME in V$DATAFILE_HEADER, which indicates how far in time the datafiles have been recovered.

The query above may also return some offline datafiles. Ensure that all of the required datafiles are online, because we may not be able to recover an offline datafile after we open the database with RESETLOGS. Even though we can recover the database beyond RESETLOGS in Oracle 10g and later versions, due to the introduction of the format “%R” in the LOG_ARCHIVE_FORMAT, it is recommended that you bring the required datafiles online now rather than after the database is opened with RESETLOGS, to avoid any possible problems. However, in some cases we intentionally take datafile(s) offline, because we are doing a partial database restore, or perhaps we don’t need the contents of those datafiles.

You may run the following query to determine the offline datafiles:

select file#, name
from v$datafile
where file# in (select file#
                from v$datafile_header
                where status = 'OFFLINE');

You may issue the following SQL statement to change the status of the required datafile(s) from “OFFLINE” to “ONLINE”:

alter database datafile <file#> online;

If we are lucky, the required log sequence# is still available in the online redologs and the corresponding redolog member still physically exists on disk; in that case we may apply them instead of the archivelog. To confirm, issue the following query, which determines the redolog member(s) that you can apply to recover the database:

set echo on feedback on pagesize 100 numwidth 16
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

select LF.member, L.group#, L.thread#, L.sequence#, L.status,
       L.first_change#, L.first_time, DF.min_checkpoint_change#
from v$log L, v$logfile LF,
     (select min(checkpoint_change#) min_checkpoint_change#
      from v$datafile_header
      where status = 'ONLINE') DF
where LF.group# = L.group#
and L.first_change# >= DF.min_checkpoint_change#;

If the above query returns no rows, because V$DATABASE.CONTROLFILE_TYPE has a value of “BACKUP”, then try to apply each of the redolog members one at a time during the recovery. You may run the following query to determine the redolog members:

select * from v$logfile;

If you have tried to apply all of the online redolog members instead of an archivelog during the recovery, but you always received the ORA-00310 error, as shown in the example below, then the log sequence# required for recovery is no longer available in the online redolog.

ORA-00279: change 189189555 generated at 11/03/2007 09:27:46 needed for thread 1

ORA-00289: suggestion : +BACKUP

ORA-00280: change 189189555 for thread 1 is in sequence #428

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

+BACKUP/prmy/onlinelog/group_2.258.603422107

ORA-00310: archived log contains sequence 503; sequence 428 required

ORA-00334: archived log: '+BACKUP/prmy/onlinelog/group_2.258.603422107'

If, after trying all of the possible solutions mentioned above, you still cannot open the database, because the archivelog required for recovery is missing, lost or corrupted, and the corresponding log sequence# is no longer available in the online redologs (they have already been overwritten during redolog switches), then we cannot open the database normally, since the datafiles are in an inconsistent state. The following are the three options available to allow you to open the database:

Option#1:

Force open the database by setting some hidden parameters in the init.ora. Note that you can only do this under the guidance of Oracle Support with a service request. As per Oracle Metalink, there is no 100% guarantee that this will open the database. However, once the database is opened, we must immediately rebuild it.

Database rebuild means doing the following, namely:

(1) perform a full-database export

(2) create a brand new and separate database

(3) import the recent export dump.

This option can be tedious and time consuming, but once we successfully open the new database, we can expect minimal, or perhaps no, data loss. Before you try this option, ensure that you have a good and valid backup of the current database.
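A minimal sketch of the rebuild steps, assuming the classic export/import utilities shipped with the database are used (the dump file name, log file names and connect strings below are placeholders):

```
-- 1. Full database export from the force-opened database (OS command line):
--      exp system/password full=y file=full_db.dmp log=full_exp.log

-- 2. Create a brand new, separate database (e.g. via DBCA or CREATE DATABASE).

-- 3. Import the export dump into the new database (OS command line):
--      imp system/password@NEWDB full=y file=full_db.dmp log=full_imp.log
```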

When the recovery process is initiated using a backup controlfile, it reports that recovery succeeded, but in order to open the database the datafiles must be in a consistent state.

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 9867098396261 generated at 03/21/2008 13:37:44 needed for
thread 1
ORA-00289: suggestion : /arcredo/XSCLFY/log1_648355446_2093.arc
ORA-00280: change 9867098396261 for thread 1 is in sequence #2093
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'
ORA-01112: media recovery not started

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'

Now, a hidden parameter, _ALLOW_RESETLOGS_CORRUPTION=TRUE, will allow us to open the database even though it is not properly recovered.

Force open the database by setting _ALLOW_RESETLOGS_CORRUPTION = TRUE. This allows us to open the database, but the instance may crash immediately due to undo tablespace corruption; check the alert log for details of the issue. To resolve the undo corruption, change undo_management to “MANUAL” in the init.ora, after which the database will open successfully. Once the database is up and running, create a new undo tablespace and drop the old corrupted undo tablespace. Then change undo_management back to “AUTO” and point undo_tablespace at the new undo tablespace in the init.ora.

Keep in mind that _ALLOW_RESETLOGS_CORRUPTION=TRUE allows the database to open without consistency checks. This may result in a corrupted database, so the database should be recreated afterwards.
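As a sketch only, and strictly under Oracle Support guidance, the sequence described above might look like the following; the undo tablespace names and datafile path are placeholders:

```
-- init.ora (unsupported hidden parameter; use only with Oracle Support):
--   _allow_resetlogs_corruption = TRUE
--   undo_management = MANUAL

SQL> startup mount;
SQL> recover database until cancel using backup controlfile;
SQL> alter database open resetlogs;

-- once the database is open, replace the corrupted undo tablespace:
SQL> create undo tablespace undotbs2
       datafile '/u100/oradata/XSCLFY/undotbs2_01.dbf' size 500m;
SQL> drop tablespace undotbs1 including contents and datafiles;

-- finally, set undo_management = AUTO and undo_tablespace = UNDOTBS2
-- in the init.ora and restart the instance.
```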


Option#2:

If you have a good and valid backup of the database, then restore the database from that backup and recover it by applying archivelogs up to the last one available. In this option, we can only recover the database up to the last archivelog that is applied, and any data after that is lost. If no archivelogs are applied at all, then we can only recover the database to the state of the restored backup. However, if we restored from an online or hot backup, then we may not be able to open the database at all, because we still need to apply the archivelogs generated during that backup in order to synchronize the SCN of the datafiles before we can open the database normally.
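With RMAN, this option might be sketched as follows, assuming the backups are cataloged and using log sequence 428 (taken from the earlier ORA-00310 example) as a placeholder for the first missing log; UNTIL SEQUENCE recovers up to, but not including, the given sequence:

```
RMAN> run {
        set until sequence 428 thread 1;
        restore database;
        recover database;
      }
RMAN> alter database open resetlogs;
```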

Option#3:

Manually extract the data using Oracle’s Data Unloader (DUL), which is performed by Oracle Field Support at the customer site on the next business day and for an extra charge. If the customer wants to pursue this approach, we need the complete name, phone number and email address of the person who has the authority to sign the work order on behalf of the customer.

Source / Reference:

How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted?

How to recover and open the database if the archive log required for recovery is missing.

Links:

Recovering an Oracle Database with Missing Archived Logs

Resolving missing archive log gap at Standby Database



Plan Oracle Backup Strategy

March 28, 2012

This article focuses on general guidelines that can help you decide:

  • when to perform database backups
  • which parts of a database you should back up
  • what tools Oracle provides for those backups
  • how to configure your database to improve its robustness and make backup and recovery easier

The specifics of the strategy must balance the needs of your restore strategy with questions of cost, resources, personnel and other factors.

1. Protecting the redundancy set

  •  The redundancy set should contain:
    1.  Last backup of the control file and all the datafiles
    2. All archived redo logs generated after the last backup was taken
    3. Duplicates of the current control file and online redo log files, generated by Oracle database multiplexing, operating system mirroring, or both
    4. Copies of configuration files such as the server parameter file, tnsnames.ora, and listener.ora
  • Do not store the redundancy set on the same disk that contains the datafiles, online logs and control files of the database.
  • Make use of a flash recovery area, on a separate device from the working set files and keep it as the ideal location to store backup files and mirrored files.
  • Multiplex the online redo log files and current control file at the database level and the multiplexed files should be on different disks mounted under different disk controllers. Also mirror the online redo logs and current control file at the operating system or hardware level although this is not a substitute for multiplexing at the database level.
  • Archive the redo logs to multiple locations, ideally on different disks if database is running in Archivelog mode.
  • Use operating system or hardware mirroring for the control file.
  • Use operating system or hardware mirroring for the primary datafiles if possible, to avoid having to perform media recovery for simple disk failures.
  • Keep at least one copy of the entire redundancy set including the most recent backup on disk.
  • If the target database is stored on a RAID device, then store the redundancy set on a set of disks that are not in the same RAID device.
  • If you store the redundancy set on tape, then maintain at least two copies of the data to protect against the risk of tape failure.

2. Flash Recovery Area

  • It is recommended that you take advantage of the flash recovery area to store as many backup- and recovery-related files as possible, including disk backups and archived redo logs.
  • Obsolete files no longer needed to meet the recoverability goals and files backed up to tape or other media become eligible for deletion and are deleted when space is needed.

3. ARCHIVELOG and NOARCHIVELOG Mode

  • Running the database in ARCHIVELOG mode is preferable to NOARCHIVELOG mode because it provides more flexible recovery options after a data loss, such as point-in-time recovery of the database or of some tablespaces.
  • In NOARCHIVELOG mode, you cannot perform online backups of the database, and you have to shut the database down cleanly before a backup is taken. Also, any data recovery technique that requires the archived redo logs cannot be used.
  • The stored archived redo logs must be managed such that limited disk space is allotted to them, and older logs that are no longer needed to meet recoverability goals are deleted.
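Switching a database from NOARCHIVELOG to ARCHIVELOG mode follows the usual pattern sketched below:

```
SQL> select log_mode from v$database;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;
```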

4. Oracle Flashback Features and Restore Points

  • Using the flashback features of Oracle improves the availability of the database when repairing the effects of unwanted database changes. The logical-level flashback features allow the objects not affected to remain available, and Flashback Database allows for faster rewind of the entire database than point-in-time recovery.
  • Incorporating Flashback Database or guaranteed restore points into the strategy requires enabling a flash recovery area, as well as creating guaranteed restore points at the right points in time, or configuring flashback logging.
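For example, a guaranteed restore point can be created and later used roughly as follows (the restore point name is a placeholder):

```
SQL> create restore point before_upgrade guarantee flashback database;

-- later, to rewind the entire database to that point:
SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to restore point before_upgrade;
SQL> alter database open resetlogs;
```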

5. Backup Retention Policy

  • RMAN automates the implementation of a backup retention policy, using the following commands:
    1. CONFIGURE RETENTION POLICY lets you set the retention policy that will apply to all of your database files by default.
    2. REPORT OBSOLETE lets you list backups currently on disk that are obsolete under the retention policy.
    3. DELETE OBSOLETE deletes the files which REPORT OBSOLETE lists as obsolete.
    4. CHANGE… KEEP lets you set a separate retention policy for specific backups, such as long-term backups kept for archival purposes.
  • Redundancy based retention policy
    1. In a redundancy-based retention policy, a number n is specified such that you always keep at least n distinct backups of each file in the database.
    2. A redundancy-based backup retention policy determines whether a backup is obsolete based on how many backups of a file are currently on disk.
  • Recovery window-based retention policy
    1. In a recovery window-based retention policy, a time interval in the past is specified, and you keep all backups required to perform point-in-time recovery to any point during that window.
    2. A recovery window-based retention policy lets you guarantee that you can perform point-in-time recovery to any point in the past, up to a number of days that you specify.
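In RMAN, the two kinds of retention policy are configured along these lines (the values 2 and 7 below are placeholders):

```
RMAN> configure retention policy to redundancy 2;
-- or, alternatively:
RMAN> configure retention policy to recovery window of 7 days;

RMAN> report obsolete;
RMAN> delete obsolete;
```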

6. Archiving Older Backups

  • An older backup of datafiles and archived logs is necessary for performing point-in-time recovery to a time before your most recent backup.
  • If your most recent backup is corrupt, you can still recover your database using an older backup and the complete set of archived logs since that older backup.
  • Keep a copy of the database for archival purposes.

7. Frequency of backups

  • Base the frequency of backups on the rate or frequency of database changes such as:
    1. Addition and deletion of tables
    2. Insertions and deletions of rows in existing tables
    3. Updates to data within tables
  • The more frequently your database is updated, the more often you should perform database backups.
  • If database updates are relatively infrequent, then you can make whole database backups infrequently and supplement them with incremental backups.
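One common way to combine infrequent whole backups with incrementals in RMAN is sketched below:

```
-- occasional level 0 backup (the baseline, equivalent to a whole backup):
RMAN> backup incremental level 0 database;

-- more frequent level 1 backups, containing only changed blocks:
RMAN> backup incremental level 1 database;
```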

8.  Backups before and after structural changes

  • If you make any of the following structural changes, then perform a backup of the appropriate portion of your database immediately before and after completing the change:
    1. Create or drop a tablespace.
    2. Add or rename a datafile in an existing tablespace.
    3. Add, rename, or drop an online redo log group or member.
  • If you are in NOARCHIVELOG mode, then you must shut down the database and perform a consistent whole database backup after any such change.
  • If you are running in ARCHIVELOG mode, then you must make a control file backup after any such change, using either RMAN’s BACKUP CONTROLFILE command or the SQL ALTER DATABASE BACKUP CONTROLFILE statement.
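A control file backup after such a change can be taken in either of these ways (the backup path below is a placeholder):

```
RMAN> backup current controlfile;

-- or, from SQL*Plus:
SQL> alter database backup controlfile to '/backup/control.bkp';
SQL> alter database backup controlfile to trace;
```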

9. Scheduling Backups for Frequently-Updated Tablespaces

  • If you run in ARCHIVELOG mode, then you can back up an individual tablespace or even a single datafile. You might want to do this for one or more tablespaces that are updated much more often than the rest of your database, as is sometimes the case for the SYSTEM tablespace and automatic undo tablespaces.

10. Backing Up after NOLOGGING Operations

  • When a direct path load is performed to populate a database, no redo data is logged for those database changes. You cannot recover these changes after a restore from backup using conventional media recovery.
  • Likewise, when tables and indexes are created as NOLOGGING, the database does not log redo data for these objects, which means that you cannot recover these objects from existing backups. Therefore, you should back up your datafiles after operations for which no redo data is logged.
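Datafiles affected by NOLOGGING operations since their last backup can be identified as sketched below:

```
SQL> select file#, unrecoverable_change#,
            to_char(unrecoverable_time, 'DD-MON-YYYY HH24:MI:SS') as unrecoverable_time
     from v$datafile
     where unrecoverable_change# > 0;

-- RMAN offers a similar report:
RMAN> report unrecoverable;
```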

11. Exporting Data for Added Protection and Flexibility

  • Oracle database import and export utilities are used to export database objects (tables, stored procedures, and so forth) from databases to be stored as files, and re-import objects from those files.
  • An export provides a logical-level snapshot of the exported objects at the time of the export, as a binary file that can be imported back into the source database or some other database.

12. Preventing the Backup of Online Redo Logs

  • Online redo logs, unlike archived logs, should never be backed up. The chief danger associated with having backups of online redo logs is that you may accidentally restore those backups without meaning to, and corrupt your database.
  • The best method for protecting the online logs against media failure is to multiplex them, with multiple log members in each group, on different disks attached to different disk controllers.
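Adding a second member to each online log group can be done along these lines (the group numbers and file names are placeholders):

```
SQL> alter database add logfile member '/disk2/oradata/orcl/redo01b.log' to group 1;
SQL> alter database add logfile member '/disk2/oradata/orcl/redo02b.log' to group 2;

-- verify the result:
SQL> select group#, member from v$logfile order by group#;
```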

13. Keeping Records of the Hardware and Software Configuration of the Server

  • You should have the following documentation about the hardware configuration:
    1. The name, make, and model of the machine that hosts the database
    2. The version and patch of the operating system
    3. The number of disks and disk controllers
    4. The disk capacity and free space
    5. The names of all datafiles
    6. The name and version of the media management software (if you use a third-party media manager)
  • You should also keep the following documentation about the software configuration:
    1. The name of the database instance (SID)
    2. The database identifier (DBID)
    3. The version and patch release of the Oracle database server
    4. The version and patch release of the networking software
    5. The method (RMAN or user-managed) and frequency of database backups
    6. The method of restore and recovery (RMAN or user-managed)

Article based on the Oracle Doc – Planning Backup Strategy

Quick RMAN Backup

November 20, 2011

rman> backup database plus archivelog;

rman> backup current controlfile;

rman> restore database validate;

rman> delete noprompt obsolete;

rman> crosscheck backup;

Also back up the password file, tnsnames.ora and listener.ora.

RMAN – Database Backup

October 17, 2011

Oracle Backup and Recovery on Windows, Part I – Database Backups using RMAN:

(Article extracted from Orafusion.com)

Note: The scripts listed below should be customized and tested thoroughly before implementation on a production system. Please read through the Oracle backup documentation before implementing your backups. References have been listed at the end of this article.

1. Introduction:

This article is the first in a series of three, introducing Oracle’s RMAN (Recovery MANager) backup and recovery tool to novice DBAs. The articles focus on the Windows operating system, but can be easily adapted to other platforms. RMAN will work in the same manner on all operating systems (OS) that Oracle runs on. However, OS functions such as script scheduling etc. obviously vary from platform to platform. The present article describes the implementation of a simple Oracle backup strategy using RMAN. The next article will discuss some recovery scenarios within this backup framework, and the final article will deal with disaster recovery.

We begin with a statement of assumptions regarding the database environment and business requirements regarding data recoverability. This serves to anchor the subsequent discussion in a definite context. We then briefly discuss the files relevant to operation of Oracle. Finally, we move on to the main topic – a discussion of RMAN and how it can be used to backup a database.

Following are the assumptions pertaining to the database environment and business expectations regarding data recovery:

  • The database is hosted on a Windows NT / 2000 / 2003 server.
  • The database software is Oracle 9i.
  • Users expect the database to be available round the clock, so the database cannot be shut down for backups. This makes an online (hot) backup mandatory.
  • In case of a disaster, where the server is destroyed, the business expects us to be able to recover all transactions up to the previous working day – i.e. a maximum of 24 hours of data loss is acceptable. (We will discuss options for doing better than this in the third article of this series.)
  • The database is relatively small – say 1 to 30 GB in size.
  • Specified directories on the host server are backed up to tape every night using an OS backup utility.

We will configure RMAN to backup to disk. These backups will then be copied to tape by the OS backup utility. For completeness we note that RMAN can be configured to backup the database directly to tape, via an interface called MML (Media Management Library) which integrates third party backup products with RMAN. MML can be somewhat fiddly to setup, and the details depend on the third-party product used. We will not discuss RMAN backups to tape in the present article.

The server has three independent disks – named C:, D: and E:. The contents of the drives are as follows:

  • C: – The operating system and database software (Oracle home is c:\oracle\ora92. The oracle home directory is normally referred to as ORACLE_HOME). We will also keep a copy of the controlfile and a group of online redo logs on this disk. This isn’t ideal, but will have to do because of the limited number of disks we have. Database files and their function are described in the next section.
  • D: – All datafiles, a copy of the controlfile and one group of online redo logs.
  • E: – A copy of the controlfile, all archived logs and database backups. RMAN will be configured to backup to this drive. Note that the backups could reside on D: instead.

All disks should be mirrored using some form of RAID technology. Note that the above layout isn’t ideal – we’re limited by the number of disks available. If you have more disks you could, and probably should, configure a better layout.

2. Oracle database files:

In order to perform backups effectively, it is necessary to understand a bit about the various files that comprise a database. This section describes the files that make up an Oracle database. The descriptions given here are very brief. Please check the Oracle Administrator’s Guide for further details.

Oracle requires the following files for its operation:

1. Datafiles: These hold logical units (called blocks) that make up the database. The SYSTEM tablespace datafile, which holds the data dictionary and other system-owned database objects, is required for database operation. The database can operate without non-SYSTEM datafiles as long as no data is requested from or modified in blocks within those files. In our case all datafiles are located on D:.

Data files can be backed up by RMAN.

2. Online redo logs: These files hold transaction information that is necessary for transaction rollback, and for instance recovery in case of a database crash. Since these files hold critical information, it is wise to maintain two or more identical copies of these files on independent disks (multiplexing). Each set of copies is known as a redo log group. The members of a group should be on separate disks. In our case we maintain three groups with two members per group – each group has one member on D: and the other on E: (instead of E:, one could maintain the second set on C:).

When writing to the logs, Oracle cycles through a group at a time in a circular manner – i.e. once all groups have been written to, the first one is reused, overwriting its previous contents. Hence the importance of archiving filled logs as the system switches from one log to the next – see item (4) below.

Note that online logs should NEVER be backed up in a hot backup. If you do back them up, there is a danger that you may overwrite your current (valid) logs with the backed-up ones, which are out of date and therefore invalid. Never, ever back up your redo logs in a hot backup scenario.

3. Control file: This file holds, among other things, the physical structure of the database (location of all files), the current database state (via a system change number, or SCN – a monotonically increasing number that is incremented on every commit), and information regarding backups taken by RMAN. The control files, being critical to database operation, should also be multiplexed. We will maintain three copies, one each on the C:, D: and E: drives.

Controlfiles can be optionally backed up by RMAN.

4. Archived redo logs: These are archived copies of online redo logs, created after a switch occurs from one online log to the next one in the group. Archiving ensures that a complete transaction history is maintained. Once an online log is archived, it can be overwritten without any loss of transaction history. Archive logs are required for online backups, so they are mandatory in our scenario.

We maintain a single copy of these on E: drive. This is a weak point in the present backup strategy (only one copy of archive logs). There are ways to do better – one can maintain up to ten independent sets of archive logs at different physical locations – check the documentation for details.

Note that online logs are archived only when the database operates in ARCHIVELOG mode. The default operation mode is NOARCHIVELOG – where online redo logs are not archived before being overwritten. It would take us too far afield to discuss this any further – please check the Oracle Administrator’s Guide for instructions on configuring your database to archivelog mode.

Archive logs can be optionally backed up by RMAN.

5. Server parameter file (also known as Stored parameter file): This file contains initialization parameters that are used to configure the Oracle instance on startup. We maintain this file in its default location (ORACLE_HOME\database).

The server parameter file can be optionally backed up by RMAN.

6. Password file: This file stores credentials of users who are allowed to log on to Oracle as privileged (sysdba) users without having to supply an Oracle password. All users belonging to the Windows OS group ORA_DBA are included in the password file. Users with sysdba privileges are allowed to perform database startup, shutdown and backups (among other administrative operations). Please check the Oracle Administrator’s guide for details on sysdba and sysoper privileges. We maintain the password file in its default location (ORACLE_HOME\database).

This file is not backed up by RMAN, and must be backed up via OS commands.

7. Networking files: These files (tnsnames.ora, listener.ora and sqlnet.ora) are Oracle network configuration files. They are maintained in their standard location – ORACLE_HOME\network\admin.

These files are not backed up by RMAN, and must be backed up via OS commands.

Additionally, it is a good idea to store a copy of all database creation scripts in the backup directory. This will help in determining the correct database file directory structure in case of a disaster. This, however, is not absolutely necessary as file placement information can be retrieved from the backup controlfile. More on this in the third article of this series.

3. OS backup utilities vs. RMAN- a brief discussion:

OS Backup utilities copy OS files from disk to tape. By themselves they are not useful for database backups, unless the database is closed.

The reason they cannot be used to backup open databases is as follows: If the database is open, it is possible that contents of a datafile block are being modified at the very instant that the block is being copied by the utility. In such a situation the copy of the block would be inconsistent, and hence useless for recovery. The way to avoid this is to put a tablespace into a special “backup mode” before copying the datafiles associated with the tablespace. Such OS level backups, also called user managed backups, are the traditional (non-RMAN) way to backup Oracle databases. When a tablespace is in backup mode, the SCN, which is marked in the header of each datafile in the tablespace, is frozen until the tablespace is taken out of backup mode. Additionally, whenever a data block in the tablespace is modified, the entire block is copied to the online redo log (in contrast to only modified rows being copied when the tablespace is not in backup mode). This causes a huge increase in the redo generated, which is a major disadvantage of user managed backups.

One can perform user managed backups of a database using homegrown scripts. Such a script would cycle through all tablespaces in the database, putting each tablespace in backup mode, copying the associated datafiles and finally switching the tablespace out of backup mode. A fragment of a user managed hot backup script for Windows might read as follows:

--put USERS tablespace in backup mode
 alter tablespace users begin backup;
 --copy files belonging to USERS tablespace
 host copy d:\oracle\ora92\orcl\users.dbf e:\backup;
 --take USERS tablespace out of backup mode
 alter tablespace users end backup;
 --continue with other tablespaces and then copy other oracle files.

The above would be invoked from sqlplus, via an appropriately scripted batch file.

Most OS backup utility vendors provide optional add-ons that automate the process of user managed backups. These add-ons, which usually do no more than the script shown above, are sold as extra cost add-ons to the base backup software.

RMAN is a database backup utility that comes with the Oracle database, at no extra cost. As such, it is aware of the internal structure of Oracle datafiles and controlfiles, and knows how to take consistent copies of data blocks even as they are being written to. Furthermore, it does this without putting tablespaces in backup mode. Therefore RMAN does not cause a massive increase in generated redo. Another advantage of using RMAN is that it backs up only those blocks that have held or currently hold data. Hence RMAN backups of datafiles are generally smaller than the datafiles themselves. In contrast, OS copies of datafiles have the same size as the original datafiles. Finally, with RMAN backups it is possible to recover individual blocks in case of block corruption of datafiles. Considering the above, it makes good sense to use RMAN instead of vendor supplied add-ons or homegrown user managed backup scripts.

4. Configuring RMAN:

RMAN is a command line utility that is installed as a part of a standard database installation. Note that RMAN is only a command interface to the database – the actual backup is performed by a dedicated server process on the Oracle database.

RMAN can be invoked from the command line on the database host machine as follows:

C:\>rman target /
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1036216947)
RMAN>

The first line is the one we type; the remaining ones are feedback from execution of the command. The net result is to leave us connected to the target database – the database we want to back up – at the RMAN> prompt, indicating that RMAN is ready for further work. Here we have invoked RMAN on the server, and have logged on to the server using an account that belongs to the ORA_DBA OS group. As described earlier, this enables us to connect to the target database (as sysdba – this is implicit) without having to supply a password. Note that on Windows, one must also set SQLNET.AUTHENTICATION_SERVICES=(NTS) in the sqlnet.ora file in order to connect using OS authentication as above.

At this point a digression is in order. RMAN can be run in two modes – catalog and nocatalog. In the former, backup information and RMAN scripts are stored in another database known as the RMAN catalog. In the latter, RMAN stores backup information in the target database controlfile. Catalog mode is more flexible, but requires the maintenance of a separate database on another machine (there’s no point in creating the RMAN catalog on the database to be backed up!). Nocatalog mode has the advantage of not needing a separate database, but places more responsibility on the controlfile. We will use nocatalog mode in our discussion, as this is a perfectly valid choice for sites with a small number of databases.
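For comparison, a catalog mode session connects to both the target database and the catalog database. The connect string below is purely illustrative:

```
C:\>rman target / catalog rman_owner/rman_pwd@catdb
```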

RMAN can be configured through various persistent parameters. Note that persistent parameters can be configured only in Oracle 9i and later. The current configuration can be seen via the “show all” command:

RMAN> show all;
RMAN configuration parameters are:
 CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
 CONFIGURE BACKUP OPTIMIZATION OFF; # default
 CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
 CONFIGURE CONTROLFILE AUTOBACKUP ON;
 CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'e:\backup\ctl_sp_bak_%F';
 CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
 CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
 CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
 CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
 CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
 CONFIGURE MAXSETSIZE TO UNLIMITED; # default
 CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFORCL.ORA'; # default
RMAN>

The reader is referred to the RMAN documentation for a detailed explanation of the options attached to each of these parameters. Here we will discuss only those of relevance to our backup requirements.

Retention Policy: This instructs RMAN on the backups that are eligible for deletion. For example: A retention policy with redundancy 2 would mean that two backups – the latest and the one prior to that – should be retained. All other backups are candidates for deletion. Retention policy can also be configured based on time – check the docs for details on this option.
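As an illustration, a time-based policy guaranteeing recoverability to any point within the last seven days could be set as follows (a sketch; pick a window appropriate to your site):

```
RMAN> configure retention policy to recovery window of 7 days;
```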

Default Device Type: This can be “disk” or “sbt” (system backup to tape). We will backup to disk and then have our OS backup utility copy the completed backup, and other supporting files, to tape.

Controlfile Autobackup: This can be set to “on” or “off”. When set to “on”, RMAN takes a backup of the controlfile AND server parameter file each time a backup is performed. Note that “off” is the default.

Controlfile Autobackup Format: This tells RMAN where the controlfile backup is to be stored. The “%F” in the file name instructs RMAN to append the database identifier and backup timestamp to the backup filename. The database identifier, or DBID, is a unique integer identifier for the database. We have configured RMAN to store controlfile backups in the directory e:\backup.
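Since the DBID is needed to locate the controlfile autobackup after a complete loss of the database, it is worth recording it somewhere safe ahead of time. It can be queried from the target database:

```sql
-- note the DBID down; it also appears in the RMAN banner on connect
SELECT dbid, name FROM v$database;
```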

Parallelism: This tells RMAN how many server processes you want dedicated to performing the backups.

Device Type Format: This specifies the location and name of the backup files. We need to specify the format for each channel. The “%U” ensures that Oracle appends a unique identifier to the backup file name. The MAXPIECESIZE attribute sets a maximum file size for each file in the backup set. We have configured RMAN to store backups in the directory e:\backup.

Any of the above parameters can be changed using the commands displayed by the “show all” command. For example, one can turn off controlfile autobackups by issuing:

RMAN> configure controlfile autobackup off;
using target database controlfile instead of recovery catalog
 old RMAN configuration parameters:
 CONFIGURE CONTROLFILE AUTOBACKUP ON;
 new RMAN configuration parameters:
 CONFIGURE CONTROLFILE AUTOBACKUP OFF;
 new RMAN configuration parameters are successfully stored
RMAN>

5. Scripting the backup:

With the background stuff out of the way, we now move on to the actual backup. We will write a simple script that will backup our database, verify that the backup can be restored and then delete all obsolete backups and archive logs (based on a redundancy of 2, as discussed above). The Windows scheduler will be used to run the script at a time of our choosing.

An Aside: Before we move on, it is worth stating that RMAN can perform full or incremental backups. Full backups, as their name suggests, are backups of all used data blocks in the datafiles. In contrast, incremental backups copy only those database blocks that have changed since the last higher level backup. It would take us too far afield to detail the intricacies of incremental backups – we refer you to the Oracle documentation for more details. For the case at hand, we can afford to perform full backups every night as the database is relatively small.
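For the record, an incremental scheme would typically pair an occasional level 0 (baseline) backup with frequent level 1 backups. The commands below are a sketch of that approach, not part of the script developed here:

```
# weekly: level 0 baseline (copies all used blocks)
backup incremental level 0 database;
# nightly: only blocks changed since the last level 0 or 1
backup incremental level 1 database;
```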

The backup script, which we store in a file named “rman_backup.rcv”, is very simple:

#contents of rman_backup.rcv. "#" denotes a comment line, and will be ignored by RMAN.
backup database plus archivelog;
restore database validate;
delete noprompt obsolete;
host 'copy C:\oracle\ora92\database\pwdorcl.ora e:\backup';
host 'copy C:\oracle\ora92\network\admin\tnsnames.ora e:\backup';
host 'copy C:\oracle\ora92\network\admin\listener.ora e:\backup';
exit;

The script backs up the database and all archive logs and then checks that the backup can be restored. After that it deletes backups according to the configured retention policy – the “noprompt” in the delete command instructs RMAN not to prompt us before deleting files. Finally it does an OS copy of the password file and the relevant network configuration files. The RMAN “host” command enables us to execute any operating system command (on Linux, for instance, we would use “cp” instead of “copy”). In the above script the database name is ORCL, hence the password file is pwdORCL.ora. You will need to adapt each of the “host ‘copy…” commands in the script to your specific directory structure and filenames. As an aside, it is worth pointing out that SQL commands can be executed from RMAN. A couple of examples:

sql 'alter system archive log current';
 sql "create pfile=''e:\backup\initORCL.ora'' from spfile";

The “sql” keyword tells RMAN what follows is to be interpreted as an SQL command. The actual SQL should be enclosed in single or double quotes. The latter is useful if the command contains single quotes, as in the second example above. Note: In the second example, the quotes enclosing the pfile path are two single quotes, and the quotes enclosing the entire command are double quotes.

The script, rman_backup.rcv, is invoked by the following one line batch file:

REM contents of rman_backup.bat
rman target / cmdfile rman_backup.rcv log rman_backup.log

The “target /” indicates that the script logs on to Oracle as sysdba via an OS account that belongs to the ORA_DBA group. The “cmdfile” option gives the name of the command file that RMAN should execute, in this case rman_backup.rcv. The “log” option tells RMAN that we want a transcript of the RMAN session stored in the file that follows the option – rman_backup.log in this case. Remember to check the log file after each backup for any errors that may have occurred. The log file is overwritten on each execution of the batch file, so it may be worth changing the name to include a unique identifier (such as a timestamp). The backup scripts could reside anywhere on the server, but it is best to keep them in e:\backup so that they are archived off to tape along with the backups.
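One way to get a timestamped log file name on Windows is to build it from the %date% variable (a sketch; the format of %date% depends on regional settings, so verify the substring offsets on your server):

```bat
REM rman_backup.bat with a date-stamped log file
set LOGNAME=rman_backup_%date:~-4%%date:~-10,2%%date:~-7,2%.log
rman target / cmdfile rman_backup.rcv log %LOGNAME%
```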

The next step is to schedule our batch file (rman_backup.bat) to run at the desired interval. This is done via the Windows Scheduled Task wizard, which is accessed through Control Panel > Scheduled Tasks > Add Scheduled Task.

Finally, it should be ensured that the entire backup directory (e:\backup) is copied to tape nightly, after the database backup has been completed. There is no need to backup any other Oracle related directory. The tapes must be stored offsite so that they aren’t destroyed in case the site is struck by disaster. In a disaster situation, we can recreate the database and then restore and recover data files (with up to a 24 hour data loss), using the backups that are on tape. The procedure for recovering from a disaster will be covered in the third article of this series. In case the database fails (say due to datafile corruption, for example) but the host server remains available, we can recover right up to the instant of failure using the backup on disk together with all archive logs since the backup and the current online redo logs. Some of these scenarios will be covered in the next article of this series.

6. Summary and Further Reading:

This article provides steps on setting up automated RMAN based backups of Oracle databases on Windows. As with all critical DBA tasks, scripts and commands described above should be customised to your requirements and tested thoroughly before implementation on your production systems.

In the interest of brevity, we have had to rush through some of the detail that is relevant to backup and recovery. The reader is therefore urged to read the pertinent Oracle documentation for complete coverage. The books of interest are:

Oracle 9i Backup and Recovery Concepts – This book discusses basics of Oracle backup and recovery.
Oracle 9i Recovery Manager User’s Guide – This book discusses backup and recovery using RMAN.
Oracle 9i Administrator’s Guide – discusses backup related issues such as how to put the database in ARCHIVELOG mode.

These books can be downloaded, free of charge, from Oracle’s documentation site. You will need to register with the Oracle Technology Network (OTN) to gain access to the documentation. Membership of OTN is free, and well worth it for Oracle professionals.


User Managed Hot Backup

October 12, 2011

A user managed backup is one taken without using RMAN or any other Oracle supplied backup tool; the database files are copied with operating system commands instead. The term hot backup refers to taking the backup while the database is open.

To use hot backups, the database should operate in ARCHIVELOG mode.

1. Check whether the database is in ARCHIVELOG mode or not

SQL> SELECT LOG_MODE FROM V$DATABASE;
LOG_MODE
——————-
ARCHIVELOG

If not, place the database in archive log mode. Note that ALTER DATABASE ARCHIVELOG requires the database to be mounted but not open, and a database closed with ALTER DATABASE CLOSE cannot be reopened in the same instance, so a clean restart is needed:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;

Display online log sequence:

SQL> archive log list;

Database log mode Archive Mode
Automatic archival Enabled
Archive destination E:\oracle\product\10.1.0\db_2\RDBMS
Oldest online log sequence 169
Next log sequence to archive 171
Current log sequence 171

2. Find the files that require backup using the following query:

SQL> SELECT NAME "FILES TO BACKUP" FROM V$DATAFILE
  2  UNION ALL
  3  SELECT NAME FROM V$CONTROLFILE
  4  UNION ALL
  5  SELECT MEMBER FROM V$LOGFILE
  6  UNION ALL
  7  SELECT VALUE FROM V$PARAMETER WHERE NAME='SPFILE';

FILES TO BACKUP
——————————————————-

D:\CRM\DATA\SYSTEM.DBF
D:\CRM\DATA\SYSAUX.DBF
D:\CRM\DATA\UNDOTBS.DBF
D:\CRM\DATA\CRM.DBF
D:\CRM\CONTROL\CRM_CTL1.CTL
D:\CRM\CONTROL\CRM_CTL2.CTL
D:\CRM\CONTROL\CRM_CTL3.CTL
D:\CRM\REDO\R1.LOG
D:\CRM\REDO\R2.LOG
D:\CRM\REDO\R3.LOG

10 rows selected.

An online tablespace must be placed in backup mode before its datafiles are copied. Check the status of the tablespaces:

SQL> SELECT S.STATUS, S.TABLESPACE_NAME "TABLESPACE", D.FILE_NAME "DATAFILE"
  2  FROM DBA_TABLESPACES S, DBA_DATA_FILES D
  3  WHERE S.TABLESPACE_NAME = D.TABLESPACE_NAME;

STATUS TABLESPACE DATAFILE
————– ———————– —————–

ONLINE SYSTEM D:\CRM\DATA\SYSTEM.DBF

ONLINE SYSAUX D:\CRM\DATA\SYSAUX.DBF

ONLINE UNDOTBS D:\CRM\DATA\UNDOTBS.DBF

ONLINE CRM D:\CRM\DATA\CRM.DBF

Determine datafile status for online tablespace backups:

To check whether a datafile is part of a current online tablespace backup, query the V$BACKUP view. This view is useful only for user-managed online tablespace backups, not offline tablespace backups or RMAN backups.

The V$BACKUP view is most useful when the database is open. It is also useful immediately after an instance failure because it shows the backup status of the files at the time of the failure. Use this information to determine whether you have left any tablespaces in backup mode.

V$BACKUP is not useful if the control file currently in use is a restored backup or a new control file created after the media failure occurred. A restored or re-created control file does not contain the information Oracle needs to fill V$BACKUP accurately. Also, if you have restored a backup of a file, this file’s STATUS in V$BACKUP reflects the backup status of the older version of the file, not the most current version. Thus, this view can contain misleading data about restored files.

SQL> SELECT * FROM V$BACKUP;

FILE# STATUS CHANGE# TIME
———- —————— ———- ———
1 NOT ACTIVE 1772260 30-SEP-11
2 NOT ACTIVE 1772260 30-SEP-11
3 NOT ACTIVE 1772260 30-SEP-11
4 NOT ACTIVE 1772260 30-SEP-11

SQL> SELECT 'ALTER TABLESPACE ' || TABLESPACE_NAME || ' BEGIN BACKUP;' "SCRIPT"
  2  FROM DBA_TABLESPACES
  3  WHERE STATUS NOT IN ('READ ONLY','OFFLINE') AND CONTENTS <> 'TEMPORARY';

SCRIPT
-----------------------------------------------------------------------------
ALTER TABLESPACE SYSTEM BEGIN BACKUP;
ALTER TABLESPACE SYSAUX BEGIN BACKUP;
ALTER TABLESPACE UNDOTBS BEGIN BACKUP;
ALTER TABLESPACE CRM BEGIN BACKUP;

(Temporary tablespaces are excluded because they cannot be placed in backup mode and do not need to be backed up.)

SQL> ALTER TABLESPACE SYSTEM BEGIN BACKUP;

Tablespace altered.

SQL> SELECT T.NAME AS "TABLESPACE", D.NAME AS "DATAFILE", S.STATUS
  2  FROM V$DATAFILE D, V$TABLESPACE T, V$BACKUP S
  3  WHERE D.TS#=T.TS# AND S.FILE#=D.FILE#
  4  AND S.STATUS='ACTIVE';

TABLESPACE DATAFILE STATUS
———————– —————— —————-
SYSTEM D:\CRM\DATA\SYSTEM.DBF ACTIVE

In the STATUS column, NOT ACTIVE indicates that the file is not currently in backup mode (that is, ALTER TABLESPACE … BEGIN BACKUP), whereas ACTIVE indicates that the file is currently in backup mode.

Copy the physical files associated with this tablespace to desired backup directory using OS commands / graphical utility.

C:\>COPY D:\CRM\DATA\SYSTEM.DBF E:\BACKUP_CRM\2011_OCTOBER_11
1 file(s) copied.

SQL> ALTER TABLESPACE SYSTEM END BACKUP;
Tablespace altered.
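The matching END BACKUP commands can be generated in a similar way, driven by which datafiles V$BACKUP reports as still in backup mode (a sketch; adjust to your environment):

```sql
-- generate END BACKUP only for tablespaces still in backup mode
SELECT DISTINCT 'ALTER TABLESPACE ' || t.name || ' END BACKUP;' AS script
FROM v$tablespace t, v$datafile d, v$backup b
WHERE t.ts# = d.ts# AND b.file# = d.file# AND b.status = 'ACTIVE';
```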

Alternatively, all tablespaces can be put in backup mode at the same time using ALTER DATABASE BEGIN BACKUP.

First, archive the unarchived redo logs so that the redo required to recover the tablespace backups is archived:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

Copy the archived log files to the backup folder.

SQL>ALTER DATABASE BEGIN BACKUP;

Database altered.

Use OS commands / graphical utility / run script to take the backup of all the datafiles.

SQL>ALTER DATABASE END BACKUP;

Database altered.

Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived.

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

Copy the archived log files, control files and spfile to the backup folder.
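To know which archived logs to copy, they can be listed from the controlfile. The one-day filter below is only an example; use whatever window covers your backup:

```sql
-- archived logs completed in the last 24 hours
SELECT name FROM v$archived_log
WHERE completion_time > SYSDATE - 1;
```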

Note that putting all tablespaces in backup mode at once is generally a bad idea; placing them in backup mode one at a time minimizes the supplemental redo logging overhead.

Oracle introduced this shortcut for one reason only: when the backup is taken with a mirror split (BCV, FlashCopy, etc.), the copy captures all the datafiles at the same time and lasts only a few seconds. In that case, it is easier to use this single command to hold all tablespaces in backup mode for the duration of the operation.

When BEGIN BACKUP is issued:

- The hot backup flag in the datafile headers is set, so that the copy is identified as a hot backup copy. This manages the backup consistency issue when the copy is used for a recovery.
- A checkpoint is done for the tablespace, so that no dirty buffer remains from modifications made before that point. The BEGIN BACKUP command completes only when the checkpoint is done.

During backup:

- The datafile header is frozen, so that whenever it is copied it reflects the checkpoint SCN from the beginning of the backup. When the copy is later restored, Oracle knows that it needs to start recovery at that SCN to apply the archived redo logs. This avoids the header inconsistency issue. Any further checkpoints do not update the datafile header SCN (but they do update a 'backup' SCN).
- The first modification to each block in the buffer cache writes the full block into the redo thread (in addition to the default behaviour of writing only the change vector). This avoids the fractured block issue: there may be a fractured block in the copy, but it will be overwritten during the recovery with the full block image.

When END BACKUP is issued:

- A record marking the end of backup is written to the redo thread, so that if the copy is restored and recovered, it cannot be recovered to a point earlier than that. This avoids the backup consistency issue.
- The hot backup flag in the datafile headers is unset.
- The header SCN is updated to the current one.

Force a log switch, so that Oracle archives the current online redo log:

SQL> ALTER SYSTEM SWITCH LOGFILE;

Back up the control file (replace 'filespec' with the target file name):

SQL> ALTER DATABASE BACKUP CONTROLFILE TO 'filespec';
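Alongside the binary copy, the control file can also be backed up as a human-readable script that recreates it; this is a standard variant of the same command:

```sql
-- writes a CREATE CONTROLFILE script to a user trace file
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```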
