You perform the steps listed below when using SQL and RMAN commands to create a physical standby database. These high-level steps are explained in detail later in the blog.
- Prepare the primary database.
- Set parameters on the physical standby database.
- Configure Oracle Net Services.
- Start the standby database instance.
- Execute the DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE RMAN command.
- Start the transport and application of redo.
Preparing the Primary Database
The high-level steps for preparing the primary database are as follows:
- Enable FORCE LOGGING at the database level.
- Create a password file if required.
- Create standby redo logs.
- Set initialization parameters.
- Enable archiving.
The FORCE LOGGING mode determines whether the Oracle database server logs all changes in the database (except for changes to temporary tablespaces and temporary segments).
Notes:
Unless you have configured Oracle Advanced Security and public key infrastructure (PKI) certificates, every database in a Data Guard configuration must use a password file, and the password for the SYS user must be identical on every system for redo data transmission to succeed.
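If the primary database does not already have a password file, you can create one with the orapwd utility before copying it to the standby host. The following is a minimal sketch (the host prompt is illustrative and the exact options vary by release; the file name follows the orapw<SID> convention, and orapwd prompts for the SYS password when none is supplied on the command line):
[oracle@host01]$ orapwd file=$ORACLE_HOME/dbs/orapwboston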
A standby redo log is used to store redo received from another Oracle database. Additional information about creating standby redo log files is provided in this lesson. On the primary database, you define initialization parameters that control redo transport services while the database is in the primary role. There are other parameters that you need to add that control the receipt of the redo data and log apply services when the primary database is transitioned to the standby role.
The Data Guard broker requires the use of a server parameter file. If archiving is not enabled, issue the ALTER DATABASE ARCHIVELOG command to put the primary database in ARCHIVELOG mode and enable automatic archiving.
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
FORCE LOGGING Mode
- FORCE LOGGING mode is recommended to ensure data consistency.
- FORCE LOGGING forces redo to be generated even when NOLOGGING operations are executed.
- Temporary tablespaces and temporary segments are not logged.
- FORCE LOGGING is recommended for both physical and logical standby databases.
- Issue the following command on the primary database:
SQL> ALTER DATABASE FORCE LOGGING;
FORCE LOGGING mode determines whether the Oracle database server logs all changes in the database (except for changes to temporary tablespaces and temporary segments). The [NO]FORCE LOGGING clause of the ALTER DATABASE command contains the following settings:
- FORCE LOGGING: This setting takes precedence over (and is independent of) any NOLOGGING or FORCE LOGGING settings that you specify for individual tablespaces and any NOLOGGING setting that you specify for individual database objects. All ongoing, unlogged operations must finish before forced logging can begin.
- NOFORCE LOGGING: Places the database in NOFORCE LOGGING mode. This is the default.
Issue the following command on the primary database:
SQL> ALTER DATABASE FORCE LOGGING;
The FORCE_LOGGING column in V$DATABASE contains a value of YES if the database is in FORCE LOGGING mode. Although the database can be placed in FORCE LOGGING mode when the database is OPEN, the mode does not change until the completion of all operations that are currently running in NOLOGGING mode. Therefore, it is recommended that you enable FORCE LOGGING mode when the database is in the MOUNT state.
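A quick way to confirm the current mode is to query V$DATABASE on the primary database; after enabling FORCE LOGGING, the expected result looks like this:
SQL> SELECT force_logging FROM v$database;

FORCE_LOGGING
---------------------------------------
YES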
Configuring Standby Redo Logs
A standby redo log is used only when the database is in the standby role to store redo data received from the primary database. Standby redo logs form a separate pool of log file groups. Configuring standby redo log files is highly recommended for all databases in a Data Guard configuration to aid in role reversal.
You should create at least one more standby redo log group than you have online redo log groups in the primary database. In addition, each standby redo log file must be at least as large as the largest redo log file in the redo log of the redo source database.
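To decide how many standby redo log groups to create and how large each file should be, you can check the online redo log configuration of the redo source database, for example:
SQL> SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$log;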
A standby redo log is required to implement:
- Synchronous transport mode
- Real-time apply
- Cascaded redo log destinations
Note: By configuring the standby redo logs on the primary database, the standby redo logs are created automatically on the standby database when you execute the DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE RMAN command. They are also created automatically on the Far Sync server when it is created, provided that they already exist on the primary database. Far Sync will be discussed later in the course.
Standby Redo Log Usage
If standby redo logs are available and configured correctly, the RFS process will write the redo received from the primary database to the standby redo logs. Because the files are generally the same size as the online redo logs of the primary database, they should fill up and switch at a similar frequency as the online redo logs. There may be some delay due to the latency of the network transmission of the redo. The additional standby redo log group should allow the log switch of the standby redo logs without causing wait states. At log switch of the standby redo logs, the ARC0 process will create an archived redo log using the completed standby redo log.
The RFS process creates and writes directly to an archived redo log file instead of the standby redo log if any of the following conditions are met:
- There are no standby redo logs.
- It cannot find a standby redo log that is the same size as or larger than the incoming online redo log file.
- All standby redo logs of the correct size have not yet been archived.
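On the standby database, you can check whether redo is actually being written to the standby redo logs by querying V$STANDBY_LOG; an ACTIVE group indicates that the RFS process is currently writing to it:
SQL> SELECT group#, thread#, sequence#, status FROM v$standby_log;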
Using SQL to Create Standby Redo Logs
Create standby redo logs on the primary database to assist role changes:
SQL> alter database add standby logfile ('/u01/app/oracle/oradata/boston/stdbyredo01.log') size 50M;

Database altered.
or
SQL> ALTER DATABASE ADD STANDBY LOGFILE
  2  '+DATA' SIZE 52428800;

Database altered.
You can create standby redo logs by using the ADD STANDBY LOGFILE clause of the ALTER DATABASE statement. Although standby redo logs are used only when the database is operating in the standby role, you should create standby redo logs on the primary database so that switching roles does not require additional DBA intervention.
You should create standby redo log files on the primary database prior to using the DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE RMAN command so that RMAN creates standby redo log files automatically on the standby database.
Create standby redo log file groups by using the following guidelines:
- Each standby redo log file must be at least as large as the largest redo log file in the redo source database. For administrative ease, Oracle recommends that all redo log files in the redo source database and the redo transport destination be of the same size.
- The standby redo log must have at least one more redo log group than the redo log on the redo source database.
Viewing Standby Redo Log Information
To view information about the standby redo logs:
SQL> SELECT group#, type, member FROM v$logfile
  2  WHERE type = 'STANDBY';

GROUP# TYPE    MEMBER
------ ------- ----------------------------------------------
     4 STANDBY /u01/app/oracle/oradata/boston/stdbyredo01.log
     5 STANDBY /u01/app/oracle/oradata/boston/stdbyredo02.log
     6 STANDBY /u01/app/oracle/oradata/boston/stdbyredo03.log
     7 STANDBY /u01/app/oracle/oradata/boston/stdbyredo04.log
SQL> SELECT group#, type, member FROM v$logfile WHERE type = 'STANDBY';

GROUP# TYPE    MEMBER
------ ------- ------------------------------------------------
     4 STANDBY +SBDAT/london/onlinelog/group_4.266.711624939
     5 STANDBY +SBDAT/london/onlinelog/group_5.267.711624945
     6 STANDBY +SBDAT/london/onlinelog/group_6.268.711624951
     7 STANDBY +SBDAT/london/onlinelog/group_7.269.711624957
     4 STANDBY +SBFRA/london/onlinelog/group_4.259.711624941
     5 STANDBY +SBFRA/london/onlinelog/group_5.260.711624947
     6 STANDBY +SBFRA/london/onlinelog/group_6.261.711624955
     7 STANDBY +SBFRA/london/onlinelog/group_7.262.711624963

8 rows selected.
Setting Initialization Parameters on the Primary Database to Control Redo Transport
On the primary database, you define initialization parameters that control redo transport services while the database is in the primary role.
Parameter Name | Description |
---|---|
LOG_ARCHIVE_CONFIG | Specifies the unique database name for each database in the configuration; enables or disables sending and receiving of redo |
LOG_ARCHIVE_DEST_n | Controls redo transport services |
LOG_ARCHIVE_DEST_STATE_n | Specifies the destination state |
ARCHIVE_LAG_TARGET | Forces a log switch after the specified number of seconds |
LOG_ARCHIVE_TRACE | Controls output generated by the archiver process |
Setting LOG_ARCHIVE_CONFIG
Specify the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG parameter to list the DB_UNIQUE_NAME of the primary and standby databases in the Data Guard configuration. By default, the LOG_ARCHIVE_CONFIG parameter enables the database to send and receive redo. The LOG_ARCHIVE_CONFIG parameter can be used to disable the sending of redo logs to remote destinations or disable the receipt of remote redo logs. The complete syntax for the LOG_ARCHIVE_CONFIG parameter is as follows:
LOG_ARCHIVE_CONFIG = { [ SEND | NOSEND ] [ RECEIVE | NORECEIVE ] [ DG_CONFIG=(remote_db_unique_name1 [, ... remote_db_unique_name9]) | NODG_CONFIG ] }
For example:
LOG_ARCHIVE_CONFIG='DG_CONFIG=(boston,london)'
Use the V$DATAGUARD_CONFIG view to see the unique database names defined with the DB_UNIQUE_NAME and LOG_ARCHIVE_CONFIG initialization parameters; you can thus view the Data Guard environment from any database in the configuration. The first row of the view lists the unique database name of the current database that was specified with the DB_UNIQUE_NAME initialization parameter. Additional rows reflect the unique database names of the other databases in the configuration that were specified with the DG_CONFIG keyword of the LOG_ARCHIVE_CONFIG initialization parameter.
The following example illustrates the use of V$DATAGUARD_CONFIG:
SQL> show parameter log_archive_config

NAME                   TYPE     VALUE
---------------------- -------- ---------------------------
log_archive_config     string   dg_config=(boston,london)
SQL> SELECT * FROM v$dataguard_config;

DB_UNIQUE_NAME
------------------------------
boston
london
Setting LOG_ARCHIVE_DEST_n
By using the various LOG_ARCHIVE_DEST_n attributes, you define most of the settings for the Data Guard configuration. The Redo Transport Service is directly controlled by these settings. A number of different attributes can be set for each LOG_ARCHIVE_DEST_n parameter. Most have defaults that are adequate for most configurations. See Oracle Data Guard Concepts and Administration for a complete list and a description of each.
You should specify a LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 31) for the local archiving destination and one for the standby location. In previous versions of Oracle Database, LOG_ARCHIVE_DEST_10 was set to USE_DB_RECOVERY_FILE_DEST when the fast recovery area was being used. Beginning with Oracle Database 11g Release 2, you must manually set a location to use the fast recovery area. Query the V$ARCHIVE_DEST view to see current settings of the LOG_ARCHIVE_DEST_n initialization parameter.
All defined LOG_ARCHIVE_DEST_n parameters must contain, at a minimum, either a LOCATION attribute or a SERVICE attribute. In addition, you must have a LOG_ARCHIVE_DEST_STATE_n parameter for each defined destination. LOG_ARCHIVE_DEST_STATE_n defaults to ENABLE.
Include a LOG_ARCHIVE_DEST_STATE_n parameter for each defined destination:
LOG_ARCHIVE_DEST_2='SERVICE=london VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=london'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
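For comparison, the local archiving destination is typically defined with a LOCATION attribute. The following is an illustrative example (not taken from the configuration shown later in this blog) that archives to the fast recovery area:
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE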
Specifying Role-Based Destinations
The VALID_FOR attribute of the LOG_ARCHIVE_DEST_n initialization parameter enables you to identify exactly when the archive destination is to be used, as well as which type of log file it is used for. The attribute uses a keyword pair to identify the redo log type as well as the database role. Using this attribute enables you to set up parameters in anticipation of switchover and failover operations.
You supply two values for the VALID_FOR attribute: redo_log_type and database_role. The redo_log_type keywords are:
- ONLINE_LOGFILE: This destination is used only when archiving online redo log files.
- STANDBY_LOGFILE: This destination is used only when archiving standby redo log files or receiving archive logs from another database.
- ALL_LOGFILES: This destination is used when archiving either online or standby redo log files.
The database_role keywords are the following:
- PRIMARY_ROLE: This destination is used only when the database is in the primary database role.
- STANDBY_ROLE: This destination is used only when the database is in the standby (logical or physical) role.
- ALL_ROLES: This destination is used when the database is in either the primary or the standby (logical or physical) role.
Note: Because the keywords are unique, the redo_log_type and database_role values can be specified in any order. For example, VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILE) is functionally equivalent to VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE).
Combinations for VALID_FOR
Combination | Primary | Physical | Logical |
---|---|---|---|
ONLINE_LOGFILE, PRIMARY_ROLE | Valid | Ignored | Ignored |
ONLINE_LOGFILE, STANDBY_ROLE | Ignored | Ignored | Valid |
ONLINE_LOGFILE, ALL_ROLES | Valid | Ignored | Valid |
STANDBY_LOGFILE, STANDBY_ROLE | Ignored | Valid | Valid |
STANDBY_LOGFILE, ALL_ROLES | Ignored | Valid | Valid |
ALL_LOGFILES, PRIMARY_ROLE | Valid | Ignored | Ignored |
ALL_LOGFILES, STANDBY_ROLE | Ignored | Valid | Valid |
ALL_LOGFILES, ALL_ROLES | Valid | Valid | Valid |
In the table above, Valid indicates that the archive log destination is used in a database that is in the role defined by the column heading. Ignored means that the archive log destination is not appropriate and that a destination of this type is ignored. An ignored destination does not generate an error.
There is only one invalid combination: STANDBY_LOGFILE, PRIMARY_ROLE. Specifying this combination causes an error for all database roles. If it is set, you receive the following error at startup:
ORA-16026: The parameter LOG_ARCHIVE_DEST_n contains an invalid attribute value
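You can check how each destination's VALID_FOR setting resolves for the current database role by querying V$ARCHIVE_DEST, for example:
SQL> SELECT dest_id, valid_type, valid_role, valid_now FROM v$archive_dest WHERE dest_id <= 2;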
Defining the Redo Transport Mode
The following attributes of the LOG_ARCHIVE_DEST_n initialization parameter define the redo transport mode that is used by the primary database to send redo to the standby database.
- SYNC: Specifies that redo data generated by a transaction must have been received at a destination that has this attribute before the transaction can commit; otherwise, the destination is deemed to have failed. In a configuration with multiple SYNC destinations, the redo must be processed as described here for every SYNC destination.
- ASYNC (default): Specifies that redo data generated by a transaction need not have been received at a destination that has this attribute before the transaction can commit.
- AFFIRM: Specifies that a redo transport destination acknowledges received redo data after writing it to the standby redo log.
- NOAFFIRM: Specifies that a redo transport destination acknowledges received redo data before writing it to the standby redo log.
If neither the AFFIRM nor the NOAFFIRM attribute is specified, the default is AFFIRM when the SYNC attribute is specified and NOAFFIRM when the ASYNC attribute is specified.
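As an illustration (these attribute values are assumptions, not part of the example configuration shown later in this blog), a destination intended for zero-data-loss protection might combine SYNC and AFFIRM:
LOG_ARCHIVE_DEST_2='SERVICE=london SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=london'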
Setting Initialization Parameters on the Primary Database
The parameters listed in the table below are required if the disk configuration is not the same for the primary and standby databases. The parameters are also applicable when the primary database is transitioned to a standby database.
Parameter Name | Description |
---|---|
DB_FILE_NAME_CONVERT | Converts primary database file names |
LOG_FILE_NAME_CONVERT | Converts primary database log file names |
STANDBY_FILE_MANAGEMENT | Controls automatic standby file management |
FAL_SERVER | Specifies the fetch archive log server for a standby database |
Specifying Values for DB_FILE_NAME_CONVERT
When files are added to the standby database, the DB_FILE_NAME_CONVERT parameter is used to convert the data file name on the primary database to a data file name on the standby database. The file must exist and be writable on the physical standby database; if it is not, the recovery process halts with an error.
You specify the path name and file name location of the primary database data files followed by the standby location by setting the value of this parameter to two strings. The first string is the pattern found in the data file names on the primary database. The second string is the pattern found in the data file names on the physical standby database. You can use as many pairs of primary and standby replacement strings as required. You can use single or double quotation marks. Parentheses are optional.
DB_FILE_NAME_CONVERT =('/oracle1/dba/', '/ora1/stby_dba/', '/oracle2/dba/', '/ora2/stby_dba/')
In the example above, /oracle1/dba/ and /oracle2/dba/ are used to match file names coming from the primary database. /ora1/stby_dba/ and /ora2/stby_dba/ are the corresponding strings for the physical standby database. A file on the primary database named /oracle1/dba/system01.dbf is converted to /ora1/stby_dba/system01.dbf on the standby database. Multiple pairs can be specified such as ('a','b','1','2').
Specifying Values for LOG_FILE_NAME_CONVERT
The LOG_FILE_NAME_CONVERT parameter is used to convert the name of a redo log file on the primary database to the name of a redo log file on the standby database. Adding a redo log file to the primary database requires adding a corresponding file to the standby database. When the standby database is updated, this parameter is used to convert the log file name from the primary database to the log file name on the standby database. This parameter is required if the standby database is on the same system as the primary database or on a separate system that uses different path names.
Specify the location of the primary database online redo log files followed by the standby location. The use of parentheses is optional.
LOG_FILE_NAME_CONVERT = ('/oracle1/logs/', '/ora1/stby_logs/')
Both the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters perform simple string substitutions. For example, ('a','b') will transform the following:
/disk1/primary/mya/a.dbf into
/disk1/primbry/myb/b.dbf
(Note that every occurrence of the first string is replaced, which is why primary becomes primbry.) Multiple pairs can be specified such as ('a','b','1','2').
Specifying a Value for STANDBY_FILE_MANAGEMENT
STANDBY_FILE_MANAGEMENT is used to maintain consistency when you add or delete a data file on the primary database.
- MANUAL (default): Data files must be manually added to the standby database.
- AUTO: Data files are automatically added to the standby database. Certain ALTER statements are no longer allowed on the standby database.
STANDBY_FILE_MANAGEMENT applies to physical standby databases only, but can be set on a primary database for role changes.
STANDBY_FILE_MANAGEMENT = auto
When STANDBY_FILE_MANAGEMENT is set to AUTO, you cannot execute the following commands on the standby database:
- ALTER DATABASE RENAME
- ALTER DATABASE ADD/DROP LOGFILE [MEMBER]
- ALTER DATABASE ADD/DROP STANDBY LOGFILE MEMBER
- ALTER DATABASE CREATE DATAFILE AS …
When you add a log file to the primary database and want to add it to the physical standby database as well (or when you drop a log file from the primary and want to drop it from the physical), you must do the following:
1. Set STANDBY_FILE_MANAGEMENT to MANUAL on the physical standby database.
2. Add the redo log files to (or drop them from) the primary database.
3. Add them to (or drop them from) the standby database.
4. Reset to AUTO afterward on the standby database.
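A minimal sketch of this procedure for adding an online redo log group follows; the file names and size are illustrative, and Redo Apply may need to be stopped on the standby database before the ALTER DATABASE commands are issued.
-- On the physical standby database:
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
-- On the primary database:
SQL> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/boston/redo04.log') SIZE 50M;
-- On the physical standby database:
SQL> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/london/redo04.log') SIZE 50M;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;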
Specifying a Value for FAL_SERVER
The FAL_SERVER parameter is used to specify the Oracle Net service name of the fetch archive log (FAL) server (typically this is the database running in the primary role). When the London database is running in the standby role, it uses the Boston database as the FAL server from which to fetch (request) missing archived redo log files if Boston is unable to automatically send the missing log files. It is assumed that the Oracle Net service name is configured properly on the standby database system to point to the desired FAL server.
- On the boston primary database:
FAL_SERVER = london
- On the london standby database:
FAL_SERVER = boston
Example: Setting Initialization Parameters on the Primary Database
Primary database: boston;
standby database: london
DB_NAME=boston
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(boston,london)'
CONTROL_FILES='/u01/app/oracle/oradata/boston/control01.ctl','/u01/app/oracle/oradata/boston/control02.ctl'
LOG_ARCHIVE_DEST_2='SERVICE=london VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=london'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=arch_%t_%s_%r.log
FAL_SERVER=london
STANDBY_FILE_MANAGEMENT=auto
DB_FILE_NAME_CONVERT='boston','london'
LOG_FILE_NAME_CONVERT='boston','london'
In the example above, assume that the primary database is named boston and the standby is named london. For each, an Oracle Net Services name is defined.
Creating an Oracle Net Service Name for Your Physical Standby Database
Use Oracle Net Manager to define a network service name for your physical standby database. The entry below, from the tnsnames.ora file, was generated by Oracle Net Manager.
LONDON =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host03.example.com)(PORT = 1521)
       (SEND_BUF_SIZE = 10485760)(RECV_BUF_SIZE = 10485760))
    )
    (SDU = 65535)
    (CONNECT_DATA = (SERVICE_NAME = london.example.com))
  )
Creating a Listener Entry for Your Standby Database
Use Oracle Net Manager to configure a new listener (if necessary) or to update the listener.ora file with an entry for your physical standby database. The entry shown below, from the listener.ora file, was generated by Oracle Net Manager.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = london.example.com)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = london)
    )
  )
Copying Your Primary Database Password File to the Physical Standby Database Host
You can create a password file for your physical standby database by copying the primary database password file to the physical standby database host and renaming it.
- Copy the primary database password file to the $ORACLE_HOME/dbs directory on the standby database host.
- Rename the file for your standby database: orapw<SID> (orapwlondon in this example).
Creating an Initialization Parameter File for the Physical Standby Database
Create a text initialization parameter file containing only the DB_NAME and DB_DOMAIN initialization parameters.
File: $ORACLE_HOME/dbs/initlondon.ora
DB_NAME=london
DB_DOMAIN=example.com
This initialization parameter file is used to start the physical standby database in NOMOUNT mode prior to the execution of the DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE RMAN command. When you execute this command, RMAN creates a server parameter file for the standby database.
Creating Directories for the Physical Standby Database
Create an initial directory structure for the physical standby database in the $ORACLE_BASE/admin and $ORACLE_BASE/oradata locations. The directories to be created depend on whether the primary database is using file system locations for data files or Automatic Storage Management (ASM). If the multitenant architecture is being used, additional directories are needed. The example below is for a file system–based, multitenant installation using a single pluggable database named DEV1. Additional pluggable databases on the primary server will require additional directories to be created if the file system is used for storage.
[oracle@host03]$ mkdir -p /u01/app/oracle/admin/london/adump
[oracle@host03]$ mkdir -p /u01/app/oracle/oradata/london
[oracle@host03]$ mkdir -p /u01/app/oracle/oradata/london/pdbseed
[oracle@host03]$ mkdir -p /u01/app/oracle/oradata/london/dev1
[oracle@host03]$ mkdir -p /u01/app/oracle/fast_recovery_area/london
Starting the Physical Standby Database
Set the ORACLE_SID environment variable to your physical standby database. Start the physical standby database instance in NOMOUNT mode by using the text initialization parameter file.
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initlondon.ora
ORACLE instance started.

Total System Global Area  150667264 bytes
Fixed Size                  1298472 bytes
Variable Size              92278744 bytes
Database Buffers           50331648 bytes
Redo Buffers                6758400 bytes
With ASM installed, there will be multiple software home locations on each machine. This will require that the ORACLE_HOME and PATH location change accordingly. Oracle recommends the oraenv utility to change environment variables provided entries exist in the /etc/oratab file. The oraenv utility will adjust ORACLE_SID, ORACLE_BASE, ORACLE_HOME, PATH, and LD_LIBRARY_PATH environment variables. The ORACLE_HOME variable should point to the Grid Infrastructure software directories when starting the listener by using the LSNRCTL utility. However, the ORACLE_HOME variable should point to the database software directories when starting the database.
Creating an RMAN Script to Create the Physical Standby Database
Create an RMAN script containing the DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE command.
run {
  allocate channel prmy1 type disk;
  allocate channel prmy2 type disk;
  allocate channel prmy3 type disk;
  allocate channel prmy4 type disk;
  allocate auxiliary channel stby type disk;
  duplicate target database for standby from active database
    spfile
      parameter_value_convert 'boston','london'
      set db_unique_name='london'
      set db_file_name_convert='boston','london'
      set log_file_name_convert='boston','london'
      set log_archive_max_processes='10'
      set fal_server='boston'
      set log_archive_config='dg_config=(boston,london)'
      set log_archive_dest_2='service=boston ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=boston'
    nofilenamecheck;
}
There are advantages to using RMAN to create the standby database. They include:
- RMAN can create a standby database by copying the files currently in use by the primary database. No backups are required.
- RMAN can create a standby database by restoring backups of the primary database to the standby site. Thus, the primary database is not affected during the creation of the standby database.
- RMAN automates renaming of files, including Oracle Managed Files (OMF) and directory structures.
- RMAN restores archived redo log files from backups and performs media recovery so that the standby and primary databases are synchronized.
In the RMAN script, specify the settings for the physical standby initialization parameters. You only need to specify parameters that are different than those on the primary database. Parameters related to directory paths for data files, control files, and online redo logs that are different do not have to be specified if DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT are used.
Creating the Physical Standby Database
1. Connect to the primary database instance (target) and physical standby database instance (auxiliary).
2. Execute the script that you created. The script can be run using the RMAN utility either on the primary database or standby database.
RMAN> connect target sys/oracle_4U@boston
RMAN> connect auxiliary sys/oracle_4U@london
RMAN> @cr_phys_standby
Real-Time Apply (Default)
With the real-time apply feature, log apply services apply the redo data from standby redo log files in real time (at the same time the log files are being written to) as opposed to recovering redo from archived redo log files when a log switch occurs. If for some reason the apply service is unable to keep up (for example, if you have a physical standby in read-only mode for a period of time), then the apply service automatically goes to the archived redo log files as needed. The apply service also tries to catch up and go back to reading the standby redo log files as soon as possible.
Real-time application of redo information provides a number of benefits, including faster switchover and failover operations, up-to-date results after you change a physical standby database to read-only, up-to-date reporting from a logical standby database, and the ability to leverage larger log files on the primary database, resulting in larger standby redo logs on the standby database.
Having larger log files with real-time apply is desirable because the apply service stays with a log longer and the overhead of switching has less impact on the real-time apply processing. The RECOVERY_MODE column of the V$ARCHIVE_DEST_STATUS view contains the value MANAGED REAL TIME APPLY when log apply services are running in real-time apply mode.
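For example, on the standby database you can confirm that real-time apply is active with a query such as:
SQL> SELECT dest_id, recovery_mode FROM v$archive_dest_status WHERE recovery_mode <> 'IDLE';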
If you define a delay on a destination (with the DELAY attribute) and use real-time apply, the delay is ignored. For physical standby databases, the managed recovery process (MRP) applies the redo from the standby redo log files after the remote file server (RFS) process finishes writing.
Note: Beginning with Oracle Database 12c, real-time apply is enabled by default during Redo Apply. You can disable real-time apply by stopping the Redo Apply process, and you can restart it with the USING ARCHIVED LOGFILE clause as follows:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING ARCHIVED LOGFILE;
Starting Redo Apply in Real-Time
On the standby database, issue the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE SQL command to start Redo Apply. This statement automatically mounts the database. In addition, include the DISCONNECT FROM SESSION option so that Redo Apply runs in a background session. The FROM SESSION portion of syntax is no longer needed, but is acceptable.
Execute the following command on the standby database to start Redo Apply in real time:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
The transmission of redo data to the remote standby location does not occur until after a log switch. Issue the following command on the primary database to force a log switch:
SQL> ALTER SYSTEM SWITCH LOGFILE;
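One simple way to verify that redo is being received and applied (assuming at least one log switch has occurred) is to check the archived log history on the standby database:
SQL> SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;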
To disable real-time Redo Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING ARCHIVED LOGFILE DISCONNECT;
Preventing Primary Database Data Corruption from Affecting the Standby Database
Data Guard uses Oracle processes to validate redo data before it is applied to the standby database. Corruption-detection checks occur at the following key interfaces:
- On the primary database during redo transport by the LGWR, LNS, and ARCn processes
- On the standby database during Redo Apply by the RFS, ARCn, MRP, and DBWn processes
If redo corruption is detected by Redo Apply at the standby database, Data Guard will re-fetch valid logs as part of archive log gap handling.
A lost write occurs when an I/O subsystem acknowledges the completion of a write but the write did not occur in persistent storage. On a subsequent block read, the I/O subsystem returns the stale version of the data block, which is used to update other blocks of the database, thereby corrupting the database.
Set the DB_LOST_WRITE_PROTECT initialization parameter on the primary and standby databases to enable the database server to record buffer cache block reads in the redo log so that lost writes can be detected.
You can set DB_LOST_WRITE_PROTECT as follows:
- TYPICAL on the primary database: The instance logs buffer cache reads for read/write tablespaces in the redo log.
- FULL on the primary database: The instance logs reads for read-only tablespaces as well as read/write tablespaces.
- TYPICAL or FULL on the standby database or on the primary database during media recovery: The instance performs lost-write detection.
- NONE on either the primary database or the standby database (the default): No lost-write detection functionality is enabled.
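For example, you could enable the recommended setting dynamically on both the primary and standby databases (an illustrative command):
SQL> ALTER SYSTEM SET DB_LOST_WRITE_PROTECT=TYPICAL SCOPE=BOTH;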
When a standby database applies redo during managed recovery, it reads the corresponding blocks and compares the system change numbers (SCNs) with the SCNs in the redo log, with the following results:
- If the block SCN on the primary database is lower than on the standby database, it detects a lost write on the primary database and returns an external error (ORA-752).
- If the SCN is higher, it detects a lost write on the standby database and returns an internal error (ORA-600 3020).
In both cases, the standby database writes the reason for the failure in the alert log and trace file. The recommended procedure to repair a lost write on a primary database is to fail over to the physical standby and re-create the primary. To repair a lost write on a standby database, you must re-create the standby database or affected files.
Special Note: Data Guard Support for Oracle Multitenant
Data Guard includes support for the multitenant architecture in Oracle Database 12c Release 1 (12.1). Data Guard is managed at the CDB level. Individual PDBs cannot have a different database role than that of the CDB. Role transitions such as switchover and failover are performed at the CDB level. A primary database that is a CDB can have both physical and logical standby databases. You are not required to have the same set of PDBs at the primary database and standby. However, only tables that exist in the same container at both the primary and standby are replicated.
Summary:
- A multitenant container database (CDB) can have a physical standby database and/or a logical standby database.
- Database role is defined at the CDB level only.
- Individual pluggable databases (PDBs) do not have their own roles.
- Role transitions are executed at CDB level.
- DDL related to role changes is executed in the root container (CDB$ROOT) of the CDB.
- PDBs created from XML files or clones from a different PDB will need their data files manually copied to the standby host.
Special Note: Standby Database on the Same System
If you have a standby database on the same system as the primary database, you must use the following guidelines:
- The data files must be renamed. The actual file names can be the same, but at least the directory path must be different. This means that you must use the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters. Note: If the standby database uses Oracle Managed Files (OMF), do not set the DB_FILE_NAME_CONVERT or LOG_FILE_NAME_CONVERT parameters.
- If a standby database is located on the same system as the primary database, the archival directories for the standby database must use a different directory structure than the primary database. Otherwise, the standby database may overwrite the primary database files.
- If you do not explicitly specify unique service names and if the primary and standby databases are located on the same system, the same default global name (consisting of the database name and domain name from the DB_NAME and DB_DOMAIN parameters) will be in effect for both the databases.
- If the standby database is on the same system as the primary database, it does not protect against disaster. A disaster is defined as total loss of the primary database system. If the standby database is on the same system, it will be lost as well. This configuration should be used only for testing and training purposes.
Creating a Physical Standby Without Using RMAN
If the database is using the file system for storage instead of Automatic Storage Management (ASM), a physical standby database can be created without using RMAN. With ASM, RMAN must be used to create the backup.
You can use any backup copy of the primary database to create the physical standby database, as long as you have the necessary archived redo log files to completely recover the database. A standby control file will have to be created on the primary database and transferred to the standby database. A text-based parameter file will need to be created from the primary database binary server parameter file. The file needs adjusting in the same way as shown earlier in the lesson. It will then need to be transferred to the standby system and converted to a binary server parameter file.
Steps:
1. Create a backup copy of the primary database data files.
2. Create a control file for the standby database.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/london01.ctl';
3. Create a parameter file from the server parameter file used by the primary database, and then adjust the parameters.
SQL> CREATE PFILE='/tmp/initlondon.ora' FROM SPFILE;
4. Transfer the complete backup, standby control file, and parameter file to the standby system.
5. Create a server parameter file and start the standby database instance.
6. Restore the backup and start Redo Apply.
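A minimal sketch of steps 5 and 6 on the standby system, assuming the backup data files, standby control file, and adjusted parameter file are already in place:
SQL> CREATE SPFILE FROM PFILE='/tmp/initlondon.ora';
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;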