Data Guard consists of the following components
- Automatic Redo Shipping
- Automatic Apply Services
- Redo Apply
- SQL Apply
- Broker Services
- Logical Standby (See notes).
- Reporting
- Physical Standby (See notes).
- Active Data Guard (See notes).
- Real Time Query
- Block Change Tracking file for standby database
- Snapshot Standby (See notes).
Notes
1. Logical Standby - Uses SQL Apply and does not have to be structurally identical or on the same infrastructure. Can be open in read only whilst being updated from the primary.
2. Physical Standby - Uses redo/archive log shipping to remote standby database and by definition will be structurally in sync.
3. Active Data Guard - Allows you to setup a BCT file for backups on the physical standby and have the standby database open for read-only.
- _bct_chunk_size tells Oracle how to read and write the BCT file for backups and defaults to 32k, because this is the largest block size for a database. However, for databases with an 8k block size, performance gains can be made by setting this to 8k (see the sketch after these notes).
4. Snapshot Standby - Under 11g this can be automated using the DG broker. Effectively, it is using flashback technology and allows the standby database to be taken out of standby mode and used for testing purposes. After testing is complete, the database is flashed back and put back into standby where the redo logs are applied to bring the standby back up to date. Prior to 11g, this could be done, but not in an automated fashion.
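For note 3, a sketch of setting the BCT chunk size for an 8k block-size database (the value is illustrative; as a hidden underscore parameter, this should only be changed under guidance from Oracle Support):
- alter system set "_bct_chunk_size"=8192 scope=spfile;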
Useful Views
Below are some useful views for Data Guard
Logical Standby Views
~ dba_logstdby_unsupported_table
~ dba_logstdby_unsupported
Shows which objects and data types are unsupported in logical standby. For example, abstract data types are not shipped in logical standby.
Unsupported objects can be worked around using code. For example, unsupported data types can be replicated by having shadow tables in the primary database that are updated via triggers on the tables with the unsupported data types. The shadow tables are then shipped across, and code is used to populate the unsupported tables in the logical standby from the replicated shadow tables.
~ dba_logstdby_not_unique
Shows tables that do not have PKs or that Oracle cannot identify as having unique data.
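Typical queries against these views (a sketch):
- select distinct owner, table_name from dba_logstdby_unsupported;
- select owner, table_name, bad_column from dba_logstdby_not_unique;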
General Views
~ v$controlfile_record_section
Provides information on controlfile entries.
~ v$rman_configuration
Provides information on any non-standard RMAN configuration
~ v$dataguard_status
~ v$managed_standby
~ v$dataguard_stats
These provide status information on the Data Guard set-up
Background Processes
MRP0 is the recovery process on the Standby Database
Real Time Apply
This will apply current redo log info on the standby database rather than applying standby archive logs. Not separately licensed, but it is required for RTQ (Real Time Query), which is licensed.
Best Practice
1. Advisable to use redundant networks for redo transport. Consideration should be given to using a dedicated VLAN.
2. Have at least one more standby redo log group than online redo log groups
Gap Resolution in 11g
This is where a standby may be missing archive info due to network interruptions. The following information details some processes involved in managing this issue.
- FAL (Fetch Archive Log) background process - A handshake mechanism between the standby and primary databases whereby the standby requests archive logs that have yet to be applied. No configuration is required in 11g. However, if you have many standby databases, you may wish to tell some standbys to fetch archive logs from another standby rather than from the primary; clearly at least one standby needs to fetch from the primary. In that instance the FAL parameters would need configuring.
- RFS (Remote File Server) background process - The process on the standby database responsible for writing log data to the standby redo logs or the standby archive log area.
- MRPx (Managed Recovery Process) background process - The process on the standby database responsible for applying the log data to the standby database.
Protection Modes
Data Guard can be run in a number of protection modes which determine the performance and availability impact of the primary and standby. The different modes are detailed below.
Maximum protection
This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to the standby redo log on at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.
Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.
Maximum availability
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to the standby redo log on at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.
This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Maximum performance (Default Mode)
This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).
This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.
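A sketch of setting and checking the protection mode from the primary (note that moving up to maximum protection requires the database to be mounted rather than open):
- alter database set standby database to maximize availability;
- select protection_mode, protection_level from v$database;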
Physical Standby Creation
High Level Steps
Part 1 - Standby Creation
- Prepare primary
- Set parameters for primary
- Set-up TNS requirements
- Prepare Standby
- Start standby database
- Duplicate target DB for standby
- Start transport and apply mechanism
Part 2 - Set up the Data Guard Broker (optional, but strongly recommended)
- Start the Brokers
- Configure the listeners
- Set-up TNS requirements
- Create the Data Guard Broker Configuration
PART 1 - Standby Creation
Step 1
a) Database must be in archive log mode.
b) Database must be in 'force logging' mode
- alter database force logging;
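If the database is not yet in archive log mode, a minimal sketch of enabling it (requires a restart):
- shutdown immediate;
- startup mount;
- alter database archivelog;
- alter database open;
- select log_mode from v$database;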
Step 2
a) Create standby redo logs. Standby logs must be the same size as the online redo logs, and there must be at least one more standby group than online group to accommodate lag (a sketch of the command follows the note below).
b) Configure the redo transport destination
- alter system set log_archive_dest_2 = 'service=pc01sby1 async valid_for=(online_logfile,primary_role) db_unique_name=pc01sby1';
c) Configure the log_archive_config parameter
- alter system set log_archive_config = 'dg_config=(pc01prmy,pc01sby1)';
- IMPORTANT Do not multiplex standby redo logs. See section 8.3.6 in Database High Availability Best Practices Guide 11.2
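A minimal sketch of the commands for step 2a (group numbers and sizes are illustrative; the sizes must match the online redo logs):
- alter database add standby logfile group 4 size 50m;
- alter database add standby logfile group 5 size 50m;
- alter database add standby logfile group 6 size 50m;
- alter database add standby logfile group 7 size 50m;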
Notes
- LOG_ARCHIVE_CONFIG must be set - This has various parameters in its own right, e.g. DG_CONFIG, SEND, NOSEND etc. When configuring for Data Guard, the DG_CONFIG parameter must list the DB_UNIQUE_NAME for the primary database and all standby databases in the Data Guard configuration. For example, LOG_ARCHIVE_CONFIG='DG_CONFIG=(myprimarydb,mystandbydb)'
- By default the LOG_ARCHIVE_CONFIG parameter allows a database to both send and receive redo. You can use this parameter to disable the sending or receiving of redo, although in most instances this is not necessary.
- The LOG_ARCHIVE_CONFIG parameter can be configured using the Data Guard Broker CLI, in which case the DMON processes will issue the alter system command against the relevant instances on the user's behalf.
- Use SELECT * FROM V$STANDBY_LOG; to check the standby redo logs.
Step 3
a) Create a tnsnames.ora service entry on the primary database that points to the standby database.
- vi tnsnames.ora
PC01SBY1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = edbvr6p2.us.oracle.com)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = pc01sby1.us.oracle.com))
)
Step 4
a) Create standby password file for standby DB - This is a copy of the primary DB password file but renamed as appropriate.
b) Create an initialization parameter file for the standby database with just one line
- db_name=mystandbydb
c) Create the admin directories
- cd /u01/app/oracle/admin
- mkdir -p pc01sby1/adump
- mkdir -p pc01sby1/dpdump
- mkdir -p pc01sby1/pfile
Step 5
a) Now start the standby db using the following
- startup nomount pfile='mypfile';
Step 6
a) Start RMAN and connect to both the primary and the standby database.
- RMAN> connect target sys/<password>
- RMAN> connect auxiliary sys/<password>@pc01sby1
b) Create the standby database using the primary as the source
- RMAN>
run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'pc01prmy','pc01sby1'
set db_unique_name='pc01sby1'
set db_create_file_dest='+SBDAT'
set db_recovery_file_dest='+SBFRA'
set db_recovery_file_dest_size='5G'
set control_files='+SBDAT'
set log_archive_max_processes='5'
set fal_client='pc01sby1'
set fal_server='pc01prmy'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(pc01prmy,pc01sby1)'
set log_archive_dest_2='service=pc01prmy ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE)
db_unique_name=pc01prmy'
;
}
c) Update the oratab on the standby machine with the new standby database
Step 7
a) Start the transport and apply mechanism
- SQL> alter database recover managed standby database using current logfile disconnect;
PART 2 - Creating the Data Guard Broker Configuration
Step 1
a) Start the Brokers
- On the primary database
- alter system set dg_broker_start=true;
- On the standby database
- alter system set dg_broker_start=true;
Step 2
a) Create listener services for the databases using db_unique_name_DGMGRL.db_domain values for GLOBAL_DBNAME.
- On the primary database update the listener.ora file
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = pc01prmy_DGMGRL.us.oracle.com)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
(SID_NAME = pc01prmy)
)
)
- On the standby database update the listener.ora file
SID_LIST_LISTENER1 =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = pc01sby1_DGMGRL.us.oracle.com)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
(SID_NAME = pc01sby1)
)
)
Step 3
a) Create some tns service entries on the standby database that point to the primary and the standby
- In the tnsnames on the standby add a service entry for the primary
PC01PRMY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP) (HOST = edbvr6p1.us.oracle.com) (PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = pc01prmy.us.oracle.com))
)
- In the tnsnames on the standby add a service entry for the standby
PC01SBY1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = edbvr6p2.us.oracle.com)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = pc01sby1.us.oracle.com))
)
b) Configure the local listener parameter on the standby
- alter system set local_listener=pc01sby1;
Step 4
a) Create the Data Guard Broker Configuration
- On the primary database create the configuration using dgmgrl
- DGMGRL> create configuration 'DGConfig1' as primary database is 'pc01prmy' connect identifier is pc01prmy;
- Check the configuration
- DGMGRL> show configuration
- Add the standby database to the configuration
- DGMGRL> add database 'pc01sby1' as connect identifier is pc01sby1;
- Check the configuration
- DGMGRL> show configuration
- Enable the configuration
- DGMGRL> enable configuration;
- Check configuration
- DGMGRL> show configuration
Creating a Snapshot Standby Database
It is possible to convert a standby database to what is called a snapshot standby database. This is a standby database that has been taken out of standby mode in order to perform read/write operations, perhaps for testing. Once testing is complete, the database can be converted back to a standby database at which point it catches up with the primary.
This feature relies on flashback database technology under the covers.
High Level Steps
1. Convert the database to a snapshot
2. Perform the testing
3. Convert the database back to a standby
Steps 1, 2 and 3
a) Using DGMGRL perform the following steps to convert the standby database to a snapshot database
- convert database pc10sby1 to snapshot standby;
b) ** PERFORM TESTING **
c) Using DGMGRL perform the following steps to convert the snapshot database back to a standby database
- convert database pc10sby1 to physical standby;
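To verify each conversion, check the database role:
- select database_role from v$database; -- returns SNAPSHOT STANDBY or PHYSICAL STANDBY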
RMAN Commands to Create a Standby Database
In 11g, there is an RMAN command to duplicate a DB for standby.
duplicate target database for standby from active database
DB Lost Write Protection
Set db_lost_write_protect to typical to enable lost write protection.
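For example (a sketch):
- alter system set db_lost_write_protect=typical;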
Data Guard Broker
In RAC, the DMON config files must be on shared storage, e.g. ASM (a sketch of relocating them is shown below).
When you update the DG configuration using dgmgrl, DMON communicates with the standby DMON process, prompting it to update the DMON config file on the standby database.
Note - the DMON config files do not have to be on storage shared between the standby and the primary.
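A sketch of pointing the broker config files at shared storage (paths are illustrative; set these before starting the broker):
- alter system set dg_broker_config_file1='+DATA/pc01prmy/dr1pc01prmy.dat';
- alter system set dg_broker_config_file2='+DATA/pc01prmy/dr2pc01prmy.dat';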
In the event of a failover whereby the new primary is at a lower SCN than the old primary, perhaps as a result of a delay in the apply of redo, the broker can be used to reconfigure the old primary as a new standby. It does this by flashing the old primary database back to an SCN prior to the new primary's, so flashback needs to be on.
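With flashback on, the broker can then reinstate the old primary with a single command (connected as sys/<password>@tns):
- DGMGRL> reinstate database 'pc01prmy';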
Logical Standby
Not overly useful due to the following
1) Some objects/data types are unsupported, e.g. materialized views and the tables supporting them, XML data types, BFILEs etc.
2) A mass update/insert of 1m rows on the primary is converted to 1m individual updates/inserts on standby. This can cause significant lag.
3. Because the row IDs on a logical standby database might not be the same as the row IDs on the primary database, a different mechanism must be used to match an updated row on the primary database to its corresponding row on the logical standby database. Oracle does this using the PK, so ideally all tables should have a PK. Without a PK, Oracle cannot easily identify which row to update on the standby database; it tries to detect the row automatically by using all the data in the row to imply uniqueness, but tables whose data is not unique will cause an issue. The tables that cause issues are identified in DBA_LOGSTDBY_NOT_UNIQUE (a sketch of one workaround follows).
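Where a table's data is unique but no PK is declared, one workaround is to add a RELY constraint on the primary, giving SQL Apply a key to use without the overhead of enforcing it (a sketch; table and column names are illustrative):
- alter table app_owner.orders add constraint orders_pk primary key (order_id) rely disable;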
WARNING - If ever used, read the training docs for key usage information. e.g. guard data, deleting shipped logs, filter settings etc.
Switchover and Failover
A standby can be activated using two different methods.
SWITCHOVER - Gracefully switch to standby
and
FAILOVER - Ungraceful switch to standby
Notes
1. After failover the primary becomes disabled.
2. After switchover the primary becomes the standby.
3. After failover the old primary needs to be reinstated in order to become the new standby. To reinstate an old primary to a new standby, the database needs to have flashback on. If flashback is not on, then a reinstatement cannot be done and instead the old primary will need to be recreated as a standby from the new primary.
Fast Start Failover (FSF)
Needs an observer process on a separate machine to ensure that failover is initiated in an appropriate manner, e.g. not failing over when the standby has merely lost network connectivity to the primary, rather than the primary actually having failed.
Client Connections
Role-based services are available in Grid Infrastructure.
This is where you create various services for different database roles, so a "production" service will only be running and accepting connections when the database role is "primary", e.g.
- srvctl add service -d <db name> -s <service name> -l PRIMARY
The services are created with srvctl.
Prior to 11gR2, this could only be performed using dbms_service and triggers (a sketch of that approach is shown below). This is not recommended for 11gR2; however, it does mean that Grid Infrastructure needs to be installed if triggers and dbms_service are not going to be used.
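A sketch of the pre-11gR2 trigger approach, assuming a service called pc10prod (as in the steps below) has already been created with dbms_service.create_service:

create or replace trigger manage_prod_service
after startup on database
declare
  v_role varchar2(30);
begin
  -- only offer the production service when this database is in the primary role
  select database_role into v_role from v$database;
  if v_role = 'PRIMARY' then
    dbms_service.start_service('pc10prod');
  end if;
end;
/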
How to Configure Services for Client Failover
High Level Steps
1. Create production service on primary node
2. Start service on production node
3. Ensure standby database is known to srvctl
4. Create production service on standby node.
5. Create production service entry in tnsnames on primary and standby nodes
Notes
1. The production service is not started on the standby database; it cannot be started there while it is already running on the production node. At failover time the service will be moved to the standby node.
Step 1
a) Create production service
- srvctl add service -d pc10prmy -s pc10prod -l PRIMARY -m BASIC -e SELECT -w 1 -z 180
Step 2
a) Start service
- srvctl start service -d pc10prmy -s pc10prod
Step 3
a) Add standby database to the srvctl on standby node.
- srvctl add database -d pc10sby1 -o /u01/app/oracle/product/11.2.0/dbhome_1 -m us.oracle.com -p /u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfilepc10sby1.ora -r PHYSICAL_STANDBY -a "SBDAT,SBFRA"
- srvctl config database -d pc10sby1
Step 4
a) Create the production service on the standby node
- srvctl add service -d pc10sby1 -s pc10prod -l PRIMARY -m BASIC -e SELECT -w 1 -z 180
Step 5
a) Create the service entries in the tnsnames on both nodes
- Add service entry on the primary node
PC10PROD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP) (HOST = edu3r4p10.us.oracle.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP) (HOST = edu3r4p9.us.oracle.com)(PORT = 12001))
)
(CONNECT_DATA =
(SERVICE_NAME = pc10prod.us.oracle.com)
)
)
- Add service entry on the standby node
PC10PROD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP) (HOST = edu3r4p10.us.oracle.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP) (HOST = edu3r4p9.us.oracle.com)(PORT = 12001))
)
(CONNECT_DATA =
(SERVICE_NAME = pc10prod.us.oracle.com)
)
)
Notes
1. The address list is specified for each node in the DG configuration. In RAC this would be the scan listener in each configuration. e.g. a-racprd-scan-d, a-racsby-scan-d
RAC Considerations
1. In DGMGRL configure the preferred apply instance.
2. On RAC you need an address list in the TNS names. With each address pointing to a scan address for each DG node.
3. In 11.2 the snapshot control file needs to be on shared storage. i.e. ASM
RMAN
Backups can be performed on the standby database to "off load" the work. In order to do this there are a few prerequisites.
Configure RMAN
a) An RMAN repository must be created and used.
- Log onto the RMAN repository and create a catalog
- RMAN> create catalog;
b) Set your environment to point to the primary database and then connect to the RMAN catalog
- rman target / catalog rcowner@pc01db11
c) Register the primary database
- RMAN> register database;
Notes
1. At this point, running 'list db_unique_name of database;' will show that the primary database has been registered with the catalog.
2. Other useful commands at this point are;
- report schema for db_unique_name pc01prmy;
- list archivelog all for db_unique_name pc01prmy;
- show all for db_unique_name pc01prmy;
d) Create connect identifiers for both the primary and standby database in the RMAN catalog.
- RMAN> configure db_unique_name pc01prmy connect identifier 'pc01prmy';
- RMAN> configure db_unique_name pc01sby1 connect identifier 'pc01sby1';
Notes
1. The connect identifiers are the TNS entries for the databases and tell RMAN how to connect to each one. At the point when you create a connect identifier, RMAN performs a full resync with the catalog. When the entry for the standby database is created, RMAN will try to connect to the standby database and, if it is available, catalog it in the RMAN catalog.
IMPORTANT
The standby database will have the same DBID as the primary database because it is effectively the same database. However, it has a different DB_UNIQUE_NAME. Therefore, when the command list db_unique_name of database; is run, both the standby database and the primary database will show up. They are in the same catalog.
e) It is now important to set the configuration properties for the standby database in the RMAN catalog.
- Connect to the standby database and the RMAN catalog
- rman target sys/oracle_4U@pc01sby1 catalog rcowner/rcpass@pc01db11
- The following command can be used to view the configuration parameters for a given DB.
- show all for db_unique_name pc01sby1;
- Run the relevant commands to set the configuration parameters. e.g.
- configure controlfile autobackup on;
- configure archivelog deletion policy to applied on all standby;
f) It is now possible to use the standby database to backup and recover the database.
Backup
Use the following commands to backup the database.
a) To backup the standby
- rman target sys/oracle_4U@pc01sby1 catalog rcowner/rcpass@pc01db11
- backup database;
Recovery
The following is an example of a datafile recovery from the standby database
a) To recover a primary datafile from a standby connect to RMAN and backup the standby datafile and output to the primary system
- rman
- RMAN> connect target sys/oracle_4U@pc01sby1
- RMAN> connect auxiliary sys/oracle_4U@pc01prmy
- RMAN> backup as copy datafile 6 auxiliary format '/home/oracle/labs/newex01.dbf';
b) Now connect to the primary and catalog the file backed up from above.
- rman target / catalog rcowner/rcpass@pc01db11
- catalog datafilecopy '/home/oracle/labs/newex01.dbf';
c) Because the datafile was known by a different name on the standby than on the primary, it has to be renamed.
- RMAN> sql 'alter tablespace example2 offline immediate';
- RMAN>
run {
set newname for datafile 6 to '/home/oracle/labs/newex01.dbf';
switch datafile 6;
}
d) Recover the tablespace and put it online
- RMAN> recover tablespace example2;
- RMAN> sql 'alter tablespace example2 online';
Notes
1. The rename is necessary if standby_file_management is set to auto. This effectively means that any new tablespaces will have their filenames generated automatically on the standby database, so the filenames between the primary and standby may differ. Therefore, when a backup of a standby file is written out to the primary location, although the file itself has the new name (because the format clause was used), the header information still contains the standby filename. As such, the rename is required.
IMPORTANT - The restore above is not restoring from a pre-existing backup, but from a copy of the live standby datafile taken on the standby database.
Archive Log Deletion Policy
Configured in RMAN to tell RMAN when archive logs can be deleted, i.e. after shipping or after apply. For example,
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
Or
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;
Upgrading a Data Guard Physical Standby
High Level Steps
1. Disable the configuration
2. Stop the brokers
3. Shutdown the primary
4. Shutdown the standby
5. Stop all listeners, agents etc
6. Shutdown ASM
7. Install the database software on the standby and the primary
8. Restart ASM
9. Restart listener, agents etc
10. Mount the standby
11. Upgrade the primary database. (The upgrade of the DB will propagate the REDO to the Standby).
12. Start the brokers
13. Enable the configuration
14. Open the upgraded primary
Tracing and Tracking Problems
Trace files for DMON can be found in the bdump directory and are called drc<sid>.log
Enterprise Manager can be used to monitor, manage and trigger alerts for DG configurations
Use the following commands from within DGMGRL
- SHOW DATABASE <'db name'> 'StatusReport'
- SHOW DATABASE <'db name'> 'LogXptStatus'
- SHOW DATABASE <'db name'> 'InconsistentProperties'
- SHOW DATABASE <'db name'> 'InconsistentLogXptProps'
- SHOW CONFIGURATION
- SHOW DATABASE <db name>
Use the following useful SQL
- SELECT group#, member FROM v$logfile WHERE type = 'STANDBY';
- SELECT group#, dbid, archived, status FROM v$standby_log;
- SELECT dest_id,valid_type,valid_role,valid_now FROM v$archive_dest;
- SELECT process, status, group#, thread#, sequence# FROM v$managed_standby order by process, group#, thread#, sequence#;
- SELECT name, value, time_computed FROM v$dataguard_stats;
- SELECT timestamp, facility, dest_id, message_num, error_code, message FROM v$dataguard_status ORDER by timestamp;
- SELECT primary_xid, type, mining_status, apply_status FROM v$logstdby_transaction;
Use LOG_ARCHIVE_TRACE for tracing various redo propagation and apply processes/tasks
- ALTER SYSTEM SET LOG_ARCHIVE_TRACE=<trace level>;
The trace level can be as follows
- 0 Disables archived redo log tracing (default setting)
- 1 Tracks archiving of redo log file
- 2 Tracks archival status per archived redo log destination
- 4 Tracks archival operational phase
- 8 Tracks archived redo log destination activity
- 16 Tracks detailed archived redo log destination activity
- 32 Tracks archived redo log destination parameter modifications
- 64 Tracks ARCn process state activity
- 128 Tracks FAL server process activity
- 256 Reserved for future use
- 512 Tracks asynchronous LGWR activity
- 1024 Tracks RFS physical client
- 2048 Tracks ARCn or RFS heartbeat
- 4096 Tracks real-time apply activity
- 8192 Tracks Redo Apply activity (media recovery or physical standby)
Notes
1. The files get written to the DIAGNOSTIC_DEST
2. The trace levels form a bit mask, so adding values together enables multiple levels at once. For example, setting the trace level to 6, i.e. 2 + 4, will give you the following:
* Tracks archival status per archived redo log destination (2)
* Tracks archival operational phase (4)
Interesting Information
Active Data Guard is actually the usage of RTQ (Real Time Query) and BCT for Standby (Block Change Tracking for Standby).
RTQ allows you to open a standby database for read-only, whilst at the same time being able to apply changes to the standby database.
However, it is possible to have the standby database open, have the redo shipped (but not applied) and not invoke the usage of the RTQ component of Active Data Guard.
Obviously, when you shut the database down and restart it in mount mode, then the REDO would need to be applied and may take some time if the database has been open for read-only for a long period.
A clever implementation of Data Guard allows applications that are read-mostly, i.e. do lots of reads but update tables only infrequently (perhaps PCI), to use the standby database as their main connection but update tables on the primary database.
The way this works is as follows:
1. Create a database link on the primary database that points to the primary database.
2. This link then gets propagated to the standby database. (but clearly points back to the primary)
3. Create a user on the primary database with private synonyms (see the sketch after this list) that point to
* the read only tables
* the update tables via the dblink
* NOTE This will get propagated to the Standby database
4. Put the standby database into read-only.
5. Connect to the standby database as the user with the private synonyms.
6. Run the read-only query. This will read its data from the standby database via the private synonyms.
7. Run the update statements. This will update the tables via the private synonyms using the dblink, i.e. the tables on the primary database. After a few seconds the data will be propagated across to the redo on the standby.
8. If using active data guard then the data will be available for query in a few seconds. If not using active data guard then the data will get applied once the database is put back into standby mode.
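A minimal sketch of the objects from steps 1 and 3 (all names are illustrative):
- create database link prmy_link connect to app_writer identified by app_pass using 'PC01PRMY';
- create synonym orders for app_owner.orders@prmy_link; -- an update table, reached via the dblink
- create synonym products for app_owner.products; -- a read-only table, read locally on the standby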
Notes
1. If you did not want to have to log on as a different user in step 5, you could create a logon trigger that checks the database role and, if on the standby database, switches the current user's schema to the schema with the private synonyms. A sketch is shown below.
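A sketch of such a trigger (user and schema names are illustrative; sys_context('userenv','database_role') is available from 11g):

create or replace trigger switch_synonym_schema
after logon on database
begin
  -- for the application user only, on the standby, point the session at the synonym schema
  if sys_context('userenv', 'session_user') = 'APP_READER'
     and sys_context('userenv', 'database_role') = 'PHYSICAL STANDBY' then
    execute immediate 'alter session set current_schema=SYNONYM_OWNER';
  end if;
end;
/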
Overheads of Data Guard Versus San Replication
1. A duplicate set of binaries to manage.
2. Must remember to add extra standby redo logs when adding online redo logs.
3. Must remember to propagate certain changes on the primary to the standby, e.g. init.ora parameters and redo logs.
General Warnings
- Switching over to a logical standby from a physical standby is possible, but not advisable, because all physical standbys will then no longer be valid: the logical DB is not the same DB as the physical standbys.
- Flashback should be switched on for the primary database when using DG. Otherwise, on a failover it is impossible to use the old primary as the new standby. Instead, the old primary would have to be recreated as a standby which could take a very long time on a large database. With flashback turned on for the old primary database, it can be flashed back to an earlier SCN and then used as the standby. Without flashback, the SCN of the old primary would be higher than the new primary and therefore the old primary would be incompatible for a new standby.
- In DGMGRL always connect as sys/<password>@tns. This is so that when issuing commands, DGMGRL is always able to connect to the other nodes in the DG configuration: DGMGRL connects to other nodes using the same method as was specified for the local connection, so if connect / were used, connections to other nodes would not work and any commands that needed to be issued on another node would fail.
- Remember when adding datafiles with standby_file_management set to auto, then the file names between the primary and standby will be different. This has implications when restoring files into the primary database from the standby.
Common Questions
Q1 - On the create standby, how many parameters are mandatory?
A1 - Certainly not as many as there are above. Test to see what is required.
Q2 - Why did we specify the FAL parameters in the create standby database command?
A2 - You don't need to.
Q3 - Do you really need flashback on to create a snapshot standby?
A3 - No. BUT…you will need it on to reinstate an old primary database to a new standby.
Q4 - Should a dedicated network be used for the redo shipping?
A4 - This is site specific, but highly recommended for systems that generate lots of REDO
Q5 - Can FSF be configured to failover on user written errors to the alert log?
A5 - Yes
Q6 - Do the files in the standby database have a different name to the files in the primary?
A6 - With standby_file_management set to AUTO, definitely.