Clustered Replication running on ACFS-enabled servers
This article gives a detailed worked example of running clustered Dbvisit Replicate between a 2-node Oracle RAC 11g source database and a 2-node RAC target, both with ACFS enabled.
Environment Details
Source System: 2-node Oracle RAC, nodes kiwi81/kiwi82, database LAA
Target System: 2-node Oracle RAC, nodes kiwi91/kiwi92, database DEV
Assumptions:
It is assumed that the Grid Infrastructure binaries have already been installed and that the ACFS mount point has been created. This article proceeds from the point of creating a directory for the replication configuration on the mount point /acfs, with symbolic links from the oracle user's home directory on each node pointing to this directory, e.g. on nodes kiwi81/kiwi82:
NODE kiwi81 as oracle:
oracle@kiwi81[/home/oracle]: cd /acfs
oracle@kiwi81[/acfs]: mkdir laa_rac
oracle@kiwi81[/acfs]: cd
oracle@kiwi81[/home/oracle]: ln -s /acfs/laa_rac
oracle@kiwi81[/home/oracle]: ls -l
lrwxrwxrwx 1 oracle oinstall 13 Mar 13 12:17 laa_rac -> /acfs/laa_rac
NODE kiwi82 as oracle:
oracle@kiwi82[/home/oracle]: ln -s /acfs/laa_rac
oracle@kiwi82[/home/oracle]: ls -l
lrwxrwxrwx 1 oracle oinstall 13 Mar 13 12:19 laa_rac -> /acfs/laa_rac
References:
"ora.registry.acfs" ("ora.drivers.acfs") Resource Was Not Configured Therefore RAC ACFS Filesystem Is Not Mounting During The Reboot (Doc ID 1486208.1)
https://dbvisit.atlassian.net/wiki/display/ugd8/
http://blog.dbvisit.com/adding-dbvisit-replicate-as-an-oracle-clusterware-resource/
https://docs.oracle.com/cd/E18283_01/server.112/e16102/asmfs_util010.htm#CACGHBHI
Pre-Steps Performed - Before Replication Setup
1. Ensure the ACFS mount point auto-mounts at startup (MOS Doc ID 1486208.1). Even though the registry was configured correctly, /acfs was not mounting at startup. This in turn has a knock-on effect on the start of Dbvisit Replicate, as the configuration files reside on this filesystem. The issue can be solved by issuing the following commands on all nodes as the root user.
# $GRID_HOME/bin/acfsroot install
# $GRID_HOME/bin/acfsroot enable
root@kiwi91[/home/oracle/laa_rac]: cd /u01/app/11.2.0/grid/bin
root@kiwi91[/u01/app/11.2.0/grid/bin]: . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
root@kiwi91[/u01/app/11.2.0/grid/bin]: acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9118: oracleacfs.ko driver in use - cannot unload.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9118: oracleacfs.ko driver in use - cannot unload.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
root@kiwi91[/u01/app/11.2.0/grid/bin]: acfsroot enable
ACFS-9376: Adding ADVM/ACFS drivers resource succeeded.
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'kiwi91'
CRS-2676: Start of 'ora.drivers.acfs' on 'kiwi91' succeeded
ACFS-9380: Starting ADVM/ACFS drivers resource succeeded.
ACFS-9368: Adding ACFS registry resource succeeded.
CRS-2672: Attempting to start 'ora.registry.acfs' on 'kiwi92'
CRS-2672: Attempting to start 'ora.registry.acfs' on 'kiwi91'
CRS-2676: Start of 'ora.registry.acfs' on 'kiwi91' succeeded
CRS-2676: Start of 'ora.registry.acfs' on 'kiwi92' succeeded
ACFS-9372: Starting ACFS registry resource succeeded.
2. Create new dedicated services on each database for replication to run under. These services are used in the TNS entries below.
oracle@kiwi81[/home/oracle]: . oraenv
ORACLE_SID = [oracle] ? LAA1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@kiwi81[/home/oracle]: srvctl add service -d LAA -s DBVLAA -r LAA1 -a LAA2
oracle@kiwi81[/home/oracle]: srvctl status service -d LAA -s DBVLAA
Service DBVLAA is not running.
oracle@kiwi81[/home/oracle]: srvctl start service -d LAA -s DBVLAA
oracle@kiwi81[/home/oracle]: srvctl status service -d LAA -s DBVLAA
Service DBVLAA is running on instance(s) LAA1
Repeat on the target cluster nodes (kiwi91/kiwi92):
oracle@kiwi91[/home/oracle]: srvctl add service -d DEV -s DBVDEV -r DEV1 -a DEV2
oracle@kiwi91[/home/oracle]: srvctl status service -d DEV -s DBVDEV
Service DBVDEV is not running.
oracle@kiwi91[/home/oracle]: srvctl start service -d DEV -s DBVDEV
oracle@kiwi91[/home/oracle]: srvctl status service -d DEV -s DBVDEV
Service DBVDEV is running on instance(s) DEV1
Add the TNS entries for both services in four locations: $ORACLE_HOME/network/admin/tnsnames.ora on each of the four nodes.
DBVLAA =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = kiwi812-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DBVLAA)
)
)
DBVDEV =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = kiwi912-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DBVDEV)
)
)
3. Create new dedicated virtual IP addresses (used as the host names in the setup wizard).
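The VIP creation commands themselves are not shown in this article. A minimal sketch using the standard Grid Infrastructure appvipcfg utility, run as root on one node of each cluster, might look like the following; the network number, the IP addresses, and the source-side VIP name dbvrep812-vip are assumptions, while dbvrep912-vip matches the resource name that appears later in this article.
On the source cluster:
# appvipcfg create -network=1 -ip=192.168.1.181 -vipname=dbvrep812-vip -user=root
On the target cluster:
# appvipcfg create -network=1 -ip=192.168.1.191 -vipname=dbvrep912-vip -user=root
Allow the oracle user to run the VIP and bring it online (shown for the target side):
# crsctl setperm resource dbvrep912-vip -u user:oracle:r-x
# crsctl start resource dbvrep912-vip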
Steps Performed - Replication Setup
4. Install the replication binaries on all 4 nodes: servers kiwi81/82 and kiwi91/92.
root@kiwi91[/home/oracle]: rpm -ivh dbvisit_replicate-2.8.04_rc1-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:dbvisit_replicate      ########################################### [100%]
5. Run the setup wizard from one source node (kiwi81) to configure replication.
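The wizard invocation is not shown above. Based on the standard Dbvisit Replicate workflow, it is started from the dbvrep executable in the configuration directory, along the lines of:
oracle@kiwi81[/home/oracle/laa_rac]: dbvrep
dbvrep> setup wizard
The wizard then prompts for the source and target connection details (the DBVLAA/DBVDEV services and the dedicated VIPs created above as host names) and generates the *-all.sh script used in the next step.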
6. Run the generated ./laa_rac-all.sh script from one of the target nodes.
7. Perform the actions listed in /home/oracle/laa_rac/Nextsteps.txt.
8. Before starting the MINE and APPLY processes, run crsctl to check the locations of the VIP and the database service, then start the processes on the same node. If the VIP and service are running on different nodes, relocate one so that they match.
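For example, on the source cluster (the resource names below are assumptions, analogous to the target-side dbvrep912-vip and ora.dev.dbvdev.svc resources shown later in this article):
oracle@kiwi81[/home/oracle]: crsctl status resource dbvrep812-vip
oracle@kiwi81[/home/oracle]: crsctl status resource ora.laa.dbvlaa.svc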
Check replication status by running the console from the node running the MINE process.
Perform the following steps to relocate the service and VIP to the 2nd node (kiwi82), then restart the MINE process from the 2nd node (the commands are sketched after this list):
- Shut down the MINE process
- Relocate the database service and VIP
- Start the MINE process from the other node
- Confirm replication is running
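A sketch of the relocation commands, using the 11.2 srvctl syntax for the service created in step 2 and the assumed source-side VIP resource name dbvrep812-vip:
oracle@kiwi81[/home/oracle]: srvctl relocate service -d LAA -s DBVLAA -i LAA1 -t LAA2
root@kiwi81[/root]: crsctl relocate resource dbvrep812-vip -n kiwi82
MINE is then restarted on kiwi82 (for example via the start script the setup wizard generated for that node) and the console confirms replication has resumed.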
Before adding replication to the cluster resources we need to configure an action script to start/stop the MINE (kiwi81/kiwi82) and APPLY (kiwi91/kiwi92) processes. The scripts for this example are created in the /home/oracle/laa_rac directory on both clusters and are owned by user oracle with execute permissions.
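The action scripts themselves are not reproduced in this article. A minimal sketch of what such a Clusterware action script might look like for the MINE process follows; the start-script name and the process-matching pattern are assumptions and must be adapted to the names the setup wizard generated in your environment.
#!/bin/bash
# /home/oracle/laa_rac/dbvisit_mine.sh - minimal action script sketch.
# Clusterware calls this script with one argument: start, stop, check or clean.
DDC_DIR=/home/oracle/laa_rac
START_SCRIPT=$DDC_DIR/laa_rac-run-$(hostname -s).sh   # assumed wizard-generated name

case "$1" in
  start)
    # Launch the MINE process in the background as the oracle user.
    su - oracle -c "cd $DDC_DIR && nohup $START_SCRIPT >/dev/null 2>&1 &"
    ;;
  stop|clean)
    # Terminate the MINE process; the match pattern is an assumption.
    pkill -f "dbvrep.*MINE" || true
    ;;
  check)
    # Exit 0 if MINE is running, non-zero otherwise.
    pgrep -f "dbvrep.*MINE" >/dev/null
    exit $?
    ;;
esac
exit 0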
9. Now we can proceed to add the MINE process as a clustered resource (dbvMine, on node kiwi82 as root).
10. Now we can repeat this and add the APPLY process as a clustered resource (dbvApply) on the target cluster.
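The add commands are not shown above. A hedged sketch for the dbvApply resource, using the target-side resource names that appear in the relocation output below (the action-script path is an assumption); dbvMine on the source cluster is added analogously with its own action script, VIP and service:
root@kiwi91[/root]: crsctl add resource dbvApply -type cluster_resource -attr "ACTION_SCRIPT=/home/oracle/laa_rac/dbvisit_apply.sh,CHECK_INTERVAL=30,START_DEPENDENCIES='hard(dbvrep912-vip,ora.dev.dbvdev.svc)',STOP_DEPENDENCIES='hard(dbvrep912-vip,ora.dev.dbvdev.svc)'"
root@kiwi91[/root]: crsctl start resource dbvApply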
11. Check the status of replication and run a few inserts/updates.
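For example, from the source side (the scott schema and repl_test table here are purely illustrative; use any table included in the replication):
oracle@kiwi81[/home/oracle]: sqlplus scott/tiger@DBVLAA
SQL> insert into repl_test values (1, 'failover test');
SQL> update repl_test set text = 'updated' where id = 1;
SQL> commit;
The console on the MINE node should then show the change being mined and applied.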
12. Test failover of the dbvMine and dbvApply resources. Because of the hard dependencies on the VIP and service, the -f (force) option must be used.
- 1. Relocate dbvMine from kiwi82 to kiwi81.
- 2. Relocate dbvApply from kiwi91 to kiwi92, continuing to monitor replication in the session above.
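The dbvMine relocation uses the same syntax as the dbvApply example that follows, e.g.:
root@kiwi82[/root]: crsctl relocate resource dbvMine -n kiwi81 -f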
root@kiwi91[/root]: crsctl relocate resource dbvApply -n kiwi92 -f
CRS-2673: Attempting to stop 'dbvApply' on 'kiwi91'
CRS-2677: Stop of 'dbvApply' on 'kiwi91' succeeded
CRS-2673: Attempting to stop 'dbvrep912-vip' on 'kiwi91'
CRS-2673: Attempting to stop 'ora.dev.dbvdev.svc' on 'kiwi91'
CRS-2677: Stop of 'ora.dev.dbvdev.svc' on 'kiwi91' succeeded
CRS-2672: Attempting to start 'ora.dev.dbvdev.svc' on 'kiwi92'
CRS-2676: Start of 'ora.dev.dbvdev.svc' on 'kiwi92' succeeded
CRS-2677: Stop of 'dbvrep912-vip' on 'kiwi91' succeeded
CRS-2672: Attempting to start 'dbvrep912-vip' on 'kiwi92'
CRS-2676: Start of 'dbvrep912-vip' on 'kiwi92' succeeded
CRS-2672: Attempting to start 'dbvApply' on 'kiwi92'
CRS-2676: Start of 'dbvApply' on 'kiwi92' succeeded
root@kiwi91[/root]: crsctl status resource dbvApply
NAME=dbvApply
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on kiwi92
13. Final tests: shut down/crash one node from each cluster by running the init 6 command as root, while continuing to monitor replication status from the console in a separate shell window.
root@kiwi92[/root]: init 6
root@kiwi91[/root]: crsctl status resource dbvApply
NAME=dbvApply
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on kiwi91
root@kiwi81[/root]: init 6
root@kiwi82[/root]: . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
root@kiwi82[/root]: crsctl status resource dbvMine
NAME=dbvMine
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on kiwi82
Conclusion
In summary, the process is straightforward and works very well, with seamless failover between the two RAC nodes of the source and between the two RAC nodes of the target.