Prerequisites
On this page we list the prerequisites and checks you should perform before creating a standby database for your primary, to ensure the whole process goes smoothly.
Some of these prerequisites are also listed directly on the page https://dbvisit.atlassian.net/wiki/spaces/DSMP/pages/3289350145/Create+DR+Database+for+Oracle. This duplication is intentional, as not everybody reads the prerequisites.
1. Oracle Home Check
The standby (DR) Oracle Home and the primary Oracle Home must be at an identical patch level. To identify the patch level, run:
$ORACLE_HOME/OPatch/opatch lsinventory
Note that you need to have exactly the same patches installed on primary and standby, not just a matching release.
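One practical way to compare patch levels is to capture the patch list from each server and diff the two outputs. The sketch below is a minimal illustration: it uses inline sample data (hypothetical patch IDs) in place of real `$ORACLE_HOME/OPatch/opatch lspatches` output, which you would save to a file on each host.

```shell
# On each server you would run:
#   $ORACLE_HOME/OPatch/opatch lspatches > /tmp/<host>_patches.txt
# Here we use hypothetical sample output to demonstrate the comparison.
cat > /tmp/primary_patches.txt <<'EOF'
35643107;Database Release Update : 19.21.0.0.231017 (35643107)
35655527;OCW RELEASE UPDATE 19.21.0.0.0 (35655527)
EOF
cat > /tmp/standby_patches.txt <<'EOF'
35643107;Database Release Update : 19.21.0.0.231017 (35643107)
EOF

# A clean diff means identical patch levels; any output means a mismatch.
if diff /tmp/primary_patches.txt /tmp/standby_patches.txt; then
  echo "patch levels match"
else
  echo "patch levels DIFFER - align them before creating the standby"
fi
```

In this sample the standby is missing one patch, so the comparison reports a difference.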
2. Prepare Filesystem Space
In general, you will need free space on the primary equal to the size of the primary database. This space is used by the RMAN full backup which Dbvisit uses for standby database creation. On the standby server you need free space equal to 2x the size of your primary database (1x for the database itself, 1x for the backup).
SQL to determine the size of your primary database:
sqlplus / as sysdba
SQL> select sum(bytes)/1024/1024 "Size in MB" from v$datafile;
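With the database size from the query above, a quick shell check on each server can confirm there is enough free space before you start. This is only a sketch: the size of 51200 MB is a hypothetical example value, and the check runs against the current directory; substitute your real database size and backup location (and use 2x the size on the standby server).

```shell
DB_SIZE_MB=51200   # hypothetical value taken from the v$datafile query
BACKUP_DIR=.       # substitute e.g. /u01/app/oracle/dbvisit_csd_backup

REQUIRED_MB=$DB_SIZE_MB   # primary: 1x DB size; on standby use $((DB_SIZE_MB * 2))

# df -Pk gives POSIX-format output in KB; convert the available column to MB
AVAIL_MB=$(( $(df -Pk "$BACKUP_DIR" | awk 'NR==2 {print $4}') / 1024 ))

if [ "$AVAIL_MB" -ge "$REQUIRED_MB" ]; then
  echo "OK: ${AVAIL_MB} MB available, ${REQUIRED_MB} MB required"
else
  echo "WARNING: only ${AVAIL_MB} MB available, ${REQUIRED_MB} MB required"
fi
```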
On the primary and standby servers, you can pre-create the directories you will use for storing the backup during standby database creation. For example:
mkdir /u01/app/oracle/dbvisit_csd_backup
If you have sufficient space and network bandwidth, you can add the new DDC variable RMAN_DBF_BACKUP_TYPE=AS BACKUPSET
to disable RMAN compression. The backup will then complete faster, but may occupy up to the same amount of space as the primary database. See Miscellaneous Oracle Functions | 5.1 Edit DDC File for how to modify DDC variables.
3. Server Time Check
Oracle Database time is derived from the OS session that starts up the database. Dbvisit uses the database time for calculating various values, including the standby database time lag. You need to ensure that your primary and standby servers have the same time and the same timezone. On both servers run:
$ date
Wed Feb 14 15:35:38 CET 2024
$ sudo systemctl status chronyd #EL8 and newer
$ sudo systemctl status ntpd #EL7 and older
A time difference indicates that the OS time synchronization daemons (chronyd, ntpd) are not working properly or are not configured. A time (or timezone) difference will then cause a confusing gap time report for the standby database in the GUI dashboard.
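If both servers are reachable over SSH, you can compare their clocks in epoch seconds, which sidesteps timezone formatting entirely. The sketch below shows the arithmetic; in practice you would replace the second timestamp with output from the remote host, e.g. `ssh <standby-host> date +%s` (hypothetical hostname).

```shell
T_PRIMARY=$(date +%s)
T_STANDBY=$(date +%s)   # in practice: ssh <standby-host> date +%s

# absolute drift in seconds
DRIFT=$((T_PRIMARY - T_STANDBY))
[ "$DRIFT" -lt 0 ] && DRIFT=$((-DRIFT))

if [ "$DRIFT" -le 2 ]; then
  echo "clocks agree (drift ${DRIFT}s)"
else
  echo "clocks differ by ${DRIFT}s - check chronyd/ntpd"
fi
```

Note that epoch seconds are timezone-independent, so also verify the timezone separately with `date` as shown above.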
4. Check Primary Database
You need to verify the following on the primary database:
4.1 Database is in archivelog mode
SQL> archive log list;
Database log mode Archive Mode
...
4.2 Database uses spfile:
SQL> sho parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /u01/app/oracle/product/23.0.0
/dbhome_1/dbs/spfileSLASH.ora
If you don’t get a file path, your database uses a pfile only, and you need to create an spfile and restart your primary database with it.
4.3 Database is in force logging mode:
SQL> select force_logging from v$database;
FORCE_LOGGING
---------------------------------------
YES
Dbvisit can’t keep your standby database in sync effectively without force logging enabled. If the result of the select is “NO”, you will need to enable force logging mode by running:
SQL> alter database force logging;
Setting the primary database to force logging may increase your primary database’s archivelog generation rate. Consult the Oracle documentation if you’re not sure about enabling this setting.
4.4 Database redo logs should use 512 byte sectors only:
SQL> select distinct(blocksize) from v$log;
BLOCKSIZE
----------
512
If the select returns 4096, you can still create your standby database, but your standby filesystem must use the same sector size. Contact Dbvisit Support if unsure.
4.5 Database redo log size
The redo logs on your primary DB should be at least 200MB in size (each member):
SQL> select distinct(bytes)/1024/1024 "Redo Member Size in MB" from v$log;
Redo Member Size in MB
----------------------
200
Note that 200MB is the bare minimum. We usually recommend even bigger redo logs, because too frequent redo log switches can cause your standby database to fall out of sync. Before you create any standby database, you should aim for a log switch to happen at most every 5 minutes (approx. 300 archivelogs per day) in your “business as usual” workload.
You can use the following select to determine the archivelog count generated per day for the past 7 days:
select to_char(trunc(l.first_time,'dd'),'yyyy-mm-dd') date_time
, count(l.recid) archivelog_count
, round(sum(l.blocks*l.block_size)/1024 /1024) MB
from v$archived_log l
where l.first_time > sysdate-7
group by to_char(trunc(l.first_time,'dd'),'yyyy-mm-dd') order by date_time desc;
If your archivelog generation is higher than this, you should increase the size of your redo logs appropriately; otherwise the synchronization might experience delays.
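As a rough sizing aid, you can turn the daily archivelog volume reported by the query above into a target redo log size: with switches at most every 5 minutes there are at most 288 switches per day, so each log must hold at least daily_MB / 288. The sketch below uses a hypothetical 100 GB/day volume; substitute your own figure.

```shell
DAILY_REDO_MB=102400     # hypothetical daily archivelog volume from the query above
MAX_SWITCHES_PER_DAY=288 # one switch every 5 minutes: 24 * 60 / 5

# minimum redo log member size so switches stay at or below one per 5 minutes
MIN_REDO_MB=$((DAILY_REDO_MB / MAX_SWITCHES_PER_DAY))

# never go below the 200MB bare minimum stated above
[ "$MIN_REDO_MB" -lt 200 ] && MIN_REDO_MB=200

echo "each redo log member should be at least ${MIN_REDO_MB} MB"
# for 102400 MB/day this prints: each redo log member should be at least 355 MB
```

In practice you would round up to a comfortable margin (e.g. 512 MB or 1 GB) rather than sizing exactly at the computed minimum.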
5. Check Primary Database Files Naming Structure
For creating a standby database, it’s not important whether the primary database files are stored on ASM, ACFS, a filesystem or NFS, because Dbvisit supports all storage methods certified by Oracle. What matters is the naming structure on your primary database: whether it’s OMF (Oracle Managed Files) or non-OMF (user custom naming). There are several database file types for which we need to perform this check:
Controlfiles
Redo log files
Datafiles
Tempfiles
You can use the following select to determine your file name structure:
set pages 1000
column File format a20
select 'Datafile '||file# "File",decode(DBMS_METADATA_UTIL.IS_OMF(name),1,'OMF',0,'NOT OMF') "Result" from v$datafile
union all
select 'Redo Group '||group# "File",decode(DBMS_METADATA_UTIL.IS_OMF(member),1,'OMF',0,'NOT OMF') "Result" from v$logfile
union all
select 'Tempfile '||file# "File",decode(DBMS_METADATA_UTIL.IS_OMF(name),1,'OMF',0,'NOT OMF') "Result" from v$tempfile
union all
select 'Controlfile' "File",decode(DBMS_METADATA_UTIL.IS_OMF(name),1,'OMF',0,'NOT OMF') "Result" from v$controlfile;
For Dbvisit it doesn’t matter whether the controlfile is OMF or non-OMF; the controlfile row in the SQL result is informational only.
5.1 Using Exclusively OMF Naming Structure
If the previous SQL returned only “OMF” values in the Result column for all lines, you’re using the OMF naming structure exclusively.
Using OMF (Oracle Managed Files) naming means that all of your database files are fully managed by the Oracle database itself. The files have an automatically generated identifier in their name, for example:
Controlfiles (format o1_mf_%u.ctl)
/u02/app/oracle/fast_recovery_area/SLASH/controlfile/o1_mf_kqh8jsjw_.ctl
Redo logs (format o1_mf_%g_%u.log), %g = group#:
/u02/app/oracle/fast_recovery_area/SLASH/onlinelog/o1_mf_2_lw9orcp3_.log
Datafiles (format o1_mf_%t_%u.dbf), %t = tablespace name:
/u02/app/oracle/oradata/SLASH/datafile/o1_mf_sysaux_kqh8n4jq_.dbf
Tempfiles (format o1_mf_%t_%u.tmp), %t = temporary tablespace name:
/u02/app/oracle/oradata/SLASH/datafile/o1_mf_temp_lw9ornvw_.tmp
The file names are slightly different for ASM, but they always contain the unique identifier.
The exclusive OMF naming structure is fully supported by Dbvisit Standby under the following conditions:
All of your OMF datafiles and tempfiles must be in the same ASM diskgroup / filesystem directory, which is defined by the parameter db_create_file_dest
AND
All of your OMF redo logs must be in locations specified by parameter(s) db_create_online_log_dest_[x]. If db_create_online_log_dest_[x] is not set, redo logs must be in locations defined by parameter db_create_file_dest and db_recovery_file_dest (if set)
Otherwise the functionality of Dbvisit Standby is very limited. If, for example, OMF datafiles are spread across multiple diskgroups, you will have to create your standby database manually, and the only functionality available will be failover and archivelog synchronization.
5.2 Using Exclusively non-OMF Naming Structure
If the previous SQL returned only “NOT OMF” values in the Result column for all lines, you’re using a user-defined naming structure. The file names contain “human readable” names, for example:
Controlfiles:
/u02/app/oracle/fast_recovery_area/SLASH/control01.dbf
Redo logs:
/u02/app/oracle/fast_recovery_area/SLASH/redo01.log
Datafiles:
/u02/app/oracle/oradata/SLASH/sysaux01.dbf
Tempfiles:
/u02/app/oracle/oradata/SLASH/temp01.dbf
The exclusive non-OMF naming structure is fully supported by Dbvisit under the following condition:
The OMF parameters db_create_file_dest and db_create_online_log_dest_[x] must not be set on the primary database nor on the standby
5.3 Using mix of OMF and non-OMF Naming Structure
If only the controlfiles have a different naming structure, it doesn’t matter; consider your configuration as either 5.1 Exclusively OMF or 5.2 Exclusively non-OMF.
If the previous SQL returned some “OMF” and some “NOT OMF” values (regardless of file type), then you’re using a mix of OMF and non-OMF files on your primary database.
Such a configuration does not follow best practices, but it can be supported by Dbvisit. A typical example is a mix of datafiles:
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATA/SOLO/CEE1AB036E0D3E18E053D908A8C0F3B5/DATAFILE/system.292.1148040717
+DATA/SOLO/CEE1AB036E0D3E18E053D908A8C0F3B5/DATAFILE/sysaux.293.1148040733
+DATA/SOLO/CEE1AB036E0D3E18E053D908A8C0F3B5/DATAFILE/undotbs1.284.1148040759
+DATA/SOLO/CEE1AB036E0D3E18E053D908A8C0F3B5/DATAFILE/users.295.1148040771
+DATA/SOLO/DATAFILE/new_tbs01.dbf
+DATA/SOLO/DATAFILE/new_tbs201.dbf
or mix between OMF datafiles and non-OMF redo log files:
SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
/u02/app/oracle/oradata/SLASH/redo01.log
/u02/app/oracle/oradata/SLASH/redo02.log
/u02/app/oracle/oradata/SLASH/redo03.log
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u02/app/oracle/oradata/SLASH/datafile/o1_mf_system_kqh8ln12_.dbf
/u02/app/oracle/oradata/SLASH/datafile/o1_mf_sysaux_kqh8n4jq_.dbf
/u02/app/oracle/oradata/SLASH/datafile/o1_mf_undotbs1_kqh8ohhc_.dbf
/u02/app/oracle/oradata/SLASH/datafile/o1_mf_users_kqh8pjld_.dbf
For a mix of OMF and non-OMF files, the following restrictions apply:
you must set the OMF parameters db_create_file_dest and db_create_online_log_dest_[x] on the primary and standby databases. If db_create_online_log_dest_[x] is not set on the primary, db_create_file_dest and db_recovery_file_dest must be set.
in case of switchover, all redo logs and tempfiles on the new primary server (formerly standby) will be forcefully re-created as OMF
the datafile naming structure will remain the same even after switchover
In addition, the same conditions as in “5.1 Using Exclusively OMF Naming Structure” apply:
All of your OMF datafiles and tempfiles must be in the same ASM diskgroup / Filesystem directory which is defined by parameter db_create_file_dest
AND
All of your OMF redo logs must be in locations specified by parameter(s) db_create_online_log_dest_[x]. If db_create_online_log_dest_[x] is not set, redo logs must be in locations defined by parameter db_create_file_dest and db_recovery_file_dest (if set)
6. Cleanup Standby Server
Before creating the standby database, you must ensure that the standby server doesn’t contain any old database files. This step doesn’t apply for a first-time standby database setup.
If the standby server contains an existing running database which you want to replace with the new standby, you can use the following commands for cleanup:
export ORACLE_SID=<yoursid>
rman target /
STARTUP mount FORCE dba;
DROP DATABASE INCLUDING BACKUPS NOPROMPT;
The standby controlfile contains information about backups from the primary database. If primary database backups are accessible from the standby server, this command will delete them, unless you’re using a different db_unique_name for your standby database.
As this command is very destructive, double check that you’re on the correct server and that ORACLE_SID is set properly.
If there are old database files on the standby server but the database can’t be started, you will have to clean up all locations manually.
Take special care to clean up all files in db_recovery_file_dest. This is especially important for standby servers with ASM storage.
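When cleaning up manually, it is safer to first list what is actually there before removing anything. A minimal sketch for filesystem storage follows; the directories in ORADATA_DIRS are hypothetical examples and should be replaced with your real datafile, FRA and db_recovery_file_dest locations (for ASM, use asmcmd instead).

```shell
# hypothetical locations - replace with your real datafile and FRA paths
ORADATA_DIRS="/u01/app/oracle/oradata /u02/app/oracle/fast_recovery_area"

for d in $ORADATA_DIRS; do
  # list leftover database files without deleting anything yet
  find "$d" -type f \
    \( -name '*.dbf' -o -name '*.log' -o -name '*.ctl' -o -name '*.arc' \) \
    2>/dev/null || true
done | tee /tmp/standby_leftovers.txt

echo "found $(wc -l < /tmp/standby_leftovers.txt) leftover file(s)"
```

Only after reviewing /tmp/standby_leftovers.txt would you proceed to remove the listed files.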
7. Verify Standby Server Memory Limits
Dbvisit doesn’t configure memory limits on your standby server in any way, but they need to be set correctly before you start standby database creation. The easiest way to verify that the memory limits are set correctly is to start up a dummy instance on the standby server.
check your current memory allocation on the primary:
SQL> sho parameter sga_max_size
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 6G
SQL> sho parameter sga_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 6G
SQL> sho parameter pga_aggregate_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target big integer 1G
When using Automatic Memory Management (AMM), you need to look at memory_target and memory_max_target instead.
on the standby server, create a pfile for the dummy instance:
$ cat $ORACLE_HOME/dbs/initMEMTEST.ora
*.pga_aggregate_target=1g
*.sga_max_size=6g
*.sga_target=6g
*.db_name=MEMTEST
attempt to start the instance:
$ export ORACLE_SID=MEMTEST
$ sqlplus / as sysdba
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 6439854152 bytes
Fixed Size 9986120 bytes
Variable Size 6354370560 bytes
Database Buffers 67108864 bytes
Redo Buffers 8388608 bytes
SQL> shu abort;
if you see any issues or errors, refer to the dummy instance alert log, for example:
$ cat /u01/app/oracle/diag/rdbms/memtest/MEMTEST/trace/alert_MEMTEST.log
2024-03-18T15:36:20.878263+01:00
WARNING: You are trying to use the MEMORY_TARGET feature. This feature requires the /dev/shm file system to be mounted for at least 6442450944 bytes.
/dev/shm is either not mounted or is mounted with available space less than this size. Please fix this so that MEMORY_TARGET can work as expected.
Current available is 4166488064 and used is 0 bytes. Ensure that the mount point is /dev/shm for this directory.
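The /dev/shm shortfall reported in the alert log above can also be checked up front on the standby server. The sketch below compares available /dev/shm space against a required size; the 6 GB figure is taken from the example memory settings used in this section and should be replaced with your own target.

```shell
REQUIRED_BYTES=$((6 * 1024 * 1024 * 1024))   # 6G, matching the example sga_max_size

if [ -d /dev/shm ]; then
  # df -Pk gives POSIX-format output in KB; column 4 is available space
  AVAIL_KB=$(df -Pk /dev/shm | awk 'NR==2 {print $4}')
  AVAIL_BYTES=$((AVAIL_KB * 1024))
  if [ "$AVAIL_BYTES" -ge "$REQUIRED_BYTES" ]; then
    echo "/dev/shm OK: ${AVAIL_BYTES} bytes available, ${REQUIRED_BYTES} required"
  else
    echo "/dev/shm too small: ${AVAIL_BYTES} bytes available, ${REQUIRED_BYTES} required"
  fi
else
  echo "/dev/shm not mounted"
fi
```

Note that 6 GB is exactly the 6442450944 bytes mentioned in the alert log warning above; /dev/shm can be enlarged by remounting it with a bigger size.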
8. Check your backup scripts on Primary database
You must ensure that no archivelog on your primary database is deleted before Dbvisit is able to send it to the standby server.
Very often, RMAN backup scripts on production are set to back up archivelogs and delete them all immediately after the backup:
RMAN> backup archivelog all not backed up 1 times delete input;
Such an archivelog backup & retention strategy is not compatible with Dbvisit. An archivelog might be deleted by the RMAN backup before it is shipped to the standby, causing the synchronization to break.
Dbvisit requires retention of at least 1-2 hours’ worth of archivelogs (we recommend 1 day if disk space is available).
This can be achieved, for example, by the following commands:
RMAN> backup archivelog all not backed up 1 times;
RMAN> delete force noprompt archivelog all completed before 'sysdate -1';
This will delete all archivelogs from the primary that are older than 1 day.
RMAN backups of your primary database archivelogs are outside the scope of Dbvisit Standby. If you decide to manage their deletion on your own, it’s completely up to you how you purge them, as long as they are not deleted before Dbvisit can transfer them to the standby server (otherwise the synchronization will break).
If you use a 3rd party tool instead of RMAN (Commvault, Avamar, Acronis, etc.) for archivelog backup & deletion on the primary, you will need to address this as well using the functionality of that particular tool.
You can consider removing the archivelog deletion step from your own backups and enabling Dbvisit AMM on your primary to purge archivelogs: Archivelog Management Module | 5.1 Primary Archive Management Settings