Sample Migration involving an Oracle Upgrade with Oracle RMAN
The following steps can be used as a guide to perform an Oracle database migration using Dbvisit Replicate. In this example, Oracle RMAN is used to perform the initial data synchronization. Before migrating, it is suggested that you first run a test on a small database to familiarize yourself with the steps involved. After that, run a test against a database similar in size to your production database so that you can concentrate on the timings involved with production-like data volumes.
Configure the target database
Step 1
Use RMAN to create and instantiate the target database using one of these methods:
RMAN Duplicate OR RMAN Restore and Recover
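For example, a minimal RMAN active duplication sketch (the TNS aliases SRC and AUX and the database name targetdb are placeholders; adjust channels, passwords, and file name conversion for your environment):
rman TARGET sys/<passwd>@SRC AUXILIARY sys/<passwd>@AUX
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL a1 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE TO targetdb
    FROM ACTIVE DATABASE
    NOFILENAMECHECK;
}
An RMAN restore and recover to a specific SCN can be used instead if an active duplicate is not practical in your environment.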
Step 2
Run the Setup Wizard as directed and exclude any tables that contain columns with unsupported datatypes. See Data Discovery - Pre-flight SQL script for the datatypes that are not currently supported.
Step 3
After the Setup Wizard is completed, run the *-all.sh script and instantiate the target database as directed in Step 1.
Step 4
Disable all triggers and all ON DELETE CASCADE foreign key constraints on the target database (see the sample SQL below).
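For example, the following SQL*Plus sketch spools the disable statements (the <schema> filter is a placeholder; run the spooled script afterwards):
set pagesize 0 linesize 200 feedback off verify off trimspool on
spool disable_triggers_and_fk.sql
select 'alter trigger '||owner||'.'||trigger_name||' disable;'
from dba_triggers
where owner in ('<schema>');
select 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||';'
from dba_constraints
where constraint_type = 'R'
and delete_rule = 'CASCADE'
and owner in ('<schema>');
spool off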
Step 5
Start the MINE and APPLY processes so that the target database can catch up with the source database.
Upgrade the target database
Step 6
Once the MINE and APPLY processes are in sync, shut down the APPLY process but leave the MINE process running.
Step 7
Perform the Oracle upgrade on the target database as per the Oracle documents and standard procedure.
Step 8
Compile all schema objects and check for any errors.
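For example, recompile invalid objects with utlrp.sql and then list anything that remains invalid:
SQL> @?/rdbms/admin/utlrp.sql
SQL> select owner, object_type, object_name, status from dba_objects where status <> 'VALID' order by owner, object_type, object_name;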
Step 9
After the upgrade completes successfully, once again check the target database to ensure all triggers and ON DELETE CASCADE constraints are still DISABLED before restarting the APPLY process (a sample verification query is shown below).
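For example, both of the following queries should return no rows for the replicated schemas (the <schema> filter is a placeholder):
select owner, trigger_name, status from dba_triggers
where status = 'ENABLED' and owner in ('<schema>');
select owner, table_name, constraint_name, status from dba_constraints
where constraint_type = 'R' and delete_rule = 'CASCADE'
and status = 'ENABLED' and owner in ('<schema>');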
Step 10
Check and correct the TNS_ADMIN variable in DDC configuration files if it has changed during the Oracle upgrade.
Step 11
Restart the APPLY process so that the target database can catch up with the source database, and confirm that there are no issues.
Cutover to the upgraded target database
Step 12
- Once data is in sync between the source and target databases, shut down any applications accessing the source database.
- Perform a few log switches and verify they have been applied (for example, as shown below).
- Shut down replication (first MINE, then APPLY) once the SCN is in sync (see console).
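For example, on the source database (the Dbvisit console shows the applied SCN; the query below is only a generic cross-check):
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select current_scn from v$database;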
Step 13
- Manually migrate any tables with unsupported datatypes. ***This requires downtime!*** First shut down and restart the source database in restricted mode so that a consistent export of these tables can be taken.
- On the target, drop the tables that had unsupported types during the initial setup. This may require dropping and recreating foreign key constraints on the target, and verifying that they are enabled after the import. Data Pump impdp with TABLE_EXISTS_ACTION=REPLACE can also be used; the dependencies still have to be resolved manually.
- Run the export and import of these tables once the application(s) are shut down on the source (see the sample commands below).
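A sketch of the table-level export/import, assuming the tables with unsupported types are SCOTT.T1 and SCOTT.T2 (placeholder names; substitute your own list):
On source
expdp system/<passwd> directory=DATA_PUMP_DIR dumpfile=UNSUPPORTED.dmp logfile=UNSUPPORTED_exp.log TABLES=SCOTT.T1,SCOTT.T2
On target
impdp system/<passwd> directory=DATA_PUMP_DIR dumpfile=UNSUPPORTED.dmp logfile=UNSUPPORTED_imp.log TABLE_EXISTS_ACTION=REPLACE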
Step 14
Optional: METADATA export/import. This brings over packages, procedures, constraints, etc. if they do not exist on the target, or if you want to be sure the latest copy is there.
You may wish to use this because CONTENT=METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
One reason to do this is to ensure your PL/SQL objects (procedures, packages, etc.) are up to date on the target, especially if you have tables/columns with unsupported types that you migrated in the previous step.
On source
expdp system/<passwd> directory=DATA_PUMP_DIR dumpfile=METADATA.dmp logfile=METADATA.log SCHEMAS=<SchemaName,SchemaName,...> CONTENT=METADATA_ONLY
On target
impdp system/<passwd> dumpfile=METADATA.dmp directory=DATA_PUMP_DIR logfile=METADATA.log JOB_NAME=DP_META_0022
Step 15
Compile each migrated schema and generate fresh statistics on the target:
SQL> EXEC DBMS_UTILITY.COMPILE_SCHEMA( schema => '<schema>');
SQL> exec dbms_stats.gather_schema_stats('<schema>',cascade => TRUE);
Migrate sequences: generate dynamic SQL against the source database to build the SQL that will be run on the target database (see the sample below). Each sequence on the target should be set at or beyond the next value (.nextval) of the corresponding sequence on the source database, otherwise you may end up with constraint violations if the sequences are used for key generation. See Sequences for more information on this topic.
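One possible sketch, run against the source database, that spools drop/create statements to run on the target so each sequence starts past the source's current value (the <schema> filter is a placeholder; adjust for MAXVALUE, CYCLE and any other attributes you need to preserve):
set pagesize 0 linesize 400 feedback off verify off trimspool on
spool fix_sequences.sql
select 'drop sequence '||sequence_owner||'.'||sequence_name||';'||chr(10)||
       'create sequence '||sequence_owner||'.'||sequence_name||
       ' start with '||(last_number + cache_size + 1)||
       ' increment by '||increment_by||
       decode(cache_size, 0, ' nocache', ' cache '||cache_size)||';'
from dba_sequences
where sequence_owner in ('<schema>');
spool off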
Step 16
Build dynamic SQL to enable (or disable) triggers on the target database. At this point in the cutover, the triggers disabled in Step 4 need to be re-enabled. (Sample SQL is provided below, although your script may vary based on your needs.)
SET LONG 20000 LONGCHUNKSIZE 20000 PAGESIZE 0 LINESIZE 1000 FEEDBACK OFF VERIFY OFF TRIMSPOOL ON
spool enable_triggers.sql
select 'alter trigger '||owner||'.'||trigger_name||' enable;'
from dba_triggers
where owner not in (
'XDB','SYS','SYSTEM','SYSMAN','WMSYS','OLAPSYS','MDSYS','APEX_030200','FLOWS_FILES','EXFSYS','APPQOSSYS','CTXSYS','ORDDATA','OUTLN','DBSNMP','ORDSYS',
'SI_INFORMTN_SCHEMA','ORACLE_OCM','ORDPLUGINS',
'OWBSYS','OWBSYS_AUDIT','DBVREP');
spool off
If triggers were dropped instead of disabled during instantiation, you will need to extract them from the source database and recreate them on the target database (sample SQL is provided below, although your script may vary based on your needs):
SET LONG 20000 LONGCHUNKSIZE 20000 PAGESIZE 0 LINESIZE 1000 FEEDBACK OFF VERIFY OFF TRIMSPOOL ON
BEGIN
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
END;
/
ACCEPT filename PROMPT "Filename for Output: ";
spool &&filename
SELECT DBMS_METADATA.get_ddl ('TRIGGER', trigger_name, owner)
FROM all_triggers
where owner not in (
'XDB','SYS','SYSTEM','SYSMAN','WMSYS','OLAPSYS','MDSYS','APEX_030200','FLOWS_FILES','EXFSYS','APPQOSSYS','CTXSYS','ORDDATA','OUTLN','DBSNMP','ORDSYS',
'SI_INFORMTN_SCHEMA','ORACLE_OCM','ORDPLUGINS',
'OWBSYS','OWBSYS_AUDIT','DBVREP')
order by table_owner;
spool off
Step 17
Check the count of objects and compare the source and target databases (sample SQL provided below, although your script may vary based on your needs):
set pagesize 40000
select owner,object_type,count(*) from dba_objects
where owner not in (
'XDB','SYS','SYSTEM','SYSMAN','WMSYS','OLAPSYS','MDSYS','APEX_030200','FLOWS_FILES','EXFSYS',
'APPQOSSYS','CTXSYS','ORDDATA','OUTLN','DBSNMP','ORDSYS','SI_INFORMTN_SCHEMA','ORACLE_OCM','ORDPLUGINS',
'OWBSYS','OWBSYS_AUDIT','DBVREP')
group by owner,object_type
order by owner, object_type;
NOTE: Oracle Bug 9935857 - LOB indexes are missing from DBA_OBJECTS (Doc ID 9935857.8) Versions affected: 11.2.0.2, 11.2.0.1, 11.1.0.7, 10.2.0.4
Step 18
Check count of constraints & compare the source and target databases (sample SQL provided below, although your script may vary based on your needs):
set pagesize 40000
set linesize 200
column owner format A40
select owner,constraint_type,status,count(*) from dba_constraints
where owner not in (
'XDB','SYS','SYSTEM','SYSMAN','WMSYS','OLAPSYS','MDSYS','APEX_030200','FLOWS_FILES','EXFSYS',
'APPQOSSYS','CTXSYS','ORDDATA','OUTLN','DBSNMP','ORDSYS','SI_INFORMTN_SCHEMA','ORACLE_OCM','ORDPLUGINS',
'OWBSYS','OWBSYS_AUDIT','DBVREP')
group by owner,constraint_type,status
order by owner, constraint_type, status;
Step 19
Check the counts and status of triggers and compare the source and target databases (sample SQL provided below, although your script may vary based on your needs):
set linesize 200
set pagesize 40000
column owner format A30
column trigger_name format A30
select owner,trigger_name, trigger_type, status,count(*) from dba_triggers
where owner not in (
'XDB','SYS','SYSTEM','SYSMAN','WMSYS','OLAPSYS','MDSYS','APEX_030200','FLOWS_FILES','EXFSYS',
'APPQOSSYS','CTXSYS','ORDDATA','OUTLN','DBSNMP','ORDSYS','SI_INFORMTN_SCHEMA','ORACLE_OCM','ORDPLUGINS',
'OWBSYS','OWBSYS_AUDIT','DBVREP')
group by owner,trigger_name,trigger_type,status
order by owner, trigger_name, trigger_type,status;
Step 20
Continue to do any additional checks on the target server before pointing the application(s) to this server and opening it for database traffic.
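For example, a simple per-table row count comparison to run on both databases (NUM_ROWS reflects the last statistics gathering, so use exact count(*) queries per table if precise counts are required; the <schema> filter is a placeholder):
select owner, table_name, num_rows from dba_tables
where owner in ('<schema>')
order by owner, table_name;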