Dbvisit Replicate 2.9 – the Technical Look

Two weeks ago I wrote about some of the new features in Dbvisit Replicate 2.9. This week let’s take a deeper look at some of the technical features that I mentioned.

There are some new features in the support package, but nothing changes as far as the user experience goes: the same commands, with only a few more pieces of information included in the zip file.

Support will often ask for an Oracle ‘SQL Trace’ to be run on the target database to help diagnose performance problems.  We have noticed that some customers are not very familiar with the SQL Trace tool in Oracle, and we would usually explain to them what it does and how it works.  This time around we thought it would be wise to add something very user-friendly to our product that turns it on for them. We have a new parameter, appropriately named SQL_TRACE.  It works the same way as the rest of our parameters, which you can find here.

The parameter defaults to NO and can easily be turned on by changing the value to YES.  This turns on tracing for all Replicate sessions on the target database.
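As a sketch only, assuming the usual dbvrep console syntax for setting parameters (the APPLY prefix here is an assumption), turning it on would look like this:

dbvrep> set APPLY.SQL_TRACE YES

The resulting trace files end up in the target database’s standard trace directory, where Support can pick them up.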

Many customers have commented on how much they like the setup wizard and how it is just a ‘recipe for replication’.  After the *-all.sh script is run we provide the ‘NextSteps’ to continue.  We have further enhanced NextSteps to make the instructions clearer, going into a little more detail about which TNSNAMES entries are meant, so that there is no confusion about which databases and TNSNAME aliases the instructions refer to.

A few new datatypes are now supported. Nothing needs to be done as far as replication is concerned: no parameters or anything extra, just full support for tables with those specific datatypes. One note about ROWID: a lot of people have asked for it to be supported, but I would again like to leave a word of caution. The ROWID pseudocolumn in the database is a pointer to the physical location where a row is stored, and some applications store ROWID values to look rows up faster.  Be very careful when replicating such a column.  When the row is replicated over to the target database, the stored value still describes a possible (probable) location of the source row, but it is certainly no guarantee of the location of the target row.  In other words, a ROWID populated from the source may no longer point to the correct location.  Make sure you test this thoroughly before replicating this column.
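A quick illustration of the pitfall (the table and column names here are hypothetical): if an application has stored a source ROWID in a column and you query it on the target, the stored value will generally not match the row’s actual location there.

-- Hypothetical example: stored_rowid was captured on the source database.
SELECT o.ROWID AS current_rowid, o.stored_rowid
FROM   orders o
WHERE  o.order_id = 42;
-- On the target, current_rowid and stored_rowid will generally differ, so
-- lookups such as WHERE ROWID = stored_rowid can miss or hit the wrong row.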

We also have some new features released in the Dbvisit Replicate Connector for Kafka.  Rather than going into great depth about them, I will just highlight them here; our CTO will be publishing a blog post about these Kafka features very soon.  There are three major features in the Dbvisit Replicate 2.9 release for Kafka:

  1. You now have the option to publish CHANGE SETS as well as CHANGE ROWS.  CHANGE ROWS mode gives you each operation made on the Oracle source database, so the latest representation of the row is there in a Kafka message: each message comprises the changes made to that row, along with the existing row information (the non-changed data) and related metadata.  CHANGE SETS mode lets you obtain the KEY, NEW, OLD and LOB change fields separately (a configuration sketch follows this list).

  2. The changes made to enable CHANGE SETS mode also allowed us to publish message keys to Kafka topics.

  3. The OSFM utility now manages Replicate PLOG files.  Many users are used to having Replicate do its own maintenance on older PLOGs; we have extended that to users running Replicate with Kafka.
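To give a feel for the modes, here is a sketch of a connector configuration. The property names and class below are illustrative assumptions rather than the connector’s documented keys; see the upcoming blog post and the connector documentation for the real ones.

# Hypothetical Kafka Connect properties for the Replicate source connector.
name=dbvisit-replicate
connector.class=com.dbvisit.replicate.kafkaconnect.ReplicateSourceConnector
plog.location.uri=file:/home/oracle/plog    # PLOG directory that OSFM maintains
publish.mode=CHANGE_ROWS                    # or CHANGE_SETS for separate KEY/NEW/OLD/LOB fields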

Customers and partners have also asked for enhancements in the conflict handler area of Replicate.  The first enhancement is automatic and needs no user intervention. The Dbvisit Repository contains a table with all of the errors that have been encountered, which is very useful when working on a problem. However, you may not care about the history, in which case you can certainly purge old data.  _DELETE_OBSOLETE_ERRORLOG_DAYS is a new internal parameter that deletes information older than N days, where N is a value the user can set; the default is OFF.  Related to this is older dictionary information, which has its own parameter, _MINE_DELETE_OBSOLETE_DICTIONARY.  Its values can be YES (delete obsolete information right away), NO (do not delete) or an integer (to keep that many days of information).  Typically these parameters are only set on the advice of Dbvisit Support.
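A minimal sketch of setting these from the dbvrep console, assuming the usual process.parameter syntax (the process prefixes and values here are assumptions, and again, check with Dbvisit Support first):

dbvrep> set APPLY._DELETE_OBSOLETE_ERRORLOG_DAYS 30
dbvrep> set MINE._MINE_DELETE_OBSOLETE_DICTIONARY 7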

There are times when you want to resolve the last conflict but don’t really want to type the number in. Typically you would resolve a conflict with the command:

RESOLVE CONFLICT id AS resolution
RESOLVE CONFLICT 123432142 AS IGNORE

Where 123432142 is the conflict number that will be displayed in the Console.

We have added a shortcut: you can now type the following command instead.

RESOLVE CONFLICT LAST AS resolution
RESOLVE CONFLICT LAST AS IGNORE

This resolves the last (current) conflict that occurred.  The change was made simply to save the user from having to type in the conflict number.

The other major change regarding conflict handlers is that users can now define their own default values.  Out of the box, the default for all conflict handlers is RETRY.  While that value may make sense for some replication setups, one size does not fit all.  Of course you can create individual conflict handlers for specific tables, but for all of the others you may want to change the default.  Here is an example of how to do that:

dbvrep> SET_CONFLICT_HANDLERS FOR DEFAULT FOR UPDATE ON NO_DATA TO OVERWRITE
dbvrep> SHOW_CONFLICT_HANDLERS FOR DEFAULT
The default handling is set as follows:
UPDATE (error): handler: RETRY logging: LOG
UPDATE (data): handler: OVERWRITE logging: LOG
DELETE (error): handler: RETRY logging: LOG
DELETE (data): handler: RETRY logging: LOG
INSERT (error): handler: RETRY logging: LOG
TRANSACTION (error): handler: RETRY logging: LOG

One thing to be aware of when the default global handlers are updated: the change only applies to newly prepared tables. Tables that have already been prepared (or replicated) will still use the previously defined default global handler. This means you may want to set this up right away, before you start adding tables to replication.
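So a typical session would set the defaults first and only then prepare the tables, for example (the schema and table name below are hypothetical):

dbvrep> SET_CONFLICT_HANDLERS FOR DEFAULT FOR UPDATE ON NO_DATA TO OVERWRITE
dbvrep> PREPARE TABLE scott.orders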

And the last major new features to talk about are the new targets: PostgreSQL and Tibero.  PostgreSQL would come up in just about every conversation we had when discussing new targets, so we are happy it is out and hope it is well received.  Both PostgreSQL and Tibero have been incorporated into our regular Setup Wizard, which means customers who already use Dbvisit Replicate will need almost no extra training to set replication up.

Notice in the Setup Wizard Step 1:

Step 1 - Describe databases
========================================
The first step is to describe databases used in the replication. There are usually two of them (source and target).
Store SYSDBA and DBA passwords? Passwords only required during setup and initialization? (Yes/No) [Yes] Yes
Let's configure the database, describing its type, connectivity, user names etc.
What type of database is this? (Oracle/MySQL/Google Cloud SQL/SQL Server/Tibero/Postgres/Oracle AWS RDS/CSV/Hadoop)

We simply added Tibero and PostgreSQL into the Setup Wizard.  After that, the exact same steps that you normally run apply; nothing special needs to occur. We have examples that walk you through step-by-step, including the setup for PostgreSQL and Tibero.
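Once the wizard and the *-all.sh script have run, verifying the target is ordinary PostgreSQL work, for example (the host, user, database and table names here are hypothetical):

psql -h pgtarget -U repuser -d appdb -c "SELECT COUNT(*) FROM orders;"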

We are already busy working on the first patch of 2.9 (2.9.02) and will even have a surprise or two in that patch.  As mentioned before, we are also charging ahead with the Replicate 3.0 release.  If there are any features you would like to see in a future release, please let us know.