After running ‘utils dbreplication status’ you can sometimes run into errors or mismatches. If it is a minor issue, you can try the following.
Run the following command to see which tables are out of sync:
admin:file view activelog cm/trace/dbl/sdi/ReplicationStatus.2020_05_08_19_58_32.out
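The file name is timestamped, so it will differ on your system. If you are not sure which report is the newest, you can list the directory first (a quick sketch; the exact file names shown will vary, and the ‘detail’ and ‘date’ options display timestamps and sort by date):
admin:file list activelog cm/trace/dbl/sdi detail date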
It should give you a fair amount of output, but the section you want to look at is ‘Suspect Replication Summary’.
Fri May 8 19:58:32 2020 main() DEBUG: -->
Fri May 8 19:58:37 2020 main() DEBUG: Replication cluster summary:
SERVER                  ID  STATE   STATUS     QUEUE  CONNECTION CHANGED
-----------------------------------------------------------------------
g_2_ccm11_0_1_22900_14  2   Active  Local      0
g_3_ccm11_0_1_22900_14  3   Active  Connected  0      May 8 19:23:58
g_7_ccm11_0_1_22900_14  7   Active  Connected  0      May 8 19:48:27
Fri May 8 19:58:46 2020 main() DEBUG: <--
---------- Suspect Replication Summary ----------
For table: ccmdbtemplate_g_2_ccm11_0_1_22900_14_1_604_devicerelatedversionstamp
replication is suspect for node(s):
g_7_ccm11_0_1_22900_14
For table: ccmdbtemplate_g_2_ccm11_0_1_22900_14_1_627_mediaresourcegroupmember
replication is suspect for node(s):
g_7_ccm11_0_1_22900_14
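Note the naming pattern here: each suspect entry is the internal replication template name, and the actual database table name is the trailing portion after the numeric ID. For example:
ccmdbtemplate_g_2_ccm11_0_1_22900_14_1_627_mediaresourcegroupmember
-> table name: mediaresourcegroupmember
It is this trailing table name that you pass to the repair command later.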
In this case there are two suspect tables on the g_7 node. ‘utils dbreplication runtimestate’ will allow you to confirm which node is which by looking at the ‘Replication Group ID’ column.
SERVER-NAME  IP ADDRESS    PING    DB/RPC/  REPL.  Replication  REPLICATION SETUP
                           (msec)  DbMon?   QUEUE  Group ID     (RTMT) & Details
-----------  ------------  ------  -------  -----  -----------  ------------------
CCMSUB       172.17.1.11   0.234   Y/Y/Y    1426   (g_3)        (2) Setup Completed
ccmsub2      172.17.55.10  51.131  Y/Y/Y    1426   (g_7)        (2) Setup Completed
CCMPUB       172.17.1.10   0.009   Y/Y/Y    0      (g_2)        (2) Setup Completed
You can then run each repair command one at a time against that node (here g_7 is ccmsub2 at 172.17.55.10), followed by a fresh status check:
utils dbreplication repairtable devicerelatedversionstamp 172.17.55.10
utils dbreplication repairtable mediaresourcegroupmember 172.17.55.10
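If the same table were suspect on more than one node, the repairtable command also accepts ‘all’ in place of a node address (a sketch using the same table from above):
utils dbreplication repairtable mediaresourcegroupmember all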
utils dbreplication status
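This generates a fresh ReplicationStatus output file. View it the same way as before, substituting the new timestamp from your system (the file name below is a placeholder):
admin:file view activelog cm/trace/dbl/sdi/ReplicationStatus.<new_timestamp>.out
The repaired tables should no longer appear under ‘Suspect Replication Summary’.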