On 08/24/2016 01:08 AM, Ian Harding wrote:

On 08/23/2016 03:14 AM, Ludwig Krispenz wrote:
On 08/23/2016 11:52 AM, Ian Harding wrote:
Ah.  I see.  I mixed those up but I see that those would have to be
consistent.

However, I have been trying to beat some invalid RUVs to death for a long
time and I can't seem to kill them.

For example, bellevuenfs has 9 and 16, which are invalid:

[ianh@seattlenfs ~]$ ldapsearch -ZZ -h seattlenfs.bpt.rocks -D
"cn=Directory Manager" -W -b "dc=bpt,dc=rocks"
"(&(objectclass=nstombstone)(nsUniqueId=ffffffff-ffffffff-ffffffff-ffffffff))"

| grep "nsds50ruv\|nsDS5ReplicaId"
Enter LDAP Password:
nsDS5ReplicaId: 7
nsds50ruv: {replicageneration} 55c8f364000000040000
nsds50ruv: {replica 7 ldap://seattlenfs.bpt.rocks:389}
568ac3cc000000070000 57
nsds50ruv: {replica 20 ldap://freeipa-sea.bpt.rocks:389}
57b10377000200140000
nsds50ruv: {replica 18 ldap://bpt-nyc1-nfs.bpt.rocks:389}
57a47801000100120000
nsds50ruv: {replica 15 ldap://fremontnis.bpt.rocks:389}
57a403860000000f0000 5
nsds50ruv: {replica 14 ldap://freeipa-dal.bpt.rocks:389}
57a2dccd0000000e0000
nsds50ruv: {replica 17 ldap://edinburghnfs.bpt.rocks:389}
57a422f9000000110000
nsds50ruv: {replica 19 ldap://bellevuenfs.bpt.rocks:389}
57a4f20d000600130000
nsds50ruv: {replica 16 ldap://bellevuenfs.bpt.rocks:389}
57a41706000000100000
nsds50ruv: {replica 9 ldap://bellevuenfs.bpt.rocks:389}
570484ee000000090000 5
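(An aside on reading these values: each nsds50ruv element carries CSNs, and a 389-ds CSN is 20 hex characters: an 8-digit Unix timestamp followed by 4-digit sequence number, replica id, and subsequence fields. A quick sketch to decode one, assuming that standard layout; note the third field matches the replica id, e.g. 9 for the suspect bellevuenfs element:)

```python
# Decode a 389-ds CSN: 8 hex digits of Unix timestamp, then 4 hex digits
# each for sequence number, replica id, and subsequence number.
# (Sketch assuming the standard 389-ds CSN layout; the field names are ours.)
from datetime import datetime, timezone

def decode_csn(csn: str) -> dict:
    assert len(csn) == 20, "a CSN is 20 hex characters"
    return {
        "timestamp": datetime.fromtimestamp(int(csn[0:8], 16), tz=timezone.utc),
        "seqnum": int(csn[8:12], 16),
        "rid": int(csn[12:16], 16),
        "subseq": int(csn[16:20], 16),
    }

# The CSN of the suspect {replica 9 ldap://bellevuenfs...} element above:
info = decode_csn("570484ee000000090000")
print(info["rid"], info["timestamp"])  # rid 9, written in early April 2016
```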


So I try to kill them like so:
[ianh@seattlenfs ~]$ ipa-replica-manage clean-ruv 9 --force --cleanup
ipa: WARNING: session memcached servers not running
Clean the Replication Update Vector for bellevuenfs.bpt.rocks:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Background task created to clean replication data. This may take a while.
This may be safely interrupted with Ctrl+C
^C[ianh@seattlenfs ~]$ ipa-replica-manage clean-ruv 16 --force --cleanup
ipa: WARNING: session memcached servers not running
Clean the Replication Update Vector for bellevuenfs.bpt.rocks:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Background task created to clean replication data. This may take a while.
This may be safely interrupted with Ctrl+C
^C[ianh@seattlenfs ~]$ ipa-replica-manage list-clean-ruv
ipa: WARNING: session memcached servers not running
CLEANALLRUV tasks
RID 16: Waiting to process all the updates from the deleted replica...
RID 9: Waiting to process all the updates from the deleted replica...

No abort CLEANALLRUV tasks running
[ianh@seattlenfs ~]$ ipa-replica-manage list-clean-ruv
ipa: WARNING: session memcached servers not running
CLEANALLRUV tasks
RID 16: Waiting to process all the updates from the deleted replica...
RID 9: Waiting to process all the updates from the deleted replica...

and it never finishes.

seattlenfs is the first master, that's the only place I should have to
run this command, right?
Right, you need to run it only on one master, but this ease of use can
itself become the problem.
The cleanallruv task is propagated to all servers in the topology and it
does this based on the replication agreements it finds.
A frequent cause of failure is that replication agreements still exist
pointing to no longer existing servers. It is a bit tedious, but could
you run the following search on ALL
of your current replicas (as directory manager):

ldapsearch ...... -b "cn=config" "objectclass=nsds5replicationagreement"
nsds5replicahost

if you find any agreement where nsds5replicahost is a host no longer
existing or working, delete these agreements.
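(Deleting such an agreement is a single ldapdelete of its entry; a sketch, with a hypothetical dead host name substituted in, run as Directory Manager on the server that holds the agreement:)

```shell
# Remove a replication agreement pointing at a host that no longer exists.
# "deadhost.bpt.rocks" is a placeholder; use the actual cn from your ldapsearch output.
ldapdelete -D "cn=Directory Manager" -W \
  "cn=meTodeadhost.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mapping tree,cn=config"
```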
I have 7 FreeIPA servers, all of which have been in existence in some
form or another since I started.  It used to work great.  I've broken it
now, but the hostnames and IP addresses all still exist.  I've
uninstalled and reinstalled them a few times, which I think is the source
of my troubles, so I tried to straighten out the RUVs and probably messed
that up pretty good.

Anyway, now what I THINK I have is

seattlenfs
|-freeipa-sea
   |- freeipa-dal
   |- bellevuenfs
   |- fremontnis
   |- bpt-nyc1-nfs
   |- edinburghnfs

Until I get this squared away I've turned off ipa services on all but
seattlenfs, freeipa-sea and freeipa-dal and am hoping that any password
changes etc. happen on seattlenfs.  I need the other two because they
are my DNS.  The rest I can kind of live without since they are just
local instances living on nfs servers.

Here's the output from that ldap query on all the hosts:
yes, looks like the replication agreements are fine, but the RUVs are not.

In the o=ipaca suffix, there is a reference to bellevuenis:

 [{replica 76
ldap://bellevuenis.bpt.rocks:389} 56f385eb0007004c0000


but this host now seems to be bellevuenfs.

In dc=bpt,dc=rocks, replica id 9 is causing the trouble. There are two replica ids, 9 and 16, for bellevuenfs, and this causes replication failure from edinburghnfs to freeipa-sea. It looks like replica id 9 is not present in freeipa-sea, and edinburghnfs "thinks" it has to send changes but can't position in the changelog.

You had tried cleanallruv for rid 9, which seemed not to complete, but I don't know what the status is on all servers.
What I would do is:

- check the RUVs (the ffffffff... tombstone) again on all servers,
- check if there are still active tasks and try to get rid of them (they can be stubborn), either by trying an abort cleanallruv task, or the hard way: stop the server, check the dse.ldif for existing task attributes in the replica object, and remove them,
- then either retry cleanallruv, but without the force option (this makes the task live until all servers are cleaned, but if replication does not work this will not happen), or do individual RUV cleaning on each server (only on that server, not via the cleanallruv task); you can have a look here: http://www.port389.org/docs/389ds/howto/howto-cleanruv.html
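(For reference, both the abort task and the per-server manual cleaning are plain LDAP operations; a sketch following the documented 389-ds task entries, using rid 9 and this thread's suffix as examples -- adjust the rid and suffix to your case:)

```ldif
# (1) Abort a stuck CLEANALLRUV task for rid 9.
#     Feed to: ldapmodify -D "cn=Directory Manager" -W -a
dn: cn=abort 9, cn=abort cleanallruv, cn=tasks, cn=config
objectclass: extensibleObject
cn: abort 9
replica-base-dn: dc=bpt,dc=rocks
replica-id: 9
replica-certify-all: no

# (2) Per-server manual cleaning of rid 9 (only changes the server you
#     run it against).  Feed to: ldapmodify -D "cn=Directory Manager" -W
dn: cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mapping tree,cn=config
changetype: modify
replace: nsds5task
nsds5task: CLEANRUV9
```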


SEATTLENFS

[root@seattlenfs ianh]# ldapsearch -D "cn=Directory Manager" -W -b
"cn=config" "objectclass=nsds5replicationagreement" nsds5replicahost
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=config> with scope subtree
# filter: objectclass=nsds5replicationagreement
# requesting: nsds5replicahost
#

# meTofreeipa-sea.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, conf
  ig
dn:
cn=meTofreeipa-sea.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappin
  g tree,cn=config
nsds5replicahost: freeipa-sea.bpt.rocks

# masterAgreement1-bellevuenfs.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mappin
  g tree, config
dn:
cn=masterAgreement1-bellevuenfs.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipa
  ca,cn=mapping tree,cn=config
nsds5replicahost: bellevuenfs.bpt.rocks

# masterAgreement1-bpt-nyc1-nfs.bpt.rocks-pki-tomcat, replica,
o\3Dipaca, mappi
  ng tree, config
dn:
cn=masterAgreement1-bpt-nyc1-nfs.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dip
  aca,cn=mapping tree,cn=config
nsds5replicahost: bpt-nyc1-nfs.bpt.rocks

# masterAgreement1-freeipa-dal.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mappin
  g tree, config
dn:
cn=masterAgreement1-freeipa-dal.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipa
  ca,cn=mapping tree,cn=config
nsds5replicahost: freeipa-dal.bpt.rocks

# masterAgreement1-freeipa-sea.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mappin
  g tree, config
dn:
cn=masterAgreement1-freeipa-sea.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipa
  ca,cn=mapping tree,cn=config
nsds5replicahost: freeipa-sea.bpt.rocks

# masterAgreement1-fremontnis.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mapping
   tree, config
dn:
cn=masterAgreement1-fremontnis.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipac
  a,cn=mapping tree,cn=config
nsds5replicahost: fremontnis.bpt.rocks

# search result
search: 2
result: 0 Success

# numResponses: 7
# numEntries: 6

FREEIPA-SEA

[root@freeipa-sea ianh]# ldapsearch -D "cn=Directory Manager" -W -b
"cn=config" "objectclass=nsds5replicationagreement" nsds5replicahost
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=config> with scope subtree
# filter: objectclass=nsds5replicationagreement
# requesting: nsds5replicahost
#

# meTobellevuenfs.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, conf
  ig
dn:
cn=meTobellevuenfs.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappin
  g tree,cn=config
nsds5replicahost: bellevuenfs.bpt.rocks

# meTobpt-nyc1-nfs.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, con
  fig
dn:
cn=meTobpt-nyc1-nfs.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappi
  ng tree,cn=config
nsds5replicahost: bpt-nyc1-nfs.bpt.rocks

# meToedinburghnfs.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, con
  fig
dn:
cn=meToedinburghnfs.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappi
  ng tree,cn=config
nsds5replicahost: edinburghnfs.bpt.rocks

# meTofreeipa-dal.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, conf
  ig
dn:
cn=meTofreeipa-dal.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappin
  g tree,cn=config
nsds5replicahost: freeipa-dal.bpt.rocks

# meTofremontnis.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, confi
  g
dn:
cn=meTofremontnis.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mapping
   tree,cn=config
nsds5replicahost: fremontnis.bpt.rocks

# meToseattlenfs.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, confi
  g
dn:
cn=meToseattlenfs.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mapping
   tree,cn=config
nsds5replicahost: seattlenfs.bpt.rocks

# cloneAgreement1-freeipa-sea.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mapping
   tree, config
dn:
cn=cloneAgreement1-freeipa-sea.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipac
  a,cn=mapping tree,cn=config
nsds5replicahost: seattlenfs.bpt.rocks

# search result
search: 2
result: 0 Success

# numResponses: 8
# numEntries: 7

FREEIPA-DAL

[root@freeipa-dal ianh]# ldapsearch -D "cn=Directory Manager" -W -b
"cn=config" "objectclass=nsds5replicationagreement" nsds5replicahost
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=config> with scope subtree
# filter: objectclass=nsds5replicationagreement
# requesting: nsds5replicahost
#

# meTofreeipa-sea.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, conf
  ig
dn:
cn=meTofreeipa-sea.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappin
  g tree,cn=config
nsds5replicahost: freeipa-sea.bpt.rocks

# cloneAgreement1-freeipa-dal.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mapping
   tree, config
dn:
cn=cloneAgreement1-freeipa-dal.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipac
  a,cn=mapping tree,cn=config
nsds5replicahost: seattlenfs.bpt.rocks

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

BELLEVUENFS

[root@bellevuenfs ianh]# ldapsearch -D "cn=Directory Manager" -W -b
"cn=config" "objectclass=nsds5replicationagreement" nsds5replicahost
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=config> with scope subtree
# filter: objectclass=nsds5replicationagreement
# requesting: nsds5replicahost
#

# meTofreeipa-sea.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, conf
  ig
dn:
cn=meTofreeipa-sea.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappin
  g tree,cn=config
nsds5replicahost: freeipa-sea.bpt.rocks

# cloneAgreement1-bellevuenfs.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mapping
   tree, config
dn:
cn=cloneAgreement1-bellevuenfs.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipac
  a,cn=mapping tree,cn=config
nsds5replicahost: seattlenfs.bpt.rocks

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2


FREMONTNIS

[root@fremontnis ianh]# ldapsearch -D "cn=Directory Manager" -W -b
"cn=config" "objectclass=nsds5replicationagreement" nsds5replicahost
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=config> with scope subtree
# filter: objectclass=nsds5replicationagreement
# requesting: nsds5replicahost
#

# meTofreeipa-sea.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, conf
  ig
dn:
cn=meTofreeipa-sea.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappin
  g tree,cn=config
nsds5replicahost: freeipa-sea.bpt.rocks

# cloneAgreement1-fremontnis.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mapping
  tree, config
dn:
cn=cloneAgreement1-fremontnis.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipaca
  ,cn=mapping tree,cn=config
nsds5replicahost: seattlenfs.bpt.rocks

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

BPT-NYC1-NFS

[root@bpt-nyc1-nfs ianh]# ldapsearch -D "cn=Directory Manager" -W -b
"cn=config" "objectclass=nsds5replicationagreement" nsds5replicahost
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=config> with scope subtree
# filter: objectclass=nsds5replicationagreement
# requesting: nsds5replicahost
#

# meTofreeipa-sea.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, conf
  ig
dn:
cn=meTofreeipa-sea.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappin
  g tree,cn=config
nsds5replicahost: freeipa-sea.bpt.rocks

# cloneAgreement1-bpt-nyc1-nfs.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mappin
  g tree, config
dn:
cn=cloneAgreement1-bpt-nyc1-nfs.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipa
  ca,cn=mapping tree,cn=config
nsds5replicahost: seattlenfs.bpt.rocks

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

EDINBURGHNFS

[root@edinburghnfs ianh]# ldapsearch -D "cn=Directory Manager" -W -b
"cn=config" "objectclass=nsds5replicationagreement" nsds5replicahost
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=config> with scope subtree
# filter: objectclass=nsds5replicationagreement
# requesting: nsds5replicahost
#

# meTofreeipa-sea.bpt.rocks, replica, dc\3Dbpt\2Cdc\3Drocks, mapping
tree, conf
  ig
dn:
cn=meTofreeipa-sea.bpt.rocks,cn=replica,cn=dc\3Dbpt\2Cdc\3Drocks,cn=mappin
  g tree,cn=config
nsds5replicahost: freeipa-sea.bpt.rocks

# cloneAgreement1-edinburghnfs.bpt.rocks-pki-tomcat, replica, o\3Dipaca,
mappin
  g tree, config
dn:
cn=cloneAgreement1-edinburghnfs.bpt.rocks-pki-tomcat,cn=replica,cn=o\3Dipa
  ca,cn=mapping tree,cn=config
nsds5replicahost: freeipa-sea.bpt.rocks

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

Here are the errors from starting up EDINBURGHNFS to run that query.  It
has some familiar-looking problems.

[23/Aug/2016:23:56:35 +0100] SSL Initialization - Configured SSL version
range: min: TLS1.0, max: TLS1.2
[23/Aug/2016:23:56:35 +0100] - 389-Directory/1.3.4.0 B2016.215.1556
starting up
[23/Aug/2016:23:56:35 +0100] - WARNING: changelog: entry cache size
2097152B is less than db size 12361728B; We recommend to increase the
entry cache size nsslapd-cachememsize.
[23/Aug/2016:23:56:35 +0100] schema-compat-plugin - scheduled
schema-compat-plugin tree scan in about 5 seconds after the server startup!
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=groups,cn=compat,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=computers,cn=compat,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=ng,cn=compat,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
ou=sudoers,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=users,cn=compat,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=ad,cn=etc,dc=bpt,dc=rocks does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=casigningcert cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=bpt,dc=rocks
does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target
cn=casigningcert cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=bpt,dc=rocks
does not exist
[23/Aug/2016:23:56:35 +0100] NSACLPlugin - The ACL target cn=automember
rebuild membership,cn=tasks,cn=config does not exist
[23/Aug/2016:23:56:35 +0100] auto-membership-plugin -
automember_parse_regex_rule: Unable to parse regex rule (invalid regex).
  Error "nothing to repeat".
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 1095
ldap://freeipa-sea.bpt.rocks:389} 579a963c000004470000
57a575a0000004470000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 81
ldap://seattlenfs.bpt.rocks:389} 568ac431000000510000
57a4175f000500510000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 96
ldap://freeipa-sea.bpt.rocks:389} 55c8f3bd000000600000
5799a02e000000600000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 86
ldap://fremontnis.bpt.rocks:389} 5685b24e000000560000
5703db4b000500560000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 91
ldap://seattlenis.bpt.rocks:389} 567ad6180001005b0000
568703740000005b0000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 97
ldap://freeipa-dal.bpt.rocks:389} 55c8f3ce000000610000
56f4d70b000000610000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 76
ldap://bellevuenis.bpt.rocks:389} 56f385eb0007004c0000
56f386180004004c0000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 71
ldap://bellevuenfs.bpt.rocks:389} 57048560000900470000
5745722e000000470000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 66
ldap://bpt-nyc1-nfs.bpt.rocks:389} 5733e594000a00420000
5733e5b7002f00420000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 61
ldap://edinburghnfs.bpt.rocks:389} 574421250000003d0000
57785b420004003d0000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 1090
ldap://freeipa-dal.bpt.rocks:389} 57a2dd35000004420000
57a2dd35000404420000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 1085
ldap://fremontnis.bpt.rocks:389} 57a403e60000043d0000
57a403e70002043d0000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 1080
ldap://bellevuenfs.bpt.rocks:389} 57a41767000004380000
57a41768000004380000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin -
replica_check_for_data_reload: Warning: for replica o=ipaca there were
some differences between the changelog max RUV and the database RUV.  If
there are obsolete elements in the database RUV, you should remove them
using the CLEANALLRUV task.  If they are not obsolete, you should check
their status to see why there are no changes from those servers in the
changelog.
[23/Aug/2016:23:56:35 +0100] set_krb5_creds - Could not get initial
credentials for principal [ldap/edinburghnfs.bpt.rocks@BPT.ROCKS] in
keytab [FILE:/etc/dirsrv/ds.keytab]: -1765328228 (Cannot contact any KDC
for requested realm)
[23/Aug/2016:23:56:35 +0100] attrlist_replace - attr_replace
(nsslapd-referral, ldap://freeipa-sea.bpt.rocks:389/o%3Dipaca) failed.
[23/Aug/2016:23:56:35 +0100] attrlist_replace - attr_replace
(nsslapd-referral, ldap://freeipa-sea.bpt.rocks:389/o%3Dipaca) failed.
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 20
ldap://freeipa-sea.bpt.rocks:389} 57b10377000200140000
57bb7bc9000500140000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 18
ldap://bpt-nyc1-nfs.bpt.rocks:389} 57a47801000100120000
57b03107000100120000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 15
ldap://fremontnis.bpt.rocks:389} 57a403860000000f0000
57b036b20002000f0000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 14
ldap://freeipa-dal.bpt.rocks:389} 57a2dccd0000000e0000
57bb7b690005000e0000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 19
ldap://bellevuenfs.bpt.rocks:389} 57a4f20d000600130000
57b0fa3b000100130000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 16
ldap://bellevuenfs.bpt.rocks:389} 57a41706000000100000
57a41706000100100000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin - ruv_compare_ruv:
RUV [changelog max RUV] does not contain element [{replica 9
ldap://bellevuenfs.bpt.rocks:389} 570484ee000000090000
579f6419000000090000] which is present in RUV [database RUV]
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin -
replica_check_for_data_reload: Warning: for replica dc=bpt,dc=rocks
there were some differences between the changelog max RUV and the
database RUV.  If there are obsolete elements in the database RUV, you
should remove them using the CLEANALLRUV task.  If they are not
obsolete, you should check their status to see why there are no changes
from those servers in the changelog.
[23/Aug/2016:23:56:35 +0100] attrlist_replace - attr_replace
(nsslapd-referral,
ldap://seattlenfs.bpt.rocks:389/dc%3Dbpt%2Cdc%3Drocks) failed.
[23/Aug/2016:23:56:35 +0100] attrlist_replace - attr_replace
(nsslapd-referral,
ldap://seattlenfs.bpt.rocks:389/dc%3Dbpt%2Cdc%3Drocks) failed.
[23/Aug/2016:23:56:35 +0100] schema-compat-plugin - schema-compat-plugin
tree scan will start in about 5 seconds!
[23/Aug/2016:23:56:35 +0100] - slapd started.  Listening on All
Interfaces port 389 for LDAP requests
[23/Aug/2016:23:56:35 +0100] - Listening on All Interfaces port 636 for
LDAPS requests
[23/Aug/2016:23:56:35 +0100] - Listening on
/var/run/slapd-BPT-ROCKS.socket for LDAPI requests
[23/Aug/2016:23:56:35 +0100] slapd_ldap_sasl_interactive_bind - Error:
could not perform interactive bind for id [] mech [GSSAPI]: LDAP error
-2 (Local error) (SASL(-1): generic failure: GSSAPI Error: Unspecified
GSS failure.  Minor code may provide more information (No Kerberos
credentials available)) errno 0 (Success)
[23/Aug/2016:23:56:35 +0100] slapi_ldap_bind - Error: could not perform
interactive bind for id [] authentication mechanism [GSSAPI]: error -2
(Local error)
[23/Aug/2016:23:56:35 +0100] NSMMReplicationPlugin -
agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): Replication bind
with GSSAPI auth failed: LDAP error -2 (Local error) (SASL(-1): generic
failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide
more information (No Kerberos credentials available))
[23/Aug/2016:23:56:39 +0100] NSMMReplicationPlugin -
agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): Replication bind
with GSSAPI auth resumed
[23/Aug/2016:23:56:40 +0100] schema-compat-plugin - Finished plugin
initialization.
[23/Aug/2016:23:56:41 +0100] agmt="cn=meTofreeipa-sea.bpt.rocks"
(freeipa-sea:389) - Can't locate CSN 570484ee000000090000 in the
changelog (DB rc=-30988). If replication stops, the consumer may need to
be reinitialized.
[23/Aug/2016:23:56:41 +0100] NSMMReplicationPlugin - changelog program -
agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): CSN
570484ee000000090000 not found, we aren't as up to date, or we purged
[23/Aug/2016:23:56:41 +0100] NSMMReplicationPlugin -
agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): Data required to
update replica has been purged. The replica must be reinitialized.
[23/Aug/2016:23:56:42 +0100] NSMMReplicationPlugin -
agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): Incremental
update failed and requires administrator action


I went around and around re-initializing from various servers last night
to try to make these go away, but it's like whack-a-mole.

What's the best way you can think of to put humpty dumpty back together
again?
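(For what it's worth, once the RUVs are clean, re-initialization itself is one command per replica, run on the replica being rebuilt; a sketch assuming seattlenfs as the source, and noting that the o=ipaca CA suffix has its own tool:)

```shell
# Re-initialize this replica's dc=bpt,dc=rocks data from a healthy master:
ipa-replica-manage re-initialize --from seattlenfs.bpt.rocks
# The o=ipaca (CA) suffix is replicated separately:
ipa-csreplica-manage re-initialize --from seattlenfs.bpt.rocks
```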

Thank you so much for your time.  Come to Tacoma and I will buy you all
the beer.
I'm about to burn everything down and run ipa-server-install --uninstall,
but I've done that a couple of times before and that seems to be what got
me into this mess...

Thank you for your help.




On 08/23/2016 01:37 AM, Ludwig Krispenz wrote:
It looks like you are searching the nstombstone below "o=ipaca", but you
are cleaning RUVs in "dc=bpt,dc=rocks".

Your attrlist_replace error refers to the dc=bpt,dc=rocks backend, so you
should search the tombstone entry there, then determine which replica IDs
to remove.
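(In other words, a cleanallruv task for a RUV element seen in the o=ipaca tombstone would need replica-base-dn: o=ipaca rather than dc=bpt,dc=rocks; a sketch for rid 96, following the same task format used later in this thread:)

```ldif
dn: cn=clean 96, cn=cleanallruv, cn=tasks, cn=config
objectclass: top
objectclass: extensibleObject
cn: clean 96
replica-base-dn: o=ipaca
replica-id: 96
replica-force-cleaning: no
```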

Ludwig

On 08/23/2016 09:20 AM, Ian Harding wrote:
I've followed the procedure in this thread:

https://www.redhat.com/archives/freeipa-users/2016-May/msg00043.html

and found my list of RUVs that don't have an existing replica id.

I've tried to remove them like so:

[root@seattlenfs ianh]# ldapmodify -D "cn=directory manager" -W -a
Enter LDAP Password:
dn: cn=clean 97, cn=cleanallruv, cn=tasks, cn=config
objectclass: top
objectclass: extensibleObject
replica-base-dn: dc=bpt,dc=rocks
replica-id: 97
replica-force-cleaning: yes
cn: clean 97

adding new entry "cn=clean 97, cn=cleanallruv, cn=tasks, cn=config"

[root@seattlenfs ianh]# ipa-replica-manage list-clean-ruv
CLEANALLRUV tasks
RID 9: Waiting to process all the updates from the deleted replica...
RID 96: Successfully cleaned rid(96).
RID 97: Successfully cleaned rid(97).

No abort CLEANALLRUV tasks running


and yet, they are still there...

[root@seattlenfs ianh]# ldapsearch -ZZ -h seattlenfs.bpt.rocks -D
"cn=Directory Manager" -W -b "o=ipaca"
"(&(objectclass=nstombstone)(nsUniqueId=ffffffff-ffffffff-ffffffff-ffffffff))"


| grep "nsds50ruv\|nsDS5ReplicaId"
Enter LDAP Password:
nsDS5ReplicaId: 81
nsds50ruv: {replicageneration} 55c8f3ae000000600000
nsds50ruv: {replica 81 ldap://seattlenfs.bpt.rocks:389}
568ac431000000510000 5
nsds50ruv: {replica 1065 ldap://freeipa-sea.bpt.rocks:389}
57b103d400000429000
nsds50ruv: {replica 1070 ldap://bellevuenfs.bpt.rocks:389}
57a4f2700000042e000
nsds50ruv: {replica 1075 ldap://bpt-nyc1-nfs.bpt.rocks:389}
57a478650000043300
nsds50ruv: {replica 1080 ldap://bellevuenfs.bpt.rocks:389}
57a4176700000438000
nsds50ruv: {replica 1085 ldap://fremontnis.bpt.rocks:389}
57a403e60000043d0000
nsds50ruv: {replica 1090 ldap://freeipa-dal.bpt.rocks:389}
57a2dd3500000442000
nsds50ruv: {replica 1095 ldap://freeipa-sea.bpt.rocks:389}
579a963c00000447000
nsds50ruv: {replica 96 ldap://freeipa-sea.bpt.rocks:389}
55c8f3bd000000600000
nsds50ruv: {replica 86 ldap://fremontnis.bpt.rocks:389}
5685b24e000000560000 5
nsds50ruv: {replica 91 ldap://seattlenis.bpt.rocks:389}
567ad6180001005b0000 5
nsds50ruv: {replica 97 ldap://freeipa-dal.bpt.rocks:389}
55c8f3ce000000610000
nsds50ruv: {replica 76 ldap://bellevuenis.bpt.rocks:389}
56f385eb0007004c0000
nsds50ruv: {replica 71 ldap://bellevuenfs.bpt.rocks:389}
57048560000900470000
nsds50ruv: {replica 66 ldap://bpt-nyc1-nfs.bpt.rocks:389}
5733e594000a00420000
nsds50ruv: {replica 61 ldap://edinburghnfs.bpt.rocks:389}
574421250000003d0000
nsds50ruv: {replica 1195 ldap://edinburghnfs.bpt.rocks:389}
57a42390000004ab00

What have I done wrong?

The problem I am trying to solve is that seattlenfs.bpt.rocks sends
updates to all its children, but their changes don't come back because
of these errors:

[23/Aug/2016:00:02:16 -0700] attrlist_replace - attr_replace
(nsslapd-referral,
ldap://seattlenfs.bpt.rocks:389/dc%3Dbpt%2Cdc%3Drocks) failed.

In effect, the replication agreements are one-way.

Any ideas?

- Ian


--
Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric 
Shander

--
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project
