On 09/21/2016 11:43 AM, Rob Crittenden wrote:
> Ian Harding wrote:
>> I used to have a lot of replicas, but like a house of cards, it all came
>> crashing down.
>>
>> I was down to two that seemed to be replicating, but over the last few
>> days I've noticed that they haven't always been.
>>
>> freeipa-sea.bpt.rocks is where we do all our admin.
>> seattlenfs.bpt.rocks is also up and running and can be used for
>> authentication.
>>
>> When I noticed that logins were failing after password changes, I ran
>>
>> ipa-replica-manage re-initialize --from=freeipa-sea.bpt.rocks
> 
> Note that this is the hammer approach. Diagnosing the underlying issues
> would probably be best.
> 
> What is the output of:
> 
> $ rpm -q 389-ds-base freeipa-server
> 
> (or ipa-server depending on distro).
> 
> That will give us the info needed to suggest what else to look for.
> 
> rob
> 

Hammer sounds pretty good.

# rpm -q 389-ds-base ipa-server
389-ds-base-1.3.4.0-33.el7_2.x86_64
ipa-server-4.2.0-15.0.1.el7.centos.19.x86_64
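
In case it helps, this is roughly how I've been checking the agreements on
each box (just a sketch; it assumes a valid Kerberos ticket for an admin
principal, run as root on each master):

```shell
# Ask 389-ds for the last-update status of every replication agreement.
# GSSAPI bind, so kinit first; agreements live under cn=mapping tree,cn=config.
ldapsearch -Y GSSAPI -b "cn=mapping tree,cn=config" \
    "(objectClass=nsds5replicationagreement)" \
    nsds5replicaLastUpdateStatus nsds5replicaLastUpdateEnd
```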

>>
>> on seattlenfs.bpt.rocks and replication appeared to be working again.
>>
>> Well it happened again, and this time I peeked at the dirsrv errors log
>> and saw some scary things having to do with the CA.
>>
>> [19/Sep/2016:02:55:50 -0700] slapd_ldap_sasl_interactive_bind - Error:
>> could not perform interactive bind for id [] mech [GSSAPI]: LDAP error
>> -1 (Can't contact LDAP server) ((null)) errno 0 (Success)
>> [19/Sep/2016:02:55:50 -0700] slapi_ldap_bind - Error: could not perform
>> interactive bind for id [] authentication mechanism [GSSAPI]: error -1
>> (Can't contact LDAP server)
>> [19/Sep/2016:02:55:50 -0700] NSMMReplicationPlugin -
>> agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): Replication bind
>> with GSSAPI auth failed: LDAP error -1 (Can't contact LDAP server) ()
>> [19/Sep/2016:02:56:04 -0700] NSMMReplicationPlugin -
>> agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): Replication bind
>> with GSSAPI auth resumed
>> [20/Sep/2016:10:18:25 -0700] NSMMReplicationPlugin -
>> multimaster_be_state_change: replica dc=bpt,dc=rocks is going offline;
>> disabling replication
>> [20/Sep/2016:10:18:26 -0700] - WARNING: Import is running with
>> nsslapd-db-private-import-mem on; No other process is allowed to access
>> the database
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Workers finished;
>> cleaning up...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Workers cleaned up.
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Indexing complete.
>> Post-processing...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Generating
>> numsubordinates (this may take several minutes to complete)...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Generating
>> numSubordinates complete.
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Gathering ancestorid
>> non-leaf IDs...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Finished gathering
>> ancestorid non-leaf IDs.
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Creating ancestorid
>> index (new idl)...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Created ancestorid index
>> (new idl).
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Flushing caches...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Closing files...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Import complete.
>> Processed 1324 entries in 3 seconds. (441.33 entries/sec)
>> [20/Sep/2016:10:18:29 -0700] NSMMReplicationPlugin -
>> multimaster_be_state_change: replica dc=bpt,dc=rocks is coming online;
>> enabling replication
>> [20/Sep/2016:10:18:29 -0700] NSMMReplicationPlugin - replica_reload_ruv:
>> Warning: new data for replica dc=bpt,dc=rocks does not match the data in
>> the changelog.
>>   Recreating the changelog file. This could affect replication with
>> replica's  consumers in which case the consumers should be reinitialized.
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=groups,cn=compat,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=computers,cn=compat,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=ng,cn=compat,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> ou=sudoers,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=users,cn=compat,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=ad,cn=etc,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=casigningcert cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=bpt,dc=rocks
>> does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=casigningcert cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=bpt,dc=rocks
>> does not exist
>>
>> Any clues what that's about?  I assume the high-numbered RUVs are the
>> CA RUVs.  Those other machines are still installed, but IPA services
>> are turned off so people can't authenticate to them, since replication
>> was not working.
>>
>> If there were some way I could go down to one machine (freeipa-sea) and
>> get it all cleaned up (no ghost RUVs, everything quiet in the logs) and
>> start over creating replicas, I would love to do that.  Seems like
>> someone smarter than me could stop the server, back up the ldif files,
>> and edit them to make all the cruft go away.
>>
>> Is that possible?  I've started a conversation with Red Hat about
>> getting on board with the official bits and support, but I want to know
>> whether it's possible/cost-effective to do what I describe, along with,
>> I assume, migrating to the official versions of Spacewalk and FreeIPA.
>>
>> Thanks!
>>
>> Ian
>>
> 
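
P.S. For what it's worth, the ghost-RUV cleanup I have in mind would look
something like this (a sketch only; the replica ID 24 is made up, and the
tool prompts for the Directory Manager password):

```shell
# List the replica IDs (RUVs) this master still knows about,
# including entries for long-gone replicas.
ipa-replica-manage list-ruv

# Delete a stale RUV for a master that no longer exists, by replica ID
# taken from the list-ruv output above.
ipa-replica-manage clean-ruv 24
```

My understanding is that the CA data replicates in its own suffix (o=ipaca)
with its own RUVs, which is probably where the high-numbered ones come from;
I'm not sure clean-ruv on the main suffix touches those.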

-- 
Ian Harding
IT Director
Brown Paper Tickets
1-800-838-3006 ext 7186
http://www.brownpapertickets.com

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project