accesslog contextcsn isn't always updated
hi, i am seeing a symptom where the accesslog contextcsn is not always updated when a new entry is added to the accesslog. i have a test setup [config is below], with a content database using the accesslog and syncprov overlays, and an accesslog database using the syncprov overlay. for the purposes of testing, i'm not using it as a provider for any consumers; it's just running by itself while i watch its behavior.

when a modification is made to an entry in the content db, the contextcsn value for the content db is always updated, and a new entry is always added to the accesslog db. but when the accesslog db gets a new entry, the accesslog contextcsn does not always update to match the new entry [see example below]. ldap_accesslog_noop is just a small shell script which updates the info attribute for an entry.

it's somewhat anecdotal, but there may be a timing factor involved. if there has been no activity for a little while [as little as a few minutes, sometimes] and a modification is then performed, it does not update the accesslog contextcsn. but if subsequent modifications are done within a few moments, the accesslog contextcsn does eventually update correctly, typically as of the second modification, but sometimes the third. if modifications then continue, with little delay between them, the contextcsn seems to stay consistently up to date. if activity then stops, and some time passes as before, the symptom reappears.

this is version 2.4.44 on freebsd 10.3-release, built from ports. i'm hoping someone can offer some guidance on how to troubleshoot this further, or what i might be doing wrong. i can provide more config details, logs, debugging, etc., if needed. apologies for the long collection of details following, and thanks!
## first mod, after some time of inactivity: ##

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b cn=accesslog2 -s base contextcsn
dn: cn=accesslog2
contextCSN: 20170825225855.866010Z#00#001#00

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b uid=accesslog_noop,ou=replication,ou=system,ou=accounts,dc=example,dc=com -s base info entrycsn
dn: uid=accesslog_noop,ou=replication,ou=system,ou=accounts,dc=example,dc=com
info: 1503221460
entryCSN: 20170825225855.866010Z#00#001#00

>./ldap_accesslog_noop

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b uid=accesslog_noop,ou=replication,ou=system,ou=accounts,dc=example,dc=com -s base info entrycsn
dn: uid=accesslog_noop,ou=replication,ou=system,ou=accounts,dc=example,dc=com
info: 1503717700
entryCSN: 20170826032140.674259Z#00#001#00

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b dc=example,dc=com -s base contextcsn
dn: dc=example,dc=com
contextCSN: 20170826032140.674259Z#00#001#00

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b cn=accesslog2 (reqdn=uid=accesslog_noop,ou=replication,ou=system,ou=accounts,dc=example,dc=com) entrycsn
dn: reqStart=20170825034142.04Z,cn=accesslog2
entryCSN: 20170825034142.304465Z#00#001#00

dn: reqStart=20170825034147.04Z,cn=accesslog2
entryCSN: 20170825034147.248214Z#00#001#00

dn: reqStart=20170825034238.04Z,cn=accesslog2
entryCSN: 20170825034238.430123Z#00#001#00

dn: reqStart=20170825034239.04Z,cn=accesslog2
entryCSN: 20170825034239.815833Z#00#001#00

dn: reqStart=20170825034320.04Z,cn=accesslog2
entryCSN: 20170825034320.198025Z#00#001#00

dn: reqStart=20170825034321.04Z,cn=accesslog2
entryCSN: 20170825034321.767124Z#00#001#00

dn: reqStart=20170825225347.04Z,cn=accesslog2
entryCSN: 20170825225347.344528Z#00#001#00

dn: reqStart=20170825225849.07Z,cn=accesslog2
entryCSN: 20170825225849.109615Z#00#001#00

dn: reqStart=20170825225855.07Z,cn=accesslog2
entryCSN: 20170825225855.866010Z#00#001#00

dn: reqStart=20170826032140.07Z,cn=accesslog2
entryCSN: 20170826032140.674259Z#00#001#00

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b cn=accesslog2 -s base contextcsn
dn: cn=accesslog2
contextCSN: 20170825225855.866010Z#00#001#00

## second mod, a few seconds later: ##

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b cn=accesslog2 -s base contextcsn
dn: cn=accesslog2
contextCSN: 20170825225855.866010Z#00#001#00

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b uid=accesslog_noop,ou=replication,ou=system,ou=accounts,dc=example,dc=com -s base info entrycsn
dn: uid=accesslog_noop,ou=replication,ou=system,ou=accounts,dc=example,dc=com
info: 1503717700
entryCSN: 20170826032140.674259Z#00#001#00

>./ldap_accesslog_noop

>ldapsearch -xLLLH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx -b
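The ldap_accesslog_noop helper is only described, not shown, in the post. A minimal sketch of what such a script might look like (the DN is taken from the post's search output, but the script body, bind DN, and URI here are assumptions of mine, not the poster's actual code):

```shell
#!/bin/sh
# Hypothetical reconstruction of "ldap_accesslog_noop": emit an LDIF
# modification that replaces the info attribute of a fixed entry with the
# current epoch time, so every run generates a fresh accesslog entry
# without changing anything meaningful.
DN="uid=accesslog_noop,ou=replication,ou=system,ou=accounts,dc=example,dc=com"

ldif() {
    # Build the modify request; info gets the current Unix timestamp.
    printf 'dn: %s\nchangetype: modify\nreplace: info\ninfo: %s\n' \
        "$DN" "$(date +%s)"
}

ldif
# In practice the output would be piped to the server, e.g.:
#   ldif | ldapmodify -xH ldap://localhost/ -D uid=admin,dc=example,dc=com -w xx
```

Running it repeatedly, with varying delays between runs, is enough to reproduce the timing-dependent symptom described above.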
Re: OpenLDAP Replication Error
On Thu, 24 Aug 2017 15:08:44 +, "Zpyro ." wrote:

> Hi All - I am trying to setup replication between a Centos 5 (2.3) and Centos 7 (2.4) server.
>
> Partial replication is working - however it has not fully replicated. I am receiving an error of "syncrepl_message_to_entry: rid=123 mods check (postalAddress: value #0 invalid per syntax)" in the logs.
>
> From the research I was doing, it looks like this is a reference to a missing schema - however I am pretty sure they are all in place.
>
> Below are the results from querying the schemas on both - ldapsearch -H ldap://localhost -x -s base -b "cn=subschema" objectclasses - as well as the slapd.conf files from both hosts.
>
> Any insight into what I am missing would be greatly appreciated!!
>
> Please let me know if you need any more information.

[...]

Your postalAddress attribute value does not seem to conform to RFC 4517, section 3.3.28.

-Dieter

--
Dieter Klünter | Systemberatung
http://sys4.de
GPG Key ID: E9ED159B
53°37'09,95"N 10°08'02,42"E
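For reference (this illustration is mine, not from the thread): RFC 4517 §3.3.28 defines PostalAddress as one or more UTF-8 lines joined by '$', with any literal '$' or '\' inside a line escaped as \24 and \5C respectively. A conforming value looks like:

```
postalAddress: 1234 Main St.$Anytown, CA 12345$USA
```

One common trigger for "value #0 invalid per syntax" when replicating from 2.3 to 2.4 can be values containing non-UTF-8 bytes (e.g. Latin-1 accented characters), since 2.4 validates syntaxes more strictly than 2.3 did; checking the offending entry's raw bytes is a reasonable first step.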
Re: Removing Berkeley DB Log Files
Douglas Duckworth wrote:
> Hi I am running openldap-servers-2.4.40-16.el6.x86_64 cluster on Centos 6.9. My /var/lib/ldap directory contains many 10MB log files. /var partition rather small...
>
> I've read they can be removed either by running "sudo db_archive -d -h /var/lib/ldap/domain" or by defining "DB_LOG_AUTOREMOVE" within the file "DB_CONFIG." That file does not presently exist whereas the db_archive command does not actually remove any of the log files.

If the db_archive command doesn't remove anything, that means it thinks all of the log files are still in active use. Read the docs more carefully.

http://docs.oracle.com/cd/E17076_05/html/programmer_reference/transapp_logfile.html

> Can I remove the old log files manually using rm?

Not if the above is true; you will corrupt the logs and the DB will fail to open on a subsequent restart.

> If not should I create /var/lib/ldap/DB_CONFIG then restart slapd to make this removal automatic? Do you have any idea why db_archive does not work or produce any helpful error to stdout?

There's no error message because there's no error; everything is working as designed. You need to do periodic checkpoints to allow log files to be closed, and then db_archive will be able to remove some of them.

--
Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
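Putting the two pieces of the answer together, a sketch of the usual setup (the database directory here is a placeholder; adjust to the real suffix directory under /var/lib/ldap):

```shell
#!/bin/sh
# Sketch: enable automatic Berkeley DB log-file removal for back-bdb/hdb.
# DBDIR is a stand-in path for illustration, not from the original post.
DBDIR=${DBDIR:-./ldap-db}     # normally the suffix dir, e.g. /var/lib/ldap/domain
mkdir -p "$DBDIR"

cat > "$DBDIR/DB_CONFIG" <<'EOF'
# Let Berkeley DB delete transaction log files once they are no longer needed.
set_flags DB_LOG_AUTOREMOVE
EOF

# Note: logs are only released at checkpoints. In slapd.conf, a directive like
#   checkpoint 1024 15
# (checkpoint after 1024 KB of log data or 15 minutes) makes slapd checkpoint
# periodically; only then does DB_LOG_AUTOREMOVE (or "db_archive -d") have
# anything it is allowed to remove.
```

With checkpoints in place, `db_archive -h "$DBDIR"` listing no files is the expected healthy state rather than an error.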
Removing Berkeley DB Log Files
Hi

I am running an openldap-servers-2.4.40-16.el6.x86_64 cluster on Centos 6.9. My /var/lib/ldap directory contains many 10MB log files, and the /var partition is rather small...

I've read they can be removed either by running "sudo db_archive -d -h /var/lib/ldap/domain" or by defining "DB_LOG_AUTOREMOVE" within the file "DB_CONFIG." That file does not presently exist, whereas the db_archive command does not actually remove any of the log files.

Can I remove the old log files manually using rm? If not, should I create /var/lib/ldap/DB_CONFIG then restart slapd to make this removal automatic? Do you have any idea why db_archive does not work or produce any helpful error to stdout?

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Physiology and Biophysics
Weill Cornell Medicine
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690
OpenLDAP Replication Error
Hi All - I am trying to set up replication between a Centos 5 (2.3) and Centos 7 (2.4) server.

Partial replication is working - however it has not fully replicated. I am receiving an error of "syncrepl_message_to_entry: rid=123 mods check (postalAddress: value #0 invalid per syntax)" in the logs.

From the research I was doing, it looks like this is a reference to a missing schema - however I am pretty sure they are all in place.

Below are the results from querying the schemas on both - ldapsearch -H ldap://localhost -x -s base -b "cn=subschema" objectclasses - as well as the slapd.conf files from both hosts.

Any insight into what I am missing would be greatly appreciated!! Please let me know if you need any more information. Thank You!!

- PRIMARY SERVER
#
#
# base
Re: LDAPCon 2017 programme now online
On Fri, 2017-08-11 at 13:43 +0100, Andrew Findlay wrote:
> The programme for the 2017 LDAP Conference has just been published:
>
> https://ldapcon.org/2017/conference-program/
>
> It's looking good, so get your booking in quickly to get early-bird tickets and start thinking about where you want to stay in Brussels!
>
> The conference runs 19th and 20th October 2017.
>
> Andrew

I don't think I can afford an intercontinental trip right now; is there any way to get the slide decks afterwards?
Re: mdb fragmentation
On 8/25/17 2:30 AM, Quanah Gibson-Mount wrote:
> Hi Geert,
>
> If I could, I would delete 8664 from the ITS system entirely, as it was filed based on invalid information that was provided to me. It generally should be ignored.
>
> When a write operation is performed with LMDB, the freelist is scanned for available space to reuse if possible. The larger the size of the freelist, the longer it will take for the operation to complete successfully. When the database has gotten to a certain point of fragmentation (this differs based on the individual use case), it will start taking a noticeable amount of time for those write operations to complete, and the server processing the write operation essentially comes to a halt during this process.
> [...]
>
> Hope this helps! --Quanah

Hi all!

Hmm, I am a bit alarmed by this. I would have expected the free blocks to be sorted by size to some extent, so that suitable blocks are found fairly fast. But I already had the impression that this is not the case when I analyzed how mdb_stat.c calculates the amount of free space...

Since I ran into an allocation problem with my software on a test system anyway -- the database was "full" despite gigabytes of reported free space -- I wonder whether I should limit the size of larger data values and also round the sizes up, e.g. to the next power of two, in order to reduce the risk of such problems. From that perspective it would also be interesting to me from which size on LMDB allocates extents to store the data (please forgive me if this is obvious and I missed it, or if I have a conceptual misunderstanding).

Klaus
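Klaus's rounding idea can be sketched in a few lines; this helper (my illustration, not code from the thread) computes the next power of two, which an application could use to quantize value sizes before storing them, so freed chunks come in a small number of reusable sizes:

```shell
#!/bin/sh
# Round a positive integer up to the next power of two. Storing values in
# quantized sizes like this is one way to reduce freelist fragmentation,
# at the cost of some wasted space per value.
next_pow2() {
    n=$1; p=1
    while [ "$p" -lt "$n" ]; do
        p=$((p * 2))
    done
    echo "$p"
}

next_pow2 3000   # -> 4096
```

Whether the space overhead (up to ~2x in the worst case) is worth the reduced fragmentation depends on the workload; it is a classic slab-allocation trade-off rather than anything LMDB-specific.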
Re: mdb fragmentation
On Thu, Aug 24, 2017 at 19:30:17 -0500, Quanah Gibson-Mount wrote:
> When a write operation is performed with LMDB, the freelist is scanned
> for available space to reuse if possible. The larger the size of the
> freelist, the longer it will take for the operation to complete
> successfully. When the database has gotten to a certain point of
> fragmentation (this differs based on the individual use case), it will
> start taking a noticeable amount of time for those write operations to
> complete, and the server processing the write operation essentially
> comes to a halt during this process. Once the write operation completes,
> things go back to normal. The only solution is to dump and reload the
> database (slapcat/slapadd or mdb_copy -c). Eventually, you will get back
> into the same situation and have to do this again.
>
> [..]
>
> This is one area where LMDB differs significantly from back-hdb/bdb. You
> could have back-bdb/hdb databases that endured a high rate of write
> operations for years w/o needing maintenance. With LMDB, you get better
> read & write rates, but it requires periodic reloads.

Thanks Quanah, this definitely explains the issues we saw. So we'll have to live with periodic mdb maintenance. I think with mdb_copy -c it should be quite doable, as opposed to slapcat/slapadd, which took all day. I'll have to look for some freelist size threshold on which we can set an alert, before we get into noticeable trouble again.

Can the need for this periodic mdb maintenance be documented in the OpenLDAP admin guide?

I'll respond to the Zimbra specific remarks off-list.

Geert

--
geert.hendrickx.be :: ge...@hendrickx.be :: PGP: 0xC4BB9E9F
This e-mail was composed using 100% recycled spam messages!
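A sketch of the periodic maintenance Geert describes (paths and service names here are placeholders, not from the thread; by default the script only prints the commands it would run, set RUN=yes to execute them):

```shell
#!/bin/sh
# Sketch: compact an LMDB-backed slapd database with mdb_copy -c, which
# writes a copy without the freelist/fragmented pages. Dry-run by default.
DB_DIR=${DB_DIR:-/var/lib/ldap}
NEW_DIR=${NEW_DIR:-/var/lib/ldap.compact}

run() { [ "$RUN" = yes ] && "$@" || echo "would run: $*"; }

run service slapd stop                  # quiesce writes during the swap
run mkdir -p "$NEW_DIR"
run mdb_copy -c "$DB_DIR" "$NEW_DIR"    # -c = compact while copying
run mv "$NEW_DIR/data.mdb" "$DB_DIR/data.mdb"
run service slapd start
# For the alert threshold mentioned above, "mdb_stat -ef $DB_DIR" reports
# free-page statistics that could be monitored between compactions.
```

mdb_copy itself can run against a live environment, but swapping the data file under a running slapd is not safe, hence the stop/start around the move.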