Re: Backup Mirrormode setup

2023-03-08 Thread Meike Stone
On Wed, Mar 16, 2022 at 21:39, Quanah Gibson-Mount wrote:
>
>
>
> --On Wednesday, March 16, 2022 10:23 PM +0100 Meike Stone
>  wrote:
>
> >
> > We are still using the bdb backend and the latest 2.4.59 (don't ask,
> > it will be replaced soon), and I remember that a "few" years ago I had
> > problems with slapcat on online databases, because the server stalled
> > for a while and answers were delayed.
> > (The Admin Guide says that "Backups are managed slightly differently"
> > ...) Secondly, while offline, I can copy the whole database directory
> > including the transaction logs.
>
> Ah, ok.  Yes, you could set up a read-only consumer node to take backups
> from.  I'd highly prioritize moving to a supported release of OpenLDAP with
> the back-mdb backend as well (I see you say it should be replaced soon). :)
>

Oh my god, almost a year has passed; now our department finally wants
to migrate to the latest version, 2.5.14. I'm responsible for the
master LDAP server, but there are a few read-only replicas in other
departments that I'm not responsible for, and I have no access to them.
Is it possible to migrate the master server to the new version and
update the replicas (running 2.4.59) later, independently of the
master migration? Are there any problems to be expected?

Regards and many thanks
Meike


Re: Backup Mirrormode setup

2022-03-16 Thread Meike Stone
On Wed, Mar 16, 2022 at 21:39, Quanah Gibson-Mount wrote:
>
>
>
> --On Wednesday, March 16, 2022 10:23 PM +0100 Meike Stone
>  wrote:
>
> >
> > We are still using the bdb backend and the latest 2.4.59 (don't ask,
> > it will be replaced soon), and I remember that a "few" years ago I had
> > problems with slapcat on online databases, because the server stalled
> > for a while and answers were delayed.
> > (The Admin Guide says that "Backups are managed slightly differently"
> > ...) Secondly, while offline, I can copy the whole database directory
> > including the transaction logs.
>
> Ah, ok.  Yes, you could set up a read-only consumer node to take backups
> from.  I'd highly prioritize moving to a supported release of OpenLDAP with
> the back-mdb backend as well (I see you say it should be replaced soon). :)
>
Quanah, thanks for guiding me!
Meike


Re: Backup Mirrormode setup

2022-03-16 Thread Meike Stone
On Wed, Mar 16, 2022 at 19:31, Quanah Gibson-Mount wrote:
>
>
>
> --On Wednesday, March 16, 2022 7:59 PM +0100 Meike Stone
>  wrote:
>
> > Hello,
> >
> > what is the right solution for backing up a MirrorMode setup?
> > I have a simple setup with two servers running in MirrorMode, and a
> > virtual IP is moved on "request" between the two servers (nodes). The
> > DNS name of the virtual IP is used for client LDAP requests. The
> > server certificate is issued to the DNS name of the virtual IP, but
> > has SANs for both node names. The node names are used for
> > replication in MirrorMode. Everything is running well.
> > Can I set up a third (read-only) server that only replicates the
> > database from the "MirrorMode cluster" using the DNS name without
> > problems, and then shut down slapd temporarily and slapcat the database?
>
> You don't need to shut down slapd to slapcat the database, so I'm not clear
> what the concern is here?  You should just do a periodic slapcat.
>
We are still using the bdb backend and the latest 2.4.59 (don't ask,
it will be replaced soon), and I remember that a "few" years ago I had
problems with slapcat on online databases, because the server stalled
for a while and answers were delayed.
(The Admin Guide says that "Backups are managed slightly differently" ...)
Secondly, while offline, I can copy the whole database directory
including the transaction logs.

Thanks Meike


Backup Mirrormode setup

2022-03-16 Thread Meike Stone
Hello,

what is the right solution for backing up a MirrorMode setup?
I have a simple setup with two servers running in MirrorMode, and a
virtual IP is moved on "request" between the two servers (nodes). The
DNS name of the virtual IP is used for client LDAP requests. The
server certificate is issued to the DNS name of the virtual IP, but
has SANs for both node names. The node names are used for
replication in MirrorMode. Everything is running well.
Can I set up a third (read-only) server that only replicates the
database from the "MirrorMode cluster" using the DNS name without
problems, and then shut down slapd temporarily and slapcat the database?
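For anyone reading the archive: a third consumer of this kind would be a plain syncrepl replica. A minimal slapd.conf sketch might look like the following; all names, the rid, the URI, the directory, and the credentials are placeholders rather than values from this thread, and back-mdb is assumed for a current release:

```
# Sketch of a read-only syncrepl consumer used only as a backup source.
# All names, rid, URI and credentials below are placeholders.
database    mdb
suffix      "dc=example,dc=com"
rootdn      "cn=admin,dc=example,dc=com"
directory   /var/lib/ldap/backup

syncrepl    rid=003
            provider=ldap://ldap.example.com
            type=refreshAndPersist
            searchbase="dc=example,dc=com"
            bindmethod=simple
            binddn="cn=replicator,dc=example,dc=com"
            credentials=secret
            retry="60 +"
```

slapcat can then be run against this node (or slapd stopped there first) without touching the MirrorMode pair at all.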

Thanks Meike


Re: slapo-memberof and Replication

2018-10-01 Thread Meike Stone
Hello Quanah,

Thanks for clarification.

> > That confuses me a little bit.
> > All replication in OpenLDAP is based on sync replication (slurpd
> > vanished a long time ago),
> > so what kind of replication does the manual page mean (-> "Replica servers")?
>
> It means that you run it in a replicated environment at your own risk.
> Unfortunately, there is no defined standard for the "memberOf"
> functionality (it's a MS hack) and so there's nothing that details how it
> should or shouldn't behave with replication.  In general, things work fine
> as long as:
>
> a) The server(s) never go into REFRESH
> and
> b) You never bring up a new replica with an empty database (which then does
> a full REFRESH)

That means if I run in MirrorMode, I can turn on the memberOf overlay
on the active OpenLDAP server and leave it off on the slave.
Then REFRESH is supported?! In an emergency (hardware failure) I can
manually make the mirror active and turn the overlay on?!

Thanks Meike



slapo-memberof and Replication

2018-09-28 Thread Meike Stone
Hello,

I need the memberOf attribute on users, and I configured it with the
memberOf overlay. Everything is working fine. I would like to deploy a
second server for redundancy reasons, but the manual page of the
overlay says:
" .. Replica servers should be configured with their
own instances of the memberOf overlay if it is desired to maintain
these memberOf attributes on the replicas. Note that slapo-memberOf
is not compatible with syncrepl based replication, and should not be
used in a replicated environment. ..."

That confuses me a little bit.
All replication in OpenLDAP is based on sync replication (slurpd
vanished a long time ago),
so what kind of replication does the manual page mean (-> "Replica servers")?
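For reference, configuring the overlay on each server that should maintain memberOf locally usually looks like the following slapd.conf fragment; the memberof-* values shown are the documented defaults, and the module name/path depend on the build:

```
# slapd.conf fragment; goes in the database section that holds the groups.
moduleload  memberof
overlay     memberof
memberof-group-oc    groupOfNames
memberof-member-ad   member
memberof-memberof-ad memberOf
```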


Thanks Meike



Re: use proprietary password hash in "userpassword"

2017-01-24 Thread Meike Stone
>> I don't have to recompile the whole openldap, compiling the module is
>> sufficient?
>>
>> (1) we think about a subscription from symas ...
>
>
> Correct.  Any distributor (symas included) should include a development
> package that allows the ability to rebuild a module without rebuilding
> everything.
>

Ah, ok - it works  :-D
If I use a module I compiled myself and buy a gold subscription, will
Symas still give full support?

Thanks Meike



using two password hashes

2017-01-23 Thread Meike Stone
Hello,

userPassword is a multivalued attribute.
If two values with different schemes are set,
how will OpenLDAP handle a bind request?

Will only one password be checked, and if it is wrong
will access be declined, or will OpenLDAP proceed to the second
password hash?

If the two userPassword schemes differ, which one will be used first?
Is the handling configurable?
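My understanding (worth verifying against your slapd version) is that on a simple bind slapd compares the presented password against each userPassword value in turn and the bind succeeds as soon as any value verifies, whatever its scheme; there is no configurable ordering. A rough Python illustration of that "any value matches" semantics for two common schemes (illustrative logic only, not OpenLDAP code):

```python
import base64
import hashlib
import os

def make_ssha(password: bytes, salt: bytes = b"") -> str:
    """Build an RFC 2307 style {SSHA} value: base64(SHA1(pw + salt) + salt)."""
    salt = salt or os.urandom(4)
    digest = hashlib.sha1(password + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode()

def check_value(stored: str, presented: bytes) -> bool:
    """Verify one stored userPassword value against the presented password."""
    if stored.startswith("{SSHA}"):
        raw = base64.b64decode(stored[6:])
        digest, salt = raw[:20], raw[20:]          # SHA-1 digests are 20 bytes
        return hashlib.sha1(presented + salt).digest() == digest
    if stored.startswith("{MD5}"):
        return base64.b64decode(stored[5:]) == hashlib.md5(presented).digest()
    return stored.encode() == presented            # cleartext value

def bind_ok(values, presented: bytes) -> bool:
    """Bind succeeds if ANY stored value verifies, regardless of scheme."""
    return any(check_value(v, presented) for v in values)

values = [
    make_ssha(b"secret1"),
    "{MD5}" + base64.b64encode(hashlib.md5(b"secret2").digest()).decode(),
]
print(bind_ok(values, b"secret1"))  # True, first value matches
print(bind_ok(values, b"secret2"))  # True, second value matches
print(bind_ok(values, b"wrong"))    # False, no value matches
```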

thank you

Meike



Re: use proprietary password hash in "userpassword"

2017-01-23 Thread Meike Stone
2017-01-19 12:31 GMT+01:00 Howard Chu :
> Meike Stone wrote:
>>

>> Writing an OpenLDAP module like pw-sha2 is not our first choice, because
>> we would need to compile OpenLDAP ourselves after each update, and that
>> prevents us from using the distribution packages.
>
>
> Writing an OpenLDAP module like pw-sha2 is precisely the way to write a
> small external binary to validate passwords.
>
> There's no need to recompile all of OpenLDAP just to update a password
> module.

If I use the binary OpenLDAP package from the distributor (1) and I
want to use my own module,
I don't have to recompile the whole of OpenLDAP; compiling the module
is sufficient?

(1) We are thinking about a subscription from Symas ...

Thanks Meike



use proprietary password hash in "userpassword"

2017-01-19 Thread Meike Stone
Hello dear list,

we would like to migrate a user database from SQL to LDAP and need to
take over the user passwords.
The problem is that the passwords are hashed with a known but proprietary algorithm.
Is there a way to write a small external binary that slapd
uses to validate these passwords? (Maybe we import them into a
separate attribute?)
After a password change we want to write an SSHA hash, so that we can
disable this external binary...
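The flow described here (validate against the legacy hash kept in its own attribute, then write a standard SSHA value on success so the legacy path can be retired) could be sketched like this; plain MD5 stands in for the proprietary algorithm, and the attribute name legacyPassword is purely hypothetical:

```python
import base64
import hashlib
import os

def legacy_check(stored_hex: str, presented: bytes) -> bool:
    # Stand-in for the proprietary algorithm; here just an MD5 hex digest.
    return hashlib.md5(presented).hexdigest() == stored_hex

def make_ssha(presented: bytes) -> str:
    salt = os.urandom(4)
    return "{SSHA}" + base64.b64encode(
        hashlib.sha1(presented + salt).digest() + salt).decode()

def ssha_check(stored: str, presented: bytes) -> bool:
    raw = base64.b64decode(stored[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]
    return hashlib.sha1(presented + salt).digest() == digest

def bind_and_migrate(entry: dict, presented: bytes) -> bool:
    """Check the legacy hash first; on success rewrite as {SSHA} and drop it."""
    if "legacyPassword" in entry:
        if not legacy_check(entry["legacyPassword"], presented):
            return False
        entry["userPassword"] = make_ssha(presented)
        del entry["legacyPassword"]
        return True
    return ssha_check(entry["userPassword"], presented)

entry = {"legacyPassword": hashlib.md5(b"hunter2").hexdigest()}
print(bind_and_migrate(entry, b"hunter2"))  # True; entry now holds an {SSHA} value
print(bind_and_migrate(entry, b"hunter2"))  # True; now verified via {SSHA}
```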

Writing an OpenLDAP module like pw-sha2 is not our first choice, because
we would need to compile OpenLDAP ourselves after each update, and that
prevents us from using the distribution packages.


Thanks for help,

kindly regards

Meike



Re: ldap proxy to AD with local ACLs

2015-08-06 Thread Meike Stone
sorry, wrong button ...

>> I don't know of any way currently to allow only passwordModify exops, it 
>> would actually
>> allow all extended operations.

 Maybe it will not work, because "UnicodePwd" is only changeable by del+add ..


Meike



Re: ldap proxy to AD with local ACLs

2015-08-06 Thread Meike Stone
Hello,
thanks for answering ...

2015-08-06 16:24 GMT+02:00 Howard Chu :
> Meike Stone wrote:
>>
>> Hello,
>>
>> it is me again regarding the ldap-backend.
>>
>> As mentioned, I've installed OpenLDAP as a proxy in a DMZ for authentication
>> forwarding to an Active Directory. The proxy is used by a VPN gateway.
>>
>> That all works very well. But now, I want to protect the AD from
>> modifying.
>> Only password changes from the user by self should be allowed.
>>
>> But as I see or understand, ACLs from a backend are used, AFTER the
>> result from remote LDAP (AD) are coming back?! See second sentence
>> from http://www.openldap.org/faq/data/cache/532.html:
>>
>> "It allows the common configuration directives as suffix, which is
>> used to select it when a request is received by the server, *ACLs,
>> which are applied to search results*, size and time limits, and so on.
>> "
>
>
> Correct. back-ldap only performs ACL checks on search responses.
>
>> So is it possible (and how) to switch the ldap backend into "read-only
>> mode" and only pass through the password change (modify:
>> DEL/ADD)?
>
>
> You could use the denyop overlay to deny all write operations.
I found the following comment about denyop:
http://www.openldap.org/faq/data/cache/1202.html
So is it possible to do this without rebuilding OpenLDAP? (My binary
was compiled without --enable-denyop=yes.)


> I don't know of any way currently to allow only passwordModify exops, it 
> would actually
> allow all extended operations.

Maybe it will not work, because



ldap proxy to AD with local ACLs

2015-08-06 Thread Meike Stone
Hello,

it's me again regarding the ldap backend.

As mentioned, I've installed OpenLDAP as a proxy in a DMZ for authentication
forwarding to an Active Directory. The proxy is used by a VPN gateway.

That all works very well. But now I want to protect the AD from modification.
Only password changes by the users themselves should be allowed.

But as I see/understand it, the backend's ACLs are applied AFTER the
results from the remote LDAP (AD) come back?! See the second sentence
of http://www.openldap.org/faq/data/cache/532.html:

"It allows the common configuration directives as suffix, which is
used to select it when a request is received by the server, *ACLs,
which are applied to search results*, size and time limits, and so on.
"

So is it possible (and how) to switch the ldap backend into "read-only
mode" and only pass through the password change (modify:
DEL/ADD)?

Thanks Meike



Re: ldap proxy to AD - UnicodePwd: attribute type undefined

2015-07-31 Thread Meike Stone
>> Hello
>>
>>
>> I've installed a openldap as proxy in a DMZ for authentication
>> forwarding to an Active Directoy.
>> The Proxy is used by a VPN gateway.
>>
>> That all works very well, but password change from client fails with
>> following error:
>>
>> slapd[30661]: conn=1001 op=5 do_modify
>> slapd[30661]: conn=1001 op=5 do_modify: dn
>> (cn=XPTEST5,ou=Users,dc=myorg,dc=net) slapd[30661]: >>>
>> dnPrettyNormal:  slapd[30661]: <<<
>> dnPrettyNormal: ,
>>  slapd[30661]: conn=1001 op=5
>> modifications: slapd[30661]:   delete: UnicodePwd
>> slapd[30661]:   one value, length 26
>> slapd[30661]:   add: UnicodePwd
>> slapd[30661]:   one value, length 26
>> slapd[30661]: conn=1001 op=5 MOD
>> dn="cn=TEST5,ou=Users,dc=myorg,dc=net" slapd[30661]: conn=1001 op=5
>> MOD attr=UnicodePwd UnicodePwd slapd[30661]: send_ldap_result:
>> conn=1001 op=5 p=3 slapd[30661]: send_ldap_result: err=17 matched=""
>> text="UnicodePwd: attribute type undefined"
>> slapd[30661]: send_ldap_response: msgid=6 tag=103 err=17
>> slapd[30661]: conn=1001 op=5 RESULT tag=103 err=17 text=UnicodePwd:
>> attribute type undefined
>> slapd[30661]: daemon: activity on 1 descriptor
>> slapd[30661]: daemon: activity on:
>> slapd[30661]:
>> slapd[30661]: daemon: epoll: listen=7 active_threads=0 tvp=zero
>> slapd[30661]: daemon: activity on 1 descriptor
>> slapd[30661]: daemon: activity on:
>>
>> As I understand it, UnicodePwd is a proprietary "standard" MS attribute
>> in AD for storing the password, whereas the RFC attribute is userPassword.
>>
>>
>> Is it possible to get the proxy to process this MOD request, perhaps by
>> having the OpenLDAP proxy pass through the MOD operation with the
>> UnicodePwd attribute from the VPN gateway?
> [...]
>
> create a private schema with all relevant attribute types and object
> classes
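For anyone finding this later, the private schema declaration in question might look roughly like this; the OID shown is Microsoft's published attributeID for unicodePwd and the syntax is Octet String, but verify it against your AD schema before relying on it:

```
# Hypothetical private schema declaration for the AD password attribute.
attributetype ( 1.2.840.113556.1.4.90
    NAME 'unicodePwd'
    DESC 'Active Directory password attribute (proxy pass-through only)'
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.40
    SINGLE-VALUE )
```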

Thanks, that worked!!!

Meike



ldap proxy to AD - UnicodePwd: attribute type undefined

2015-07-30 Thread Meike Stone
Hello,


I've installed OpenLDAP as a proxy in a DMZ for authentication
forwarding to an Active Directory.
The proxy is used by a VPN gateway.

That all works very well, but a password change from a client fails
with the following error:

slapd[30661]: conn=1001 op=5 do_modify
slapd[30661]: conn=1001 op=5 do_modify: dn (cn=XPTEST5,ou=Users,dc=myorg,dc=net)
slapd[30661]: >>> dnPrettyNormal: 
slapd[30661]: <<< dnPrettyNormal: ,

slapd[30661]: conn=1001 op=5 modifications:
slapd[30661]:   delete: UnicodePwd
slapd[30661]:   one value, length 26
slapd[30661]:   add: UnicodePwd
slapd[30661]:   one value, length 26
slapd[30661]: conn=1001 op=5 MOD dn="cn=TEST5,ou=Users,dc=myorg,dc=net"
slapd[30661]: conn=1001 op=5 MOD attr=UnicodePwd UnicodePwd
slapd[30661]: send_ldap_result: conn=1001 op=5 p=3
slapd[30661]: send_ldap_result: err=17 matched="" text="UnicodePwd:
attribute type undefined"
slapd[30661]: send_ldap_response: msgid=6 tag=103 err=17
slapd[30661]: conn=1001 op=5 RESULT tag=103 err=17 text=UnicodePwd:
attribute type undefined
slapd[30661]: daemon: activity on 1 descriptor
slapd[30661]: daemon: activity on:
slapd[30661]:
slapd[30661]: daemon: epoll: listen=7 active_threads=0 tvp=zero
slapd[30661]: daemon: activity on 1 descriptor
slapd[30661]: daemon: activity on:

As I understand it, UnicodePwd is a proprietary "standard" MS attribute
in AD for storing the password, whereas the RFC attribute is userPassword.


Is it possible to get the proxy to process this MOD request, perhaps by
having the OpenLDAP proxy pass through the MOD operation with the
UnicodePwd attribute from the VPN gateway?

I use openldap 2.4.40, here is my configuration:

==
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/rfc2307bis.schema

pidfile         /var/run/slapd/slapd.pid
argsfile        /var/run/slapd/slapd.args
modulepath      /usr/lib/openldap/modules
moduleload      back_ldap

disallow        bind_anon
require         authc

TLSCACertificateFile    /etc/openldap/certs/myorg.net.root.pem
TLSCertificateFile      /etc/openldap/certs/proxy1.myorg.net.pem
TLSCertificateKeyFile   /etc/openldap/certs/proxy1.myorg.net.pem.key
TLSVerifyClient         never
TLSCipherSuite          ALL:!DH:!EDH

database        ldap
security        tls=256
rebind-as-user  yes
suffix          "dc=myorg,dc=net"
uri             "ldap://dc1.myorg.net ldap://dc2.myorg.net"
tls             start tls_cacert=/etc/openldap/certs/adroot.pem
chase-referrals no
protocol-version 3

loglevel        -1
==

Thanks for help!!

Meike



Re: separate loglevels for different databases?

2015-04-27 Thread Meike Stone
Hello,

2015-04-17 17:18 GMT+02:00 Meike Stone :
> Dear list,
>
>> I've configured two different databases (one ldap, one bdb) in openLDAP.
>> Is it possible, to configure separate loglevels for each database?
>
> maybe at least different logfiles?


Is there no one who can help?


 Thanks Meike



Re: separate loglevels for different databases?

2015-04-17 Thread Meike Stone
Dear list,

> I've configured two different databases (one ldap, one bdb) in openLDAP.
> Is it possible, to configure separate loglevels for each database?

maybe at least different logfiles?


Thanks Meike



separate loglevels for different databases?

2015-04-15 Thread Meike Stone
Hello,

I've configured two different databases (one ldap, one bdb) in openLDAP.
Is it possible, to configure separate loglevels for each database?

Thanks Meike



Re: Have you seen this FUD - IT pros suffer OpenLDAP configuration headaches ?

2014-02-03 Thread Meike Stone
2014-02-03 Pieter Baele :
> It's sadly a bit true.
>
> I like OpenLDAP a lot but if you don't need the *fastest* LDAP server,
> something as OpenDJ from Forgerock
> is a lot easier to configure.
>

I tried to use aliases (as defined in RFC 4512, section 2.6) with OpenDJ, but
they are not implemented.
So if anyone would like to switch, you should take care. I can confirm
that the configuration at first seems a little easier with OpenDJ and
ACLs work well, but that's not the only thing on which to base a choice ;-)

Kindly regards

Meike



Re: run test suite separately from the source code compilation?

2013-06-06 Thread Meike Stone
>
> If your purpose is to test the distribution's builds,
Yes, that is my intention.

> you can surely
> download the corresponding OpenLDAP source code, build it, replace slapd and
> slap* tools in BUILDDIR/servers/slapd with those provided by the
> distribution, and run the tests using "make test" or "cd tests; ./run
> testXXX" where XXX is the number of the test you need.

Ahh,  that sounds great, I'll check that!

Thanks to all,

Meike



Re: run test suite separately from the source code compilation?

2013-06-06 Thread Meike Stone
Hello,

thanks for the answer; that's a great pity!

Meike

2013/6/6 Hallvard Breien Furuseth :
> Meike Stone writes:
>> is it possible and how, to run the complete test suite included in the
>> source tarball later, after installing the openldap rpm/deb package
>> independently and separated from the compilation?
>
> No.  tests/progs/ in the test suite uses libraries and generated include
> files which are not installed, they're only used for building OpenLDAP.
>
> --
> Hallvard



Re: run test suite separately from the source code compilation?

2013-06-06 Thread Meike Stone
2013/6/6 Howard Chu :
> Meike Stone wrote:
>>
>> Hello,
>>
>> is it possible and how, to run the complete test suite included in the
>> source tarball later, after installing the openldap rpm/deb package
>> independently and separated from the compilation?
>
>
> We can't answer this since there is no "the openldap rpm/deb package". The
> OpenLDAP Project distributes source code, not binary packages. What you can
> or can't do with a particular distro's binary package is a question you
> should ask of your distro/package provider.

I mean that I have installed the normal (latest) OpenLDAP package. After
that, I download the source tarball from OpenLDAP and only want to use
the tests to check the installation. How can I invoke the tests
separately from the unpacked tarball?

Thanks Meike



run test suite separately from the source code compilation?

2013-06-06 Thread Meike Stone
Hello,

is it possible (and how) to run the complete test suite included in the
source tarball later, after installing the OpenLDAP rpm/deb package,
independently and separately from the compilation?

Thanks Meike



Re: use ldif backup with operational attributes in conjunction with slapadd?

2013-05-31 Thread Meike Stone
>
> If you ever get "Permission denied" there's something wrong with
> ownership/permissions of your slapd setup or slapcat process. You should
> immediately fix it.

Yes, slapd runs under user "ldap" and I ran slapcat as root,
but slapcat shouldn't change permissions or write anything, should it?
If so, there should be an explicit note in the manual page appended to
the sentence "..It is always safe to run slapcat with the slapd-bdb(5).."?

Kindly regards

Meike



Re: use ldif backup with operational attributes in conjunction with slapadd?

2013-05-31 Thread Meike Stone
2013/5/30 Quanah Gibson-Mount :
> --On Thursday, May 30, 2013 11:39 AM +0200 Meike Stone
>  wrote:
>
>> Hello,
>>
>>
>> is it possible to use a ldif-backup with operation attributes
>> (ldapsearch ... '+' '*') with slapadd, to save the operation
>> attributes, if no slapcat backup is available? Are there any concerns?
>
>
> If you can't get a slapcat backup, how would you get a ldapsearch backup?
>

A second reason why I try to avoid using slapcat on a production system is:

===
slapd[4812]: bdb(ou=root): PANIC: Permission denied
slapd[4812]: bdb(ou=root): DB_ENV->log_newfh: 15752: DB_RUNRECOVERY:
Fatal error, run database recovery
slapd[4812]: bdb(ou=root): txn_checkpoint: failed to flush the buffer
cache: DB_RUNRECOVERY: Fatal error, run database recover
slapd[4812]: bdb(ou=root): PANIC: fatal region error detected; run recovery
slapd[4812]: null_callback : error code 0x50
slapd[4812]: syncrepl_updateCookie: rid=001 be_modify failed (80)
slapd[4812]: do_syncrepl: rid=001 rc 80 retrying (4 retries left)
===

I got this message today on a production system while creating an LDIF
via slapcat to set up a syncrepl slave.
This shouldn't happen; the slapcat(8) manpage says:

"For some backend types, your slapd(8) should not be running (at
least, not in read-write mode) when you do this to ensure consistency
of the database. *It is always safe to run slapcat with the slapd-bdb(5)*,
slapd-hdb(5), and slapd-null(5) backends."

So using ldapsearch is more reliable for me ...

Kindly regards Meike



Re: use ldif backup with operational attributes in conjunction with slapadd?

2013-05-31 Thread Meike Stone
2013/5/30 Quanah Gibson-Mount :
> --On Thursday, May 30, 2013 8:04 PM +0200 Meike Stone
>  wrote:
>
>> I want to preserve the operational attributes from the ldapsearch LDIF
>> (created with '+' '*').
>> But I saw that an ldapsearch LDIF with operational attributes has more
>> operational attributes than the slapcat LDIF.
>
>
> An ldapsearch generated and slapcat generated LDIF of the same db will be
> identical for *,+ for ldapsearch.  So your statement doesn't really make
> much sense.

I compared them and found these three additional attributes in the
ldapsearch output, but not in slapcat's:

- entryDN
- subschemaSubentry
- hasSubordinates

It seems that slapcat ignores these values, or at least it does not complain...
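If the goal is to feed such an ldapsearch LDIF to slapadd, one approach is to strip exactly those server-generated attributes first, since they are computed at search time and not stored. A small hypothetical Python filter (attribute list taken from the comparison above; sample data is made up):

```python
# Attributes generated by slapd at search time; they are not stored in the
# database, so slapcat never emits them and slapadd does not need them.
GENERATED = {"entrydn", "subschemasubentry", "hassubordinates"}

def clean_ldif(text: str) -> str:
    out, skipping = [], False
    for line in text.splitlines():
        if line.startswith(" "):           # continuation of the previous line
            if skipping:
                continue
        else:
            attr = line.split(":", 1)[0].lower()
            skipping = attr in GENERATED
            if skipping:
                continue
        out.append(line)
    return "\n".join(out) + "\n"

sample = """dn: cn=test,dc=example,dc=com
cn: test
entryDN: cn=test,dc=example,dc=com
hasSubordinates: FALSE
subschemaSubentry: cn=Subschema
entryUUID: 66e6c476-7b16-4bd6-9a95-2e222b5e4d4e
"""
print(clean_ldif(sample))
```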

Kindly regards

Meike



Re: use ldif backup with operational attributes in conjunction with slapadd?

2013-05-30 Thread Meike Stone
2013/5/30 Quanah Gibson-Mount :
> --On Thursday, May 30, 2013 7:51 PM +0200 Meike Stone
>  wrote:
>
>> 2013/5/30 Quanah Gibson-Mount :
>>>
>>> --On Thursday, May 30, 2013 11:39 AM +0200 Meike Stone
>>>  wrote:
>>>
>>>> Hello,
>>>>
>>>>
>>>> is it possible to use a ldif-backup with operation attributes
>>>> (ldapsearch ... '+' '*') with slapadd, to save the operation
>>>> attributes, if no slapcat backup is available? Are there any concerns?
>>>
>>>
>>>
>>> If you can't get a slapcat backup, how would you get a ldapsearch backup?
>>>
>>
>> That's a a ldif created from a colleague, before the database on the
>> test system was deleted..
>> I want to simulate some documented test from this colleague, but ony
>> the ldif exist and no slapcat.
>
>
> So slapadd it.  slapadd will automatically generate the operational attrs.
>
>

I want to preserve the operational attributes from the ldapsearch LDIF
(created with '+' '*').
But I saw that an ldapsearch LDIF with operational attributes has more
operational attributes than the slapcat LDIF.

Is it possible, with this LDIF, to create the database as my colleague had it?

Thanks Meike



Re: use ldif backup with operational attributes in conjunction with slapadd?

2013-05-30 Thread Meike Stone
2013/5/30 Quanah Gibson-Mount :
> --On Thursday, May 30, 2013 11:39 AM +0200 Meike Stone
>  wrote:
>
>> Hello,
>>
>>
>> is it possible to use a ldif-backup with operation attributes
>> (ldapsearch ... '+' '*') with slapadd, to save the operation
>> attributes, if no slapcat backup is available? Are there any concerns?
>
>
> If you can't get a slapcat backup, how would you get a ldapsearch backup?
>

That's an LDIF created by a colleague before the database on the
test system was deleted.
I want to reproduce some documented tests from this colleague, but only
the LDIF exists, not a slapcat dump.

Kind regards Meike



use ldif backup with operational attributes in conjunction with slapadd?

2013-05-30 Thread Meike Stone
Hello,


is it possible to use an LDIF backup with operational attributes
(ldapsearch ... '+' '*') with slapadd, to preserve the operational
attributes, if no slapcat backup is available? Are there any concerns?

Thanks Meike



Re: ldap query performance issue

2013-05-28 Thread Meike Stone
2013/5/28 Meike Stone :

>
> I ask this because it seems to me that the base DN does not matter in
> the search ...

In my specific (real-world) case, I have 84,000 objects under the base DN,
but only one of them is a person with objectclass=inetOrgPerson.
I have about 420,000 objectclass=inetOrgPerson entries; the directory
contains 2,000,000 objects in total.

The search with the specific base DN under which the single inetOrgPerson
is located takes about 5 minutes ...

Thanks Meike



Re: ldap query performance issue

2013-05-28 Thread Meike Stone
>
> Indexing is all about making rare data easy to find. If you have an
> attribute that occurs on 99% of your entries, indexing it won't save any
> search time, and it will needlessly slow down modify time.
>
> Asking about "1,000,000" entries is meaningless on its own. It's not the raw
> number of entries that matters, it's the percentage of the total directory.
> If you have 1,000,000,000 entries in your directory, then 1,000,000 is
> actually quite a small percentage of the data and it might be smart to index
> it. If you have only 2,000,000 entries total, it may not make enough
> difference to be worthwhile.
>
> It's not the raw numbers that matter, it's the frequency of occurrences.
>


How does the search work in conjunction with the base DN?

Does the search first go to the index, look up the attribute/value from
the search filter, get all related IDs, then take the IDs and fetch the
DNs of the entries via id2entry.bdb,
and then compare those with the base DN?

For example:
I have an index on objectClass, and all the people in the directory have
objectClass=inetOrgPerson.

About 140,000 people work in my company, divided into seven different
departments (about 20,000 people under each department).
So my index for objectclass=inetOrgPerson has about 140,000 entries;
that's more than 2^17 (BDB_IDL_LOGN 17).
If I search over the whole DIT (from the root), I hit the problem
because 140,000 > 2^17.

But if I limit the search with a more specific search base covering only
one department, does that matter for speed in this case, or better, does
it get around the BDB_IDL_LOGN 17 problem?
Can a more specific base DN speed up a search whose filter has large results?

I ask this because it seems to me that the base DN does not matter in
the search ...
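A toy model may show why the base DN can appear not to matter: the equality index covers the whole database, not a subtree, so the filter lookup yields every matching ID in the DIT, and scope is checked as candidates are examined. (As far as I understand, real back-bdb also intersects filter candidates with subtree candidate lists, but an overflowed IDL slot defeats that intersection.) All data below is made up:

```python
# Toy model of index-based candidate selection; all data is made up.
entries = {
    1: {"dn": "uid=a,ou=hr,dc=example", "objectClass": "inetOrgPerson"},
    2: {"dn": "uid=b,ou=it,dc=example", "objectClass": "inetOrgPerson"},
    3: {"dn": "ou=hr,dc=example", "objectClass": "organizationalUnit"},
}

# Equality index over the whole database, keyed by (attribute, value).
index: dict = {}
for eid, e in entries.items():
    index.setdefault(("objectclass", e["objectClass"].lower()), set()).add(eid)

def search(base: str, attr: str, value: str):
    """Return (candidates examined, DNs in scope) for an equality filter."""
    candidates = index.get((attr.lower(), value.lower()), set())  # base not used
    hits = [entries[i]["dn"] for i in sorted(candidates)
            if entries[i]["dn"].endswith(base)]                   # scope check
    return len(candidates), hits

examined, hits = search("ou=hr,dc=example", "objectClass", "inetOrgPerson")
print(examined, hits)  # 2 candidates examined although only 1 is in scope
```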


Thanks and kindly regards Meike



Re: ldap query performance issue

2013-05-27 Thread Meike Stone
Hello,

because of this, does it make sense, in a directory with > 1,000,000
people, to index an attribute like sex?

thanks Meike

2013/5/23 Quanah Gibson-Mount :
> --On Thursday, May 23, 2013 4:40 PM + Chris Card 
> wrote:
>
>> Hi all,
>>
>> I have an openldap directory with about 7 million DNs, running openldap
>> 2.4.31 with a BDB backend (4.6.21), running on CentOS 6.3.
>>
>> The structure of the directory is like this, with suffix dc=x,dc=y
>>
>> dc=x,dc=y
>>account=a,dc=x,dc=y
>>   mail=m,account=a,dc=x,dc=y   // Users
>>   
>>   licenceId=l,account=a,dc=x,dc=y  // Licences,
>> objectclass=licence   
>>   group=g,account=a,dc=x,dc=y  // Groups
>>   
>>   // etc.
>>
>>account=b,dc=x,dc=y
>>   
>>
>> Most of the DNs in the directory are users or groups, and the number of
>> licences is small (<10) for each account.
>>
>> If I do a query with basedn account=a,dc=x,dc=y and filter
>> (objectclass=licence) I see wildly different performance, depending on
>> how many users are under account a. For an account with ~3 users the
>> query takes 2 seconds at most, but for an account with ~6 users  the
>> query takes 1 minute.
>>
>> It only appears to be when I filter on objectclass=licence that I see
>> that behaviour. If I filter on a different objectclass which matches a
>> similar number of objects to the objectclass=licence filter, the
>> performance doesn't seem to depend on the number of users.
>>
>> There is an index on objectclass (of course), but the behaviour I'm
>> seeing seems to indicate that for this query, at some point slapd stops
>> using the index and just scans all the objects under the account.
>>
>> Any ideas?
>
>
> Increase the IDL range.  This is how I do it:
>
> --- openldap-2.4.35/servers/slapd/back-bdb/idl.h.orig   2011-02-17
> 16:32:02.598593211 -0800
> +++ openldap-2.4.35/servers/slapd/back-bdb/idl.h2011-02-17
> 16:32:08.937757993 -0800
> @@ -20,7 +20,7 @@
> /* IDL sizes - likely should be even bigger
>  *   limiting factors: sizeof(ID), thread stack size
>  */
> -#define BDB_IDL_LOGN    16  /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
> +#define BDB_IDL_LOGN    17  /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
> #define BDB_IDL_DB_SIZE    (1<<BDB_IDL_LOGN)
> #define BDB_IDL_UM_SIZE    (1<<(BDB_IDL_LOGN+1))
> #define BDB_IDL_UM_SIZEOF  (BDB_IDL_UM_SIZE * sizeof(ID))
>
>
> --Quanah
>
> --
>
> Quanah Gibson-Mount
> Sr. Member of Technical Staff
> Zimbra, Inc
> A Division of VMware, Inc.
> 
> Zimbra ::  the leader in open source messaging and collaboration
>



Re: ldap query performance issue

2013-05-24 Thread Meike Stone
Sorry for top-posting; the Google web client always hides the message
while answering *grrr*

Meike



Re: ldap query performance issue

2013-05-24 Thread Meike Stone
Hello,

I had the same problem years ago and the patch worked for me. As I
understood it, this particular problem exists in mdb too
(http://www.openldap.org/lists/openldap-technical/201301/msg00185.html).
That's one reason why I haven't switched until now.

Thanks Meike

2013/5/24 Howard Chu :
> Chris Card wrote:
>
> Any ideas?


 Increase the IDL range. This is how I do it:

 --- openldap-2.4.35/servers/slapd/back-bdb/idl.h.orig 2011-02-17
 16:32:02.598593211 -0800
 +++ openldap-2.4.35/servers/slapd/back-bdb/idl.h 2011-02-17
 16:32:08.937757993 -0800
 @@ -20,7 +20,7 @@
 /* IDL sizes - likely should be even bigger
 * limiting factors: sizeof(ID), thread stack size
 */
 -#define BDB_IDL_LOGN 16 /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
 +#define BDB_IDL_LOGN 17 /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
 #define BDB_IDL_DB_SIZE (1<<BDB_IDL_LOGN)
 #define BDB_IDL_UM_SIZE (1<<(BDB_IDL_LOGN+1))
 #define BDB_IDL_UM_SIZEOF (BDB_IDL_UM_SIZE * sizeof(ID))
>>>
>>> Thanks, that looks like it might be the issue. Unfortunately I only see
>>> the issue in production, so patching it might be a pain.
>>
>> I've tried this change, but it made no difference to the performance of
>> the query.
>
>
> You have to re-create all of the relevant indices as well. Also, it's always
> possible that some slots in your index are still too big, even for this
> increased size.
>
> You should also test this query with your data loaded into back-mdb.
> --
>   -- Howard Chu
>   CTO, Symas Corp.   http://www.symas.com
>   Director, Highland Sun http://highlandsun.com/hyc/
>   Chief Architect, OpenLDAP  http://www.openldap.org/project/
>



Re: slow replication

2013-04-26 Thread Meike Stone
2013/4/26 Marc Patermann :
> Meike Stone schrieb (26.04.2013 14:34 Uhr):
>
>
>>
>> Is it possible to simulate the present phase with ldapsearch, to look
>> if the provider needs so long and if, what part (entries updated or
>> unchanged entry ) needs so long?
>
> look at
> # man ldapsearch
> for "-E" and sync=rp[/<cookie>][/<slimit>] "(LDAP Sync refreshAndPersist)"
> cookie is something like "rid=${RID},csn=${CSN}"
>
> But I'm not sure, it does what you want.
>

For that point yes, thanks.

I tried it:
- got the contextCSN on the server via
  ldapsearch -x -h localhost -w password -D cn=admin,ou=root
-b ou=root -s base contextCSN -LLL
- waited for about 20 min
- started a refreshOnly search
 ldapsearch -x -h localhost -w password -D "cn=admin,ou=root"
-b "ou=root" -s sub -E
sync=ro/rid=103,csn=20130426125054.388178Z#00#001#00/0
and got the whole directory back.

I thought that only modified entries are transmitted completely, and
for unmodified entries only an empty entry (plus entryUUID) is sent?

Is this check valid? If I run slapd, get the contextCSN, modify
nothing, and start the ldapsearch -E ..., should I only get back
empty entries plus entryUUID? Am I wrong?
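
Assembling that control argument by hand is error-prone; here is a tiny
sketch (the helper name is my own, not an OpenLDAP tool) that builds it
from a rid and a contextCSN value:

```shell
#!/bin/sh
# Sketch: build the ldapsearch -E "sync=ro/..." argument from a replica id
# and a contextCSN value read from the base entry.
make_sync_arg() {
    rid="$1"    # consumer rid from the syncrepl config
    csn="$2"    # contextCSN as returned by the base-scope search
    printf 'sync=ro/rid=%s,csn=%s/0\n' "$rid" "$csn"
}

make_sync_arg 103 '20130426125054.388178Z#00#001#00'
```

The output can be passed directly after `-E` in the search shown above.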


Thanks Meike



Re: slow replication

2013-04-26 Thread Meike Stone
>
> syncrepl really isn't intended for initial "full" loads, although it will
> work eventually (as you've seen). The preferred method for standing up an
> offline server is slapadd -q. syncrepl can then handle deltas since the LDIF
> was generated; this should complete fairly rapidly.
>

Ok, that sounds logical, but if I use slapcat on a running slapd with a
big DB, is it guaranteed
that the resulting LDIF is consistent and will work with slapadd?

Second: I used a 3h-old LDIF from slapcat on the consumer, and the
replication still needed about 6h for the resync (until both had the
same contextCSN).
On the provider (master) I set loglevel 256 and used the script
ldap-stats.pl (http://prefetch.net/code/ldap-stats.pl.html) to
determine how many changes were made in those hours (about 5000/h). The
server hardware itself is idling the whole time (CPU/disk/network).
On the consumer the CPU is running at nearly 100%.
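
For anyone repeating this measurement without ldap-stats.pl, the
per-hour counting can be sketched with awk alone; the log lines below
are invented stand-ins for a real loglevel 256 syslog file:

```shell
#!/bin/sh
# Sketch: rough changes-per-hour count from a slapd stats log (loglevel
# 256), in the spirit of ldap-stats.pl. Point LOG at your real syslog.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Apr 26 13:01:02 ldap1 slapd[123]: conn=7 op=1 MOD dn="cn=a,ou=root"
Apr 26 13:05:09 ldap1 slapd[123]: conn=7 op=2 ADD dn="cn=b,ou=root"
Apr 26 14:02:44 ldap1 slapd[123]: conn=9 op=1 DEL dn="cn=a,ou=root"
EOF

# Count MOD/ADD/DEL/MODRDN request lines, keyed by "Mon DD HH".
# (A MOD also logs a separate "MOD attr=" line; requiring " dn=" skips it.)
counts=$(awk '/ (MOD|ADD|DEL|MODRDN) dn=/ { split($3, t, ":"); print $1, $2, t[1] }' "$LOG" \
    | sort | uniq -c)
printf '%s\n' "$counts"
rm -f "$LOG"
```

Comparing these per-hour counts against the resync duration shows how
small the actual delta is relative to the 6h the consumer spends.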

Is it possible to determine which phase (present phase, delete
phase, ...) the consumer is in (e.g. via the monitoring DB)?

Is it possible to simulate the present phase with ldapsearch, to see
whether it is the provider that takes so long, and if so, which part
(updated entries or unchanged entries) takes the time?

>
>> I'm of the opinion that it was significantly faster with a smaller
>> database.
>
>
> Rule of thumb, the less data to process, the less time it takes...
>

But presumably non-linear ...


Thanks Meike



slow replication

2013-04-24 Thread Meike Stone
Hello,

I have a problem with replication speed.

I've set up OpenLDAP 2.4.33 with a master and one consumer. At the
moment a full replication takes about 32 hours.
No LDAP operations are made on master or consumer during this time.
(I know it depends on the hardware too, but the two servers are fast.)

How long should it take to replicate a DB of about 6 GByte
(id2entry.bdb + dn2id.bdb) with 1.6M DNs and about 66M attributes?
Replication is configured with refreshAndPersist, no delta-sync. Both
servers are in the same IP segment, connected via a gigabit ethernet
switch.

I played in test environment with different parameters:
- shm_key
- dbnosync
- switched off all indexes on consumer except entryUUID and entryCSN
- different bdb cachesize
- noatime, relatime
- ext3/xfs

I looked at the disks via iostat (nothing to see), no I/O waits in top,
looked at the network, but at most 5 Mbit/s is used.
I attached strace to slapd and saw that slapd is reading from the
network and writing to id2entry.bdb.

Before each test I deleted the complete LDAP DB (except DB_CONFIG) and
removed the shared memory segment with ipcrm -m.

Are there limitations that can make replication slow, similar to
BDB_IDL_LOGN?
How can I accelerate this replication?
I'm of the opinion that it was significantly faster with a smaller database.


Thanks and kindly regards Meike



Configuration:


This is only a test configuration; some values differ, and
some are commented out because I was experimenting with them.

# Master (Provider)
==
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/yast.schema
include /etc/openldap/schema/rfc2307bis.schema

pidfile /var/run/slapd/slapd.pid
argsfile/var/run/slapd/slapd.args

modulepath  /usr/lib/ldap
moduleload  back_bdb
moduleload  syncprov

sizelimit   -1
timelimit   300


tool-threads8
threads 8

serverID001


databasebdb
suffix  "ou=root"
rootdn  "cn=admin,ou=root"

#loglevel   stats sync
loglevel0
rootpw  
directory   /DATA/ldap

#cachesize  50
#dncachesize50
#idlcachesize   15
cachefree   500

dirtyread
dbnosync
shm_key 7

checkpoint  4096 15

index   objectClass,entryUUID,entryCSN  eq
index   cn  eq,sub
index   ownattributes 


overlay syncprov
syncprov-checkpoint 100 5



# Consumer
==
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/yast.schema
include /etc/openldap/schema/rfc2307bis.schema


pidfile /var/run/slapd/slapd.pid
argsfile/var/run/slapd/slapd.args



modulepath  /usr/lib/ldap
moduleload  back_bdb
moduleload  syncprov

sizelimit   -1
timelimit   300


serverID002

#loglevel   stats sync
loglevel0



databasebdb
suffix  "ou=root"
rootdn  "cn=admin,ou=root"

checkpoint  4096 15
rootpw  
directory   /DATA/ldap

dbnosync

shm_key 7

checkpoint  4096 15

#cachesize   10
#dncachesize 10
#idlcachesize15
#cachefree   500
#dirtyread

syncrepl rid=020
provider=ldap://192.168.1.10
type=refreshAndPersist
retry="5 5 300 +"
searchbase="ou=root"
attrs="*,+"
bindmethod=simple
binddn="cn=admin,ou=root"
credentials=


index entryUUID,entryCSN   eq
#index cn  eq,sub

mirrormode  FALSE



Re: Search speed regarding BDB_IDL_LOGN and order in search filter

2013-01-31 Thread Meike Stone
Hello Howard,

thanks for fast answer!

>> - An index slot is loosing precision if the search result for an
>> (indexed) attribute is larger than 2^16. Then the search time is going
>> to increase a lot.
>> - I can change this via BDB_IDL_LOGN.
>> - But if I have a directory, that holds 200.000 employees with
>> '(ObjectClass=Employees)', the result is larger than 2^16 and it is
>> slow.
>> - lets say, the employees are distributed over 4 continents and the
>> DIT is structured geographical eg.:
>>
>>   o=myOrg, c=us (100,000 employees)
>>   o=myOrg, c=gb ( 30,000  employees)
>>   o=myOrg, c=de ( 25,000 employees)
>>   o=myOrg, c=br  ( 45,000 employees)
>>
>> Can i prevent it this problem with index slot size, if I change the
>> search base to "o=myOrg, c=gb", because there are only 30,000
>> employees.
>
>
> Try it and see.
>
> If you're already playing with low-level definitions in the source code, you
> have no need for us to answer these questions. Or, if you need us to answer
> these questions, you have no business playing with low-level definitions in
> the source code.
>

I'm the Linux admin of the systems and I'm responsible for all the
services like LDAP, MySQL, ...
In our company a few programmers take care of the data in our
LDAP server. Suddenly slapd became slow, I found the solution here:
http://www.openldap.org/lists/openldap-technical/201101/msg00102.html
and increased BDB_IDL_LOGN.

But now, months later, answers are slow again.
So I wrote a small Perl script that searches for the "long-runners" in
the log file (with loglevel 256).
I see that a lot of searches take a long time (up to 500s) and I want
to understand why.

I'll try to understand how all the internals work, but that is hard
for a non-C-programmer like me. A book about all of this would be
very much appreciated.
I understand how the LDAP operations work, but not how they are
implemented and what I must do so that they run fast.
Our programmer only says: "That's my query (LDAP search), it is
well formed, why must I wait so long for an answer, didn't we build
an index for all needed attributes?"

>
>> This takes me to the second question:
>>
>> "How is a search filter evaluated?"
>>
>> Lets say, I combine  three filter via "and" like
>> '(&(objectlass=Employees)(age=30)(sex=m))' and the all attributes are
>> indexed. Each filter results:
>> (objectlass=Employees) => 200,000 entries
>> (age=30)  => 10,000 entries
>> (sex=m)=> 3,000 entries
>>
>> Does the order matter regarding speed, is it better to form the filter
>> like this?
>>   '(&(sex=m)(age=30)(objectlass=Employees))'
>
>
> Try it and see.
>
> If you have advance knowledge of the characteristics of your data, perhaps
> you can optimize the filter order. In most cases, your applications will not
> have such knowledge, or it will be irrelevant. For example, if you have a
> filter term that matches zero entries, it is beneficial to evaluate that
> first in an AND clause. But it would make no difference in an OR clause.

Ok, thanks a lot

kindly regards, Meike



Search speed regarding BDB_IDL_LOGN and order in search filter

2013-01-31 Thread Meike Stone
Hello,

I'm sorry, but I want to ask again for clarifying.

First question:

- An index slot loses precision if the search result for an (indexed)
attribute is larger than 2^16 entries. Then the search time increases
a lot.
- I can change this via BDB_IDL_LOGN.
- But if I have a directory that holds 200,000 employees matching
'(objectClass=Employees)', the result is larger than 2^16 and it is
slow.
- Let's say the employees are distributed over 4 continents and the
DIT is structured geographically, e.g.:

 o=myOrg, c=us (100,000 employees)
 o=myOrg, c=gb ( 30,000 employees)
 o=myOrg, c=de ( 25,000 employees)
 o=myOrg, c=br ( 45,000 employees)

Can I avoid this index-slot-size problem by changing the search base
to "o=myOrg, c=gb", because there are only 30,000 employees there?
This takes me to the second question:

"How is a search filter evaluated?"

Let's say I combine three filters via "and", like
'(&(objectClass=Employees)(age=30)(sex=m))', and all attributes are
indexed. Each filter term matches:
(objectClass=Employees) => 200,000 entries
(age=30)  => 10,000 entries
(sex=m)   => 3,000 entries

Does the order matter for speed? Is it better to form the filter
like this?
 '(&(sex=m)(age=30)(objectClass=Employees))'


Thanks Meike



Re: missing entry in slapcat backup

2013-01-30 Thread Meike Stone
Hello Andrew,
>
> Dryrun won't be able to detect missing structural entries: that
> requires a database. Even an internal list of DNs is not
> enough, as the actual entries have to be available in order to
> check things like schema and content rules.
>
> To be a valid test you really have to import the data into a
> server with configuration identical to the production server.
> If that would take too long then a reasonable compromise might
> be to import to a server set up to write data as fast as
> possible. You could reasonably turn off all database-safety
> functions to do that, or put the database on a ramdisk.
> Even if you use hdb on the production servers you might
> consider trying mdb for the backup tests.
>

Thanks for enlighten me. I've a separate backup server (read only
slave), where I can do this.
So I'll try to get money from the FC for more RAM to make the test in
a ramdisk ^^

kindly regards

Meike



Re: syncrepl issue

2013-01-28 Thread Meike Stone
>
> a) Use a current release.  That would be 2.4.33.
> b) Delta-syncrepl supports MMR in current releases
> c) The reason I suggest delta-syncrepl is because syncrepl is known to be
> problematic, particularly with MMR.  If you want reliable replication, use
> delta-syncrepl.

Is it recommended to prefer delta-syncrepl over plain syncrepl in MMR
setups in general?
Are there any disadvantages, like higher RAM usage, or other things to
take care of?

Thanks, Meike



Re: missing entry in slapcat backup

2013-01-28 Thread Meike Stone
>> -
>> ~ # slapcat -f /etc/openldap/slapd.conf >/backup.ldif; echo $?
>> 0
>>
>>
>> It seems to me, that in such case, the slapcat does not trows an error?!
>
>
> slapcat doesn't check for missing entries. Its only job is to dump out the
> contents of what is in the DB. It doesn't try to tell you what isn't in the
> DB. Your DB must have been in this state for a long time, probably ever
> since its initial import.

Hello Howard,

thanks for the answer, I presumed something like this!

I think it would be a great feature to verify the slapcat file
immediately after dumping it.

As reported in
http://www.openldap.org/lists/openldap-technical/201301/msg00254.html
I tried to do this with a broken backup and the dry-run switch, but
slapadd did not report the error.
It only works with a real import.
Are there other possibilities, or could slapadd be modified so that it
reports such errors?
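
One way to catch at least this class of error without a full import is
to check, purely textually, that every DN in the dump has its parent
present. A minimal sketch, assuming plain single-line `dn:` attributes
and no escaped commas in DNs (real slapcat output can also contain
base64 `dn::` lines and folded lines, which this ignores):

```shell
#!/bin/sh
# Sketch: find entries in a slapcat dump whose parent DN is absent,
# without importing the dump anywhere. Sample LDIF: ou=a is missing.
DUMP=$(mktemp)
cat > "$DUMP" <<'EOF'
dn: ou=root
dn: ou=c,ou=root
dn: ou=b,ou=c,ou=root
dn: cn=cname,ou=a,ou=b,ou=c,ou=root
EOF

orphans=$(awk -F': ' '
    /^dn: / { seen[$2] = 1; dns[++n] = $2 }
    END {
        for (i = 1; i <= n; i++) {
            dn = dns[i]
            # parent = everything after the first comma; DNs without a
            # comma are treated as the suffix and skipped
            if (match(dn, /,/)) {
                parent = substr(dn, RSTART + 1)
                if (!(parent in seen))
                    print dn " (missing parent: " parent ")"
            }
        }
    }' "$DUMP")
printf '%s\n' "$orphans"
rm -f "$DUMP"
```

Run against the broken backup discussed here, this would have flagged
the missing ou=a node without loading the LDIF into any server.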


Thanks in advance

Meike



Re: missing entry in slapcat backup

2013-01-25 Thread Meike Stone
>> and if I try to add this missing node, then I get:
>> ldapadd -x -h localhost -w password -D"cn=admin,ou=root" -f test.ldif
>> adding new entry ou=a,ou=b,ou=c,ou=root
>> ldap_add: Already exists (68)
>
>
> Use slapadd to add the missing entry. For back-mdb you don't need to stop
> slapd while running other slap* tools.

I tried it successfully on the test server. But in the production
environment the server is
configured as one master in an MMR setup.

What is the best way there? If I add the missing entry on one server
with slapadd, does this entry replicate to the second master?
Or is it dangerous to do this - maybe it is better to stop all servers
and add the entry?

Thanks in advance

Meike



Re: missing entry in slapcat backup

2013-01-25 Thread Meike Stone
2013/1/24 Hallvard Breien Furuseth :
> Meike Stone writes:
>> - What ist the origin for such orphaned nodes (In MMR, it happens and
>> I see a few glue records, but in my backup this one node is complete
>> missing...)?
>
> Do you check the exit code from slapcat before saving its output?
> If slapcat (well, any program) fails, discard the output file.


Hello,

yes, every time I make a backup, the exit code of slapcat is checked.
If any error occurs, I write a message to syslog.
I searched the syslog of the last year, but no error occurred -
especially not on the date the backup I now use for testing was created.
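
The discipline described above (check the exit code, discard the dump
on failure) can be wrapped so that a bad dump never replaces a good
one. A sketch with a stand-in for slapcat so it runs anywhere;
`fake_slapcat` and the destination name are hypothetical:

```shell
#!/bin/sh
# Sketch: keep a dump only if the dumping command exits 0.
fake_slapcat() { echo "dn: ou=root"; return 0; }   # stands in for slapcat

backup() {
    out="$1"; shift
    tmp=$(mktemp)
    if "$@" > "$tmp"; then
        mv "$tmp" "$out"              # success: publish the dump atomically
        echo "backup ok: $out"
    else
        rm -f "$tmp"                  # failure: discard partial output
        echo "backup FAILED, dump discarded" >&2
        return 1
    fi
}

dest=$(mktemp -u)
backup "$dest" fake_slapcat
```

As this thread shows, though, a zero exit code only means slapcat could
read the DB; it says nothing about entries that were never in the DB.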

So I tried it again:


Load the broken DB via slapadd into slapd (the messages prove that it is broken):
---
debld02:~ # slapadd  -f /etc/openldap/slapd.conf -q -l /backup.ldif
_ 100.00% eta   none elapsed  19m11s spd   2.2 M/s
Closing DB...Error, entries missing!
  entry 1156449: ou=a,ou=b,ou=c,ou=root



Check that the "parent entry" does not exist:
-
~ # ldapdelete -x -h localhost -w password -D cn=admin,ou=root
"cn=cname,ou=a,ou=b,ou=c,ou=root"
ldap_delete: Other (e.g., implementation specific) error (80)
additional info: could not locate parent of entry
~ # echo $?
80


Check the exit code of slapcat after backing up the broken DB:
-
~ # slapcat -f /etc/openldap/slapd.conf >/backup.ldif; echo $?
0


It seems to me that in such a case slapcat does not throw an error?!

Thanks Meike



Re: missing entry in slapcat backup

2013-01-25 Thread Meike Stone
>>
>> - How can I prevent from such entires and how can I recognize them
>> without importing?
>
>
> It's easiest just to let slapadd tell you.

So as I understand it, I make a dry run (slapadd -u) to test the backup?

I tried this but got no error; only with a real import does slapadd
throw the error.

(The following imports were tested with a BDB configuration.)

Real import:

debld02:~ # slapadd  -f /etc/openldap/slapd.conf -q -l /backup.ldif
_ 100.00% eta   none elapsed  19m11s spd   2.2 M/s
Closing DB...Error, entries missing!
  entry 1156449: ou=a,ou=b,ou=c,ou=root
debld02:~ # echo $?
1

Quick and dry-run import:

debld02:~ # slapadd  -f /etc/openldap/slapd.conf -u -q -l /backup.ldif
. 100.00% eta   none elapsed  03m56s spd  10.8 M/s
debld02:~ # echo $?
0

Dry-run import without quick option:
---
debld02:~ # slapadd  -f /etc/openldap/slapd.conf -u -l /backup.ldif
. 100.00% eta   none elapsed  04m04s spd  10.4 M/s
debld02:~ # echo $?
0

So I get no error after a dry-run import, although the backup is damaged.
Am I misunderstanding anything?

>
>
>> - How can I remove this entry (esp. in production DB without
>> downtime), because a delete show following messages:
>>
>> ~ # ldapdelete -x -h localhost -w password -D cn=admin,ou=root
>> 'cn=cname,ou=a,ou=b,ou=c,ou=root'
>> ldap_delete: Other (e.g., implementation specific) error (80)
>>  additional info: could not locate parent of entry
>>
>> and if I try to add this missing node, then I get:
>> ldapadd -x -h localhost -w password -D"cn=admin,ou=root" -f test.ldif
>> adding new entry ou=a,ou=b,ou=c,ou=root
>> ldap_add: Already exists (68)
>
>
> Use slapadd to add the missing entry. For back-mdb you don't need to stop
> slapd while running other slap* tools.
>

I'll try this!


Many thanks, Meike



missing entry in slapcat backup

2013-01-24 Thread Meike Stone
Hello dear List,

I tried to import a slapcat backup from our production machine in a
test environment and got the following message:

debld02:~ # time slapadd  -w -q -f /etc/openldap/slapd.conf -l /backup.ldif
50f98421 mdb_monitor_db_open: monitoring disabled; configure monitor
database to enable
- 100.00% eta   none elapsed  09m18s spd   4.6 M/s
Closing DB...Error, entries missing!
  entry 1156449: ou=a,ou=b,ou=c,ou=root

At first I did not notice this message, but now I see the database is
broken, because the node "ou=a" is missing.
So my questions:

- What is the origin of such orphaned nodes? (In MMR it happens and I
see a few glue records, but in my backup this one node is missing
completely.)

- How can I prevent such entries, and how can I recognize them
without importing?

- How can I remove this entry (esp. in the production DB without
downtime)? A delete shows the following messages:

~ # ldapdelete -x -h localhost -w password -D cn=admin,ou=root
'cn=cname,ou=a,ou=b,ou=c,ou=root'
ldap_delete: Other (e.g., implementation specific) error (80)
additional info: could not locate parent of entry

and if I try to add this missing node, then I get:
ldapadd -x -h localhost -w password -D"cn=admin,ou=root" -f test.ldif
adding new entry ou=a,ou=b,ou=c,ou=root
ldap_add: Already exists (68)


Thanks for help

Meike



Re: slapd segfaults with mdb

2013-01-22 Thread Meike Stone
> File an ITS (http://www.openldap.org/its/) with a full backtrace of all
> threads from gdb.

=> #7496

Thanks



slapd segfaults with mdb

2013-01-21 Thread Meike Stone
Hello,


I'm playing a little with mdb on a test machine and imported our DB
from the production system
(about 1,500,000 entries, a 2.5 GByte LDIF from slapcat).

I took the slapd source from git today and, because of a segmentation
fault, compiled slapd with debugging symbols.

My configuration is simple:

include   /etc/openldap/schema/core.schema
include   /etc/openldap/schema/cosine.schema
include   /etc/openldap/schema/inetorgperson.schema
include   /etc/openldap/schema/yast.schema
include   /etc/openldap/schema/rfc2307bis.schema
include   /etc/openldap/schema/my_ldap_attributes.schema
include   /etc/openldap/schema/my_ldap_ObjectClasses.schema


pidfile /var/run/slapd/slapd.pid
argsfile   /var/run/slapd/slapd.args


sizelimit -1
timelimit 300
disallow  bind_anon
requireauthc

gentlehupon
tool-threads 4

serverID  001


# mdb database definitions


databasemdb
suffix  "ou=root"
rootdn "cn=admin,ou=root"
rootpwpassword
directory /var/lib/ldap/ldap.mdb
loglevel  256
maxsize10737418240

envflags writemap,nometasync

index   objectClass,entryUUID,entryCSN  eq
index   cn  eq,sub
.
.
. some other own indexes here



I started my search via:

/tmp/ol/usr/local/bin/ldapsearch -x -h localhost -w password -D
cn=admin,ou=root -b'ou=root' '(objectClass=*)' >/dev/null


and got the following message in syslog:

Jan 21 17:50:31 debld02 kernel: [348860.053152] slapd[19394]: segfault
at 7f760f974ce8 ip 0053bea1 sp 7f738b10a650 error 4 in
slapd
[40+227000]



Because of this, I configured the kernel to core dump:

sysctl -w kernel.core_pattern=/tmp/core
sysctl -w kernel.core_uses_pid=1
sysctl -w kernel.suid_dumpable=2
ulimit -c unlimited
ulimit -v unlimited


and got the result via:

~# gdb /tmp/ol/usr/local/libexec/slapd /tmp/core.19386

Core was generated by
`/tmp/buildforopenldaptechnical/usr/local/libexec/slapd -h ldap:///
-f /etc/op'.
Program terminated with signal 11, Segmentation fault.
#0  0x0053bea1 in mdb_entry_decode (op=0xc3a020,
data=0x7f738b10a770, e=0x7f738b11a828) at id2entry.c:666
666 a->a_desc = mdb->mi_ads[*lp++];
(gdb)


How can I solve this?

Btw.: this error is repeatable, and I saw the same error with the
2.4.33 release.


Thanks Meike



Re: don't get running the slapd while using mdb backend

2013-01-18 Thread Meike Stone
>
> So my first question:
> Does mdb have limitations like bdb it have aka BDB_IDL_LOGN?



 Yes. back-mdb is ~60% the same code as back-bdb/hdb, its indexing
 functions are basically identical.
>>>
>>>
>>>
>>> However, I never got mdb to work successfully by modifying these values.
>>
>>
>> Does this mean, it's not possible at moment to get running the mdb
>> with BDB_IDL_LOGN = 2^17?
>
>
> I can guarantee that would never work, as the variable with MDB is
> MDB_IDL_LOGN.  I don't recall exactly the issues I hit when changing it to
> 17 from 16.  Also, MDB has changed substantially since I did that testing.
> ;)  I was ok with not modifying it given the read speed improvements in mdb
> vs bdb.
>

Thanks for sharing; I'll test this with the unmodified slapd and mdb
to see if it is fast enough in our environment!
Have a nice weekend!

Meike



Re: don't get running the slapd while using mdb backend

2013-01-18 Thread Meike Stone
>>> So my first question:
>>> Does mdb have limitations like bdb it have aka BDB_IDL_LOGN?
>>
>>
>> Yes. back-mdb is ~60% the same code as back-bdb/hdb, its indexing
>> functions are basically identical.
>
>
> However, I never got mdb to work successfully by modifying these values.

Does this mean, it's not possible at moment to get running the mdb
with BDB_IDL_LOGN = 2^17?

@Howard: I'm sorry, it was really not my intention to annoy you with
my statement about this problem in bdb/mdb.
Yes, I've read and followed every document under http://symas.com/mdb/,
but not everyone is a developer and not all of it is easy to
understand.
I'm trying to get deeper into OpenLDAP and understand how it works,
but it is really hard ;-)

But anyway, I have to say thanks for all the great work you and the
developers have done on OpenLDAP, and especially on mdb!

Thanks Meike



Re: don't get running the slapd while using mdb backend

2013-01-18 Thread Meike Stone
>> So my first question:
>> Does mdb have limitations like bdb it have aka BDB_IDL_LOGN?
>
>
> Yes. back-mdb is ~60% the same code as back-bdb/hdb, its indexing functions
> are basically identical.
>

Thanks for the information; it was not what I expected. I think for a
lot of users with larger databases
this is a problem, and they use patched versions...

It took us a long time to find out why the database *suddenly* became
extremely slow in spite of indexes.

So it would be great to note this in the Admin Guide and add a page to
the Faq-O-Matic.

>
>> Second, I set up an small lab for tests with mdb and don't get the
>> slapd started
>> with larger mdb size (10GB).
>
>
> Check your ulimits. MDB is much simpler than anything else but you are still
> required to understand how your own OS/platform works.

Thanks for pointing me there, this was the right hint (it would be
nice to note this in the manual page too): at that point I had not
realized that I have to increase
the virtual memory limit!

(I tried to create the sparse file manually (see my last post) to
check if there were any problems related to this.)

I use SLES, and as discussed in my last thread with Quanah
(http://www.openldap.org/lists/openldap-technical/201301/msg00179.html),
virtual memory is limited to the physical memory there. That's bad,
but it happened. I don't know the limits of other distributions.

I hope that for mdb I don't need at least as much RAM as the database
is large..?

Thank you very much!

Meike



don't get running the slapd while using mdb backend

2013-01-18 Thread Meike Stone
Hello,

because of problems with bdb (virtual memory usage and glibc) and its
limitations (IDL),
I want to migrate to mdb.

So my first question:
Does mdb have limitations like bdb's BDB_IDL_LOGN?

Second, I set up a small lab for tests with mdb and can't get slapd
started
with a larger mdb size (10 GB).

Here the lab and test:

Start slapd with an empty /var/lib/ldap/ldap.mdb directory.
I want to reserve 10 GByte of disk space for mdb (maxsize 10737418240).

error message in syslog
==
 @(#) $OpenLDAP: slapd 2.4.33 $ opensuse-buildserv...@opensuse.org
 daemon: IPv6 socket() failed errno=97 (Address family not supported
by protocol)
 mdb_db_open: database "ou=demo" cannot be opened, err 12. Restore from backup!
 backend_startup_one (type=mdb, suffix="ou=demo"): bi_db_open failed! (12)
 slapd stopped.


Files created by slapd during this attempt
==
~# du -b /var/lib/ldap/ldap.mdb/*
12288   /var/lib/ldap/ldap.mdb/data.mdb
8192/var/lib/ldap/ldap.mdb/lock.mdb

I "straced" the start of slapd; I think this is the important
part:
==
.
.
.
stat("/var/lib/ldap/ldap.mdb", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
mmap(NULL, 2101248, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fe38f915000
brk(0x7fe3948ae000) = 0x7fe3948ae000
open("/var/lib/ldap/ldap.mdb/lock.mdb", O_RDWR|O_CREAT, 0600) = 9
fcntl(9, F_GETFD)   = 0
fcntl(9, F_SETFD, FD_CLOEXEC) = 0
fcntl(9, F_SETLK, {type=F_WRLCK, whence=SEEK_SET, start=0, len=1}) = 0
lseek(9, 0, SEEK_END)   = 0
ftruncate(9, 8192)  = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_SHARED, 9, 0) = 0x7fe3941e4000
open("/var/lib/ldap/ldap.mdb/data.mdb", O_RDWR|O_CREAT, 0600) = 11
read(11, "", 4096)  = 0
mmap(NULL, 10737418240, PROT_READ, MAP_SHARED, 11, 0) = -1 ENOMEM
(Cannot allocate memory)
close(11)   = 0
munmap(0x7fe3941e4000, 8192) = 0
close(9)= 0
.
.
.
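
The failing call is the 10 GB mmap() itself: with maxsize set, back-mdb
maps the whole region up front, so the process's address-space limit
must cover it. A quick pre-flight sketch comparing the configured
maxsize against `ulimit -v` (which reports KiB):

```shell
#!/bin/sh
# Sketch: check whether the address-space limit can accommodate an mdb
# maxsize before starting slapd.
MAXSIZE_BYTES=10737418240               # the maxsize from slapd.conf
need_kib=$((MAXSIZE_BYTES / 1024))
have_kib=$(ulimit -v)

echo "need: ${need_kib} KiB of address space"
if [ "$have_kib" = "unlimited" ] || [ "$have_kib" -ge "$need_kib" ]; then
    echo "address-space limit: ok"
else
    echo "address-space limit: too low (have ${have_kib} KiB)"
fi
```

On the SLES setup described later in this thread, this check would have
reported "too low" and explained the ENOMEM above.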


So I tried to start slapd with the default maxsize
-- everything is ok --
==
If I comment out "maxsize 10737418240" (i.e. use the default of 10M)
and start slapd, everything is fine: slapd starts and creates the files:

~# du -b /var/lib/ldap/ldap.mdb/*
12288   /var/lib/ldap/ldap.mdb/data.mdb
8192/var/lib/ldap/ldap.mdb/lock.mdb


As I understand it, at the first slapd start the database file should
be created with the full size, but as a sparse file? So why don't I
see this when I use the default (maxsize 10485760)?
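
As far as I can tell, maxsize only sizes the memory map; the data file
itself starts small and grows as pages are written, which is why
data.mdb stays at a few KB in both cases. Separately, the general
sparse-file effect - apparent size versus allocated blocks - is easy to
demonstrate (GNU coreutils `du` assumed):

```shell
#!/bin/sh
# Sketch: a sparse file has a large apparent size but allocates almost
# no disk blocks, so plain `du` barely notices it while `du -b` does.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=0 seek=100M 2>/dev/null

apparent=$(du -b "$f" | awk '{print $1}')    # apparent size in bytes
allocated=$(du -k "$f" | awk '{print $1}')   # allocated size in KiB
echo "apparent=${apparent} bytes, allocated=${allocated} KiB"
rm -f "$f"
```

The `du -b` in the lab output above reports apparent bytes, so a truly
pre-sized sparse data.mdb would have shown the full maxsize there.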


Here my slapd.configuration file
==
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/yast.schema
include /etc/openldap/schema/rfc2307bis.schema
pidfile /var/run/slapd/slapd.pid
argsfile/var/run/slapd/slapd.args

modulepath  /usr/lib/ldap
moduleload back_mdb
moduleload  back_monitor

sizelimit   -1
timelimit   300
disallowbind_anon
require authc

gentlehup   on
tool-threads4
serverID001

databasemdb
suffix  "ou=demo"
rootdn  "cn=admin"
rootpw  password
directory   /var/lib/ldap/ldap.mdb
loglevel256
maxsize 10737418240
index   objectClass,entryUUID,entryCSN  eq


mounted filesystem for database
==
debld02:/etc/openldap # mount | grep ldap
/dev/sdb5 on /var/lib/ldap type ext3 (rw,relatime)

free space on filsystem
==
debld02:/etc/openldap # df -h | grep ldap
/dev/sdb5 30G  5.6G   23G  20% /var/lib/ldap


So I tried to create the sparse file manually
==
~# su -- ldap
~> id
uid=76(ldap) gid=70(ldap) groups=70(ldap)

~> dd if=/dev/zero of=/var/lib/ldap/ldap.mdb/test.sparse bs=1 count=0 seek=10G
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.3413e-05 s, 0.0 kB/s

~> du -b /var/lib/ldap/ldap.mdb/test.sparse
10737418240 /var/lib/ldap/ldap.mdb/test.sparse



What is going wrong with my test lab?

Thanks Meike



Re: slapd crashes with ch_realloc of X bytes failed

2013-01-17 Thread Meike Stone
> 2013/1/12 Meike Stone :
>
> What I see, that slapd had reserved "Total: 7350688K"
> (overcommitted?), but only referenced 4900700K.
> Why does slapd reserve so much memory and use it not? Because of this,
>  I changed the default values for memory overcomittment
>
> from:
> vm.overcommit_memory = 0 (guess, default)
> vm.overcommit_ratio = 50
>
> to:
> vm.overcommit_memory=2 (strict overcommit)
> vm.overcommit_ratio=100
>
> but it crashed again (ch_malloc of 34545 bytes failed):
> .

I was wondering why it crashed in spite of the overcommit settings.

The solution sometimes is simple, but "far away" ;-) ...:
~# ulimit -v
7353440
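
Worth noting: `ulimit -v` in an interactive shell only shows that
shell's limit; what matters is the limit the running slapd inherited.
On Linux it can be read from /proc for the live process (shown against
the current process here; substitute slapd's PID, e.g. $(pidof slapd)):

```shell
#!/bin/sh
# Sketch: read the address-space limit of a running process from /proc
# (Linux only). "self" stands in for slapd's real PID.
pid=self
addr_line=$(grep 'Max address space' "/proc/$pid/limits")
echo "$addr_line"
```

This avoids being misled by a shell whose limits differ from those of
the init script that started slapd.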

Thanks Meike



Re: slapd crashes with ch_realloc of X bytes failed

2013-01-15 Thread Meike Stone
>
> Yes, that would significantly increase memory usage.  I have only ever done
> the *second* modification (BDB_IDL_LOGN) to fix the IDL issues.  I've run
> that way for years.

By how much have you increased BDB_IDL_LOGN -> to 2^17 or more? That
would be interesting for me, because we are nearly reaching this size too.

> Try dropping the modification to LDAP_PVT_THREAD_STACK_SIZE.  It has never
> been necessary for any of my customers, many of whom have DBs with 6 to 10
> million entries.

Ok, thanks, I'll do this and check.

Thanks Meike



Re: slapd crashes with ch_realloc of X bytes failed

2013-01-15 Thread Meike Stone
>>
>>> From slapd.conf/cn=config:
>>> a) cachesize setting
>>> b) idlcachesize setting
>>> c) dncachesize setting
>>
>>
>> cachesize   75
>> dncachesize 75
>> idlcachesize225
>>
>> Thanks and best regards
>
>
> Your settings here don't make a lot of sense.
>
> I would try
> cachesize 150
> dncachesize 0 (or leave it unset)
> idlcachesize 300
>

Yes, I know these values are very low in my configuration, but if I
have problems with memory, should I really increase the caches that
use this memory?

In the last thread you confirmed that if I increase the DB
cachesize, the system runs out of memory faster:
http://www.openldap.org/lists/openldap-technical/201211/msg00171.html
("ch_realloc means the system ran out of memory.  Increasing the
DB_CONFIG cachesize will run you out of memory more quickly.")

To make slapd a little more stable, I reduced these values (caches) to
the ones posted in my last message. Before, I had:
cachesize   100
dncachesize 100
idlcachesize300


> Your current idlcachesize in particular may be part of the issue in addition
> to your glibc setting.

But the problem exist on the test system with tcmalloc too.


So are 24 GByte RAM not enough for 1,500,000 DNs and a 6 GByte DB?
The whole time (especially at the crash) there was enough physical
(unallocated) memory free. But "slapd/malloc" tried to allocate more
and more (as you can see in the post from 2013/01/12) without using
it, no matter whether glibc or tcmalloc.
I hope I interpreted that right?!

Thanks Meike

Btw.: I use a "special version" of OpenLDAP.
Because of the index slot limit of 65535 IDs, I changed

in openldap-2.4.33.orig/include/ldap_pvt_thread.h
from
#   define LDAP_PVT_THREAD_STACK_SIZE ( 1 * 1024 * 1024 * sizeof(void *) )
to
#   define LDAP_PVT_THREAD_STACK_SIZE ( 2 * 1024 * 1024 * sizeof(void *) )

and in openldap-2.4.33/servers/slapd/back-bdb/idl.h
from
#define BDB_IDL_LOGN16  /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
to
#define BDB_IDL_LOGN17  /* DB_SIZE is 2^17, UM_SIZE is 2^18 */

as Howard recommended here:
http://www.openldap.org/lists/openldap-technical/201101/msg00095.html

Maybe this could be the problem??

My changes are available under:
https://build.opensuse.org/package/show?expand=0&package=openldap2&project=home%3Ameikestone
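For orientation, the effect of the two patched constants can be computed directly. A minimal sketch, assuming 8-byte pointers and 8-byte IDL slots on x86_64 (both sizes are assumptions about the build, not taken from the patch itself):

```python
# Sketch: size impact of the two patched constants on x86_64,
# assuming sizeof(void *) == 8 and 8-byte IDL slots.

PTR_SIZE = 8  # sizeof(void *) on x86_64

def thread_stack_bytes(multiplier):
    # LDAP_PVT_THREAD_STACK_SIZE = multiplier * 1024 * 1024 * sizeof(void *)
    return multiplier * 1024 * 1024 * PTR_SIZE

def idl_slots(logn):
    # idl.h: DB_SIZE is 2^BDB_IDL_LOGN
    return 1 << logn

print(thread_stack_bytes(1) // 2**20, "MiB stack per thread (default)")  # 8 MiB
print(thread_stack_bytes(2) // 2**20, "MiB stack per thread (patched)")  # 16 MiB
print(idl_slots(16), "IDL slots (default)")   # 65536
print(idl_slots(17), "IDL slots (patched)")   # 131072
```

So the patch doubles both the per-thread stack (8 MiB to 16 MiB) and the IDL slot count.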



Re: slapd crashes with ch_realloc of X bytes failed

2013-01-14 Thread Meike Stone
2013/1/14 Quanah Gibson-Mount :

>
>
> Sorry, I don't have your configuration memorized.  Generally, you should
> list:

Oops, sorry, I used my Gmail account and did not see that the thread
in the mailing list is "broken"..

Here is all the information I posted about my production system:
http://www.openldap.org/lists/openldap-technical/201211/msg00164.html

>
> memory allocator (should be tcmalloc, if you've set that up correctly)
On the test system, yes (you can see it in the pmap output), but in production I use glibc.

>
> DB Information:
> a) Total number of DNs in your database
about 1.500.000

> b) DB_CONFIG settings

set_cachesize 2 0 1
set_lg_regionmax 262144
set_lg_bsize 2097152
set_flags DB_LOG_AUTOREMOVE

> c) Size of database on disk: du -c -h *.bdb

27M     MyActivateTime.bdb
8.0K    MyClassName.bdb
30M     MyDeletedTime.bdb
14M     MyInsertDate.bdb
92K     MyJpeg.bdb
1.9M    MyKey.bdb
2.2M    MyLanguage.bdb
184K    MyPhoneNumber.bdb
38M     MyPk.bdb
14M     MyPId.bdb
5.4M    MyPIfId.bdb
9.4M    MyProductTyp.bdb
140K    MyProperty.bdb
2.5M    MySystemFlag.bdb
22M     MyUserName.bdb
184K    MyVId.bdb
428M    cn.bdb
728M    dn2id.bdb
37M     entryCSN.bdb
39M     entryUUID.bdb
4.0G    id2entry.bdb
6.2M    objectClass.bdb
5.4G    total


> From slapd.conf/cn=config:
> a) cachesize setting
> b) idlcachesize setting
> c) dncachesize setting

cachesize   75
dncachesize 75
idlcachesize 225

Thanks and best regards

Meike



Re: slapd crashes with ch_realloc of X bytes failed

2013-01-12 Thread Meike Stone
Hello,

I was able to update both systems to 2.4.33 during my vacation.
Both servers had 16 GByte RAM.
The systems crashed again randomly (as expected).
So we increased the memory on one server to 24 GByte RAM. No effect;
this server crashes too, sometimes up to 5 times a day.

I installed a test machine (VM), but with only 7 GByte RAM, and could
reproduce the crashes with one or two parallel searches.
So I tried tcmalloc, but with the same result.

I dug a little deeper, and the result is that the system's physical
memory is not the bottleneck.

First: slapd tries to allocate more memory than is available, but does
not use it. There is enough physical memory available when slapd crashes.


I ran the commands continuously with while and sleep 1:
 free -m, and pmap $PID_OF_LDAP
and
vmstat 1

The last memory information before the crash was:

pmap:
--
21954: slapd
START   SIZE     RSS     PSS     DIRTY   SWAP PERM MAPPING
7f63f4904000  4K  0K  0K  0K  0K rw-p [anon]
7f63f4905000  4K  0K  0K  0K  0K ---p [anon]
7f63f4906000  16384K   2080K   2080K   2080K  0K rw-p [anon]
7f63f5906000  4K  0K  0K  0K  0K ---p [anon]
7f63f5907000  16384K   4120K   4120K   4120K  0K rw-p [anon]
7f63f6907000  4K  0K  0K  0K  0K ---p [anon]
7f63f6908000  16384K   4120K   4120K   4120K  0K rw-p [anon]
7f63f7908000  4K  0K  0K  0K  0K ---p [anon]
7f63f7909000  16384K   4104K   4104K   4104K 16K rw-p [anon]
7f63f8909000  4K  0K  0K  0K  0K ---p [anon]
7f63f890a000  16384K  8K  8K  8K  4K rw-p [anon]
7f63f990a000 24K  8K  8K  8K  0K rw-s /SYSV000c
7f63f991544K 44K 44K 44K  0K rw-s /SYSV000b
7f63f9998000   2304K 12K 12K 12K  0K rw-s /SYSV000a
7f63f9bd8000 2097152K 2097052K 2097052K 2097052K  0K rw-s /SYSV0009
7f6479bd8000  78008K  60508K  60508K  60508K  0K rw-s /SYSV0008
7f647e806000 16K  0K  0K  0K  0K r-xp
/usr/lib64/sasl2/libplain.so.2.0.22
7f647e80a000   2044K  0K  0K  0K  0K ---p
/usr/lib64/sasl2/libplain.so.2.0.22
7f647ea09000  4K  0K  0K  0K  4K r--p
/usr/lib64/sasl2/libplain.so.2.0.22
7f647ea0a000  4K  0K  0K  0K  4K rw-p
/usr/lib64/sasl2/libplain.so.2.0.22
7f647ea0b000 16K  0K  0K  0K  0K r-xp
/usr/lib64/sasl2/libanonymous.so.2.0.22
7f647ea0f000   2044K  0K  0K  0K  0K ---p
/usr/lib64/sasl2/libanonymous.so.2.0.22
7f647ec0e000  4K  0K  0K  0K  4K r--p
/usr/lib64/sasl2/libanonymous.so.2.0.22
7f647ec0f000  4K  0K  0K  0K  4K rw-p
/usr/lib64/sasl2/libanonymous.so.2.0.22
7f647ec1 20K  0K  0K  0K  0K r-xp
/usr/lib64/sasl2/libsasldb.so.2.0.22
7f647ec15000   2044K  0K  0K  0K  0K ---p
/usr/lib64/sasl2/libsasldb.so.2.0.22
7f647ee14000  4K  0K  0K  0K  4K r--p
/usr/lib64/sasl2/libsasldb.so.2.0.22
7f647ee15000  4K  0K  0K  0K  4K rw-p
/usr/lib64/sasl2/libsasldb.so.2.0.22
7f647ee16000 16K  0K  0K  0K  0K r-xp
/usr/lib64/sasl2/liblogin.so.2.0.22
7f647ee1a000   2044K  0K  0K  0K  0K ---p
/usr/lib64/sasl2/liblogin.so.2.0.22
7f647f019000  4K  0K  0K  0K  4K r--p
/usr/lib64/sasl2/liblogin.so.2.0.22
7f647f01a000  4K  0K  0K  0K  4K rw-p
/usr/lib64/sasl2/liblogin.so.2.0.22
7f647f01b000  8K  0K  0K  0K  0K r-xp
/lib64/libkeyutils-1.2.so
7f647f01d000   2044K  0K  0K  0K  0K ---p
/lib64/libkeyutils-1.2.so
7f647f21c000  4K  0K  0K  0K  4K r--p
/lib64/libkeyutils-1.2.so
7f647f21d000  4K  0K  0K  0K  4K rw-p
/lib64/libkeyutils-1.2.so
7f647f21e000 28K  0K  0K  0K  0K r-xp
/usr/lib64/libkrb5support.so.0.1
7f647f225000   2048K  0K  0K  0K  0K ---p
/usr/lib64/libkrb5support.so.0.1
7f647f425000  4K  0K  0K  0K  4K r--p
/usr/lib64/libkrb5support.so.0.1
7f647f426000  4K  0K  0K  0K  4K rw-p
/usr/lib64/libkrb5support.so.0.1
7f647f427000 12K  0K  0K  0K  0K r-xp
/lib64/libcom_err.so.2.1
7f647f42a000   2044K  0K  0K  0K  0K ---p
/lib64/libcom_err.so.2.1
7f647f629000  4K  0K  0K  0K  4K r--p
/lib64/libcom_err.so.2.1
7f647f62a000  4K  0K  0K  0K  4K rw-p
/lib64/libcom_err.so.2.1
7f647f62b000144K  0K  0K  0K  0K r-xp
/usr/lib64/libk5crypto.so.3.1
7f647f64f000   2044K  0K  0K  0K  0K ---p
/usr/lib64/libk5crypto.so.3.1
7f647f84e000  8K  0K  0K 

Re: slapd crashes with ch_realloc of X bytes failed

2012-11-23 Thread Meike Stone
>
>> Yes, not before January 2013 ...
>> Hope after reorganization, slapd runs more stable...
>> The only thing I can do for now.
>
>
> It is highly unlikely that "reorganization" will change the overall
> footprint of the slapd database.

Yes, I see this now ... It does not matter ...


> Your best bet is to use tcmalloc with
> slapd -- Just LD_PRELOAD it.

But where do I get it? I use SLES11 SP1

Thanks

Meike



Re: slapd crashes with ch_realloc of X bytes failed

2012-11-23 Thread Meike Stone
>
>> I'm afraid to increase the cachesize in DB_CONFIG:
>
>
> ch_realloc means the system ran out of memory.  Increasing the DB_CONFIG
> cachesize will run you out of memory more quickly.

I'm sitting JUST NOW in front of the LDAP server, running
slapcat/slapadd to reorganize the database ..
(the database is almost 4 years old)
Two hours ago, both servers crashed again ...
I can't believe that 16G RAM is not enough for 150 entries/objects ...

I need about 5G for set_cachesize (DB_CONFIG), and the remaining RAM
(11G) is for shared memory and the other caches (cachesize,
dncachesize, idlcachesize).
How much do the three caches need?

And with too little cache, shouldn't the machines just run slower,
rather than run out of memory?

>
> The best thing for you to do would be to upgrade to OpenLDAP 2.4.33 and
> switch to using back-mdb.

A software update is not possible before January 2013.
I was wrong about the version; we use 2.4.31.
I have been looking at mdb for a few months, but is it usable in production?


>
> If you are not capable of doing that, I would suggest you use tcmalloc as an
> alternative memory allocator to glibc, and reduce your entry cachesize and
> idlcachesize.  If you can reclaim enough space, you should increase your
> DB_CONFIG cachesize, as that is the most important element of performance
> with back-bdb/hdb.

Yes, not before January 2013 ...
I hope slapd runs more stably after the reorganization...
It's the only thing I can do for now.


Thanks Meike



Re: slapd crashes with ch_realloc of X bytes failed

2012-11-23 Thread Meike Stone
I'm afraid to increase the cachesize in DB_CONFIG:

At the moment slapd uses 13146656K of referenced memory - that is a lot ...

Memory usage:
~# free -m
             total       used       free     shared    buffers     cached
Mem:         15946      15857         88          0          7       4777
-/+ buffers/cache:      11072       4873
Swap:         2055         11       2043

~# pmap 13631
13631: slapd
START   SIZE     RSS     PSS     DIRTY   SWAP PERM OFFSET DEVICE MAPPING
7fd16800  40712K  40712K  40712K  40712K  0K rw-p
 00:00  [anon]
7fd16a7c2000  24824K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd17000  65532K  65532K  65532K  65532K  0K rw-p
 00:00  [anon]
7fd173fff000  4K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd17400  65532K  65532K  65532K  65532K  0K rw-p
 00:00  [anon]
7fd177fff000  4K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd17800 131072K 131072K 131072K 131072K  0K rw-p
 00:00  [anon]
7fd18000 131072K 131072K 131072K 131072K  0K rw-p
 00:00  [anon]
7fd18800 131072K 131072K 131072K 131072K  0K rw-p
 00:00  [anon]
7fd19000 131072K 131072K 131072K 131072K  0K rw-p
 00:00  [anon]
7fd19800 131068K 131068K 131068K 131068K  0K rw-p
 00:00  [anon]
7fd19000  4K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd1a000  19444K  13424K  13424K  13424K  0K rw-p
 00:00  [anon]
7fd1a12fd000  46092K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd1a400  65536K  65536K  65536K  65536K  0K rw-p
 00:00  [anon]
7fd1a800  65532K  65532K  65532K  65532K  0K rw-p
 00:00  [anon]
7fd1abfff000  4K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd1ac00  65532K  65532K  65532K  65532K  0K rw-p
 00:00  [anon]
7fd1a000  4K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd1b000  65536K  27444K  27444K  27444K  0K rw-p
 00:00  [anon]
7fd1b800 131072K 131072K 131072K 131072K  0K rw-p
 00:00  [anon]
7fd1c000  65536K  65536K  65536K  65536K  0K rw-p
 00:00  [anon]
7fd1c800  88512K  77512K  77512K  77512K  0K rw-p
 00:00  [anon]
7fd1cd67  42560K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd1d000  65532K  65532K  65532K  65532K  0K rw-p
 00:00  [anon]
7fd1d3fff000  4K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd1d400  65536K  65536K  65536K  65536K  0K rw-p
 00:00  [anon]
7fd1d800 129592K 129592K 129592K 129592K  0K rw-p
 00:00  [anon]
7fd1dfe8e000   1480K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd1e000 131072K 131072K 131072K 131072K  0K rw-p
 00:00  [anon]
7fd1e800 131012K 131012K 131012K 130992K  0K rw-p
 00:00  [anon]
7fd1efff1000 60K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd1f000 131072K 131072K 131072K 131068K  0K rw-p
 00:00  [anon]
7fd1f800 110192K 104172K 104172K 104172K  0K rw-p
 00:00  [anon]
7fd1feb9c000  20880K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd2  65524K  65524K  65524K  65524K  0K rw-p
 00:00  [anon]
7fd203ffd000 12K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd20400  65536K  65536K  65536K  65536K  0K rw-p
 00:00  [anon]
7fd20800 131056K 131024K 131024K 131008K 32K rw-p
 00:00  [anon]
7fd20fffc000 16K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd21000  65532K  65532K  65532K  65532K  0K rw-p
 00:00  [anon]
7fd213fff000  4K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd21400  65508K  65508K  65508K  65508K  0K rw-p
 00:00  [anon]
7fd217ff9000 28K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd21800 131072K 131072K 131072K 131072K  0K rw-p
 00:00  [anon]
7fd22000  65532K  65532K  65532K  65532K  0K rw-p
 00:00  [anon]
7fd223fff000  4K  0K  0K  0K  0K ---p
 00:00  [anon]
7fd22400  65536K  65536K  65536K  65536K  0K rw-p
 00:00  [anon]
7fd22800  6553

Re: slapd crashes with ch_realloc of X bytes failed

2012-11-23 Thread Meike Stone
Hello Dieter,

>> My configuration:
>> == DB_CONFIG ==
>> set_cachesize 2 0 1
>> set_lg_regionmax 262144
>> set_lg_bsize 2097152
>> set_flags DB_LOG_AUTOREMOVE
>
> you have a cache of 2GB and about 1.5M entries, you should definitly
> increase the cachesize, take the size of id2entry, add the size of all
> index files and allow additional 20 % for growth.
>
OK, it would be possible to check this ...
So is 16 GByte RAM enough for such a database?

>> Additional I see messages like:
>> "bdb_idl_delete_key: c_del id failed: DB_LOCK_DEADLOCK: Locker killed
>> to resolve a deadlock (-30995)"
>> Should I care about it?
>
> probably due to heavy write operations and insufficient cache.

Which cache (kernel, bdb, idl, dn, ...) do you mean? (Same question
again: is 16 GByte RAM enough for such a database?)


Thanks

Meike
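Dieter's sizing rule (id2entry plus all index files, plus 20 % for growth) can be turned into a concrete set_cachesize line. A sketch; the 5.4 GB figure is the database total from the du output quoted elsewhere in this thread, and the gbytes/bytes split follows DB_CONFIG's set_cachesize syntax:

```python
# Sketch: BDB cache ~= id2entry + all index files + 20 % headroom,
# expressed as a DB_CONFIG set_cachesize line (gbytes bytes ncache).
# The 5.4 GB input is taken from the du output posted in this thread.

GB = 1024 ** 3

db_total_bytes = int(5.4 * GB)           # id2entry plus index files
recommended = int(db_total_bytes * 1.2)  # + 20 % for growth

gbytes, rest = divmod(recommended, GB)
print(f"set_cachesize {gbytes} {rest} 1")
```

With these numbers the rule lands at roughly 6.5 GB of BDB cache, which matches Dieter's point that the configured 2 GB cache is far too small for this database.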



slapd crashes with ch_realloc of X bytes failed

2012-11-23 Thread Meike Stone
Hello,

for a short time now, my slapd has been crashing often.
I have two servers running in MM replication.
I use OpenLDAP version 2.4.30 (updates are only possible in dedicated timeslots...)
The loglevel is set to 256.

I see some strange messages in my log before the slapd crashes:

"ch_realloc of 986032 bytes failed"
---
"ch_malloc of 294896 bytes failed"
---
"bdb(ou=root): txn_checkpoint: failed to flush the buffer cache:
Cannot allocate memory"
---
"ch_malloc of 34159 bytes failed"


What do they mean, and how can I solve this problem?

The system has 16 GByte RAM; no other service is running there.
The database holds about 150 entries, and the size of the LDIF is
about 2 GByte.

Because of the memory messages, I reduced the
cachesize   100
dncachesize 100
idlcachesize 300

to
cachesize   75
dncachesize 75
idlcachesize 225

but the problem still exists.

I can't believe that the memory is insufficient. Sysstat is running,
and I see enough cache memory (about 5 GByte at all times), and the
swap (2 GByte) is almost unused (about 2 MByte).

vm.swappiness is set to the default (60), so swap should be used more
before memory runs out.
The OOM killer is enabled via SYSRQ (signalling of processes), so slapd
should be terminated by the kernel ...


My configuration:
== DB_CONFIG ==
set_cachesize 2 0 1
set_lg_regionmax 262144
set_lg_bsize 2097152
set_flags DB_LOG_AUTOREMOVE

== slapd.conf ==
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/yast.schema
include /etc/openldap/schema/rfc2307bis.schema

pidfile     /var/run/slapd/slapd.pid
argsfile    /var/run/slapd/slapd.args

modulepath  /usr/lib/ldap
moduleload  back_bdb
moduleload  syncprov
moduleload  back_monitor
sizelimit   -1
timelimit   300
disallow    bind_anon
require     authc
gentlehup   on
tool-threads 8
serverID    <001|002>

database    bdb
suffix      "ou=demo"
rootdn      "cn=admin"

directory   /var/lib/ldap
loglevel    256

cachesize   75
dncachesize 75
idlcachesize 225
cachefree   500

dirtyread
dbnosync
shm_key     7
checkpoint  4096 15

index   objectClass,entryUUID,entryCSN  eq
index   cn  eq,sub
index   ... own indexes

syncrepl rid=<001|002>
    provider=ldap://master-<01|02>
    type=refreshAndPersist
    keepalive=360:10:5
    retry="5 5 300 +"
    searchbase="ou=demo"
    attrs="*,+"

overlay syncprov
mirrormode  TRUE
syncprov-checkpoint 100 5

database    monitor

==

(I know, dirtyread and dbnosync are not recommended..)


Additional I see messages like:
"bdb_idl_delete_key: c_del id failed: DB_LOCK_DEADLOCK: Locker killed
to resolve a deadlock (-30995)"
Should I care about it?



Thanks Meike



Re: loglevel expected performance impact

2012-06-13 Thread Meike Stone
If we are talking about syslog:
SuSE (openSUSE/SLES) writes local4 to /var/log/localmessages and
/var/log/messages!!
The best way here to write the messages to a separate file is:

 part from syslog-ng.conf #

filter f_ldap   { program(slapd);};

#change original lines:
filter f_local  { facility(local0, local1, local2, local3, local4,
local5, local6, local7) and not filter(f_ldap); };
filter f_messages   { not facility(news, mail) and not
filter(f_iptables) and not filter(f_ldap); };

#
# LDAP in separate file
#
destination ldaplog { file("/var/log/ldap.log"
  owner(ldap) group(ldap)); };
log { source(src); filter(f_ldap); destination(ldaplog); };

### end ##

Additionally, there is a sync option for syslog-ng. It can be set in
the options { ... } line.


BTW, a few other thoughts on what may influence the speed if you turn
on logging...:
* How many LDAP requests per second do you get?
   - you can check this via the monitor db
  ~# ldapsearch  ... -b"cn=Search,cn=Operations,cn=Monitor" monitorCounter
  ~# ldapsearch ... -b"cn=Uptime,cn=Time,cn=Monitor" monitoredInfo
  then you can calculate the average of your req/s ...

* How is your system sized (especially CPU, RAM, disks, RAID,
controller, partitions, ..)?
* Database size
* Do you use shm_key (it takes IO away from your disk)?

We have also been logging (level 256) for a short time, with about
(max) 1200 LDAP req/sec, and the system and IO are mostly idle :-)
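The req/s averaging described above can be sketched as follows; the two input numbers are hypothetical stand-ins for the monitorCounter and uptime values that the two ldapsearch queries against cn=Monitor would return:

```python
# Sketch: average request rate from the two cn=Monitor reads described
# above. The numbers are hypothetical placeholders for monitorCounter
# (completed search operations) and the uptime in seconds reported
# under cn=Uptime,cn=Time,cn=Monitor.

search_ops = 43_200_000   # monitorCounter from cn=Search,cn=Operations,cn=Monitor
uptime_s = 86_400         # monitoredInfo from cn=Uptime,cn=Time,cn=Monitor

avg = search_ops / uptime_s
print(f"average: {avg:.0f} searches/s")  # average: 500 searches/s
```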

Kindly regards Meike


2012/6/13 Maucci, Cyrille :
> How does your syslog config look like ?
> Did you prefix the filename with the magic - symbol in order to ask for async 
> I/Os on that file?
>
> E.g.
>
> *.info;mail.none;authpriv.none;cron.none                        
> -/var/log/messages
>
> mail.*                                                          
> -/var/log/maillog
>
> local4.*                                                                
> -/var/log/slapd.log
>
> Instead of
>
> *.info;mail.none;authpriv.none;cron.none                        
> /var/log/messages
>
> mail.*                                                          
> /var/log/maillog
>
> local4.*                                                                
> /var/log/slapd.log
>
> ++Cyrille
>
> Man syslog.conf
>   Regular File
>       Typically  messages  are  logged  to real files.  The file has to be 
> specified with full
>       pathname, beginning with a slash '/'.
>
>       You may prefix each entry with the minus '-' sign to omit syncing the 
> file after every
>       logging.   Note  that  you  might  lose information if the system 
> crashes right behind a
>       write attempt.  Nevertheless this might give you back some  
> performance,  especially  if
>       you run programs that use logging in a very verbose manner.
> -Original Message-
> From: openldap-technical-boun...@openldap.org 
> [mailto:openldap-technical-boun...@openldap.org] On Behalf Of Berend De 
> Schouwer
> Sent: Wednesday, June 13, 2012 12:28 PM
> To: openldap-technical@openldap.org
> Subject: loglevel expected performance impact
>
> I'm running some 2.4.23 servers, and I've encountered some slowdown on 
> loglevel other than 0.  Even 256 (stats; recommended) impacts about a 4x 
> slowdown on queries.  Logging is to syslog.
>
> Running ldapsearch slows from 0.005-0.010 seconds to about 0.030-0.040 
> seconds; and that includes loading the binary.  That's from localhost to 
> remove potential DNS lookups.
>
> I stumbled across this when logging was wrong, and the slowdown was 100x.
>
> I'm aware that 2.4.23 isn't the latest version.  I'm also quite happy, for 
> now, to run loglevel 0.
>
> I'm wondering if this is the expected behaviour, given that it's the 
> recommended configuration.  Or should I go dig to find the slowdown?
>
> (I did check the indexes, and db_stats, etc.  All seems fine.)
>
> I apologise for the disclaimer,
> Berend
>
>
>
>
>
> CONFIDENTIALITY NOTICE
> The contents of and attachments to this e-mail are intended for the addressee 
> only, and may contain the confidential information of Argility (Proprietary) 
> Limited and/or its subsidiaries. Any review, use or dissemination thereof by 
> anyone other than the intended addressee is prohibited.
> If you are not the intended addressee please notify the writer immediately 
> and destroy the e-mail. Argility (Proprietary) Limited and its subsidiaries 
> distance themselves from and accept no liability for unauthorised use of 
> their e-mail facilities or e-mails sent other than strictly for business 
> purposes.
>
>



Re: slapd hangs - subtree insert failed: -30995

2012-06-11 Thread Meike Stone
 0 028

During the whole time, the CPU was 97% ... 100% idle.

The systems are running in production, so I can't run any tests. In
our test environment, no problems have been seen so far.
But there the load (LDAP operations) is very low ..

We have been running the configuration with the larger IDL (see first
posting) for 2.5 years without problems, with the directory at mostly
the same size.
same size of the directory.

Thanks Meike


2012/6/6 Meike Stone :
> Hello dear list,
>
> does anyone can help me?
>
> Kindly regards and thanks
>
> Meike
>
> 2012/6/1 Meike Stone :
>> Hello,
>>
>> after inserting (ADD) one object, I get following messages in the
>> logfile and the sapld hangs:
>>
>> Jun  1 09:02:24 ldap-01 slapd[8836]: conn=633789 op=1 ADD
>> dn="cn=3,cn=2,cn=node,cn=1,cn=BBB,cn=AAA,cn=companies,ou=root"
>> Jun  1 09:02:24 ldap-01 slapd[8836]: => bdb_idl_insert_key: c_get
>> failed: DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock (-30995)
>> Jun  1 09:02:24 ldap-01 slapd[8836]: => bdb_dn2id_add 0x205e7e:
>> subtree (cn=BBB,cn=AAA,cn=companies,ou=root) insert failed: -30995
>>
>> After this, I don't see any messages in the log till the staff was
>> initiating a stop/start:
>>
>> Jun  1 09:09:29 ldap-01 slapd[8836]: daemon: shutdown requested and 
>> initiated.
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633113 fd=9 closed (slapd shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=405426 fd=12 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633787 fd=13 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=1011 fd=14 closed (slapd shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=1013 fd=18 closed (slapd shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632703 fd=33 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632710 fd=37 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632883 fd=39 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632762 fd=40 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633211 fd=41 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633735 fd=45 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632829 fd=47 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633170 fd=48 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633200 fd=50 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633788 fd=55 closed (slapd 
>> shutdown)
>> Jun  1 09:09:29 ldap-01 slapd[8836]: slapd shutdown: waiting for 22
>> operations/tasks to finish
>> Jun  1 09:09:42 ldap-01 slapd[20945]: @(#) $OpenLDAP: slapd 2.4.30 $
>>   opensuse-buildserv...@opensuse.org
>> Jun  1 09:09:43 ldap-01 slapd[20945]: slapd starting
>>
>> After the error message at 09:02:24 the slapd did not answer any request.
>> I cannot recover that problem in a test environment.
>>
>> The server is running in a MM environment (two masters), and the
>> server gets 200-1200 search request/s
>> Because of this high rate, we set "loglevel 0". Since we updated the
>> slapd to 2.4.30 (from 2.4.28) the server crashes/hangs about on times
>> a week.
>> Because of this we set loglevel 256 back again.
>>
>>
>> Would be very nice, if I can fix the problem, please help.
>>
>> Thanks in advance
>>
>> Meike
>>
>>
>> PS:
>> The slapd is a modified, self compiled version because of larger IDL
>> with following changes:
>>
>> openldap-2.4.30/servers/slapd/back-bdb/idl.h:
>> -#define        BDB_IDL_LOGN    16      /* DB_SIZE is 2^16, UM_SIZE is 2^17 
>> */
>> +#define        BDB_IDL_LOGN    17      /* DB_SIZE is 2^17, UM_SIZE is 2^18 
>> */
>>
>> openldap-2.4.30/include/ldap_pvt_thread.h:
>> -#      define LDAP_PVT_THREAD_STACK_SIZE ( 1 * 1024 * 1024 * sizeof(void *) 
>> )
>> +#      define LDAP_PVT_THREAD_STACK_SIZE ( 2 * 1024 * 1024 * sizeof(void *) 
>> )
>>
>> All tests where running well!



Re: slapd hangs - subtree insert failed: -30995

2012-06-06 Thread Meike Stone
Hello dear list,

can anyone help me?

Kindly regards and thanks

Meike

2012/6/1 Meike Stone :
> Hello,
>
> after inserting (ADD) one object, I get following messages in the
> logfile and the sapld hangs:
>
> Jun  1 09:02:24 ldap-01 slapd[8836]: conn=633789 op=1 ADD
> dn="cn=3,cn=2,cn=node,cn=1,cn=BBB,cn=AAA,cn=companies,ou=root"
> Jun  1 09:02:24 ldap-01 slapd[8836]: => bdb_idl_insert_key: c_get
> failed: DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock (-30995)
> Jun  1 09:02:24 ldap-01 slapd[8836]: => bdb_dn2id_add 0x205e7e:
> subtree (cn=BBB,cn=AAA,cn=companies,ou=root) insert failed: -30995
>
> After this, I don't see any messages in the log till the staff was
> initiating a stop/start:
>
> Jun  1 09:09:29 ldap-01 slapd[8836]: daemon: shutdown requested and initiated.
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633113 fd=9 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=405426 fd=12 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633787 fd=13 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=1011 fd=14 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=1013 fd=18 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632703 fd=33 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632710 fd=37 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632883 fd=39 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632762 fd=40 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633211 fd=41 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633735 fd=45 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632829 fd=47 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633170 fd=48 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633200 fd=50 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633788 fd=55 closed (slapd shutdown)
> Jun  1 09:09:29 ldap-01 slapd[8836]: slapd shutdown: waiting for 22
> operations/tasks to finish
> Jun  1 09:09:42 ldap-01 slapd[20945]: @(#) $OpenLDAP: slapd 2.4.30 $
>   opensuse-buildserv...@opensuse.org
> Jun  1 09:09:43 ldap-01 slapd[20945]: slapd starting
>
> After the error message at 09:02:24 the slapd did not answer any request.
> I cannot recover that problem in a test environment.
>
> The server is running in a MM environment (two masters), and the
> server gets 200-1200 search request/s
> Because of this high rate, we set "loglevel 0". Since we updated the
> slapd to 2.4.30 (from 2.4.28) the server crashes/hangs about on times
> a week.
> Because of this we set loglevel 256 back again.
>
>
> Would be very nice, if I can fix the problem, please help.
>
> Thanks in advance
>
> Meike
>
>
> PS:
> The slapd is a modified, self compiled version because of larger IDL
> with following changes:
>
> openldap-2.4.30/servers/slapd/back-bdb/idl.h:
> -#define        BDB_IDL_LOGN    16      /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
> +#define        BDB_IDL_LOGN    17      /* DB_SIZE is 2^17, UM_SIZE is 2^18 */
>
> openldap-2.4.30/include/ldap_pvt_thread.h:
> -#      define LDAP_PVT_THREAD_STACK_SIZE ( 1 * 1024 * 1024 * sizeof(void *) )
> +#      define LDAP_PVT_THREAD_STACK_SIZE ( 2 * 1024 * 1024 * sizeof(void *) )
>
> All tests where running well!



slapd hangs - subtree insert failed: -30995

2012-06-01 Thread Meike Stone
Hello,

after inserting (ADD) one object, I get the following messages in the
logfile and slapd hangs:

Jun  1 09:02:24 ldap-01 slapd[8836]: conn=633789 op=1 ADD
dn="cn=3,cn=2,cn=node,cn=1,cn=BBB,cn=AAA,cn=companies,ou=root"
Jun  1 09:02:24 ldap-01 slapd[8836]: => bdb_idl_insert_key: c_get
failed: DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock (-30995)
Jun  1 09:02:24 ldap-01 slapd[8836]: => bdb_dn2id_add 0x205e7e:
subtree (cn=BBB,cn=AAA,cn=companies,ou=root) insert failed: -30995

After this, I don't see any messages in the log till the staff was
initiating a stop/start:

Jun  1 09:09:29 ldap-01 slapd[8836]: daemon: shutdown requested and initiated.
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633113 fd=9 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=405426 fd=12 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633787 fd=13 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=1011 fd=14 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=1013 fd=18 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632703 fd=33 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632710 fd=37 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632883 fd=39 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632762 fd=40 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633211 fd=41 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633735 fd=45 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=632829 fd=47 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633170 fd=48 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633200 fd=50 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: conn=633788 fd=55 closed (slapd shutdown)
Jun  1 09:09:29 ldap-01 slapd[8836]: slapd shutdown: waiting for 22
operations/tasks to finish
Jun  1 09:09:42 ldap-01 slapd[20945]: @(#) $OpenLDAP: slapd 2.4.30 $
   opensuse-buildserv...@opensuse.org
Jun  1 09:09:43 ldap-01 slapd[20945]: slapd starting

After the error message at 09:02:24, slapd did not answer any requests.
I cannot reproduce the problem in a test environment.

The server runs in an MM environment (two masters) and gets
200-1200 search requests/s.
Because of this high rate, we set "loglevel 0". Since we updated
slapd to 2.4.30 (from 2.4.28), the server crashes/hangs about once
a week.
Because of this, we set loglevel 256 again.


Would be very nice, if I can fix the problem, please help.

Thanks in advance

Meike


PS:
The slapd is a modified, self compiled version because of larger IDL
with following changes:

openldap-2.4.30/servers/slapd/back-bdb/idl.h:
-#define BDB_IDL_LOGN    16  /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
+#define BDB_IDL_LOGN    17  /* DB_SIZE is 2^17, UM_SIZE is 2^18 */

openldap-2.4.30/include/ldap_pvt_thread.h:
-#  define LDAP_PVT_THREAD_STACK_SIZE ( 1 * 1024 * 1024 * sizeof(void *) )
+#  define LDAP_PVT_THREAD_STACK_SIZE ( 2 * 1024 * 1024 * sizeof(void *) )

All tests ran well!



Re: Memory consumption when increasing BDB_IDL_LOGN

2012-05-07 Thread Meike Stone
Howard,

thanks for answering so fast!

>
>> After a search, each returned up ID from bdb is located in one slot in
>> the IDL list. On a x86_64 system, each slot is 8Byte. Each search
>> stack in each thread (threads in slapd.conf) gets his own IDL slots.
>> The default value for the threads are 16.
>> With 16 bit BDB_IDL_LOGN (default) on x86_64 with default threads, we
>> need 64k*8*16 = 1024KB memory?
>>
>> If I increase the BDB_IDL_LOGN to 20 we need 512k*8*16 = 8MB?
>>
>> 1) Are my assumptions above correct?
>> 2) How do I increase  LDAP_PVT_THREAD_STACK_SIZE? In my tests I used
>> "4 * 1024 * 1024 * sizeof(void *)" and all tests where running well.
>> 3) Are there other variables to increase before compiling?
>> 4) Here we talk about 8MB memory, did I miss something, that is not
>> the problem today or are there other things I did not catch (other
>> caches in memory e.g. cachesize, dncachesize, idlcachesize or shared
>> memory ...)?
>
>
> As far as compile-time options, that's all there is to worry about.

Yes, but what about memory usage? I need to size the memory and
disks for the LDAP server.
How do I calculate the server's real memory usage and needs? I don't
want the server to crash in production because of memory pressure,
or the kernel to oops...

>
>
>> 5) What is the amount overall I have to expect for memory consumption?
>>
>> I understand, that after adding and deleting entires, the IDs "sparse
>> out" for one index and we loose precision (if we have after search
>> more IDs than  BDB_IDL_LOGN), because using ranges. This Problem will
>> increase as older the database becomes.
>>
>> But if I increase the BDB_IDL_LOGN to my needed size (max expected
>> returned IDs from during search a indexed attibute), the problem with
>> "getting older" is not important for me?
>
>
> Correct.
Thanks :-)



Memory consumption when increasing BDB_IDL_LOGN

2012-05-07 Thread Meike Stone
Hello,

how does memory usage increase if I increase BDB_IDL_LOGN?

I tried to work this out by searching the mailing list (is there a
good guide to understanding all of this?).

After a search, each returned up ID from bdb is located in one slot in
the IDL list. On a x86_64 system, each slot is 8Byte. Each search
stack in each thread (threads in slapd.conf) gets his own IDL slots.
The default value for the threads are 16.
With 16 bit BDB_IDL_LOGN (default) on x86_64 with default threads, we
need 64k*8*16 = 1024KB memory?

If I increase the BDB_IDL_LOGN to 20 we need 512k*8*16 = 8MB?
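To double-check the arithmetic, a small sketch (assuming one IDL of 2^BDB_IDL_LOGN slots of 8 bytes per search thread; the real allocation in back-bdb may differ):

```python
# Rough IDL memory estimate; assumes one IDL of 2**idl_logn slots,
# 8 bytes each, per search thread. This mirrors the assumptions above,
# not necessarily back-bdb's actual allocation.
def idl_memory_bytes(idl_logn=16, slot_bytes=8, threads=16):
    return (1 << idl_logn) * slot_bytes * threads

print(idl_memory_bytes(16) // 1024)           # prints 8192 (KB, LOGN=16)
print(idl_memory_bytes(20) // (1024 * 1024))  # prints 128 (MB, LOGN=20)
```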

1) Are my assumptions above correct?
2) How do I increase LDAP_PVT_THREAD_STACK_SIZE? In my tests I used
"4 * 1024 * 1024 * sizeof(void *)" and all tests ran well.
3) Are there other variables to increase before compiling?
4) Here we are talking about 128MB of memory; did I miss something, so
that this is not really a problem today, or are there other things I
did not catch (other in-memory caches, e.g. cachesize, dncachesize,
idlcachesize, or shared memory ...)?
5) What is the amount overall I have to expect for memory consumption?

I understand that after adding and deleting entries, the IDs "sparse
out" for an index and we lose precision (if a search yields more IDs
than the IDL can hold), because ranges are used. This problem grows as
the database gets older.

But if I increase BDB_IDL_LOGN to the size I need (the maximum number
of IDs expected from a search on an indexed attribute), the "getting
older" problem is not important for me?

Thanks a lot,

Meike



Re: HowTo index generalizedTimeOrderingMatch

2012-05-07 Thread Meike Stone
Thanks for *both* pieces of advice, they helped me a lot!

Kind regards

Meike

2012/5/4 Michael Ströder :
> Hallvard Breien Furuseth wrote:
>> On Fri, 4 May 2012 14:13:38 +0200, Meike Stone wrote:
>>> attributetype (1.3.6.1.4
>>>     NAME ('InsertTime')
>>>     EQUALITY generalizedTimeMatch
>>>     ORDERING generalizedTimeOrderingMatch
>>>     SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
>>> )
>>>
>>> Now I can use this and search, but it takes very long.
>>>
>>> So I want index it for searching:
>>>
>>>  *  greater (">=")
>>>  *  less     ("<=")
>>
>> Use an eq index.  This doubles as an ordering index for some non-string
>> syntaxes, including integer, generalizedTime, and (I think) CSN.
>
> I brought up this discussion before:
> http://www.openldap.org/lists/openldap-technical/201204/msg00116.html
>
> In case the range filter is combined with another indexed filter, the
> eq index can even lead to slower searches.
>
> Ciao, Michael.
>



HowTo index generalizedTimeOrderingMatch

2012-05-04 Thread Meike Stone
Hello,

I have in my own schema an attribute defined:

attributetype (1.3.6.1.4
NAME ('InsertTime')
EQUALITY generalizedTimeMatch
ORDERING generalizedTimeOrderingMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
)

Now I can use this attribute and search on it, but it takes very long.

So I want to index it for searches with:

 *  greater (">=")
 *  less ("<=")

Is this possible, and how can I do it?

I have only seen "eq" and "pres".
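For reference, a sketch of what I have now in slapd.conf and the kind of filter that is slow (base DN and timestamp are made-up examples):

```
# slapd.conf: only the eq/pres indexing I have found so far
index InsertTime eq,pres

# slow range search, e.g.:
#   ldapsearch -x -b "dc=example,dc=com" "(InsertTime>=20120101000000Z)"
```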

Thanks in advance, Meike



Re: circularly MMR Replication ?

2012-02-19 Thread Meike Stone
Hello, thanks for answer!

Is delta-syncrepl a solid option in a circular replication setup, or
would full syncrepl be the better choice?
Does delta-syncrepl need more CPU and RAM?

thanks Meike!



Re: circularly MMR Replication ?

2012-02-13 Thread Meike Stone
> If you've got 5 hosts, Each host should connect to 3 other hosts for a mesh
> network wherein any node can fail and the others remain online without
> requiring every host be connected to every other host.

OK, but what is the recommended replication setup?
It depends on the availability requirements, but also on the bandwidth
between the locations and on the hardware.
If I configure such a mesh, it is much more complex than a ring, and
the hardware and bandwidth needed for each server increase a lot.

We also want to use one load balancer (one VRRP pair) in each location,
so each master additionally has a few ro replicas!

The application separates write and read access onto two VIPs on the
load balancer.
If the load balancer can't reach the local master, all further requests
(ro and rw) are directed to the closest remote location and its
LDAP servers.

I also saw Howard's last mail on the openldap-devel mailing list
regarding ITS#7052 and ITS#6024.

So I'm quite unsure what I should configure! Are there any
recommendations or practical experiences?


> If any one host
> should be considered a true master, you're better off with a standard star
> topology where all updates go to the master.

You mean "If only one host ..."? Then I understand this recommendation.


Thanks Meike



circularly MMR Replication ?

2012-02-09 Thread Meike Stone
Hello,

I have 5 different locations and want to use MMR. I could configure the
replication in a chain, but if one server *in* the chain fails, the
complete replication fails. So is it a good idea to organize the
replication circularly? If one of the replication members fails,
replication between the others should still work.
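What I have in mind for each node, sketched for node 1 of 5 (hostnames, suffix, and credentials are placeholders; each node would pull from its ring neighbour):

```
# slapd.conf sketch for node 1 in a 5-node ring: node 1 pulls from
# node 5, node 2 from node 1, and so on. All values are placeholders.
serverID 1
syncrepl rid=001
    provider=ldap://ldap5.example.com
    type=refreshAndPersist
    searchbase="dc=example,dc=com"
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret
    retry="30 +"
mirrormode on
```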

Thanks for help

Meike



Re: Searches causing disk writes

2011-11-10 Thread Meike Stone
2011/11/10 Adam Wale :
> For anyone that was interested in the fix for this, moving to shared memory 
> resolved the issue.

Hello Adam,

we had the same problem and could solve it the same way. Sorry, I
hadn't seen this thread..

Have you tried mounting the partition where the data directory is
located with "noatime"?
atime updates cause a lot of very expensive write operations on every
read access to the memory-mapped files.
It would be very interesting to know whether mounting the data
partition with "noatime" can solve the problem too.
But be careful: some other programs (e.g. mail, backup, ..) may rely on
atime; it depends on which programs use this partition.
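For example, a hedged /etc/fstab sketch (device, mount point, and filesystem are placeholders for wherever the BDB data directory lives):

```
# Mount the slapd data partition without atime updates; reads of the
# memory-mapped BDB files then no longer trigger metadata writes.
/dev/sdb1  /var/lib/ldap  ext4  defaults,noatime  0  2
```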

Kind regards
Meike



Re: loadbalancer in OpenLDAP environment

2011-11-09 Thread Meike Stone
> I use HAProxy to do load balancing and fail over for the LDAP service.
> And to manage the read/write problem, I put LDAP proxies that catch
> referrals and send them to the master(s).
>
Hello Clément,

I see, HAProxy is a TCP/HTTP load balancer. You put an LDAP proxy in
front of it to separate write/modify from read access? And so the read
access is balanced between the RO replicas?

Thanks Meike!



Re: loadbalancer in OpenLDAP environment

2011-11-09 Thread Meike Stone
2011/11/9 pradyumna dash :
> We are running mirror mode replication with Openldap with loadbalancer.

Which load balancer do you use? You don't separate write/modify from
searches? All LDAP traffic is balanced between the two servers?

Kind regards
Meike



loadbalancer in OpenLDAP environment

2011-11-09 Thread Meike Stone
Hello,

does anyone use a load balancer in their OpenLDAP setup?

I have two locations (data centers). In each location I want to install
an OpenLDAP server that replicates with the other (MM N-Way).
Then I want to install a few (depending on the load) OpenLDAP ro
replicas (replicating from the local OpenLDAP server).

- In each location, load balancers are set up and queried by the local clients.
- The load balancer directs searches to the local ro replicas and
writes/modifies to the local "rw master".
- Only if all local ro resources are down or have a long response
time does the load balancer redirect the searches to the remote ro
replicas.
- The same applies to write access: if the local rw master is down,
write/modify access is redirected to the remote rw master.
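Sketched in HAProxy config syntax purely as an illustration (all names and addresses are made up; any TCP load balancer with a notion of backup servers would do):

```
# haproxy.cfg sketch: reads balanced over local ro replicas, writes to
# the local master; "backup" servers are only used when all non-backup
# servers fail their health checks. All addresses are placeholders.
listen ldap-read
    bind 10.0.0.10:389
    mode tcp
    balance leastconn
    server ro1 10.0.0.21:389 check
    server ro2 10.0.0.22:389 check
    server remote-ro1 10.1.0.21:389 check backup

listen ldap-write
    bind 10.0.0.11:389
    mode tcp
    server local-master 10.0.0.20:389 check
    server remote-master 10.1.0.20:389 check backup
```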

Is this a feasible setup?
What are the experiences with such setups, can you share them? Are
there pitfalls?
I saw one example in the admin guide, but it shows 4 load balancers.
That's too much for my budget.

Thanks for help,
Meike



delta-syncrepl based N-way MMR/Mirror mode replication

2011-11-03 Thread Meike Stone
Hello,

I have a second question. I read on the list that OpenLDAP 2.4.27
will support delta-syncrepl-based N-way MMR/mirror mode replication
setups. That would solve my problem with the small WAN line and the MM
replication between my two LDAP servers.

1) Is it reliable enough to configure in a production setup?
2) Does it need more CPU and/or RAM than full replication on the servers?
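For context, the kind of consumer stanza I would expect to use (a sketch; hostnames and credentials are placeholders, and the provider would additionally need the accesslog overlay with a cn=accesslog database):

```
# slapd.conf consumer sketch for delta-syncrepl: changes are pulled
# from the provider's accesslog database instead of as full entries.
syncrepl rid=001
    provider=ldap://master.example.com
    type=refreshAndPersist
    searchbase="dc=example,dc=com"
    logbase="cn=accesslog"
    logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
    syncdata=accesslog
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret
    retry="60 +"
```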


Thanks Meike



Re: Configuring shared memory / memory mapped files

2011-11-03 Thread Meike Stone
>>
>> I was thinking we should hold it off until OpenLDAP 2.5. But it actually is
>> working perfectly fine already; we may include it in 2.4 as an Experimental
>> feature.
>
> I'm testing back-mdb in a local environment. No problems so far. I think it
> could be added in 2.4.27 announcing it for public testing. Otherwise no-one
> else will test it thoroughly.
>

Hello Michael,

how do you test? Can you share your tests, environment, findings, and
experiences?

Thanks Meike



Re: Configuring shared memory / memory mapped files

2011-11-03 Thread Meike Stone
Hello Howard,

Thanks for the helpful information!
Everything about back-mdb sounds so good! Will the new back-mdb be
included in the next release?
Is it recommended to use this backend in a production environment?

Thanks for your hard work on the great OpenLDAP!

Meike


2011/11/1 Howard Chu :
> Meike Stone wrote:
>>
>> Hello,
>>
>> Some time ago, we installed a Linux guest with an OpenLDAP (db size
>> approx. 650MByte / ) server in an ESXi environment.
>> Maybe because of a read/write ratio of 100:1, the hard disks were
>> heavily used by writes to the bdb backend's memory-mapped files.
>> The CPU in that Linux system had iowait (top) between 80% and 100%,
>> and the other VMs on the ESXi host slowed down.
>>
>> After changing to shared memory (shm_key), all problems with disk IO
>> were gone.
>>
>> I read on the mailing list and in the "OpenLDAP performance tuning"
>> guide that it does not matter whether one uses memory-mapped files or
>> shared memory until the database is over 8GB. But why did we have
>> such problems?
>>
>> Please note, the OpenLDAP server was operating very fast with the
>> memory-mapped files, because of indexes and proper caching.
>>
>>
>> Now I want to install more than one OpenLDAP server on one Linux
>> system (real hardware this time).
>> Every OpenLDAP server will be bound to a separate IP and DNS hostname.
>>
>> So in this scenario it is hard to calculate the shared memory and
>> assign each LDAP server to the right shared memory region (key).
>>
>> Therefore I want to go back to memory-mapped files. Are there any
>> recommendations for sizing the Linux system, like:
>>  - type of file system (ext3, ext4, xfs, ..)
>>  - parameters of the file system (syncing ->  commit=nrsec, data=*, ... )
>>  - swap usage (swappiness, dirty_background_ratio)
>>  - ???
>
> Also, back-mdb (in git master) will behave much better in a VM deployment.
> (Actually, back-mdb behaves better than back-bdb/hdb in all environments.)
>
> --
>  -- Howard Chu
>  CTO, Symas Corp.           http://www.symas.com
>  Director, Highland Sun     http://highlandsun.com/hyc/
>  Chief Architect, OpenLDAP  http://www.openldap.org/project/
>



Configuring shared memory / memory mapped files

2011-11-01 Thread Meike Stone
Hello,

Some time ago, we installed a Linux guest with an OpenLDAP (db size
approx. 650MByte / ) server in an ESXi environment.
Maybe because of a read/write ratio of 100:1, the hard disks were
heavily used by writes to the bdb backend's memory-mapped files.
The CPU in that Linux system had iowait (top) between 80% and 100%, and
the other VMs on the ESXi host slowed down.

After changing to shared memory (shm_key), all problems with disk IO were gone.

I read on the mailing list and in the "OpenLDAP performance tuning"
guide that it does not matter whether one uses memory-mapped files or
shared memory until the database is over 8GB. But why did we have such
problems?

Please note, the OpenLDAP server was operating very fast with the
memory-mapped files, because of indexes and proper caching.


Now I want to install more than one OpenLDAP server on one Linux system
(real hardware this time).
Every OpenLDAP server will be bound to a separate IP and DNS hostname.

So in this scenario it is hard to calculate the shared memory and
assign each LDAP server to the right shared memory region (key).

Therefore I want to go back to memory-mapped files. Are there any
recommendations for sizing the Linux system, like:
 - type of file system (ext3, ext4, xfs, ..)
 - parameters of the file system (syncing -> commit=nrsec, data=*, ... )
 - swap usage (swappiness, dirty_background_ratio)
 - ???
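As an illustration of the knobs above, a hedged sketch (all values are made-up starting points, not recommendations):

```
# /etc/fstab: ext4 with a relaxed journal commit interval and no atime
# updates for the slapd data partition (device/path are placeholders)
/dev/sdb1  /var/lib/ldap  ext4  noatime,commit=60,data=ordered  0  2

# /etc/sysctl.conf: prefer keeping slapd's pages in RAM, start
# background writeback of dirty mmap pages earlier
vm.swappiness = 10
vm.dirty_background_ratio = 5
```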


Thanks for any help.
Meike



Re: chaining and referral object with two referrals

2011-08-12 Thread Meike Stone
Hello,

I've tried to read the source, but I'm no C programmer, so it is very
hard for me to follow.

In chain.c I read the following comment:

"/* We're setting the URI of the first referral;
* what if there are more?"

Does this mean the overlay cannot handle more than one referral in
the referral object?

Thanks Meike


2011/8/9 Meike Stone :
> Hello,
>
> sorry for asking again.
>
> If I use the chaining overlay (slapo-chain) and put more than one
> referral in the referral object, how does the overlay behave, and can
> I configure this?
> The background is that I want to put two referrals pointing to two
> LDAP servers (multi-master), and if one of them is unavailable, the
> second one should be tried by the overlay.
>
> (http://www.openldap.org/lists/openldap-technical/201107/msg00267.html)
>
>
> Thanks, Meike!
>
> (Excuse me for using my colleague's mail address last time.)
>



chaining and referral object with two referrals

2011-08-09 Thread Meike Stone
Hello,

sorry for asking again.

If I use the chaining overlay (slapo-chain) and put more than one
referral in the referral object, how does the overlay behave, and can I
configure this?
The background is that I want to put two referrals pointing to two
LDAP servers (multi-master), and if one of them is unavailable, the
second one should be tried by the overlay.
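The referral object I have in mind looks like this (LDIF sketch; DNs and hostnames are placeholders, and whether slapo-chain follows the second ref value is exactly my question):

```
dn: ou=remote,dc=example,dc=com
objectClass: referral
objectClass: extensibleObject
ou: remote
ref: ldap://master1.example.com/ou=remote,dc=example,dc=com
ref: ldap://master2.example.com/ou=remote,dc=example,dc=com
```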

(http://www.openldap.org/lists/openldap-technical/201107/msg00267.html)


Thanks, Meike!

(Excuse me for using my colleague's mail address last time.)