Re: ldap query performance issue

2013-05-24 Thread Howard Chu

Chris Card wrote:

Any ideas?


Increase the IDL range. This is how I do it:

--- openldap-2.4.35/servers/slapd/back-bdb/idl.h.orig	2011-02-17 16:32:02.598593211 -0800
+++ openldap-2.4.35/servers/slapd/back-bdb/idl.h	2011-02-17 16:32:08.937757993 -0800
@@ -20,7 +20,7 @@
 /* IDL sizes - likely should be even bigger
  *   limiting factors: sizeof(ID), thread stack size
  */
-#define	BDB_IDL_LOGN	16	/* DB_SIZE is 2^16, UM_SIZE is 2^17 */
+#define	BDB_IDL_LOGN	17	/* DB_SIZE is 2^16, UM_SIZE is 2^17 */
 #define BDB_IDL_DB_SIZE		(1<<BDB_IDL_LOGN)
 #define BDB_IDL_UM_SIZE		(1<<(BDB_IDL_LOGN+1))
 #define BDB_IDL_UM_SIZEOF	(BDB_IDL_UM_SIZE * sizeof(ID))

Thanks, that looks like it might be the issue. Unfortunately I only see the 
issue in production, so patching it might be a pain.

I've tried this change, but it made no difference to the performance of the 
query.


You have to re-create all of the relevant indices as well. Also, it's always 
possible that some slots in your index are still too big, even for this 
increased size.


You should also test this query with your data loaded into back-mdb.
--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: ldap query performance issue

2013-05-24 Thread Howard Chu

Meike Stone wrote:

Hello,

had the same problem years ago and the patch worked for me. As I
understood, this special problem exists in mdb too
(http://www.openldap.org/lists/openldap-technical/201301/msg00185.html).
That's one reason why I haven't switched until now.


Yes, back-mdb uses the same index design, but it is still inherently faster 
than BDB backends.


Thanks Meike

2013/5/24 Howard Chu h...@symas.com:

Chris Card wrote:


Any ideas?



Increase the IDL range. This is how I do it:

--- openldap-2.4.35/servers/slapd/back-bdb/idl.h.orig	2011-02-17 16:32:02.598593211 -0800
+++ openldap-2.4.35/servers/slapd/back-bdb/idl.h	2011-02-17 16:32:08.937757993 -0800
@@ -20,7 +20,7 @@
 /* IDL sizes - likely should be even bigger
  *   limiting factors: sizeof(ID), thread stack size
  */
-#define	BDB_IDL_LOGN	16	/* DB_SIZE is 2^16, UM_SIZE is 2^17 */
+#define	BDB_IDL_LOGN	17	/* DB_SIZE is 2^16, UM_SIZE is 2^17 */
 #define BDB_IDL_DB_SIZE		(1<<BDB_IDL_LOGN)
 #define BDB_IDL_UM_SIZE		(1<<(BDB_IDL_LOGN+1))
 #define BDB_IDL_UM_SIZEOF	(BDB_IDL_UM_SIZE * sizeof(ID))


Thanks, that looks like it might be the issue. Unfortunately I only see
the issue in production, so patching it might be a pain.


I've tried this change, but it made no difference to the performance of
the query.



You have to re-create all of the relevant indices as well. Also, it's always
possible that some slots in your index are still too big, even for this
increased size.

You should also test this query with your data loaded into back-mdb.
--
   -- Howard Chu
   CTO, Symas Corp.   http://www.symas.com
   Director, Highland Sun http://highlandsun.com/hyc/
   Chief Architect, OpenLDAP  http://www.openldap.org/project/






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: ldap query performance issue

2013-05-27 Thread Howard Chu

Meike Stone wrote:

Hello,

because of this, does it make sense in a directory with 1,000,000
people to index the sex attribute?


Indexing is all about making rare data easy to find. If you have an attribute 
that occurs on 99% of your entries, indexing it won't save any search time, 
and it will needlessly slow down modify time.


Asking about 1,000,000 entries is meaningless on its own. It's not the raw 
number of entries that matters, it's the percentage of the total directory. If 
you have 1,000,000,000 entries in your directory, then 1,000,000 is actually 
quite a small percentage of the data and it might be smart to index it. If you 
have only 2,000,000 entries total, it may not make enough difference to be 
worthwhile.


It's not the raw numbers that matter, it's the frequency of occurrences.



thanks Meike

2013/5/23 Quanah Gibson-Mount qua...@zimbra.com:

--On Thursday, May 23, 2013 4:40 PM + Chris Card ctc...@hotmail.com
wrote:


Hi all,

I have an openldap directory with about 7 million DNs, running openldap
2.4.31 with a BDB backend (4.6.21), running on CentOS 6.3.

The structure of the directory is like this, with suffix dc=x,dc=y

dc=x,dc=y
account=a,dc=x,dc=y
   mail=m,account=a,dc=x,dc=y   // Users
   
   licenceId=l,account=a,dc=x,dc=y  // Licences,
objectclass=licence   
   group=g,account=a,dc=x,dc=y  // Groups
   
   // etc.

account=b,dc=x,dc=y
   

Most of the DNs in the directory are users or groups, and the number of
licences is small (< 10) for each account.

If I do a query with basedn account=a,dc=x,dc=y and filter
(objectclass=licence) I see wildly different performance, depending on
how many users are under account a. For an account with ~3 users the
query takes 2 seconds at most, but for an account with ~6 users  the
query takes 1 minute.

It only appears to be when I filter on objectclass=licence that I see
that behaviour. If I filter on a different objectclass which matches a
similar number of objects to the objectclass=licence filter, the
performance doesn't seem to depend on the number of users.

There is an index on objectclass (of course), but the behaviour I'm
seeing seems to indicate that for this query, at some point slapd stops
using the index and just scans all the objects under the account.

Any ideas?



Increase the IDL range.  This is how I do it:

--- openldap-2.4.35/servers/slapd/back-bdb/idl.h.orig	2011-02-17 16:32:02.598593211 -0800
+++ openldap-2.4.35/servers/slapd/back-bdb/idl.h	2011-02-17 16:32:08.937757993 -0800
@@ -20,7 +20,7 @@
 /* IDL sizes - likely should be even bigger
  *   limiting factors: sizeof(ID), thread stack size
  */
-#define	BDB_IDL_LOGN	16	/* DB_SIZE is 2^16, UM_SIZE is 2^17 */
+#define	BDB_IDL_LOGN	17	/* DB_SIZE is 2^16, UM_SIZE is 2^17 */
 #define BDB_IDL_DB_SIZE		(1<<BDB_IDL_LOGN)
 #define BDB_IDL_UM_SIZE		(1<<(BDB_IDL_LOGN+1))
 #define BDB_IDL_UM_SIZEOF	(BDB_IDL_UM_SIZE * sizeof(ID))


--Quanah

--

Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.

Zimbra ::  the leader in open source messaging and collaboration







--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: ldap query performance issue

2013-05-28 Thread Howard Chu

Meike Stone wrote:

2013/5/28 Meike Stone meike.st...@googlemail.com:



I ask this, because it seems to me, that the basedn does not matter in
the search ...


In my special (real world)  case, I have in the basedn 84,000 objects
but only one of this is a person with objectclass=inetOrgperson.
I have about 420,000 objectclass=inetOrgperson. In the directory are
2,000,000 objects at all.

The search with the specified basedn where only the one inetOrgperson
is located needs about 5 minutes ...


Looks like a bug in back-bdb: it retrieves the scope index but isn't using it
correctly with the filter index. Please submit an ITS for this.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: migrating from SUN one C SDK to openldap C sdk (Linux).

2013-06-06 Thread Howard Chu

Far a wrote:

As part of Solaris to Linux migration, I am planning to migrate my application
that uses SUN one C SDK to openldap C sdk (Linux). I have various questions
that I need to address at the beginning. I am hoping I can get some help over
here. The questions are as follows

  * Can client use openldap C sdk (Linux) while the server is still on Sun one
LDAP
server.


The point of a standardized protocol like LDAP and the SDK is to allow you to 
talk to any server that speaks LDAP.



  * Is there a list of dos and don'ts and list of possible issues for
migrating from SUN
one LDAP TO openldap on Linux


I haven't seen any such list.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: run test suite separately from the source code compilation?

2013-06-06 Thread Howard Chu

Meike Stone wrote:

Hello,

is it possible and how, to run the complete test suite included in the
source tarball later, after installing the openldap rpm/deb package
independently and separated from the compilation?


We can't answer this since there is no "the openldap rpm/deb package". The
OpenLDAP Project distributes source code, not binary packages. What you can or
can't do with a particular distro's binary package is a question you should
ask of your distro/package provider.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: migrating from SUN one C SDK to openldap C sdk (Linux).

2013-06-06 Thread Howard Chu
Probably worth pointing out - Solaris 11 now bundles OpenLDAP by default. If 
there were any issues in migrating, the OpenSolaris guys must have already 
encountered them and they can surely provide you answers.


Howard Chu wrote:

Far a wrote:

As part of Solaris to Linux migration, I am planning to migrate my application
that uses SUN one C SDK to openldap C sdk (Linux). I have various questions
that I need to address at the beginning. I am hoping I can get some help over
here. The questions are as follows

   * Can client use openldap C sdk (Linux) while the server is still on Sun one
 LDAP
 server.


The point of a standardized protocol like LDAP and the SDK is to allow you to
talk to any server that speaks LDAP.


   * Is there a list of dos and don'ts and list of possible issues for
 migrating from SUN
 one LDAP TO openldap on Linux


I haven't seen any such list.




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: migrating from SUN one C SDK to openldap C sdk (Linux).

2013-06-06 Thread Howard Chu

Clément OUDOT wrote:

2013/6/6 Howard Chu h...@symas.com:

Far a wrote:

   * Is there a list of dos and don'ts and list of possible issues for

 migrating from SUN
 one LDAP TO openldap on Linux



I haven't seen any such list.


Hi,

you can find some notes here:
http://www.linid.org/projects/openldap-manager/wiki/MigrationSunOracle


This appears to be a list of differences between the SunOne DS and OpenLDAP 
slapd, not differences in the Sun LDAP C SDK and the OpenLDAP C SDK. Quite a 
different topic from the question in this thread.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: run test suite separately from the source code compilation?

2013-06-06 Thread Howard Chu

Meike Stone wrote:

Hello,

thanks for the answer, that's a great pity!

Meike

2013/6/6 Hallvard Breien Furuseth h.b.furus...@usit.uio.no:

Meike Stone writes:

is it possible and how, to run the complete test suite included in the
source tarball later, after installing the openldap rpm/deb package
independently and separated from the compilation?


No.  tests/progs/ in the test suite uses libraries and generated include
files which are not installed, they're only used for building OpenLDAP.


Well. The tools in tests/progs are only used in a handful of the tests (008, 
039, 060, maybe a couple other back-ldap/back-meta tests) and you could of 
course compile them in the source tree and use them against the packaged 
binaries. Or just skip the tests that use them.


But the bigger problem is the configuration that's needed to make the test 
scripts work with an installed package. I.e., tests/run.in needs to be 
converted to tests/run and tweaking that manually would be quite a chore. Also 
tests/scripts/defines.sh assumes a lot of the binaries are present in the 
source tree; you'd either have to override the paths in the script or create 
symlinks in the source tree to point to the corresponding binaries.


It could all be done, certainly, if you have the patience.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: run test suite separately from the source code compilation?

2013-06-06 Thread Howard Chu

Meike Stone wrote:


If your purpose is to test the distribution's builds,

Yes, that's is my intention.


you can surely
download the corresponding OpenLDAP source code, build it, replace slapd and
slap* tools in BUILDDIR/servers/slapd with those provided by the
distribution, and run the tests using "make test" or "cd tests; ./run
testXXX" where XXX is the number of the test you need.


Ahh,  that sounds great, I'll check that!


As I note in my previous email, it's not as simple as that, unless you build 
your source tree with exactly the same configure options as the package you 
want to test against.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: migrating from SUN one C SDK to openldap C sdk (Linux).

2013-06-06 Thread Howard Chu

Doug Leavitt wrote:


On 06/06/13 09:51, Howard Chu wrote:

Clément OUDOT wrote:

2013/6/6 Howard Chu h...@symas.com:

Far a wrote:

* Is there a list of dos and don'ts and list of possible issues for

  migrating from SUN
  one LDAP TO openldap on Linux



I haven't seen any such list.


Hi,

you can find some notes here:
http://www.linid.org/projects/openldap-manager/wiki/MigrationSunOracle


This appears to be a list of differences between the SunOne DS and
OpenLDAP slapd, not differences in the Sun LDAP C SDK and the OpenLDAP
C SDK. Quite a different topic from the question in this thread.



I don't have a thorough list or document with specific guidance but I
can pass on a few pointers.
Assuming the goal is to move off Solaris (iPlanet/Mozilla vintage)
libldap.so.5 to a recent OpenLDAP
libldap-2.4.so.2 library, then here are a few things to think about.

1) for the most part if you are using any of the common APIs such as
ldap_search_ext_s etc. these
will work fine with just a recompile.  In my experience 80-90+% of your
LDAP code will remain unmodified.
Also you can probably catch everything you need to change by just
updating your build environment
to the OpenLDAP headers and libraries, and fix the compile errors that
show up.

2) Presumably your application uses the old ldap_init style interfaces
(the only ones that existed in libldap.so.5)
so you need to make a decision to either convert them to newer
ldap_initialize APIs or enable LDAP_DEPRECATE.

If you are using non-secure connections then you can stay with the old
APIs, but if you are using secure
connections and have tapped into the NSPR portions of libldap.so.5 (aka
any prldap* API) you need to
rip that stuff out and start using ldap_initialize.  My $0.02
regardless, just change over to ldap_initialize even
if you are not using SSL.

3) review your ldap_get_option and ldap_set_option calls.  There are a
few differences
such as LDAP_OPT_NETWORK_TIMEOUT instead of LDAP_X_OPT_CONNECT_TIMEOUT
and those differences need to be changed.  A test compile will expose
them all.

4) If you are using libldap.so.5 ldap_get_lderrno you will need to
change that code to call
  ldap_get_option and call
  (void) ldap_get_option(ld, LDAP_OPT_RESULT_CODE, &err);
  (void) ldap_get_option(ld, LDAP_OPT_DIAGNOSTIC_MESSAGE, &s);
  (void) ldap_get_option(ld, LDAP_OPT_MATCHED_DN, &m);
as needed.  Similarly a test compile will flag these places.

5) The function calls to manage controls (such as virtuallist) are
different between the libraries, and
if used, the application will need some adjustment there.

If you are using some other libldap.so.5 specific functions you might
have to do a little more
work, but in general everything above probably accounts for 99+% of an
application conversion.

The only other big difference is if your application specifically has
multiple threads sharing/multiplexing
requests over the same connection.  If it does you need to look at and
use the OpenLDAP
ldap_dup/ldap_destroy APIs.  Here you may need to rip out some more
prldap* functions and
rework the code, but that will be dependent on your application specifics.

In my experience, if you look out for these issues, then moving to the
OpenLDAP libldap library
is pretty straight forward.


Thanks Doug, great info.

For a practical example, here's a patch I wrote to transition Mozilla Firefox 
off of the Mozilla C SDK and onto OpenLDAP, back in 2008. The patch did a 
number of the things Doug mentioned.


https://bug292127.bugzilla.mozilla.org/attachment.cgi?id=334117
(Read the bug report for more context. 
https://bugzilla.mozilla.org/show_bug.cgi?id=292127 )


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: migrating from SUN one C SDK to openldap C sdk (Linux).

2013-06-06 Thread Howard Chu

Doug Leavitt wrote:

Finally, Solaris direct linking should protect the third party application
in the event that dynamically loaded Solaris library dynamically loads
one of the two libldaps for its needs.  In this event even if both libraries
are loaded into the application, the Solaris library will use the one it needs
while the application will use the one it was linked with and they won't
cross name space or functional boundaries.


You might suggest to your colleagues at Oracle that they do this in other 
libraries they ship too.


http://www.openldap.org/its/index.cgi/Incoming?id=7599

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: migrating from SUN one C SDK to openldap C sdk (Linux).

2013-06-06 Thread Howard Chu

Aaron Richton wrote:

On Thu, 6 Jun 2013, Howard Chu wrote:


Doug Leavitt wrote:

Finally, Solaris direct linking should protect the third party
application in the event that dynamically loaded Solaris library
dynamically loads one of the two libldaps for its needs.  In this
event even if both libraries are loaded into the application, the
Solaris library will use the one it needs while the application will
use the one it was linked with and they won't cross name space or
functional boundaries.


You might suggest to your colleagues at Oracle that they do this in other
libraries they ship too.

http://www.openldap.org/its/index.cgi/Incoming?id=7599


To be fair, that ITS is for Linux, and last I heard the direct linking
support patches weren't accepted to glibc. So the Solaris-style ld
-Bdirect -Minterposers.mapfile won't work for that report.

We're getting a bit off-topic, but there's no reason for a vendor library 
whose purpose is not to provide an LDAP API to expose/export LDAP API symbols. 
They could use -Bsymbolic or an explicit list of exported symbols in a mapfile 
to prevent such symbol leaks from occurring. You don't *need* any particular 
Solaris-specific features to avoid these issues.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Building 32-bit libraries on 64-bit machine

2013-06-07 Thread Howard Chu

On 07.06.2013 12:05, Ashwin Kumar wrote:
  On Fri, Jun 7, 2013 at 3:04 PM, Christian Manal 
  moen...@informatik.uni-bremen.de
mailto:moen...@informatik.uni-bremen.de wrote:
 
  -m32
 
 
  Can I not pass -m32 flag to make while compiling?


I'm not sure about that. You could just try to run

make CFLAGS="-m32"

but that'd override whatever else configure put in there in the Makefiles.


Safer to use
make CC="gcc -m32"
for that reason

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Open LDAP ACL and Group

2013-06-07 Thread Howard Chu

Dysan 67 wrote:

Hello,
I have a problem with acl and group.
I configured a proxy slapd and add acl (see slapd.conf below)


Read the slapd-ldap(5) manpage. Since your remote server is AD you must 
configure explicit Bind credentials for any access of the remote server. In 
this case, back-ldap cannot look up the remote group memberships because you 
have failed to configure acl-bind.


Run slapd with -d7 and it will be obvious that this is the problem.


When I run a ldapsearch command with user 'Test User' the attributes are
displayed. It's Ok

But when I run the same ldapsearch command with user 'Synchro1 User' the
message 'Insufficient access (50)' is displayed. It's not ok.
The user 'Synchro1 User' is member of
CN=Grp_Users_UG,OU=Gina,OU=Applications,DC=activedir,DC=example,DC=ch

Are you an idea ?
Thank you for you help
Dysan

My environment
-
ldapproxy server is CentOS release 5.9 (Final) openldap version 2.3.43
dc1-test Windows Server 2008 R2 (Domain Controler)

Ldapsearch command
---
$ ldapsearch -x -LLL -H ldaps://ldapproxy.example.ch:636 -D "CN=Test
User,OU=TST,OU=USERS,DC=activedir,DC=example,DC=ch" -W -b
dc=activedir,dc=example,dc=ch -s sub cn=*
Enter LDAP Password:
dn: 
...

$ ldapsearch -x -LLL -H ldaps://ldapproxy.example.ch:636 -D "CN=Synchro1
User,OU=TST,OU=USERS,DC=activedir,DC=example,DC=ch" -W -b
dc=activedir,dc=example,dc=ch -s sub cn=*
Enter LDAP Password:
Insufficient access (50)

slapd.conf
--
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
allow bind_v2

pidfile /var/run/openldap/slapd.pid
argsfile/var/run/openldap/slapd.args
TLSCipherSuite  HIGH:-SSLv2
TLSCACertificateFile /etc/openldap/cacerts/cacerts.crt
TLSCertificateFile /etc/openldap/cacerts/ldapproxy.example.ch.crt
TLSCertificateKeyFile /etc/openldap/cacerts/ldapproxy.example.ch.key

loglevel -1
disallowbind_anon

# AD
databaseldap
suffix  dc=activedir,dc=example,dc=ch
uri ldaps://dc1-test.example.ch/
readonly on
rebind-as-user
lastmod  off

access to attrs=displayname,sn,givenname,mail,telephoneNumber
   by dn.exact=CN=Test User,OU=TST,OU=USERS,DC=activedir,DC=example,DC=ch read
   by
group.exact=CN=Grp_Users_UG,OU=Gina,OU=Applications,DC=activedir,DC=example,DC=ch
read
   by * none

# The users must see the entry itself
access to attrs=entry
   by dn.exact=CN=Test User,OU=TST,OU=USERS,DC=activedir,DC=example,DC=ch read
   by
group.exact=CN=Grp_Users_UG,OU=Gina,OU=Applications,DC=activedir,DC=example,DC=ch
read
   by * none

# Other attributes, others users have no access
access to *
   by * none
#---
slapd.conf end



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Openldap

2013-06-07 Thread Howard Chu

Caldwell, Carmela wrote:

We are trying to install Open LDAP, but we are not sure if it will work with
z/OS 1.13 mainframe. Does anyone have experience with z/OS mainframe?


It has been many years since we ported to z/OS. Some notes are here
http://www.openldap.org/faq/data/cache/719.html

I have no idea how out-of-date the info is relative to the version of z/OS 
you're using. Good luck.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Question Sun Directory Server upgrades from version 6.3.1.1.1 to version 11.1.1.5.0

2013-06-09 Thread Howard Chu

Far a wrote:

I am new to LDAP. I am not sure if this is the proper place to post this. I could
use all the help I can get.


I'm sure you could but this is not the Sun Directory support channel. Contact 
your Oracle support rep.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: LMDB: MDB_MAP_FULL doesn't allow deletions

2013-06-11 Thread Howard Chu

Jeremy Bernstein wrote:

Although I didn't figure out a good way to do what I want, this is what I am
now doing:

if (MDB_MAP_FULL while putting) {
   abort txn, close the database
   reopen the database @ larger mapsize
   perform some pruning of dead records
   commit txn, close the database
   reopen the database @ old mapsize
   try to put again
}

At this point, the database is probably larger than the old mapsize. To handle
that, I make a copy  of the DB, kill the original, open a new database and
copy the records from the old DB to the  new one.

All of this is a lot more complicated and code-verbose than I want, but it
works and seems to be reliable.

Nevertheless, if there's an easier way, I'm all ears. Thanks for your
thoughts.


Use mdb_stat() before performing the _put(). If the total number of pages in 
use is large (whatever threshold you choose, e.g. 90%) then start pruning.


Look at the mdb_stat command's output to get an idea of what you're looking for.
--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: LMDB: MDB_MAP_FULL doesn't allow deletions

2013-06-11 Thread Howard Chu

Jeremy Bernstein wrote:

Thanks Howard,

OK, so I tried this again with a slightly more modest toy database (after 
reading the presentation, thanks), 1MB (256 pages). Blasting a bunch of records 
into it at once (with a transaction grain of 100 records) I am getting 
MDB_MAP_FULL with 1 branch, 115 leaf and 0 overflow nodes. So I suppose that I 
can use 1/3 of the database size (85 leaf pages in this example) as a rough 
guideline as to when I should prune. My real databases are between 4 and 128MB, 
32MB being typical, and my real transactions are generally a bit smaller.

Does that seem reasonable to you, or do I need to be working on a different 
scale entirely?


I doubt that the cutover point will scale as linearly as that; you should just 
experiment further with your real data.


Jeremy

Am 11.06.2013 um 20:11 schrieb Howard Chu h...@symas.com:


Your entire mapsize was only 64K, 16 pages? That's not going to work well. 
Please read the LMDB presentations to understand why not. Remember that in 
addition to the main data pages, there is also a 2nd DB maintaining a list of 
old pages, and since LMDB uses copy-on-write every single write you make is 
going to dirty multiple pages, and dirty pages cannot be reused until 2 
transactions after they were freed. So you need enough free space in the map to 
store ~3 copies of your largest transaction, in addition to the static data.


Thanks
Jeremy

Am 11.06.2013 um 19:32 schrieb Howard Chu h...@symas.com:


Jeremy Bernstein wrote:

Although I didn't figure out a good way to do what I want, this is what I am
now doing:

if (MDB_MAP_FULL while putting) {
   abort txn, close the database
   reopen the database @ larger mapsize
   perform some pruning of dead records
   commit txn, close the database
   reopen the database @ old mapsize
   try to put again
}

At this point, the database is probably larger than the old mapsize. To handle
that, I make a copy  of the DB, kill the original, open a new database and
copy the records from the old DB to the  new one.

All of this is a lot more complicated and code-verbose than I want, but it
works and seems to be reliable.

Nevertheless, if there's an easier way, I'm all ears. Thanks for your
thoughts.


Use mdb_stat() before performing the _put(). If the total number of pages in 
use is large (whatever threshold you choose, e.g. 90%) then start pruning.

Look at the mdb_stat command's output to get an idea of what you're looking for.



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: LMDB: MDB_MAP_FULL doesn't allow deletions

2013-06-12 Thread Howard Chu

Jeremy Bernstein wrote:

Hi,

I may have spoken too soon. I'm now using a 32MB toy database, typical for

my users. I'm pruning at 90% which seems to be about the right number, but…
the number of pages in my mdb_stat() isn't reducing. So I'll do a round of
pruning, maybe 5% of the records. But the next time I come around to
mdb_put(), I have to do it again. Until some critical pruning mass is reached
(for instance, I started pruning at 688622 and the pages didn't reduce below
90% until I had pruned down to 389958). My transaction grain is a very modest
50. I've tried closing and reopening the environment just to ensure that
everything is fresh, but it doesn't change the stat I'm getting.


Does that make any sense/ring any bells? Thanks again for your help so far.


That's normal for a B+tree with multiple nodes on a page. And if you're only 
looking at mdb_stat() of the main DB, you're only seeing part of the picture. 
Again, look at the output of the mdb_stat command line tool. Use "mdb_stat 
-ef" so you also see the freelist info.


The trick, in your case, is to make sure that the number of free pages is 
always sufficient, and also the number of freelist entries. A freelist 
entry is created by a single commit, and you want to always have at least 3 of 
them (because the 2 most recent ones are not allowed to be used). If you do 
all of your deletes in a single commit you will not free up usable space as 
quickly as doing them in several commits.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: translucent overlay and orphaned local entry when remote entry moves

2013-06-14 Thread Howard Chu

Steve Eckmann wrote:

Is there a standard way to recover a local entry when its proxied entry is
moved, that is, when a remote DN changes? It looks like the local entry and
its attribute values become inaccessible via ldapsearch. I found the orphaned
entry in the output of slapcat, but the man page warning about stopping slapd
before running slapcat makes that seem like an impractical way to find and
recover the orphans.


None of the current backends require slapd to be stopped before running slapcat.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: no-op search control, large result sets and abandon

2013-06-22 Thread Howard Chu

Michael Ströder wrote:

HI!

In web2ldap I've implemented a feature some time ago which uses the no-op
search control to determine the overall number of results when only displaying
a partial result set.
Obviously this does not scale well when interactively browsing into a
container with many entries.

So I'm considering whether to make this configurable (default off) or to make
something automagically. My current working code sends a search no-op request
with small timeout (5 sec), catches the error if the no-op search would take
longer (Python exception ldap.TIMEOUT) and sends abandon request in this case.

The idea is trying to determine the overall number of results only if it does
not put too much burden on the server. But I wonder whether sending the
abandon request really helps to reduce the server load:
Does slapd immediately stop processing the no-op search request?
Or does it generate the result set but not return it to the client?


If the connection has not been throttled due to too many outstanding requests, 
the abandon will be processed almost immediately. It will generate the entire 
filtered candidate set first, but usually those index lookups are cheap.
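The client-side pattern described here — issue the expensive count, wait a bounded time, abandon if it overruns — can be sketched generically with Python's standard library. Note this is a hypothetical stand-in for the python-ldap search/abandon calls, not their real API:

```python
import concurrent.futures
import time

def count_with_timeout(count_fn, timeout=5.0):
    """Run a potentially slow counting operation with a deadline.

    Returns the count, or None if the operation exceeded `timeout`
    and was abandoned (a real LDAP client would send an Abandon
    request for the search's message ID at that point).
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(count_fn)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            future.cancel()  # stand-in for sending the LDAP Abandon
            return None
```

As the reply notes, the abandon spares the server from streaming results, but the candidate-set index work may already be done by the time the abandon is processed.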





Re: OpenLDAP Proxy using PKCS#11/SmartCard client authentication

2013-06-24 Thread Howard Chu

Michael Ströder wrote:

Stefan Scheidewig wrote:

After I managed to connect to the LDAP server with gnutls-cli (with a PKCS#11
URI containing a pinfile attribute) I tried to set those PKCS#11 URIs to the
ldaprc settings TLS_KEY and TLS_CERT. But these settings are handled as PEM
encoded file (see function tlsg_ctx_init in tls_g.c) and a connection
initialization fails trying to read the PKCS#11 URI from the local file system.

So currently there seems to be no way to configure the OpenLDAP client to look
up the pkcs#11 store for the client key as well as the client certificate to
establish a client authenticated TLS connection.


If PKCS#11 support for smartcard/HSM is needed I'd try to use libnss
(--with-tls=moznss). Never tried that myself though.


Or submit appropriate GnuTLS or OpenSSL patches to add the feature.





Re: understanding ldap

2013-06-24 Thread Howard Chu

Michael Ströder wrote:

Rodney Simioni wrote:

/etc/openldap/ldap.conf  # this config file is openldap server's ldap
config file?


No, it's an LDAP client config.  Most likely for OpenLDAP ldap* command-line
tools, but sometimes also for other components.


/etc/ldap.conf # This config file is for ldap's clients?


Sometimes it's used for LDAP clients like pam_ldap, sudo-ldap, etc. It also
might affect the behaviour of clients implemented in a scripting language that
uses OpenLDAP client libs through C wrapper modules (like php-ldap,
python-ldap, etc.)


Not quite. There is no specific config file for OpenLDAP command line tools. 
The /etc/openldap/ldap.conf is a config for libldap, and as such it affects 
everything that uses libldap - command line tools, scripting modules, whatever.


/etc/ldap.conf was used by pam_ldap/nss_ldap, certainly. Possibly by some 
other things too, and yes it's a mess. pam_ldap/nss_ldap are now 
obsolete/unmaintained. You should be using nssov or nss-pam-ldapd now, and
neither of them uses /etc/ldap.conf.



The way various software and distributions deal with ldap.conf in several
directories is a mess and entirely depends on how the software author / Linux
distributor built the client software.





Re: unsupported extended operation

2013-06-25 Thread Howard Chu

Rodney Simioni wrote:

Hi,

I just compiled openldap with: ./configure --prefix=/usr/local/openldap
--enable-ldap --with-tls=openssl --with-cyrus-sasl --enable-crypt

I did a ‘make depend’, ‘make’, and a ‘make install’; I didn’t see any errors.

I fired up ldap with: ‘./slapd -d127 -h ldap:///’

Then I went to test my install with: 'ldapsearch -x -ZZ -d1 -H ldap://blah.com/'

And I’m still getting:

ldap_msgfree

ldap_err2string

ldap_start_tls: Protocol error (2)

 additional info: unsupported extended operation

ldap_free_connection 1 1

ldap_send_unbind

ber_flush2: 7 bytes to sd 3

ldap_free_connection: actually freed

Does anybody have a clue?


You haven't configured any of the TLS settings in the server yet.




Re: High load times with mdb

2013-06-26 Thread Howard Chu

Bill MacAllister wrote:

--On Tuesday, June 25, 2013 03:10:17 PM -0700 Howard Chu h...@symas.com wrote:

Probably bad default FS settings, and changed from your previous OS revision.

Also, you should watch vmstat while it runs to get a better idea of
how much time the system is spending in I/O wait.


I have just re-mkfs'ed the new, slow system to make it look like the
old, fast system.  Just to make sure nothing else changed I have
started a load on the older system.  Things look fine.


I meant mount options, mkfs should have very little impact.

ext3 journaling is pretty awful. For ext4 you probably want data=writeback 
mode, and you probably should compare the commit= value, which defaults to 5 
seconds and also barrier; I believe the barrier default changed between kernel 
revisions.
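For reference, an /etc/fstab line carrying the options under discussion might look like this (device, mountpoint, and values are illustrative only; data=writeback weakens ordering guarantees and barrier behaviour varies by kernel, so weigh these against your durability needs):

```
/dev/sdb1  /var/lib/ldap  ext4  noatime,data=writeback,commit=30  0 2
```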



Now, comparing vmstat output, the new system is clearly badness incarnate.

Fast

procs ---memory-- ---swap-- -io -system-- cpu
  r  b swpd   freebuff  cache   si   sobibo   in   cs us sy id  wa
  0  00 6358656 303468 9301620   00 0 0   45   45  0  0 100  0
  0  00 6358780 303468 9301620   00 0 0   47   41  0  0 100  0
  0  00 6358780 303468 9301620   00 0 0   47   41  0  0 100  0
  0  00 6358780 303468 9301620   00 0 0   93   43  0  1 99   0
  0  00 6358532 303468 9301620   00 0 0  141   71  0  1 99   0
  1  00 6358488 303468 9301620   00 014  116   48  0  1 99   0

Slow

procs  ---memory--  ---swap-- -io -system-- cpu
  r  b  swpdfreebuff  cachesi   sobibo   in   cs us sy id wa
  1  40  13318088  36128 275960000 0  2134  379   83  0  0 88 12
  0  40  13318308  36128 275960000 0  1044  277   70  0  0 88 12
  0  40  13318508  36132 275960000 0   765  267   69  0  0 88 12
  0  20  13318240  36152 275960400 0   818  593  104  0  0 88 12
  0  20  13318332  36168 275960400 0  2611 1489  138  0  0 89 11

Lots of waiting, lots of blocking.  What's the deal with all that
free memory on the slow system?

I will iterate on mkfs for a bit, but I thought I would send this off
in case something jumps out.

Bill







Re: LB health check during syncrepl refresh

2013-06-28 Thread Howard Chu

Michael Ströder wrote:

Hi!

Inspired by ITS#7616 and looking at our monitoring:

If I bring up a syncrepl consumer with empty DB it seems contextCSN attribute
is missing in DB base entry during refresh phase.
This is nice because we could use that in the load-balancer health check to
prevent clients from connecting to this replica until all data is loaded into the
consumer and contextCSN is present.

So is it guaranteed that contextCSN is missing during lengthy refresh phase?


Yes. The consumer always omits it from a refreshed entry and always writes it 
only at the end of a successful refresh.
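Given that guarantee, a load-balancer health check can simply probe the suffix entry for contextCSN. A minimal sketch of the decision logic, assuming the probe output is LDIF text from something like `ldapsearch -x -s base -b "$SUFFIX" contextCSN`:

```python
def replica_ready(ldif_text):
    """True once the consumer's suffix entry carries a contextCSN,
    i.e. the syncrepl refresh phase has completed successfully."""
    return any(line.lower().startswith("contextcsn:")
               for line in ldif_text.splitlines())
```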





Re: No such object error with translucent overlay and base scope search

2013-07-10 Thread Howard Chu

Steve Eckmann wrote:

We found that we get a No such object error from the translucent overlay
when we do a search like this:

  ldapsearch -x -H ldaps://localhost -LLL \

  -b cn=John Doe,ou=Users,dc=example,dc=com -s base \

  -D cn=admin,dc=example,dc=com -w admin \

  '()'

if there is no entry for cn=John Doe,ou=Users,dc=example,dc=com in the local
database, whether or not the remote entry exists. It seems like a mistake for
the translucent overlay to report an error if the remote entry exists, since
it only means that we haven’t added any local attributes yet. Is there a way
to suppress the error result when the proxied server returns an entry, so we
don’t have to hack around this weirdness in our client?


Re-read the slapo-translucent manpage, check your local/remote configuration. 
The overlay won't query the remote server if you've only specified 
translucent_local attributes.





Re: High load times with mdb

2013-07-11 Thread Howard Chu

Bill MacAllister wrote:



--On June 25, 2013 10:29:26 AM -0700 Bill MacAllister w...@stanford.edu
wrote:


With the release of Debian 7 (wheezy) I was rebuilding a couple test
systems and was surprised to find that the load times I am seeing for
populating the mdb database with slapd have gone up dramatically.  The
load for a master server that was taking about 10 minutes just took
35 minutes.  The slave is worse.  A normal load time is 20 minutes
and it is at 31 minutes now with an eta of about 2.5 hours.


I tested this file system and that file system with this set of options and
that set of options and really never moved the problem significantly.  I
finally realized I should believe my experiments and think about what else
could be causing the problem I was seeing.

What changed about the same time that I started building with
wheezy/stable was that I removed a partition option that had been added to
improve performance on our VM farm, i.e. align-at:4k.  Our LDAP servers are
physical servers after all.  Reinstating the parameter resulted in
dramatically faster, and reproducible, load times on any file system I
tried.  We are now using ext4 for our LDAP server farm, well, we are when I
get done rebuilding them.

This problem will be specific to the disk in use.  Looking at the
manufacturer's documentation, it never really states what the block size is,
but implies it is 512 bytes.  I think that is a lie; I am told most modern
disks lie about their geometry.  In any case, for the disks we are using 4k
alignment works.

Thanks everyone for the suggestions.


Thanks for the followup. Modern hard drives have moved to 4096-byte physical
sectors; they advertise 512-byte sectors for compatibility with older OSs.
This will probably become an issue more often, although I expect modern Linux 
tools to be able to operate with actual 4096 byte sectors and make the issue 
more obvious. There should be a drive option that reports its true sector 
size, I just don't remember the details at the moment.
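The alignment rule itself is simple arithmetic: a partition is 4k-aligned when its starting byte offset is a multiple of the physical sector size. A small sketch, assuming the common 512-byte-logical/4096-byte-physical ("512e") case:

```python
def partition_aligned(start_sector, logical=512, physical=4096):
    """True if a partition starting at `start_sector` (counted in
    logical sectors, as shown by `fdisk -l`) begins on a
    physical-sector boundary."""
    return (start_sector * logical) % physical == 0
```

Kernels expose the reported sizes under /sys/block/<dev>/queue/, though as noted above drives may not report the physical size truthfully.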





Re: caseIgnoreMatch for Country String

2013-07-11 Thread Howard Chu

Maucci, Cyrille wrote:

Hello gurus,

When I have this piece of schema included in my slapd.conf file…

attributetype ( 1.3.6.1.4.1.11.11.1.1.1.1.402

NAME ( 'TestCountryString' 'TestCountryStringSyn' )

DESC 'Test Country String'

EQUALITY caseIgnoreMatch

ORDERING caseIgnoreOrderingMatch

SUBSTR caseIgnoreSubstringsMatch

SYNTAX 1.3.6.1.4.1.1466.115.121.1.11

SINGLE-VALUE  )

… my 2.4.32 slapd -Tt complains with this error:

line 8 attributetype: AttributeType inappropriate matching rule:
caseIgnoreMatch.

My understanding of http://tools.ietf.org/html/rfc4517 is that caseIgnoreMatch
is valid for country string:

The *caseIgnoreMatch *rule compares an assertion value of the Directory

String syntax to an attribute value of a syntax (e.g., the Directory

String, Printable String,*Country String*, or Telephone Number syntax)

whose corresponding ASN.1 type is DirectoryString or one of its

alternative string types.



Could you confirm whether this is a bug in openldap or simply a
misunderstanding from me ?


Looks like a bug, schema_init.c doesn't list Country String as a compatible 
syntax here. You should submit an ITS for this.


Thanks in advance

++Cyrille







Re: Q: Managing entryCSN with slapadd

2013-07-12 Thread Howard Chu

Ulrich Windl wrote:

Hi!

I have a question: Is there a way to fix entryCSNs for slapadd? Reasoning: With
dynamic configuration, sometimes (if you messed up) you'll have to edit your
slapcat dump to reinitialize the config database. Unfortunately with
replication the entryCSNs seem to cause trouble:
If you don't change the CSN, the changes are not replicated, it seems; if you
change the CSN, OpenLDAP doesn't seem to like it.

I see repeating messages like these:
slapd[3017]: Entry olcDatabase={0}config,cn=config CSN 
20130709140112.108165Z#00#000#00 older or equal to ctx 
20130709140112.108165Z#00#000#00
slapd[3017]: do_syncrep2: rid=001 CSN too old, ignoring 
20130712085712.643156Z#00#000#00 (olcDatabase={2}monitor,cn=config)

Is there a way to fix this?


When editing the slapcat dump, delete the entryCSN. slapadd will generate a 
new one.
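For a large dump, stripping the entryCSN attributes can be scripted. A sketch of the filtering step (it handles LDIF's leading-space continuation lines; this is an illustration, not a full LDIF parser):

```python
def strip_entrycsn(ldif_lines):
    """Remove entryCSN attribute lines (and any folded continuation
    lines) from a slapcat dump; slapadd then generates fresh CSNs."""
    out, skipping = [], False
    for line in ldif_lines:
        if line.lower().startswith("entrycsn:"):
            skipping = True          # drop the attribute line itself
        elif skipping and line.startswith(" "):
            pass                     # drop folded continuation lines
        else:
            skipping = False
            out.append(line)
    return out
```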





Re: need help interpreting Error: ldap_back_is_proxy_authz returned 0, misconfigured URI?

2013-07-15 Thread Howard Chu

Steve Eckmann wrote:

The answer would be obvious if we had a misconfigured URI, but I don't think we 
do. In fact, we are getting this error from the ldap/translucent proxy on a 
first attempt to retrieve a DN from a remote Active Directory, then a second 
identical ldapsearch always succeeds. That makes us think there might be a 
timing issue getting from our openldap server, through a forwarding proxy out 
of a DMZ, and finally to the target AD server. But since all the openldap log 
messages appear with the same timestamp, there would have to be a sub-second 
timeout somewhere in the path. Does openldap have any default sub-second 
timeouts? I haven't configured any of the slapd or slapd-ldap timeout options.


You can find the relevant code in back-ldap/bind.c. It means it was trying to 
do proxy authorization on the connection but doesn't have the required 
credentials or some other config item so it couldn't authorize to the remote 
server. So go check your back-ldap configuration again.
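For comparison, a typical slapd.conf back-ldap stanza that supplies proxy-authorization credentials looks roughly like this (the URI and DNs echo the log below, the password is a placeholder; see slapd-ldap(5) for idassert-bind and idassert-authzFrom):

```
database        ldap
suffix          "dc=example,dc=com"
uri             "ldap://172.30.11.20"
idassert-bind   bindmethod=simple
                binddn="cn=remoteuser,ou=users,ou=system accounts,dc=example,dc=com"
                credentials="secret"
                mode=self
idassert-authzFrom "dn.subtree:dc=example,dc=com"
```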


Here is a typical log from a failed search:

   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 fd=10 ACCEPT from 
IP=172.20.11.85:54864 (IP=0.0.0.0:636)
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 fd=10 TLS established 
tls_ssf=256 ssf=256
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 op=0 BIND 
dn=cn=localuser,ou=users,ou=Native,dc=example,dc=com method=128
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 op=0 BIND 
dn=cn=localuser,ou=users,ou=Native,dc=example,dc=com mech=SIMPLE ssf=0
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 op=0 RESULT tag=97 err=0 text=
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 op=1 SRCH base=dc=example,dc=com scope=2 
deref=0 filter=(sAMAccountName=steve.eckmann)
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 op=1 SRCH attr=cn
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 op=1 ldap_back_retry: retrying 
URI=ldap://172.30.11.20; DN=cn=remoteuser,ou=users,ou=system 
accounts,dc=example,dc=com
   Jul 15 09:46:09 eck1 slapd[9198]: Error: ldap_back_is_proxy_authz returned 
0, misconfigured URI?
   Jul 15 09:46:09 eck1 slapd[9198]: = mdb_equality_candidates: 
(sAMAccountName) not indexed
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 op=1 SEARCH RESULT tag=101 err=0 
nentries=0 text=
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 op=2 UNBIND
   Jul 15 09:46:09 eck1 slapd[9198]: conn=1001 fd=10 closed

Thanks.

Steve









Re: delete members in big groups with back_mdb

2013-07-16 Thread Howard Chu

Marco Schirrmeister wrote:

Hi,

I have a problem with mdb and modify operations on very large groups. 
Specifically deleting members from those groups.
Removing 10 members from a group with 25000 members takes 23 seconds. Which 
also means, all other clients that want to do something hang.
Deleting a user from multiple big groups takes minutes before it finishes.
Adding members to a large group is quick though.

When this delete is running, the cpu goes also up to 100%.

It looks like it has to do with the index that I have on uniqueMember.
If I remove the index on uniqueMember, the delete of members in big groups is 
fast.

System details are
CentOS 6 64bit
OpenLDAP 2.4.35
slapd.conf below

Is this something normal/expected, or is it maybe a bug?


Read slapd.conf(5) manpage, sortvals keyword.
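For the group case described above, the relevant directive is a single line in the global section of slapd.conf; sortvals keeps the listed multi-valued attributes sorted, so a member delete becomes a binary search instead of a linear scan over 25,000 values (the attribute shown matches this poster's schema):

```
sortvals  uniqueMember
```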




Re: Antw: Re: Q: TLS support

2013-07-17 Thread Howard Chu

Ulrich Windl wrote:

Quanah Gibson-Mount qua...@zimbra.com wrote on 16.07.2013 at 18:08 in
message 7D4A20353DA988409253CCDE@[192.168.1.22]:

--On Tuesday, July 16, 2013 8:17 AM +0200 Ulrich Windl
ulrich.wi...@rz.uni-regensburg.de wrote:


Hi!

I have some questions on TLS support in OpenLDAP:

1) How can I find out which cipher suite had been configured (when using
the distribution-supplied version)? From ldd I guess my slapd is using
libopenssl0_9_8.


If specific cipher suites have been configured, it would be in the slapd
configuration.  Otherwise, they'll be negotiated.


The question was: (How) can (if at all) I find out what cipher suite was 
compiled (linked with) into slapd?


The answer is there are no cipher suites compiled into slapd.


2) Is the restriction (This directive is not supported when using
GnuTLS.) on TLSCACertificatePath and GnuTLS still effective? I'd like to
use it, but I'm unsure what the cipher suite is.


Why would you want to use an inferior and insecure TLS implementation?


I don't want to use GnuTLS; I wonder whether I can safely use the more flexible 
TLSCACertificatePath instead of a CA bundle file.


If you're using OpenSSL then a comment specifically about GnuTLS does not 
apply to you.





Re: Antw: Q: Multi-Master setup

2013-07-19 Thread Howard Chu

Ulrich Windl wrote:

I wrote on 16.07.2013 at 09:02:

[...]

2) What is the correct syntax for the second argument (URI): Do you need a
final slash, do you need the port? Currently I'm using a syntax like
olcServerID: 1 ldap://ldap.domain.org;


There is no specific requirement. The only actual requirement is that a URL in 
the list of serverIDs must match one of the URLs in slapd's -h option.
Whether you put trailing slashes or not is your choice; just be consistent and
use the exact same format in both places.
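A consistent pairing might look like this (hostnames are placeholders; the point is only that each olcServerID URL byte-for-byte matches a URL given to slapd -h on that host):

```
# cn=config on every provider:
olcServerID: 1 ldap://ldap1.example.org/
olcServerID: 2 ldap://ldap2.example.org/

# startup on the first host must then include the matching URL:
#   slapd -h "ldap://ldap1.example.org/" ...
```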





Re: lmdb - atomic actions

2013-07-24 Thread Howard Chu

Tomer Doron wrote:

wondering what the best strategy is to achieve atomic updates with LMDB.

what i am trying to achieve is a read then update atomic action given a
highly concurrent use case,  for example, if a key/value pair represents a
counter, how does one increment or decrement the counter atomically.

i am pretty sure an mdb_get - mdb_put sequence is not atomic; wondering if an
mdb_cursor_get - mdb_cursor_put sequence is? perhaps a certain flag is
required on the get action to achieve a lock? in my bdb implementation i
used lockers to achieve this.


Transactions are atomic. Whatever operations you perform in a single 
transaction will occur atomically. BDB-style locking is unnecessary.
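The guarantee described is ordinary transactional read-modify-write: put the get and the put inside one write transaction and the pair commits or aborts as a unit. A stand-in sketch using Python's stdlib sqlite3 rather than the LMDB C API (with LMDB you would bracket mdb_get/mdb_put in a single mdb_txn_begin … mdb_txn_commit):

```python
import sqlite3

def increment(conn, key, delta=1):
    """Atomically read-modify-write a counter: the SELECT and the
    INSERT commit together, so concurrent writers never lose an
    update (writers serialize on the write transaction)."""
    with conn:  # one transaction: commits on success, rolls back on error
        row = conn.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        value = (row[0] if row else 0) + delta
        conn.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)",
                     (key, value))
    return value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v INTEGER)")
```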





Re: lmdb - atomic actions

2013-07-24 Thread Howard Chu

Tomer Doron wrote:

howard - thank you for the prompt reply and for putting this wonderful library
together.

to clarify my understanding, can you elaborate a little on the locking lmdb tx
provide cross threads and processes? in particular

1. do they provide a writer lock by default and one needs to flag
MDB_TXN_RDONLY when a writer lock is not required?


That is answered in the documentation.


2. given that mdb_txn_begin is a precursor to most other actions, are these
locks data store  wide? is there some other mechanism to lock a key/value
pair or cursor so not to delay access to other keys? or is the basic
assumption such is unnecessary?


That is answered in the presentations.


thank again!
tomer










Re: Antw: delta sync error message in log

2013-07-25 Thread Howard Chu

Ulrich Windl wrote:

I thought I read that delta sync with multi-master is not working yet... Is
it working in the meantime?


Read the Changelog for 2.4.27.




Re: ldap_bind() extended response for password policy

2013-07-28 Thread Howard Chu

Andrius Kulbis wrote:

Hello,

I'm trying to pull the password policy response message from ldap_bind()
method.

While checking the packet content from OpenLDAP after the ldap_bind()
request with Wireshark, there is a control attached to the ldap_bind()
response, where the message code and message text about password
expiration are, but I can't manage to parse that message from the response.

AFAIK, the OpenLDAP C API ldap_get_option() method doesn't have an
LDAP_OPT_SERVER_CONTROLS case implementation, and I can't get the
PASSWORDPOLICYRESPONSE, although I have set the PASSWORDPOLICYREQUEST
before the bind.


Use ldap_parse_result().




Re: Q: olcMirrorMode: no equality matching rule

2013-07-29 Thread Howard Chu

Ulrich Windl wrote:

Hi!

When trying to add olcMirrorMode: TRUE to a database where the value already 
exists, I get:
---
# ldapmodify -ZZ -x -W -D cn=config -f mirrormode.ldif -v -c
ldap_initialize( DEFAULT )
Enter LDAP Password:
add olcMirrorMode:
 TRUE
modifying entry olcDatabase={0}config,cn=config
ldap_modify: Inappropriate matching (18)
 additional info: modify/add: olcMirrorMode: no equality matching rule

add olcMirrorMode:
 TRUE
modifying entry olcDatabase={1}hdb,cn=config
modify complete
---

It looks to me as if a compare operator for TRUE (Boolean?) is not defined. 
Am I right?


It means the attribute has no equality matching rule, exactly what the error 
message says.





Re: OpenLDAP (using BDB) stalls adding 65,536th entry

2013-07-30 Thread Howard Chu

Mark Cooper wrote:

I've been doing some testing using OpenLDAP with BDB on a couple of different
platforms.  I noticed a similar situation.  When I sit in a loop doing adds,
at the 65,536th added entry the process stalls for a short period of time.
  After a minute or two, the add succeeds.  My first thought is that this is a
BDB issue, so I posted this question to Oracle's BDB forum.  But I have yet to
receive any answer.


This is all known/expected behavior. One (or more) of your index slots hit its 
maxsize of 65535 elements and was collapsed into a range. This typically 
happens with the objectClass index first, if you're adding a bunch of objects 
all of the same classes.


Taking a minute or two is abnormal, but I suppose is possible if multiple 
indices hit the condition at the same time.
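The boundary follows from the IDL sizing constants in back-bdb/idl.h (quoted earlier in this digest): a per-key index slot holds at most 2^BDB_IDL_LOGN IDs, so with the stock value the 65,536th ID for one key forces the collapse to a range, matching the stall observed at exactly that add. A sketch of the arithmetic (the exact off-by-one depends on IDL header bookkeeping):

```python
BDB_IDL_LOGN = 16                    # stock value in back-bdb/idl.h
BDB_IDL_DB_SIZE = 1 << BDB_IDL_LOGN  # 65536

def idl_collapses(n_ids, logn=BDB_IDL_LOGN):
    """True once a single index slot holding n_ids IDs has been
    collapsed into a range (the behaviour observed above)."""
    return n_ids >= (1 << logn)
```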



This situation seems to happen when I have around 43 10MB log files.  During
the stall, I notice many log files are being written (another 25 or so), which
is a much quicker rate than was being written prior to the stall.

The stall only happens once. I added another 350,000 entries and no more
stalls.  I ran a few other tests.  Added 65,535 entries.  All is fine.  As
soon as the next entry is added, even if I recycle the server, I hit the
condition.  I even tried deleting 1,000 entries.  I would then need to add
1,001 to get to 65,536 entries in the database and then hit the delay.





Re: add rich data into attribute type definition possible?

2013-08-06 Thread Howard Chu

Zhang Weiwu wrote:

2013/8/6 Ulrich Windl ulrich.wi...@rz.uni-regensburg.de:

For a solution, I'm afraid you'll have to provide your own local
translations. Actually I did something like that in Perl for a CGI
application


Local translation in the application layer is a no-go. I would have to put the
translation into two desktop clients in different programming languages and one
Android application.

A shadow entry for every customized attribute is the only way to go.

Talking about the general design, I don't expect the standard to shift for
the simple requirement of translating descriptions. But making attribute
type definitions into object-like 'entries' brings all the advantages
object-oriented data can offer.

- Add a slapo-constraint rule right on the attribute definition.
- Add an auditing rule on specific attributes, like password.
- Define that an attribute's value should be sorted, right on the attribute's
definition.
- Define that an attribute's value should be unique across the database.

I couldn't be the first to think about this; after all, OOP is so ubiquitous.
Even if the idea is valid, a change is always difficult, that's why in IT
evolution is usually done by new systems invented to replace the old ones,
not by old ones morphing into new ones.


The elements and syntax of an attribute definition are specified in X.500 and 
ASN.1. We don't have the freedom to arbitrarily add extensions to these 
definitions.





Re: Editing Schema

2013-08-06 Thread Howard Chu

Bram Cymet wrote:

Hi,

I am wondering the best way to edit a schema after it is loaded. I would
like to change the syntax OID for an object type.

I have tried using Apache Directory Studio to do this and it just causes
OpenLDAP to segfault.



Is this possible? How would I do it? Do I need to remove all entries
currently referencing the schema?


Yes, you need to remove all references.




Re: Replicating Schema, olcAccess and olcLimits

2013-08-07 Thread Howard Chu

Andrew Devenish-Meares wrote:

On 6/08/2013 3:56 PM, Andrew Devenish-Meares wrote:

Hi List,

I'm attempting to set up replication of schema, olcAccess and olcLimits.
It appears replicating the schema works, but the olcAccess and
olcLimits do not appear to replicate under olcDatabase={2}bdb,cn=config.
(Additionally the DIT under dc=une,dc=edu,dc=au is also replicated
without issue).


Having turned logging to 1024 to trace shell calls I get the following:
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = test_filter 5
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: send_ldap_result:
conn=-1 op=0 p=0
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: send_ldap_result: err=0
matched= text=
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: syncrepl_entry: rid=006
be_search (0)
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: syncrepl_entry: rid=006
olcDatabase={2}bdb,cn=config
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: slap_queue_csn: queing
0x7f23d1d2b730 20130808010713.847335Z#00#000#00
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = access_allowed: add
access to olcDatabase={2}bdb,cn=config entry requested
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = root access granted
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = access_allowed: add
access granted by manage(=mwrscxd)
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = acl_access_allowed:
granted to database root
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: conn=-1 op=0:
config_add_internal: DN=olcDatabase={2}bdb,cn=config already exists
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: send_ldap_result:
conn=-1 op=0 p=0
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: send_ldap_result: err=68
matched= text=
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]:
slap_graduate_commit_csn: removing 0x7f23d1d2bae0
20130808010713.847335Z#00#000#00
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: syncrepl_entry: rid=006
be_add olcDatabase={2}bdb,cn=config (68)
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = access_allowed:
search access to olcDatabase={2}bdb,cn=config entry requested
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = root access granted
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = access_allowed:
search access granted by manage(=mwrscxd)
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = test_filter
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: PRESENT
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = access_allowed:
search access to olcDatabase={2}bdb,cn=config objectClass requested
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = root access granted
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = access_allowed:
search access granted by manage(=mwrscxd)
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = test_filter 6
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: send_ldap_result:
conn=-1 op=0 p=0
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: send_ldap_result: err=0
matched= text=
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: = acl_access_allowed:
granted to database root
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: send_ldap_result:
conn=-1 op=0 p=0
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: send_ldap_result: err=67
matched= text=Use modrdn to change the entry name
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: null_callback : error
code 0x43
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: syncrepl_entry: rid=006
be_modify olcDatabase={2}bdb,cn=config (67)
Aug  8 11:07:13 ldap-slave-dev-00 slapd[19914]: syncrepl_entry: rid=006
be_modify failed (67)

My reading of this suggests that the existence of the
olcDatabase={2}bdb,cn=config entry is causing an issue.  I'm unsure how to
proceed at this point.

Any help would be appreciated.


When syncrepl's Add attempt fails, it falls back to doing a Modify, trying to 
set whatever attribute values differ between the local entry and the syncrepl 
update. In this particular case, it seems that syncrepl thinks the two 
entries' RDNs are not exactly the same, so it tries to modify them as well. 
Your log shows that this attempt also fails (err=67). You'll have to 
double-check that the local and remote entries have exactly identical DNs.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: developing module that instantiates check_password() function

2013-08-10 Thread Howard Chu

Scott Koranda wrote:

On Sat, Aug 10, 2013 at 10:30 AM, Howard Chu h...@symas.com wrote:

Scott Koranda wrote:


Hello,

I wish to develop a user-defined loadable module that instantiates the
check_password() function as described in the slapo-ppolicy man page.

The man page specifies the function prototype as

int check_password (char *pPasswd, char **ppErrStr, Entry *pEntry);

In which header file is the 'Entry' type defined?



In slap.h. You cannot develop any code for slapd without this header file.



Thank you.

Is this a correct statement?

When one develops a check_password() function as described in the slapo-ppolicy
man page one does not develop against header files installed as part
of an OpenLDAP
./configure, make depend, make install cycle but instead develops
against the OpenLDAP
code base as if developing for slapd.


Yes.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: RE24 testing call (OpenLDAP 2.4.36)

2013-08-11 Thread Howard Chu

Chris Card wrote:

Built on CentOS 6.3, for bdb, hdb and mdb, all tests passed.


Please also say whether you built 32-bit or 64-bit. If you're testing on some other 
CPU architecture (e.g. ARM, ARM64) that would obviously be relevant too. Thanks.


Chris

  Date: Fri, 9 Aug 2013 16:38:07 -0700
  From: qua...@zimbra.com
  To: peter.gi...@daasi.de
  Subject: Re: RE24 testing call (OpenLDAP 2.4.36)
  CC: openldap-technical@openldap.org
 
 
 
  --On August 9, 2013 9:34:24 PM +0200 Peter Gietz peter.gi...@daasi.de
  wrote:
 
   Built on CentOS release 6.4 (Final) against openssl with hdb and mdb
   went fine
 
  Thanks!
 
  --Quanah
 
  --
  Quanah Gibson-Mount
  Principal Software Engineer
  Zimbra, Inc
  
  Zimbra :: the leader in open source messaging and collaboration
 



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: attribute to store system mailbox value

2013-08-19 Thread Howard Chu

Nick Milas wrote:

On 19/8/2013 6:20 μμ, Zeus Panchenko wrote:


could somebody recommend an attribute, among those in the schema files
shipped with OpenLDAP, for storing the path to a system mailbox?

The system mailbox is the path to an mbox-format file or maildir directory
where the MDA (depending on its configuration) stores received mail messages.

So, to avoid adding a new LDAP object class and attribute definition, I'd
like to know whether an attribute with a similar function already exists.



I suggest using a specialized schema for such use. Maybe you would want
to read through this thread:

http://www.openldap.org/lists/openldap-technical/201202/msg00147.html

I faced a lot of problems until I decided to follow the approach
described in that thread, i.e. a custom schema (with our own OIDs) based
on the laser and qmail-ldap schemas.

IMHO there is a lack of a good LDAP mail schema covering a wide range of
user needs out-of-the-box.


Storing site-local information inside a distributed database strikes me as a 
bit counter-intuitive. But if your MDAs have distributed access to these 
mailboxes, then the most natural solution would be to use an attribute 
containing a URL.


Our usual approach, since LDAP lacks an actual URL attribute syntax, is to 
define attributes that inherit from the labeledURI attributetype for these 
purposes.
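A minimal sketch of such a definition (the attribute name and the OID below are 
hypothetical placeholders, not from this thread; any real deployment must use an 
OID from its own registered arc):

```
attributetype ( 1.3.6.1.4.1.99999.1.1.1
    NAME 'systemMailboxURI'
    DESC 'URI of the system mailbox (hypothetical, site-defined)'
    SUP labeledURI
    SINGLE-VALUE )
```

Inheriting from labeledURI means the new attribute picks up its syntax and 
matching rules, so only the name and semantics are site-local.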


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: How to start slapd without slapd.conf?

2013-08-20 Thread Howard Chu

Steppacher Ralf wrote:

Ben,

I re-read those sections. But they only describe how to convert a
pre-existing slapd.conf file. So, to bootstrap slapd I created a minimal
slapd.conf with just the config database and a rootdn/pw for it and converted
that with slaptest. But I find it a bit awkward that slapd.conf should be
mandatory to get started, but at the same time is declared deprecated in
chapter 5 of the administrator's guide.

slapd.conf is not mandatory. Only a few lines of LDIF are needed to bootstrap.

This is a minimal slapd.ldif that you can use; it's also provided in the test 
suite as data/slapd-dynamic.ldif:



dn: cn=config
objectClass: olcGlobal
cn: config

dn: olcDatabase=config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: config
olcRootPW: SuperSecret
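
A sketch of how such an LDIF is typically loaded (paths and the config-directory 
location are assumptions; adjust to your install):

```
mkdir -p slapd.d
# Load the minimal config LDIF into database 0 (the config database)
slapadd -n 0 -F slapd.d -l slapd.ldif
# Start slapd against the resulting config directory
slapd -F slapd.d -h ldapi:///
```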





Thanks anyway!
Ralf


From: openldap-technical-boun...@openldap.org 
[openldap-technical-boun...@openldap.org] on behalf of btb [b...@bitrate.net]
Sent: Monday, August 19, 2013 13:57
To: openldap-technical@openldap.org
Subject: Re: How to start slapd without slapd.conf?

On 2013.08.19 07.35, Steppacher Ralf wrote:

Hello all,

this is probably a really stupid question... But I cannot figure out how
to start a freshly built slapd using only slapd-config configuration.


please see section 5 [configuring slapd] of the administrator's guide.
also see man 5 slapd-config and man 8 slaptest

-ben






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: OpenLDAP 2.4.36 available

2013-08-21 Thread Howard Chu

Nick Milas wrote:

On 21/8/2013 11:48 πμ, Clément OUDOT wrote:



LTB project RPMs for OpenLDAP 2.4.36 are available:
http://tools.ltb-project.org/news/40

I also created a yum repository to ease the installation:
http://ltb-project.org/wiki/documentation/openldap-rpm#yum_repository


Thanks Clement for your effort. It surely helps the community!

Thanks to OpenLDAP project for another fine release as well.

A question: If I remember right, upgrading 2.4.34 -> 2.4.35 with the mdb
backend required a rebuild of the db.


It required a rebuild of the dn2id index, accomplished by running slapindex. 
Not a rebuild of the entire DB.



If upgrading 2.4.35 -> 2.4.36, is there any such requirement?


No.


I assume that upgrading from any version <= 2.4.34 to 2.4.36 with the mdb
backend will require a rebuild of the db. Right?


Probably. slapd prints a message to this effect if it is needed.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Issues arising from creating powerdns backend based on LMDB

2013-08-22 Thread Howard Chu
 whatever they 
do. FreeBSD and Debian have LMDB packages now; if you want RPMs I suggest you 
ask your distro provider.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Issues arising from creating powerdns backend based on LMDB

2013-08-22 Thread Howard Chu

Mark Zealey wrote:

On 22/08/13 23:37, Howard Chu wrote:



1) Can you update documentation to explain what happens when I do a
mdb_cursor_del() ? I am assuming it advances the cursor to the next
record (this seems to be the behaviour). However there is some sort of
bug with this assumption. Basically I have a loop which jumps
(MDB_SET_RANGE) to a key and then wants to do a delete until key is like
something else. So I do while(..) { mdb_cursor_del(),
mdb_cursor_get(..., MDB_GET_CURRENT)}. This works fine mostly, but
roughly 1% of the time I get EINVAL returned when I try to
MDB_GET_CURRENT after a delete. This always seems to happen on the same
records - not sure about the memory structure but could it be something
to do with hitting a page boundary somehow invalidating the cursor?


That's exactly what it does, yes.


Any idea about the EINVAL issue?


Yes, as I said already, it does exactly what you said. When you've deleted the 
last item on the page the cursor no longer points at a valid node, so 
GET_CURRENT returns EINVAL.



None of the memory behavior you just described makes any sense to me.
LMDB uses a shared memory map, exclusively. All of the memory growth
you see in the process should be shared memory. If it's anywhere else
then I'm pretty sure you have a memory leak. With all the valgrind
sessions we've run I'm also pretty sure that *we* don't have a memory
leak.

As for the random I/O, it also seems a bit suspect. Are you doing a
commit on every key, or batching multiple keys per commit?


I'm not doing *any* commits just one big txn for all the data...

The below C works fine up until i=4m (ie 500mb of resident memory
shown in top), then has massive slowdown, shared memory (again, as seen
in top) increases, waits about 20-30 seconds and then disks get hammered
writing 10mb/sec (200txns) when they are capable of 100-200mb/sec
streaming writes... Does it do the same for you?

int main(int argc,char * argv[]) {
  int i = 0, j = 0, rc;
  MDB_env *env; MDB_dbi dbi; MDB_val key, data; MDB_txn *txn; char
buf[40];
  int count = 1;

  rc = mdb_env_create(&env);
  rc = mdb_env_set_mapsize(env, (size_t)1024*1024*1024*10);
  rc = mdb_env_open(env, "./testdb", 0, 0664);
  rc = mdb_txn_begin(env, NULL, 0, &txn);
  rc = mdb_open(txn, NULL, 0, &dbi);

  for (i=0; i<count; i++) {
      sprintf( buf, "blah foo %9ld%9d%9d", (long)(random() *
(float)count / RAND_MAX) - i, i, i );
      if( i % 10 == 0 )
          printf("%s\n", buf);
      key.mv_size = sizeof(buf); key.mv_data = buf;
      data.mv_size = sizeof(buf); data.mv_data = buf;
      rc = mdb_put(txn, dbi, &key, &data, 0);
  }
  rc = mdb_txn_commit(txn);
  mdb_close(env, dbi);

  mdb_env_close(env);

  return 0;
}

By the way, I've just generated our biggest database (~4.5gb) from
scratch using our standard perl script. Using kyoto (treedb) with
various tunings it did it in 18 min real time vs lmdb at 50 minutes
(both ssd-backed in a box with 24gb free memory).


Kyoto writes async by default. You should do the same here, use MDB_NOSYNC on 
the env_open.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Issues arising from creating powerdns backend based on LMDB

2013-08-22 Thread Howard Chu

Mark Zealey wrote:

I'm not doing *any* commits just one big txn for all the data...

The below C works fine up until i=4m (ie 500mb of resident memory
shown in top), then has massive slowdown, shared memory (again, as seen
in top) increases, waits about 20-30 seconds and then disks get hammered
writing 10mb/sec (200txns) when they are capable of 100-200mb/sec
streaming writes... Does it do the same for you?
...


Kyoto writes async by default. You should do the same here, use
MDB_NOSYNC on the env_open.


MDB_NOSYNC makes no difference in my test case above - seeing exactly
the same memory, speed and disk patterns. Are you able to reproduce it?


Yes, I see it here, and I see the problem. LMDB was not originally designed to 
handle transactions of unlimited size. It originally had a txn sizelimit of 
about 512MB. In 0.9.7 we added some code to raise this limit, and it's 
performing quite poorly here. I've tweaked my copy of the code to alleviate 
that problem but your test program still fails here because the volume of data 
being written also exceeds the map size. You were able to run this to completion?


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Issues arising from creating powerdns backend based on LMDB

2013-08-22 Thread Howard Chu

Howard Chu wrote:

Mark Zealey wrote:

I'm not doing *any* commits just one big txn for all the data...

The below C works fine up until i=4m (ie 500mb of resident memory
shown in top), then has massive slowdown, shared memory (again, as seen
in top) increases, waits about 20-30 seconds and then disks get hammered
writing 10mb/sec (200txns) when they are capable of 100-200mb/sec
streaming writes... Does it do the same for you?
...


Kyoto writes async by default. You should do the same here, use
MDB_NOSYNC on the env_open.


MDB_NOSYNC makes no difference in my test case above - seeing exactly
the same memory, speed and disk patterns. Are you able to reproduce it?


Yes, I see it here, and I see the problem. LMDB was not originally designed to
handle transactions of unlimited size. It originally had a txn sizelimit of
about 512MB. In 0.9.7 we added some code to raise this limit, and it's
performing quite poorly here. I've tweaked my copy of the code to alleviate
that problem but your test program still fails here because the volume of data
being written also exceeds the map size. You were able to run this to 
completion?

Two things... I've committed a patch to mdb.master to help this case out. It 
sped up my run of your program, using only 10M records, from 19min to 7min.


Additionally, if you change your test program to commit every 2M records, and 
avoid running into the large txn situation, then the 10M records are stored in 
only 1m51s.


Running it now with the original 100M count. Will see how it goes.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Issues arising from creating powerdns backend based on LMDB

2013-08-23 Thread Howard Chu

Mark Zealey wrote:

On 23/08/13 04:55, Howard Chu wrote:

Howard Chu wrote:

Yes, I see it here, and I see the problem. LMDB was not originally
designed to
handle transactions of unlimited size. It originally had a txn
sizelimit of
about 512MB. In 0.9.7 we added some code to raise this limit, and it's
performing quite poorly here. I've tweaked my copy of the code to
alleviate
that problem but your test program still fails here because the
volume of data
being written also exceeds the map size. You were able to run this to
completion?


Two things... I've committed a patch to mdb.master to help this case
out. It sped up my run of your program, using only 10M records, from
19min to 7min.

Additionally, if you change your test program to commit every 2M
records, and avoid running into the large txn situation, then the 10M
records are stored in only 1m51s.

Running it now with the original 100M count. Will see how it goes.


I never actually ran it through (hence the map size issue) it was more
just an unlimited number to investigate the slowdown - 10M seems fine. I
just pulled from git (assumed this was better than the patch you sent)
and rebuilt, certainly seems a bit better now although at around 6m
records (ext4) it has some awful IO - drops to 1mb/sec in places on our
normal disk (first few writes are 100mb/s then it starts writing all
over the place). I've tried on both ext4 and xfs with no special tuning
and pretty much the same thing happens although closer to 7m records on
xfs. This is with NOSYNC option too. If I set the commit gap to 1m
records performance is ok up to around 8.4m records on ext4 and then
just stops for a minute or two doing small writes. Same thing at about
9.4m. It seems that the patch has pushed the performance dropoff back a
bit and perhaps improved on it but there is still an issue there as far
as I can see.


Agreed, it's still fairly slow. I reran the 100M using commits at 100,000 and 
it finished in 18m26s.



The test program with 10m records committing every 1m completes in 1m10s
user time, but 5m30s real time because of all the pausing for disk
writes (ext4 but as above doesn't seem to make much difference compared
to xfs)... Same program, latest git, on an SSD-backed system (ie massive
number of small write transactions don't cause any issues) with slightly
faster CPU - user time 47sec, real time 1min. On the SSD-backed box
without any commits - 5m30s user time, 6min real time.

So committing every 1-2m records is much better. I don't mind using
short transactions (in fact the program doesn't actually need any
transactions). Perhaps it would be good to have an "Allow LMDB to
automatically commit+reopen this transaction for optimal performance"
flag, or some way of easily knowing when the txn should be committed and
reopened, rather than trying to guess roughly how many bytes I've written
since the last txn and committing if over a magic number of ~400mb?

Also I don't know how intentional the 512mb limit you mention is but
perhaps that could be set at runtime - in that way I could just set to
half the box's mem size and ensure I don't need to write anything until
I have the whole thing generated?

By the way, looking at `free` output seems to imply that `top` is lying
about how much memory the program is using - resident memory looks like it
is capped at 500mb but it keeps rising along with shared, which is
presumably the pages in the mmap that are in memory at the moment.


Yes, the shared memory is included in the rss, it's quite deceptive especially 
if you have multiple processes using shared memory.



wrt the ssd vs hdd performance differences, I did see similar disk write
issues in kyoto. So for that we generate onto a memdisk, however it
seems a bit strange to have to do this with LMDB given it's advertised
as a memory database.


LMDB is *not* advertised as a memory database - it is advertised as a 
memory-mapped disk database. It is only people who have no clue what they're 
talking about who refer to it as a memory database. Memory databases have no 
persistence and are limited to the size of RAM. LMDB has neither of those 
traits. Being a disk-based DB means we're affected by issues like disk seek time.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Issues arising from creating powerdns backend based on LMDB

2013-08-23 Thread Howard Chu

Mark Zealey wrote:



I've found another weird issue - I have now converted the database to use
duplicates. Typically when I do mdb_cursor_get(... MDB_NEXT ) it will
set the key and value, but I've found one place so far where I do it and on
the duplicate's second entry the value is set but the key is empty.


I don't see how this can happen; the only time we don't return the key
is if some operation actually failed. Can you send test code to
reproduce this?


Attached .c shows it - create 3 keys with 5 entries under each. Actually
my report was incorrect - cursor_get() with MDB_NEXT or MDB_NEXT_DUP
never seems to set the key unless it is the first entry read... Perhaps
this is intended?!


Yes and no. It was intended for NEXT_DUP because, since it's a duplicate, you 
already know what the key is. It is unintended for NEXT, for the opposite 
reason, and in this case it's a bug.



Mark




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Issues arising from creating powerdns backend based on LMDB

2013-08-23 Thread Howard Chu

Mark Zealey wrote:

On 23/08/13 17:08, Howard Chu wrote:

Mark Zealey wrote:



I've found another weird issue - I have now converted the database to use
duplicates. Typically when I do mdb_cursor_get(... MDB_NEXT ) it will
set the key and value but I've found 1 place so far where I do it
and on
the duplicate's second entry the value is set but the key is empty.


I don't see how this can happen; the only time we don't return the key
is if some operation actually failed. Can you send test code to
reproduce this?


Attached .c shows it - create 3 keys with 5 entries under each. Actually
my report was incorrect - cursor_get() with MDB_NEXT or MDB_NEXT_DUP
never seems to set the key unless it is the first entry read... Perhaps
this is intended?!


Yes and no. It was intended for NEXT_DUP because, since it's a
duplicate, you already know what the key is. It is unintended for
NEXT, for the opposite reason, and in this case it's a bug.


It would be nice to have it for NEXT_DUP as well to be honest - I have a
function that gets called for each record and it would be good not to have
to save state between calls.


See latest mdb.master.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: OpenLDAP Samba4

2013-08-28 Thread Howard Chu

Pascal den Bekker wrote:

Hello,

I want to use OpenLDAP as a backend for Samba4. I set up OpenLDAP on
a different port, because Samba4 has its own LDAP server running
on port 389.
I set up the standard config for samba4 like this:


As far as I know, the last time this was anywhere close to working was in 2010 
and since then the Samba Team ripped out a lot of the OpenLDAP support. We 
(Symas) have recently hired a former Samba Team engineer to get this code back 
into working order but it's been off to a very slow start. I expect it will be 
several months before we have anything back in usable state, based on the 
current rate of progress.


  passdb backend = ldapsam:ldap://ldap.example.com:3389
  ldap suffix = dc=ldap,dc=example,dc=com
  ldap user suffix = ou=users
  ldap group suffix = ou=groups
  ldap machine suffix = ou=computers
  ldap idmap suffix = ou=Idmap
  ldap delete dn = no
  ldap admin dn = cn=admin,dc=ldap,dc=example,dc=com
  ldap ssl = no
  ldap passwd sync = yes
  idmap_ldb:use rfc2307 = Yes
  invalid users = root

I also created the OUs in OpenLDAP and added a couple of users, and set
the smbpasswd, but every time I try to query OpenLDAP through Samba
I'm getting:

smbldap_search_domain_info: Adding domain info for OPENCHANGE failed
with NT_STATUS_UNSUCCESSFUL

Do I still need to load the samba.schema in OpenLDAP? And if yes,
how do I do that?


Before taking any guesses at what actions you could take, first you need to 
see what the actual underlying error messages were. "NT_STATUS_UNSUCCESSFUL" 
is a generic Windows error code, and doesn't tell you anything about what happened 
at the LDAP layer. What errors are in the slapd log?


openldap: 2.4.31
samba: 4.0.1
OS:   Debian Wheezy


2.4.31 is relatively old, you should use the current release (2.4.36).


Cheers,




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Issue with mdb_cursor_del in MDB_DUPSORT databases

2013-08-29 Thread Howard Chu

Mark Zealey wrote:

Hi Howard,

I've now switched the database to use MDB_DUPSORT and found what seems
to be another issue to do with invalid data being returned after
mdb_cursor_del. The following code shows the behaviour against a
database that I can provide you with if you need:


It would help if you actually checked the return code from mdb_cursor_get.


  rc = mdb_cursor_open(txn, dbi, &cursor);
  char seek[] = "ku.oc.repmulp\t";
  key.mv_size = sizeof(seek);
  key.mv_data = seek;
  mdb_cursor_get(cursor, &key, &data, MDB_SET_RANGE);
  for( i = 0; i < 15; i++ ) {
      mdb_cursor_del( cursor, MDB_NODUPDATA );
      data.mv_size = 0;
      key.mv_size = 0;
      mdb_cursor_get( cursor, &key, &data,
          //MDB_NEXT
          MDB_GET_CURRENT
          );

      if( key.mv_size == 0 )
          printf("WARNING 0 SIZE KEY\n");
      else
          printf("KEY OK: %d: %.*s\n", (int)key.mv_size, (int)key.mv_size,
key.mv_data);
      if( data.mv_size == 0 )
          printf("WARNING 0 SIZE DATA\n");
      else
          printf("DATA OK: %d: %.*s\n", (int)data.mv_size,
(int)data.mv_size, data.mv_data);
  }
  mdb_txn_abort(txn);



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Group values not returned with id command

2013-09-05 Thread Howard Chu

Justin Edmands wrote:

Hey,
Certainly new to migrations of LDAP. I migrated our old setup from OpenLDAP to
389 Directory Server. When using the id command on an LDAP client, it only
returns uid,gid, and one group. It for some reason does not show all of the
actual groups that the user is associated with. What is set to return these
values and what setting ensures they are properly mapped from OpenLDAP to 389DS?

### OpenLDAP example: ###

[root@openldapclient ~]# id jedmands
uid=(jedmands) gid=100(users)
groups=100(users),5000(manager),5001(linuxadmin),5002(storageadmin),5003(dbadmin),5004(webadmin),5006(it)

### 389 DS Example: ###

[root@389dsclient ~]# id jedmands
uid=(jedmands) gid=100(users) groups=100(users)

Notes:
Posted this to the 389-users list, nothing received.
We are using the memberOf plugin for 389DS.
I don't know too much about the openldap environment. I moved to CentOS 6 and
figured DS was the way to go with SSL/TLS


I'm pretty sure you figured wrong. OpenLDAP actually works, implements the 
LDAP RFCs correctly, and outperforms all other LDAP servers. Compared to 
389DS, OpenLDAP bulk-loads data 2x faster, uses 10% less space on disk, 
answers search queries 4x faster, and uses 50% less RAM to do it. (Also 
answers Binds 6x faster, and performs updates 11x faster.) 389DS is a hulking 
pile of obsolete code; the only reason it still exists today is because RedHat 
has support contracts for RedHatDS from customers too ignorant to realize how 
bad the product they've paid for actually is.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Group values not returned with id command

2013-09-05 Thread Howard Chu

Justin Edmands wrote:

Thank god you got that off your chest. The solution is:


And OpenLDAP actually has a knowledgeable community that responds to posts, 
and gives correct answers.



/etc/sssd/sssd.conf
  [domain/default]
  ..
  ldap_group_member = memberUid


You should look into switching to RFC2307bis; using non-DNs for references 
within an LDAP directory is a really bad idea.



  ldap_group_search_base = ou=Group,dc=mysite,dc=com
  ..

after flushing cache, the clients see the proper groups.


That should concern you too. You're now knowingly relying on a caching 
mechanism that serves stale data for your systems' base security. You should 
look into using OpenLDAP nssov+pcache instead; pcache has active cache refresh 
among other things so you don't need to restart or flush anything to keep your 
system security up to date.



https://bugzilla.redhat.com/show_bug.cgi?id=599713


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Log service time?

2013-09-05 Thread Howard Chu

Покотиленко Костик wrote:

Hi,

Is there a way to log the time each operation took?


Every message in syslog already has a timestamp.

Also, recent versions of OpenLDAP include a timestamp on debug output too. You 
didn't mention which version you're using, so I can't tell you much more.



I have strange CPU load (~200%) with just ~15 operations per second.
SRCH is 90% of all operations. All attributes involved in searches are
indexed (many single-attribute indexes, ~30).

The point is to find which search operations are taking a long time, to
develop a solution.






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Re : Re: (ITS#7676) OpenLDAP 2.4.36 slapd crash with assertion failed message

2013-09-06 Thread Howard Chu

POISSON Frédéric wrote:

Hello all,

Thanks first for the patch, i have applied it on my own build of 2.4.36 but i
have now a strange behavior, the slapd do not crash but it refused operations.

First here is the diff after applying the patch:
$ diff ../BUILD/openldap-2.4.36/servers/slapd/bconfig.c
../BUILD/openldap-2.4.36/servers/slapd/bconfig.c.orig
3795d3794
<   slap_tls_ctx = NULL;
3804,3808d3802
<   } else {
<       if ( rc == LDAP_NOT_SUPPORTED )
<           rc = LDAP_UNWILLING_TO_PERFORM;
<       else
<           rc = LDAP_OTHER;

Now when I add or replace only the attribute olcTLSRandFile on cn=config I 
have:

ldap_modify: Server is unwilling to perform (53)


When I replace the following values in this order, with 4 actions/operations or
with a single action/operation, it works:

dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: 
/usr/products/openldap/etc/openldap-single/tls/cacert.pem
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /usr/products/openldap/etc/openldap-single/tls/cert.pem
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /usr/products/openldap/etc/openldap-single/tls/key.pem
-
replace: olcTLSRandFile
olcTLSRandFile: /dev/random

But it doesn't work with only olcTLSRandFile if I do an add or replace first - 
why?

What do you need for investigation?


There's nothing to investigate, this works as designed. The config engine 
requires your TLS configuration to be valid when you configure it. That means 
at a minimum you must configure a server cert and key. If you only configure 
the randfile and nothing else, the config is rejected.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Antw: Re: Log service time?

2013-09-06 Thread Howard Chu

Ulrich Windl wrote:

Quanah Gibson-Mount qua...@zimbra.com wrote on 05.09.2013 at 22:58 in
message 0FCBC02976FFDC0CF5D9A489@[192.168.1.22]:

--On Thursday, September 05, 2013 10:58 PM +0300 Покотиленко
Костик cas...@meteor.dp.ua wrote:

[...]

OS: Ubuntu 12.04.2 LTS
Slapd: 2.4.28-1.1ubuntu4.3


Ugh, ancient.


Backend: HDB


Yuck.


[...]

Hi guys!

While I have nothing against bug-free software, I cannot read that "update to
the latest version and database" any more: Is it really because the releases of
the previous years that many people used were so terrible? (Which, by induction,
means that the latest versions recommended at the time were terrible, so, as
seen from tomorrow's perspective, the versions advertised today are also
terribly full of bugs. In effect this means that there will never be a version
that is not full of terrible bugs.) Or is it that no-one wants to care or take
a look at previous releases? Or are you just recruiting beta-testers for
the current release?


It is Project policy to only investigate issues in the current release. There 
is no sense in tracing back thru old code whose bugs have already been fixed.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Antw: Re: Log service time?

2013-09-06 Thread Howard Chu

Покотиленко Костик wrote:

В Птн, 06/09/2013 в 04:42 -0700, Howard Chu пишет:

Ulrich Windl wrote:

Quanah Gibson-Mount qua...@zimbra.com wrote on 05.09.2013 at 22:58 in
message 0FCBC02976FFDC0CF5D9A489@[192.168.1.22]:

--On Thursday, September 05, 2013 10:58 PM +0300 Покотиленко
Костик cas...@meteor.dp.ua wrote:

[...]

OS: Ubuntu 12.04.2 LTS
Slapd: 2.4.28-1.1ubuntu4.3


Ugh, ancient.


Backend: HDB


Yuck.


[...]

Hi guys!

While I have nothing against bug-free software, I cannot read "update to the
latest version and database" any more: Is it really that the releases of
previous years, which many people used, were so terrible? (By induction, that
means the latest versions recommended at the time were terrible too; so, seen
from tomorrow's perspective, the versions advertised today are also terribly
full of bugs. In effect this means there will never be a version that is not
full of terrible bugs.) Or is it that no one wants to look at previous
releases? Or are you just recruiting beta-testers for the current release?


It is Project policy to only investigate issues in the current release. There
is no sense in tracing back thru old code whose bugs have already been fixed.


This means old versions are not supported and makes problems with
openldap distribution packages as distributions don't update upstream
versions inside distribution version. :(

For Debian that means staying with bugs for 2 years. It's hard to call
this policy right.


Distro packages are supported by their distros. We have no way to support them 
anyway since they tend to insert their own private patches and we have no 
visibility into what they changed. (Nor do we want it - there are dozens of 
distros out there and it's not our responsibility to keep track of what 
they're all doing.) And in the specific case of Debian, given their history of 
introducing critical bugs into their builds 
http://www.schneier.com/blog/archives/2008/05/random_number_b.html there is no 
way any upstream project will ever take responsibility for supporting Debian 
packages.


You may not think this policy is right but it's the only practical approach 
when distros take liberties with what they release.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: OpenLDAP 2.4.36 slapd stop with assertion fail message

2013-09-06 Thread Howard Chu

POISSON Frédéric wrote:

Hello,

I'm testing the latest release of OpenLDAP 2.4.36 and my slapd stops while I'm
doing a change on cn=config.
My tests are with my own compilation of OpenLDAP on a RHEL6 server, but I see
the same problem with the LTB project RPMs
(http://ltb-project.org/wiki/download#openldap) with the RHEL6 package.


Why are you reposting this?
http://www.openldap.org/lists/openldap-technical/201308/msg00283.html

You reported this bug in the ITS. The bug (ITS#7676) was fixed.


My aim is to modify cn=config like this in order to implement TLS; here is my
LDIF:

dn: cn=config
changetype: modify
add: olcTLSRandFile
olcTLSRandFile: /dev/random

The server shuts down when I add this entry, and with slapd option -d 255 I get:
slapd: result.c:813: slap_send_ldap_result: Assertion `!((rs->sr_err)<0)' 
failed.
/etc/init.d/slapd: line 285:  5461 Aborted $SLAPD_BIN -h
$SLAPD_SERVICES $SLAPD_PARAMS

Notice that I tested this LDIF modification on release 2.4.35 without problems.

Are there any changes in cn=config behavior in release 2.4.36 that I'm not
seeing?

Thanks in advance,

Regards,
PS: In attachment my cn=config with slapcat, and the lines when starting slapd
with debug -d 255.
--

*Frederic Poisson*





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Perfect Forward Secrecy

2013-09-06 Thread Howard Chu

Dieter Klünter wrote:

Hi,
I wonder whether openldap, if compiled with openssl-1.x, will support
PFS. http://en.wikipedia.org/wiki/Perfect_forward_secrecy
This issue has been discussed on several mailinglists recently.


It already does, but you have to use the right cipher suites.

Also see ITS #7595 http://www.openldap.org/its/index.cgi/Incoming?id=7595

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Perfect Forward Secrecy

2013-09-06 Thread Howard Chu

Michael Ströder wrote:

http://www.openldap.org/doc/admin24/tls.html mentions directive
'TLSEphemeralDHParamFile' whereas slapd.conf(5) mentions 'TLSDHParamFile'.


This was noted in ITS#7506. Apparently no one considered it an important 
enough issue to fix it in the meantime.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Antw: Re: Log service time?

2013-09-06 Thread Howard Chu

Chris Jacobs wrote:

You left off the part where I remind you that he was looking for information,
specifically how to get said information:

 If the information Casper requested isn't available, say so. If it is, how
would he get it?

As it stands now, his initial question remains unanswered, with the only
guidance being "upgrade"; which, lacking anything else, he is running with in
the blind hope that it makes things faster (his actual issue, not his questions).

You seem to have missed this answer
http://www.openldap.org/lists/openldap-technical/201309/msg00033.html

If you can't discern how much time an operation is taking from the timestamps 
that are already provided, that's your own inadequacy, not ours.


When the OP talks about getting 15 queries/sec it's clear that timing info 
down to the microsecond is superfluous.



That's really the meat of my response/addition to this conversation, and it's 
been simply side stepped again.



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Perfect Forward Secrecy

2013-09-06 Thread Howard Chu

Michael Ströder wrote:

Howard Chu wrote:

Dieter Klünter wrote:

Hi,
I wonder whether openldap, if compiled with openssl-1.x, will support
PFS. http://en.wikipedia.org/wiki/Perfect_forward_secrecy
This issue has been discussed on several mailinglists recently.


It already does, but you have to use the right cipher suites.

Also see ITS #7595 http://www.openldap.org/its/index.cgi/Incoming?id=7595


Please correct if I'm wrong. But this ITS seems to be about using the cipher
suites based on elliptic curves with EC server key/cert.

But what about just the DHE-RSA cipher suites like DHE-RSA-AES256-SHA for
TLSv1 with RSA-based server key/cert?

Why does Apache support this out-of-the-box and OpenLDAP 2.4.36 does not?
Do I have to configure something else?


You have to configure TLSDHParamFile. This appears to be an oversight: while 
we have some default DH parameters hardcoded in libldap, none of them actually 
gets used unless you've set the TLSDHParamFile directive. Also related to ITS#7506.
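To make the directive concrete, a minimal sketch follows; the file path and key size are illustrative assumptions, not from the thread:

```
# Generate DH parameters once (larger sizes take noticeably longer):
#   openssl dhparam -out /etc/openldap/dhparams.pem 2048

# slapd.conf: point slapd at the generated parameters so the
# DHE-RSA cipher suites discussed above can actually be negotiated.
TLSDHParamFile /etc/openldap/dhparams.pem
```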


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Antw: Re: Perfect Forward Secrecy

2013-09-09 Thread Howard Chu

Ulrich Windl wrote:

Michael Strödermich...@stroeder.com schrieb am 06.09.2013 um 23:33 in

Nachricht 522a4a3a.9060...@stroeder.com:

Howard Chu wrote:

Dieter Klünter wrote:

Hi,
I wonder whether openldap, if compiled with openssl-1.x, will support
PFS. http://en.wikipedia.org/wiki/Perfect_forward_secrecy
This issue has been discussed on several mailinglists recently.


It already does, but you have to use the right cipher suites.

Also see ITS #7595 http://www.openldap.org/its/index.cgi/Incoming?id=7595


http://www.openldap.org/doc/admin24/tls.html mentions directive
'TLSEphemeralDHParamFile' whereas slapd.conf(5) mentions 'TLSDHParamFile'.


Please let me note that 'TLSDHParamFile' is just a terrible identifier. How
large is the fine for using underscores like in 'TLS_DH_ParamFile'? ;-)


You're about 8 years late to be making that suggestion.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: configure: error: BDB/HDB: BerkeleyDB not available

2013-09-11 Thread Howard Chu

Quanah Gibson-Mount wrote:

--On Wednesday, September 11, 2013 12:41 PM -0500 Mónico Briseño
monico.bris...@gmail.com wrote:




Hi, all. I decided to install openldap release 2.4.36 from tarball file
with BerkeleyDB support.


According to the OpenLDAP documentation, I downloaded and installed Berkeley DB
release 6.0.


I configured the configure script as follows:


CPPFLAGS=-I/usr/local/BerkeleyDB.6.0/include
LDFLAGS=-L/usr/local/BerkeleyDB.6.0/lib
export CPPFLAGS LDFLAGS
./configure --enable-sql


Sadly, I have the following error:


configure: error: BDB/HDB: BerkeleyDB not available


Any idea?


What did I do wrong?


I don't know that there is any support for BDB 6, or that it will be
introduced.  BDB support is being replaced with MDB support.  If you want
to build against BDB, then use an older 5.x or 4.x release series.


There are no relevant API changes in BDB 6 to prevent using it with OpenLDAP. 
It builds fine and all tests pass. However, due to the changed license in BDB 
6, I don't believe many sites will actually be able to deploy it and remain in 
compliance with the BDB 6 license.


Also none of the new features added in BDB over the past several years have 
had any relevance to OpenLDAP, they've tended to be focused on BDB replication 
and distributed transactions, which OpenLDAP has never (and will never) use. 
If you insist on using BDB, there's no reason to prefer BDB 6 over BDB 5. But 
in either case, LMDB is far superior so there's no point in using BDB (and 
risking a run-in with Oracle's license compliance lawyers) at all.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: LMDB compiling procedure

2013-09-11 Thread Howard Chu

Mónico Briseño wrote:

Hi, there. I just downloaded the LMDB tarfile. I verified its folder contents, but I
couldn't find any information on how I can use these files with LDAP.


The default OpenLDAP build already uses it, there is no special command 
needed. If you only downloaded the LMDB source, you should instead have 
downloaded the regular OpenLDAP source.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Kudos to all who contributed to MDB

2013-09-18 Thread Howard Chu

Brent Bice wrote:

 I've started testing an LDAP server here using MDB and ran across a
few caveats that might be of use to others looking into using it.  But
first off, let me say a hearty THANKS to anyone who's contributed to it.
In this first OpenLDAP server I've converted over to MDB it's
*dramatically* faster and it's definitely nice to not worry about having
to setup script/s to occasionally (carefully) commit/flush DB logs, etc.

 One caveat that might be worth mentioning in release notes
somewhere...  Not all implementations of memory mapped I/O are created
equal.  I ran into this a long time back when I wrote a multi-threaded
quicksort program for a friend who had to sort text files bigger than 10
gigs and didn't want to wait for the unix sort command. :-)  The program
I banged together for him used memory mapped I/O and one of the things I
found was that while Solaris would let me memory map a file bigger than
I had physical or virtual memory for, linux wouldn't.  It appeared that
some versions of the 2.x kernels wouldn't let me memory-map a file
bigger than the total *virtual* memory size, and I think MDB is running
into the same limitation.  On a SLES11 system, for instance with the
2.6.32.12 kernel, I can't specify a maxsize bigger than the total of my
physical memory and swap space.  So just something to keep in mind if
you're using MDB on the 2.x kernels - you may need a big swap area even
though the memory mapped I/O routines in the kernel seem to be smart
enough to avoid swapping like mad.


Some Linux distros also ship with a default VM ulimit that will get in your 
way, but Linux kernel overcommit settings have been around for a long time. 
I don't think what you saw was particular to the 2.6.32 kernel, most likely 
related to your distro's default config.



 On a newish ubuntu system with a 3.5 kernel this doesn't seem to be
an issue - tell OpenLDAP to use whatever maxsize you want and it just
works.  :-)

 I'd also only use MDB on a 64 bit linux system. One of the other
headaches I remember running into with memory mapped I/O was adding in
support for 64 bit I/O on 32 bit systems.  Best to avoid that whole mess
and just use a 64 bit OS in the first place.


We definitely don't recommend using 32 bit servers with LMDB.


 Lastly... At the risk of making Howard and Quanah cringe... :-)  The
OpenLDAP DB I've been testing this with is the back-end to an email
tracking tool I setup several years ago.  More as an excuse to
edjimicate myself on the java API for LDAP than anything else, I wrote a
quick bit of java that watches postfix and sendmail logs and writes
pertinent bits of info into an LDAP database, and a few PHP scripts to
then query that database for things like to/from addresses, queue IDs,
and message IDs.  'Makes it easy for junior admins to quickly search
through gigabytes of logs to see what path an email took to get from
point A to point B, who all received it (after it went through one or
more list servers and a few aliases got de-ref'd, etc).

 Yeah, it's an utter abuse of LDAP which is supposed to be
write-rarely and read-mostly, especially as our postfix relays handle
anywhere from 1 to 10 messages per second on average. :-)  But what the
heck, it works fine and was a fun weekend project.  It's also served as
a way to stress-test new versions of OpenLDAP before I deploy them
elsewhere. :-)

 Anyway, thanks again to everyone who contributed to MDB. It's lots
faster than BerkeleyDB in all of my testing so far. 'Looking forward to
gradually shifting more of my LDAP servers over to it.


You're welcome.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: PFS Ciphers

2013-09-19 Thread Howard Chu

Emmanuel Dreyfus wrote:

Hi

I tried to use ciphers that bring PFS for OpenLDAP, but it did not work.
I used this cipher specification:

TLSCipherSuite ECDH:DH:!SHA:!MD5:!aNULL:!eNULL

I test it this way:
for i in `openssl ciphers ALL|tr ':' '\n'` ; do
 echo ''|openssl s_client -cipher $i -connect server:636 \
  2>/dev/null |awk '/  Cipher/{print}' ;
done

I get nothing. I understand ECDH needs some support code, but why aren't
DH ciphers available?


Read the slapd.conf(5) or slapd-config(5) manpage. You must configure the 
TLSDHParamFile.


Your ciphersuite is wrong anyway. You want DHE, not DH, for PFS.
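Following Howard's two points, a cipher specification restricted to ephemeral (ECDHE/DHE) suites might look like the sketch below; this is an illustrative configuration, not a vetted production setting, and the dhparams path is an assumption:

```
# slapd.conf: only ephemeral key-exchange suites provide PFS
TLSCipherSuite ECDHE:DHE:!aNULL:!eNULL:!MD5
# required for the DHE suites to be usable at all
TLSDHParamFile /etc/openldap/dhparams.pem
```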

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Building LMDB for Windows 64bits

2013-09-28 Thread Howard Chu

Alain wrote:

I am not an expert at this, so I might be doing things incorrectly.

I used Mingw x86_64 to build LMDB (just changed the CC in the Makefile). I had
an issue with srandom and random in the test programs and switch to rand
instead. Now I can build successfully and make test runs mtest successfully.

Now if I run one of the other mtest[2-5], or mtest itself, I get sporadic
segmentation faults. If I wait long enough it will always work, but running the
programs in a loop guarantees a seg fault.


That's pretty normal. The test programs don't do any error checking. You're 
welcome to submit a patch adding the requisite error checking.


When you run the tests repeatedly eventually the DB grows to its maxsize limit 
and a write request fails. After that things crash because the library is 
returning failure codes that the test programs ignore.



I wanted to try running with gdb, but it seems that it doesn't ship with Cygwin
for MinGW. Using the standard gdb gives me a 193 error. Again, this is getting past
my expertise here.


The test programs are actually only intended to be used with gdb, but yes, 
it's a pain finding a working gdb for MinGW64. The one I'm currently using is 
7.1.90.20100730-cvs


http://sourceforge.net/projects/mingw-w64/files/External%20binary%20packages%20%28Win64%20hosted%29/gdb/


Has anyone successfully built LMDB for Windows who can help here?

Cheers,
Alain



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Antw: Kudos to all who contributed to MDB

2013-09-30 Thread Howard Chu

Ulrich Windl wrote:

  Brent Bice bb...@sgi.com schrieb am 18.09.2013 um 22:01 in Nachricht
523a068f.1000...@sgi.com:

I've started testing an LDAP server here using MDB and ran across a
few caveats that might be of use to others looking into using it.  But
first off, let me say a hearty THANKS to anyone who's contributed to it.
In this first OpenLDAP server I've converted over to MDB it's
*dramatically* faster and it's definitely nice to not worry about having
to setup script/s to occasionally (carefully) commit/flush DB logs, etc.

 One caveat that might be worth mentioning in release notes
somewhere...  Not all implementations of memory mapped I/O are created
equal.  I ran into this a long time back when I wrote a multi-threaded
quicksort program for a friend who had to sort text files bigger than 10
gigs and didn't want to wait for the unix sort command. :-)  The program
I banged together for him used memory mapped I/O and one of the things I
found was that while Solaris would let me memory map a file bigger than
I had physical or virtual memory for, linux wouldn't.  It appeared that


I doubt that Solaris allows you to mmap() a file to an area larger than the
virtual address space; however, you can mmap() a file area larger than RAM+swap
when a demand-paging strategy is used. However, once you start modifying the
mapped pages you may run out of memory, so think twice.

No OS can let you mmap a single region larger than the address space. But 
mapping a file larger than RAM is no problem, the OS will swap pages in and 
out as needed.



some versions of the 2.x kernels wouldn't let me memory-map a file
bigger than the total *virtual* memory size, and I think MDB is running
into the same limitation.  On a SLES11 system, for instance with the
2.6.32.12 kernel, I can't specify a maxsize bigger than the total of my
physical memory and swap space.  So just something to keep in mind if


Also be aware that in SLES11 SP2 the kernel update released some weeks ago
strengthened the checks for mmap()ed areas: I had a program that started to
fail when I tried to change one byte after the end of the file, while this
worked with the previous kernel.

Irrelevant for LMDB since we never do such a thing.


you're using MDB on the 2.x kernels - you may need a big swap area even
though the memory mapped I/O routines in the kernel seem to be smart
enough to avoid swapping like mad.


I'd like to object: AFAIR, MDB uses mmap()ed areas in a strictly read-only
fashion, so the backing store is the original file, being demand-paged. When
data is write()n, the system will dirty buffers in real RAM that are
eventually written back to the file blocks. I see no path where dirty buffers
should be swapped unless the mapping is PRIVATE.

Correct; since LMDB uses an mmap'd file it will *never* use swap space.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Unknown db in slapd.conf

2013-10-03 Thread Howard Chu

Christian Kratzer wrote:

Hi,

snipp/

moduleload back_mdb.la
moduleload back_ldap.la
moduleload back_hdb.la


on my CentOS system your config started once I changed the above to

moduleload back_mdb


In his actual paste, all of his moduleload statements have a leading space, so 
they are simply continuations of the preceding comment line. I.e., they never 
actually got processed.
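To illustrate the pitfall Howard describes: in slapd.conf a line beginning with whitespace is a continuation of the previous line, so an indented directive following a comment is swallowed by that comment. A sketch (the comment text is invented for illustration):

```
# Load backends
 moduleload back_mdb.la   <- WRONG: leading space makes this a continuation
                             of the comment above; it is never processed

# Load backends
moduleload back_mdb.la    <- correct: the directive starts in column one
```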


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Log level will not change.

2013-10-03 Thread Howard Chu

espe...@oreillyauto.com wrote:


Here is the contents of configlog.ldif


You seem to have continual difficulty understanding that whitespace is 
significant. Please read the ldif(5) manpage more carefully. Judging from your 
other lengthy email thread, you also need to read slapd.conf(5) more carefully.


dn: cn=config
changetype: modify

delete: olcLogLEvel
-
add: olcLogLevel
olcLogLevel: 0

I ran the following command:

ldapmodify -Wx -D uid=admin,dc=oreillyauto,dc=com -H
ldap://tntest-ldap-1.oreillyauto.com -c -f /tmp/configlog.ldif

the output shows:

Enter LDAP Password:
modifying entry cn=config

Except the loglevel in cn=config does not change. The modifyTimestamp and
entryCSN change to match the server. I have a 3-node MMR cluster on
2.4.31. I am working on a build to go to 2.4.36, but in the meantime I
need to get this working.

Thanks,
Eric Speake
Web Systems Administrator
O'Reilly Auto Parts

This communication and any attachments are confidential, protected by 
Communications Privacy Act 18 USCS § 2510, solely for the use of the intended 
recipient, and may contain legally privileged material. If you are not the 
intended recipient, please return or destroy it immediately. Thank you.
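For reference: in LDIF a blank line terminates a record, so a stray blank line inside the change record (as after `changetype: modify` above) silently truncates it. A clean version of the intended modification, a sketch based on the LDIF above, would be:

```
dn: cn=config
changetype: modify
delete: olcLogLevel
-
add: olcLogLevel
olcLogLevel: 0
```

(Attribute names are case-insensitive in LDAP, so the `olcLogLEvel` spelling itself is harmless; the whitespace is the problem.)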





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Modification hooks for the OpenLDAP system

2013-10-09 Thread Howard Chu

Mailing Lists wrote:

Hi,

I was looking into OpenLDAP and couldn't find the following information in the 
documentation:


slapo-sock exists for that purpose.


Is there any possibility to have hooks in the OpenLDAP system which allow a
customizable action to be performed when records are added/deleted/updated?

An example for this would be to send a message to an external system in case
a modification of the directory service has occurred, so that this system can
act on it (the 'external' system could, of course, belong to the same
organization as well). E.g. for improving synchronization with GAE (but I
could name some other uses for it too).


I have found a prior post on hooks for authentication
(http://www.openldap.org/lists/openldap-software/200510/msg00549.html) but
this is not what I am looking for.


If it is not supported in OpenLDAP, are there any plug-ins which do support it?
Thanks in advance!





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Solaris 10 tls:simple binding to OpenLDAP

2013-10-10 Thread Howard Chu

Ben Babich wrote:

Folks,

I have been fighting along getting some Solaris 10 nodes (both SPARC
and x86) to talk via TLS/SSL to our OpenLDAP infrastructure.
Without SSL (tls:simple) it binds and functions fine which in my mind
rules out most of the usual culprits.


Looks like a question for Sun/Solaris support. Clearly your problems have 
nothing to do with OpenLDAP itself.


As for the certificates, I have verified connectivity with the
certificate via openssl s_client -connect fqdn -CAfile cacert
-showcerts but I cannot get the correct version/combination of
certutil to setup the appropriate keystore (cert[78].db, key3.db and
secmod.db) and make the native SUN ldapsearch or native ldapclient
work correctly.

Oct 10 10:48:29 solaris1 /usr/lib/nfs/nfsmapid[3668]: [ID 293258
daemon.warning] libsldap: Status: 91  Mesg: createTLSSession: failed
to initialize TLS security (security library: bad database.)
Oct 10 10:48:29 solaris1 /usr/lib/nfs/nfsmapid[3668]: [ID 292100
daemon.warning] libsldap: could not remove ldapserver from servers
list
Oct 10 10:48:29 solaris1 /usr/lib/nfs/nfsmapid[3668]: [ID 293258
daemon.warning] libsldap: Status: 7  Mesg: Session error no available
conn.

# certutil -d /var/ldap -L

Certificate Nickname Trust Attributes
  SSL,S/MIME,JAR/XPI

CA certificate   CT,,
# ldapclient list
NS_LDAP_FILE_VERSION= 2.0
NS_LDAP_BINDDN= masked
NS_LDAP_BINDPASSWD= masked
NS_LDAP_SERVERS= masked
NS_LDAP_SEARCH_BASEDN= masked
NS_LDAP_AUTH= tls:simple
NS_LDAP_CACHETTL= 0
NS_LDAP_CREDENTIAL_LEVEL= proxy
NS_LDAP_HOST_CERTPATH= /var/ldap
#


I've tried a few of the older certutils floating around, including the one from
http://www.gurulabs.com/downloads/certutil-1.0-sol9-sun4u-local.gz, along with
libraries from OpenCSW, to get it all working.

I'm pretty sure it's the cert database, or something to do with
certutil being painful. Any suggestions?

Thanks
Ben





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: sasl/plain with hashed password not working

2013-10-10 Thread Howard Chu

b...@bitrate.net wrote:

On Oct 8, 2013, at 09.56, Dan White dwh...@olp.net wrote:


That was referring to auxprop. In newer versions (> 2.1.23) of Cyrus SASL
there is an undocumented 'pwcheck_method: auxprop-hashed' which you can use
to support hashed passwords, but I do not believe that slapd/ldapdb are
supported.


See ITS#7419. We will not support it until it is properly documented. It would 
be foolish to attempt otherwise.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Antw: Re: another CSN too old N-WAY master

2013-10-10 Thread Howard Chu

Lanfeust troy wrote:

Should I open an issue about that with more logs? Every day we have consistency
problems.

This bug (for me) appears on an OU with more than 120K entries. Every entry is
checked by another application every hour.

When is the next release scheduled, please?
Thanks


2013/10/8 Ulrich Windl ulrich.wi...@rz.uni-regensburg.de
mailto:ulrich.wi...@rz.uni-regensburg.de

  Michael Ströder mich...@stroeder.com
mailto:mich...@stroeder.com schrieb am 08.10.2013 um 12:40 in
Nachricht dbc9e8c25a1bb1bac7905cb6950c8...@srv1.stroeder.com
mailto:dbc9e8c25a1bb1bac7905cb6950c8...@srv1.stroeder.com:
  On Tue, 08 Oct 2013 12:15:56 +0200 Ulrich Windl
  ulrich.wi...@rz.uni-regensburg.de
mailto:ulrich.wi...@rz.uni-regensburg.de wrote
Are you sure you are chasing a real problem? AFAIK, in multi-master sync, the
server that receives a change will send that change to all other nodes,
which, in turn, will also send the changes received to all other nodes.

It would be pretty inefficient if a provider in a MMR setup receives its own
modifications.

I agree, but from the logs my version seems to do exactly that, so maybe my
config is not correct, or the software has a bug. As it still does what it
should do, I don't care about the bug for the moment...


If a server is receiving its own modifications that means your serverIDs are 
not configured correctly. If your serverIDs are not configured correctly MMR 
cannot work correctly. Fix your configuration.
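For reference, each provider in an MMR setup must carry the full set of serverID/URL pairs, and the URL tells each slapd which ID is its own; a minimal slapd.conf sketch with hypothetical hostnames:

```
# Every provider lists all IDs; the URL matching the local listener
# identifies which ID is "self". IDs must be unique across providers.
serverID 1 ldap://provider1.example.com
serverID 2 ldap://provider2.example.com
```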


 
Given the fact that I'm also seeing missing entries in a MMR setup, I suspect
there are indeed problems with the contextCSN.

Another one, ITS#7710, is likely limited to use of slapo-memberof, but I'm
not really sure about that. (The data consistency issues happened without
slapo-memberof.)

Ciao, Michael.






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



LDAP Injection attacks

2013-10-11 Thread Howard Chu
A paper and presentation making the rounds, claiming to show how webapps using 
LDAP are vulnerable to search filter spoofing attacks.


http://www.youtube.com/watch?v=wtahzm_R8e4
http://www.blackhat.com/presentations/bh-europe-08/Alonso-Parada/Whitepaper/bh-eu-08-alonso-parada-WP.pdf

Can't imagine that work like this gets peer-reviewed, because it's mostly 
garbage. They concoct a scenario in section 4.1.1 of their paper, supposedly 
showing how filter manipulation can allow a webapp user to bypass LDAP-based 
authentication. It's ridiculous drivel though, since LDAP-based authentication 
uses Bind requests and not search filters. Most LDAP deployments don't even 
give search/compare access to userPassword attributes in the first place.


Just in case anybody out there might be bitten by this info - client-enforced 
security is no security at all. This is why slapd has such an extensive ACL 
engine - you enforce access controls on the server, and then it doesn't matter 
what kind of garbage requests your clients send to you, they can only ever 
access information that they were allowed to access. This is also why the old 
pam_ldap authorization scheme was such a bad idea, it relied on the LDAP 
client (pam_ldap) to correctly implement authorization, instead of the server. 
(Multiply that by hundreds or thousands of clients and you have an 
unmanageable, insecurable mess.) This is why we have nssov today.


Of course, this is no excuse to be sloppy when writing your web apps. But if 
you've configured ACLs to adequately protect your data, then it doesn't matter 
how sloppy your clients are.
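For app authors who do need to embed user input in a search filter, the standard client-side defense (complementing, not replacing, server-side ACLs) is RFC 4515 escaping of the assertion value. A minimal Python sketch; this is illustrative, and a real deployment would likely use its LDAP library's own escaping helper:

```python
def escape_filter_value(value: str) -> str:
    """Escape a string for safe inclusion in an LDAP search filter
    (RFC 4515): backslash must be escaped first, then the other
    metacharacters, each replaced by a \\XX hex escape."""
    value = value.replace("\\", "\\5c")
    value = value.replace("*", "\\2a")
    value = value.replace("(", "\\28")
    value = value.replace(")", "\\29")
    value = value.replace("\x00", "\\00")
    return value

# A classic injection attempt is rendered inert: the metacharacters
# become literal text instead of filter syntax.
user_input = "*)(uid=*"
print("(uid=%s)" % escape_filter_value(user_input))
# -> (uid=\2a\29\28uid=\2a)
```

With the input escaped, the attacker-supplied parentheses and wildcards can no longer change the structure of the filter, which is exactly the (limited) attack surface the paper describes.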


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: LDAP Injection attacks

2013-10-12 Thread Howard Chu

devzero2000 wrote:

On Fri, Oct 11, 2013 at 8:33 PM, Howard Chu h...@symas.com wrote:

A paper and presentation making the rounds, claiming to show how webapps
using LDAP are vulnerable to search filter spoofing attacks.

http://www.youtube.com/watch?v=wtahzm_R8e4
http://www.blackhat.com/presentations/bh-europe-08/Alonso-Parada/Whitepaper/bh-eu-08-alonso-parada-WP.pdf

Can't imagine that work like this gets peer-reviewed, because it's mostly
garbage. They concoct a scenario in section 4.1.1 of their paper, supposedly
showing how filter manipulation can allow a webapp user to bypass LDAP-based
authentication. It's ridiculous drivel though, since LDAP-based
authentication uses Bind requests and not search filters. Most LDAP
deployments don't even give search/compare access to userPassword attributes
in the first place.

Just in case anybody out there might be bitten by this info -
client-enforced security is no security at all. This is why slapd has such
an extensive ACL engine - you enforce access controls on the server, and
then it doesn't matter what kind of garbage requests your clients send to
you, they can only ever access information that they were allowed to access.
This is also why the old pam_ldap authorization scheme was such a bad idea,
it relied on the LDAP client (pam_ldap) to correctly implement
authorization, instead of the server. (Multiply that by hundreds or
thousands of clients and you have an unmanageable, insecurable mess.) This
is why we have nssov today.

Of course, this is no excuse to be sloppy when writing your web apps. But if
you've configured ACLs to adequately protect your data, then it doesn't
matter how sloppy your clients are.

Personally, as a penetration tester and security professional, among other
things, I do not agree. Or only partially.

IMNSHO, the authors have simply translated into the LDAP world exactly the
same problems that a generic web app can have regarding SQLi. I do not think
they are very different, but LDAP problems in this area are much less well
known in the web-app context, probably for reasons of implementation: who
likes to put structured information in an LDAP server instead of a relational
DB? Zero, or nearly so (I do, because I know LDAP, but few others do the
same). And I am equally convinced that when there are deficiencies in a
webapp's input validation, the issues are essentially the same.


Nonsense, the issues are nowhere near the same. SQL is a liability because 
with injection you can construct any statement of your choosing - Query, 
create, modify, or delete information. This liability exists precisely because 
SQL is a text-based language.


For an LDAP scenario the window of vulnerability is restricted to a search 
filter, only. You cannot use injection to turn an LDAP Search request into a 
Modify or Delete request. You cannot destroy or forge data using this attack 
- it is a much smaller attack surface, and the payoff for an attack is 
miniscule. This is a fact - all LDAP-based apps are inherently less vulnerable 
than all SQL-based apps, for this simple reason.
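Even within that smaller window, the standard client-side defense is to escape user input before splicing it into a filter, per the RFC 4515 encoding rules. A minimal sketch (the helper name is illustrative; a real app would normally use its LDAP library's escaping function):

```python
def escape_filter_value(value: str) -> str:
    """Escape the RFC 4515 filter metacharacters in an assertion value."""
    # Backslash must be handled first so the escapes it introduces
    # are not themselves re-escaped.
    for ch, code in (("\\", "5c"), ("*", "2a"), ("(", "28"),
                     (")", "29"), ("\x00", "00")):
        value = value.replace(ch, "\\" + code)
    return value

user_input = "admin)(uid=*"   # attempted filter injection
print("(uid=%s)" % escape_filter_value(user_input))
# -> (uid=admin\29\28uid=\2a)  -- inert text, not extra filter components
```

The injected parentheses and wildcard become literal characters of the assertion value, so the filter still matches (or fails to match) a single attribute.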


And again, with ACLs configured on the server, all any attacker can get is 
access to information they were already permitted to see, which completely 
closes the window of vulnerability.


Most web apps carry out these operations - against the relational or
the LDAP server - using a webapp service user (not the real user), with
data-access privileges that are, in general, much broader. For ease of
implementation, perhaps. And ACLs serve little in these cases, whether
for a relational DB or for an LDAP server.

Why? Because the design of the application is wrong; it does not respect
the principle of least privilege. A long time ago I deployed a web
application based on different levels of security privileges: the
service users could access the backend LDAP server, but only if the
access came from a specific IP address (an application server), and
that user could read only some attributes of a specific OU of a specific
DIT; other authenticated users could write some specific attributes of
a subset of the DIT (and could not read other attributes) - using the
sophisticated ACLs that OpenLDAP offers, different security zones, and
so on.

How many would do it, or actually do it?


Look at the volume of messages on this list related to ACLs - clearly, most 
OpenLDAP admins are both conscious of and conscientious about using effective 
ACLs.



Best Regards and thanks for sharing.





Re: LDAP Injection attacks

2013-10-12 Thread Howard Chu

Michael Ströder wrote:

Howard Chu wrote:

A paper and presentation making the rounds, claiming to show how webapps using
LDAP are vulnerable to search filter spoofing attacks.

http://www.youtube.com/watch?v=wtahzm_R8e4
http://www.blackhat.com/presentations/bh-europe-08/Alonso-Parada/Whitepaper/bh-eu-08-alonso-parada-WP.pdf


Can't imagine that work like this gets peer-reviewed, because it's mostly
garbage. They concoct a scenario in section 4.1.1 of their paper, supposedly
showing how filter manipulation can allow a webapp user to bypass LDAP-based
authentication. It's ridiculous drivel though, since LDAP-based authentication
uses Bind requests and not search filters. Most LDAP deployments don't even
give search/compare access to userPassword attributes in the first place.


Well, this is not really new:
https://www.owasp.org/index.php/LDAP_injection

Anyway, the paper is a bit bloated and the term "code injection" sounds really
overloaded here.

SQL injection attacks are generally much more powerful since an attacker can
also write data. Compared to that manipulating search requests with LDAP
filter injection is not such a massive attack vector.


Agreed.


Just in case anybody out there might be bitten by this info - client-enforced
security is no security at all. This is why slapd has such an extensive ACL
engine - you enforce access controls on the server, and then it doesn't matter
what kind of garbage requests your clients send to you, they can only ever
access information that they were allowed to access.


Ack, but ACLs only protect what's stored inside the LDAP server.

There could be possible attacks when mapping username to wrong user entry or
when reading access control data from wrong LDAP entries based on user's input
which protects other app data.


I suppose in a poorly designed app this is possible. Reading access control 
data from wrong LDAP entries is also wrong design. There is no reason for an 
app to ever read access control data. At most, it only needs to do an LDAP 
Compare operation and let the server verify such data. And again, Compare 
requests aren't vulnerable.





Re: LDAP Injection attacks

2013-10-12 Thread Howard Chu

Michael Ströder wrote:

Howard Chu wrote:

A paper and presentation making the rounds, claiming to show how webapps using
LDAP are vulnerable to search filter spoofing attacks.

http://www.youtube.com/watch?v=wtahzm_R8e4
http://www.blackhat.com/presentations/bh-europe-08/Alonso-Parada/Whitepaper/bh-eu-08-alonso-parada-WP.pdf


Can't imagine that work like this gets peer-reviewed, because it's mostly
garbage. They concoct a scenario in section 4.1.1 of their paper, supposedly
showing how filter manipulation can allow a webapp user to bypass LDAP-based
authentication. It's ridiculous drivel though, since LDAP-based authentication
uses Bind requests and not search filters. Most LDAP deployments don't even
give search/compare access to userPassword attributes in the first place.


Well, this is not really new:
https://www.owasp.org/index.php/LDAP_injection


Indeed, quite old. The blackhat paper is from 2008 and the owasp writeup is 
from 2009. Not sure why they're being discussed currently, particularly since 
so much of the information was flat wrong.


In the blackhat paper Section 4, page 5:

– (attribute=value): If the filter used to construct the query lacks a logic 
operator (OR or AND), an injection like "value)(injected_filter" will result 
in two filters: (attribute=value)(injected_filter). In the OpenLDAP 
implementations the second filter will be ignored, only the first one being 
executed. In ADAM, a query with two filters isn't allowed. Therefore, the 
injection is useless.


In fact libldap was changed in January 2007, over a year before this paper was 
written, to reject improperly constructed filters of this form, for ITS#4648. 
(git commit f1784a54e693d68fc9b9cc1b566aa0880a419d70) Aside from having the 
facts completely wrong, the writing is also poor, conflating the OpenLDAP 
clientside behavior with the server behavior. If in fact you could convince a 
client to generate the BER for such a malformed filter, slapd would still have 
rejected it because it cannot be decoded correctly on the server side (and 
that has *always* been true).


I can't speak to what other LDAP client APIs might do, but regardless, the 
requests described on this page would always fail if they made it to the 
server, and an OpenLDAP client would have rejected the attempt in the first place.



Anyway, the paper is a bit bloated and the term "code injection" sounds really
overloaded here.

SQL injection attacks are generally much more powerful since an attacker can
also write data. Compared to that manipulating search requests with LDAP
filter injection is not such a massive attack vector.





Re: Berkeley DB backend - exact version check

2013-10-16 Thread Howard Chu

Jan Synacek wrote:

Hello,

slapd checks the exact version of BDB at runtime and fails if it doesn't equal
to the version that it's been linked against. Is there a reason for such precise
check? Shouldn't the major and minor version numbers be enough?

The code seems to have been there for a while, so I'm sorry if such question has
been asked before. I couldn't find any answers.


No, our previous experience has been that just the major and minor version 
numbers are not enough. APIs changed quite a bit between the earliest versions 
of a release and later versions. (E.g., 4.1.1 vs 4.1.25, etc.) Check the 
DB_VERSION ifdefs in the code and you'll see.
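The distinction can be sketched abstractly (the version tuples below are illustrative, not the actual slapd code):

```python
# Sketch: why a major.minor check is insufficient when patch releases
# changed the API, as with BDB 4.1.1 vs 4.1.25.
compiled = (4, 1, 1)    # version slapd was built against
runtime  = (4, 1, 25)   # version of the shared library actually loaded

def loose_ok(a, b):
    # major.minor only -- would wrongly accept the 4.1.1/4.1.25 pair
    return a[:2] == b[:2]

def strict_ok(a, b):
    # exact match, as slapd's runtime check requires
    return a == b

print(loose_ok(compiled, runtime), strict_ok(compiled, runtime))
# -> True False
```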





Re: Subject Alternative Name in TLS - does this work?

2013-10-18 Thread Howard Chu

Aleksander Dzierżanowski wrote:

On Fri, Oct 18, 2013 at 11:25:59AM +0100, lejeczek wrote:



[...]

my case is, well should be a lot more simpler, one box with

slapd.local.domain
slap.public.external

and this one host I would like to be able to search through on/via
both hostnames/IPs with TLS
so I issue myself and sign a certificate, CA issuer is
CA.local.domain

Subject: .. CN=slapd.local.domain/email.
and
X509v3 Subject Alternative Name:
 DNS:slap.public.external, IP Address:ex.te.rn.al



Please add slapd.local.domain also to SAN and problem will be fixed.


Nonsense. Unnecessary.




Re: Antw: Re: RE24 testing call (OpenLDAP 2.4.37)

2013-10-24 Thread Howard Chu

Ulrich Windl wrote:

Quanah Gibson-Mount qua...@zimbra.com schrieb am 23.10.2013 um 19:12 in

Nachricht 921249C16E0FB2B352961386@[192.168.1.93]:

--On Wednesday, October 23, 2013 6:40 PM +0200 Patrick Lists
openldap-l...@puzzled.xs4all.nl wrote:


Hi Quanah,


On 10/22/2013 10:27 PM, Quanah Gibson-Mount wrote:

If you know how to build OpenLDAP manually, and would like to
participate in testing the next set of code for the 2.4.37 release,
please do so.

Generally, get the code for RE24:

http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=snapshot;h=refs/heads/OPENLDAP_REL_ENG_2_4;sf=tgz


Configure & build.

Execute the test suite (via make test) after it is built.


On a CentOS 6.4 x86_64 VM I downloaded RE24 git rev f9e417a, did make
test and scrolling back all tests were OK.

Is there perhaps a summary file somewhere which contains the results of
all tests? That would be a lot easier and quicker than scrolling back a
zillion lines.


If a test fails, the test suite will stop. ;)


Most test suites work differently, and they write a summary line at the end

(like # tests succeeded, # tests failed, # tests skipped)

The OpenLDAP test suite's default behavior hasn't changed in 15 years. It was 
written for developers; developers should fix problems as soon as they are 
detected.


In recent releases, the behavior you describe was added. You enable it by 
setting the NOEXIT environment variable. Packagers tend to want this behavior. 
Note that the underlying assumption of the OpenLDAP Project is that we are 
developing source code and releasing it to be read by other developers. When 
we preface an announcement with If you know how to build OpenLDAP manually 
that should also mean, at the very least, you are not afraid to read 
Makefiles and shell scripts and thus there is no need to explain any of this.





Re: Syncrepl with subordinate databases

2013-10-24 Thread Howard Chu

Robert Minsk wrote:

A summary of what I posted below. I have several subordinate databases, and
each subordinate database acquires its data via a refreshOnly syncrepl.
Instead of storing the contextCSN on each subordinate database, the contextCSN
gets stored on the superior database. As a result the superior database's
contextCSN is the maximum of the subordinate databases'. This causes all but
the syncprov server with the latest contextCSN to abort the sync with
consumer state is newer than provider!

It seems a configuration option needs to be added that allows storing and
reading of the contextCSN on the subordinate databases as well as the maximum
contextCSN on the superior database.


Use a unique ServerID per provider.
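The serverID matters because it is embedded in every CSN, giving each provider its own slot in the contextCSN attribute. A small sketch of the CSN layout (example values; the parser is illustrative, not OpenLDAP code):

```python
# A CSN is: UTC-timestamp '#' change-count '#' serverID '#' mod-number,
# with the numeric fields in hexadecimal.
from collections import namedtuple

CSN = namedtuple("CSN", "timestamp count sid mod")

def parse_csn(s):
    ts, count, sid, mod = s.split("#")
    return CSN(ts, int(count, 16), int(sid, 16), int(mod, 16))

# Two contextCSN values as might be seen in a multi-provider setup;
# each provider with a unique serverID contributes its own value.
a = parse_csn("20131024123456.123456Z#000000#001#000000")
b = parse_csn("20131024123455.000000Z#000000#002#000000")
assert a.sid != b.sid      # distinct providers, distinct CSN slots
print(a.sid, b.sid)        # -> 1 2
```

With unique ServerIDs, the consumers compare like against like instead of flagging one provider as "newer" than another.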




Re: Antw: use openssl or moznss for more than TLS?

2013-10-26 Thread Howard Chu

devzero2000 wrote:

On Fri, Oct 25, 2013 at 7:59 PM, Michael Ströder mich...@stroeder.com wrote:

Steve Eckmann wrote:

We are using {SSHA} (SHA-1) in OpenLDAP now. The customer wants SHA-512.
And they require a FIPS-validated implementation, which I think narrows our
options to using either OpenSSL or NSS in FIPS mode. I cannot see a better
way to meet the customer's two requirements than gutting pw-sha2 and using
that as a thin wrapper for the raw crypto functions in either openssl or
nss.


You probably should first ask on the openssl-users mailing list under which
conditions you get some FIPS-validated code regarding the whole OpenLDAP
application. Likely it's not feasible.

I'm pretty sure that your customer FIPS requirement is plain nonsense and you
might work around this by some other strange policy text. ;-}

I am not sure it's nonsense, given that some distros are doing something
in this area. Rightly or, perhaps, sometimes wrongly (or perhaps
sometimes broken).
http://fedoraproject.org/wiki/FedoraCryptoConsolidation


FIPS spec is clearly nonsense, from a technical perspective. E.g. it required 
support of Dual_EC_DRBG random number generator which was proven to be 
inferior and almost certainly has an NSA backdoor.


The Fedora crypto consolidation rationale is completely bogus; in unifying the 
crypto database across the entire machine it fails to recognize that different 
services (and different clients) will have completely different security 
policies and requirements. Of course this is common knowledge for actual 
security practitioners - servers generally should only recognize and trust 
client certificates issued by a single CA, while clients generally need to be 
able to trust many different CAs for a wide range of services. HTTP servers 
have much different certificate requirements from LDAP/FTP/SMTP servers; 
almost nobody uses client certificates with HTTP but they are commonplace for 
many other authenticated services.


Of course, the Fedora crypto scene is dictated more by political concerns than 
technical. https://bugzilla.redhat.com/show_bug.cgi?id=319901





Re: RE24 testing call (OpenLDAP 2.4.37)

2013-10-26 Thread Howard Chu

Brandon Hume wrote:

On 10/25/13 01:23 PM, Quanah Gibson-Mount wrote:


Thanks for the report.  Hopefully it's an issue with the sun compiler. :P


That may be the case.  Built with gcc 4.8, all the tests run fine.  One
issue I ran into, right at the beginning, was slapd dying due to an
undefined reference to fdatasync(). back_mdb-2.4-releng.so.2.9.2 wasn't
linked to librt.so.  Fixed that and a 'make test' completed successfully.

I'm going to mess around, trying 32-bit builds, and also Studio 12.3
w/64 bit and see if it's specific to 12.1 + 64 bit.  But for the moment
it looks like, at the least, Sol 10 + Studio 12.1 + 64 bit may be a no-go.


Fwiw, I built with Studio 12.2 (both 32 and 64 bit SPARC) on Solaris 10 and 
had no errors.





Re: Antw: Re: Trouble with delta-syncrepl MMR: delta-sync lost sync on X, switching to REFRESH

2013-10-29 Thread Howard Chu

Ulrich Windl wrote:

Quanah Gibson-Mount qua...@zimbra.com schrieb am 28.10.2013 um 18:14 in

Nachricht 97CEE97E42D7FD3A860611C8@[192.168.1.93]:

--On Monday, October 28, 2013 5:14 AM +0100 Patrick Lists
openldap-l...@puzzled.xs4all.nl wrote:


Hi,

I'm trying to create a 2-node delta-syncrepl Multi-Master setup with the
help of the Admin Guide, man pages and
tests/scripts/test063-delta-multimaster. I see the following problem
repeat on the slave master aka ldap02 which initially syncs with ldap01
aka the primary master:


Why do you have your cn=config db reading from the same accesslog for
replication as your primary DB?

If you are going to set up cn=config AND your primary db both as
delta-syncrepl, you're going to need 2 different accesslog DBs.


Hi!

Why? That's completely unobvious: accesslog is write-only, right? So why can't 
two sources write to one accesslog?


They can, but it means your syncrepl consumers must use a more specific filter 
to extract the modifications that are relevant. Also it's useful to use 
separate logs because different databases will have different rates of 
modifications, and you can thus configure a log purge interval more suited to 
each.
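A minimal slapd.conf sketch of the two-log arrangement (suffixes, directories, and purge intervals are illustrative):

```
# Log database for the primary data DB
database mdb
suffix "cn=log-data"
directory /var/lib/ldap/log-data

# Primary DB, writing its changes to that log
database mdb
suffix "dc=example,dc=com"
directory /var/lib/ldap/data
overlay accesslog
logdb "cn=log-data"
logops writes
logpurge 07+00:00 01+00:00   # busy DB: keep 7 days, check daily

# cn=config would use a second, separate log DB (e.g. "cn=log-config")
# with its own, likely longer, logpurge interval.
```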





Re: limit on number of members in groupOfNames

2013-10-29 Thread Howard Chu

Dorit Eitan wrote:

Hello,
I am using openldap 2.4.23-32, on a rhel6 machine.  I am trying to form groups
with many members.  I encountered a problem that when I try to form a group
with over 670 members, it fails with a message:

Unknown error - 80.

My group is of the following object classes:

objectClass: top
objectClass: groupOfNames

I have searched and found no documentation on the limit of group members
number.  What am I doing wrong?


There is no coded limit on the number of group members, therefore there is nothing 
to document. You need to see what the actual error message was that slapd 
logged. Most likely you've run out of BDB locks or some other BDB config needs 
to be increased.
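If the log does point at BDB lock exhaustion, the limits are raised in the database directory's DB_CONFIG file. A sketch with illustrative values (tune them to your entry and group sizes, then run db_recover/restart slapd):

```
# DB_CONFIG: raise the BerkeleyDB lock-table limits
set_lk_max_locks   3000
set_lk_max_objects 3000
set_lk_max_lockers 1500
```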





Re: OpenLDAP with ssl client certs

2013-11-01 Thread Howard Chu

Brent Bice wrote:

 I was recently asked if we could use ssl client certs as a 2nd form
of authentication with OpenLDAP and didn't know for sure.  Is it
possible to have OpenLDAP require both a DN/password pair *and* a client
ssl cert?


You can make the server require a client cert, but it won't use the 
certificate identity for anything unless you Bind with SASL/EXTERNAL.


http://www.openldap.org/doc/admin24/sasl.html#EXTERNAL

And naturally, if you're using SASL, then the DN/password pair is ignored.
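A configuration sketch of that combination (directive values and DN patterns are illustrative and must match your PKI's conventions):

```
# slapd.conf: refuse TLS sessions that present no valid client cert
TLSVerifyClient demand

# Map the cert's subject DN (the SASL/EXTERNAL authcid) onto a
# directory entry for authorization purposes.
authz-regexp
    "cn=([^,]+),ou=people,o=example"
    "uid=$1,ou=users,dc=example,dc=com"
```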




Re: OpenLDAP with ssl client certs

2013-11-01 Thread Howard Chu

Michael Ströder wrote:

Howard Chu wrote:

Brent Bice wrote:

  I was recently asked if we could use ssl client certs as a 2nd form
of authentication with OpenLDAP and didn't know for sure.  Is it
possible to have OpenLDAP require both a DN/password pair *and* a client
ssl cert?


You can make the server require a client cert, but it won't use the
certificate identity for anything unless you Bind with SASL/EXTERNAL.

http://www.openldap.org/doc/admin24/sasl.html#EXTERNAL

And naturally, if you're using SASL, then the DN/password pair is ignored.


BTW:

In case of client certs the cert's subject-DN is the authc-DN which can be
directly used in authz-regexp which very much ties the mapping to subject-DN
conventions of the PKI.

But in some cases it would be very handy to map a distinct client cert to an
authz-DN by issuer-DN/serial or even by fingerprint.  One use-case is cert
pinning of client certs with revocation checking done off-line.

Should I file an ITS for that?


I would reject such an ITS. Cert-pinning is an issue for clients that have a 
very large collection of trusted CAs. The Admin Guide clearly states that 
servers should only trust a single CA - the CA that signed its own certs and 
the certs of its clients. In that case, no one else can issue a valid cert 
with the same subjectDN.





Re: OpenLDAP on CF disk

2013-11-07 Thread Howard Chu

Michael Ströder wrote:

Maucci, Cyrille cyrille.mau...@hp.com wrote:

You should be specifying shm-key to benefit from shared mem vs memory mapped
files


I wonder whether switching to back-mdb would be a better solution.


If the machine is 64 bit, yes absolutely. If the machine is 32 bit, depends on 
how large the DB can grow, since there's a 2-3GB limit on the address space.


Also while the main DB file would be 99% read-only, the lock.mdb file would 
not. In an application like this it's a good idea to symlink the lock.mdb file 
to a RAM filesystem/tmpfs.
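The relocation can be sketched as follows. The paths here are placeholders (mktemp) purely so the sketch is self-contained; on a real system DB_DIR would be /var/lib/ldap and TMPFS_DIR a tmpfs mount such as /dev/shm, and slapd must be stopped first.

```shell
# Sketch: symlink LMDB's lock.mdb onto a tmpfs so routine read traffic
# doesn't wear the CF card. Placeholder directories for demonstration:
DB_DIR="$(mktemp -d)"      # normally /var/lib/ldap
TMPFS_DIR="$(mktemp -d)"   # normally a tmpfs such as /dev/shm

touch "$DB_DIR/lock.mdb"   # stands in for the real lock file
mv "$DB_DIR/lock.mdb" "$TMPFS_DIR/lock.mdb"
ln -s "$TMPFS_DIR/lock.mdb" "$DB_DIR/lock.mdb"
ls -l "$DB_DIR/lock.mdb"
```

Losing the tmpfs copy at reboot should be harmless, since the environment recreates a missing lock file when it is next opened (through the still-present symlink).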


Ciao, Michael.


Le 7 nov. 2013 à 12:04, richard lucassen mailingli...@lucassen.org a
écrit :
Hello list,

I want to migrate some OpenLDAP servers from 3.5 disks to CF-disks.
The data in the OpenLDAP is only updated once a month or so. It is just
an 99%-read-only LDAP implementation.

However, with a standard Debian install, some files in
the /var/lib/ldap directory are updated upon each query:

# ls -altr
-rw-r--r--  1 openldap openldap  96 2008-11-19 11:45 DB_CONFIG
drwxr-xr-x 28 root root4096 2008-12-03 15:03 ..
-rw---  1 openldap openldap8192 2013-04-08 10:50 cn.bdb
-rw---  1 openldap openldap   24576 2013-09-29 13:49 objectClass.bdb
-rw---  1 openldap openldap  180224 2013-09-29 13:49 id2entry.bdb
-rw---  1 openldap openldap8192 2013-09-29 13:49 entryUUID.bdb
-rw---  1 openldap openldap8192 2013-09-29 13:49 entryCSN.bdb
-rw---  1 openldap openldap   36864 2013-09-29 13:49 dn2id.bdb
-rw---  1 openldap openldap 1168654 2013-10-17 09:40 log.01
-rw---  1 openldap openldap   24576 2013-11-07 05:45 __db.005
-rw---  1 openldap openldap   98304 2013-11-07 05:45 __db.003
-rw-r--r--  1 openldap openldap4096 2013-11-07 05:45 alock
drwx--  2 openldap openldap4096 2013-11-07 05:45 accesslog
drwx--  3 openldap openldap4096 2013-11-07 05:45 .
-rw---  1 openldap openldap  565248 2013-11-07 11:30 __db.004
-rw---  1 openldap openldap 2629632 2013-11-07 11:30 __db.002
-rw---  1 openldap openldap8192 2013-11-07 11:30 __db.001

Apparently the cluster is doing some synchronizing at 05:45 in the
morning, but that's once a day. My concern is the files called

__db.001
__db.002
__db.004

Is there a simple way to prevent OpenLDAP from updating these files at
each query?

R.










Re: OpenLDAP DB question

2013-11-07 Thread Howard Chu

Dheeraj Khanna wrote:

Thanks Michael

I could not see a specific config setting in ldap.conf which was shown in the
document. Basically I want to add another level of authentication where I can
configure my host's ldap.conf to reflect which user/groups can be allowed to
access a specific host.

I am not able to find the correct syntax which needs to be entered in host's
ldap.conf file


Better solution is to use slapo-nssov(5) and host/service authorization.


Please advise when you get a chance.

Thanks

Dheeraj


On Wed, Oct 30, 2013 at 10:12 AM, Michael Proto michael.pr...@tstllc.net
mailto:michael.pr...@tstllc.net wrote:

Try this:

http://www.redhat.com/resourcelibrary/whitepapers/netgroupwhitepaper

It talks about RedHat Directory Server but you can skip that part and go
straight to the Populating the Directory portion and go from there. It
mentions using NetGroups and PAM to facilitate access to systems based on
group membership.
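For reference, a netgroup in LDAP is just an entry of objectClass nisNetgroup whose triples name (host,user,domain). A hypothetical example entry (the DNs and names are invented):

```
dn: cn=webadmins,ou=netgroup,dc=example,dc=com
objectClass: top
objectClass: nisNetgroup
cn: webadmins
# triples are (host,user,domain); an empty field acts as a wildcard
nisNetgroupTriple: (web01.example.com,alice,)
nisNetgroupTriple: (web01.example.com,bob,)
```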


-Proto


On Wed, Oct 30, 2013 at 12:31 PM, Dheeraj Khanna dheer...@zoosk.com
mailto:dheer...@zoosk.com wrote:

Hi

I wanted to find if I can add a host based authentication, here is my
setup.

Regular LDAP DB , I use group and users and associate permissions to
users based on groups. What I want to achieve is this:

*If User A is a member of Group A and has access to hostA, allow;
else deny. This will allow me to limit access to certain server types
based on user groups. I think we can define this in /etc/ldap.conf but
I could not find the right syntax to add hosts in this config
file.*

*Question: *I do not know how to add this ou called hostaccess, I
used a GUI portal called Apache Directory Studio to add/delete users
and groups.

If some one knows how to add hosts in LDAP and be able t map groups
and users to it that would greatly help me.

Thanks

Dheera









Re: ldapmodify replace olcAccess

2013-11-11 Thread Howard Chu

Покотиленко Костик wrote:

On Tue, 22/10/2013 at 18:37 -0700, Daniel Jung wrote:

Hi all,

Is it possible to use replace instead of delete-then-add
for olcAccess?


Why didn't you just try it out and see for yourself?


dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcDbCacheSize
olcDbCacheSize: 10240
-
replace: olcAccess
olcAccess: {0}to dn.base="" attrs=namingContexts by * none
olcAccess: {1}to * by * read
-


Yes, that would work. You don't need to provide the {x} prefixes unless you're 
adding in a non-sequential order.



Possible. You replace all values with the first one, then add the rest
of the values, like this:

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to dn.base="" attrs=namingContexts by * none
-
add: olcAccess
olcAccess: {1}to * by * read
-


There's no need to break it up that way.




Re: openldap syncrepl issue

2013-11-11 Thread Howard Chu

Michael Ströder wrote:

Chris Card wrote:

I am running openldap 2.4.36 with BDB for my main backend db, and a
multi-master replication setup using delta-syncrepl with MDB for the
cn=accesslog db.

I monitor the contextCSN to check that replication is in sync, but I've
noticed what looks like a bug:

If I try to delete a non-existent DN from the main db on machine A, I see
the delete attempt in the cn=accesslog db on machine A with status 32, but the
contextCSN of the main db is not changed, as expected.

On machine B the contextCSN of the main db is updated, as if the delete had
succeeded, and then machine A appears to be behind machine B according to the
contextCSN values.

Is this a known bug?


In delta-syncrepl the consumer is supposed to be configured to only receive 
updates from the log whose reqResult=0. Otherwise the consumer shouldn't 
receive anything at all.
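A consumer-side sketch showing that filter (the parameters other than logfilter are illustrative placeholders):

```
syncrepl rid=001
    provider=ldap://ldap01.example.com
    type=refreshAndPersist
    searchbase="dc=example,dc=com"
    logbase="cn=accesslog"
    # only successful writes (reqResult=0) are replayed from the log
    logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
    syncdata=accesslog
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret
```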


Are you using slapo-memberof or slapo-refint?

If yes, you're probably hitting ITS#7710 which was fixed recently in OpenLDAP
2.4.37:

http://www.openldap.org/its/index.cgi?findid=7710

Ciao, Michael.






