Re: Empty Strings attributes in OpenLDAP

2012-10-09 Thread Howard Chu

Emilio García wrote:

Dear all,

Is there a way to define a string attribute to be empty in the OpenLDAP
schema? I read that RFC 2252 doesn't allow empty strings, but is there any
workaround for this, like a different SYNTAX or attribute type? We have
preexisting software that tries to write empty strings, and it is crashing
because of this.


google site:www.openldap.org zero-length strings

This has been hashed out numerous times. Your software is broken.
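For reference: since the Directory String syntax forbids zero-length values, the standards-compliant client-side fix is to delete the attribute instead of writing an empty value. A hypothetical LDIF sketch (entry and attribute names invented):

```ldif
dn: uid=bob,dc=example,dc=com
changetype: modify
delete: description
-
```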
--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: one scope indexing for mdb

2012-10-13 Thread Howard Chu

Roman Rybalko wrote:

AFAIK mdb does not use any index shortcut for the one-level search scope.
That makes it almost impossible to view the tree in a GUI tool.
GUI tools often use one-level searches, and graphical viewing tools are
frequently needed for administration tasks.

Are there any plans to implement a scope=one search optimization for mdb?


Feel free to submit a patch to the ITS.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: MDB Status

2012-10-13 Thread Howard Chu

Jonathan Clarke wrote:

On 28/09/12 02:38, Howard Chu wrote:

For those who haven't been following along, support for OpenLDAP's MDB
(memory-mapped database) library is also available for several other
open source projects, including Cyrus SASL (sasldb mech), Heimdal
Kerberos (hdb module), SQLite3, OpenDKIM, and MemcacheDB. A
work-in-progress patch for Postfix is also available, with a final
version coming soon. A backend for SQLite4 is also in the works. A
port of Android (JellyBean) for the Motorola Droid4 using MDB/SQLite3
is in progress (since my current phone is a Droid4).


Thanks for the update Howard.


Other projects are also in progress and will be announced in the near
future. The current list is also posted on
http://highlandsun.com/hyc/mdb/ - feel free to suggest other projects.


CFEngine is a similar project that would undoubtedly benefit from the
use of MDB - it's a lightweight, C program, to apply configuration
automatically to many servers across many operating systems and
architectures. They have used BerkeleyDB in the past, but abandoned it
for TokyoCabinet, although I know not all users are happy about that...

See http://www.cfengine.com and
https://github.com/cfengine/core/blob/master/src/dbm_tokyocab.c for the
source implementation of the TokyoCabinet usage.


Interesting. It would be a pretty simple adaptation, but I don't see any API 
for setting database-specific options. Being able to configure mapsize is 
essential; being able to configure maxreaders is often necessary too.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: ACI module

2012-10-14 Thread Howard Chu

Brian Empson wrote:

Is it possible to load ACI support as a module? It would really help when
installing from a precompiled package, since all of them seem to turn off this
very powerful feature. (I really like it anyway.)


The source code is there, build it however you like. One of the reasons open 
source software exists is to free you from whatever limitations are imposed by 
a binary provider. If you tie yourself to precompiled packages that don't do 
exactly what you want, you're somewhat missing the point.


ACIs are a security liability from a centralized administration perspective. 
They are intrinsically difficult to audit and difficult to track. Use of ACIs 
can make it impossible to prove that a given deployment correctly implements a 
formally defined security policy. IME, people who are fond of ACIs either 
don't understand the security risks, or don't care.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: mdb_stat question

2012-10-16 Thread Howard Chu

Frank Swasey wrote:

If I've failed to find this on the mailing list or in my Google searches,
I am sorry. However, as I'm experimenting with using mdb as the backend
for OpenLDAP 2.4.33 on RHEL 6.3, I'm wondering what I should be looking
for when I run mdb_stat and it tells me:


Huh? I'm wondering what you're looking for when you run mdb_stat. What did you 
expect to see?



Page size: 4096
Tree depth: 2
Branch pages: 1
Leaf pages: 2
Overflow pages: 0
Entries: 59




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: mdb_stat question

2012-10-16 Thread Howard Chu

Frank Swasey wrote:

Today at 4:42am, Howard Chu wrote:

Huh? I'm wondering what you're looking for when you run mdb_stat. What did
you expect to see?


I guess I was expecting to see something more like the output of db_stat,
which I ran looking for deadlocks and for evidence that any limits were
starting to be hit, so that I would know to tweak the DB_CONFIG file and
restart.



Since mdb doesn't have any of those issues, then I guess I'm just really
looking for guidance about the use of mdb_stat and what I should be
monitoring to be sure that I'm not on the brink of something breaking.


Right. Since MDB is deadlock proof there's no information of that sort to show.


Perhaps I have read the existing documentation too quickly.  However, so
far, I don't think I've found any guidance about how to monitor the
back_mdb to stay on top of possible issues.


There are only two settings to worry about: mapsize and maxreaders. If your 
maxreaders is greater than the number of slapd threads, you probably don't 
have to worry about it. For the mapsize, you can just use du on the MDB 
directory to see how much space is in use.
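For concreteness, both knobs appear in slapd's back-mdb configuration; a hypothetical slapd.conf fragment (values illustrative; see slapd-mdb(5)):

```
database mdb
suffix "dc=example,dc=com"
directory /var/lib/ldap
# 10GB map; raise it before the data outgrows the old limit
maxsize 10737418240
# keep this above the number of slapd threads
maxreaders 126
```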


There's really not much more to tell.

These tools are definitely immature though. In the source tree you'll also 
find mdb_stata.c and mfree.c; their functionality is intended to be merged 
into a more useful mdb_stat command at some point in the future.


We'll likely also add some function to dump out the contents of the readers 
table. But the need for these things is pretty low.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: mdb_stat question

2012-10-17 Thread Howard Chu

Roman Rybalko wrote:

17.10.2012 04:19, Howard Chu wrote:

Have a look at mdb_stat in git mdb.master commit
617769bce5bcac809791adb11301e40d27c31566

Use options -e and -f, that should give you everything you want.
Feedback appreciated, I doubt this is its final form yet.


Thanks!
Nice tool!

But how would I determine the storage usage percentage?
((Number of pages used) - (Free pages)) / (Max pages) * 100


Yes.


or just
(Number of pages used)/(Max pages)*100
?


No.


I mean, is there a possible situation where (Number of pages used) == (Max
pages) and (Free pages) != 0?


Yes.



root@log:~# mdb_stat -e /mnt/data/ldap/2
Environment Info
   Map address: (nil)
   Map size: 15032385536
   Page size: 4096
   Max pages: 3670016
   Number of pages used: 1042935
   Last transaction ID: 2802459
   Max readers: 126
   Number of readers used: 18
Status of Main DB
   Tree depth: 1
   Branch pages: 0
   Leaf pages: 1
   Overflow pages: 0
   Entries: 12
root@log:~# mdb_stat -f /mnt/data/ldap/2
Freelist Status
   Tree depth: 3
   Branch pages: 7
   Leaf pages: 673
   Overflow pages: 1
   Entries: 2745
   Free pages: 140824
Status of Main DB
   Tree depth: 1
   Branch pages: 0
   Leaf pages: 1
   Overflow pages: 0
   Entries: 12
root@log:~#




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Using replication LDAP

2012-10-23 Thread Howard Chu

Daniel Lopes de Carvalho wrote:

Rodrigo,

Try this:

...
type=refreshOnly
retry=60 30 300 +
...


When replying to questions, please post what your answer *means*, not just a 
canned response. Our purpose here is to help people to *understand* how to do 
things on their own, not to just give them meaningless snippets to memorize.


This particular question is addressed in the slapd.conf(5) manpage. The right 
answer is RTFM.
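For readers landing here from a search, the manpage's answer in brief: per slapd.conf(5), retry is part of the syncrepl statement and takes a list of <interval> <retries> pairs, with '+' meaning retry indefinitely. A hypothetical addition to the consumer config above:

```
syncrepl rid=123
        provider=ldap://10.26.7.45:389
        # retry every 60 seconds up to 30 times, then every 300 seconds forever
        retry="60 30 300 +"
        ...
```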



Daniel

On Tue, Oct 23, 2012 at 10:39 AM, rodrigo tavares
rodrigofar...@yahoo.com.br wrote:

Hello !

I'm trying to set up replication from a master to a slave.

My master configuration:

moduleload syncprov.la
moduleload back_monitor.la
moduleload back_bdb

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100

Slave:

syncrepl rid=123
provider=ldap://10.26.7.45:389
type=refreshOnly
interval=00:00:05:00
searchbase=dc=company,dc=mg,dc=gov,dc=br
filter=(objectClass=organizationalPerson)
scope=sub
attrs=*,+
schemachecking=off
bindmethod=simple
binddn=cn=admin,dc=company,dc=mg,dc=gov,dc=br
credentials=secret


In syslog I see this message:

Oct 23 10:23:32 replica slapd[1827]: syncrepl rid=123
searchbase=dc=company,dc=mg,dc=gov,dc=br: no retry defined, using default.

How can I resolve it?

Best regards,

Rodrigo Faria










--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: compile 2.4.33 on AIX 6.1 with IBM vac - mdb.c - mdb_cursor_pop fails

2012-10-25 Thread Howard Chu

Howard Allison wrote:

Hi
I've compiled OpenLDAP 2.4.33 on AIX 6.1 and had to edit the file
libraries/libmdb/mdb.c. In mdb_cursor_pop I had to comment out the
#if MDB_DEBUG directive to make *top visible:
*top visible.
/*
#if MDB_DEBUG
*/
	MDB_page *top = mc->mc_pg[mc->mc_top];
/*
#endif
*/

Is this something particular to xlc?


Sounds like it. It seems that your compiler/preprocessor doesn't support CPP
macros with a variable number of arguments.



Thanks
Howard Allison



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Newbie question about host base authentication

2012-10-29 Thread Howard Chu

Dan White wrote:

On 10/29/12 13:23 +0100, Simone Scremin wrote:

Hi all,

I'm in the process of learning the OpenLDAP authentication mechanics.

I need to know the best way to configure a host-based authentication system
that allows a per-user rule specifying a group of hosts to which the user is
allowed to log in.

For example:

user Bob needs to authenticate on systems:

sys01pra
sys02pre
sys03pra
sys03pre

some configuration on the LDAP server enables these hostnames for Bob with a
regular expression like:

sys0*pr*

Is this feasible?


Assuming that you will be using a PAM module on each host, the answer to
that question will depend on which PAM module you choose, and what
configuration it supports.

If that module supports placing a filter within the PAM configuration, then
'host=sys0*pr*' should work.


The PADL pam_ldap module has no such feature. The OpenLDAP nssov overlay does.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: max. numbers of subordinate databases

2012-10-29 Thread Howard Chu

Dieter Klünter wrote:

Hello,
I have a strange requirement to set up a server with as many subordinate
databases as possible. In a test environment I could create any number of
databases, but a one-level search returns the results of 27 databases and
after that an internal error is reported:
(thread_pool_setkey failed err (12)). I ran these tests with back-hdb and
back-mdb.
Is there any way to increase the number of subordinate databases?


There is no limit on the number of subordinate databases. (You could, e.g.,
define a huge number of back-ldap instances if you really wanted to. But if you
actually wanted to do this, you're probably doing something wrong...)


Your error is due to hitting a limit on the number of thread-specific keys 
defined in libldap_r, which is arbitrarily defined as 32. Change tpool.c 
MAXKEYS to whatever number you need it to be. Note that increasing this number 
will also consume more memory in each thread's local stack; setting this to 
thousands would probably be a bad idea.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Memory usage with bdb

2012-11-01 Thread Howard Chu

Roman Rybalko wrote:

01.11.2012 20:24, Friedrich Locke wrote:

Is anyone aware of any bug in openldap+bdb related to memory usage?
Is anyone here using this type of backend? Any problems to report?


It's leaking, I think. It needs regular restarts.
slapd-2.4.23+libdb-4.8.30
slapd-git-a54957b+libdb-4.8.30


If you can show concrete evidence of a leak, please file an ITS with your 
configuration, sample workload, and the memory leak report. (E.g. from 
valgrind, tcmalloc leak check, or some other memory tracking tool.)


I routinely test with valgrind and tcmalloc, and have not seen any memory 
leaks in back-bdb/hdb in many years.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Index Add Failures

2012-11-01 Thread Howard Chu

Kyle Smith wrote:

Ok, I have been running 2.4.32 for some time with no issues. Yesterday, two
different servers (both part of a 4-way MMR) produced an index add failure
and an index delete failure. I went back over the bdb DB_CONFIG settings
(listed below) and everything looks nominal to me. Would it just make more
sense to switch from bdb to mdb instead of troubleshooting these random
errors too much? I also noticed that the number of deadlocks corresponds to
the number of errors that were produced. Is there a correlation there?


Probably, but that's not an indication of any actual failure. Deadlocks are 
normal occurrences in BerkeleyDB and the backends automatically retry when 
they occur. You can basically ignore any error that accompanies a deadlock.


And yes, if you switch to MDB, all of these issues go away.

Given that reads in MDB are 5-20x faster than BDB, and writes are 2-5x faster, 
and MDB uses 1/4 as much RAM as BDB, there's hardly any reason to use BDB any 
more*. No tuning, no maintenance. MDB just works, quickly and efficiently.


*If you're still using a 32 bit machine, you may be better off using BDB, 
especially if you have databases 1GB or larger. But seriously, why are you 
still using a 32 bit machine?



Thanks!


578 Last allocated locker ID
0x7fff  Current maximum unused locker ID
9   Number of lock modes
3000Maximum number of locks possible
1500Maximum number of lockers possible
1500Maximum number of lock objects possible
1   Number of lock object partitions
15  Number of current locks
1029Maximum number of locks at any one time
17  Maximum number of locks in any one bucket
0   Maximum number of locks stolen by for an empty partition
0   Maximum number of locks stolen for any one partition
123 Number of current lockers
224 Maximum number of lockers at any one time
15  Number of current lock objects
526 Maximum number of lock objects at any one time
5   Maximum number of lock objects in any one bucket
0   Maximum number of objects stolen by for an empty partition
0   Maximum number of objects stolen for any one partition
3581M   Total number of locks requested (3581768929)
3581M   Total number of locks released (3581768869)
0   Total number of locks upgraded
77  Total number of locks downgraded
7041Lock requests not available due to conflicts, for which we waited
43  Lock requests not available due to conflicts, for which we did not wait
2   Number of deadlocks
0   Lock timeout value
0   Number of locks that have timed out
0   Transaction timeout value
0   Number of transactions that have timed out
1MB 392KB   The size of the lock region
0   The number of partition locks that required waiting (0%)
0   The maximum number of times any partition lock was waited for (0%)
0   The number of object queue operations that required waiting (0%)
577 The number of locker allocations that required waiting (0%)
32148   The number of region locks that required waiting (0%)
5   Maximum hash bucket length


On Wed, Aug 29, 2012 at 12:04 PM, Quanah Gibson-Mount qua...@zimbra.com wrote:

--On Wednesday, August 29, 2012 11:32 AM -0400 Kyle Smith
alacer.cogita...@gmail.com wrote:

Quanah, Thanks for the info, I have confirmed I'm hitting the lock maxes
of 1000. And I will be upgrading to 2.4.32. I was wondering, what steps
should be done to have the changes in DB_CONFIG take effect?


stop slapd
make changes to DB_CONFIG
db_recover
start slapd


Will this also auto remove the log.* files? ( I plan on setting this:
set_flags DB_LOG_AUTOREMOVE in DB_CONFIG)


If you have checkpointing set in slapd.conf/cn=config, it should, yes.

--Quanah


--

Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.

Zimbra ::  the leader in open source messaging and collaboration





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: general protection fault when deleting members from a large group (mdb)

2012-11-03 Thread Howard Chu

Mundry, Marvin wrote:

Hi,
I was trying to remove, one by one, all members from a group initially
having 48428 members.



After some time ldapmodify stops, as slapd has stopped running. In the syslog
I see the following message:

Nov  3 15:05:45 oscar slapd[11040]: conn=1000 op=133 MOD 
dn=cn=students,ou=groups,dc=university,dc=com
Nov  3 15:05:45 oscar slapd[11040]: conn=1000 op=133 MOD attr=member
Nov  3 15:05:45 oscar slapd[11040]: conn=1000 op=134 MOD 
dn=cn=students,ou=groups,dc=university,dc=com
Nov  3 15:05:45 oscar slapd[11040]: conn=1000 op=134 MOD attr=member
Nov  3 15:05:45 oscar slapd[11040]: conn=1000 op=133 RESULT tag=103 err=0 text=
Nov  3 15:05:45 oscar kernel: [21207.720483] slapd[11045] general protection 
ip:7fae665cbc1f sp:7faded9c9fc0 error:0 in 
back_mdb-2.4.so.2.8.5[7fae665a5000+3]


Am I doing something wrong, or is there a bug in my slapd?


slapd should never crash. Any crash is a bug. Please submit a report to the 
ITS, including the relevant parts of your configuration and the backtrace from 
the crash.



I am running openldap-2.4.33 on Ubuntu 12.10, compiled with:
--enable-dynamic=yes --enable-syslog=yes --enable-slapd=yes --enable-dynacl=yes 
--enable-crypt=yes --enable-modules=yes --enable-rewrite=yes --enable-bdb=mod 
--enable-hdb=mod --enable-ldap=mod --enable-mdb=mod --enable-meta=mod 
--enable-monitor=mod --enable-overlays=mod --with-tls=openssl



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: OpenLDAP on SMP/multicore environment

2012-11-12 Thread Howard Chu

Friedrich Locke wrote:

Hi folks,

I would like to set up a new OpenLDAP server at my workplace. If possible, I
would like to install OpenLDAP on a multicore server running OpenBSD/amd64 5.1.

Can OpenLDAP benefit from an SMP system? If so, any tips on where to learn
how to set up OpenLDAP to take full advantage of SMP?


Yes, OpenLDAP benefits from SMP. Whether or not OpenBSD supports SMP 
adequately, I can't say.


Read the Admin Guide.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Index Add Failures

2012-11-13 Thread Howard Chu

Quanah Gibson-Mount wrote:

--On Tuesday, November 13, 2012 6:38 PM -0500 Allan E. Johannesen
a...@wpi.edu wrote:


Is there a measure of how full an MDB database is?  Maybe a Monitor
value of some sort?


du file.mdb

I.e., the file size of mdb grows as it gets data, at least on Linux. This
is why I set my maxsize to 80GB: it allows the database to grow up to 80GB
before running out of room. Even with BDB, which takes substantially more
space, I've never had an 80GB DB.

One of these days, I plan on writing a monitoring script that compares the
max size of the DB with the actual size on disk.

I found that OS X acts differently -- it actually allocates the entire size
of the database on disk, regardless of how much is used. That may be common
to all the BSDs.


Sounds like whatever filesystem is default on BSDish systems doesn't support 
sparse files.


Anyway, you can use mdb_stat to see what's in the MDB environment.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Index Add Failures

2012-11-13 Thread Howard Chu

Quanah Gibson-Mount wrote:

--On Tuesday, November 13, 2012 4:25 PM -0800 Howard Chu h...@symas.com
wrote:


I found that OS X acts differently -- it actually allocates the entire size
of the database on disk, regardless of how much is used. That may be
common to all the BSDs.


Sounds like whatever filesystem is default on BSDish systems doesn't
support sparse files.

Anyway, you can use mdb_stat to see what's in the MDB environment.


Oh, one additional note about the BSDs... I didn't see that behavior until
I enabled writemap.  Disabling writemap reverted it to the same behavior as
linux, but then of course the advantage of writemap is lost. :P


That's to be expected. When writemap is enabled, we ask the OS to preallocate 
the entire map space. Otherwise, writing to a new page would get a SIGBUS.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: DN matching rules

2012-11-16 Thread Howard Chu

Chris Card wrote:


Hi,


I see that openldap supports a number of matching rules for DNs,
e.g. dnOneLevelMatch, dnSubtreeMatch, dnSubordinateMatch and
dnSuperiorMatch.

Please can someone point me to documentation about these matching
rules? (Google doesn't seem to bring up much useful).


RFC 4517, section 4.

Thanks, but I don't see anything about these matching rules in
Rfc4517 section 4.


Substring assertion is discussed in section 3


I'm not trying to be awkward, but I don't see how that relates to my question.

I understand how to use the matching rules syntactically, but
I have not found documentation anywhere that describes how these matching rules 
work.

I can try out examples and/or read the OpenLDAP source code to try to deduce
their behaviour, but I'd prefer to see documentation.


This feature has been present in OpenLDAP since 2004.

https://www.openldap.org/its/private.cgi/Archive.Software%20Enhancements?id=3112;selectid=3112;usearchives=1

Nobody has asked for docs thus far, because everybody recognizes that 
subtree/onelevel/subordinate are the same as the corresponding LDAP search 
scopes, and their behavior is already specified.
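In other words, the rules plug into an ordinary extensible-match filter and test the attribute's DN value against the asserted DN with the named scope; a hypothetical search (base and attribute invented):

```
# match entries whose "manager" value lies anywhere under ou=People
ldapsearch -x -b "dc=example,dc=com" \
    "(manager:dnSubtreeMatch:=ou=People,dc=example,dc=com)"
```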


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: DN matching rules

2012-11-16 Thread Howard Chu

Chris Card wrote:



I see that openldap supports a number of matching rules for DNs,
e.g. dnOneLevelMatch, dnSubtreeMatch, dnSubordinateMatch and
dnSuperiorMatch.

snip

I have not found documentation anywhere that describes how these matching rules 
work.

I can try out examples and/or read the openldap source code to try and deduce 
their behaviour, but I'd
prefer to see documentation.


This feature has been present in OpenLDAP since 2004.

https://www.openldap.org/its/private.cgi/Archive.Software%20Enhancements?id=3112;selectid=3112;usearchives=1





That link needs a login.


http://www.openldap.org/its/index.cgi/Archive.Software%20Enhancements?id=3112;selectid=3112;usearchives=1


Nobody has asked for docs thus far, because everybody recognizes that
subtree/onelevel/subordinate are the same as the corresponding LDAP search
scopes, and their behavior is already specified.


Ok, but there's no superior scope. Also, while it's possible to try and
deduce behaviour by similarity of names and by experiment, that's not a
foolproof method, which is why I asked for a link to documentation. What
little documentation I did find indicates that these matching rules are
'experimental' and shouldn't be used in released code
(http://www.openldap.org/faq/data/cache/200.html) - is that still the
case?


That FAQ says these OIDs shouldn't be used in released code. That's generally 
true, but obviously we've broken those rules various times. The intent of 
these rules is that we expect experimental features to either progress, in 
which case a formal specification is published, using non-experimental OIDs, 
or the experiments are deemed a failure and withdrawn/deleted. Either way, the 
experiments actually need to be tested by actual users, which means the 
corresponding code winds up in public releases.


The reality is that authors of experiments have moved on to other work, 
leaving these features in limbo, and no one has stepped in to drive them 
forward to completion (published status).


In this particular case, the features themselves were demonstrably stable 
years ago.


If you're inclined to only use features that have published documentation, 
you're welcome to forget everything you ever heard about dnSubtreematch and go 
about your business. OpenLDAP is a volunteer based open source project - work 
happens when a volunteer is interested in making it happen. The fact that what 
you're asking for hasn't been written in the past 8 years indicates to me that 
no one is interested.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: slapo-rwm overlay and backend databases

2012-11-22 Thread Howard Chu

Bryce Powell wrote:


If this is possible, does the configuration allow one to define the overlay at
the “backend” level, so that it applies to all databases of the same type?
e.g.
backendldap
overlay rwm
rwm-rewriteEngine   on


No. No modules in OpenLDAP have ever implemented anything for the backend 
keyword, it is purely a no-op.


Also No: overlays may only be configured on databases. Not backends.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: MSYS build and POSIX regex

2012-11-23 Thread Howard Chu

Anton Malov wrote:

I'm trying to build OpenLDAP with MSYS and MinGW using the distribution from
http://nuwen.net/mingw.html.
The problem is that the configure script cannot find a regex library and
produces the following output:

checking for regex.h... yes
checking for library containing regfree... no
configure: error: POSIX regex required.

But the distribution contains a precompiled PCRE library, and regex.h
contains the regfree function declaration.
What is going wrong?


Only you can answer that. Read the config.log file and see what error message 
was in it for the regfree test.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Announcing MDB C++ wrapper

2012-11-25 Thread Howard Chu

ArtemGr wrote:

http://code.google.com/p/libglim/source/browse/trunk/mdb.hpp
The wrapper adds Boost Serialization, iterators and triggers to the mix.
Any serializable class can be used as a key or a value.
Triggers can be used to maintain indexes.


Great, thanks. I've added a link to you on the main MDB page.


Notice that there is an MDB issue triggered when using this wrapper
( http://www.openldap.org/its/index.cgi?findid=7448 ), use at your own risk.


We will investigate this when a reproducible test case is available.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: MSYS build and POSIX regex

2012-11-26 Thread Howard Chu

Anton Malov wrote:

First of all, the GNU regex library is seriously outdated and no longer
maintained. Second, there is no such library in the most popular MinGW
distro, http://nuwen.net/mingw.html.
I can try to build the GNU regex lib myself, but for now I need to know
whether there is any way to build the OpenLDAP client library using PCRE.


Read the INSTALL document.


On Mon, Nov 26, 2012 at 5:32 PM, Sergio NNX sfhac...@hotmail.com wrote:

Ciao.

I've built several versions of OpenLDAP and never had any issues with regex.
If PCRE libraries don't work, you could try GNU regex instead.

Let me know how you get on.

Sergio.






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Difference between 2.4.30 and 2.3.43 in certificateMatch.

2012-11-27 Thread Howard Chu

Mike Hulsman wrote:

Hi,

I stumbled upon a difference between OpenLDAP 2.4.30 and 2.3.43.

This is my configuration:
X.509 certificates are stored in the directory, and a search is done with
(&(mail=aaa@a.b)(userCertificate:certificateMatch:=<binary certificate>));
if that matches, the uid must be returned.

That works on 2.3.43, but when I try it on 2.4.30 it does not work, and when I
start debugging I see
filter="(&(mail=aaa@a.b)(?=undefined))" in the logfiles.


The certificateMatch rule takes a certificateAssertion, not a certificate. 
Your filter value is invalid.


But it also looks like there may be a bug in 2.4.x as well, since support for
certificateMatch was removed in commit
4c64b8626d5b2b26256446dbc29f63ab45b5ec1d in March 2006. Not sure why; I would
have to check the email archives or ask Kurt.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Problems getting mdb_stat to work

2012-11-27 Thread Howard Chu

Mark Cairney wrote:

Hi,

I've acquired the libmdb tools from the Gitorious page. I've got them to
compile (with some warnings, see below), but mdb_stat throws an error when I
try to run it on the mdb directory:

Your first mistake was not actually reading the gitorious page. Go read it 
again.

The OpenLDAP source tree already includes the MDB source code.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Difference between 2.4.30 and 2.3.43 in certificateMatch.

2012-11-28 Thread Howard Chu

Mike Hulsman wrote:


Quoting Howard Chu h...@symas.com:


Mike Hulsman wrote:

Hi,

I stumbled upon a difference between OpenLDAP 2.4.30 and 2.3.43.

This is my configuration:
X.509 certificates are stored in the directory, and a search is done with
(&(mail=aaa@a.b)(userCertificate:certificateMatch:=<binary certificate>));
if that matches, the uid must be returned.

That works on 2.3.43, but when I try it on 2.4.30 it does not work, and when I
start debugging I see
filter="(&(mail=aaa@a.b)(?=undefined))" in the logfiles.


The certificateMatch rule takes a certificateAssertion, not a
certificate. Your filter value is invalid.

Sorry for the misunderstanding, I don't know all the correct naming.
But from what I understand after a lot of reading, I am doing a
certificateAssertion.

I try to do a certificateMatch on an octet string.


No. Read RFC4523.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: slapo-pcache seems broken in openldap-2.4.31

2012-11-30 Thread Howard Chu

Tio Teath wrote:

I'm trying to set up slapo-pcache using cn=config, and this is my settings:

dn: olcDatabase={1}ldap,cn=config
objectClass: olcConfig
objectClass: olcDatabaseConfig
objectClass: olcLDAPConfig
objectClass: top
olcDatabase: {1}ldap
olcRootDN: cn=admin,cn=config
olcAccess: {0}to * by * read
olcDbACLBind: bindmethod=simple binddn=ou=group,dc=remote
  credentials=password tls_cacert=/etc/ssl/certs/ca-certificates.crt
  starttls=yes
olcDbURI: ldap://remote.host
olcSuffix: dc=remote

dn: olcOverlay={0}pcache,olcDatabase={1}ldap,cn=config
objectClass: olcPcacheConfig
objectClass: olcOverlayConfig
objectClass: olcConfig
objectClass: top
olcOverlay: {0}pcache
olcPcache: hdb 1 1 50 100
olcPcacheAttrset: 0 member
olcPcacheTemplate: (objectClass=) 0 3600

dn: olcDatabase={0}hdb,olcOverlay={0}pcache,olcDatabase={1}ldap,cn=config
objectClass: olcPcacheDatabase
objectClass: olcHdbConfig
objectClass: olcDatabaseConfig
objectClass: olcConfig
objectClass: top
olcDatabase: {0}hdb
olcDbDirectory: /var/lib/ldap/cache
olcDbIndex: objectClass eq
olcDbIndex: pcacheQueryid eq

But each time I'm trying to run
ldapsearch -bcn=test2,ou=group,dc=remote  '(objectClass=*)' member
  I'm getting QUERY NOT ANSWERABLE/QUERY NOT CACHEABLE errors in the log.


That's correct, since your search query doesn't match your template. Your 
search uses '(objectclass=*)' which is a Presence filter. Your template only 
supports Equality (and Substring) filters. Your template needs to be 
(objectclass=*) to support Presence filters.
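As a sketch of the fix implied here (using the attribute set from the configuration quoted above; note that the later part of this thread reports that modifying olcPcacheTemplate at runtime fails, so the value may need to be set when the overlay entry is first created):

```ldif
# Hypothetical corrected template line -- a Presence prototype instead of
# the Equality prototype (objectClass=):
olcPcacheTemplate: (objectClass=*) 0 3600
```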


Besides, it is impossible to modify attributes
olcPcacheTemplate/olcPcacheAttrset:
modify/add: olcPcacheTemplate: no equality matching rule


This is definitely a bug, please submit this to the ITS. Thanks.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: OpenLDAP as an address book for MS Outlook

2012-11-30 Thread Howard Chu

Victor Sudakov wrote:

Victor Sudakov wrote:


I have been trying to investigate what is needed in OpenLDAP to have
Microsoft Outlook 2007 display a list of names in the addressbook when
first accessed in the same way that it does with ActiveDirectory/Exchange.


Here is a dump of an LDAP session between Microsoft Outlook and a
CommunigatePro server: http://zalil.ru/34017194 where a list of names
is being displayed.

Could someone with sufficient LDAP knowledge look at it and advise how to
configure OpenLDAP to achieve the same result?


Your trace shows two supportedControls and two supportedCapabilities. The 
controls are for server-side sorting and paged results. OpenLDAP supports 
paged results intrinsically, and server-side sorting when the sssvlv overlay 
is configured. If those aren't sufficient to make Outlook behave, then things 
get trickier.


supportedCapabilities is not a standard attribute, it appears to be specific 
to M$AD. The two supportedCapabilities in your trace are:

1.2.840.113556.1.4.800  LDAP_CAP_ACTIVE_DIRECTORY_OID
1.2.840.113556.1.4.1791 LDAP_CAP_ACTIVE_DIRECTORY_LDAP_INTEG_OID

If your sssvlv is configured correctly, and Outlook sees both Server Side 
Sorting and Paged Results in the supportedControls that OpenLDAP returns, but 
it still doesn't do what you want, then apparently Outlook requires the server 
to claim to be Active Directory.


You could fake this, by copying the schema definition of the 
supportedCapabilities attribute and loading it into slapd. You would also need 
to populate the values. You can use the rootdse directive to do that. I 
would guess you only need the first capability, but I don't use Outlook so 
have no way to verify this.
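A minimal sketch of what that could look like, assuming the supportedCapabilities attribute type has already been added to the schema; the file path is illustrative and the OID is the first capability from the trace above:

```conf
# slapd.conf -- hypothetical fragment
rootDSE /etc/openldap/rootdse.ldif

# /etc/openldap/rootdse.ldif -- attributes merged into the root DSE;
# note the empty DN line
dn:
supportedCapabilities: 1.2.840.113556.1.4.800
```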


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: OpenLDAP as an address book for MS Outlook

2012-11-30 Thread Howard Chu

Victor Sudakov wrote:

Howard Chu wrote:

You could fake this, by copying the schema definition of the
supportedCapabilities attribute and loading it into slapd. You would also need
to populate the values. You can use the rootdse directive to do that. I
would guess you only need the first capability, but I don't use Outlook so
have no way to verify this.


Could you please be more specific how I can load the attribute into slapd
and populate it? Please refer me to an example.


Read the slapd.conf(5) or slapd-config(5) manpage. See test000 in the test 
suite for an example.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: dnMatch flooding logs and access blocked

2012-12-05 Thread Howard Chu

Al Dispennette wrote:

Seriously, I need help.
Can anyone help me?

None of the avenues I have looked into have amounted to anything.
The logging is not helping.  I believe whatever is happening is supposed to be
happening but when it does blocking occurs.
I have commented out all of the syncing properties in slapd.conf
I do still have checkpoint 1024 15 enabled.

I am pretty desperate for help and I have not had a response from anyone on
any site I have posted this.
I have been searching for a direction for a couple of weeks now. I'm not asking
for an answer, just a direction on where I should look.


Why are you using loglevel 3? or 255? What do those loglevels mean, do you 
know? Have you read the slapd.conf(5) or slapd-config(5) manpages?


Give some more information on the actual operations involved. Use a loglevel 
that's actually useful. If you don't know what operations are occurring, then 
clearly the loglevel you've chosen isn't helping.


Thanks,


From: Al Dispennette al.dispenne...@clairmail.com
mailto:al.dispenne...@clairmail.com
Date: Tue, 4 Dec 2012 10:32:40 -0800
To: openldap-technical@openldap.org mailto:openldap-technical@openldap.org
Subject: Re: dnMatch flooding logs and access blocked

So I downloaded the openldap source and looked at the places where the debug
output logs the message below.
That being said it looks like it is happening during some group entry
modification.

I am not that knowledgeable with ldap so I have another question related to
the blocking that is occurring.
So the situation is this, in my application I allow users to update their
usernames and password.
For the username update I copy the user into a cloned object delete the entry
from ldap and then add the cloned object with the new username to ldap.
As for the password I simply update the password attribute.

Is there something in the removal and addition of the user object that is
causing the group to need to be reindexed or the cache to be reloaded or
anything that may cause the blocking that I am seeing?

I changed the log level from 255 to 3 so I should see some different debug
output, but until this occurs again does anyone have any insight or knowledge
that could help me.

Thanks,

Al Dispennette


From: Al Dispennette al.dispenne...@clairmail.com
mailto:al.dispenne...@clairmail.com
Date: Mon, 3 Dec 2012 14:35:44 -0800
To: openldap-technical@openldap.org mailto:openldap-technical@openldap.org
Subject: dnMatch flooding logs and access blocked

Hello,

I am seeing the following get repeated in my slapd logs for hundreds of lines.
I know it is due to the logging level.

However, when this starts happening no one can access the server because what
ever is logging this is blocking.

Can anyone tell me what is causing this log entry?


slapd[20616]: dnMatch
-1#012#011uid=item1,ou=users,dc=example,dc=com#012#011uid=user,ou=users,dc=example,dc=com

slapd[20616]: dnMatch
2#012#011uid=item2,ou=users,dc=example,dc=com#012#011uid=user,ou=users,dc=example,dc=com

slapd[20616]: dnMatch
2#012#011uid=item3,ou=users,dc=example,dc=com#012#011uid=user,ou=users,dc=example,dc=com

slapd[20616]: dnMatch
-2#012#011uid=item4,ou=users,dc=example,dc=com#012#011uid=user,ou=users,dc=example,dc=com

slapd[20616]: dnMatch
-1#012#011uid=item5,ou=users,dc=example,dc=com#012#011uid=user,ou=users,dc=example,dc=com

slapd[20616]: dnMatch
-2#012#011uid=item6,ou=users,dc=example,dc=com#012#011uid=user,ou=users,dc=example,dc=com


Al Dispennette




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: DB_LOCK_DEADLOCK

2012-12-06 Thread Howard Chu

Pawan Kamboj wrote:

This error comes frequently when lots of write operations happen on the ldap
server and at the same time we do an ldapsearch from our application. The
ldapsearch command says the user does not exist in the DB, but the user does
exist in the DB; after some time it works fine again.


You're using an ancient release. You should consider upgrading to 2.4.33, and 
using back-mdb, which is simpler to configure and cannot deadlock.
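For reference, a minimal back-mdb setup might look like the fragment below; the suffix, directory, and map size are placeholders, so consult slapd-mdb(5) for the authoritative options:

```conf
# Hypothetical slapd.conf fragment -- values are placeholders
database   mdb
suffix     "dc=example,dc=com"
rootdn     "cn=admin,dc=example,dc=com"
directory  /var/lib/ldap
maxsize    1073741824   # map size in bytes; must exceed expected data size
index      objectClass  eq
```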



On Thu, Dec 6, 2012 at 9:31 PM, Howard Chu h...@symas.com
mailto:h...@symas.com wrote:

Pawan Kamboj wrote:

Hi,

We are getting bdb_idl_fetch_key: get failed: DB_LOCK_DEADLOCK: Locker
killed to resolve a deadlock (-30995) errors in the OpenLDAP debug log.
We are using OpenLDAP 2.3.31 and db 4.5 and have configured
master-slave. Any help on this? Is it something we need to worry
about, or can we ignore it? This error comes frequently when we do
lots of write operations on ldap.


It's normal, ignore it.

--
   -- Howard Chu
   CTO, Symas Corp. http://www.symas.com
   Director, Highland Sun http://highlandsun.com/hyc/
   Chief Architect, OpenLDAP http://www.openldap.org/__project/
http://www.openldap.org/project/




--
Thanks & Regards
Pawan Kamboj
Infra Analyst, Sapient





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: slapd-meta configuration details

2012-12-06 Thread Howard Chu

Scott Koranda wrote:

Hello,

As part of an evaluation and testing phase, on a Debian
Squeeze system using version 2.4.23 of OpenLDAP I successfully
configured and used the slapd-meta backend. The configuration
looked like this:

database meta
suffix dc=test,dc=myorg,dc=org

uri ldapi:///o=external,dc=test,dc=myorg,dc=org

acl-authcDN uid=foswiki,ou=system,o=external,dc=test,dc=myorg,dc=org
acl-passwd passwd
idassert-bind bindmethod=simple
 binddn=uid=foswiki,ou=system,o=external,dc=test,dc=myorg,dc=org
 credentials=passwd
 mode=self

uri ldapi:///o=internal,dc=test,dc=myorg,dc=org

acl-authcDN uid=foswiki,ou=system,o=external,dc=test,dc=myorg,dc=org
acl-passwd passwd
idassert-bind bindmethod=simple
 binddn=uid=foswiki,ou=system,o=external,dc=test,dc=myorg,dc=org
 credentials=passwd
 mode=self

To prepare for a production deployment I then compiled
OpenLDAP 2.4.33 using this set of configure options:

./configure --prefix=/opt/openldap-2.4.33 --enable-slapd
--enable-cleartext --enable-rewrite --enable-bdb --enable-hdb
--enable-ldap --enable-meta --enable-rwm

I attempted to use the same configuration for the slapd-meta
backend. My queries to slapd no longer returned anything and I
saw this in the debug output:

50c15573 conn=1000 op=1 meta_search_dobind_init[0] mc=0x22c2da0: non-empty dn 
with empty cred; binding anonymously
50c15573 conn=1000 op=1 meta_search_dobind_init[1] mc=0x22c2da0: non-empty dn 
with empty cred; binding anonymously

I interpret this to mean that the slapd-meta backend is
deciding it does not have a credential to use and is binding
anonymously to the proxied services.

How should I change my configuration above so that the most
recent version of OpenLDAP will be able to bind to the proxied
services in the way that happened with version 2.4.23?

Note that I installed versions between 2.4.23 and 2.4.33
(bisection) and found that the change from 2.4.25 to 2.4.26
causes the configuration above to go from working to not
working. Versions 2.4.26 and above that I tested result in
the non-empty dn with empty cred in the debug output.


The only relevant change to back-meta from 2.4.25 to .26 is for ITS#6909. 
Perhaps you can retest your config with that patch reverted and see how it goes.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: OpenLDAP Sandbox on AWS/Eucalyptus

2012-12-09 Thread Howard Chu

Harold Spencer Jr. wrote:

Hey guys,

I wanted to share a blog I recently published.  It shows how to build an
OpenLDAP server (from git) on Ubuntu 12.04 LTS in Amazon Web Services and
Eucalyptus using cloud-init.  Hope you enjoy.  Any feedback would be greatly
appreciated.


Looks nice, thanks for the post. With the popularity of cloud deployments 
these days, this is going to be useful for a lot of people, I'm sure.


http://blogs.mindspew-age.com/2012/12/08/openldap-sandbox-in-the-clouds/



*Harold Spencer, Jr. - Technical Support Engineer*

*Eucalyptus Systems*

www.eucalyptus.com http://www.eucalyptus.com/

+1 805 845-8400

IRC: #eucalyptus

 #eucalyptus-devel


Follow us on Twitter http://twitter.com/#!/eucalyptuscloud

Like our Facebook Page
http://www.facebook.com/pages/Eucalyptus-Systems-Inc/164828240204708

Keep up-to-date with us
http://go.eucalyptus.com/Sign-Up-for-Cloud-Computing-News.html?Offer=OtherOfferDetails=Sign%20Up%20for%20Cloud%20Computing%20NewsLeadSourceDetails=Eucalyptus%20WebsiteOfferURL=http%3A%2F%2Fgo.eucalyptus.com%2FSign-Up-for-Cloud-Computing-News.html






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: DB_LOG_AUTOREMOVE not working

2012-12-19 Thread Howard Chu

BheeshmaManjunath wrote:

Hi

I am using the latest Openldap.

Software- Openldap 2.4.33

Server - RHEL 5.8 64 bit

When I do an ldapsearch, ldapadd, or ldapdelete, the log.x files are
getting generated in the directory folder.

To auto remove them i did add the  set_flags DB_LOG_AUTOREMOVE  to the 
DB_CONFIG file which is present in the directory folder itself.

And I am using crontab to add the new data on a daily basis, but the
log.x files which are generated are not being auto-removed.


Can anyone help me regarding this issue..?

Is there a need to restart the server before adding the new data on a daily
basis? If so, why?


No, there's no need to restart. But you must configure a checkpoint. Read the 
slapd-bdb(5) manpage.
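As a sketch, combining the checkpoint directive with the poster's existing DB_CONFIG flag; the numbers are examples (checkpoint takes kbytes then minutes, per slapd-bdb(5)):

```conf
# slapd.conf -- hypothetical fragment
database    bdb
checkpoint  1024 15   # checkpoint after 1024 KB written or 15 minutes

# DB_CONFIG in the database directory (as already configured):
set_flags   DB_LOG_AUTOREMOVE
```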


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: nssov works fine, unable to set nssov-pam-session

2012-12-20 Thread Howard Chu

Василий Молостов wrote:

Before enabling nssov I have got nslcd working, so the transition from
one to another was clear (ubuntu 12.04.1, openldap
2.4.28-1.1ubuntu4.2).

according to slapo-nssov man page I have added to my slapd.conf:

moduleload nssov
overlay nssov
nssov-pam userhost
nssov-pam-session login
nssov-pam-session sshd

but olcOverlay=nssov,olcDatabase=hdb,cn=config has no olcNssPamSession
and its related values after creation of db.

When I add olcNssPamSession into cn=config with the above values by hand,
loginstatus works well, but with nssov-pam-session in slapd.conf it does
not.

Does this mean I have missed something needed to correctly set
nssov-pam-session in slapd.conf?


Sounds like a bug in nssov, please submit this to the ITS, thanks.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: olcToolthreads and slapadd -n0

2013-01-01 Thread Howard Chu

Василий Молостов wrote:

I have migrated the server-side config for my db and I have found some
strange behavior in slapadd:


Sounds like you're running an old release, but impossible to tell since you 
didn't post your version number. This was fixed in 2.4.32.


I started the migration by converting slapd.conf, containing my db
config and a 'tool-threads' parameter (with a value above 1), into a common
ldif file in which 'olcToolThreads' was defined appropriately (i.e.
above 1). That was the simplest and clearest step.

The next step was to 'slapadd -n 0' this ldif file into an empty
server config (having slapd stopped), but just at adding the 'cn=config'
object (which contains olcToolThreads with a value above 1), 'slapadd' got
stuck indefinitely in a sched_yield() call (I observed this via the strace
tool on Ubuntu).

As a result, slapadd kept processing its operation indefinitely without any
output, but it could still be stopped with Ctrl-C.

Setting olcToolThreads to 0 or 1 solved this problem.

I don't know whether this is a bug or my misconfiguration.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Multi-Master OpenLDAP Replication for 3 nodes -- slapadd command failing

2013-01-02 Thread Howard Chu

Dieter Klünter wrote:

On Tue, 1 Jan 2013 23:50:20 -0800,
fal patel fal0pa...@gmail.com wrote:

Assuming not, I typed in each value into all its relevant places in
my LDIF file and re-ran slapadd.
Now it gives me the following error (on latest redhat 64bit):
loaded module syncprov.la
*module syncprov.la: null module registered*

Surely the above message signifies an error?


No, it is normal.


[...]


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Multi-Master OpenLDAP Replication for 3 nodes -- slapadd command failing

2013-01-03 Thread Howard Chu

fal patel wrote:

Hi Philip,

Thank you very much for your email.

In that case, my original surmise is correct: the OpenLDAP Administrator's
Guide's Section 18.3.3 (N-Way Multi-Master) definitely is buggy,
because my LDIF file is a direct copy thereof (except for my environment's
values substituted in instead of labels such as $URI1, of course).


The example in 18.3.3 works perfectly when you substitute the correct values 
in for the variables and follow the steps listed.
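For illustration, the substitution amounts to something like the fragment below, with invented hostnames standing in for the guide's $URI1..$URI3 placeholders (the full procedure in 18.3.3 also covers loading syncprov and the syncrepl settings):

```ldif
# Hypothetical values substituted into the guide's example
dn: cn=config
changetype: modify
replace: olcServerID
olcServerID: 1 ldap://ldap1.example.com
olcServerID: 2 ldap://ldap2.example.com
olcServerID: 3 ldap://ldap3.example.com
```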



How can this be reported as a bug, please?  (In OpenLDAP documentation and/or
code.)
And can a *working* sample LDIF file be provided, please, for this important
replication design, n-way multi-master?


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Bind failing with critical extension is unavailable error when used with pagedresultscontrol

2013-01-05 Thread Howard Chu

Cannady, Mike wrote:

I have a program that works with Microsoft LDAP that I'm trying to get to

work with openldap. The program is from a third-party and I have NO access to
the code. I was able to duplicate the behavior with Perl code (testPaged.pl)
later in this email.


The bind is to a domain that uses the BDB backend. If my test code ignores

the error from bind, it does the paged results just fine. The problem is the
third-party program stops if it sees the error in bind. I've captured network
traffic to the Microsoft LDAP server and see that it generates NO error.


The version I'm running is:  [root@radius1 openldap]# slapd -V
@(#) $OpenLDAP: slapd 2.4.23 (Jul 31 2012 10:47:00) $
 
mockbu...@x86-001.build.bos.redhat.com:/builddir/build/BUILD/openldap-2.4.23/openldap-2.4.23/build-servers/servers/slapd

Is this a bug? I have the perl code, the debug run of the perl code, the

slapd.d structure, dumps of the olcDatabase entries, and the results of slapd
-d -1 output.

Yes, it's obviously a bug in your client. The PagedResults control is defined 
in RFC2696 and only defined for the Search operation, not for Bind. Ask 
whoever gave you that broken client to fix it.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: back-sql documentation (was: SELinux woes)

2013-01-17 Thread Howard Chu

Ori Bani wrote:

Trying to switch to slapd.conf instead of dynamic configuration in
order to test the back-sql backend


As of version 2.4.27, I believe back-sql supports dynamic configuration, so
there's no need to switch to slapd.conf

see http://www.openldap.org/software/release/changes.html :

OpenLDAP 2.4.27 Release (2011/11/24)

Added slapd-sql dynamic config support


Oh!  Thanks for the pointer.  Anyone know if there's any chance the
documentation is going to be updated?  It's over a year later and the
docs (and everything I saw in the list archives) say I must use
slapd.conf


I suggest you submit a request to the ITS and list all of the places you see 
that you think need updating.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: don't get running the slapd while using mdb backend

2013-01-18 Thread Howard Chu

Meike Stone wrote:

Hello,

because of problems with bdb (virtual memory usage and glibc) and
limitations (IDL),
I want to migrate to mdb.

So my first question:
Does mdb have limitations like bdb has, aka BDB_IDL_LOGN?


Yes. back-mdb is ~60% the same code as back-bdb/hdb, its indexing functions 
are basically identical.



Second, I set up a small lab for tests with mdb and can't get slapd
started
with a larger mdb size (10GB).


Check your ulimits. MDB is much simpler than anything else but you are still 
required to understand how your own OS/platform works.



stat(/var/lib/ldap/ldap.mdb, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
mmap(NULL, 2101248, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fe38f915000
brk(0x7fe3948ae000) = 0x7fe3948ae000
open(/var/lib/ldap/ldap.mdb/lock.mdb, O_RDWR|O_CREAT, 0600) = 9
fcntl(9, F_GETFD)   = 0
fcntl(9, F_SETFD, FD_CLOEXEC) = 0
fcntl(9, F_SETLK, {type=F_WRLCK, whence=SEEK_SET, start=0, len=1}) = 0
lseek(9, 0, SEEK_END)   = 0
ftruncate(9, 8192)  = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_SHARED, 9, 0) = 0x7fe3941e4000
open(/var/lib/ldap/ldap.mdb/data.mdb, O_RDWR|O_CREAT, 0600) = 11
read(11, "", 4096)  = 0
mmap(NULL, 10737418240, PROT_READ, MAP_SHARED, 11, 0) = -1 ENOMEM
(Cannot allocate memory)
close(11)   = 0
munmap(0x7fe3941e4000, 8192) = 0
close(9)= 0
.
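One way to sanity-check the limit the failing mmap() runs into, as a sketch (Linux; assumes the slapd process inherits the shell's address-space limit):

```python
import resource

# A 10 GB mdb map needs at least 10 GB of virtual address space;
# mmap() fails with ENOMEM when RLIMIT_AS is smaller than that.
needed = 10 * 1024 ** 3  # bytes, matching the 10737418240-byte map above

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
if soft != resource.RLIM_INFINITY and soft < needed:
    print("RLIMIT_AS too small for the map: %d bytes" % soft)
else:
    print("address-space limit is sufficient")
```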


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: don't get running the slapd while using mdb backend

2013-01-18 Thread Howard Chu

Meike Stone wrote:

So my first question:
Does mdb have limitations like bdb has, aka BDB_IDL_LOGN?



Yes. back-mdb is ~60% the same code as back-bdb/hdb, its indexing functions
are basically identical.



Thanks for the information... it was not what I expected.


Read the MDB docs/papers/presentations.

http://symas.com/mdb/

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: openldap 2.4.33 from source, can't find slapd.d directory

2013-01-20 Thread Howard Chu

Dan White wrote:

On 01/20/13 21:25 +0100, Benin Technologies wrote:
Subject: openldap 2.4.33 from source, can't find slapd.d directory

The slapd.d directory is created by slapd, slaptest, or slapadd, depending
on whether your existing configuration is in slapd.conf format or
portable slapd-config. See the man page for any of these subjects for details.

Not quite. The *contents* of slapd.d are created by those tools. The directory
itself has to be created explicitly.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Preoperation Plugin Questions

2013-01-23 Thread Howard Chu

Dan White wrote:

On 01/23/13 17:00 +0100, Julius Plenz wrote:

Hi,

I'm writing an preoperation authentication plugin for OpenLDAP, but I
have trouble finding any documentation whatsoever on this. So most of
what I know comes from tutorials like this one from Oracle:
http://docs.oracle.com/cd/E19099-01/nscp.dirsvr416/816-6683-10/custauth.htm



P.S.: What I'm actually trying to achieve is to do RADIUS
authentication via an external library. But I want to send the
client's IP in a Calling-Station-Id attribute, so I cannot simply
write a password check function, right? If you got any ideas that are
better than a preop module, please tell me...


You should be able to accomplish this via a SASL mechanism (and possibly an
existing one), which would not require any code changes within slapd or
client libraries. See sasl_server_new(3) and its ipremoteport parameter.


That would require the client to perform a SASL Bind instead of a Simple Bind. 
Not unreasonable, but it's obvious the OP is doing Simple Bind.


I would just take the current radius.c checker and modify it to stash the 
Operation pointer somewhere it can be retrieved, then grab it in the password 
check function and pull the client IP address out of there. The smbk5pwd 
module already uses this trick so it should be trivial to copy/paste that code 
into radius.c.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Preoperation Plugin Questions

2013-01-23 Thread Howard Chu

Julius Plenz wrote:

Hi Howard,
hi Dan,

thanks for your reply.

* Howard Chu h...@symas.com [2013-01-23 18:38]:

I would just take the current radius.c checker and modify it to
stash the Operation pointer somewhere it can be retrieved, then grab
it in the password check function and pull the client IP address out
of there. The smbk5pwd module already uses this trick so it should
be trivial to copy/paste that code into radius.c.


I'll take a look at that code tomorrow, thanks.

But the problem is not that I don't want to write the code: The module
is working fine already, but since I found exactly *zero*
documentation to warrant what I'm doing in the plugin, I thought it
best to ask here. Considering the original (technical) questions, do
you have an answer to that?


1) For developers, the source code is the authoritative documentation. Always.

2) SLAPI is a Sun/Netscape spec. There is no documentation for it in the 
OpenLDAP Project because it would be redundant. Whatever the official docs say 
a correct plugin must do, is what you should do.


3) However, if you find some aspect of SLAPI is unimplemented in OpenLDAP, 
then see (1). If what you're trying to do is not supported, you're welcome to 
contribute patches to implement whatever missing feature you need.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: missing entry in slapcat backup

2013-01-24 Thread Howard Chu

Meike Stone wrote:

Hello dear List,

I tried to import a slapcat backup from our production machine in a
test environment and got following message:

debld02:~ # time slapadd  -w -q -f /etc/openldap/slapd.conf -l /backup.ldif
50f98421 mdb_monitor_db_open: monitoring disabled; configure monitor
database to enable
- 100.00% eta   none elapsed  09m18s spd   4.6 M/s
Closing DB...Error, entries missing!
   entry 1156449: ou=a,ou=b,ou=c,ou=root

First I did not noticed this message, but now I see the database is
broken, because the node ou=a is missing.
So my questions:

- What is the origin of such orphaned nodes? (In MMR it happens and
I see a few glue records, but in my backup this one node is completely
missing...)

- How can I prevent such entries, and how can I recognize them
without importing?


It's easiest just to let slapadd tell you.


- How can I remove this entry (esp. in production DB without
downtime), because a delete show following messages:

~ # ldapdelete -x -h localhost -w password -D cn=admin,ou=root
'cn=cname,ou=a,ou=b,ou=c,ou=root'
ldap_delete: Other (e.g., implementation specific) error (80)
 additional info: could not locate parent of entry

and if I try to add this missing node, then I get:
ldapadd -x -h localhost -w password -Dcn=admin,ou=root -f test.ldif
adding new entry ou=a,ou=b,ou=c,ou=root
ldap_add: Already exists (68)


Use slapadd to add the missing entry. For back-mdb you don't need to stop 
slapd while running other slap* tools.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: missing entry in slapcat backup

2013-01-26 Thread Howard Chu

Meike Stone wrote:

2013/1/24 Hallvard Breien Furuseth h.b.furus...@usit.uio.no:

Meike Stone writes:

- What ist the origin for such orphaned nodes (In MMR, it happens and
I see a few glue records, but in my backup this one node is complete
missing...)?


Do you check the exit code from slapcat before saving its output?
If slapcat (well, any program) fails, discard the output file.



Hello,

Yes, every time I make a backup, the exit code from slapcat is evaluated.
If any error occurs, I write a message to syslog.
I searched syslog over one year, but no error occurred, especially around
the date when the backup that I now use for tests was created.

So I tried it again:


Load the broken DB via slapadd (messages prove that it is broken):
---
debld02:~ # slapadd  -f /etc/openldap/slapd.conf -q -l /backup.ldif
_ 100.00% eta   none elapsed  19m11s spd   2.2 M/s
Closing DB...Error, entries missing!
   entry 1156449: ou=a,ou=b,ou=c,ou=root



Check that the parent entry does not exist:
-
~ # ldapdelete -x -h localhost -w password -D cn=admin,ou=root
cn=cname,ou=a,ou=b,ou=c,ou=root
ldap_delete: Other (e.g., implementation specific) error (80)
 additional info: could not locate parent of entry
~ # echo $?
80


Try to check the exit code from slapcat after backing up the broken DB:
-
~ # slapcat -f /etc/openldap/slapd.conf /backup.ldif; echo $?
0


It seems to me that in such a case slapcat does not throw an error?!


slapcat doesn't check for missing entries. Its only job is to dump out the 
contents of what is in the DB. It doesn't try to tell you what isn't in the 
DB. Your DB must have been in this state for a long time, probably ever since 
its initial import.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Search speed regarding BDB_IDL_LOGN and order in search filter

2013-01-31 Thread Howard Chu

Meike Stone wrote:

Hello,

I'm sorry, but I want to ask again for clarifying.

First question:

- An index slot loses precision if the search result for an
(indexed) attribute is larger than 2^16. Then the search time
increases a lot.
- I can change this via BDB_IDL_LOGN.
- But if I have a directory that holds 200,000 employees with
'(objectClass=Employees)', the result is larger than 2^16 and it is
slow.
- lets say, the employees are distributed over 4 continents and the
DIT is structured geographical eg.:

  o=myOrg, c=us (100,000 employees)
  o=myOrg, c=gb ( 30,000  employees)
  o=myOrg, c=de ( 25,000 employees)
  o=myOrg, c=br  ( 45,000 employees)

Can I avoid this index-slot problem by changing the
search base to o=myOrg,c=gb, since there are only 30,000
employees there?


Try it and see.

If you're already playing with low-level definitions in the source code, you 
have no need for us to answer these questions. Or, if you need us to answer 
these questions, you have no business playing with low-level definitions in 
the source code.



This takes me to the second question:

How is a search filter evaluated?

Let's say I combine three filter terms with AND, like
'(&(objectClass=Employees)(age=30)(sex=m))', and all attributes are
indexed. Each term matches:
(objectClass=Employees) = 200,000 entries
(age=30) = 10,000 entries
(sex=m) = 3,000 entries

Does the order matter regarding speed? Is it better to form the filter
like this?
  '(&(sex=m)(age=30)(objectClass=Employees))'


Try it and see.

If you have advance knowledge of the characteristics of your data, perhaps you 
can optimize the filter order. In most cases, your applications will not have 
such knowledge, or it will be irrelevant. For example, if you have a filter 
term that matches zero entries, it is beneficial to evaluate that first in an 
AND clause. But it would make no difference in an OR clause.
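The intuition behind evaluating the most selective AND term first can be sketched with a toy model, intersecting candidate ID sets term by term. This is not slapd's actual IDL code; the set sizes just mirror the numbers in the question, and "cost" is a stand-in for work proportional to the running candidate-set size:

```python
# Toy model of AND-filter evaluation: intersect candidate sets in order.
employees = set(range(200_000))          # (objectClass=Employees)
age30     = set(range(10_000))           # (age=30)
male      = set(range(3_000))            # (sex=m)

def and_cost(terms):
    cand, cost = terms[0], len(terms[0])
    for t in terms[1:]:
        cost += len(cand)                # work scales with current candidates
        cand &= t
    return cand, cost

_, worst = and_cost([employees, age30, male])
_, best  = and_cost([male, age30, employees])
print(worst, best)   # -> 410000 9000
```

Both orders return the same 3,000 entries, but the most-selective-first order touches far fewer candidates along the way, which is also why a zero-match term is best evaluated first in an AND.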


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: meta backend and bad configuration file

2013-02-07 Thread Howard Chu

francesco.policas...@selex-es.com wrote:

Hi,

I am running slapd 2.4.33 on RHEL, compiled from the sources.
I successfully configured meta backend using old style slapd.conf.
My aim is to browse two Active Directories in two separate forests (success)
and to collect into a new group all users that are members of two local groups,
one for each domain (not done yet).
For the latter I read that the dynlist overlay is what I need and I created a
hdb database.
The configuration examples I found for dynlist are in the cn=config style, so
I felt pushed to convert my configuration (slapd -f slapd.conf -F slapd.d).
I did it, but the results are not as expected, because slapd starts, but
slaptest for the new config issues an error.

# slaptest -f slapd.conf
51137c4c hdb_monitor_db_open: monitoring disabled; configure monitor database
to enable
config file testing succeeded

# slaptest
51137c5a olcDbURI: value #0: unable to parse URI #0 in olcDbURI
protocol://server[:port]/naming context.
51137c5a config error processing
olcMetaSub={0}uri,olcDatabase={1}meta,cn=config: unable to parse URI #0 in
olcDbURI protocol://server[:port]/naming context
slaptest: bad configuration file!

I assume, please tell me if I am wrong, that if you have the new cn=config
files, then slapd.conf is not used.
But if I remove or rename it, slapd does not start and the error message is the
same as for slaptest.


Sounds like a bug in the conversion from slapd.conf to cn=config format. 
Please submit an ITS with your slapd.conf.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Using multi-value attributes for ACLs

2013-02-08 Thread Howard Chu

Andrew Heagle wrote:

After reading Philip Colmer's Access control (Jan 24) thread and trying out
using sets on a test server, this solution will work out quite nicely. Just
have to change/add any role with an appropriate owner attribute pointing
to the proper group.


Sets have a pretty high performance cost. The better solution in your case is 
to use an appropriately located break statement. Read slapd.access(5).
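A hedged sketch of the break-based layout Howard suggests, reusing the DNs from the thread; `by * break` at the end of a rule lets evaluation fall through to the next access directive instead of stopping. Verify the exact semantics against slapd.access(5) before deploying:

```
access to dn.subtree="ou=hosts,dc=example,dc=info" filter=(role=DBA)
    attrs=puppetclass,puppetvar,environment
    by group/groupOfUniqueNames/uniqueMember="cn=dba,ou=groups,dc=example,dc=info" write
    by * break

access to dn.subtree="ou=hosts,dc=example,dc=info" filter=(role=QA)
    attrs=puppetclass,puppetvar,environment
    by group/groupOfUniqueNames/uniqueMember="cn=qa,ou=groups,dc=example,dc=info" write
    by group="cn=sysadmin,ou=ldapgroup,dc=example,dc=info" write
    by * read

# Catch-all for hosts whose roles fell through every rule above
access to dn.subtree="ou=hosts,dc=example,dc=info"
    by group="cn=sysadmin,ou=ldapgroup,dc=example,dc=info" write
    by * read
```

With this, a host carrying both role=DBA and role=QA is writable by either team without enumerating every role combination.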



On Thu, Feb 7, 2013 at 11:22 AM, Andrew Heagle <and...@logaan.com> wrote:

Hi,

At my work, we use LDAP as the backend for Puppet node definitions. Each
host has an LDAP entry specifying things like which Puppet classes
to apply, host-specific variables, environment (which git branch to use
for Puppet manifests), and a few other things.

There are different teams that would like to be able to manage these
attributes when deploying software. For example, DBA should be able to
manage DB servers while QA need to be able to configure their hosts to
test different software.

Any host that the DBAs can manage has role=DBA applied, and likewise QA
hosts have role=QA set. Since role is multi-valued, a QA DB host can have
both role=DBA and role=QA set on it, since both QA and the DBAs might need
to be able to make changes to the host.

Our slapd.conf has these ACLs:
access to dn.subtree=ou=hosts,dc=example,dc=info filter=(role=DBA)
 attrs=puppetclass,puppetvar,environment
 by
group/groupOfUniqueNames/uniqueMember=cn=dba,ou=groups,dc=example,dc=info 
write
 by group=cn=sysadmin,ou=ldapgroup,dc=example,dc=info write
 by * read

access to dn.subtree=ou=hosts,dc=example,dc=info filter=(role=QA)
 attrs=puppetclass,puppetvar,environment
 by
group/groupOfUniqueNames/uniqueMember=cn=qa,ou=groups,dc=example,dc=info
write
 by group=cn=sysadmin,ou=ldapgroup,dc=example,dc=info write
 by * read


Let's say an LDAP host entry looks like this:
dn: cn=qadb1.int.example.info,ou=QAServers,dc=tor,ou=hosts,dc=example,dc=info
objectClass: puppetClient
objectClass: device
objectClass: exampleHost
objectClass: dNSDomain2
cn: qadb1.int.example.info
role: DBA
role: QA
environment: qa
datacenter: tor
aRecord: 10.10.12.53
dc: qadb1.int.example.info

If a DBA wants to edit this entry, it hits the first ACL, sees the DBA
user in the dba group, and allows the change. If a QA person wants to edit
the entry, it hits the first ACL, sees role=DBA, and checks the DBA group;
the QA user is not in the DBA group, so it rejects the change, even
though the next ACL would have allowed it. It just never reaches that ACL.

Is it possible to somehow use the ACLs above without having to make lots of
ACL rules that combine all the possible combos, such as role=DBA, role=QA,
role=DBAQA, role=DBADEV, role=DBABI, etc.? Such as adding a third entry
(e.g., this would work, but I would like to find a more elegant solution):

access to dn.subtree=ou=hosts,dc=example,dc=info filter=(role=DBAQA)
 attrs=puppetclass,puppetvar,environment
 by
group/groupOfUniqueNames/uniqueMember=cn=dba,ou=groups,dc=example,dc=info 
write
 by
group/groupOfUniqueNames/uniqueMember=cn=qa,ou=groups,dc=example,dc=info
write
 by group=cn=sysadmin,ou=ldapgroup,dc=example,dc=info write
 by * read


Thanks,
Andrew





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Mirror mode difference from N-Way Multi-Master

2013-02-08 Thread Howard Chu

Покотиленко Костик wrote:

On Fri, 08/02/2013 at 11:07 -0800, Quanah Gibson-Mount wrote:

--On Friday, February 08, 2013 8:12 PM +0200 Покотиленко
Костик <cas...@meteor.dp.ua> wrote:


On Fri, 08/02/2013 at 09:47 -0500, John Madden wrote:

As far as I can see there is no difference in configuration. The only
distinction I can see is the use case, as N-Way is supposed to accept
writes on all masters, but Mirror only on one at a time.


There are configuration differences (see the Admin Guide for the needed
snippets; N-Way requires MirrorMode and then some) but yes, MirrorMode
requires writes to go to only one node at a time.


I've re-read those sections of the docs two more times and still don't see
the difference.


There is no configuration difference at this point.  I think there was one
initially ages ago.  The only difference between MMR and mirror-mode MMR is
that in the latter case, writes only go to a single master instead of any
master in the cluster.

In all cases, I strongly advise using delta-syncrepl based MMR with the
most current OpenLDAP release (2.4.33).


Quanah, thanks a lot, this clears things up.

Maybe somebody comment on number 4 from my initial post:


4. In Mirror mode, can I use UpdateRef on one mirror to direct writes
to another so proxy/balancer will become not required?


Of course not. Updateref is ignored in mirrormode. As it must be, because 
otherwise the server would never be able to accept writes at all, which would 
make it useless for failover.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: openLDAP is not working with MySQL cluster

2013-02-21 Thread Howard Chu

Quanah Gibson-Mount wrote:

--On Wednesday, February 20, 2013 12:48 AM +0530 SHEKHAR PODICHETI
shekhar...@gmail.com wrote:


Hi,

I tried connecting the OpenLDAP server to a MySQL cluster through the NDB APIs.
I made configuration changes in slapd.conf so that it connects to NDB.

However it is throwing an Unrecognized NDB error.

I request you to help me. It would be great if you could share a sample
slapd.conf file that has the NDB configuration.


Did you configure with --enable-ndb?  I would note NDB support was never
finished.


Nor is it likely to ever be. The partnership with MySQL (under which this
backend was written) was terminated when Oracle acquired them and Oracle has
ignored all our inquiries on the subject.

Speaking officially for the OpenLDAP Project, at this time, the MySQL Cluster
support in OpenLDAP should be considered totally abandoned due to Oracle's
complete lack of cooperation.

(Of course, the source code is still there if any interested hackers are
looking for a project to adopt.)

On a related note, the Project will also be dropping support for BerkeleyDB 
since OpenLDAP's Lightning MDB has proven to be superior.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: new module overlay modtrigger

2013-02-22 Thread Howard Chu

Maarten Vanraes wrote:

Hey,

a few years back I wrote an overlay module, modtrigger, that executes a
script on modifications.


The preferred approach these days is to use slapo-sock.

fork/exec is still risky on some threading implementations. This is also why 
back-shell is not recommended for general use.



Some of the comments I got were that it really should work with
the new config structure.

Finally I've done that, but I'm sure it could be improved.

So I'm asking if someone can review it and give some pointers for
improvement.

And next, whether it's possible to include it in the OpenLDAP distribution,
in the overlays section or contrib section.


Thanks for the effort but I don't believe it would be a good idea to include 
this code for the above reasons.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: RE24 testing call (OpenLDAP 2.4.34)

2013-02-22 Thread Howard Chu

Bill MacAllister wrote:



--On Thursday, February 21, 2013 07:01:14 PM -0800 Bill MacAllister 
w...@stanford.edu wrote:




--On Thursday, February 21, 2013 03:24:01 PM -0800 Quanah Gibson-Mount 
qua...@zimbra.com wrote:


If you know how to build OpenLDAP manually, and would like to participate in 
testing the next set of code for the 2.4.34 release, please do so.

Generally, get the code for RE24:

http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=snapshot;h=refs/heads/OPENLDAP_REL_ENG_2_4;sf=tgz

Configure & build.

Execute the test suite (via make test) after it is built.

Thanks!

--Quanah


Built debian packages, installed on a master, and loaded the database.
This is on debian testing (wheezy).  The master looks fine.  This is
using back-mdb.  Load with mdb is much faster than old hdb backends.
I will get some real numbers in a day or two.

I am having issues with the slave.  Packages install fine.  When I
start to slapadd the data, the eta starts at 5 minutes and just keeps
increasing.  I killed it after about 15 minutes when the eta hit 25
minutes and was still climbing.  I thought it must be something in my
configuration and tried pulling the exact configuration I used to
load the master onto the slave hardware, but got the same results.  More
debugging to do to try and figure out what is up, but it does not look
like an OpenLDAP problem.  When the load starts to slow down, the progress
display freezes for 30-60 seconds (a guess, but a while anyway) and then
picks up again.


The partition holding the mdb database was initialized as ext4 and
mounted with acls turned on.  I reinitialized the partition as ext3
and mounted it without acl support.  The load of the slave looks more
reasonable now.  Unfortunately, it did not finish.

slapadd -q /etc/ldap/slapd.d -b dc=stanford,dc=edu -l ./db-ldif.0
# 46.37% eta 07m12s elapsed  06m13s spd   1.9 M/s 
51272d39 = mdb_idl_insert_keys: c_put id failed: MDB_TXN_FULL: Transaction has too 
many dirty pages - transaction too big (-30788)
51272d39 = mdb_tool_entry_put: index_entry_add failed: err=80
51272d39 = mdb_tool_entry_put: txn_aborted! Internal error (80)
slapadd: could not add entry 
dn=suRegID=a9a6183ae77011d183712436000baa77,cn=people,dc=stanford,dc=edu 
(line=17741844): txn_aborted! Internal error (80)
*# 46.43% eta 07m11s elapsed  06m13s spd   1.9 M/s
Closing DB...

Not sure what to do about that.  Any suggestions?


That's pretty odd, if this same LDIF already loaded successfully on another 
machine. Generally I'd interpret this error to mean you have a lot of indexing 
defined and some entry is generating a lot of index values. But that doesn't 
seem true if you're using the same config as another machine.


You can probably get past this error by tweaking back-mdb/tools.c but again, 
it makes no sense that you run into this problem on one machine but not on 
another, with identical config. (in tool_entry_open change writes_per_commit 
to 500 from the current value of 1000.)


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: new module overlay modtrigger

2013-02-22 Thread Howard Chu

Maarten Vanraes wrote:

Maarten Vanraes wrote:

Hey,

a few years back i wrote an overlay module modtrigger, that executes a
script on modifications.


The preferred approach these days is to use slapo-sock.


Reading up on slapo-sock, I don't understand how I could do the same.

modtrigger just does a notification; it does not actually handle the queries
itself. It's not a backend.

Or am I misunderstanding this?


Re-read the manpage:
  'This module may also be used as an overlay'

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: ppolicy overlay with sasl bind

2013-02-23 Thread Howard Chu

General Stone wrote:

Hello,

can I use the ppolicy overlay with SASL binds? It seems that it doesn't
work?!


No.

The spec for interaction of ppolicy with SASL (and other) authentication 
methods was never completed.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: slapd, SASL passthrough and changing passwords smashing userPassword

2013-02-25 Thread Howard Chu

Tim Watts wrote:

Hi folks,

I hope this is a quick and easy one :)

I have slapd 2.4.23 working with pass-through to MIT Kerberos via
saslauthd. I use smbkrb5pwd (a hack on smbk5pwd) to pass password
changes through to Kerberos (creating or modifying the target principal
as required).


I haven't seen smbkrb5pwd but, as the author of smbk5pwd, it sounds like the 
hack is inadequate. smbk5pwd provides the {K5KEY} password hash mechanism, so 
you can use the Kerberos password directly, and you don't need {SASL} at all.



To enable a particular user to bind to slapd with their kerberos
password, I'm setting:

userPassword: {SASL}my...@my.kerberos.realm.example.com

This works *very nicely*. Except one thing...

Using passwd via pam_ldap, or ldappasswd directly, smashes userPassword:
and replaces the value with the password hash. Both mechanisms are doing
EXOP password changes.


Is there any way to stop this happening when the mechanism in
userPassword is {SASL} ?


Set password-hash to {SASL} in slapd.conf/slapd-config.


Or maybe there is another way to enable global SASL password passthroughs?

==

I'm in a transition phase. I need to import the slapcat output from the
old LDAP server into my new one. At this point, all authentication should
be done with the existing userPassword hash. Password changes should
update this hash and create/modify principals on the Kerberos server.


Set the password-hash to {SASL} and whatever other hash you want to use.
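For example, in slapd.conf (the second hash scheme here is only an illustration; pick whatever local hash you actually want alongside {SASL}):

```
# Hashes applied to passwords set via the Password Modify exop
password-hash  {SASL} {SSHA}
```

The cn=config equivalent would be the olcPasswordHash attribute; check slapd.conf(5) for the exact semantics of listing multiple schemes.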


3 months later, I want to switch the auth mechanism on all accounts to
pass-through to Kerberos, at which point ldappasswd should still work,
but via smbkrb5pwd updating Kerberos.

Maybe my strategy is wrong, but that's the basic problem I need to solve.

Am I trying this the wrong way?

Cheers and thanks in advance for any ideas.

Tim




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: internal_modify in overlays

2013-02-28 Thread Howard Chu

Tim Watts wrote:

Hi,

Does anyone know of a bit of code I can look at that does an *internal*
(completed inline) LDAP_MOD_REPLACE operation on one attribute without
chaining (i.e. it does a return 0)?

I've found Sun docs for doing this in a slapi plugin but not an openldap
slapd plugin.


Reason:

Basically, I've been hacking on smbkrb5pwd.c and discovered that if I do a
return 0; at the end, I can prevent chaining (not documented, but I found
some OpenLDAP hacking - denyop.c - that demonstrates this).


Documented. slapd/overlays/slapover.txt.

You should not be attempting to write code here without having actually read 
what's in front of you. If your first instinct is *not* naturally to read the 
source tree, your programming habits need sharpening.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: additional info: objectClasses: value #0 invalid per syntax

2013-02-28 Thread Howard Chu

Jimmy Royer wrote:

Hello,

I am starting out with openldap and I don't know it that much. I got
the error mentioned in the title when trying to add an object class,
which is apparently a very common one per my google searches. I've
read that common causes are:

* extraneous white space (especially trailing white space)
* improperly encoded characters (LDAPv3 uses UTF-8 encoded Unicode)
* empty values (few syntaxes allow empty values)

This is the object class file I am trying to add, I picked it as an
example on some website, to have something minimal and make it easier
to test:

# cat exObjectClasses.ldif
dn: cn=schema
changetype: modify
add: objectClasses
objectClasses: ( 2.16.840.1.113730.3.2.2.9
  NAME 'blogger'
  DESC 'Someone who has a blog'
  SUP inetOrgPerson STRUCTURAL
  MAY blog )

I've checked whether there were any trailing spaces at the end with the following:

# cat -vte exObjectClasses.ldif
dn: cn=schema$
changetype: modify$
add: objectClasses$
objectClasses: ( 2.16.840.1.113730.3.2.2.9$
  NAME 'blogger'$
  DESC 'Someone who has a blog'$
  SUP inetOrgPerson STRUCTURAL$
  MAY blog )$

I've made sure the file is UTF-8:

# iconv -f ASCII -t UTF-8 exObjectClasses.ldif > exObjectClasses.ldif.utf8


Redundant. 7-bit ASCII is already valid UTF-8. And if you had any stray 8-bit 
ASCII characters in there, they obviously would be erroneous and should be 
deleted, not converted to UTF-8.


Most likely you trimmed too many spaces. Read the ldif(5) manpage.

Also, cn=schema is not a user modifiable entry in OpenLDAP. If you want to add 
new schema you must add it to cn=schema,cn=config.
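A hedged sketch of the same schema recast for cn=config. The olc* attribute names are the standard slapd-config ones, but the attribute OID below is a placeholder (use one from your own OID arc), and the 'blog' attribute definition is an assumption: the attribute must be defined before the object class can reference it:

```
dn: cn=blogger,cn=schema,cn=config
objectClass: olcSchemaConfig
cn: blogger
olcAttributeTypes: ( 1.1.1.1 NAME 'blog'
  DESC 'URL of a blog' SUP labeledURI )
olcObjectClasses: ( 2.16.840.1.113730.3.2.2.9 NAME 'blogger'
  DESC 'Someone who has a blog' SUP inetOrgPerson STRUCTURAL
  MAY blog )
```

Note the LDIF folding rule from ldif(5): continuation lines start with a single space that is stripped on parsing, so a folded value needs an extra space of its own to keep tokens separated.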


Seems like, given that you haven't mentioned cn=config, you're probably using 
a pretty old version of OpenLDAP as well.



And I don't think there are any empty values defined in the LDIF file.
So when I type this command, I still have the invalid per syntax
error:

# ldapmodify -x -W -H ldaps://127.0.0.1 -D
cn=Manager,dc=modelsolv,dc=com -f exObjectClasses.ldif
Enter LDAP Password:
modifying entry cn=schema
ldap_modify: Invalid syntax (21)
 additional info: objectClasses: value #0 invalid per syntax



I was able to add a few entries in LDAP so far. So I know I am able to
reach the server, the connection is fine, and LDAP is somewhat
functional. But I can't modify the schema with objectclasses.

Is there anything obvious that I am doing wrong? Do you have any
recommendation for debugging further?

Regards,
Jimmy Royer





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Combining AD and Local DB into single 'virtual' tree

2013-03-04 Thread Howard Chu

Mailing Lists wrote:

Hello,
I posted a question along these lines a few months ago and received replies,
but never understood enough to implement them. I've done more research in the
meantime and hopefully have learned enough to ask this question intelligently.
I'm working on a project proposal for integrating Linux machines into a
Windows environment. The client is very concerned about their AD environment
and wants to do as little modification to it as possible (preferably none).

What I'd like to propose is that we set up an OpenLDAP server that chains to
AD. If possible, I would like to use the OpenLDAP client's credentials to bind
to AD, instead of having a dedicated user for the OpenLDAP-to-AD connection.
I believe this can be accomplished with the 'rebind-as-user' option of the
ldap backend (slapd-ldap). Is this correct?


No. That is not what the slapd-ldap(5) manpage says for rebind-as-user. Go 
RTFM. What you want is idassert-bind.
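A hedged slapd-ldap sketch of idassert-bind (hostnames, DNs, and credentials are placeholders; see slapd-ldap(5) for the full option list and the authorization requirements on the AD side):

```
database    ldap
suffix      "dc=ad,dc=example,dc=com"
uri         "ldap://ad.example.com/"
# Bind once as a proxy identity, then assert the client's identity
idassert-bind bindmethod=simple
    binddn="cn=proxy,dc=ad,dc=example,dc=com"
    credentials="secret"
    mode=self
idassert-authzFrom "dn.subtree:dc=example,dc=com"
```

Note this still needs one administrative identity on the remote side; what idassert-bind avoids is forwarding every operation under that single identity.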



Now here's where I think it gets tricky. We also need to be able to store
information for the Linux boxes in LDAP (samba winbind mappings for example),
but keep it separate from AD. I know that part of this would require a
dedicated LDAP database backend (slapd-bdb) to be configured, but what
confuses me is how to combine these two separate entities (the AD proxy and
this bdb database) into one 'virtual' backend that clients can query against.
Is this where slapd-translucent would come into play?


slapo-translucent has only one purpose - to override the attributes of an 
entry that exists on a remote server with values stored in a local server. If 
the entry doesn't exist on the remote server, then slapo-translucent is not 
what you want.



Finally, if I want to create OUs in the Linux LDAP database that contain user
DNs from AD, is that possible?


Anything is possible. Dunno if it makes sense though.


Any guidance, example solutions, or suggested reading is greatly appreciated.
-Dave


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: SASL/PLAIN Passthrough auth

2013-03-08 Thread Howard Chu

Robin Helgelin wrote:

Hi,

I have SASL pass-through authentication working when using a simple
bind, but only for users that have a userPassword starting with {SASL}. When
a user's password contains {SASL}extraAuthInformation, the
extraAuthInformation is passed on as the username to saslauthd and
everything works as it should.

However, when using SASL/PLAIN all requests go to saslauthd
without passing the extra information found in userPassword. Another
issue is that the username sent to saslauthd is the username entered
by the user, not the DN found when rewriting the username with
authz-regexp.

Is this by design or did I miss anything? Documentation states that
pass-through should be working with SASL/PLAIN, but perhaps I
misunderstood what it really meant?


That's by design. The authz-regexp mapping is only used when the target 
credentials are stored in slapd. Since you're using SASL/PLAIN to actually 
talk to saslauthd, nothing inside slapd is relevant.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: ldap_add: Invalid syntax (21)

2013-03-11 Thread Howard Chu

Graeme Gemmill wrote:

On 10/03/13 18:44, Keutel, Jochen (mlists) wrote:

Hi,
   I don't think dc=name is related to your schema problem.
Probably object class mozillaAbPersonAlpha was not included in your
schema files.
Please provide /usr/local/etc/openldap/schema/mozilla.schema to check
this.



By the way, having done a bit of reading, I think the problem may be
the lack of a cn: or sn: when processing an inetOrgPerson.


No. Lack of a required attribute would return Objectclass violation, not 
Invalid syntax. Most likely you have trailing whitespace or some other 
invisible character garbling the input.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Adding samba schema to OpenLDAP2.4

2013-03-14 Thread Howard Chu

Wes Modes wrote:

Previously, I was running 2.3 and then 2.4 using all the 2.3 config files.

I am building a new 2.4 server the right way using OpenLDAP native
database and config schema.

As I migrate the functionality of the old server to the new one, I will
have various questions.

Today's question:  How do I import the samba (3.6.9) schema (previously
in an include schema file) to the new 2.4 server?


Why would you need to do this as an explicit step? When you convert slapd.conf 
to a database, everything included in the config is converted at the same time.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Adding samba schema to OpenLDAP2.4

2013-03-14 Thread Howard Chu

Wes Modes wrote:

On 3/14/2013 5:50 AM, Howard Chu wrote:

Wes Modes wrote:

Previously, I was running 2.3 and then 2.4 using all the 2.3 config
files.

I am building a new 2.4 server the right way using OpenLDAP native
database and config schema.

As I migrate the functionality of the old server to the new one, I will
have various questions.

Today's question:  How do I import the samba (3.6.9) schema (previously
in an include schema file) to the new 2.4 server?


Why would you need to do this as an explicit step? When you convert
slapd.conf to a database, everything included in the config is
converted at the same time.


Rather than convert the entire database, I was bringing over stuff more
piecemeal in an effort to clean up some of my fumbling attempts several
years ago to set up the ldap and the samba servers.

The samba schema in my 2.3 configuration is not in ldif format.  It is
in /etc/openldap/schemas and is included by the ldap.conf file.

I was surprised that I was unable to google explicit instructions on
adding a samba schema to 2.4.  Has anyone done this, and can you point
me to the proper docs?


Instructions for manually converting are in schema/openldap.ldif.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Encryption or hash for password?

2013-03-15 Thread Howard Chu

Gerhardus Geldenhuis wrote:

Thanks,
I thought crypt as well... but then I would expect it to look like:
userPassword: {CRYPT}saHW9GdxihkGQ

instead slapcat generates:
userPassword:: skadfjsajf=

Two small differences: there are two colons instead of one, and all of the
userPassword entries end in =.


Read the ldif(5) manpage.
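The ldif(5) rule in question: `attr:: value` (double colon) means the value is base64-encoded, which slapcat uses whenever a value is not safely printable. A quick Python illustration with a made-up hash:

```python
import base64

# Round-trip a {CRYPT} password value the way LDIF represents it.
plain = b"{CRYPT}saHW9GdxihkGQ"
encoded = base64.b64encode(plain).decode("ascii")
print("userPassword:: " + encoded)   # note the trailing '=' padding

# Decoding recovers the scheme prefix and hash unchanged:
decoded = base64.b64decode(encoded)
assert decoded == plain
```

So the `::` and the trailing `=` say nothing about the hash scheme; decode first, then look at the `{...}` prefix.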


Regards


On 15 March 2013 15:19, Marot Laurent <laurent.ma...@alliacom.com> wrote:

Hello,

Seems to be a base64-encoded {crypt} password:

http://www.openldap.org/faq/data/cache/344.html

{crxPt}$1$I0(g7lbc$Zp/rgvZBd0eHöndgh0W3L/

Laurent

From: openldap-technical-boun...@openldap.org
[mailto:openldap-technical-boun...@openldap.org] On behalf of
Gerhardus Geldenhuis
Sent: Friday, 15 March 2013 15:58
To: openldap-technical@openldap.org
Subject: Encryption or hash for password?

Hi

I am using the default Ubuntu 12.10 openldap installation and have
inherited an existing ldap setup. When I do a slapcat -n 1

It shows userPassword entries as follows:

userPassword:: e2NyeFB0fSQxJEkwKGc3bGJjJFpwL3JndlpCZDBlSPZuZGdoMFczTC8=

( password string has been edited... )

I am not sure how this is encoded... is there a way to find out? I have
tried md5 which is currently the default encoding for our servers.

I have also tried slappasswd with various -h option to see if I can
recreate the same hash if it is a hash.

I want to add new users using ldif and would like to encrypt/hash their
passwords in a similar fashion if possible.

Any help would be appreciated.

Regards

--
Gerhardus Geldenhuis



--

Paper is a natural, renewable and recyclable communication medium. If you
must print this email, please remember to recycle it.




--
Gerhardus Geldenhuis



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Lua wrapper for LMDB

2013-03-15 Thread Howard Chu

Shmulik Regev wrote:

Hi,

I've been working on a Lua wrapper for LMDB -
https://github.com/shmul/lightningdbm . It is a thin wrapper around the
database leveraging upon Lua's elegant integration with C libraries.

Feedback is welcomed warmly.


Looks nice and clean. I'll add a link to it on the LMDB main page.


Cheers,
Shmul




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: LMDB - growing the database

2013-03-16 Thread Howard Chu

Shmulik Regev wrote:

Hi,

I understand that the DB size has an upper limit set by the call to
mdb_env_set_mapsize . I wonder what is the best strategy for growing the size.


The best strategy is to initially pick a large enough size that growing it is 
never an issue. E.g., set it to the amount of free space on the disk partition 
where the DB resides. As the docs state, when the DB is actually in use, 
there's no guarantee that there will be enough free system resources left for 
a resize attempt to even succeed. I.e., if you initially choose a small size, 
by the time you need to deal with it it may be too late.



From what I read, the put operations of either a txn or a cursor may fail with
the MDB_MAP_FULL error, but by then it is too late to change the DB size. On the
other hand, I didn't find in mdb_env_stat (or perhaps I didn't understand) any
information suggesting how full the DB is, so I can't really implement any
strategy for preemptively growing the DB based on the used space.

Did I miss anything?


There are 2 sets of information you can use for an estimate.

1) MDB_stat gives you the page size.

2) MDB_envinfo tells the mapsize and the last_pgno. If you divide mapsize by 
pagesize you'll get max pgno. The MAP_FULL error is returned when last_pgno 
reaches max pgno.


This is also tempered by the contents of the freelist. The mdb_stat command 
can tell you how many pages are on the freelist. (mdb_stat -f). The DB isn't 
actually full until the freelist is used up, and last_pgno is maxed out.
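The arithmetic above can be wrapped in one helper. This is only a sketch; the function name and parameter bundling are mine, with the inputs taken from MDB_envinfo (me_mapsize, me_last_pgno), MDB_stat (ms_psize), and the freelist count shown by mdb_stat -f:

```c
#include <stddef.h>

/* Fraction of the map in use: (last_pgno - freelist pages) / max pgno.
 * MAP_FULL is hit when last_pgno reaches mapsize / pagesize and the
 * freelist is exhausted. */
double mdb_fill_ratio(size_t mapsize, size_t psize,
                      size_t last_pgno, size_t free_pages)
{
    size_t max_pgno = mapsize / psize;
    size_t used = (last_pgno > free_pages) ? last_pgno - free_pages : 0;
    return max_pgno ? (double)used / (double)max_pgno : 1.0;
}
```

A caller could preemptively grow the map, or raise an alert, once this ratio crosses some threshold such as 0.8.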


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: impact on performance by adding jpeg file

2013-03-17 Thread Howard Chu

Jignesh Patel wrote:

We have around 10 million users and are trying to add images (faces only); is it 
wise to do it that way?
Or should I fetch the images from a database instead?


As databases go, OpenLDAP with BerkeleyDB is far more efficient than typical 
SQL databases. And OpenLDAP with LMDB is far more efficient than BerkeleyDB.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: provider/consumer: entries have identical CSN

2013-03-18 Thread Howard Chu

Walter Werner wrote:

hi everyone

ok, i think i found it :-). It is the sizelimit parameter on the provider.

'The olcSizeLimit/sizelimit attribute/directive specifies the number
of entries to return to a search request'

According to the website

http://www.zytrax.com/books/ldap/ch6/


Zytrax are plagiarists, they republish old versions of OpenLDAP docs and claim 
it as their own work. Zytrax are nothing more than a cancer on open source.



It says that 'If no sizelimit directive is defined the default is
500.' No wonder that i always had 500 results with ldapsearch -x,
despite the fact that i deleted some entries.


The same information came from the slapd.conf(5) manpage, as well as the Admin 
Guide. http://www.openldap.org/doc/admin24/limits.html


You're always safest to use the official documentation.


Walter

2013/3/18 Walter Werner wer...@gmail.com:

hello everyone

I still did not solve my problem, but i think the solution could be
really some size limitation (already suggested by Marc)

LDAP_RES_SEARCH_RESULT (4) Size limit exceeded

The replication went until object stud31. So i deleted on the provider (on
the test environment i can do that) objects stud01 until stud06. And
the replication then went until stud34. 6 deleted and 3 more could replicate. I
guess the other 3 objects were replicated in some other sub trees i
did not notice, if the size limit is a constant. The question is: is
there an easy way to see the difference between the provider and
consumer?

And the main question is, where can the size limitation (if i am
thinking right) comes from?

Every help is highly appreciated.

   Walter

2013/3/15 Walter Werner wer...@gmail.com:

hi Marc

Thanks a lot for you quick answer.

2013/3/15 Marc Patermann hans.mo...@ofd-z.niedersachsen.de:

Walter,

Walter Werner schrieb (15.03.2013 10:58 Uhr):



I get a strange replication problem. After i didn't find a solution
somewhere on internet i decided to post to this mailing-list. Probably
i should describe my system settings. Both consumer and provider are
running on suse 12.1. And i got the errors with openldap version
2.4.26-3.1.3. Since it is good practice, as i read somewhere on this
mailing list, i compiled the latest openldap v2.4.34 and could
unfortunately reproduce the same error.


There is a repo, did you know?
http://download.opensuse.org/repositories/network:/ldap:/OpenLDAP:/
(it is still 2.4.33, but anyway)


No, i didn't. That can save me a lot of time in the future.




Mar 15 09:17:43 ismvm22 slapd[17313]: dn_callback : entries have
identical CSN uid=stud31,ou=Student,ou=People,ou=myou,dc=mybase
20130315072217.081269Z#00#000#00


do the objects differ from provider to consumer?


Especially that stud31 object is exactly the same. I am not sure all
copied objects are the same, if that was the question. Apparently ldap
has added the stud objects in alphabetical order. All studXX objects
up to stud31 are there. So after stud31 there should be stud32, stud33
and so on, but they are missing on the consumer. It is maybe no
accident that the log ends with the stud31 object.

dn_callback : entries have identical CSN uid=stud31...




Mar 15 09:17:43 ismvm22 slapd[17313]: do_syncrep2: rid=010
LDAP_RES_SEARCH_RESULT (4) Size limit exceeded
Mar 15 09:17:43 ismvm22 slapd[17313]: do_syncrep2: rid=010 (4) Size
limit exceeded
Mar 15 09:17:43 ismvm22 slapd[17313]: do_syncrepl: rid=010 rc -2
retrying (58 retries left)


Change that!


Do you mean Size limit exceeded? I already thought about that. Please
look at the partial configs in the first mail. On the provider, time and
size are set to unlimited for the replicator reader. On the consumer
it is also set to unlimited. Maybe i forgot some other option.


overlay syncprov


man slapo-syncprov tells more about options to the syncprov overlay.


There are indeed a lot more. I tried with additional parameter for checkpoint

syncprov-checkpoint 100 10

No effect. The other options seem to me to be for speeding up ldap. I do not
want to complicate things. I don't know if it is useful, but before trying
out something, i also always deleted the database files on the
consumer to avoid memory effects of some sort.

Still not replicating properly.

Walter






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: object class values in a read or search result

2013-03-22 Thread Howard Chu

Michael Ströder wrote:

Manuel Gaupp wrote:


I don't think so, because RFC 4512, section 3.3 says:

   When creating an entry or adding an 'objectClass' value to an entry,
all superclasses of the named classes SHALL be implicitly added as
well if not already present. [...]

If I'm interpreting this correctly, the OpenLDAP behaviour is a bug.


Well, implicitly added is a bit vague to call it a bug since the entries are
returned when searching for the superior object class.


In the sense that implicit is the opposite of explicit the OpenLDAP 
behavior is exactly correct. Also as a general rule the X.500 data model 
requires that a server store and return exactly what the user provided.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: LMDB vs FastDB

2013-03-22 Thread Howard Chu

Tobias Oberstein wrote:

Am 21.03.2013 21:58, schrieb Howard Chu:

Tobias Oberstein wrote:

Hello,

I have read the - very interesting - performance comparison
http://symas.com/mdb/microbench/

I'd like to ask if someone did benchmark LMDB (and/or the others)
against http://www.garret.ru/fastdb.html

FastDB is an in-memory ACID database that works via shadow paging, and
without a transaction log.


OK, like LMDB it uses shadow root pages. I think the similarity ends there.


Ah. Ok.


It is a relational database with an ASCII query language, while LMDB is
strictly a key/value store. That automatically means for simple get/put
operations LMDB will be orders of magnitude faster (just as it is so
much faster than SQLite3 and SQLite4).


Mmh. The overhead of parsing a SELECT value FROM kv WHERE key = ? or
executing a prepared version of the former versus a direct kv.get(key)
is there, sure, but _orders_ of magnitude larger?


Measure it for yourself and let us know, if you have any doubts.


Btw: could LMDB be used as a backend of sqlite4? I.e. does LMDB support
ordered access?


Yes. We have that already under way.
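For context, "ordered access" falls out of LMDB's B+tree layout: a cursor walk visits keys in comparator order. A hedged sketch (error checks trimmed, MDB_env assumed already opened elsewhere):

```c
#include <lmdb.h>
#include <stdio.h>

/* Print all keys in ascending comparator order. MDB_NEXT on a freshly
 * opened cursor positions at the first key, so the loop covers the
 * whole database. */
void dump_in_order(MDB_env *env)
{
    MDB_txn *txn;
    MDB_dbi dbi;
    MDB_cursor *cur;
    MDB_val key, data;

    mdb_txn_begin(env, NULL, MDB_RDONLY, &txn);
    mdb_dbi_open(txn, NULL, 0, &dbi);
    mdb_cursor_open(txn, dbi, &cur);
    while (mdb_cursor_get(cur, &key, &data, MDB_NEXT) == 0)
        printf("%.*s\n", (int)key.mv_size, (char *)key.mv_data);
    mdb_cursor_close(cur);
    mdb_txn_abort(txn);
}
```

Range scans work the same way: position with MDB_SET_RANGE, then iterate with MDB_NEXT.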


FastDB is also written in C++ which is another inherent disadvantage
compared to LMDB which is pure C.


Yes, though I'm a C++ fan, I agree with this point. E.g., here is a nice
Python wrapper

https://github.com/dw/py-lmdb/

which interfaces using Cython and wouldn't be possible if LMDB were C++.


Exactly. If you want to write a tool that's useful, don't choose a language 
that automatically precludes other uses. People writing libraries in Java, 
this means you. (e.g. SimpleDBM, which I stumbled on a few years ago while 
looking into B-link trees. http://code.google.com/p/simpledbm/)



You could adapt the LevelDB microbenchmarks to test it but ultimately I
believe it would be a waste of time.



Thanks for your detailed answer and sharing information! It seems LMDB
deserves much more visibility in the community.


Totally. I've been hitting every relevant conference that comes my way.
http://symas.com/mdb/

but there's only so much time, and travel interferes with development. As is 
so often the case in OpenLDAP, huge numbers of people/projects use our work 
but very few publish their success stories.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: shadowLastChange can't be read

2013-03-23 Thread Howard Chu

Maria McKinley wrote:

Hi there,

I can change the shadowLastChange attribute:

maria@mimi:~/sysadmin/ldap$ ldapmodify -x -v -r -W -D
cn=admin,dc=example,dc=com -f pass.expldap_initialize( DEFAULT )
Enter LDAP Password:
replace shadowLastChange:
 15786
modifying entry uid=chris,ou=people,dc=example,dc=com
modify complete

But, I can't see it:

annette:~# ldapsearch -x uid=chris shadowLastChange
# extended LDIF
#
# LDAPv3
# base dc=example,dc=com (default) with scope subtree
# filter: uid=chris
# requesting: shadowLastChange
#

# chris, people, example.com http://example.com
dn: uid=chris,ou=people,dc=example,dc=com

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1

Even though this is my permission:

olcAccess: {0}to attrs=shadowLastChange by self write by anonymous auth by dn=
  cn=admin,dc=example,dc=com write by * read
olcAccess: {1}to attrs=userPassword by self write by anonymous auth by dn=cn=
  admin,dc=example,dc=com write by * none
olcAccess: {2}to dn.base= by * read
olcAccess: {3}to * by self write by dn=cn=admin,dc=example,dc=com write by *
   read

Have I done something wrong with my permissions? Is there something else that
could be going on here?


Looks like it's behaving exactly as you specified. As admin you have write 
access. When you searched anonymously, you got no access. (You gave anonymous 
auth access, but a search is obviously not an auth request.)


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: object class values in a read or search result

2013-03-23 Thread Howard Chu

Keutel, Jochen (mlists) wrote:

Hello,

  Also as a general rule the X.500 data model requires that a server store
and return exactly what the user provided.

   please tell me where in X.500 you find this. I couldn't find it. Instead I
found (X.501 (2008), chapter 13.3.2 ( The object class attribute) :

Every entry shall contain an attribute of type objectClass to identify the
object classes and superclasses to which the entry belongs. The definition of
this attribute is given in 13.4.8. This attribute is multi-valued.
There shall be one value of the objectClass attribute for the entry's
structural object class and a value for each of its superclasses. top may be
omitted.

This means - in my understanding - that the server has to set these values for
the attribute object class - one per superclass.


Please, this is an ancient question, answered long ago.

http://www.openldap.org/lists/openldap-software/200208/msg00721.html

Search the archives. There's no point carrying this one on again.



Am 22.03.2013 21:02, schrieb Howard Chu:

Michael Ströder wrote:

Manuel Gaupp wrote:


I don't think so, because RFC 4512, section 3.3 says:

   When creating an entry or adding an 'objectClass' value to an entry,
all superclasses of the named classes SHALL be implicitly added as
well if not already present. [...]

If I'm interpreting this correctly, the OpenLDAP behaviour is a bug.


Well, implicitly added is a bit vague to call it a bug since the entries are
returned when searching for the superior object class.


In the sense that implicit is the opposite of explicit the OpenLDAP
behavior is exactly correct. Also as a general rule the X.500 data model
requires that a server store and return exactly what the user provided.






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: [LMDB] Append To Value

2013-03-26 Thread Howard Chu

Yucheng Low wrote:

Hi,

I am considering using LMDB to store a dynamic graph structure, and it will be
very helpful to support append to value having the same semantics as
append in Kyoto Cabinet.
http://fallabs.com/kyotocabinet/api/classkyotocabinet_1_1BasicDB.html#a23e776e5bd1e3c5caa0f62edffb87a54

For instance, if I were to store an adjacency list representation of a graph
in LMDB, I would use vertex ID as the key, and the value is a vector of 64-bit
IDs. To insert an edge into the graph will simply require appending 8 bytes to
the end of the vector. Re-reading and re-writing the entire value will be
somewhat excessive (and will actually change the asymptotic runtime of the
insertion).

This is also one of those situations where a batch append might be useful. For
instance, if I have an edge list on file which is too large to retain in
memory. To convert it to an adjacency list in LMDB will require batch-append.
Batch-writes are insufficient since that will require me to cache at least,
large parts of the adjacency list in memory, and the original edge list could
be ordered arbitrarily.

The actual graph representation I am thinking of is somewhat more intelligent
than a straight adjacency list, but similar concepts hold.


Based solely on the description here, I would use MDB_DUPFIXED and 
MDB_APPENDDUP or MDB_MULTIPLE. You could later use MDB_GET_MULTIPLE to read 
the values and reconstruct the vector.
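A hedged sketch of that layout for the adjacency-list case (names are illustrative; it assumes a dbi opened with MDB_DUPSORT|MDB_DUPFIXED and a transaction already in hand, and error handling is trimmed):

```c
#include <lmdb.h>
#include <stdint.h>
#include <string.h>

/* One key per vertex, one fixed-width duplicate per neighbor:
 * inserting an edge is a single dup insert, no read-modify-write.
 * Pass MDB_APPENDDUP instead of 0 if the dst values arrive sorted. */
void add_edge(MDB_txn *txn, MDB_dbi dbi, uint64_t src, uint64_t dst)
{
    MDB_val key  = { sizeof(src), &src };
    MDB_val data = { sizeof(dst), &dst };
    mdb_put(txn, dbi, &key, &data, 0);
}

/* Reconstruct the neighbor vector with MDB_GET_MULTIPLE, which returns
 * up to a page of fixed-size duplicates per call on DUPFIXED databases. */
size_t get_neighbors(MDB_txn *txn, MDB_dbi dbi, uint64_t src,
                     uint64_t *out, size_t max)
{
    MDB_cursor *cur;
    MDB_val key = { sizeof(src), &src }, data;
    size_t n = 0;

    mdb_cursor_open(txn, dbi, &cur);
    if (mdb_cursor_get(cur, &key, &data, MDB_SET) == 0 &&
        mdb_cursor_get(cur, &key, &data, MDB_GET_MULTIPLE) == 0) {
        do {
            size_t cnt = data.mv_size / sizeof(uint64_t);
            if (n + cnt > max) cnt = max - n;
            memcpy(out + n, data.mv_data, cnt * sizeof(uint64_t));
            n += cnt;
        } while (n < max &&
                 mdb_cursor_get(cur, &key, &data, MDB_NEXT_MULTIPLE) == 0);
    }
    mdb_cursor_close(cur);
    return n;
}
```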


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: [LMDB] Append To Value

2013-03-26 Thread Howard Chu

Yucheng Low wrote:

I see, correct me if I misunderstood: you are suggesting using duplicate keys
to store the vector?
In other words, if I have a neighbor list for vertex 5, comprising the values
[1,2,3], I will store it as 3 duplicate values for key 5.

Also, does LMDB store the key just once for all the duplicate values? Or
multiple times, once for each value?


The key is only stored once. Duplicating the key would be stupid. Also 
DUPFIXED values are stored contiguously, with no headers or other overhead. It 
will be effectively the same amount of storage as storing your single record.



Thanks,
Yucheng

On Tue, Mar 26, 2013 at 11:35 AM, Howard Chu h...@symas.com
mailto:h...@symas.com wrote:

Yucheng Low wrote:

Hi,

I am considering using LMDB to store a dynamic graph structure, and it
will be
very helpful to support append to value having the same semantics as
append in Kyoto Cabinet.

http://fallabs.com/kyotocabinet/api/classkyotocabinet_1_1BasicDB.html#a23e776e5bd1e3c5caa0f62edffb87a54

For instance, if I were to store an adjacency list representation of a
graph
in LMDB, I would use vertex ID as the key, and the value is a vector
of 64-bit
IDs. To insert an edge into the graph will simply require appending 8
bytes to
the end of the vector. Re-reading and re-writing the entire value will 
be
somewhat excessive (and will actually change the asymptotic runtime of 
the
insertion).

This is also one of those situations where a batch append might be
useful. For
instance, if I have an edge list on file which is too large to retain in
memory. To convert it to an adjacency list in LMDB will require
batch-append.
Batch-writes are insufficient since that will require me to cache at
least,
large parts of the adjacency list in memory, and the original edge
list could
be ordered arbitrarily.

The actual graph representation I am thinking of is somewhat more
intelligent
than a straight adjacency list, but similar concepts hold.


Based solely on the description here, I would use MDB_DUPFIXED and
MDB_APPENDDUP or MDB_MULTIPLE. You could later use MDB_GET_MULTIPLE to
read the values and reconstruct the vector.

--
   -- Howard Chu
   CTO, Symas Corp. http://www.symas.com
   Director, Highland Sun http://highlandsun.com/hyc/
   Chief Architect, OpenLDAP http://www.openldap.org/project/





--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: LMDB vs FastDB

2013-03-30 Thread Howard Chu

Tobias Oberstein wrote:

FastDB also appears to use locking, while LMDB is MVCC and readers


Yeah, MVCC is the right thing ..


require no locks, so even with all of the other disadvantages out of the
way, LMDB will scale better across multiple CPUs.


So _one_ LMDB can be concurrently used from multiple threads and
multiple processes, with multiple readers and writers?

Writers wont block readers, but writers require exclusive lock?

When used from different process, the single-layer design making use of
the filesystem buffer cache will mean there is no buffer cache per
process, and memory-consumption won't skyrocket?


Correct. I've already demonstrated this by running SLAMD benchmarks against 
two slapds operating on the same database. (Due to inefficiencies in slapd's 
frontend/connection manager/threadpool, you can actually double slapd 
throughput using 2 slapds on the same database.)


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: I can't delete a shell DB

2013-04-17 Thread Howard Chu

Michael Ströder wrote:

Diego Woitasen wrote:

  I was using shell backend and now switched to sock backend (shell
looks unstable). My problem now is that I can't delete the shell DB.

This is the entry in cn=config:

dn: olcDatabase={3}shell,cn=config
objectClass: olcShellConfig
objectClass: olcDatabaseConfig
olcDatabase: {3}shell

When I try to remove it I get Error 53 - Unwilling to perform.

Looks like there is a dependency with this entry, but I don't see
where. It is the last one, and i removed the suffix. Any hints?

And i don't see anything useful in the logs.


back-config does not support delete operation by default.

You can enable delete support by compiling with
CFLAGS=.. -DSLAP_CONFIG_DELETE

Not sure whether that's officially supported/recommended by OpenLDAP
developers though.


Not yet.

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: I can't delete a shell DB

2013-04-17 Thread Howard Chu

Diego Woitasen wrote:

On Wed, Apr 17, 2013 at 12:52 PM, Michael Ströder mich...@stroeder.com wrote:

Diego Woitasen wrote:

  I was using shell backend and now switched to sock backend (shell
looks unstable). My problem now is that I can't delete the shell DB.

This is the entry in cn=config:

dn: olcDatabase={3}shell,cn=config
objectClass: olcShellConfig
objectClass: olcDatabaseConfig
olcDatabase: {3}shell

When I try to remove it I get Error 53 - Unwilling to perform.

Looks like there is a dependency with this entry, but I don't see
where. It is the last one, and i removed the suffix. Any hints?

And i don't see anything useful in the logs.


back-config does not support delete operation by default.

You can enable delete support by compiling with
CFLAGS=.. -DSLAP_CONFIG_DELETE

Not sure whether that's officially supported/recommended by OpenLDAP
developers though.

Ciao, Michael.



No way to remove configuration??? Sounds really weird :P


From a philosophical standpoint - production configurations generally only 
grow. If you're mucking around and experimenting, you do that on a throw-away 
development system.


From a practical standpoint - behavior of the service when clients are making 
requests to a backend that gets removed is totally undefined.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: I can't delete a shell DB

2013-04-17 Thread Howard Chu

Michael Ströder wrote:

 From a practical standpoint - behavior of the service when clients are making
requests to a backend that gets removed is totally undefined.


LDAP clients do not care about (OpenLDAP) database backends at all.
They simply query a DIT.


Yes, but they expect to get consistent answers to their queries. You cannot 
make any assertions about consistency when the rug is pulled out from under a 
running query.



AFAICS the original poster wanted to replace back-shell with back-sock for the
very same naming context. In theory this could be done with back-config - 
requiring only a very small downtime - if entry deletion in back-config were 
possible.


It would require adding a suffix to one backend while removing it from 
another. Since this can't be done in a single LDAP request it would require 
wrapping both changes in a single LDAP Transaction.


Doing it non-atomically would invariably result in inexplicable client error 
messages as they send requests to an LDAP server that was working fine 
before but suddenly replies no global superior knowledge.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: How to improve performance with MDB backend?

2013-04-19 Thread Howard Chu

Quanah Gibson-Mount wrote:

--On Thursday, April 18, 2013 3:59 PM + Chris Card ctc...@hotmail.com
wrote:


Maybe MDB performance relative to BDB degrades as the database get
bigger.  From your wiki page: This particular client has 25,208 entries
in their LDAP database. My test database has over 3 million entries
(production has nearly 7 million), which take about 20 minutes to slapadd
into MDB initially. My machine has 128 GB RAM and the MDB db size
is 429496729600, so slapd can't map the whole db file into RAM.


How big is your DB when loaded vs maxsize?  If you are using writemap, the
maxsize *must* be larger than the DB size.  Also, what OpenLDAP version are
you using?  That's always important to note.


The maxsize should always be larger than the DB, regardless of writemap. That 
is, if you expect the DB to grow over time.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: How to improve performance with MDB backend?

2013-04-19 Thread Howard Chu

Quanah Gibson-Mount wrote:

--On Thursday, April 18, 2013 8:18 PM + Chris Card ctc...@hotmail.com
wrote:


I'll give that a try. I had perhaps been taking the advice given in the
slapd-mdb man page too seriously:


Heh, yes, it seems so. ;)  With writemap, you definitely do not want to
exceed actual RAM.


Nonsense. Set the size to as large as you need to allow the DB to grow. 
Period. Writemap has no bearing on this.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: slapd terminates with error read_config: no serverID / URL match found. Check slapd -h arguments

2013-04-19 Thread Howard Chu

Joe Phan wrote:

Hi,

When I configure N-Way Multi-Master configuration with cn=config, slapd
terminates with the error: module syncprov.la: null module registered and
read_config: no serverID / URL match found. Check slapd -h arguments.
olcServerID IDs are already added and configured properly, but when slapd
starts with -h option, it always fails.


This means your olcServerID IDs are *not* configured properly. In particular, 
none of the URLs in your IDs match any of the URLs in your -h option.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: How to improve performance with MDB backend?

2013-04-19 Thread Howard Chu

Saša-Stjepan Bakša wrote:

First test with your suggestions.
I am using a Python program written by me to add data to the server.
The server is CentOS 6.2 based (hardware described in my first post).
Python runs on a separate dual-core PC with a 1Gb connection to the servers.
The servers are configured as N-way multimaster.

Test start:    19.4.2013 11:53:45
Test stop:     19.4.2013 14:03:34
Test duration: 7789,00 sec (129,817 min)
Num users:     1000000
Users/sec:     128,39

Database location mounted as:
UUID=616c291a-7fe4-47a1-87d1-c221a8e1c4f8 /opt  ext4  noatime,auto  1 2

vm.dirty_ratio = 90
vm.dirty_expire_centisecs = 6

Scheduler is:
[root@spr1 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]

memory manager libhoard.so (latest from hoard site)


You really need to learn something more about system administration; you 
clearly don't know what to investigate but this is all fundamental sysadmin 
knowledge.


First things first - when something is slow - what exactly is slow? Is it 
using excessive CPU time? Is it waiting for disk I/O? Every sysadmin should 
automatically ask this question first of all, and every sysadmin should know 
how to tell the difference. If you don't know these things then you are not 
qualified to be a sysadmin and need to go get training. This is not the forum 
for teaching you these things.


Copy/pasting someone else's VM tuning settings without understanding what they 
mean or why they are being set is cargo cult sysadmin. It is wrong and 
nobody on this list / in this community should be encouraging it. Quick easy 
spoonfed answers don't actually help understanding, and understanding is the 
only real way forward.


In particular, VM tuning settings are highly OS dependent, and probably kernel 
version dependent too. Good settings depend on exactly what your own system 
contains; settings that work for someone else may be useless or worse on your 
own setup.


Simple answers have narrow relevance that gets obsolete quickly. Learning how 
to think and investigate problems is knowledge that serves you the rest of 
your life.


As a starting point - what does vmstat tell you? Don't just paste its output 
here, learn what it means.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: disabling user account

2013-04-19 Thread Howard Chu

Liam Gretton wrote:

On 16/04/2013 19:49, Jignesh Patel wrote:

Does openldap has a provision like active directory to disable a user?

useraccountcontrol 544


At our site I created a new attribute 'globalLock' for every account and
filter on that at the service end. For example in /etc/ldap.conf for PAM:

pam_filter  (globalLock=off)

Enabled users get globalLock set to 'off'. Any other value will lock the
user out.

It's simple enough to use in Apache and other applications too.


Better to do this in a slapd ACL and enforce from the server side, than to 
rely on correctness of multiple clients.


access to attrs=userpassword filter=(globalLock=off)
by anonymous auth


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: How to improve performance with MDB backend?

2013-04-19 Thread Howard Chu

Saša-Stjepan Bakša wrote:

How do you measure or compare speed?

I have a Python program which adds a predefined number of users (1mil, where each
user consists of 8 DNs and 3 alias DNs), and for the modify test I also have a Python
program which modifies 1 attribute with a random value under one DN, changed
sequentially from 1 to 1mil.

Now I can add those 1 million users in 5508 sec, which amounts to 181,55
users per second.

Modify program can do 2141,33 modifications per sec.

Do you find those numbers reasonable or are they nonsense?

Is there a way to get 1 modifications per second, or am I asking a stupid
question?



Sasa


On Mon, Apr 8, 2013 at 4:40 PM, Quanah Gibson-Mount qua...@zimbra.com
mailto:qua...@zimbra.com wrote:

--On Monday, April 08, 2013 4:49 AM -0700 Howard Chu h...@symas.com
mailto:h...@symas.com wrote:

There's nothing particular to LMDB to tune. But if you're seeing pauses
due to disk I/O, as your iotop output seems to indicate, you might want
to look into using a different I/O scheduler.


Not quite true.  On Linux, I suggest setting the olcDbFlags as follows:

olcDbEnvFlags: writemap
olcDbEnvFlags: nometasync

With those flags set, writes are 65x faster for me with back-mdb than they
are with back-hdb/back-bdb.


Since he already had DbNosync set, nometasync is irrelevant. One thing to note 
is that ext3 and ext4 filesystems have their own writeback parameters, which 
are independent of the settings in /proc/sys/vm, so settings that work with 
other FSs like jfs or xfs will have no effect.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: slapd-meta as a proxy for a monolithic namespace

2013-04-24 Thread Howard Chu
Apr 23 08:44:13  slapd[26200]: conn=1015 op=0 meta_back_search[0] match= 
err=32 (No such object).
Apr 23 08:44:13  slapd[26200]: conn=1015 op=0 meta_back_search[1] match= 
err=32 (No such object).
Apr 23 08:44:13  slapd[26200]: send_ldap_result: conn=1015 op=0 p=3
Apr 23 08:44:13  slapd[26200]: send_ldap_result: err=32 matched=ou=rsp1,c=de,o=mno 
text=
Apr 23 08:44:13  slapd[26200]: conn=1015 op=0 meta_back_bind: no target for dn 
uid=admin,ou=rsp1,c=de,o=mno (32).






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: hdb and mdb dereferencing aliases differently

2013-04-24 Thread Howard Chu

juergen.spren...@swisscom.com wrote:

Hi Michael,

NSS results must not be dependent on the backend database a directory service 
uses.

I activated connection logging and here's the proof that NSS is not the culprit.

Searches initiated by NSS are identical and exactly this behavior can also be 
seen  when using ldapsearch from command line with parameters from the log:

# running 'getent passwd' with hdb backend:
Apr 24 09:53:54 openldap-dev slapd[19240]: conn=1000 op=0 BIND 
dn=cn=itsAgent,ou=customerAgent,dc=scom method=128
Apr 24 09:53:54 openldap-dev slapd[19240]: conn=1000 op=0 BIND 
dn=cn=itsAgent,ou=customerAgent,dc=scom mech=SIMPLE ssf=0
Apr 24 09:53:54 openldap-dev slapd[19240]: conn=1000 op=0 RESULT tag=97 err=0 
text=
Apr 24 09:53:54 openldap-dev slapd[19240]: conn=1000 op=1 SRCH 
base=ou=account,dc=its,dc=scom scope=1 deref=3 
filter=(objectClass=posixAccount)
Apr 24 09:53:54 openldap-dev slapd[19240]: conn=1000 op=1 SRCH attr=uid 
userPassword uidNumber gidNumber cn homeDirectory x-LinuxLoginShell gecos 
description objectClass
Apr 24 09:53:54 openldap-dev slapd[19240]: conn=1000 op=1 SEARCH RESULT tag=101 
err=0 nentries=656 text=
Apr 24 09:53:54 openldap-dev slapd[19240]: conn=1000 fd=13 closed (connection 
lost)

# running 'getent passwd' with mdb backend:
Apr 24 10:00:17 openldap-dev slapd[19300]: conn=1002 op=0 BIND 
dn=cn=itsAgent,ou=customerAgent,dc=scom method=128
Apr 24 10:00:17 openldap-dev slapd[19300]: conn=1002 op=0 BIND 
dn=cn=itsAgent,ou=customerAgent,dc=scom mech=SIMPLE ssf=0
Apr 24 10:00:17 openldap-dev slapd[19300]: conn=1002 op=0 RESULT tag=97 err=0 
text=
Apr 24 10:00:17 openldap-dev slapd[19300]: conn=1002 op=1 SRCH 
base=ou=account,dc=its,dc=scom scope=1 deref=3 
filter=(objectClass=posixAccount)
Apr 24 10:00:17 openldap-dev slapd[19300]: conn=1002 op=1 SRCH attr=uid 
userPassword uidNumber gidNumber cn homeDirectory x-LinuxLoginShell gecos 
description objectClass
Apr 24 10:00:17 openldap-dev slapd[19300]: conn=1002 op=1 SEARCH RESULT tag=101 
err=0 text=
Apr 24 10:00:17 openldap-dev slapd[19300]: conn=1002 fd=13 closed (connection 
lost)

I suspect that aliases are not handled the same way in hdb and mdb as I am 
using aliases here and deref=3 in both searches, example:

dn: uid=joe,ou=Account,dc=its,dc=scom
objectClass: alias
objectClass: extensibleObject
uid: joe
aliasedObjectName: uid=joe,ou=Person,dc=its,dc=scom
structuralObjectClass: alias

When using hdb, the alias is dereferenced correctly (nentries=656), when using 
mdb it seems not to be dereferenced at all (nentries=0).
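The same check can be made outside NSS with a plain ldapsearch. A sketch using
the bind DN and base from the logs above; `-a always` corresponds to the
deref=3 seen in the server log, and the host name is a placeholder for this
deployment:

```shell
# Reproduce the NSS search directly; -a always requests alias dereferencing
# for both the search base and matched entries (deref=3 in the slapd log).
ldapsearch -x -H ldap://openldap-dev -W \
  -D "cn=itsAgent,ou=customerAgent,dc=scom" \
  -b "ou=account,dc=its,dc=scom" -s one -a always \
  "(objectClass=posixAccount)" uid uidNumber gidNumber
# Against hdb this returns the dereferenced entries (nentries=656 above);
# against mdb with identical data it returns none, isolating the backend.
```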

Maybe there's a parameter around for mdb which I couldn't find in the docs

to prevent this, but if not I consider this as a bug.

There is no parameter. Seems like you've found a bug, please submit the info 
to the ITS.


Regards

Juergen

mich...@stroeder.com wrote:

It would certainly help if you could examine the issue with pure LDAP search
operations preferably with OpenLDAP's ldapsearch command-line tool.

When looking at NSS results too many things can go wrong with other
components' configuration.

Ciao, Michael.

juergen.spren...@swisscom.com wrote:

Hi,

I have OpenLDAP 2.4.35 running on Gentoo Linux and wanted to make some tests 
with mdb.

Slapd was running fine with hdb, no problems so far.
Then I exported contents via slapcat and switched config to mdb.
When slapd started using mdb no users from directory were shown by 'getent 
passwd':

### hdb part 
# using hdb parameters
database        hdb
dirtyread
cachesize       15
cachefree       100
idlcachesize    45
dncachesize     10

# slapadd from backup and run slapd with hdb backend
/etc/init.d/unscd stop
/etc/init.d/slapd stop
rm /var/lib/openldap-data/*
rm -rf /etc/openldap/slapd.d/*
cp -p /etc/openldap/DB_CONFIG /var/lib/openldap-data/
cp -p /etc/openldap/slapd.conf.hdb /etc/openldap/slapd.conf
su ldap -c '/usr/sbin/slapadd -f /etc/openldap/slapd.conf -l odsldap-dev.ldif'
/etc/init.d/slapd start
/etc/init.d/unscd start
slapcat -f /etc/openldap/slapd.conf -b dc=scom | md5sum
# 73850f9a3f7ff9d3d1ddb7663cd046a6  -

getent passwd
# all users shown, everything ok

### mdb part 
# using mdb parameters
database        mdb
dbnosync
maxsize         2094967296
searchstack     64

# slapadd from backup and run slapd with mdb backend
/etc/init.d/unscd stop
/etc/init.d/slapd stop
rm /var/lib/openldap-data/*
rm -rf /etc/openldap/slapd.d/*
cp -p /etc/openldap/slapd.conf.mdb /etc/openldap/slapd.conf
su ldap -c '/usr/sbin/slapadd -f /etc/openldap/slapd.conf -l odsldap-dev.ldif'
/etc/init.d/slapd start
/etc/init.d/unscd start
slapcat -f /etc/openldap/slapd.conf -b dc=scom | md5sum
# 73850f9a3f7ff9d3d1ddb7663cd046a6  -



getent passwd
# no users from ldap shown

Am I missing something when setting up and using mdb?
Both backends have exactly the same content, and so the results for searches 
should also be identical.

Regards

Jürgen Sprenger







--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/

Re: Debian Squeeze: Slapd subtree disappears, but ldapsearch finds it | unable to allocate memory for mutex; resize mutex region

2013-05-02 Thread Howard Chu

Simone Piccardi wrote:

On 05/02/2013 04:08 PM, Quanah Gibson-Mount wrote:

--On Thursday, May 02, 2013 8:32 AM +0200 Denny Schierz
linuxm...@4lin.net wrote:


but than you have to download, patch and update security fixes by your
self.


Yep. Part of being a competent sys admin anyhow.

Sorry, I disagree.

A competent sysadmin has to make choices about how to spend his time. With
limited resources, the choice you suggest can easily be seen as an incompetent
waste of time.

For example, when you have to manage 70 small servers for 70 schools, applying
security upgrades by recompiling apache, bind, samba, openldap (just to cite
some of the services on them) every time is plain wrong. It's a waste of the
scarce sysadmin time that cannot be afforded.


A competent sysadmin knows how to leverage tools such that 7 servers or 7000 
servers requires the same amount of hands-on time. One element of making this 
feasible is certainly to have the minimum possible variations in deployed 
configurations. But a frozen configuration that you built yourself with known 
components is just as viable for this purpose as one you obtained from a 
distro. And in most cases, due to distro lag times, a config you build 
yourself will be superior.



That's just an example, but there are lots of situations in which the
solution to bad distribution packaging cannot be to recompile it
yourself and reinstall. Better to point to another distribution or to a
good packaging (if they exist). Otherwise every competent sysadmin will
use the packages, even if they are suboptimal.

I'm sorry to hear that the Debian OpenLDAP packages are in such a bad state,
but if, as it seems, there is no distribution getting OpenLDAP right (I
heard complaints about Red Hat as well), then I start thinking that
something is not working well, at least on the user end of OpenLDAP
distribution.

Simone




--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Checking that a change has reached all servers

2013-05-03 Thread Howard Chu

Hallvard Breien Furuseth wrote:

We have clients which must check that an update has reached all LDAP
servers before they start some task.  So we need to publish a list
of all servers.

Where would you put that list, when clients should normally not contact
these servers directly (ldap-prod*.uio.no) but instead contact the load
balancer sitting in front of them (ldap.uio.no)?  'altServer' in the
root DSE anyway, or has someone defined another attribute?

With transactional backend databases, an existing slow LDAP operation
predating the change might return the old value while this quick poll
sees the change.  I'm content to just tell clients to wait a second
after seeing the change though, unless someone has brighter ideas.

Finally, has anyone written a nice little server (LDAP or otherwise)
which does this - client sends a request, server checks all LDAP
servers and either returns true/false or waits & retries while false?

This requirement makes no sense, particularly if the clients can only access 
the directory through the load balancer. What is the ultimate goal, what does 
the client do next?


The most obvious thing to do is to use a postread control on the original 
request, to read the entryCSN of the change. Then use an assert control on the 
following request, assuming it references the same entry.
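A sketch of that approach with the stock command-line tools, assuming a
reasonably recent OpenLDAP client and a server that supports ordering matches
on entryCSN; the host, DNs and CSN value are placeholders:

```shell
# 1) Perform the write with a postread control, so the result carries the
#    entry's post-change entryCSN:
ldapmodify -x -H ldap://ldap.uio.no -D "cn=admin,dc=uio,dc=no" -W \
  -e 'postread=entryCSN' <<'EOF'
dn: uid=someuser,ou=people,dc=uio,dc=no
changetype: modify
replace: description
description: updated
EOF
# The reply includes something like:
#   entryCSN: 20130503093000.123456Z#000000#000#000000

# 2) Gate the follow-up request with an assert control, so it fails
#    (err=122, assertionFailed) until the queried server has applied
#    that change:
ldapsearch -x -H ldap://ldap.uio.no -s base \
  -b "uid=someuser,ou=people,dc=uio,dc=no" \
  -e 'assert=(entryCSN>=20130503093000.123456Z#000000#000#000000)' 1.1
```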


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: cleaning HDB after an unclean shutdown

2013-05-06 Thread Howard Chu

Benin Technologies wrote:

Hi,

I'm doing some tests on a perl backend, which causes sometimes my
OpenLDAP to hang. I then kill the process, but when I try to restart
openldap it won't, because of my HDB backend. I get the following message :

db_db_open: database dc=mycompany: database already in use.

After rebooting the server, everything works fine.

Any way to clean the HDB backend manually, without having to reboot
the server ?


Sounds like you're running on Windows, which takes a long time after a process 
dies for it to release its file locks. I don't know of any solution other than 
to wait for Windows to eventually notice the process is gone.


The smarter solution of course is to quit using such a braindead OS as a 
server platform.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: cleaning HDB after an unclean shutdown

2013-05-06 Thread Howard Chu

Benin Technologies wrote:

nope, Debian 6.0.4


The only reason for slapd to say the database is already in use is because a 
file lock still exists. In this case it implies that the original slapd 
process is still there. You said you already killed it but it sounds like the 
process hasn't gone away.


Le 06/05/2013 20:25, Howard Chu a écrit :

Benin Technologies wrote:

Hi,

I'm doing some tests on a perl backend, which causes sometimes my
OpenLDAP to hang. I then kill the process, but when I try to restart
openldap it won't, because of my HDB backend. I get the following
message :

db_db_open: database dc=mycompany: database already in use.

After rebooting the server, everything works fine.

Any way to clean the HDB backend manually, without having to reboot
the server ?


Sounds like you're running on Windows, which takes a long time after a
process dies for it to release its file locks. I don't know of any
solution other than to wait for Windows to eventually notice the
process is gone.

The smarter solution of course is to quit using such a braindead OS as
a server platform.







--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: SSH Gateway

2013-05-07 Thread Howard Chu

Stuart Watson wrote:

Hi

I am looking at creating an SSH gateway using OpenLDAP.  The idea is to store
our devs' public keys in OpenLDAP, which would give us the ability to control
who has SSH access to our servers.

Currently everyone shares the same key which means it is impossible to control
access.

Do I just need to...

Install OpenLDAP
Import the public keys into OpenLDAP
Install OpenSSH Server on the OpenLDAP server and configure it to use LDAP.
Configure the remote servers to use the OpenLDAP servers to authenticate

Then the devs can ssh from their computers through the OpenLDAP server to the
remote servers.

Can anyone help?


Sounds more like a question for the OpenSSH mailing lists. The last I knew, 
they refused to integrate patches providing LDAP key lookup support.
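For what it's worth, recent OpenSSH (6.2 and later) no longer needs such
patches: sshd can ask an external command for a user's keys. A hypothetical
sketch, assuming the commonly used openssh-lpk schema's sshPublicKey
attribute; host names and DNs are placeholders:

```shell
# /etc/ssh/sshd_config on each server:
#   AuthorizedKeysCommand /usr/local/bin/ssh-ldap-keys
#   AuthorizedKeysCommandUser nobody

# /usr/local/bin/ssh-ldap-keys (sshd passes the username as $1;
# -o ldif-wrap=no needs a reasonably recent 2.4 client, otherwise long
# key values get folded across lines and must be unwrapped first):
#!/bin/sh
exec ldapsearch -x -H ldap://ldap.example.com -LLL -o ldif-wrap=no \
  -b "ou=People,dc=example,dc=com" \
  "(&(objectClass=posixAccount)(uid=$1))" sshPublicKey \
  | sed -n 's/^sshPublicKey: //p'
```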


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Unable to login applications using LDAP alias

2013-05-07 Thread Howard Chu

Geo P.C. wrote:

We have several applications and we are able to integrate LDAP
successfully.


In the application we have given the base dn as ou=People,dc=geo,dc=com, and
the user dn uid=geo_pc,ou=People,dc=geo,dc=com is able to log in to the
application successfully.


Now we created an alias as follows: 

dn: uid=geo_pc,ou=Applications,ou=Groups,dc=geo,dc=com
aliasedobjectname: uid=geo_pc,ou=People,dc=geo,dc=com
objectclass: alias
objectclass: extensibleObject
objectclass: top
uid: geo_pc

Now in the application we have given the base dn as
ou=Applications,ou=Groups,dc=geo,dc=com, but with this user
uid=geo_pc,ou=Applications,ou=Groups,dc=geo,dc=com we are unable to log in to
the application.

Correct. Aliases are only processed in search operations. What you're trying 
to do will not work.


Please let us know if there is any additional configuration we need to do. Can 
anyone please help us with it.


Thanks
Geo



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: getting bindDN in perl script

2013-05-15 Thread Howard Chu

Benin Technologies wrote:

Hi,

I needed to access from an LDAP client (Outlook or Thunderbird) some
data stored in several locations (an OpenLDAP server with back-hdb, and
a PostgreSQL database).

I wrote a perl script used with back-perl, and everything works fine.
The client queries that back-perl server, which in turn retrieves data
both from the back-hdb server and the PostgreSQL server, does some
formatting on it, and returns it to the client.

It works fine, except that I have to use a standard bindDN/password from
the perl script to access the back-hdb server, because I don't know how
to retrieve in that perl script the initial bindDN/password (the
credentials provided initially by the client).

I guess there is a way to do it, because I found some links like
http://osdir.com/ml/network.openldap.general/2002-09/msg00021.html where
people seem to have been able to get the bindDN and password provided by
the client, but they didn't say how and I couldn't figure it out.

Does anybody know if it's possible to get, within the perl script, the
bindDN/password provided by the client ?


The DN is the same as for all the other operations - it's the first parameter. 
The password is the 2nd parameter. How else would you expect it to be passed?


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: olcAccess replication - error 80 attributes not within database namespace

2013-05-17 Thread Howard Chu

Igor Zinovik wrote:

   Hello.

I'm trying to replicate access rules and limits for one of my databases, but
with no success:
suse:~ # cat olcAccess-syncrepl.ldif
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: {1}rid=002
   provider=ldap://ldap1.local
   bindmethod=simple
   binddn=cn=admin,cn=config
   credentials=TopSecret
   searchbase=olcDatabase={1}mdb,cn=config
   attrs=olcAccess,olcLimits
   timeout=3
   network-timeout=0
   starttls=yes
   tls_cert=/etc/openldap/ldap.pem
   tls_key=/etc/openldap/ldap.key
   tls_cacert=/etc/ssl/local-ca.pem
   tls_reqcert=demand
   tls_crlcheck=none


suse:~ # ldapmodify -H ldap://ldap2.local -ZZxWD cn=admin,cn=config -f
olcAccess-syncrepl.ldif
Enter LDAP Password:
modifying entry olcDatabase={1}mdb,cn=config
ldap_modify: Other (e.g., implementation specific) error (80)
 additional info: Base DN olcAccess,olcLimits is not within the
database naming context


 slapd-2.4.33 if it matters.

The error message is a bit garbled (obviously the Base DN is wrong) but the 
error is basically correct. You're trying to replicate the wrong thing from 
the wrong place. Setting a syncrepl consumer on the olcDatabase={1}mdb 
database lets you replicate the *content* of that database. To replicate the 
*configuration* of that database your consumer must be set where that 
configuration is stored.


The configuration is stored in olcDatabase={0}config.
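A hedged sketch of the corrected consumer, reusing the provider settings from
the original post but attached to the config database where the olcAccess and
olcLimits values actually live; verify the remaining olcSyncrepl parameters
against slapd-config(5) before relying on this:

```shell
ldapmodify -H ldap://ldap2.local -ZZxWD "cn=admin,cn=config" <<'EOF'
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: {0}rid=002
  provider=ldap://ldap1.local
  bindmethod=simple
  binddn="cn=admin,cn=config"
  credentials=TopSecret
  searchbase="olcDatabase={1}mdb,cn=config"
  attrs=olcAccess,olcLimits
EOF
```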

--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: translucent overlay - bogus local entries

2013-05-19 Thread Howard Chu

Steve Eckmann wrote:

We noticed that adding a local entry for which there is no corresponding
remote entry doesn’t cause an error to be reported, but the bogus local entry
cannot then be found or deleted, as far as I can tell. I realize it was a
mistake to add such an entry, but is it possible to configure the translucent
overlay to prevent the client from making this mistake, or is it up to the
client to ensure a remote entry exists before adding a local entry? And is
there some way to find and delete such bogus local entries, either via LDAP
commands or by directly querying and managing the local mdb instance?


Adds only work when performed by the rootDN. Likewise for Deletes. If your 
clients are using the rootDN for routine operation, you're doing something wrong.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: Deadlock problem on objectClass.bdb

2013-05-22 Thread Howard Chu
cloudIdeAliases.bdb   page   2806
8573 WRITE 1 HELDcloudIdeAliases.bdb   page   2806

8573 READ  1 HELDcloudIdeAliases.bdb   page   4737
8573 WRITE 1 HELDcloudIdeAliases.bdb   page   4737

8573 READ  1 HELDou.bdbpage271
8573 WRITE 1 HELDou.bdbpage271

8572 READ  1 HELDdn2id.bdb page  10752
8573 WRITE 1 HELDdn2id.bdb page  10752

e READ  1 HELDid2entry.bdb  handle0

845f WRITE 1 HELDcloudIdeAliases.bdb   page   5337
8573 READ  1 WAITcloudIdeAliases.bdb   page   5337

8573 READ  1 HELDcloudIdeAliases.bdb   page   1438
8573 WRITE 1 HELDcloudIdeAliases.bdb   page   1438

f READ  1 HELDdn2id.bdb handle0

8573 READ  1 HELDdn2id.bdb page  2
8573 WRITE 1 HELDdn2id.bdb page  2

8572 READ  1 HELD0x23f140 len:   9 data: 02

8573 READ  1 HELDdn2id.bdb page  10234
8573 WRITE 1 HELDdn2id.bdb page  10234

8573 WRITE 1 HELDdn2id.bdb page  10225

8573 READ  1 HELDcloudIdeAliases.bdb   page123
8573 WRITE 1 HELDcloudIdeAliases.bdb   page123

   12 READ  1 HELDcloudIdeAliases.bdb   handle0

8573 READ  1 HELDcloudIdeAliases.bdb   page 16
8573 WRITE 1 HELDcloudIdeAliases.bdb   page 16

8573 READ  1 HELDcloudIdeAliases.bdb   page200
8573 WRITE 1 HELDcloudIdeAliases.bdb   page200

8573 READ  1 HELDcloudIdeAliases.bdb   page   6375
8573 WRITE 1 HELDcloudIdeAliases.bdb   page   6375

8573 READ  3 HELDobjectClass.bdb   page  3
8573 WRITE 7 HELDobjectClass.bdb   page  3
81b6 READ  1 WAITobjectClass.bdb   page  3

8573 READ  1 HELDobjectClass.bdb   page  2
8573 WRITE 3 HELDobjectClass.bdb   page  2

   11 READ  1 HELDobjectClass.bdb   handle0

8573 READ  1 HELDcloudIdeAliases.bdb   page   2308
8573 WRITE 1 HELDcloudIdeAliases.bdb   page   2308

8573 READ  1 HELDcloudIdeAliases.bdb   page286
8573 WRITE 1 HELDcloudIdeAliases.bdb   page286



--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: MaxDBs

2013-05-22 Thread Howard Chu

Ben Johnson wrote:

Is there an upper limit to mdb_env_set_maxdbs()? And what's the overhead for
adding additional DBs? Can I change this number once it's set if I close and
reopen the env?


The upper limit is the upper limit of an unsigned int.
The overhead is about 96 bytes per DB on a 64 bit machine. Yes, you can change 
the number if you close and reopen the env, it's not persisted on disk. It's 
just sizing an array in your MDB_env.



Ben Johnson
b...@skylandlabs.com mailto:b...@skylandlabs.com






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: [lmdb]

2013-05-23 Thread Howard Chu

Ben Johnson wrote:

I posted this to openldap-bugs but I didn't see it actually posted to the
archive so I'll try in openldap-technical. I have LMDB integrated and it's
working smoothly except for this issue.

 Original Message 

I'm an author of an open source database called Sky (http://skydb.io/) and I'm
interested in porting the backend off LevelDB and move it over to LMDB. I
pulled down the LMDB code using the instructions on Ferenc Szalai's gomdb
(https://github.com/szferi/gomdb) README:

git clone -b mdb.master --single-branch git://git.openldap.org/openldap.git

When I ran make test against the code it seems to run through most of the
tests but I get a Resource Busy error at one point. Here's the full output:

https://gist.github.com/benbjohnson/5628725

I'm running Mac OS X 10.8.3 and here's the output of my gcc -v:


MacOSX and FreeBSD are somewhat dicey. If you can run the failing command 
(mdb_stat testdb) by hand, successfully, it's probably fine.
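For reference, the manual check would look something like this in the
mdb.master checkout from the original post (paths may differ on other
branches):

```shell
cd libraries/liblmdb
make               # builds liblmdb plus the mtest and mdb_stat tools
./mtest            # creates and populates ./testdb
./mdb_stat testdb  # clean page/entry statistics here means the build is
                   # usable despite the FIXEDMAP-related test failure
```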


$ gcc -v
Using built-in specs.
Target: i686-apple-darwin11
Configured with:
/private/var/tmp/llvmgcc42/llvmgcc42-2336.11~182/src/configure
--disable-checking --enable-werror
--prefix=/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2
--mandir=/share/man --enable-languages=c,objc,c++,obj-c++
--program-prefix=llvm- --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/
--with-slibdir=/usr/lib --build=i686-apple-darwin11
--enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2336.11~182/dst-llvmCore/Developer/usr/local
--program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11
--target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1
Thread model: posix
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)

Let me know if you need any other info from me.


Ben Johnson
b...@skylandlabs.com mailto:b...@skylandlabs.com



Ben Johnson
b...@skylandlabs.com mailto:b...@skylandlabs.com






--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/



Re: [lmdb]

2013-05-23 Thread Howard Chu

Ben Johnson wrote:

Running mdb_stat testdb gives me the same Resource Busy error.

It looks like mdb.c:2941 is checking if the return address from mmap() is the
same as the hint passed in. Is there a problem with just using the mmap()
return address? It looks like the mmap() is successful but just allocating to
a different place.


That's treated as a failure since mtest uses the FIXEDMAP flag. In most 
applications you won't be using FIXEDMAP so you can ignore this error.



Ben Johnson
b...@skylandlabs.com mailto:b...@skylandlabs.com



On May 23, 2013, at 4:55 PM, Howard Chu h...@symas.com mailto:h...@symas.com
wrote:







--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/


