Re: Dynamic Indexing through cn=config

2007-06-26 Thread Eric Irrgang

What version of OL are you running?

It sounds like the new index was not built properly.  How did you add the 
indexing?  You should either


a) add the index while slapd is running by doing an ldapmodify to the 
database entry in cn=config (see the example below)


or

b) stop the slapd server, edit the config, run slapindex, and then restart 
the server


Any other approach will yield strange results.
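
For option (a), the modification would look roughly like the sketch below 
(just an illustration: the {1}bdb database numbering, the bind DN, and the 
sn attribute are assumptions based on the setup described in the quoted 
message):

ldapmodify -x -D cn=admin,cn=config -W <<EOF
dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: sn eq
EOF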

On Thu, 21 Jun 2007, Arunachalam Parthasarathy wrote:


Hello,



I used bdb as a backend

Started slapd with:  ./slapd -h ldap://ip:port -F
/etc/openldap/slapd.d/ -f /etc/openldap/slapd.conf

Step 1: I added indexing on an attribute (sn) in the cn=config sub-tree,

Step 2: If I search through the tree (cn=config), the added entry is not
reflected

To check, I added entries; they were not indexed on the added attribute (sn.bdb
is not generated in the bdb directory)

Step 3: When I try to add the same entry as in Step 1 one more time, it says
the entry already exists

Step 4: Now when I search the cn=config tree, I am able to see the sn index
entry in olcDbIndex

Now I added entries and they were indexed on the added attribute (sn.bdb
is generated in the bdb directory)



Please explain why this is happening



Thanks a lot in advance,

Arunachalam.

















--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


server startup overhead

2007-05-25 Thread Eric Irrgang
I continue to have trouble with getting a freshly started server to be 
responsive.  One problem in particular is one that I thought had been 
resolved some time ago but is apparently biting me right now...


With the hdb backend (at least in OL 2.3.34 and OL 2.3.35) if you perform 
a search with a search base deeper than the root suffix, the search takes 
a very long time to complete if the cache hasn't been established.  In my 
case the difference is less than a second versus several hours.  I'm not 
sure yet which bit of cache needs to be primed.  I can switch back and 
forth searching with the same filter in the root and then a child search 
base with the same results.


Is this a bug recurrence or something that I just hadn't been noticing?

What would be the best search to perform to prepare whatever cache is 
getting hit to make searches outside of the root DN faster?


--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: server startup overhead

2007-05-25 Thread Eric Irrgang
Well, once an (objectclass=*) search finishes the ou=people,dc=basedn 
searches run fast again.  Unfortunately it takes over half an hour to run 
the first time and I have to make sure that during that time no one has 
access to cause extra threads to start searching.


On Fri, 25 May 2007, Eric Irrgang wrote:

I continue to have trouble with getting a freshly started server to be 
responsive.  One problem in particular is one that I thought had been resolved 
some time ago but is apparently biting me right now...


With the hdb backend (at least in OL 2.3.34 and OL 2.3.35) if you perform a 
search with a search base deeper than the root suffix, the search takes a very 
long time to complete if the cache hasn't been established.  In my case the 
difference is less than a second versus several hours.  I'm not sure yet which 
bit of cache needs to be primed.  I can switch back and forth searching with 
the same filter in the root and then a child search base with the same 
results.


Is this a bug recurrence or something that I just hadn't been noticing?

What would be the best search to perform to prepare whatever cache is getting 
hit to make searches outside of the root DN faster?





--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: server startup overhead

2007-05-25 Thread Eric Irrgang
Searching for objectclass=* and only asking for the entryDN attribute is 
almost an order of magnitude faster than searching for everything.  I like 
the idea of searching for a non-existent value of an unindexed attribute. 
It is 40% faster than the objectclass=* search (down to 16 minutes) but 
does not fix the searches made outside of the directory's root search 
base.  At least not quite.  The test search I tried took four minutes 
(about an order of magnitude better than with no priming search) and the 
following test searches were under a second.  Any thoughts on what extra 
magic might be coming out of the objectclass=* search into the caches?



My DB cache is just barely sufficient, but it seems to be large enough. 
Examining the db_stat output, iostat, and vmstat, all indications are that 
right about the time my DB cache approaches full, the DB cache hit rate 
rapidly approaches 100% and I'm neither paging in nor out.  I can't really 
go any bigger.


cachesize 10
idlcachesize 30
dbconfig set_cachesize 10 0 1
shm_key 7
dbconfig set_shm_key 7

A freshly restarted server quickly ends up with 11 or 12 Gigs resident 
with a VM size of 13+ gigs and within a week or so is up to 16 Gig VM size 
and 13 or 14 Gigs resident.  On a 16Gig machine, an inch beyond that and 
the OS starts to run out of room and I risk swap madness.


I suppose I ought to track down where the CPU cycles are going, whether it 
is ACL processing or just the overhead of getting all of the attributes 
from DB and building LDIF.


On Fri, 25 May 2007, Howard Chu wrote:


Eric Irrgang wrote:
I continue to have trouble with getting a freshly started server to be 
responsive.  One problem in particular is one that I thought had been 
resolved some time ago but is apparently biting me right now...


With the hdb backend (at least in OL 2.3.34 and OL 2.3.35) if you perform a 
search with a search base deeper than the root suffix, the search takes a 
very long time to complete if the cache hasn't been established.  In my case 
the difference is less than a second versus several hours.  I'm not sure yet 
which bit of cache needs to be primed.  I can switch back and forth 
searching with the same filter in the root and then a child search base with 
the same results.


If it takes several hours, then most likely your BDB cache is too small.

As for which cache, it's either the DN cache (aka EntryInfo in the code) or 
the IDL cache. (Currently the DN cache size is not configurable, will probably 
add a keyword for that in 2.4.)



Is this a bug recurrence or something that I just hadn't been noticing?

What would be the best search to perform to prepare whatever cache is 
getting hit to make searches outside of the root DN faster?


Priming the caches will only help if you actually have sufficient RAM 
available. If the DB is too large, then there's not much you can do about it. 
If you have sufficient RAM, then doing a subtree search from the root, on an 
unindexed attribute, looking for a value that doesn't exist, will hit every 
entry in the DB and fully prime the DN cache (and the DN-related info in the 
IDL cache). It will cycle the full contents of the dn2id and id2entry DBs 
through the BDB cache as well.
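
For the archives, a priming search along those lines might look roughly like 
this (suffix, bind DN, and the choice of 'description' as an unindexed 
attribute are placeholders):

ldapsearch -x -D cn=manager,dc=example,dc=com -W \
        -b dc=example,dc=com -s sub \
        '(description=zzz-no-such-value)' 1.1

The trailing 1.1 requests no attributes, so every entry is visited without 
any attribute data being returned.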





--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: server startup overhead

2007-05-25 Thread Eric Irrgang
Is there a way (with or without attaching a debugger) to find out what my 
IDL cache and DN cache are doing?


--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


RE: Server Certificate Chain

2007-04-19 Thread Eric Irrgang
The server needs to be able to generate the full certificate chain during 
the SSL conversation such that the final cert is signed by something in 
the ca certificate store in use by the client.  This means that in 
addition to the intermediate CA that is the issuer of your server cert, 
your slapd needs to have the other CAs in the chain as well.  Sticking the 
intermediate certs at the end of the cacert-bundle file should work.
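
For example (paths illustrative), appending the intermediate CA certificate 
to the existing bundle is usually just:

cat intermediate-ca.pem >> /etc/ldap/cacert-bundle.pem

followed by restarting slapd so the updated TLSCACertificateFile is re-read.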


You can confirm that your ca cert bundle is adequate by doing
openssl verify -CAfile /etc/ldap/cacert-bundle.pem /etc/ldap/servercrt.pem

If that doesn't succeed in verifying servercrt.pem then cacert-bundle.pem 
doesn't have the right stuff in it.  If cacert-bundle.pem is good, then 
openssl s_client -verify 2 -connect hostname:636
should show you the trust chain one element at a time with the 
(s)ubject and (i)ssuer at each step.  If you have more than one 
intermediate CA then you would specify a number higher than '2'.  The 
final cert in the chain should be the real root CA and be self-signed as 
indicated by the subject and issuer being the same.  If that cert is in 
the client CA cert bundle then you should be good to go.  If it isn't, 
then either your clients need to be upgraded or your CA is lousy.



On Thu, 19 Apr 2007, Krasimir Ganchev wrote:


Howard,

I have read that and I have set a bundle of my Root/Child CA included with
the TLSCACertificateFile directive.

My TLS configuration is as follows:

TLSCertificateFile /etc/ldap/servercrt.pem
TLSCertificateKeyFile /etc/ldap/serverkey.pem
TLSCACertificateFile /etc/ldap/cacert-bundle.pem
TLSCipherSuite HIGH:MEDIUM:+SSLV3
TLSVerifyClient never

Anyway, if I do not include the Child CA certificate in the appropriate
stores on the client side, the server certificate cannot be verified.

I have tried to get some more info with openssl (openssl s_client -connect
hostname:636) and it returns that there are no client certificate CA names
sent.

Any suggestions?

~Cheers~

-Original Message-
From: Howard Chu [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 18, 2007 11:38 PM
To: Krasimir Ganchev
Cc: openldap-software@openldap.org
Subject: Re: Server Certificate Chain

Read the Admin Guide, section 12.2.1.1.

Krasimir Ganchev wrote:

Hello guys,



I am using a globally recognized certificate with my openldap server
which is issued by a Child CA trusted by the Root CA of my certificate
provider. Is there any possible way to include the Child CA certificate
within the server certificate chain?



The thing is that I have a couple of Windows-based clients using my
openldap server and I can't make them verify the server certificate. The
Root CA is included in the trusted Root CAs Windows store, but since the
Child CA isn't there and doesn't appear in the certificate chain, the
clients cannot verify the server certificate and give up with an
error unless they are configured to ignore errors.



That's the reason why I would like to include the Child CA /Signing CA/
certificate within the server certificate chain, which will allow those
clients to confirm the server's certificate and its signing CA certificate
against the trusted Root CA.



Is there any possible way to achieve that and is it up to configuration?



--
  -- Howard Chu
  Chief Architect, Symas Corp.  http://www.symas.com
  Director, Highland Sun        http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP http://www.openldap.org/project/



--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


startup costs was Re: filter preprocessing for performance improvement

2007-04-04 Thread Eric Irrgang
I have a big problem whenever I have to restart a server.  Obviously, the 
first search for objectclass=* is going to enumerate the whole directory. 
The problem is that even an anonymous user can cause the server to execute 
this search on the backend even though the ACLs and limits will keep them 
from getting any results.  All it takes is a few poorly configured client 
applications to do some sort of poll and I have connections hanging for 
half an hour until the first objectclass=* search finishes.  I run out of 
threads and every one of them is constantly trying to get CPU time.


What I currently do is to keep a machine from being accessible by taking 
it out of the load-balancer's rotation for the half hour or so that it 
takes for me to do a search for objectclass=*, but I figure there has got 
to be another way.  I have both eq and pres indexes on objectclass.  It's 
just that I have a very big directory.


I'm not trying to speed up the objectclass=* search.  I'm trying to figure 
out how to keep it from impacting the server's responsiveness when it is 
being performed under circumstances where no entries will be returned, 
such as when sizelimits or ACLs (which are evaluated at the frontend after 
the backend has performed the operation, right?) will block things.  Any 
suggestions?


One thing I just thought of would be to have a single entry that would 
always be accessible to any searcher and then set 'limits anonymous 
size=1'.  Would that cause the backend operation to be canceled once the 
first entry were returned?  That might save me something.
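
For context, the directive being floated there is a one-liner in slapd.conf, 
something like:

limits anonymous size=1

with the single always-readable entry exposed to anonymous via the ACLs.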


On Fri, 2 Mar 2007, Howard Chu wrote:

(objectclass=*) is already a no-op in back-bdb/back-hdb. I made that 
optimization back in November 2001.


So yes, it's a nice idea, been there, done that.

I have a problem in that the first time someone performs a search for 
'objectclass=*' after slapd is restarted, the server is really bogged down 
for a while.  Once the search has completed once, this is not a problem. I 
assume that's due to the IDL cache.  However, I currently have to keep the 
server unavailable after restarting slapd for upwards of half an hour while 
I do an 'objectclass=*' search the first time.


Changing the behavior of (objectclass=*) isn't going to make any difference. 
If you specify a filter that is too broad, then slapd has to trawl through 
every entry in the DB because every entry satisfies the condition.


--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


cn=include

2007-01-23 Thread Eric Irrgang
Are olcInclude attributes in cn=config honored as per the Admin Guide 
section 5.2.2 or is that documentation misleading?


--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


olcTLSCipherSuite?

2007-01-23 Thread Eric Irrgang

Does the TLSCipherSuite directive translate to anything in cn=config?

I'm not seeing anything when I convert my slapd.conf file, and the server 
doesn't seem to be interpreting the directive TLSCipherSuite HIGH:MEDIUM 
the way I would expect, that is, to allow only high- and medium-strength 
connections while trying to negotiate high-strength first.
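
(For reference, if the directive does have a cn=config counterpart it should 
be the olcTLSCipherSuite attribute on the cn=config entry, set with an 
ldapmodify roughly like the LDIF below; whether this release actually honors 
it is exactly the open question here.)

dn: cn=config
changetype: modify
replace: olcTLSCipherSuite
olcTLSCipherSuite: HIGH:MEDIUM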


--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


tool threads

2006-08-17 Thread Eric Irrgang
In OpenLDAP 2.3.24 and 2.3.25 the tool-threads parameter has the desired effect 
when running slapadd or slapindex on an hdb only when using the '-q' flag. 
Without the '-q' flag, I only get one thread running.  Is that the intended 
behavior or is this a bug?
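
(For reference, the setup in question is roughly the following sketch; the 
paths, suffix, and thread count are illustrative assumptions.)

# slapd.conf
tool-threads 4

# reindex in quick mode (-q), which per the report above is the only case
# where the extra tool threads actually get used
/usr/local/openldap/sbin/slapindex -q -f /usr/local/openldap/etc/openldap/slapd.conf -b dc=example,dc=com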


This is with 64-bit builds on Sparc Solaris on UltraSparc 3 and UltraT1 
and BDB 4.4.20 with the four current patches.


--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: slow cn=config changes; Re: Correct procedure to backup LDAP?

2006-08-15 Thread Eric Irrgang

On Mon, 14 Aug 2006, Howard Chu wrote:


Howard Chu wrote:
This sounds pretty normal. When you issue a modify on cn=config, it suspends 
the thread pool. That means that no new operations can start, and cn=config 
waits for any currently running operations to finish, then it does the 
modification you requested. The busier the server, the longer you have to 
wait before that modification can execute.


Ah, this explains a lot.  I suppose it could be a pretty long wait if 
someone were doing a particularly onerous search.  Maybe the real answer 
is to get the server out of the load-balancer rotation first, though that 
kind of shoots down the idea of being able to quiesce the service 
completely locally without interruptions.


In response to Aaron, yes, I'm trying to find the best way to get a good 
point-in-time snapshot because that is the easiest thing to convey to the 
various automated sources of updates.  My preference would be to take 
backups from a slave instance, but I just can't finagle the resources to 
do it, even a trimmed no-indexes case.  I like the accesslog ideas, 
though.  That could be handy.


I should note that for this specific example, you can also set readonly mode 
using cn=monitor, which has no particular impact on the thread pool, so it 
should take effect immediately.


Okay, that I don't understand.  Are you still talking about putting the 
hdb database in readonly mode?  You can do that from cn=monitor? 
Apparently I'm not up on my cn=monitor lore... that sounds like the magic 
bullet.  Can you give an example?
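
(For comparison only: the familiar cn=config route to read-only, roughly the 
LDIF below with an assumed {1}hdb numbering, is exactly the kind of 
modification that has to wait on the thread pool as described above, which 
is what makes the cn=monitor suggestion interesting.)

dn: olcDatabase={1}hdb,cn=config
changetype: modify
replace: olcReadOnly
olcReadOnly: TRUE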


--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: requesting clarification of slapd.conf-versus-slapd.d configuration

2006-04-27 Thread Eric Irrgang
On Thu, 20 Apr 2006, Howard Chu wrote:

Regardless of any existing config directory, when both flags are
specified, the slapd.conf file is read and written out in config
directory format. If there were any other conditions on the behavior, it
would say so. Since there is not, it does not.

With OpenLDAP 2.3.21, if I change the 'dbconfig shm_key #' in slapd.conf
and then restart slapd with both '-f' and '-F' the
slapd.d/cn=config/olcDatabase{1}bdb.ldif file does not get updated and a
new DB_CONFIG file is not generated.  It seems like when a valid config
directory exists the '-f' option isn't being considered.  I can't seem to
find any change I can make to the slapd.conf file or the slapd.d files
that make a difference as to whether slapd.d gets generated again.  Are
other people having different experiences or should I do some more digging
and open an ITS?

Actually, I seem to be having a variety of problems.  I imagine I'll have
to poke around with this a little while to see what all is going on...

 'ldapsearch -b cn=config'?  I figure there are fundamental problems with
 the notion of a 'slapadd -l config.ldif' but functionality to convert ldif
 to slapd.d the way slapd.conf is converted to slapd.d by specifying both
 -f and -F should be reasonably simple.  Maybe some code to handle a
 '-L config.ldif'?  Is this already done or underway or would such a
 contribution be in line with the current road-map?

Already done. Just do slapadd -n0 -l config.ldif

Maybe I'm missing something.  Not only does that not work for me, but I
can't see how it would without code to notice the absence of a config
directory.  Or maybe it would work fine if there were already a valid
config in place, but it doesn't seem to work for bootstrapping a config.

Wouldn't slapadd -n0 -l config.ldif try to open database 0 specified in
the default config file before ever parsing config.ldif?  What I'm getting
at is that it would be nice to simply say

slapadd -F new/slapd.d -l config.ldif

and construct a config directory from scratch in 'new/slapd.d' to
boot-strap a new directory.  I'm still unclear as to whether slapd.conf is
destined to go away completely, but if it is then being able to boot-strap
in this way would be a lot more convenient than building the config in
back-ldif format.  But slapadd would need to know that the ldif file it is
given is to be parsed before trying to open any databases and to ignore
the default behavior of looking for a default config directory followed by
looking for a default config file when neither '-f' nor '-F' is specified.

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: requesting clarification of slapd.conf-versus-slapd.d configuration

2006-04-27 Thread Eric Irrgang
Okay... Maybe I should be asking instead what you consider best practice
for maintaining a configuration with OL2.3.  At this point it sounds like
every change to slapd.conf warrants removal of both slapd.d and DB_CONFIG.
That's fine, if maintaining configuration in slapd.conf is the way to go,
though it is a little tedious to try to confirm that changes made to
cn=config while running get made in exactly the same way to slapd.conf.
And as you have said explicitly, that is not the intended way to go.

... Which brings me back to my real question of how to best backup and
restore the configuration.  I can't get slapadd -n0 -l config.ldif to
work as a recovery procedure.  Without specifying '-f' or '-F', slapadd
consults the slapd.conf installed with the software and tries to use the
database directories specified in it rather than in the config.ldif.  With
'-F' specified, slapadd simply dies trying to open the slapd.d that isn't
yet populated:

$ /usr/local/openldap/sbin/slapcat -n0 -F /tmp/slapd.d | /usr/local/openldap/sbin/slapadd -n0 -F /tmp/slapd.d.2
= ldif_enum_tree: failed to open /tmp/slapd.d.2/cn=config.ldif: No such file or directory
slapadd: bad configuration directory!
$

If slapd.d is already populated, then slapadd fails with
slapadd: could not add entry dn=cn=config (line=33):

I can't seem to figure out how you've gotten this to work...

My slapd.conf is trivial:
include /usr/local/openldap/etc/openldap/schema/core.schema
include /usr/local/openldap/etc/openldap/schema/cosine.schema
database    bdb
suffix      dc=test
directory   /tmp/ldap
mode        0600
index       objectclass eq

On Thu, 27 Apr 2006, Howard Chu wrote:

Eric Irrgang wrote:
 On Thu, 20 Apr 2006, Howard Chu wrote:


First of all - don't do this. The shm_key should only be set in
slapd.conf, not in DB_CONFIG.

The use of -f and -F together is only for creating a new slapd.d.
Once it's created, the slapd.conf file is ignored. If you want to
convert slapd.conf again, delete the old slapd.d first.

 Already done. Just do slapadd -n0 -l config.ldif


 Maybe I'm missing something.  Not only does that not work for me, but I
 can't see how it would without code to notice the absence of a config
 directory.  Or maybe it would work fine if there were already a valid
 config in place, but it doesn't seem to work for bootstrapping a config.

 Wouldn't slapadd -n0 -l config.ldif try to open database 0 specified in
 the default config file before ever parsing config.ldif?  What I'm getting
 at is that it would be nice to simply say


Database 0 is the config database, and its existence is hardcoded.

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: verification

2006-04-18 Thread Eric Irrgang
I don't know what versions of OpenLDAP are affected, but ITS 4323 snagged
me in a very similar situation for several revisions of OL 2.3.1x.

From the looks of things at
http://www.openldap.org/devel/cvsweb.cgi/servers/slapd/backglue.c?hideattic=1&sortbydate=0
the problem was patched before the release of 2.3.21.

On Fri, 14 Apr 2006, Douglas B. Jones wrote:


If I have the following in slapd.conf:

suffix dc=a,dc=x,dc=y
...
subordinate

suffix dc=b,dc=x,dc=y
...
subordinate

suffix dc=c,dc=x,dc=y
...
subordinate

suffix dc=x,dc=y


If I verify a user uid=userA,dc=a,dc=x,dc=y with the
correct password, then it works fine. If I try to verify
the user uid=userA,dc=x,dc=y with the correct password,
it fails with the error in the log as:

RESULT tag=97 err=53 text=unauthenticated bind
  (DN with no password) disallowed

The above is from a web app. I think that has something
to do with config. of the app. If I use the ldapsearch
command, I get:

BIND dn=uid=userA,dc=x,dc=y method=128
Apr 14 12:05:25 c01 slapd[208513]: conn=455 op=0 RESULT tag=97 err=49 text=

It works fine if I use the following with the ldapsearch -D switch:

uid=userA,dc=a,dc=x,dc=y

which is where userA resides.

I believe I am doing something wrong, but not sure what.
Any ideas? Thanks!


-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: back-ldap with glue overlay

2006-03-23 Thread Eric Irrgang
On Mon, 20 Mar 2006, Aaron Richton wrote:

I had some fun with this a while back. Lots of syntax that you think would
work (and likely will work with better rwm/glue interaction) eventually
runs into one ITS or another, as Howard noted below. I don't remember
getting anywhere useful with back-relay. In the end, the simplest config
was the one that worked:

database hdb
subordinate
suffix ou=local,dc=example,dc=com

database ldap
suffix dc=example,dc=com

That didn't work for me.  With a setup like your example, if I bind as
cn=user,ou=a,dc=example,dc=com it seemed like the search base would get
stuck as ou=a,dc=example,dc=com and I couldn't retrieve
cn=foo,ou=b,dc=example,dc=com (though cn=foo,ou=local... worked fine).

What I ended up doing was this:

database        meta
suffix          dc=example,dc=com
uri             ldaps://example.com/dc=example,dc=com
subtree-exclude ou=groups,dc=example,dc=com
uri             ldap://localhost/ou=groups,dc=example,dc=com
suffixmassage   ou=groups,dc=example,dc=com ou=groups,dc=local

database        ldif
suffix          ou=groups,dc=local
directory       /var/ldap/local


I like the configuration syntax for back-meta, but it seems like there
ought to be a better way to do the loopback connection; using both
back-relay and back-ldap/meta seemed like too much additional complexity.


-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: lastmod and index corruption

2006-02-22 Thread Eric Irrgang
I'll be looking at BDB 4.4 for OL 2.3.x but I couldn't get BDB 4.2 to work
with more than a total of 4 Gigs of db cache.  4.3 changed the limit from
4 GB max to 4 GB per cache segment.  Has a patch been backported to
4.2 to resolve this limitation?

On Tue, 21 Feb 2006, Quanah Gibson-Mount wrote:

How about the obligatory Don't use BDB 4.3 warning that we've been giving
for a while? :)

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: lastmod and index corruption

2006-02-22 Thread Eric Irrgang
All this time I've been using 4.3 because I couldn't get 4.2 to work with
'set_cachesize 8 0 2'!  How embarrassing!

Maybe it didn't work until later versions of 4.2 or maybe I'm completely
crazy, but now I've definitely got no excuse but to dump 4.3.  The
question remains, though, of whether to go with 4.4 or 4.2.

You've been using 4.4 for a couple of months now, right Quanah?  Are there
still any outstanding kinks using it with OL 2.3?  The performance info
you gathered definitely has me looking at OL 2.3 with BDB 4.4 for our next
production deployment.

On Wed, 22 Feb 2006, Quanah Gibson-Mount wrote:

On Wednesday 22 February 2006 11:18, Eric Irrgang wrote:
 I'll be looking at BDB 4.4 for OL 2.3.x but I couldn't get BDB 4.2 to work
 with more than a total of 4 Gigs of db cache.  4.3 changed the limit from
 4 GB max to 4 GB per cache segment.  Has a patch been backported to
 4.2 to resolve this limitation?

I've not had an issue using > 4GB with BDB 4.2.52 under Solaris.  You just
have to use more than one section, like:

set_cachesize 8 0 2

BDB 4.4 has the improvement of allowing you to have very large single
partitions.

In any case, you use BDB 4.3 at your own risk, as long noted.

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


lastmod and index corruption

2006-02-21 Thread Eric Irrgang
I don't see anything directly pertinent in the list archives so I thought
I'd share my experience for posterity.  Running OpenLDAP 2.2.30 on Solaris
9 with Berkeley DB 4.3.29, I would regularly see cases where indexes would
forget an entry upon modification, even if the index in question wasn't
for an attribute being modified.  Interestingly, applying the same
modifications to an identical directory would produce identical
corruption, but other than that it was totally unpredictable.

Anyway, I finally found a correlation with the lastmod configuration
option which I had turned off.  Setting lastmod on seems to have solved
the problem.

I don't plan to speculate about what's going on here and I doubt it's
worth pursuing a patch at this point in the 2.2.x saga, but if you have a
similar setup with irritatingly arbitrary index corruption, you might try
turning lastmod back on.

Other config info:

DB_CONFIG
-
using shared memory for db cache
db cache big enough to hold entire directory
mmapsize big enough to hold entire directory
no extra flags set

slapd.conf
--
gentlehup on
idletimeout 10
sizelimit 50
timelimit 10
allow bind_v2
backend bdb
database bdb
mode 0600
cachesize 8 (big enough for entire directory)
idlcachesize 30
checkpoint  10005
lastmod off (changed to on)
heavy indexing

Incidentally, gentlehup seems to be problematic, too, but I won't go into
that...

Anyway, I hope this helps someone out there who fails to heed the
obligatory upgrade to 2.3.x warning.

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: BDB fragmented

2006-02-09 Thread Eric Irrgang
It looks to me like DB 4.4 has a db-compact method that doesn't seem to
have been available in earlier versions.  I would think that this could
come in handy for collapsing btrees into fewer levels.  (back-bdb uses
btrees, right?)  This ought to reclaim storage and performance, even when
there haven't been any entries removed, shouldn't it?

Has anyone played with this yet or would it be worth experimenting with?

On Wed, 8 Feb 2006, Howard Chu wrote:

Ansar Mohammed wrote:
 Is there anyway to de-fragment the BDB backend?
 I recently cleared out 40k objects and the database is the same size.


Sounds like a question you'll find answers for in the BDB documentation.
Generally there's no point in freeing up the space since it will just
get reused the next time you add data.

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: identity assertion

2006-01-24 Thread Eric Irrgang
On Sat, 21 Jan 2006, Pierangelo Masarati wrote:

authorization and SASL are orthogonal.  Without mucking with SASL, you
can use:

ldapsearch -x -W -D cn=authorizeduser,dc=test \
-e '!authzid=dn:cn=config,dc=test'

this causes the tool to use the proxyAuthz control on that operation
(the '!' is because the control MUST be critical).

Ah!  That's exactly what I've been looking for.

I suppose if I had just checked the ldapsearch command-line help I would
have seen that but I had been relying on the man page.

Thanks so much!

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: reprocessing rejected replica entries

2006-01-24 Thread Eric Irrgang
On Mon, 23 Jan 2006, Angel L. Mateo wrote:

   So now, I have run slurpd in one-shot mode, indicating the .rej file
with the -r option, and the updates were completed.

   My question is: is there any way to reprocess the rejected entries
without needing to run slurpd in one-shot mode?

No, that's what one-shot mode is for.  slurpd is behaving as documented.

   If I have to run slurpd with -o option, do I have to delete the .rej
file? Can I delete it while slurpd is running?

There's no reason to delete the reject file, but if you deleted it while
slurpd had an open filehandle you could lose an error.  Since slurpd will
just pick up where it left off I'd suggest simply stopping slurpd while
you clean out its working directory.


-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


RE: [ldap] Implementation Suggestions

2006-01-20 Thread Eric Irrgang
On Fri, 20 Jan 2006, Spicer, Kevin wrote:

 You have enough memory for 1.25 Gb of bdb cache?  You don't post your

I should hope so, they have 2.5 Gb of ram each.

Good good, but keep an eye out for excessive swapping, just in case.

In my experience with OL on Solaris 9, the moment you have to swap it's
all over.  The memory management in Berkeley DB is much better suited to
paging the directory data than the OS.

However, I've found that performance begins to plummet well before you
actually swap because of the filesystem cache.  Leaving free memory equal
to at least 30% of the size of your database files available to the OS for
fscache seems to let the database's caching work more quickly.

That said, the only thing as bad as swapping for OL performance is not
having a big enough db cache.  When I'm tight on memory, I find that the
sweet spot is to have about 10% bigger db cache than the id2entry+dn2id
files.  More than that gets diminishing returns relative to letting the OS
have some RAM to play with but any less than that and nobody's happy.
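
To put rough numbers on that rule of thumb (purely illustrative): if
dn2id.bdb plus id2entry.bdb come to about 2 GB, aim for around 2.2 GB of
BDB cache, e.g.

dbconfig set_cachesize 2 209715200 1

i.e. 2 GB plus 200 MB in a single cache segment.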

Another thing that helps me out is allowing the database to mmap files.
This cuts down on heavyweight system calls and also helps the db to load
data on the fly a little more easily.  Of course, you need a little more
RAM for that, too, and it isn't worth looking at if you're entirely I/O
constrained.

All of this and using shared memory for the db cache gets me near RAM-disk
performance once the cache is populated and resident.
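
For reference, the pieces that combination relies on (shared memory for the
BDB cache plus memory-mapped files) look something like this in slapd.conf;
the key and size here are placeholders:

shm_key 42
dbconfig set_mp_mmapsize 2147483648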

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: identity assertion

2006-01-20 Thread Eric Irrgang
On Fri, 20 Jan 2006, Pierangelo Masarati wrote:

What I don't follow is why you are trying to put back-ldap in
the middle.  Isn't your problem about finding some way to allow regular
users to access the cn=config tree?  You don't need back-ldap, you just
need to be able to authorize users to assume the identity you specified
as rootdn of the cn=config database.  Slapd allows you to do that
without back-ldap.  You could also do something like

authz-policy   from

databaseconfig
rootdn  cn=config,dc=test

Then, in the dc=test database you can add a cn=config,dc=test entry
and, in that entry, add authzFrom rules that allow those users you
intend to authorize.  The dc=test database can be of any type that
allows you to store an entry with the authzFrom attribute.
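
For illustration, such an entry might look roughly like the following (the
objectClass choice is only an assumption; the authzFrom value uses the
usual dn.exact: form):

dn: cn=config,dc=test
objectClass: organizationalRole
cn: config
authzFrom: dn.exact:cn=authorizeduser,dc=test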

I already have my target directory set up that way but I don't know how to
do identity assertion from a regular ldap client without using SASL.  Is
there a way?  For instance, the following fails with "ldapsearch: not
compiled with SASL support":

ldapsearch -x -W -D cn=authorizeduser,dc=test -X cn=config,dc=test


-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


slapd-ldap configuration and identity assertion

2006-01-20 Thread Eric Irrgang
I think my problem at this point is that I can't seem to get back-ldap to
use the authzID to try to assert another identity.

If I have the following then all operations are carried out as the
binddn, which is what I would expect.

idassert-bind bindmethod=simple
 binddn=cn=erici,dc=cc,dc=utexas,dc=edu
 credentials=hithere
 mode=none


And if I set mode=self then I see things like the following in the logs
and I gather that appropriate things are happening.

==slap_sasl_authorized: can cn=erici,dc=cc,dc=utexas,dc=edu become
(null)?
==slap_sasl_check_authz: does cn=erici,dc=cc,dc=utexas,dc=edu match
authzFrom rule in ?
==slap_sasl_check_authz: authzFrom check returning 32
== slap_sasl_authorized: return 48
= get_ctrls: n=1 rc=47 err=not authorized to assume identity


But I can't seem to get authzID to work as documented.  When I don't
specify 'mode' and I do specify authzID, I'm led to believe that I should
see a bind from the binddn and then an identity assertion to the authzID.

database        ldap
suffix          dc=test
uri             ldap://localhost:1389
idassert-bind   bindmethod=simple
 binddn=cn=erici,dc=cc,dc=utexas,dc=edu
 credentials=hithere
 authzID=dn:cn=config,dc=test
idassert-authzFrom dn.regex:.*

Instead, the connection gets relayed without using the binddn or the
authzID as if I hadn't used idassert-bind at all.

Am I missing something?

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


identity assertion

2006-01-19 Thread Eric Irrgang
If you want to be able to do a simple bind as one DN but perform actions
as another DN, you need to use some sort of identity assertion.  Is there
a way to do this without using back-ldap?

Specifically, I'm trying to work around the lack of ACL access to the
config backend by allowing specific DNs to assert the cn=config rootDN.
I've got rootdn for cn=config set to cn=config,dc=test and an entry in a
bdb backend for cn=config,dc=test with an authzFrom attribute set.

So I just need to bind as a user that is authorized with the authzFrom and
assert the cn=config,dc=test identity, right?

The only way I could think to do that was to try to set up another virtual
naming context of something like cn=admin and use some sort of rewriting
so that cn=config,cn=admin points at cn=config.

So I'm trying to set up an ldap backend that will proxy the bind through
to the dc=test backend and then assert the identity of cn=config,dc=test
so that the authorization is handled at dc=test via the authz-policy from.

Is this the right approach?  I guess one thing I'm a little unclear on is
what exactly is the authcID?  That's what has to match the authzFrom
attribute, right?  Is it basically just the bind DN in a simple bind
situation?

So far, a simple slapd config looks like the following, but I think slapd
is first authenticating the client by proxy and then rebinding anonymously
to try to assert the cn=config,dc=test identity.

database        config
rootdn          cn=config,dc=test

database        bdb
suffix          dc=test
...

database        ldap
suffix          cn=admin
uri             ldap://localhost
idassert-bind   mode=self
bindmethod=simple
authzID=dn:cn=config,dc=test
overlay rwm
rwm-suffixmassage   cn=admin 


I connect to localhost, do a simple bind as a real user, try to perform
an operation on cn=config,cn=admin, hoping that the operation will be
relayed to cn=config with an effective identity of cn=config,dc=test, but
I get insufficient access errors and the logs indicate that before the
identity assertion there is an anonymous bind.  Isn't the mode=self
supposed to take care of that?  Does my ldap database section need to have
'idassert-authzFrom dn:*'?

Suggestions, please?

I get the impression that I'm supposed to be using the extra features of
SASL binds but I was hoping not to open that can of worms for a while.  Do
I just need to go the SASL route?  Is there a way to move to SASL with my
current SSHA userPassword credentials intact?


-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


Re: How can I build openldap on 64-bit machine?

2005-12-15 Thread Eric Irrgang
I ran into trouble getting CFLAGS to be passed on properly to libtool for
some overlays or something.  I eventually gave up and put everything in my
CC definition.

On UltraSparc III (sparcv9):
CC='/opt/SUNWspro/bin/cc -g -xtarget=ultra -xarch=v9 -xc99=%none'
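
That definition then gets handed straight to configure, e.g. (prefix and
options are illustrative assumptions):

CC='/opt/SUNWspro/bin/cc -g -xtarget=ultra -xarch=v9 -xc99=%none' \
        ./configure --prefix=/usr/local/openldap --enable-bdb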

On Mon, 12 Dec 2005, Aaron Richton wrote:

If you point autoconf to a compiler that produces 64-bit executables, and
you have a 64-bit chain ready (64-bit bdb, for instance), you will produce
64-bit OpenLDAP software. However, your example of a 64-bit cc appears
dubious. Perhaps you want something along the lines of

CFLAGS='-xarch=v9 -g -xs' CC='/opt/SUNWspro/bin/cc' ./configure [...]

Please ask your compiler vendor if you need help with that, as cc is
not OpenLDAP software.

 Does openldap support 64-bit machine perfectly?

As perfectly as it supports any other architecture, more or less. I
produce and regularly run the test suite against a sparcv9 binary, and
usually file an ITS when it doesn't work (a rare situation lately). But I
don't use it in production.


-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


slapd doesn't notice completion of background index rebuilding

2005-12-15 Thread Eric Irrgang
With OL 2.3.13 and an hdb or bdb backend, I've tried changing indexes
dynamically by LDAPModifying cn=config by deleting and adding attribute
values.  A slapd thread wakes up and rebuilds the index database and then
goes quiet again, but when I do a search that should hit the index, slapd
doesn't acknowledge the existence of the index.

Dec 15 10:04:41 shub.cc.utexas.edu slapd-entdir[22883]: [ID 618536 local3.debug] do_abandon: bad msgid 0
Dec 15 10:04:41 shub.cc.utexas.edu slapd-entdir[22883]: [ID 925615 local3.debug] = bdb_equality_candidates: (uid) index_param failed (18)

then slapd grinds away through the full directory until it finds and
returns the matching entries.

Restarting slapd fixes the problem, but I assume there is supposed to be
some sort of message-passing internal to slapd that isn't happening when
the index is done rebuilding.  Waiting long periods of time or adding
additional indexable entries doesn't change matters.

Am I doing something wrong or has anyone else had this problem?

Moreover, I just noticed that if I try to use an attribute 'replace' on
olcDbIndex, the server crashes immediately!  That can't be right...

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342


sizelimit evaluated before ACLs?

2005-11-23 Thread Eric Irrgang
I'm sorry if this has already been discussed, but I can't seem to find
such a thread in the archives...

With OL 2.2.29 it looks to me like the sizelimit specified by a client
search is applied before the ACLs are evaluated on the server side, so that
if a client specifies a sizelimit of 10 and receives 8 results, it may be
obvious that 2 entries matched the filter but failed the ACL check,
disclosing perhaps more information than the directory maintainers would like.

Is this expected/intended behavior?

-- 
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342