RE: JdbmPartition repair

2016-01-25 Thread Quanah Gibson-Mount

Yes.  It's a BSD-style license.

<https://en.wikipedia.org/wiki/OpenLDAP>

--Quanah

--On Tuesday, January 26, 2016 5:35 AM + "Zheng, Kai" 
 wrote:



If we are to do that one day, we would rather use LMDB, which is way
faster than SQLite, proven, and small.


Agreed. Looking at the benchmark results at http://symas.com/mdb/microbench/,
LMDB seems pretty good, as does LevelDB. One question: is its license
(the OpenLDAP Public License) compatible with the Apache License 2.0?

Regards,
Kai

-Original Message-
From: Emmanuel Lécharny [mailto:elecha...@gmail.com]
Sent: Monday, January 25, 2016 11:58 PM
To: Apache Directory Developers List 
Subject: Re: JdbmPartition repair

Le 25/01/16 15:27, Zheng, Kai a écrit :

Thanks a lot for the detailed and insightful explanation. I can't absorb
it all yet because I'm not familiar with the code, but it will serve as
very good material when I someday need to look into the LDAP internals.
The details convince me that it's very necessary to have a strong,
mature, industry-proven backend for the LDAP server, because the LDAP
logic is already complex enough. We can't combine the LDAP logic with
the storage engine; they need to be separated, and developed and tested
separately. Mavibot looks like it is going in this direction, which
sounds good to me. What concerns me is that, since we lack the resources
to develop it, it may still take some time to become mature and robust.

Mavibot's code base is small: 17,947 SLOC



But if we leverage an existing engine, we can focus on the LDAP side,
work on some advanced features, move a little faster and ship releases
like 2.x, 3.x and so on. SQLite is C, yes, but it's supported on many
platforms and Java can use it via JNI;

That would be a real pain. Linking some JNI lib and making it a package is
really something we would like to avoid like the plague.

If we are to do that one day, we would rather use LMDB, which is way
faster than SQLite, proven, and small.


it's a library that can be embedded in an application. You may dislike
JNI, but only a few APIs would need to be wrapped for this usage, and
there are actually already wonderful wrappers for Java. Like
SnappyJava, the JNI layer along with the native library can be bundled
within a jar file and distributed exactly as a Maven module. One thing
I'm not sure about is how well LDAP entries fit the SQL table
model,

Bottom line: very badly. Actually, using a SQL backend to store LDAP
elements is probably the worst possible solution, simply because LDAP
supports multi-valued attributes, something SQL databases don't support
natively.
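The mismatch can be sketched with the stdlib `sqlite3` module (an illustrative model only, not any real backend's schema): a relational column holds a single value, so every multi-valued attribute ends up in a separate attribute/value table, and reassembling even one entry takes a join.

```python
import sqlite3

# Illustrative model only (not any real backend's schema): a generic
# attribute/value table is the usual workaround for LDAP's multi-valued
# attributes, since a plain relational column holds a single value.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entry (id INTEGER PRIMARY KEY, dn TEXT UNIQUE);
    CREATE TABLE attr_value (
        entry_id INTEGER REFERENCES entry(id),
        attr TEXT,
        value TEXT
    );
""")
conn.execute("INSERT INTO entry VALUES (1, 'cn=test,dc=example,dc=com')")
conn.executemany(
    "INSERT INTO attr_value VALUES (1, ?, ?)",
    [("objectClass", "top"), ("objectClass", "person"), ("cn", "test")],
)
# Reassembling a single entry already requires a second, join-style lookup.
rows = conn.execute(
    "SELECT attr, value FROM attr_value WHERE entry_id = 1 ORDER BY attr, value"
).fetchall()
print(rows)
```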


but I guess it would be worth investigating in this direction.
The benefits would be saving us a lot of development and debugging
time, plus robustness, high performance, transaction support and easy
querying. Just some thoughts in case any of them help. Thanks.


Thanks. We have been evaluating all those options for more than a decade
now :-) OpenLDAP has gone down the exact same path, for the exact same reasons.






--

Quanah Gibson-Mount
Platform Architect
Zimbra, Inc.

Zimbra ::  the leader in open source messaging and collaboration


Re: ApacheDS performances

2013-08-22 Thread Quanah Gibson-Mount
--On Thursday, August 22, 2013 11:39 PM +0200 Emmanuel Lécharny 
 wrote:



1) OpenLDAP is the fastest, with at least 26,000 searches/s, and the CPU
wasn't maxed out (we reached 85%). The network transfer rate was around
20 Mb/s.


As a reference, it takes me about 120 clients running on 8 different 
servers (15 clients per server) driving load against OpenLDAP to max out 
slapd at 53,000 searches/second.  8 clients get me 11,597 searches/second, 
16 clients 21,754 searches/second, etc.  It takes a lot of clients to 
really understand the full performance profile of OpenLDAP.


--Quanah


--

Quanah Gibson-Mount
Lead Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: Change Sequence Number

2013-05-24 Thread Quanah Gibson-Mount
--On Saturday, May 25, 2013 1:36 AM +0530 Kiran Ayyagari 
 wrote:









On Fri, May 24, 2013 at 8:38 PM, Emmanuel Lécharny 
wrote:

Le 5/24/13 4:33 PM, Rina Kotadia a écrit :


Hi ,
We are synchronizing users/groups from ApacheDS to our database using
JNDI. Have following questions


The other thing to note: using JNDI for LDAP is always a bad idea.

--Quanah

--

Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.

Zimbra ::  the leader in open source messaging and collaboration


[jira] [Commented] (DIRKRB-94) Keytab export for Kerberos principles

2013-05-11 Thread Quanah Gibson-Mount (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRKRB-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655343#comment-13655343
 ] 

Quanah Gibson-Mount commented on DIRKRB-94:
---

Are you looking for something like http://www.eyrie.org/~eagle/software/wallet/ 
?

> Keytab export for Kerberos principles
> -
>
> Key: DIRKRB-94
> URL: https://issues.apache.org/jira/browse/DIRKRB-94
> Project: Directory Kerberos
>  Issue Type: New Feature
>Reporter: James C. Wu
>Assignee: Emmanuel Lecharny
>
> As an administrator, I should be able to export the keytab of any Kerberos 
> principal without having to know their password. 
> The keytab can either be generated in an interactive way like Kadmin or can 
> be generated through a script, provided the right credentials.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Error when using ldapsearch with GSSAPI mechanism

2010-08-25 Thread Quanah Gibson-Mount



--On August 25, 2010 8:38:01 PM +0200 Stefan Seelmann  
wrote:



Hi Amila,


a...@aj-laptop:~/development/Tools/LDAP$ saslpluginviewer | grep -i gssapi
ANONYMOUS LOGIN CRAM-MD5 DIGEST-MD5 GSSAPI PLAIN NTLM EXTERNAL
Plugin "gssapiv2" [loaded],     API version: 4
 SASL mechanism: GSSAPI, best SSF: 56, supports setpass: no
ANONYMOUS LOGIN CRAM-MD5 DIGEST-MD5 GSSAPI PLAIN NTLM EXTERNAL
Plugin "gssapiv2" [loaded],     API version: 4
 SASL mechanism: GSSAPI, best SSF: 56


Please try to set SSF to 0 when using ldapsearch:
  ldapsearch ... -Y GSSAPI -O "maxssf=0"


Why should that be required?  Encrypting the GSSAPI connection is desirable 
much of the time...


--Quanah

--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration



Re: Back from week-end

2010-07-26 Thread Quanah Gibson-Mount
--On Monday, July 26, 2010 11:57 PM +0200 Emmanuel Lecharny 
 wrote:



  Hi guys,

I had 3 days off, and that was good !

However, I didn't spend all my time sleeping and looking at the sun. I
fixed a few problems in the Collective Attributes handling (mainly
schema-related issues).

We have some more issues to fix. One annoying one is that you can do
a search requesting a specific attributeType, or a supertype of it, and
you'll get it. For instance, if your entry has a 'cn' attribute, you will
get it if you ask for the 'name' attributeType (you'll get every
attributeType subtyping 'name', like 'sn', etc., if they exist). It does
not work for lookup.


Isn't that the way it is supposed to work?

[zim...@build08 ~]$ ldapsearch -LLL -x -H ldapi:/// -D "cn=config" -W 
"(uid=testuser1)" name

Enter LDAP Password:
dn: uid=testuser1,ou=people,dc=build08,dc=lab,dc=zimbra,dc=com
sn: testuser1
cn: testuser1



--Quanah

--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: Some more speed improvement for basic operations

2010-06-07 Thread Quanah Gibson-Mount
--On Tuesday, June 08, 2010 1:01 AM +0200 Emmanuel Lecharny 
 wrote:



Hi guys,

today I finished cleaning the rename() operation, and I just have the
last three operations remaining to be cleaned, i.e. move, moveAndRename
and compare.

In the meantime, I was able to avoid an Entry clone being done, and this
has a huge impact on some operations. Here are the results I get with or
without the clone:

add    : 578/s (with)    / 607/s (without)    / +5%
lookup : 19,568/s (with) / 26,542/s (without) / +36%
search : 19,727/s (with) / 19,560/s (without) / ---
modify : 1,991/s (with)  / 2,103/s (without)  / +5%
delete : 248/s (with)    / 248/s (without)    / ---

As we can see, the lookup is really faster with such a modification. The
other operations aren't much impacted; the cost of writing to disk kills
the gain we could otherwise have.
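As a sanity check, the deltas can be recomputed from the quoted figures (a throwaway sketch; with plain rounding, modify comes out at about +6% rather than the quoted +5%):

```python
# Throwaway sanity check: recompute the percentage deltas from the
# figures quoted in the mail (the post's +5% for modify rounds to +6%
# here; add and lookup match).
results = {
    "add":    (578, 607),
    "lookup": (19568, 26542),
    "search": (19727, 19560),
    "modify": (1991, 2103),
    "delete": (248, 248),
}
for op, (with_clone, without_clone) in results.items():
    gain = round(100 * (without_clone - with_clone) / with_clone)
    print(f"{op}: {gain:+d}%")
```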

One more thing: this is a test done with a single thread, directly on
top of the core-session.

However, it demonstrates that with enough cache, and a good network
layer, we should be able to get some good performances out of the server.


Exciting. :)  One of these days soon, I'll have a slamd perf lab again (HW 
exists, waiting on installation), so I can draw up some numbers between 
OpenLDAP, ApacheDS, and maybe some others.


--Quanah


--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: Suggestion to get Groovy LDAP restarted

2010-03-14 Thread Quanah Gibson-Mount
--On Sunday, March 14, 2010 11:22 PM +0100 Emmanuel Lecharny 
 wrote:



On 3/14/10 11:09 PM, Quanah Gibson-Mount wrote:

--On Sunday, March 14, 2010 1:39 PM +0100 Stefan Zoerner
 wrote:


The question is: What is the target group of the library? LDAP people
like us don't want to hide LDAP functionality with a library like JNDI,
which uses strange names (bind for adding entries, for instance) for
common functionality.


Also worth noting that JNDI is in general broken, poorly (if ever)
maintained, and the current people working on it at Sun broke it in
the 1.6.0_u17 release, and it remains broken in 1.6.0_u18, because
they don't understand BER.  Thankfully I was able to show them their
error, and they are working on a fix, but I don't know which release
will have it.

I would be interested to know what kind of errors you have found in
1.6.0_17, if you can provide some more info.


<http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6921610>

--Quanah



--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: Suggestion to get Groovy LDAP restarted

2010-03-14 Thread Quanah Gibson-Mount
--On Sunday, March 14, 2010 1:39 PM +0100 Stefan Zoerner  
wrote:



The question is: What is the target group of the library? LDAP people
like us don't want to hide LDAP functionality with a library like JNDI,
which uses strange names (bind for adding entries, for instance) for
common functionality.


Also worth noting that JNDI is in general broken, poorly (if ever) 
maintained, and the current people working on it at Sun broke it in the 
1.6.0_u17 release, and it remains broken in 1.6.0_u18, because they don't 
understand BER.  Thankfully I was able to show them their error, and they 
are working on a fix, but I don't know which release will have it.


Anyone I've worked with who uses Java soon abandons JNDI because of these 
and other issues (custom controls, new RFC's, etc etc etc etc etc).


--Quanah



--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


[jira] Commented: (DIRSERVER-1214) Searches done with an empty baseDN are not accepted, except for the rootDSE

2010-02-26 Thread Quanah Gibson-Mount (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRSERVER-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12838986#action_12838986
 ] 

Quanah Gibson-Mount commented on DIRSERVER-1214:


So ignore my first bit about Stanford.edu.  They have a "defaultsearchbase" 
value set, which redirects queries against "" to the dc=stanford,dc=edu base.  
When that is not set, a subtree search also returns error 32, no such object.

Not that this is necessarily the *correct* behavior. ;)

> Searches done with an empty baseDN are not accepted, except for the rootDSE
> ---
>
> Key: DIRSERVER-1214
> URL: https://issues.apache.org/jira/browse/DIRSERVER-1214
> Project: Directory ApacheDS
>  Issue Type: Bug
>Affects Versions: 1.5.3
>Reporter: Emmanuel Lecharny
> Fix For: 1.5.6
>
>
> We can't do a search with an empty baseDN, when it's not specifically a 
> rootDSE search (ie, (objectClass=*) and scope=OBJECT).
> We should consider that such a search is spread over all the partitions.
> This is not easy to implement without the nested partitions, as the current 
> existing partitions are potentially stored in different backends.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (DIRSERVER-1214) Searches done with an empty baseDN are not accepted, except for the rootDSE

2010-02-26 Thread Quanah Gibson-Mount (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRSERVER-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12838973#action_12838973
 ] 

Quanah Gibson-Mount commented on DIRSERVER-1214:


So, at least with sub, the behavior is fairly clear: it returns whatever it has 
access to.  But with OpenLDAP, using a scope of "one" and separate databases 
for the contexts (rather than a "" database), I get:

[zim...@freelancer ~]$ ldapsearch -x -b "" -s one -h freelancer
# extended LDIF
#
# LDAPv3
# base <> with scope oneLevel
# filter: (objectclass=*)
# requesting: ALL
#

# search result
search: 2
result: 32 No such object

# numResponses: 1


So if there is no "" database, a one-level search says no such object.  I don't 
know if that's really the correct behavior or not; it's an interesting question.

> Searches done with an empty baseDN are not accepted, except for the rootDSE
> ---
>
> Key: DIRSERVER-1214
> URL: https://issues.apache.org/jira/browse/DIRSERVER-1214
> Project: Directory ApacheDS
>  Issue Type: Bug
>Affects Versions: 1.5.3
>Reporter: Emmanuel Lecharny
> Fix For: 1.5.6
>
>
> We can't do a search with an empty baseDN, when it's not specifically a 
> rootDSE search (ie, (objectClass=*) and scope=OBJECT).
> We should consider that such a search is spread over all the partitions.
> This is not easy to implement without the nested partitions, as the current 
> existing partitions are potentially stored in different backends.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (DIRSERVER-1214) Searches done with an empty baseDN are not accepted, except for the rootDSE

2010-02-26 Thread Quanah Gibson-Mount (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRSERVER-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12838939#action_12838939
 ] 

Quanah Gibson-Mount commented on DIRSERVER-1214:


This is a very real issue, and ignoring it doesn't make it go away. :)

I can show you the behavior for OpenLDAP (for ldap.stanford.edu, which has a 
root of "dc=stanford,dc=edu"):

tribes:~> ldapsearch -x -h ldap -b "" | more
# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# stanford.edu
dn: dc=stanford,dc=edu
objectClass: dcObject
objectClass: organization
o: Stanford University
dc: stanford
l: Palo Alto


(etc)

More importantly, how are you going to handle people who have databases 
rooted at ""?  That's what we do at Zimbra, as we support ISPs, and thus 
multiple domains that can exist across org, com, edu, etc.  You should 
*always* be able to do a subtree search on "", and it should simply return the 
databases as they exist (subject to ACL rules, etc., of course).

It is the same as any other subtree search.

--Quanah


> Searches done with an empty baseDN are not accepted, except for the rootDSE
> ---
>
> Key: DIRSERVER-1214
> URL: https://issues.apache.org/jira/browse/DIRSERVER-1214
> Project: Directory ApacheDS
>  Issue Type: Bug
>Affects Versions: 1.5.3
>Reporter: Emmanuel Lecharny
> Fix For: 1.5.6
>
>
> We can't do a search with an empty baseDN, when it's not specifically a 
> rootDSE search (ie, (objectClass=*) and scope=OBJECT).
> We should consider that such a search is spread over all the partitions.
> This is not easy to implement without the nested partitions, as the current 
> existing partitions are potentially stored in different backends.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Question about new OC addition to the schema

2009-12-11 Thread Quanah Gibson-Mount
--On Saturday, December 12, 2009 2:15 AM +0100 Emmanuel Lecharny 
 wrote:



Quanah Gibson-Mount a écrit :

--On Friday, December 11, 2009 9:46 PM +0200 Ersin ER
 wrote:


IMO it's an error and should be rejected with the appropriate message.
It's also possible that the user intended to write nn (or any valid
similar attribute) instead of the second cn.


Yes, it is always difficult to discern user intent, another excellent
point.


Same corner case: what if we have an AT present in the MUST *and* in
the MAY? Should it be considered an error?


Absolutely.  Then it is even more important to know what they meant.  Is it 
required, or is it optional?  That's a major distinction.


--Quanah


--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: Question about new OC addition to the schema

2009-12-11 Thread Quanah Gibson-Mount
--On Friday, December 11, 2009 9:46 PM +0200 Ersin ER  
wrote:



IMO it's an error and should be rejected with the appropriate message.
It's also possible that the user intended to write nn (or any valid
similar attribute) instead of the second cn.


Yes, it is always difficult to discern user intent, another excellent point.

--Quanah

--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: Question about new OC addition to the schema

2009-12-11 Thread Quanah Gibson-Mount
--On Friday, December 11, 2009 7:10 PM +0100 Emmanuel Lecharny 
 wrote:



Hi guys,

just a question about how to handle a specific case. If someone tries to
inject an OC with an AT present twice in the MAY or MUST, what should we
do?

For instance, we have:

MAY ( cn $ sn $ cn )

Should it be considered an error and rejected, or should we just
accept the OC?


I think it should be rejected as an error.  I should see what OL does with 
such a thing.
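A rejection check could look something like this (a hypothetical sketch, not ApacheDS's or OpenLDAP's actual schema parser; the function name is made up):

```python
# Hypothetical sketch: split a MAY/MUST clause such as
# "MAY ( cn $ sn $ cn )" and reject duplicate attribute types.
def parse_attribute_list(clause):
    inner = clause[clause.index("(") + 1 : clause.rindex(")")]
    ats = [at.strip() for at in inner.split("$")]
    seen = set()
    for at in ats:
        key = at.lower()  # attribute type names are case-insensitive
        if key in seen:
            raise ValueError(f"duplicate attribute type in clause: {at}")
        seen.add(key)
    return ats

print(parse_attribute_list("MAY ( cn $ sn )"))  # ['cn', 'sn']
try:
    parse_attribute_list("MAY ( cn $ sn $ cn )")
except ValueError as e:
    print(e)  # duplicate attribute type in clause: cn
```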


my 2c.

--Quanah



--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


[jira] Commented: (DIRSTUDIO-513) Do delete before add when modifying attribute values

2009-08-21 Thread Quanah Gibson-Mount (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRSTUDIO-513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12746053#action_12746053
 ] 

Quanah Gibson-Mount commented on DIRSTUDIO-513:
---

Stefan,

I think the problem is more that some people choose to hide their schema from 
being read out of the subschema entry, which is a perfectly valid thing to do.  
So any well-written LDAP client needs to be able to work with the fact that the 
schema may or may not be browsable.  There is even some security suite out 
there that "warns" people that exposing the subschema entry is a security risk. 
 Not that I agree with it. ;)

> Do delete before add when modifying attribute values
> 
>
> Key: DIRSTUDIO-513
> URL: https://issues.apache.org/jira/browse/DIRSTUDIO-513
> Project: Directory Studio
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Martin Alderson
>Priority: Minor
>
> When connecting to Novell eDirectory and modifying the schema an "Attribute 
> Or Value Exists" error occurs.  This is due to the modification performing an 
> add before the delete and eDirectory (wrongly) complains that the same OID 
> has been used more than once before realising that the old value should be 
> deleted.  Note that this is a problem with eDirectory but it would be useful 
> if Studio asked for the delete to be performed before the add when modifying 
> an attribute value which eDirectory is OK with.
> An example of the LDIF in the modifications logs view for an operation that 
> fails is:
> dn: cn=schema
> changetype: modify
> add: objectClasses
> objectClasses: ( 2.16.840.1.113730.3.2.2 NAME 'inetOrgPerson' [...new 
> value...]
> -
> delete: objectClasses
> objectClasses: ( 2.16.840.1.113730.3.2.2 NAME 'inetOrgPerson' [...old 
> value...]
> -
> It also seems that modifying the schema on ApacheDS has the same issue.
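The reordering the report asks for can be sketched like this (illustrative Python, not Studio's actual code; the tuple shape and values are made up):

```python
# Illustrative sketch: sort the modifications of one modify request so
# that deletes come before adds, the order eDirectory (per the report)
# is happy with.  sorted() is stable, so relative order is otherwise kept.
def delete_first(mods):
    """mods: list of (op, attribute, value) tuples from one modify request."""
    return sorted(mods, key=lambda m: 0 if m[0] == "delete" else 1)

mods = [
    ("add", "objectClasses", "( 2.16.840.1.113730.3.2.2 NAME 'inetOrgPerson' ... )"),
    ("delete", "objectClasses", "( 2.16.840.1.113730.3.2.2 NAME 'inetOrgPerson' ... )"),
]
print([op for op, _, _ in delete_first(mods)])  # ['delete', 'add']
```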

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [Client-API] Should the unbind() operation close the connection ?

2009-08-05 Thread Quanah Gibson-Mount



--On August 5, 2009 11:26:02 AM -0700 Howard Chu  wrote:


Quanah Gibson-Mount wrote:

AFAIK, it is perfectly valid to unbind and then rebind as a new user,
without ever closing the connection, and without ever involving
connection pooling.  This is why various LDAP api's, such as Net::LDAP,
allow for the unbind operation as something separate than the close
operation.


Wrong.

To rebind as a new user and keep the same connection, simply issue a new
Bind request. Unbind = Close.


Eh, you're right.  I needed to refresh my Net::LDAP functions list. :P

--Quanah


--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [Client-API] Should the unbind() operation close the connection ?

2009-08-05 Thread Quanah Gibson-Mount
AFAIK, it is perfectly valid to unbind and then rebind as a new user, 
without ever closing the connection, and without ever involving connection 
pooling.  This is why various LDAP api's, such as Net::LDAP, allow for the 
unbind operation as something separate than the close operation.


--Quanah

--On August 5, 2009 1:53:42 PM -0400 Alex Karasulu  
wrote:



I think this is a matter of whether or not you are dealing with
connection pooling.

Regards,
Alex


On Wed, Aug 5, 2009 at 12:05 PM, Emmanuel Lecharny 
wrote:

AFAICT, there is nowhere in the RFC where it's explicitly said
that the Unbind operation closes the connection.

Should we close the connection on the client side when doing an Unbind(),
or keep it open?

--
--
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org






--
Alex Karasulu
My Blog :: http://www.jroller.com/akarasulu/
Apache Directory Server :: http://directory.apache.org
Apache MINA :: http://mina.apache.org





--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [jira] Created: (DIRSERVER-1383) There is a confusion between Anonymous access and Access to rootDSE

2009-07-20 Thread Quanah Gibson-Mount
--On Monday, July 20, 2009 9:50 PM -0400 Alex Karasulu 
 wrote:



Ahhh okie you're right on.  My bad.


This is quite correct.  There are even some (stupid) security programs that 
will say being able to read the rootDSE is a vulnerability.  OTOH, I've 
always left it readable to the world; most clients prefer it. :P


--Quanah

--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [Mitosis] Push or pull, plus some random thoughts

2009-01-14 Thread Quanah Gibson-Mount
--On Thursday, January 15, 2009 1:28 AM +0100 Emmanuel Lecharny 
 wrote:




The queue will be limited in size, obviously. It can even be empty, with
all the modifications being stored on disk.

However, we don't necessarily have to keep track of pending modifications
for disconnected replicas, as it's easy to know which entries have not been
replicated since the last time the replica was online. The connecting replica
could send the last entryCSN it received, and then a search can be done on
the server for every entry with a higher CSN. Then you don't need to keep
any modifications on disk, as they are already stored in the DIT.


Sounds a lot like syncrepl. ;)

That search doesn't sound like it handles deletes though, which is always a 
PITA.
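The catch-up scheme described above, and the delete gap, can be sketched like this (illustrative only; real entryCSNs are structured values, simplified here to integers):

```python
# Provider-side state: dn -> entryCSN.  A deleted entry leaves no row
# behind, so a plain "CSN greater than X" search cannot report deletes.
entries = {
    "cn=a,dc=example,dc=com": 10,
    "cn=b,dc=example,dc=com": 25,
    "cn=c,dc=example,dc=com": 31,
}

def catch_up(last_csn):
    """Entries changed since the replica's last known CSN."""
    return sorted(dn for dn, csn in entries.items() if csn > last_csn)

print(catch_up(20))  # cn=b and cn=c changed since CSN 20
```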


--Quanah


--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [ApacheDS][Mitosis] Replication data

2007-12-01 Thread Quanah Gibson-Mount



--On December 1, 2007 4:45:51 PM -0800 Howard Chu <[EMAIL PROTECTED]> wrote:


That sounds like a sensible approach. Searching the changelog is the
key. I'd love to get the big picture here and try to make sure we can
replicate between ApacheDS and OpenLDAP. This would be very beneficial
to both user bases.


It sounds to me like you already have all of the schema elements in place
to get RFC4533 implemented. We can work out the delta MMR stuff as a
joint project, always good to have someone else to check our assumptions.


Also, if you are interested in setting up OpenLDAP with delta-syncrepl, I 
documented it on Symas' tech tips at:


<http://www.connexitor.com/forums/viewtopic.php?t=3>


It's been my preferred replication mechanism with OpenLDAP, as it has 
significantly less traffic overhead in a high-write environment.


--Quanah

--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [ApacheDS] Change log ietf draft

2007-10-01 Thread Quanah Gibson-Mount
--On Monday, October 01, 2007 2:19 AM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:



In any case, we also need to be able to log Search requests. It's
part of any AAA system... And some countries mandate that you store
such information (nah, not China nor the USA: Switzerland!) for tracking
purposes if you open this system to the public.


Yeah, completely familiar with that. ;)

Also, I should correct my earlier bit -- with delta-syncrepl, it is not 
required that you only log writes.  You can log any other operations you 
want too, but logging writes is required. ;)  Part of the setup for 
delta-syncrepl restricts it to only reading write ops from the access log 
database.


<http://www.connexitor.com/forums/viewtopic.php?t=3> has an example of 
setting it up under OpenLDAP that I wrote up a while ago.


I'd love to see a common replication mechanism between ADS and OpenLDAP 
(really, I'd love to see one (or more) across all the dir servers. ;) ).


--Quanah

--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [ApacheDS] Change log ietf draft

2007-09-30 Thread Quanah Gibson-Mount
--On Monday, October 01, 2007 12:59 AM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:



Hi Quanah,

just wondering, why not simply dump the data using the LDIF format? Is
there any missing element which forbids you from using this format, with
changeType?


I believe that part of it is to track informational data (like request 
Controls, for example) that's not as easily tracked in LDIF, but I didn't 
write the spec.  Mainly, I was noting something that the logschema draft is 
used for (and which so far is my preferred replication mechanism in 
OpenLDAP).


--Quanah

--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [ApacheDS] Change log ietf draft

2007-09-30 Thread Quanah Gibson-Mount
--On Saturday, September 29, 2007 1:31 PM +0300 Ersin Er 
<[EMAIL PROTECTED]> wrote:



This is really not a change log draft. It goes beyond and logs any LDAP
operation. More to come soon..


If it is related to the accesslog backend in OpenLDAP, then it is 
configurable what operations get logged, so it can, for example, be limited 
to just write operations.  That's what the delta-syncrepl replication 
mechanism in OpenLDAP is based off of.


--Quanah


--

Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [Exception Message Spelling] Syntaxically

2007-06-17 Thread Quanah Gibson-Mount
--On Sunday, June 17, 2007 5:10 PM -0600 Chris Custine 
<[EMAIL PROTECTED]> wrote:



I am pretty sure that syntaxically is not a real word and is probably a
misspelled form of "syntactically".  However, for what it's worth, you guys
do a better job with English than many people I know here in the US.  :-)


<http://www.m-w.com/cgi-bin/dictionary?va=syntactically>

is the correct spelling.  Syntaxically is not a word in the English 
language, although plenty of people seem to use it incorrectly. ;)


I find it highly amusing that a patent about teaching English as a foreign 
language had that misspelling in it. ;)


--Quanah



--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [jira] Created: (DIRSERVER-950) Server Left in Unstable State

2007-05-31 Thread Quanah Gibson-Mount
--On Thursday, May 31, 2007 12:43 PM -0500 Ole Ersoy <[EMAIL PROTECTED]> 
wrote:





Emmanuel Lecharny wrote:

On 5/31/07, Ole Ersoy <[EMAIL PROTECTED]> wrote:

Emmanuel,

I do my best to please you


It's not only about me...


Oh yes it is.  You are the
only one who has commented on this.

This is strictly your preference,
unless you get 5 people
on the "Entire" mailing list to
agree with you.

I'm being pretty flexible here.


The way in which emails are written is pretty standard.  And yes, your 
writing "style" is quite unique, and rather annoying.


D
o

yo
u

like t
o

rea

d

brok
en

up ema

il?


Because I sure don't.
--Quanah

--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [ApacheDS] Internal vs. external lookups

2007-05-30 Thread Quanah Gibson-Mount
--On Thursday, May 31, 2007 1:17 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:






On 5/31/07, Quanah Gibson-Mount <[EMAIL PROTECTED]> wrote:

--On Wednesday, May 30, 2007 10:11 PM -0700 Enrique Rodriguez
<[EMAIL PROTECTED]> wrote:


Actually, I very much care whether the request is internal vs.
external and much much less "who" is attempting the authentication.
The issue with what I want to do is that certain operations must NEVER
be allowed to occur from outside the server.  Basing this upon the
bind principal does not help since a bind principal can be
compromised.  To avoid a security problem when a principal is
compromised, I must prevent certain operations from ever occurring from
outside the server, and thus I must know whether a request is coming
from inside vs. outside the server and not who the bind principal is.


This is something that matters considerably when considering dynamic group
expansion.  I haven't followed whether or not Apache DS has implemented (or
will implement) this, but that's certainly a place where I found it is
necessary to have the concept of an internal ID acting on different
permissions from the external ID making a request.



This is interesting; can you elaborate or give an example of such a
situation?


Certainly. :)

At Stanford, user groups were always implemented via an attribute 
(suPrivilegeGroup).  Due to data policy restrictions because of US Law 
(FERPA, HIPAA), access to the attribute itself is highly restricted.  The 
attribute is multi-valued, and may contain data that is not covered by US 
Law (and thus world readable).  Stanford desires to use the attribute 
values for authorization via groups.  So an example entry might have 
something like:


suregid=1234,cn=people,dc=stanford,dc=edu

suPrivilegeGroup: FERPA1
suPrivilegeGroup: FERPA2
suPrivilegeGroup: WORLD1
suPrivilegeGroup: WORLD2


Dynamic groups work on the concept of evaluating an LDAP URI to create the 
membership list.  So that might be something like (and this is off the top 
of my head for that rather arcane URI syntax...)


ldap:///cn=people,dc=stanford,dc=edu??suPrivilegeGroup=FERPA1


Now, normally, to create the population for this dynamic group, it requires 
that the identity connecting to the server (let's say "www") has at least 
search ability on the suPrivilegeGroup attribute.  However, if Stanford 
grants search on that attribute to the "www" principal, any user on the www 
servers can potentially use the credentials of the www server (via CGI 
scripts or other methods) to find out what people are in particular groups. 
To fix this issue, the general idea is to allow an "internal" ID to be 
defined in the dynamic group object that is what performs the actual 
evaluation of the LDAP URI.  This way, the LDAP access to the group object 
can be allowed for the "www" identity, but it itself actually has no 
ability to search the user entries for suPrivilegeGroup values.
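The separation described above can be sketched in code. This is a hypothetical toy model, not ApacheDS or OpenLDAP API: the entry data, the ACL table, and the `expand_dynamic_group` helper are all illustrative, showing only how an internal evaluation identity can expand the group while "www" itself has no search rights on the restricted attribute.

```python
# Toy model of dynamic-group expansion with a separate "internal"
# evaluation identity. All names here are illustrative assumptions.

# Directory entries: DN -> attributes (suPrivilegeGroup is ACL-restricted).
ENTRIES = {
    "suregid=1234,cn=people,dc=stanford,dc=edu": {
        "suPrivilegeGroup": ["FERPA1", "WORLD1"]},
    "suregid=5678,cn=people,dc=stanford,dc=edu": {
        "suPrivilegeGroup": ["FERPA1", "WORLD2"]},
}

# Which identity may search which attributes (toy ACL table).
ACL = {
    "internal-evaluator": {"suPrivilegeGroup"},   # internal ID only
    "www": set(),                                 # www cannot search it
}

def expand_dynamic_group(filter_attr, filter_value, as_identity):
    """Evaluate a memberURL-style filter as the given identity."""
    if filter_attr not in ACL.get(as_identity, set()):
        raise PermissionError(f"{as_identity} may not search {filter_attr}")
    return sorted(dn for dn, attrs in ENTRIES.items()
                  if filter_value in attrs.get(filter_attr, []))

# The group object is evaluated by the internal ID; "www" only reads
# the resulting group entry and never searches the user entries itself.
members = expand_dynamic_group("suPrivilegeGroup", "FERPA1",
                               as_identity="internal-evaluator")
print(members)
```

Run as "www" instead, and the expansion is refused, which is exactly the property the internal ID is meant to provide.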


There's a draft RFC on dynamic groups that incorporates 
this idea, but has several serious flaws in it.  A discussion about the 
draft and its current flaws can be found at:


<http://www.openldap.org/lists/ietf-ldapext/200702/threads.html>

--Quanah

--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [ApacheDS] Internal vs. external lookups

2007-05-30 Thread Quanah Gibson-Mount
--On Wednesday, May 30, 2007 10:11 PM -0700 Enrique Rodriguez 
<[EMAIL PROTECTED]> wrote:



Actually, I very much care whether the request is internal vs.
external and much much less "who" is attempting the authentication.
The issue with what I want to do is that certain operations must NEVER
be allowed to occur from outside the server.  Basing this upon the
bind principal does not help since a bind principal can be
compromised.  To avoid a security problem when a principal is
compromised, I must prevent certain operations from ever occurring from
outside the server, and thus I must know whether a request is coming
from inside vs. outside the server and not who the bind principal is.


This is something that matters considerably when considering dynamic group 
expansion.  I haven't followed whether or not Apache DS has implemented (or 
will implement) this, but that's certainly a place where I found that it is 
necessary to have the concept of an internal ID acting on different 
permissions from the external ID making a request.


--Quanah



--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: Building an LDAP server is not an easy task (was: Re: [ApacheDS Webapps] Use Cases)

2007-05-21 Thread Quanah Gibson-Mount



--On May 21, 2007 6:19:27 PM -0400 Alex Karasulu <[EMAIL PROTECTED]> 
wrote:



Yeah this is not going to happen or if it does it will take another 2
decades.  The key to having
an LDAP renaissance is simple really.  Here's the formula IMO:

(1) Resuscitate some critical X.500 concepts that the LDAP creators
chopped out of LDAP
  to oversimplify it: namely talking about the administrative model
here and not OSI stack.
  Kurt began doing this with a couple of RFCs like the one for subentries
and collective attributes.
(2) Provide some solid tooling to simplify and accommodate the lack of
knowledge around LDAP
  and X.500 concepts which LDAP is built upon.  The RDBMS world is
rich with tooling support
  yet the LDAP world has virtually none.
(3) Provide the rich integration tier constructs many RDBMS developers
are accustomed to yet
  transposed to the LDAP plane.  These constructs include:
  (a) LDAP Stored Procedures
  (b) LDAP Triggers
  (c) LDAP Views
  (d) LDAP Queues (to interface with MOMs)

*** incidentally we use the X.500 admin model to implement these features

If these critical pieces of the puzzle are solved then we'll see the
Directory come back as the swiss army
knife of integration it was intended to be.  Right now Directories are
stuck serving out white pages and
admins are still scratching their heads when trying to figure out how to
remove users from groups when
users are deleted to maintain referential integrity.  Why have we messed
this up so badly?

The key to solving the integration problems with LDAP which will plague
the enterprise for the next 30 years
lies in these critical features.  If we cannot see this and correct our
path together then our chances
of renewing the demand for LDAP are lost as new half baked technologies
emerge to solve these problems
and clutter the vision of those that should be deciding on LDAP.


Works for me. :)

--Quanah

--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: Building an LDAP server is not an easy task (was: Re: [ApacheDS Webapps] Use Cases)

2007-05-21 Thread Quanah Gibson-Mount



--On May 21, 2007 5:34:45 PM -0400 Alex Karasulu <[EMAIL PROTECTED]> 
wrote:




LDAP servers today do no justice to the technology.  I think we have a
chance to bring this protocol to the
21st century.

Eventually we'll be at the same level as other servers out there.  And
hopefully those other efforts will start to
take LDAP where we want it to go just because they want to remain
competitive with our feature set.


I've discussed various feature enhancements to LDAP in general with Howard 
Chu over the years.  One of the major problems is if you want to stay RFC 
compliant or not.  The way in which the protocol was written makes it 
particularly difficult to add new features while remaining compliant (that 
is in large part why the overlay system was developed for OpenLDAP).  What 
I've suggested a few times is to throw away LDAP and start over with 
something new (what I generally call SDAP, or SMART DAP).  Of course, then 
one would no longer have an LDAP server, and I'm guessing corporate vendors 
would be slow to adopt (given the slow pace of their LDAP v3 adoption 
already...).


--Quanah


--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: [Kerberos] FYI, draft Kerberos schema

2007-05-17 Thread Quanah Gibson-Mount

--On Thursday, May 17, 2007 7:46 AM -0500 [EMAIL PROTECTED] wrote:


On May 13,  8:52pm, "Apache Directory Developers List" wrote:
} Subject: Re: [Kerberos] FYI, draft Kerberos schema

Good morning to everyone, I hope your respective weeks are going well.

First of all kudos to Enrique for his efforts on the Kerberos
implementation.  I've probably been as deep inside general Kerberos
issues as anyone.  Much of this stuff is vague and only characterized
by implementation(s).  It is a non-trivial exercise to get things
functional, let alone interoperable.


The IETF Krb working group is looking for people to work on a draft 
Kerberos LDAP schema.  If the Apache DS project is interested in having a 
valid RFC-approved schema (which is something I've been wanting for years), 
those who are interested may wish to join and participate.


--Quanah

--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc

Zimbra ::  the leader in open source messaging and collaboration


Re: ou=system is parent context for all entries?

2007-03-30 Thread Quanah Gibson-Mount



--On Friday, March 30, 2007 3:49 PM -0500 Ole Ersoy <[EMAIL PROTECTED]> 
wrote:




Emmanuel Lecharny wrote:

Ole Ersoy a écrit :



Incidentally - Some might like it if
the DAS was able to just create this root context without using the
configuration file.

Then ADS could just add it to the configuration file.


This is something we could work on, but this is not currently
possible. The basic idea is to store all the configuration into ADS,
so that you can modify it through Ldap requests (of course, the port
and host will remain in a file :)


OK - 40 characters per line huh? :-)
Sure I like them short as well.


Usually, a smart email client will wrap lines as appropriate.  And aren't 
most terminals 79 characters??


--Quanah

--
Quanah Gibson-Mount
Senior Systems Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: V1.0.1 schema

2007-03-30 Thread Quanah Gibson-Mount



--On Friday, March 30, 2007 9:32 PM +0200 Stefan Zoerner <[EMAIL PROTECTED]> 
wrote:



Tony Thompson wrote:

Yeah, I am using that on the group side but I want to keep track of the
groups the user is in from the perspective of the user object.  So,
something like this:

cn=MyGroup,dc=example,dc=org
member: cn=MyUser,dc=example,dc=org

cn=MyUser,dc=example,dc=org
memberOf: cn=MyGroup,dc=example,dc=org

Tony



Hi Tony!

I know that Active Directory does something exactly like that. Most
directory servers I know don't. The information is redundant, and it is
not easy to keep both directions of the association consistent.

It seems to be an advantage to have the ability to perform a simple
lookup and know all the groups a user belongs to. But with clever filter
choice, you can determine direct group membership with a single search op
without an attribute on the user side. And for *all* groups a user
belongs to (directly or via groups within groups), you always need an
algorithm with several search ops -- even if you have both directions
stored.

I recommend this article, if you don't already know it. It contains
descriptions of the algorithms.
http://middleware.internet2.edu/dir/groups/rpr-nmi-edit-mace_dir-groups_best_practices-1.0.html
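The "several search ops" algorithm for finding *all* groups a user belongs to, directly or through nested groups, can be sketched as a breadth-first expansion. This is an illustrative model with made-up group data; in a real directory, each call to `groups_of` would be one LDAP search operation.

```python
# Toy nested-group data: group -> direct members (users or groups).
GROUP_MEMBERS = {
    "cn=staff":  ["uid=alice", "cn=admins"],
    "cn=admins": ["uid=bob"],
    "cn=all":    ["cn=staff"],
}

def groups_of(entity):
    """One 'search op': groups listing `entity` as a direct member."""
    return [g for g, members in GROUP_MEMBERS.items() if entity in members]

def all_groups(user):
    """Breadth-first expansion; each level costs another round of searches."""
    seen, frontier = set(), [user]
    while frontier:
        nxt = []
        for entity in frontier:
            for g in groups_of(entity):
                if g not in seen:
                    seen.add(g)
                    nxt.append(g)
        frontier = nxt
    return sorted(seen)

print(all_groups("uid=alice"))
```

Note that the loop is needed even if both membership directions were stored, which is the point made above: transitive membership always requires iterated searches.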


Not necessarily.  If you use dynamic groups, you can have a single 
attribute on the user side that stores group membership, and then an 
evaluated URI in a group object that creates the group "on the fly".  It 
works very well.  Unfortunately, AD is broken in this area, and cannot use 
them for authorization (it can only use static groups).


--Quanah



--
Quanah Gibson-Mount
Senior Systems Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: [ADS LDAP DAS] ADS Specific

2007-03-29 Thread Quanah Gibson-Mount
--On Thursday, March 29, 2007 10:43 PM -0500 Ole Ersoy 
<[EMAIL PROTECTED]> wrote:



Oh - Cool - Do you know how to do it via JNDI by any chance?

I assume it's similar to how ADS does it...

I guess we'll just have to make sure the DAS understands various
"Configuration Dialects"
then.


Sorry, although we use JNDI, we've not switched to using back-config yet 
(instead of text configuration files) so it is not something we've played 
with. :)


--Quanah

--
Quanah Gibson-Mount
Senior Systems Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: [ADS LDAP DAS] ADS Specific

2007-03-29 Thread Quanah Gibson-Mount
--On Thursday, March 29, 2007 9:37 PM -0500 Ole Ersoy <[EMAIL PROTECTED]> 
wrote:



Hey Guys,

Lets get funky with the acronyms.

I think the LDAP DAS will be ADS Specific, because of
ADS's capability to add ObjectClasses on the fly,
unless we also know that other LDAP servers have this capability?

I just thought I'd ask, because in the design guide I'm referring to ADS
a lot based on this assumption.


OpenLDAP certainly allows schema additions and modifications on the fly.

<http://www.openldap.org/software/man.cgi?query=slapd-config&apropos=0&sektion=0&manpath=OpenLDAP+2.4-Release&format=html>
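As a rough illustration (not from this thread), a runtime schema addition through OpenLDAP's cn=config backend looks something like the ldapmodify input below. The OID, attribute name, and the target schema entry (cn={4}custom here) are placeholders that depend on the local configuration; treat this as a sketch, not a copy-paste recipe.

```ldif
dn: cn={4}custom,cn=schema,cn=config
changetype: modify
add: olcAttributeTypes
olcAttributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'exampleAttr'
  DESC 'Placeholder attribute added at runtime'
  EQUALITY caseIgnoreMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
```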


--Quanah


--
Quanah Gibson-Mount
Senior Systems Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: Benchmarks feedback

2007-03-27 Thread Quanah Gibson-Mount
--On Wednesday, March 28, 2007 12:17 AM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:



Quanah Gibson-Mount a écrit :


--On Tuesday, March 27, 2007 11:29 PM +0200 Emmanuel Lecharny
<[EMAIL PROTECTED]> wrote:


Unfortunately, one that is not proprietary is of course desired.



This is why slamd seems a good tool. If its performance is bad, then
it's
up to the users to improve it. Version 2.0.0 seems promising, with
another backend (BDB JE)



I use 2.0 checked out of CVS. ;)  And I've got plenty of network
bandwidth at the moment (1GB between all systems).  It certainly can
do what is necessary as long as I give it enough machines to do the
work.  I'd just prefer to use fewer than I do now. ;)


Interesting. How does it compare to 1.8, so far? Is it faster? (I agree
that not everybody is lucky enough to have as many servers as Google has
in its basement :)


It is definitely much faster in terms of pulling up folders that have 
hundreds of results in them.  I don't see that the client interaction with 
the LDAP server and the feeding back of results to the master slamd is any 
faster. ;)  The instability issues with the clients that I've encountered 
from the start seem to remain, as well.  Periodically one will just lock 
up, and I'll have to restart an optimization job over, which can cost me 
many hours of time. :/


--Quanah


--
Quanah Gibson-Mount
Senior Systems Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: Benchmarks feedback

2007-03-27 Thread Quanah Gibson-Mount
--On Tuesday, March 27, 2007 11:29 PM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:



Unfortunately, one that is not proprietary is of course desired.


This is why slamd seems a good tool. If its performance is bad, then it's
up to the users to improve it. Version 2.0.0 seems promising, with
another backend (BDB JE)


I use 2.0 checked out of CVS. ;)  And I've got plenty of network bandwidth 
at the moment (1GB between all systems).  It certainly can do what is 
necessary as long as I give it enough machines to do the work.  I'd just 
prefer to use fewer than I do now. ;)


--Quanah

--
Quanah Gibson-Mount
Senior Systems Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: Benchmarks feedback

2007-03-27 Thread Quanah Gibson-Mount
--On Tuesday, March 27, 2007 4:04 PM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:





On 3/27/07, Quanah Gibson-Mount <[EMAIL PROTECTED]> wrote:

Java based load generators are just fine.  I guess this is a tactic to
toot the OpenLDAP (C) horn
and bash Java while doing it: subtle but apparent.  Even Java clients are
not good
enough I guess :/.


No, that wasn't the intent.  The fact is, a single C based client can 
generate 25 times more authentications/second than a single slamd client. 
So, that would mean I could use 75% fewer clients if I could use C, to 
generate a similar load.



The whole point to load generators is to increase the number of them as
needed to
blast the server with as much load to saturate it.  If one load injector
is not sufficient then
increase the number.  SLAMD does the job nicely.


Well, actually, it is a little more than that.  The more clients you put on 
a single system, the worse things get.  I currently use 5 systems with 4 
clients each, for a total of 20 clients.  Fewer servers with more clients 
performs worse.



One of the advantages to SLAMD is its simplicity and the fact that it is
a platform.  It's easy to write
quick tests using Java and load them.  If you want to write a C based
platform that's just as easy to use
to do the same thing then be my guest.  However I don't think it's going
to go that far.


I would hope it would be built with some type of test template engine.  In 
any case, certainly none has been written so far. ;)  I'm just always on 
the lookout for something else.


--Quanah

--
Quanah Gibson-Mount
Senior Systems Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: Benchmarks feedback

2007-03-27 Thread Quanah Gibson-Mount



--On Tuesday, March 27, 2007 12:05 PM -0400 Matt Hogstrom 
<[EMAIL PROTECTED]> wrote:



I haven't used SLAMD.  I have a C based load generator that is
proprietary.  If it scales that would be awesome.


Is it a C based load generator that can distribute across clients, similar 
to slamd?  Or is it more like directoryMark?  I'm curious, because the 
problem we've run into in benchmarking OpenLDAP with slamd, is that slamd 
simply can't keep up when one has a highly tuned performant server. :/ 
There's been some interest in writing (or finding) an alternate LDAP 
testing engine that is C based.  Unfortunately, one that is not proprietary 
is of course desired.



--Quanah

--
Quanah Gibson-Mount
Senior Systems Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: [Kerberos] Kerberos + OpenLDAP

2007-03-06 Thread Quanah Gibson-Mount



--On Tuesday, March 06, 2007 10:43 AM -0500 Jeffrey Hutzelman 
<[EMAIL PROTECTED]> wrote:





On Thursday, March 01, 2007 03:22:55 PM -0800 Enrique Rodriguez
<[EMAIL PROTECTED]> wrote:


On 3/1/07, Sam Hartman <[EMAIL PROTECTED]> wrote:

1) I'd really like to see interested individuals work on the LDAP schema
in the IETF. The effort has floundered for lack of people driving it.

2) I'd really love to see an ldap plugin that used some schema and
   called kadm5_* interfaces--I.E. a way to replace kadmind with
   openldap even in situations where the ldap kdb layer was not used.


1)  A standardized LDAP schema would be great and I'm sure we (Apache
Directory) would support it.  In the mean time we'll make our best
effort to reuse any existing schema rather than draft something new.

2)  I would personally participate in a standardization effort.  Is
anyone interested and who is also attending the Prague meeting?
(Prague Czech Republic - 68th IETF Meeting (March 18 - 23, 2007))


I'm glad to hear there are people actively interested in an effort to
produce a standardized LDAP schema for Kerberos.  As Sam noted, this has
been on the wish list for some time, but has received little attention
due to lack of interested parties with enough time.

I suggest that interested parties subscribe to the Kerberos working group
mailing list (ietf-krb-wg@anl.gov), and bring up this issue there.  If
there is enough interest in the working group to sustain this work, we
can consider adopting it as a work item.


<http://www3.ietf.org/proceedings/05nov/krb-wg.html>

has the instructions for subscribing.

--Quanah



--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: [SASL] SASL questions

2007-03-03 Thread Quanah Gibson-Mount



--On Friday, March 02, 2007 1:57 PM -0800 Enrique Rodriguez 
<[EMAIL PROTECTED]> wrote:



On 3/1/07, Quanah Gibson-Mount <[EMAIL PROTECTED]> wrote:

...
ldap:///cn=Webauth,cn=Applications,dc=stanford,dc=edu??sub?krb5Principal
Name=webauth/[EMAIL PROTECTED] sasl-regexp
uid=(.*),cn=stanford.edu,cn=gssapi,cn=auth
ldap:///uid=$1,cn=Accounts,dc=stanford,dc=edu??sub?suSeasStatus=active

In particular, if you look at the last one, this is dealing with
Accounts. Rather than looking at their Kerberos krb5Name at all, I do a
direct mapping if they have an active "full" account.  All users have
kerberos principals, but not all users have "full" accounts.  So in the
case that they don't have "full" accounts, I don't want them to just
automatically be able to search the directory with an authenticated view.


Thanks, this is really great feedback.

Can you clarify "full" user vs. "non-full"?  Is the deal here that a
non-full user is one who is coming in from outside Stanford via
Shibboleth credentials, and who is then mapped to Kerberos credentials
for access to intra-enterprise services?


Well, no, we support Shibboleth separately from full vs base (non-full). 
It has more to do with say, contractors or other outside people who may 
need access to specific resources.


<http://sunetid.stanford.edu/>

has a chart part way down describing the differences between base & full.

<http://www.stanford.edu/dept/as/mais/applications/sunetid/index.html>

has more charts, and a broader description of base vs full.



My only question here is if this is a reference to the strength of the
connection, but I'm guessing it isn't.  One of the things OpenLDAP lets
me do is enforce encryption strength of connections.  For example, in my
ACL files, I have:
...
which means the SASL SSF must be at least strength 56.  Java and other
applications will by default connect via SASL/GSSAPI with *no* encryption
(yuck!).


JNDI uses an env property Context.SECURITY_AUTHENTICATION.  The
javadoc states this can be "none", "simple", or "strong".  In the
current BindHandler, this property is set to "simple" which then
AFAICK maps to the SimpleAuthenticator.  Likely this is just another
mismatch between the client-side JNDI API and our needs as a server.


Yeah, we have a number of Java apps using SASL/GSSAPI.  It was through them 
that we discovered Java's default behavior. ;)


One of my co-workers actually has a document on accessing LDAP with 
SASL/GSSAPI up:


<http://www.stanford.edu/~kam/ldap.html>



I'll note as a separate feature the need to specify the connection
strength for access control and open a JIRA issue.



Sounds good. :)

--Quanah

--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: [SASL] SASL questions

2007-03-01 Thread Quanah Gibson-Mount



--On Thursday, March 01, 2007 4:24 PM -0800 Enrique Rodriguez 
<[EMAIL PROTECTED]> wrote:



Hi, Directory developers,

I have time this weekend so I'm looking at adding SASL/GSSAPI/Kerberos
V5 to LDAP binds.  After reading some RFCs and ApacheDS internals, I
have a couple questions:

1)  The Authenticator.authenticate() method requires an LdapDN.
GSSAPI returns a Kerberos principal name.  What's the best way to map
this to a DN?  We could use a regex, like OpenLDAP, but since we have
access to the Kerberos attributes, we can also search directly for the
principal name by specifying a baseDN.  This means an extra lookup,
but it may mean easier config.  Do we want to require that the
principal name map to a DN with a regex?

For example:

GSSAPI returns:  [EMAIL PROTECTED]
Desired DN:  uid=hnelson,ou=users,dc=example,dc=com

With OpenLDAP you specify mappings using the format:

uid=<username>,cn=<realm>,cn=<mechanism>,cn=auth

A resulting regex for our typical example LDIF would be:

sasl-regexp
  uid=(.*),cn=example.com,cn=gssapi,cn=auth
  uid=$1,ou=users,dc=example,dc=com

The alternative would be to specify a baseDN, like we do for other
lookups.  We then search for the principal name and use the found DN.
Our configuration could be:

gssapiBaseDn = ou=users,dc=example,dc=com


My only comment here is that in my environment, I have more than just users 
that use Kerberos to bind to the server.  For example, I have cgi, service, 
webauth, and ldap principals.  They are all in their own trees, like:


cn=web,cn=service,cn=applications,dc=stanford,dc=edu
cn=www,cn=webauth,cn=applications,dc=stanford,dc=edu
cn=quanah,cn=cgi,cn=applications,dc=stanford,dc=edu

etc.

So I have multiple regex's in my slapd.conf:

sasl-regexp uid=(.*)/cgi,cn=stanford.edu,cn=gssapi,cn=auth 
ldap:///cn=cgi,cn=applications,dc=stanford,dc=edu??sub?krb5PrincipalName=$1/[EMAIL PROTECTED]
sasl-regexp uid=service/(.*),cn=stanford.edu,cn=gssapi,cn=auth 
ldap:///cn=Service,cn=Applications,dc=stanford,dc=edu??sub?krb5PrincipalName=service/[EMAIL PROTECTED]
sasl-regexp uid=webauth/(.*),cn=stanford.edu,cn=gssapi,cn=auth 
ldap:///cn=Webauth,cn=Applications,dc=stanford,dc=edu??sub?krb5PrincipalName=webauth/[EMAIL PROTECTED]
sasl-regexp uid=(.*),cn=stanford.edu,cn=gssapi,cn=auth 
ldap:///uid=$1,cn=Accounts,dc=stanford,dc=edu??sub?suSeasStatus=active



In particular, if you look at the last one, this is dealing with Accounts. 
Rather than looking at their Kerberos krb5Name at all, I do a direct 
mapping if they have an active "full" account.  All users have kerberos 
principals, but not all users have "full" accounts.  So in the case that 
they don't have "full" accounts, I don't want them to just automatically be 
able to search the directory with an authenticated view.
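The first-match behavior of these rewrite rules can be sketched as follows. This is a toy model of OpenLDAP's sasl-regexp mechanism, not its actual engine: the patterns use a hypothetical example.com realm and a made-up `accountStatus` attribute rather than the real Stanford values, and the first pattern that matches the SASL identity supplies the search URI.

```python
import re

# Ordered (pattern, replacement) pairs, tried first-match-wins,
# loosely modeled on the slapd.conf sasl-regexp lines above.
SASL_REGEXPS = [
    (r"uid=(.*)/cgi,cn=example.com,cn=gssapi,cn=auth",
     r"ldap:///cn=cgi,cn=applications,dc=example,dc=com"
     r"??sub?krb5PrincipalName=\1/cgi@EXAMPLE.COM"),
    (r"uid=(.*),cn=example.com,cn=gssapi,cn=auth",
     r"ldap:///uid=\1,cn=Accounts,dc=example,dc=com"
     r"??sub?accountStatus=active"),
]

def map_sasl_identity(authcid):
    """Rewrite a SASL authorization identity into a search URI."""
    for pattern, template in SASL_REGEXPS:
        m = re.fullmatch(pattern, authcid)
        if m:
            return m.expand(template)
    return None   # no rule matched: the mapping (and bind) fails

print(map_sasl_identity("uid=quanah/cgi,cn=example.com,cn=gssapi,cn=auth"))
print(map_sasl_identity("uid=quanah,cn=example.com,cn=gssapi,cn=auth"))
```

Because the rules are ordered, the more specific `/cgi` pattern must come before the catch-all account pattern, mirroring how the slapd.conf entries are arranged.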



2)  Any opinion on the 'authenticatorType' to use?  Doco seems to
indicate that the choices are "none," "simple," and "strong."
However, it might be better (ie more modular) to have an authenticator
for each SASL type, eg "sasl-gssapi" and "sasl-digest-md5."  Even with
2 SASL mechanisms supported we could be looking at one large
Authenticator.  Would that be a pain for embedders, in which case we
could use "strong" and have a separate env property if we decide to
have multiple authenticators?


My only question here is if this is a reference to the strength of the 
connection, but I'm guessing it isn't.  One of the things OpenLDAP lets me 
do is enforce encryption strength of connections.  For example, in my ACL 
files, I have:


   by dn.base="cn=lsdb,cn=Service,cn=Applications,dc=stanford,dc=edu" 
sasl_ssf=56 read



which means the SASL SSF must be at least strength 56.  Java and other 
applications will by default connect via SASL/GSSAPI with *no* encryption 
(yuck!).
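The SSF-gated ACL clause quoted above can be modeled in a few lines. This is an illustrative sketch, not OpenLDAP's access-control engine: access is granted only when the bind DN matches and the negotiated SASL security strength factor meets the minimum.

```python
# Toy model of an ACL clause like:
#   by dn.base="cn=lsdb,cn=Service,..." sasl_ssf=56 read
MIN_SSF = 56
ALLOWED_DN = "cn=lsdb,cn=Service,cn=Applications,dc=stanford,dc=edu"

def access_granted(bind_dn, sasl_ssf):
    """Read access requires both the exact DN and sufficient SSF."""
    return bind_dn == ALLOWED_DN and sasl_ssf >= MIN_SSF

print(access_granted(ALLOWED_DN, 128))  # encrypted connection
print(access_granted(ALLOWED_DN, 0))    # unencrypted: denied
```

The second call is the Java default-behavior trap described above: a correct DN binding over an unencrypted SASL/GSSAPI connection (SSF 0) still gets no access.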



3)  I'm planning on adding GSSAPI.  What other SASL types are actually
used?


SASL/EXTERNAL is used a lot (Cert authentication)
SASL/DIGEST-MD5 is used a lot

--Quanah


--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: [Kerberos] Kerberos + OpenLDAP

2007-03-01 Thread Quanah Gibson-Mount



--On Thursday, March 01, 2007 12:09 AM -0600 [EMAIL PROTECTED] wrote:


On Feb 28,  1:21pm, "Apache Directory Developers List" wrote:
} Subject: Re: [Kerberos] Kerberos + OpenLDAP

Good evening to everyone.


--On Tuesday, February 27, 2007 6:34 PM -0800 Enrique Rodriguez
<[EMAIL PROTECTED]> wrote:

> Use 'ldap' for LDAP:
> krb5PrincipalName: ldap/[EMAIL PROTECTED]



Although this is the attribute I use for my OpenLDAP directories, I
will note that this attribute is not part of any RFC standard.
In fact, there is no RFC standardized way of storing Kerberos
principals in a directory that I'm aware of.  I raised this issue to
MIT and Heimdal once, and apparently they are "working" on
something.  But that was several years ago.


The situation may have effectively changed now.

I'm polishing off the details of a kadmin back-end for OpenLDAP.  The
goal of this work is to be able to manage an MIT KDC implementation by
running an OpenLDAP server rather than kadmind on the KDC.  Putting
this into effective use requires some thought on how to develop an LDAP
based abstraction for a KDC entry.

I looked at a number of schema representations.  Its not an RFC but
the most logical abstraction to use seemed to be the schema which
Novell developed for the LDAP back-end to MIT Kerberos.  The 1.6
sources have the schema in the following location:

krb5-1.6/src/plugins/kdb/ldap/libkdb_ldap/kerberos.schema

I believe some effort was placed into coordinating schema details
between Novell, SUN, MIT and Heimdal if I'm not mistaken.


Greg,

Thanks for the update.  It would be nice to see such a schema RFC tracked 
so that it gets included by default with various LDAP providers.


--Quanah

--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: [Kerberos] Kerberos + OpenLDAP

2007-02-28 Thread Quanah Gibson-Mount



--On Tuesday, February 27, 2007 6:34 PM -0800 Enrique Rodriguez 
<[EMAIL PROTECTED]> wrote:



On 2/27/07, Mark Wilcox <[EMAIL PROTECTED]> wrote:

I have a quick question. Did you use the example Kerberos entries that
come with ApacheDS or are there example entries posted elsewhere?

I didn't see them on the Wiki docs.


No, I haven't posted them yet.  This is pretty alpha, which is why I
put them in the sandbox.  I'm not sure which example Kerberos entries
you're referring to, but IIRC the example we ship has entries for
similar services, like krbtgt, changepw, and ssh.  Below is a quick
entry for an LDAP server.  You need an LDAP service principal, krbtgt
entry, and at least one user principal to make this work.  The key
thing is the format of the LDAP service principal name:

Use 'ldap' for LDAP:
krb5PrincipalName: ldap/[EMAIL PROTECTED]


Although this is the attribute I use for my OpenLDAP directories, I will 
note that this attribute is not part of any RFC standard.  In fact, 
there is no RFC standardized way of storing Kerberos principals in a 
directory that I'm aware of.  I raised this issue to MIT and Heimdal once, 
and apparently they are "working" on something.  But that was several years 
ago.  I certainly would ensure that this not be a hard-coded method of 
making SASL/GSSAPI work.  The sasl-regexp bits from OpenLDAP are pretty 
handy in this area, you may wish to review them if you haven't yet.


--Quanah


--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


RE: BigIP sends resets during ldapsearch over ssl

2007-01-18 Thread Quanah Gibson-Mount



--On Thursday, January 18, 2007 2:16 PM -0500 "Hayes, Cindy" 
<[EMAIL PROTECTED]> wrote:




BigIP is a load balancer.  I am not using Apache Directory Server, but am
using openldap.  Maybe I am in the wrong place :-(



ldap@umich.edu is for general LDAP related queries.

openldap-software@openldap.org is for OpenLDAP specific queries.

--Quanah

--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: Memory leak?

2007-01-09 Thread Quanah Gibson-Mount



--On Wednesday, January 10, 2007 1:05 PM +0800 wolverine my 
<[EMAIL PROTECTED]> wrote:




On 1/9/07, Emmanuel Lecharny <[EMAIL PROTECTED]> wrote:

Hi,

those slides are damn old now... We have fixed something like 50 bugs
since (maybe more), we have increased the performance of the server
threefold (900 req/s instead of 250 req/s) and we have also made some kind
of load test (the server has been running for 72 h being requested at
the maximum request rate: 900/s, more than 200,000,000 requests,
without a problem - except that my CPU was damn hot ! :)

1.0 has been released
(http://directory.apache.org/subprojects/apacheds/releases.html),
it has been certified by OpenGroup (
http://www.opengroup.org/openbrand/register/brand3527.htm),
so I guess we are now on a much better shape than in june, 2006...

Give 1.0 a try, but you also might wait for 1.0.1, expected to be out
within the next couple of weeks, because some very serious bugs will be
fixed in this coming version.


So if someone wanted to benchmark ApacheDS, you would suggest waiting for 
the 1.0.1 release?


--Quanah


--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html


Re: Open architecture identity and authorization efforts.

2006-11-29 Thread Quanah Gibson-Mount



--On Wednesday, November 29, 2006 9:32 AM -0500 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Unlike the MIT Kerberos + OpenLDAP solution which involves two separate
moving parts, an ApacheDS solution would be integrated into a single
process and embeddable.  These factors would allow the uptake of IDfusion
into several application servers and products on the market in addition
to a stand alone offering.


What MIT Kerberos + OpenLDAP solution?  One can currently use Heimdal 
Kerberos with OpenLDAP as its backend data store, but that is still 
something under development with MIT last I checked. ;)



--Quanah





Re: Yet another LDAP GUI tool

2006-09-25 Thread Quanah Gibson-Mount



--On Monday, September 25, 2006 10:57 PM +0200 Stefan Zoerner 
<[EMAIL PROTECTED]> wrote:



Hi all

For those of you interested in a free LDAP GUI tool, here is an option I
did not know of until yesterday.

http://ldapadmin.sourceforge.net/

I have only checked whether it is possible to connect to ApacheDS 1.0
RC4 (it is), and whether I am able to edit entries (I am). A nice feature
is the schema browser.

Disadvantage: Windows only.


You might like JXplorer better; it runs on multiple platforms (Java).

--Quanah




Re: [ApacheDS] [Performance] Using indices to boost search performance

2006-09-02 Thread Quanah Gibson-Mount



--On Saturday, September 02, 2006 2:05 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Hi all,
--
Data Setup
--

Wrote a simple tool to generate random values for decent-sized entries.
  The data sort of looks like this for a user entry:


SLAMD has a really nice tool called "MakeLDIF" that'll do this for you, 
too.  Plus, the way you made your LDIF is then publishable to others who may 
also want to run the same benchmark, since all you have to do is provide 
the template for MakeLDIF to use. ;)
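A minimal stand-in for the kind of template-driven entry generation being discussed (the suffix and attribute set below are illustrative, not taken from the thread; a real MakeLDIF template offers far richer substitution):

```python
import random

FIRST = ["alice", "bob", "carol", "dave"]
LAST = ["smith", "jones", "lee", "garcia"]

def make_ldif(n, suffix="dc=example,dc=com"):
    """Generate n simple inetOrgPerson entries as LDIF text."""
    entries = []
    for i in range(n):
        first, last = random.choice(FIRST), random.choice(LAST)
        uid = f"user.{i}"
        entries.append("\n".join([
            f"dn: uid={uid},ou=users,{suffix}",
            "objectClass: inetOrgPerson",
            f"uid: {uid}",
            f"givenName: {first}",
            f"sn: {last}",
            f"cn: {first} {last}",
            f"mail: {uid}@example.com",
        ]))
    # Entries in an LDIF file are separated by blank lines.
    return "\n\n".join(entries) + "\n"

print(make_ldif(2))
```

The point of publishing the template rather than the LDIF itself is that anyone can regenerate an equivalent data set of any size.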


--Quanah




Re: Binding with anything but a dn?

2006-08-04 Thread Quanah Gibson-Mount



--On Friday, August 04, 2006 11:24 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:




You mean the principal's entry may be in some place in the directory
which does not follow the domain to DN mapping I guess.  Like ...

[EMAIL PROTECTED] really being in uid=jhenne,ou=users,dc=apache,dc=org

Yes, yes, this is a reasonable conclusion.

I was thinking about the following

algorithm:
1. map the specified name to a base dn, like in your example. This might
be up to a specialized authentication module.
2. search this base dn for matching users
3. bind using this user's DN.

You are right, of course, that the search still needs to be carried out.
However, we're saving a network round-trip.


We might want to look at making a custom SASL mechanism to do this as
well.


You may want to look at how the "authz-regexp" pieces work in OpenLDAP, 
which allow you to map SASL identities to entries in the server based on 
regular expressions.


For example, in my system, I have:

authz-regexp uid=(.*)/cgi,cn=stanford.edu,cn=gssapi,cn=auth 
ldap:///cn=cgi,cn=applications,dc=stanford,dc=edu??sub?krb5PrincipalName=$1/[EMAIL PROTECTED]
authz-regexp uid=service/(.*),cn=stanford.edu,cn=gssapi,cn=auth 
ldap:///cn=Service,cn=Applications,dc=stanford,dc=edu??sub?krb5PrincipalName=service/[EMAIL PROTECTED]
authz-regexp uid=webauth/(.*),cn=stanford.edu,cn=gssapi,cn=auth 
ldap:///cn=Webauth,cn=Applications,dc=stanford,dc=edu??sub?krb5PrincipalName=webauth/[EMAIL PROTECTED]
authz-regexp uid=(.*),cn=stanford.edu,cn=gssapi,cn=auth 
ldap:///uid=$1,cn=Accounts,dc=stanford,dc=edu??sub?suSeasStatus=active



Stanford uses SASL/GSSAPI as a binding mechanism, and this allows me to map 
different types of Kerberos identities into different parts of the tree.
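The rewriting that authz-regexp performs can be sketched as ordinary first-match regex substitution. The rules below are simplified from the two shown above (the Kerberos realm was redacted in the original, so these templates are partly hypothetical):

```python
import re

# (pattern, template) pairs modeled on the authz-regexp rules above;
# the first matching rule wins, as in OpenLDAP.
RULES = [
    (r"uid=(.*)/cgi,cn=stanford\.edu,cn=gssapi,cn=auth",
     r"ldap:///cn=cgi,cn=applications,dc=stanford,dc=edu"
     r"??sub?krb5PrincipalName=\1/cgi"),
    (r"uid=(.*),cn=stanford\.edu,cn=gssapi,cn=auth",
     r"ldap:///uid=\1,cn=Accounts,dc=stanford,dc=edu??sub?suSeasStatus=active"),
]

def map_sasl_identity(authcid):
    """Return the LDAP search URL a SASL identity rewrites to, or None."""
    for pattern, template in RULES:
        m = re.fullmatch(pattern, authcid)
        if m:
            return m.expand(template)
    return None

print(map_sasl_identity("uid=quanah,cn=stanford.edu,cn=gssapi,cn=auth"))
```

The resulting LDAP URL is then used by the server to search for the entry the authenticated identity maps to.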


--Quanah



Re: Surprise from SUN: OpenDS

2006-08-01 Thread Quanah Gibson-Mount



--On Tuesday, August 01, 2006 3:21 PM -0700 Quanah Gibson-Mount 
<[EMAIL PROTECTED]> wrote:





--On Wednesday, August 02, 2006 1:16 AM +0300 Ersin Er
<[EMAIL PROTECTED]> wrote:


https://opends.dev.java.net/


And this would be better than FDS how?! :P


Oh, this is all in Java, not based off their old Netscape branch... 
Interesting.


More about it here:

<http://blogs.sun.com/roller/page/DirectoryManager/>

--Quanah



Re: Surprise from SUN: OpenDS

2006-08-01 Thread Quanah Gibson-Mount



--On Wednesday, August 02, 2006 1:16 AM +0300 Ersin Er <[EMAIL PROTECTED]> 
wrote:



https://opends.dev.java.net/


And this would be better than FDS how?! :P

--Quanah



Re: [ApacheDS] New slamd lab

2006-07-29 Thread Quanah Gibson-Mount



--On Saturday, July 29, 2006 9:19 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



RedHat has expressed interest in setting up a benchmark lab w/ Symas for
doing fair comparisons of FDS & OpenLDAP.  Maybe we could all work
together towards that end.


Yes that would be nice.  I'm interested in doing this.


Cool, I'll see what I can do to start a conversation going along that end.

BTW, Sun did publish some benchmarks of their own on how SunOne performs, 
and we used those to show how CDS compared to their product, and there was 
nothing they could say about that. ;)



<http://www.symas.com/benchmark-auth.shtml> has a link to the Sun 
benchmarks.


--Quanah




Re: [ApacheDS] New slamd lab

2006-07-28 Thread Quanah Gibson-Mount



--On Friday, July 28, 2006 10:15 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Timothy Bennett wrote:

On 7/16/06, *Alex Karasulu* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:


I had a few machines at home that I brushed off and setup a lab with
slamd for stress testing and profiling the server.


I recently had the privilege of checking out the SLAMD lab for ApacheDS
in person.  I felt like I walked into the engine room on the Enterprise.
;)

Alex needs to work on his Scottish brogue to make it an immersion
experience.  :D



Aye I'm giving her all she's got.

BTW it looks like SUN is unhappy though.  They just contacted me and told
me to cease and desist from using SUN DS for these tests.  They were not
happy with the results it seems.


Yeah, when I benchmarked Sun DS for Symas, we got the same response from 
them.  If you read their obnoxious license closely, it clearly states that 
you are not allowed to publish benchmarks of their product.  Same with 
eDirectory, IIRC.


RedHat has expressed interest in setting up a benchmark lab w/ Symas for 
doing fair comparisons of FDS & OpenLDAP.  Maybe we could all work together 
towards that end.


--Quanah



Re: [ApacheDS] Performance testing

2006-07-01 Thread Quanah Gibson-Mount



--On Thursday, June 29, 2006 4:13 PM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:




Hi Quanah,

...


Comments? Thoughts?



I'd note that SLAMD already does a lot of what you are discussing, is
open source, and is written in Java.


Yes it does! You're right.  I still do have to get acquainted with it.

But note that this is just part of the problem for us.  Setting up the
test harness with comparable conditions is perhaps the most manual and
tedious part, besides watching the tests chug along.  So I think we
should, as you suggest, look into the right tool for SLAM'ing the server :).

Perhaps we can break it apart and build a client-side Maven mojo for it.
  SLAMD I'm sure has some configuration to it as well.



Yeah, it uses Tomcat in particular for the server, which means some 
configuration around that, in addition to the other bits it requires.




What it doesn't do is load the

LDIF into the server.  However, these are the tools that it does have:

(a) Ability to use very sophisticated templates to generate LDIF files
for loading into directory servers.
(b) Distributed clients -- You generally cannot get an idea of the
performance of an LDAP server from a single client
(c) Resource monitoring -- This monitors CPU usage, Disk usage, SWAP,
and Memory, maybe other things as well
(d) A number of pre-defined benchmarking tests as well as the ability to
create your own
(e) Report generation, including PDF and HTML



Very nice.  I'll have to tinker a bit.

Thanks for your recommendations.


Sounds good...  When you get to the OpenLDAP bits of cross-testing, I'd be 
happy to contribute any knowledge that might be useful there as well. ;)


--Quanah




Re: [ApacheDS] Performance testing

2006-06-28 Thread Quanah Gibson-Mount



--On Wednesday, June 28, 2006 12:23 PM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:




Need Benchmarking/Profiling Perf Suite (BPPS) for ApacheDS
==

I just sat down in front of my machine and wanted to run some
performance tests on the server.  These questions came up:

o What tests do I run?
o What LDIF should I import first before running those tests?
o Which version of the server should I run these tests against?
o Can I run these tests against other LDAP servers?

I did the first thing anyone would do.  I tapped Emmanuel on the
shoulder to ask him for his materials for his AC EU presentation.  I did
not want to repeat the work that he had already done.

Please Emmanuel take no offense but I found the setup and repeated work
to be a bit of a hassle.  I'm sure you were bothered by doing things
manually yourself.  Plus I wanted to profile these tests too inside
Eclipse using Yourkit.  Anyway I came to a final conclusion:

*Conclusion*: We need a repeatable benchmarking/profiling performance
test suite for ApacheDS that can be run easily.


Requirements for BPPS
=

Here's what I started asking myself internally.  Please add to this
list if you can think of other requirements.

(1a) Need repeatable performance tests with setup prep and tear down
(1b) Tests should be able to load an initial data set (LDIF) into server
(2) I should be able to use Maven or ant to kick off these tests
(3) Tests should produce some kind of report
(4) Tests should easily be pointed to benchmark other servers
(5) Make it easy to create a new performance test.
(6) I want a summary of the conditions in the test report, including
the setup parameters for:
o operations performed
o capacity
o concurrency
o hardware
o operating system


Existing work and potential approaches
==

I figured using JUnit was the best way to test ApacheDS or any other
server.  Plus I could setUp and tearDown test cases.  The only thing I
needed to do was make a base test case or two for the various ApacheDS
configurations (embedded testing versus full networked testing).

The first base test case, for embedded testing, was setup here:

http://svn.apache.org/viewvc/directory/trunks/apacheds/core-unit/src/main
/java/org/apache/directory/server/core/unit/AbstractPerformanceTest.java?
revision=414035&view=markup

Yeah it's weak and I'll try to add to it.  What I would like to do is
invite people to work with me on setting up this
benchmarking/profiling/perf testing framework.

Comments? Thoughts?
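The shape of such a harness (requirements 1a, 1b, and 3 above) can be sketched in a few lines. This is a toy illustration only; the real base class was the Java AbstractPerformanceTest linked above, and the names below are made up:

```python
import time

class PerfTest:
    """Toy benchmark harness: repeatable setup, a timed run,
    teardown, and a small machine-readable report."""

    def set_up(self):
        # Stand-in for loading an initial data set (LDIF) into the server.
        self.data = list(range(1000))

    def operation(self):
        # Stand-in for one LDAP operation (e.g. a search).
        return sum(self.data)

    def tear_down(self):
        self.data = None

    def run(self, iterations=100):
        self.set_up()
        start = time.perf_counter()
        for _ in range(iterations):
            self.operation()
        elapsed = time.perf_counter() - start
        self.tear_down()
        return {"iterations": iterations,
                "seconds": elapsed,
                "ops_per_sec": iterations / elapsed}

report = PerfTest().run()
print(f"{report['iterations']} ops in {report['seconds']:.4f}s "
      f"({report['ops_per_sec']:.0f} ops/s)")
```

The report dictionary is the piece that would feed requirement (6): dump it alongside the operation mix, capacity, concurrency, and hardware details.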



I'd note that SLAMD already does a lot of what you are discussing, is open 
source, and is written in Java.  What it doesn't do is load the LDIF into 
the server.  However, these are the tools that it does have:


(a) Ability to use very sophisticated templates to generate LDIF files for 
loading into directory servers.
(b) Distributed clients -- You generally cannot get an idea of the 
performance of an LDAP server from a single client
(c) Resource monitoring -- This monitors CPU usage, Disk usage, SWAP, and 
Memory, maybe other things as well
(d) A number of pre-defined benchmarking tests as well as the ability to 
create your own
(e) Report generation, including PDF and HTML


--Quanah



Re: [PERFORMANCE] Disabling Nagle Algorithm improves performance

2006-06-19 Thread Quanah Gibson-Mount



--On Monday, June 19, 2006 7:09 PM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:



Whaouh !!!

Thanks a lot guys ! I gonna test that as soon as possible, and will
keep you informed of the results.


This is why using a product like SLAMD for benchmarking LDAP servers is the 
way to go.  It uses multiple distributed clients to get an idea of a 
server's performance.


--Quanah



Re: Various questions

2006-06-06 Thread Quanah Gibson-Mount



--On Tuesday, June 06, 2006 9:33 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Quanah Gibson-Mount wrote:




--On Tuesday, June 06, 2006 12:50 AM -0400 "Noel J. Bergman"
<[EMAIL PROTECTED]> wrote:


Quanah Gibson-Mount wrote:


I think the concept of applying all indexing to attributes is in itself
broken.



So is your suggestion that the option be made available, but that by
indexing selectively, Alex's concerns can be effectively addressed?  Do
you have any suggestions as to how that might be provided without losing
ease-of-use for the most common cases?



Well, in OpenLDAP, the way ease of use is met is by users being able
to define a default index type or types.  That way, they can specify
the default set, and then just use index , similar to what
is being done in Apache DS.

I think it is important to allow specification of what indices to use
for a given attribute for a few reasons.  One, that you can use it to
actually make some searches slow enough to hinder efforts (like we
have a spam troller routinely trying to get data from our sources that
is fairly obnoxious), another is that the more indices you have on an
attribute, the larger the total database is, and the longer it takes
to load.  This of course depends on part in the OS/Cpu used as well.
For example, I currently index 90 attributes in my database to varying
degrees (most are eq, which is a fairly minimal index).


Note that ApacheDS indices are very similar to OpenLDAP equality indices
with some minor differences for handling substring matching.  The cost
is about the same as an eq index.  So you get sub, eq, and existence for
the price of eq.



Ah, okay.  That is handy.


On my Solaris sparc systems, it takes 2.5ish hours to load the database.



What kind of sparc is that?  I have a blade 2000 here if you want to try
a current machine benchmark.  I could create an account for you to
test.  Just let me know.



My current systems are SunFire V120's, with a 650 MHz CPU and 4GB of RAM. 
I've played with more modern systems (8-CPU T2000's with 4 cores), and they 
are definitely faster, but the same underlying issue of needing to use a 
memory cache during bulk loads still applies; that may be an 
OpenLDAP/BDB-specific thing that ApacheDS wouldn't encounter, though.




On my new AMD systems that'll be replacing the Sun Sparc boxes, it
takes all of 14.5 minutes.  However, if all 90 of those attributes
were getting indexed pres,eq,sub, the amount of time to load would
increase significantly.

Currently, my indices take up 1.1GB of disk space in OpenLDAP (I'm not
sure how that exactly map out in Apache DS).  My database entry file
takes 2.7GB.  So my indices are approximately 1/3 of my database size.


Yeah the cost of disk space is just about the same but that's the least
of our worries.  Disk is cheap as Emmanuel stated.


Nods, memory was more my concern, but it may not apply in the ApacheDS 
case, given the use of a different database backend and the difference in 
how indices are done.


--Quanah



Re: Various questions

2006-06-06 Thread Quanah Gibson-Mount



--On Tuesday, June 06, 2006 9:37 AM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:



Hi guys !

Quanah Gibson-Mount a écrit :


I think it is important to allow specification of what indices to use
for a given attribute for a few reasons.  One, that you can use it to
actually make some searches slow enough to hinder efforts (like we
have a spam troller routinely trying to get data from our sources that
is fairly obnoxious),


In my mind, it's pretty much a security issue. You can add
authentication to avoid such behavior, or, if your data are public, then
you have no reason to slow down the searches. Limiting the number of
results may be more efficient. Btw, this is a real problem for a server,
and something we should consider: how to avoid DoS attacks on an LDAP
server (either by flooding, with malformed requests, or with huge data).
We still have to address those attacks. At this point, I have a question:
is it common for an LDAP server to be exposed outside a company?
Generally speaking, I have never seen that. User data are really supposed
to be private and not accessible to unidentified users. I may be totally
wrong, but if I saw an LDAP server exposed to the world (I haven't seen
that for years), the first thing I would ask the admins is to close the
door to their system. Just my opinion.



Well, I see LDAP servers expose data to the world all the time.  Pretty 
much any university I send random queries to does so.  At Stanford, we allow 
users to set the "visibility" of their data, with 3 settings:


"world" -- Available to anyone, including anonymous
"stanford" -- Available only to those people who have authenticated as 
being from Stanford
"private" -- Not visible to anyone by normal means (specific applications 
get by this)



Since there is a fair amount of data then available to anyone who wants to 
run a query because of policy, I do try my best to do due diligence and cut 
down on spam harvesting runs.  We do have a result limit on the server, but 
the people I've run across are savvy enough to use batched queries over 
different ranges to effectively get around it, at least in part.
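The harvesting trick being described works like this in miniature (directory contents and the size limit below are made up for illustration):

```python
# A server-side size limit is defeated by slicing the query space into
# narrower ranges and querying each slice separately.
SIZE_LIMIT = 3
DIRECTORY = sorted(f"user{i}" for i in range(10))   # user0 .. user9

def search(prefix):
    """Simulate a search that the server truncates at SIZE_LIMIT hits."""
    hits = [uid for uid in DIRECTORY if uid.startswith(prefix)]
    return hits[:SIZE_LIMIT]

def harvest():
    """Batch narrower queries (user0, user1, ...) to recover every entry."""
    results = []
    for digit in "0123456789":
        results.extend(search(f"user{digit}"))
    return results

print(len(search("user")), len(harvest()))   # 3 10
```

One broad query is truncated at the limit, but the batched narrow queries together recover the whole data set, which is why a size limit alone is not a sufficient defense.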


People also like to be able to use their email clients to get information 
from the directory servers, and very few of them (only one that I've found) 
support SASL/GSSAPI binds, which is the only authentication method we allow 
(no username/password).




another is that the more indices you have on an attribute, the larger
the total database is, and the longer it takes to load.  This of
course depends on part in the OS/Cpu used as well.  For example, I
currently index 90 attributes in my database to varying degrees (most
are eq, which is a fairly minimal index).  On my Solaris sparc
systems, it takes 2.5ish hours to load the database.  On my new AMD
systems that'll be replacing the Sun Sparc boxes, it takes all of 14.5
minutes.  However, if all 90 of those attributes were getting indexed
pres,eq,sub, the amount of time to load would increase significantly.


Well, in production, loading a server is not something you do very
often. You may need to restore a crashed database, or reload a database
whose structure has changed, but this is definitely not a real concern.
Load once, use many.



I think that's a good thought in theory, and is what I thought too. 
However, I run 4 environments (dev, test, uat, and production).  We have a 
custom schema that we modify a few times a year, and those modifications 
are usually large enough to warrant a complete reload of the data that is 
generated from our RDBMS for the ldap servers.  As a part of that process, 
dev may be reloaded several times as bugs are fixed, etc, and the same goes 
for test.  So I actually reload my servers a bit. ;)




Currently, my indices take up 1.1GB of disk space in OpenLDAP (I'm not
sure how that exactly map out in Apache DS).  My database entry file
takes 2.7GB.  So my indices are approximately 1/3 of my database size.


3GB is really nothing. A 15K RPM SCSI disk is now 36GB minimum and costs
around $200. Not a big deal. Better to spend money on memory sticks rather
than on high-performance disks :)

I don't want to say that making it possible to select indices is *bad*,
but, IMHO, this may be a cool feature that is a little bit of overkill
when you balance it against real usage. For real RDBMSes, having twice the
size on disk for indices is considered plain normal. I don't think we
should go that far, but when you choose to set indices on an attribute,
it may not be very important to offer a choice of which kinds of indices
you want.


Yeah, my concerns here may be more specific to OpenLDAP and the use of BDB. 
When bulk loading, it is quickest to have as much BDB cache as the entire 
size of your database (3.8GB in the case above).  On Solaris SPARC, I found 
that the only good way to get performance was to use a shared memory region 
(Lin

Re: Bash script [was Re: Various questions]

2006-06-06 Thread Quanah Gibson-Mount



--On Tuesday, June 06, 2006 9:48 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Emmanuel Lecharny wrote:


Thanks a lot Quanah !


Yes thanks.  Just as a heads up though you might want to file a JIRA
issue and attach a patch next time.

This way we can definitely remember to add it to the next release if you
schedule the fix for it when creating the JIRA.   Sometimes emails go by
and we may miss something.



Alex,

I'll be sure and use JIRA next time for anything I come up with. ;)

--Quanah



RE: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Tuesday, June 06, 2006 12:50 AM -0400 "Noel J. Bergman" 
<[EMAIL PROTECTED]> wrote:



Quanah Gibson-Mount wrote:


I think the concept of applying all indexing to attributes is in itself
broken.


So is your suggestion that the option be made available, but that by
indexing selectively, Alex's concerns can be effectively addressed?  Do
you have any suggestions as to how that might be provided without losing
ease-of-use for the most common cases?


Well, in OpenLDAP, the way ease of use is met is by users being able to 
define a default index type or types.  That way, they can specify the 
default set, and then just use index , similar to what is being 
done in Apache DS.
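In slapd.conf terms, the default-plus-selective scheme looks like this (an illustrative fragment; the attribute choices are made up, not Stanford's actual configuration):

```
# A default index type, then selective per-attribute indexing.
index default           eq
index objectClass                 # uses the default (eq)
index uid,mail          eq
index cn,sn,givenName   eq,sub
```

Attributes listed without an explicit type pick up the `index default` setting, which is the ease-of-use mechanism described above.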


I think it is important to allow specification of what indices to use for a 
given attribute for a few reasons.  One, that you can use it to actually 
make some searches slow enough to hinder efforts (like we have a spam 
troller routinely trying to get data from our sources that is fairly 
obnoxious), another is that the more indices you have on an attribute, the 
larger the total database is, and the longer it takes to load.  This of 
course depends on part in the OS/Cpu used as well.  For example, I 
currently index 90 attributes in my database to varying degrees (most are 
eq, which is a fairly minimal index).  On my Solaris sparc systems, it 
takes 2.5ish hours to load the database.  On my new AMD systems that'll be 
replacing the Sun Sparc boxes, it takes all of 14.5 minutes.  However, if 
all 90 of those attributes were getting indexed pres,eq,sub, the amount of 
time to load would increase significantly.


Currently, my indices take up 1.1GB of disk space in OpenLDAP (I'm not sure 
how that exactly map out in Apache DS).  My database entry file takes 
2.7GB.  So my indices are approximately 1/3 of my database size.


--Quanah



Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Tuesday, June 06, 2006 1:15 AM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:

Good call on the time, it was 54 minutes via ldapadd. :)

I gave it 2GB of memory to play with, that may have been overkill, but
it worked.


Any idea about the real memory used? (jconsole) But I don't want to push
you on this; don't lose 54 minutes again :)

What scared me a lot is that if we need to push ADS to the limit, I know
a company which had 70 million entries in its LDAP server. So if it scales
linearly (very unlikely, and if it does, I guess there is something
wrong in the B-tree impl :), that would mean 12 days of processing on
ADS. Oh my!


Yeah, in my work for Symas, I've worked with companies who use databases in 
excess of 100 million entries.  Scaling loads really becomes a factor, which 
is why I was curious about the offline loading capabilities.  It led Symas 
to work on ways to optimize bulk imports so that they are a lot more 
efficient than a normal ldapadd (contributed back into OL, of course).



Btw, we are currently trying to close an RC4, as I stated recently in one
of my previous posts, so don't spend too much time playing with RC3; it's
really not usable. Just enjoy the ease of installation it offers, thanks
to Alex's work.



Sure thing, the install was quite easy.

I made a change to the startup init script that may be useful to add to 
the release:


-bash-3.00$ diff -u /usr/local/apacheds-1.0-RC3/bin/server.init apacheds
--- /usr/local/apacheds-1.0-RC3/bin/server.init Sun Jun  4 20:38:28 2006
+++ apacheds    Mon Jun  5 16:14:20 2006
@@ -126,6 +126,8 @@
TMP_DIR=$SERVER_HOME/var/tmp
PID_FILE=$SERVER_HOME/var/run/server.pid

+MEM_OPS="-Xms256m -Xmx2048m"
+
cd $SERVER_HOME

case "$1" in
@@ -141,6 +143,7 @@
-user $APACHEDS_USER \
-home $JAVA_HOME \
-Djava.io.tmpdir=$TMP_DIR \
+$MEM_OPS \
-Dlog4j.configuration=file://$SERVER_HOME/conf/log4j.properties \
-pidfile $PID_FILE \
-outfile $SERVER_HOME/var/log/apacheds-stdout.log \


It makes it a lot easier to control the memory options.



1 AM CEST, time to dream about the fastest LDAP server on earth... (maybe
OpenLDAP for tonight :)



Have a good night's sleep.  I look forward to benchmarking 1.0RC4.


--Quanah



Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Tuesday, June 06, 2006 1:03 AM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:



Quanah Gibson-Mount a écrit :




--On Monday, June 05, 2006 12:03 AM -0700 Quanah Gibson-Mount
<[EMAIL PROTECTED]> wrote:


I tried loading a 250,000 (very small) database into Apache DS tonight,
and it died horribly somewhere between 10k and 11k entries.



Emmanuel was able to help me resolve this, and I successfully loaded
the 250k entries into the Apache DS after increasing the java memory
allocation.



Cool! We never went farther than 100k. I guess it was horribly long
(around 1 hour...). We still have a concern about memory. We may have to
check that we don't create an infinite number of threads on the network
side. I think it's limited (and, yes, sorry, no parameter for it right now).

Jeez... We are far from being production ready :)

Thanks for the heads up, man!


Good call on the time, it was 54 minutes via ldapadd. :)

I gave it 2GB of memory to play with, that may have been overkill, but it 
worked.
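Rough arithmetic on that load (illustrative only, and assuming the linear scaling the thread itself calls unlikely): 250,000 entries in 54 minutes works out to roughly 77 entries/s, which puts a 70-million-entry load on the order of the multi-day estimate mentioned elsewhere in the thread.

```python
entries = 250_000
minutes = 54
rate = entries / (minutes * 60)    # ~77 entries/s via ldapadd
big_db = 70_000_000                # the "70 million entries" case
days = big_db / rate / 86_400      # naive linear extrapolation
print(f"{rate:.0f} entries/s -> ~{days:.1f} days for 70M entries")
```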


--Quanah




Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Monday, June 05, 2006 12:03 AM -0700 Quanah Gibson-Mount 
<[EMAIL PROTECTED]> wrote:



I tried loading a 250,000 (very small) database into Apache DS tonight,
and it died horribly somewhere between 10k and 11k entries.


Emmanuel was able to help me resolve this, and I successfully loaded the 
250k entries into the Apache DS after increasing the java memory allocation.


--Quanah




Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Monday, June 05, 2006 7:54 PM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Quanah,

Well I'm glad someone from Symas Corp.  is on this list.  However you're
just trying to make ApacheDS look bad while promoting OpenLDAP.  That's
not too sincere when we're trying to answer your questions to the best
of our ability.



Actually, I'm not trying to make Apache look bad at all, and although I do 
work part time for Symas, that is not my primary occupation.



We're not here to compete.  We appreciate OpenLDAP and the work done
there.


Did I say it was?


Now your less than friendly tone makes total sense.


If you think my observations on indexing are related to the above, you are 
quite mistaken.  As noted, my observations come from 7 years of running a 
directory service.  If you aren't interested in advice from someone who 
actually deals with directory services in a large scale production 
environment every day, fine, ignore them.  But that's certainly not a good 
way to go to improve the product.


--Quanah



Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Monday, June 05, 2006 5:25 PM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Quanah Gibson-Mount wrote:




--On Monday, June 05, 2006 2:54 PM -0400 Alex Karasulu
<[EMAIL PROTECTED]> wrote:


I assume it should also handle approx, which is not the same as
substring.



No as I mentioned before ApacheDS does not do approximate matching and
so it does not have an option to create approx indices.



Right, my point was, I'd assume that should be added, so that it could
be supported


I don't find the approx matching algorithms based on Soundex etc. to be
all that useful.  Plus the indices get bloated and the server's write
performance diminishes much faster.  Approx indices must generate all
the variants of a word using these algorithms, which can be numerous.  Every
add, delete, or modify operation then must regenerate these Soundex
derivatives for the old value as well as the new values in the modify op.
Keep in mind also that some attributes will be multivalued, so the explosion
can be quite large.

IMO approx match is one of those things that was a good idea but is not
critical or used all that much.  If we find the time or if you're
interested you can implement this feature.  For now no index is created
for approx matching.



I think the concept of applying all indexing to attributes is in itself 
broken.  As someone who has been running Stanford's directory service for 7 
years, we have reasons as to why we index particular attributes the way we 
do.  It is sometimes done in part to limit the feasibility of certain 
searches (leaving substr off of some attributes, for example).


In addition, soundex is quite useful for white page lookups, when someone 
knows a last name by sound, but not spelling.



In any case, the choice is obviously yours, but I think the thinking so far 
is flawed.



--Quanah




Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Monday, June 05, 2006 2:54 PM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



I assume it should also handle approx, which is not the same as
substring.


No as I mentioned before ApacheDS does not do approximate matching and
so it does not have an option to create approx indices.


Right, my point was, I'd assume that should be added, so that it could be 
supported



--Quanah




Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Monday, June 05, 2006 7:06 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Emmanuel Lecharny wrote:



Hmmm, I have to check. I think it's equality. Presence is not
implemented atm, AFAIK


My bad I never documented this anywhere.

An index on an attribute handles equality, and presence.  Should also
handle substring matching but not as aggressively as would a substring
index in OpenLDAP.


I assume it should also handle approx, which is not the same as substring.

--Quanah




Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Monday, June 05, 2006 7:02 AM -0400 Alex Karasulu 
<[EMAIL PROTECTED]> wrote:



Quanah Gibson-Mount wrote:


Hi,

I'm looking at apache DS as compared to OpenLDAP, and have an initial
set of questions:

(a) is it possible to bulk load an LDIF file while the server is
offline? If yes, how?


No, there is no way to do this presently.  What benefit do you get from
offline loads?


At least with OpenLDAP, doing an offline load with slapadd is substantially 
faster than doing an ldapadd if you want to load a large database.
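The two load paths Quanah refers to, sketched with OpenLDAP's own tools (filenames and the bind DN are illustrative):

```sh
# Offline bulk load: run with slapd stopped; writes the backend files directly
slapadd -l bulk.ldif

# Online load: each entry goes through a full LDAP add operation
ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f bulk.ldif
```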



(b) If I choose to load an LDIF file via the server.xml file, is the
beginning and end of the load logged, so that a timing mark can be
drawn from it?


The file is marked as loaded.  Any changes to the file will not
propagate into the DIT.  To force the server to reload the LDIF file,
you just need to remove the entry for the loaded file under ou=loadedLdifs
(or something like that) under ou=configuration,ou=system.


I don't think you understand my question.  I want to know how long it takes 
Apache DS to load an LDIF file.




(c) How does one tune the underlying database?


Silly, but all parameters are hardcoded right now.  We will expose cache
parameters soon.



(d) Is there any way to tune an entry cache, etc, for Apache DS?


We need to expose this as well.



(e) When one specifies an index for an attribute, I see no way to
differentiate between equality, substring, presence, etc.  If an
attribute is indexed, is it then simply indexed for all possible
search methods?


That is correct.  ApacheDS does not support approximate matching; it
defaults to an equality match, FYI.


Approximate matching is not the same thing as substring indexing.
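The distinction in search-filter terms (RFC 4515 syntax; the attribute name is illustrative):

```
(sn~=smith)   approximate: phonetic/fuzzy, could match "Smyth"
(sn=smi*)     substring: literal prefix, will not match "Smyth"
(sn=*)        presence
(sn=Smith)    equality
```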



(f) Are there any other general guidelines for tuning Apache DS?



Really, many tunable parameters have not been exposed.  We'll try to
get some of these parameters out and into the hands of users in the next
release candidate.


Okay.

--Quanah





Re: Various questions

2006-06-05 Thread Quanah Gibson-Mount



--On Monday, June 05, 2006 8:49 AM +0200 Emmanuel Lecharny 
<[EMAIL PROTECTED]> wrote:



Quanah Gibson-Mount a écrit :
Hi !


Hi,

I'm looking at apache DS as compared to OpenLDAP, and have an initial
set of questions:

(a) is it possible to bulk load an LDIF file while the server is
offline? If yes, how?


Well, no.


(b) If I choose to load an LDIF file via the server.xml file, is the
beginning and end of the load logged, so that a timing mark can be
drawn from it?


It's better to do it another way. If you wait until 1.0-RC4, which will be
out around this week or next, you will be able to use the
apacheds-tools to import LDIF files into ADS. FYI, we have done some load
tests lately, and what I can say is that loading 10,000 entries into
OpenLDAP takes 9 min, versus 1 min 40 sec on ADS.


I'm curious how you are setting up OpenLDAP, because I can load a 
400,000-entry LDIF with heavy (90+ attributes) indexing in 14 minutes and 
30 seconds.  My guess is you don't understand how to configure OpenLDAP.


I tried loading a 250,000-entry (very small) database into Apache DS tonight, 
and it died horribly somewhere between 10k and 11k entries.


ldapadd logged the following error:

adding new entry uid=user.11084,ou=People,dc=example,dc=com
ldap_add: Loop detected
ldap_add: additional info: failed to add entry 
uid=user.11084,ou=People,dc=example,dc=com





(c) How does one tune the underlying database?


No way to do it right now.


(d) Is there any way to tune an entry cache, etc, for Apache DS?


It's an improvement listed in JIRA: currently, the cache size is fixed
(bad!), and we want to change that.


(e) When one specifies an index for an attribute, I see no way to
differentiate between equality, substring, presence, etc.  If an
attribute is indexed, is it then simply indexed for all possible
search methods?


Hmmm, I have to check. I think it's equality. Presence is not implemented
atm, AFAIK



What about substring?



(f) Are there any other general guidelines for tuning Apache DS?


Not too much. We are not currently working much on adding tuning
tools to the server, because we are working hard to fix some
functionality issues, and also some really serious performance problems
that kill the server when you have more than, say, 100 entries ;(

What I may suggest is that you file JIRA issues for those points you
think are lacking in ADS, so that we are able to fix those points
and not forget them (for instance, your index question is important). We
would be very pleased to hear about any bug, any missing functionality,
any performance comparison you can do; this will help us to build a
better server. And if you have time, well, we also need some help
improving it, its docs, its test cases, and so on :)



I'll note that in my performance test with OpenLDAP, I can achieve nearly 
14,000 authentications/second.  There are two searches per authentication, 
so nearly 28,000 searches/second.  The highest number I've seen quoted on 
this list is 780 or so searches/second.  Is there anyone with better 
results?


--Quanah





Various questions

2006-06-04 Thread Quanah Gibson-Mount

Hi,

I'm looking at apache DS as compared to OpenLDAP, and have an initial set 
of questions:


(a) is it possible to bulk load an LDIF file while the server is offline? 
If yes, how?
(b) If I choose to load an LDIF file via the server.xml file, is the 
beginning and end of the load logged, so that a timing mark can be drawn 
from it?

(c) How does one tune the underlying database?
(d) Is there any way to tune an entry cache, etc, for Apache DS?
(e) When one specifies an index for an attribute, I see no way to 
differentiate between equality, substring, presence, etc.  If an attribute 
is indexed, is it then simply indexed for all possible search methods?

(f) Are there any other general guidelines for tuning Apache DS?

Thanks,
Quanah
