Hi,
I just realized that with the ClonedServerEntry we are now using
internally, point #2 can be solved in a matter of seconds: we no
longer need to look up the entry, as we already have the original one
with all its operational attributes!
So I just commented out the useless lookup and ran a performance test
with the new version:
4686 req/s!
This is close to what we have in the trunk.
Let's move on to the 5 other improvements now :)
Emmanuel Lecharny wrote:
Hi,
I did a debugging session today in order to analyze where the server
was doing useless operations. Here is what I found:
1) The ExceptionInterceptor search method has this problematic portion
of code:
EntryFilteringCursor cursor = nextInterceptor.search( opContext );

if ( !cursor.next() )
The next() call will run the filters, leading to a call to the
CollectiveAttribute accept() method, which does a lookup. This lookup
is obviously overkill (see point 2), but the problem is that when the
NamingEnumerationAdaptor is created in the ServerLdapContext, the
filters are run a second time.
This generates a double call to all the filters, plus a double lookup
on the backend, so in the end we have cloned the entry three times:
once in the original prefetch, and twice in the CollectiveAttribute
accept() method.
If we can kill this duplication of effort, the speedup will be
enormous (I think we can even be faster than the trunk).
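One way to kill the double filter pass is to memoize each filter verdict per entry, so that whichever layer asks first pays the cost and the second pass (the NamingEnumerationAdaptor) reuses the result. This is only a minimal sketch with hypothetical names (FilterOnceCursor, accept(), the Predicate-based filters all stand in for the real EntryFilteringCursor machinery):

```java
import java.util.IdentityHashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

/**
 * Sketch: a cursor wrapper that evaluates its filter chain at most once
 * per entry instance and caches the verdict, so a second consumer of the
 * same cursor cannot trigger a second filter pass.
 */
class FilterOnceCursor<E> implements Iterator<E> {
    private final Iterator<E> source;
    private final List<Predicate<E>> filters;
    // Verdicts keyed by entry identity: one filter evaluation per entry.
    private final Map<E, Boolean> verdicts = new IdentityHashMap<>();
    private E pending;

    FilterOnceCursor( Iterator<E> source, List<Predicate<E>> filters ) {
        this.source = source;
        this.filters = filters;
    }

    /** Returns the cached verdict, computing the filters only on first ask. */
    boolean accept( E entry ) {
        return verdicts.computeIfAbsent( entry,
            e -> filters.stream().allMatch( f -> f.test( e ) ) );
    }

    public boolean hasNext() {
        while ( pending == null && source.hasNext() ) {
            E candidate = source.next();
            if ( accept( candidate ) ) {
                pending = candidate;
            }
        }
        return pending != null;
    }

    public E next() {
        if ( !hasNext() ) {
            throw new NoSuchElementException();
        }
        E result = pending;
        pending = null;
        return result;
    }
}
```

A second accept() call on the same entry instance returns the cached verdict without re-running the filters, which is exactly what we want when the adaptor re-checks entries the interceptor already filtered.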
2) The lookup in the CollectiveAttributeInterceptor's filter is
useless. It's done in the following method :
private void addCollectiveAttributes( LdapDN normName, ServerEntry entry, String[] retAttrs ) throws Exception
The idea is to get the 'collectiveAttributeSubentries' attribute type,
which is not requested by default. This is done through a second fetch
of the whole entry. We could probably just request this attributeType
directly (and I think it's already the case, as we get all the ATs
from the entry, operational attributes included, and strip the
non-requested attributes from the cloned entry). So avoiding such a
lookup should be possible.
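Since the cloned entry already carries its operational attributes, the fix amounts to reading the attribute off the entry in hand instead of re-fetching it from the backend. A minimal sketch under that assumption (the Map stands in for a ServerEntry, and lookup() for the backend fetch we avoid; all names here are hypothetical):

```java
import java.util.List;
import java.util.Map;

/**
 * Sketch: read 'collectiveAttributeSubentries' directly from the entry
 * we were handed, which already contains its operational attributes,
 * instead of performing a second backend fetch of the whole entry.
 */
class CollectiveAttributeSketch {
    static int backendLookups = 0;

    // Stand-in for the backend fetch of the whole entry (the call we avoid).
    static Map<String, List<String>> lookup( String dn ) {
        backendLookups++;
        return Map.of();
    }

    // No lookup: the operational attribute is already on the entry.
    static List<String> subentries( Map<String, List<String>> entry ) {
        return entry.getOrDefault( "collectiveAttributeSubentries", List.of() );
    }
}
```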
3) In the DefaultSearchEngine.cursor() method, we call
JdbmStore.getEntryId( base.toString() ). Here is the code of this
method :
public Long getEntryId( String dn ) throws Exception
{
    return ndnIdx.forwardLookup( dn );
}
and the JdbmIndex.forwardLookup code :
public Long forwardLookup( K attrVal ) throws Exception
{
    return forward.get( getNormalized( attrVal ) );
}
The getNormalized() method will look in a cache for a normalized form
of the DN (but as this is a generic index, K can of course be
something other than an LdapDN). Anyway, if it's a DN, it's already
normalized, so this call is costly and useless. We can check whether
the index is the ndn index and, in that case, simply pass the String
through without checking the cache.
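The check could look like the following. This is only a sketch with hypothetical names (the isNdnIndex flag and the toLowerCase placeholder are assumptions, not the real JdbmIndex code or real DN normalization):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

/**
 * Sketch: a forward lookup that skips normalization (and its cache probe)
 * when the index is known to hold already-normalized DNs as keys.
 */
class NdnIndexSketch {
    static final Map<String, Long> forward = new HashMap<>();
    static boolean isNdnIndex = true;  // assumption: the ndn index can be flagged
    static int normalizeCalls = 0;     // instrumentation for this sketch only

    static String getNormalized( String value ) {
        normalizeCalls++;
        return value.toLowerCase( Locale.ROOT ); // placeholder for real normalization
    }

    static Long forwardLookup( String dn ) {
        // The ndn index stores normalized DNs, so the incoming key can be
        // used as-is; only generic indices need the normalization step.
        String key = isNdnIndex ? dn : getNormalized( dn );
        return forward.get( key );
    }
}
```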
4) In the same DefaultSearchEngine.cursor() method, we do a lookup on
the alias index :
String aliasedBase = db.getAliasIndex().reverseLookup( baseId );
Aliases could be held in a cache (we won't have thousands of aliases)
in order to avoid a lookup. (Not sure this is a great hack...)
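The cache idea could be sketched as below, with the reverse lookup only paid on a miss. The names are hypothetical, and invalidation on alias add/delete is deliberately left out of the sketch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Sketch: cache aliased bases keyed by entry id so repeated searches
 * under the same base avoid the alias index lookup. Assumes the alias
 * set is small and changes rarely (the cache would need invalidation
 * on alias modifications, omitted here).
 */
class AliasCacheSketch {
    static final Map<Long, String> cache = new HashMap<>();
    static int indexLookups = 0; // instrumentation for this sketch only

    static String aliasedBase( Long baseId, Function<Long, String> reverseLookup ) {
        return cache.computeIfAbsent( baseId, id -> {
            indexLookups++;              // only paid on a cache miss
            return reverseLookup.apply( id );
        } );
    }
}
```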
5) The ChangeLog, Event and Trigger interceptors should be bypassed.
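The bypass could be as simple as carrying a set of interceptor names on the operation and skipping them during chain traversal. A generic sketch, not the actual ApacheDS interceptor chain API (all names below are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/**
 * Sketch: chain traversal that consults a bypass set, so operations
 * that don't need the ChangeLog, Event and Trigger interceptors skip
 * them entirely instead of paying for a no-op pass through each one.
 */
class BypassSketch {
    static final List<String> invoked = new ArrayList<>();

    static void invokeChain( List<String> chain, Set<String> bypass ) {
        for ( String interceptor : chain ) {
            if ( bypass.contains( interceptor ) ) {
                continue;               // skipped entirely, no call made
            }
            invoked.add( interceptor ); // stand-in for the real invocation
        }
    }
}
```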
6) The EvaluatorBuilder uses Node attributes as Strings, which leads
to many accesses to the OidRegistry to check that those Strings are
valid AttributeTypes. As the Nodes are built earlier in the process, I
would suggest we store AttributeTypes instead of Strings in the Nodes,
to avoid many lookups in the Registries.
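The idea is to resolve the String once at node construction and let every later evaluation use the stored type. A minimal sketch with stand-in names (resolve() fakes the registry, EqualityNode is not the real filter node class):

```java
/**
 * Sketch: resolve the attribute String to its AttributeType once, when
 * the Node is built, instead of consulting the registry on every
 * evaluation of the filter tree.
 */
class AttributeTypeNodeSketch {
    static int registryLookups = 0; // instrumentation for this sketch only

    // Stand-in for the OidRegistry / AttributeTypeRegistry resolution.
    static String resolve( String attr ) {
        registryLookups++;
        return "oid-of-" + attr;    // placeholder for the resolved AttributeType
    }

    static final class EqualityNode {
        final String attributeType; // resolved once, stored in the Node
        final String value;

        EqualityNode( String attr, String value ) {
            this.attributeType = resolve( attr ); // single registry access
            this.value = value;
        }

        boolean evaluate( String candidateOid, String candidateValue ) {
            // No registry access here: the Node already holds the type.
            return attributeType.equals( candidateOid )
                && value.equals( candidateValue );
        }
    }
}
```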
That's it for now, but there are enough places to work on atm to get
this branch fast!
--
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org