Looks like x86_64 requires a specific option. Maybe it'd be best to
compile it that way for all platforms.
Are there reproducible tests/benchmarks that we can use to figure out
whether or not this actually makes sense on any given architecture?
The test I used most recently was to run test008 from the OpenLDAP test suite and
average the runtimes over 5-10 runs for each BDB version. In my tests I ran
with SLAPD_DEBUG=0 so that the only I/O traffic comes from BDB itself, not from
debug logging. I also ran with an extremely small cachesize setting (5; the default
is 1000) to further aggravate lock contention in the underlying DB.
Under normal conditions (where the cache is not so ridiculously undersized for
the workload) the differences are not as apparent.
Also note that the runtimes for this test are non-deterministic, since they are
affected by deadlock retries, whose pattern is unpredictable on most systems.
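The measurement procedure above can be sketched as a small shell loop; the
"./run test008" invocation is an assumption here (adjust it to however the
test suite is driven in your tree), not the exact command used:

```shell
#!/bin/sh
# Hypothetical sketch: run the test several times with debug logging
# disabled and report the average wall-clock time.
RUNS=5
CMD=${CMD:-"./run test008"}   # assumed invocation; adjust to your tree
total=0
i=1
while [ "$i" -le "$RUNS" ]; do
    start=$(date +%s)
    # SLAPD_DEBUG=0 disables slapd debug output, so remaining I/O is BDB's
    SLAPD_DEBUG=0 sh -c "$CMD" >/dev/null 2>&1
    end=$(date +%s)
    total=$((total + end - start))
    i=$((i + 1))
done
echo "average runtime: $((total / RUNS))s over $RUNS runs"
```

Averaging several runs smooths out the deadlock-retry noise mentioned above;
for the undersized-cache case you would additionally set "cachesize 5" in the
back-bdb section of slapd.conf before running.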
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/