On Sat, Jan 10, 2009 at 6:51 AM, Valerie Aurora Henson <vaur...@redhat.com> wrote:
> > Most patches of this nature come with some description of a performance
> > problem and how the patch solves it. Which caching algorithm is
> > implemented below? There are, of course, many hashing algorithms for
> > variable length strings. Can we get some rationale for the change? I'm
> > not against it, I'd just like a *little* explanation.
>
> This is where I get to blame my cold for being stupid. :)
>
> Yes, the original author (Paul Wankadia, now cc'd) gave me a little
> information on that front, I just forgot to pass it on. It's the
> "One-at-a-Time" hash function by Bob Jenkins. See:
>
> http://burtleburtle.net/bob/hash/doobs.html
>
> What I vaguely recall from our conversation is an order of magnitude
> improvement in elapsed time for large (thousands) numbers of maps, but
> I'll let Paul be more specific. Do we have a test for thousands of
> entries somewhere in our automated tests?

It's just that the additive hash function and the tiny, non-prime hash
table size offended my delicate sensibilities. ;)

Actually, we discovered that reloading a large file map took well over
ten seconds. Profiling revealed that the daemon spent most of its time
calling strcmp(3).

Oh, and I should note that the problem was observed in autofs v4. I'll
be looking more closely at autofs v5 in the near future.
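
For anyone curious, the function in question looks roughly like this. This
is only a sketch following Bob Jenkins' published description at the URL
above; the name and signature here are illustrative, not the patch's actual
interface:

    #include <stddef.h>

    /* One-at-a-Time hash, per Bob Jenkins' write-up (sketch only). */
    static unsigned int oat_hash(const char *key, size_t len)
    {
            unsigned int hash = 0;
            size_t i;

            /* Mix each byte into the whole 32-bit state. */
            for (i = 0; i < len; i++) {
                    hash += (unsigned char) key[i];
                    hash += hash << 10;
                    hash ^= hash >> 6;
            }

            /* Final avalanche so the low bits depend on every byte. */
            hash += hash << 3;
            hash ^= hash >> 11;
            hash += hash << 15;

            return hash;
    }

The point of all the shifting and xoring is that keys sharing a long common
prefix or differing only in a byte or two still land in different buckets,
which is exactly where an additive hash feeding a tiny, non-prime table
tends to pile everything into a few long chains and spend its time in
strcmp(3).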
_______________________________________________
autofs mailing list
autofs@linux.kernel.org
http://linux.kernel.org/mailman/listinfo/autofs