On Thu, Mar 18, 2010 at 6:43 AM, Jeffrey Altman
<jalt...@secure-endpoints.com> wrote:
> On 3/17/2010 7:41 PM, Derrick Brashear wrote:
>> On Wed, Mar 17, 2010 at 12:50 PM, Steve Simmons <s...@umich.edu> wrote:
>>> We've been seeing issues for a while that seem to relate to the number of 
>>> volumes in a single vice partition. The numbers and data are inexact 
>>> because there are so many damned possible parameters that affect 
>>> performance, but it appears that somewhere between 10,000 and 14,000 
>>> volumes performance falls off significantly. That 40% difference in volume 
>>> count results in a 2x to 3x performance falloff in operations that affect 
>>> the /vicep as a whole - backupsys, nightly dumps, vos listvol, etc.
>>>
>>> My initial inclination is to say it's a Linux issue with directory 
>>> searches, but before pursuing this much further I'd be interested in 
>>> hearing from anyone who's running 14,000 or more volumes in a single vicep. 
>>> No, I'm not counting .backup volumes in there, so 14,000 volumes means 
>>> 28,000 entries in the directory.
>>
>> Another possibility: there's a hash table that's taking the bulk of
>> that time, because you end up searching its chains linearly.
>
> In the 1.4 series, the volume hash table size is just 128 which
> would produce (assuming even distributions) average hash chains of
> 160 to 220 volumes per bucket given the number of volumes you
> describe.  This is quite long.
>
> In the 1.5 series, the volume hash table size defaults to 256
> which would be an average hash bucket chain length of 80 to 108.
>
> I would say that for the number of volumes you are using, you
> would want the hash table to be no smaller than 4096, which
> would bring the average hash chain length below 7 per bucket.
>
> In the 1.5 series the size of the hash table can be configured
> at run-time with the -vhashsize value.  In 1.4 you can modify
> the definition of VOLUME_HASH_TABLE_SIZE in src/vol/volume.c
> and rebuild.
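
(Sanity-checking that arithmetic, and assuming the .backup clones land in
the same table: 14,000 volumes is roughly 28,000 entries, so 28,000/128 is
about 220 per chain at the 1.4 default, about 109 with 256 buckets, and
about 7 with 4096 -- and a linear search walks half a chain on average.)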

If that's the problem, of course. Given that it's easy to check, it's
worth doing so.
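
A rough sketch of what that check could look like (the server and
partition names below are placeholders, and 4096 is just the value
Jeffrey suggests): on a 1.4 test server, bump the constant he points at
and rebuild,

    /* src/vol/volume.c -- illustrative edit; keeping it a power of two,
     * like the existing 128/256 defaults, is probably wise */
    #define VOLUME_HASH_TABLE_SIZE 4096    /* 1.4 ships with 128 */

then time something that walks the whole partition before and after, e.g.

    time vos listvol <fileserver> /vicepa

On 1.5 the same experiment shouldn't need a rebuild, just the -vhashsize
setting at server startup.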



-- 
Derrick
