chas williams - CONTRACTOR wrote:
> In message <[EMAIL PROTECTED]>, Jim Rees writes:
> 
>>I'm inclined to commit the new code and let Niklas or others work on making
>>it better for large caches later.  Comments?
> 
> 
> How about choosing sqrt(cachesize/1024) as the "average file size"?
> 
>       cachesize       avg file size (K)       #files
> 
>       150M            ~12                      12500
>       350M            ~18                      19444
>       1G              ~32                      32768
>       10G             ~98                     102040
>       20G             ~143                    146653
> 
> I chose sqrt() for no particular reason other than that the numbers
> seem to more closely match the original sizes for smaller caches,
> while for larger caches it matches the newly suggested average of 32K.
> For huge caches it "safely" limits the amount of kernel space required.
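> 
> A minimal sketch of that heuristic in C (the function name is mine,
> not actual afsd code; cachesize is in 1K blocks, the unit afsd's
> -blocks switch uses):
> 
>     #include <math.h>
>     #include <stdio.h>
> 
>     /* Guess how many cache files to create for a cache of
>      * "cachesize" 1K blocks.  The average file size in K is
>      * sqrt(cachesize/1024), i.e. the square root of the cache
>      * size expressed in megabytes. */
>     static long
>     guess_cache_files(long cachesize)
>     {
>         double avg = sqrt((double)cachesize / 1024.0);
> 
>         if (avg < 1.0)
>             avg = 1.0;    /* tiny cache: one file per 1K block */
>         return (long)((double)cachesize / avg);
>     }
> 
>     int
>     main(void)
>     {
>         /* 1G cache: 1048576 blocks -> avg 32K -> 32768 files */
>         printf("%ld\n", guess_cache_files(1024L * 1024));
>         return 0;
>     }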

Hi,
  will it be possible to set an arbitrary value at startup?
I am currently creating caches 40GB in size, and I don't expect
more than a hundred files in them. I don't mind if bad luck strikes
and a file has to be fetched from the server again just because the
limit is too restrictive, but I don't want the kernel to waste space.
If a file is fetched "unnecessarily" maybe once in every 1000
iterations, I'm fine with that; I simply will never use even 12500 files.
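For my case something like this at startup would be enough (assuming
afsd's -blocks switch in 1K units and a -files switch for the file
count; 40 * 1024 * 1024 = 41943040 blocks):

      afsd -blocks 41943040 -files 100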
Martin