On Tue, 16 Aug 2005, chas williams - CONTRACTOR wrote:

In message <[EMAIL PROTECTED]>, Jim Rees writes:
I'm inclined to commit the new code and let Niklas or others work on making
it better for large caches later.  Comments?

how about choosing sqrt(cachesize/1024) as the "average file size"?

        cachesize       avg file size(k)        #files

        150M            ~12                     12500
        350M            ~18                     19444
        1G              ~32                     32768
        10G             ~98                     102040
        20G             ~143                    146653

i chose sqrt() for no particular reason other than that the numbers seem
to more closely match the original sizes for smaller caches, and for
larger caches it matches the newly suggested average of 32k.  for huge
caches it "safely" limits the amount of kernel space required.

Looks very good to me, since a larger cache usually implies that you have larger files in orbit that you want to cache.

I'm all for doing it this way; it's dynamic and seems to give very sane numbers across extreme variations of cache size.

/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se     |    [EMAIL PROTECTED]
---------------------------------------------------------------------------
 I still miss my ex-wife - but my aim is improving!
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel
