> Those of you with giant client caches, I'm curious why you are doing this.
> My own working set seems to be around 50-100 MB, and that's with almost
> everything in afs (/usr/X11, /usr/local, ...).
ftp.stacken.kth.se is an old, tired ProLiant (2xPII) with Solaris 9 and vsftpd. As it serves files over a 100 Mbit link, it should spend its whole bandwidth on outgoing packets and not on refreshing its cache with content it already had. So the cache has to be bigger than the working set, which is a couple of PlaneShift EXEs and the Free- and OpenBSD dists.

ketchup# /usr/afsws/bin/fs getcac
AFS using 18941965 of the cache's available 20000000 1K byte blocks.

I know cache thrashing occurs with 4 GB, so 20 GB might be a bit much, but better safe than the other way round. The initial cache setup did take rather long, though (~7 hours). With some inventive software striping across controllers I got the old SCSI disks up to 14 MB/s. This IS hand-me-down hardware, after all.

My parameters are:

afsd -dcache 100000 -stat 150000 -daemons 27 -volumes 250 -afsdb -files 2000000

If you are curious: http://www.stacken.kth.se/~haba/ketchupstats/Report.html

Another server that sees heavy cache use is www.stacken.kth.se, which runs Arla under FreeBSD. It hosts 39 virtual HTTP servers for different (mostly student) organizations at KTH. Each organization manages its files in an AFS volume, some of them even mounted from other cells.

igloo# /usr/arla/bin/fs getca
Arla is using 4132973 of the cache's available 5242880 1K byte blocks
(and 114951 of the cache's available 350000 vnodes)
igloo#

Arla tries here to keep the cache usage below 4194304 KB and below 300000 vnodes. That seems to work fine.

Earlier we tried running both www and ftp on the same machine with a 5 GB cache. Because of the different usage profiles (ftp = bigger files, www = more files) they did not like to share a cache, and we had endless cache thrashing.

Today, big AFS caches are not for the OS - that I distribute to local disk with themis (a replacement for the "package" program) - but for distributing user data for export over different protocols etc.

At PDC I have researchers with 10 GB+ working sets which I'd like to get into a 20 GB swappable memcache. To accomplish this under Linux, I'd have to malloc the chunks in afsd and then hand the memory to the kernel, instead of allocating it with vmalloc in the kernel. If you have already done this, please let me know.

Another experiment, with the file cache on ext2 in a big file in tmpfs, worked but was much slower than memcache even for locally cached data (which should only have been in memory anyway).

That kind of sums up today's news on caches.

Harald.

PS: You probably won't get any more emails from me for the next couple of days, as I will disappear into the archipelago. Not with exactly that boat and crew, but you get the picture: http://www.pdc.kth.se/~haba/segling2003/
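
Below is a minimal sketch of the userspace half of the malloc-in-afsd idea above, assuming Linux. Only the mmap() call reflects anything real: the /dev/afs_cache device, the AFS_SET_USERCACHE ioctl and the cache_handoff struct are hypothetical illustrations, not part of OpenAFS or Arla, and the kernel side would still have to pin or fault in the pages (for example with get_user_pages()) as it uses them. The point of an anonymous MAP_NORESERVE mapping is that the cache stays ordinary pageable user memory, so a 20 GB memcache can swap instead of being wired the way a vmalloc'd kernel buffer is.

/*
 * Hypothetical userspace handoff of a swappable memcache to the kernel.
 * Build: cc -o usercache usercache.c
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* Hypothetical argument block describing the user-allocated cache. */
struct cache_handoff {
    void   *base;        /* start of the cache region in afsd's address space */
    size_t  length;      /* total size in bytes */
    size_t  chunk_size;  /* AFS chunk size, e.g. 64 KB */
};

/* Hypothetical ioctl number; not a real OpenAFS or Arla interface. */
#define AFS_SET_USERCACHE _IOW('A', 0x42, struct cache_handoff)

int main(void)
{
    size_t len   = (size_t)20 << 30;   /* 20 GB memcache */
    size_t chunk = 64 * 1024;

    /* Anonymous MAP_NORESERVE mapping: ordinary pageable memory that the
     * VM may swap out, unlike a vmalloc'd buffer inside the kernel. */
    void *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    struct cache_handoff ch = { base, len, chunk };

    /* Hand the region to the (hypothetical) cache manager device.  A real
     * kernel side would get_user_pages() or fault these pages in as the
     * cache chunks are actually touched. */
    int fd = open("/dev/afs_cache", O_RDWR);
    if (fd < 0 || ioctl(fd, AFS_SET_USERCACHE, &ch) < 0)
        fprintf(stderr, "cache handoff not available on this system\n");
    if (fd >= 0)
        close(fd);

    munmap(base, len);
    return 0;
}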
