> > # cat /proc/cpuinfo  | egrep -i xeon | uniq
> > model name      : Intel(R) Xeon(R) CPU           E5405  @ 2.00GHz
> > # cat /proc/cpuinfo  | egrep -i xeon | wc -l
> > 8

Is that one quad-core with hyperthreading, two quad-cores without HT, or
two dual-cores with HT? We apparently should count HT siblings as one CPU,
not two.

> >              total       used       free     shared    buffers     cached
> > Mem:         32148       2238      29910          0        244        823
> > -/+ buffers/cache:       1169      30978
> > Swap:        15264          0      15264

with 32 GB of RAM and the swap completely unused, swap is quite useless
here I'd say...

> > # fdisk -l | grep GB
> > Disk /dev/sda: 73.5 GB, 73557090304 bytes
> > Disk /dev/sdb: 300.0 GB, 300000000000 bytes
> > Disk /dev/sdc: 146.8 GB, 146815737856 bytes
> > Disk /dev/sdd: 300.0 GB, 300000000000 bytes
> > Disk /dev/sde: 300.0 GB, 300000000000 bytes

> > # uname -srm
> > Linux 2.6.27.7 x86_64

> > # cat /etc/squid/squid.conf | grep -E cache_'mem|dir'\

you apparently really wanted cache_'(mem|dir)' btw... without the
parentheses the alternation matches "cache_mem" or a plain "dir".
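i.e. something like:

  grep -E 'cache_(mem|dir)' /etc/squid/squid.conf

(no need for the cat, either)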

> > cache_mem 8192 MB
> > cache_dir aufs /var/cache/proxy/cache1 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache2 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache3 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache4 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache5 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache6 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache7 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache8 102400 16 256

> > # cat /etc/fstab  | grep proxy
> > /dev/vg00/cache  /var/cache/proxy ext3        defaults         1   2

> > Yes, I know, LVM, ext3 and aufs are bad ideas... I'm particularly
> > interested in a better cache_dir configuration (maximizing disk's usage)
> > and the correct cache_mem parameter to this hardware. (and others
> > possible/useful tips)

LVM is surely a bad idea for a proxy; I'm not so sure about ext3 (very
stable), and aufs (squid's threaded disk I/O, not the union filesystem) is
a _good_ idea.

On 17.03.09 12:58, Amos Jeffries wrote:
> You have 5 physical disks by the looks of it. Best usage of those is to
> split the cache_dir one per disk (sharing a disk leads to seek clashes).

I'd say that the 73.5 GB disk (sda) should be used only for the OS, logs
etc., leaving the other four disks for the cache, one cache_dir per disk;
see the fstab sketch below.
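Something like this in fstab (a sketch - partition names and mount points
are my assumptions):

  /dev/sdb1  /var/cache/proxy/d1  ext3  defaults,noatime  0 2
  /dev/sdc1  /var/cache/proxy/d2  ext3  defaults,noatime  0 2
  /dev/sdd1  /var/cache/proxy/d3  ext3  defaults,noatime  0 2
  /dev/sde1  /var/cache/proxy/d4  ext3  defaults,noatime  0 2

(noatime saves one metadata write per cache hit on ext3)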

> I'm not to up on the L1/L2 efficiencies, but "64 256" or higher L1 seems
> to be better for larger dir sizes.

L1 should IMHO be increased by 1 for every 65536 objects (256 L2 dirs * 256
files in each of them); with the average object size of 13 KiB (squid's
default estimate) that roughly means one L1 dir for each GB of cache_dir
size. Depending on your maximum_object_size the average size may be higher,
but that doesn't change much.
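E.g. for a 200 GiB cache_dir, with the rule above:

  200 GiB / 13 KiB   ~= 16 million objects
  16 million / 65536 ~= 246 L1 dirs -> round up to 256

(my numbers, just to illustrate; the 13 KiB average is squid's
store_avg_object_size default)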

Note that on a 300 GB HDD you will be using at most 250, more probably 200,
and some people would advise 150 GiB of cache. Leave some space for
metadata and some in reserve - filesystems may benefit from it.
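With your disks that would be something like this (a sketch; sizes in MB,
d2 being the 146 GB disk from the fstab sketch above):

  cache_dir aufs /var/cache/proxy/d1 204800 256 256
  cache_dir aufs /var/cache/proxy/d2 102400 128 256
  cache_dir aufs /var/cache/proxy/d3 204800 256 256
  cache_dir aufs /var/cache/proxy/d4 204800 256 256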

> For a quad or higher CPU machine, you may do well to have multiple Squid
> running (one per 2 CPUs or so). One squid doing the caching on the 300GB
> drives and one on the smaller ~100 GB drives (to get around a small bug
> where mismatched AUFS dirs cause starvation in small dir), peered together
> with no-proxy option to share info without duplicating cache.

Maybe even one "master" squid with a big cache_mem, accessed by clients,
having the squids with cache_dirs (and zero cache_mem) as parents, with
never_direct set to "allow all", and to "deny" only for files you surely
won't cache, e.g. the default "query" acl, if you didn't comment that out.

I'm currently not sure if we can ask the "master" squid to fetch directly
everything it surely won't cache...

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
M$ Win's are shit, do not use it !
