There can be a number of reasons for this:
a) Your Squid really is running out of file descriptors. This will happen if you throw a couple of thousand users at a Squid cache without tuning the number of file descriptors (see FAQ).
b) Your Squid is overloaded, or more likely your cache_dir is. With the default "ufs" cache_dir type, speed is very much limited by the speed of your hard drive, and when that limit is reached performance quickly spirals down. Use the "aufs" or "diskd" cache_dir types, and design the hardware correctly for the load you are planning; see the sketch below.
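For illustration, a minimal squid.conf cache_dir line using aufs (the path, cache size, and directory counts here are assumptions; adjust them for your own disk layout):

    # format: cache_dir aufs <directory> <size-MB> <L1-dirs> <L2-dirs>
    # aufs performs disk I/O in separate threads, so a busy disk does
    # not block the main Squid process the way plain ufs does
    cache_dir aufs /var/spool/squid 10000 16 256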
Regards Henrik
I have read the FAQ.

[EMAIL PROTECTED] squid]# ulimit -n
1024
[EMAIL PROTECTED] root]# cat /proc/sys/fs/file-max
104854
[EMAIL PROTECTED] root]# ulimit -HSn 100000
[EMAIL PROTECTED] root]# ulimit -n
100000
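One caveat worth noting (an assumption about this setup, since the init script isn't shown): ulimit only affects the current shell and its children, so a Squid started from an init script will not inherit the raised limit unless the script itself raises it, e.g. something like:

    # near the top of /etc/init.d/squid, before squid is started
    ulimit -HSn 100000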
Then I edit /etc/squid/squid.conf and change max_open_disk_fds:

max_open_disk_fds 100000
[EMAIL PROTECTED] root]# /etc/init.d/squid restart
[EMAIL PROTECTED] root]# cat /var/log/squid/cache.log | grep descriptors
2004/10/14 16:00:18| With 1024 file descriptors
I understand that Squid needs to be recompiled. Could someone help me with the parameters for recompiling Squid? I'm trying to provide a transparent cache for a lot of users.
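For what it's worth, a hedged sketch of the rebuild, assuming a Squid 2.5-era source tree (the 8192 value and the install prefix are illustrative, and the --with-maxfd flag is an assumption; check ./configure --help for your version, since some builds only honor the ulimit in effect when configure runs):

    # raise the shell's limit BEFORE running configure, so the
    # build detects and compiles in the larger value
    ulimit -HSn 8192
    ./configure --prefix=/usr/local/squid --with-maxfd=8192
    make
    make install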
In the meantime, I reduced the number of users connected to Squid, and the "WARNING! Your cache is running out of filedescriptors" message doesn't appear anymore... but I still have users who don't get a response from the transparent cache.
Thanks, Alejandro