RE: [squid-users] Too many open files

2013-07-28 Thread Peter Retief
 Peter:
 Do you mean you've patched the source code, and if so, how do I get 
 that patch?  Do I have to move from the stable trunk?

 Amos:
 Sorry yes that is what I meant and it can now be found here:

http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12957.patch
 It should apply on the stable 3.3 easily, although I have not tested that.
 NP: if you rebuild please go with the 3.3.8 security update release.

I have patched the file as documented and recompiled against the 3.3.8 branch.
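
For reference, applying it went roughly like this (a sketch; the -p level
depends on how the paths are rooted inside the patch, hence the dry run):

  # download the changeset and apply it to the 3.3.8 source tree
  wget http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12957.patch
  cd squid-3.3.8
  patch -p0 --dry-run < ../squid-3-12957.patch   # confirm it applies cleanly
  patch -p0 < ../squid-3-12957.patch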

 Peter:
 The first log occurrences are:
 2013/07/23 08:26:13 kid2| Attempt to open socket for EUI retrieval failed: (24) Too many open files
 2013/07/23 08:26:13 kid2| comm_open: socket failure: (24) Too many open files
 2013/07/23 08:26:13 kid2| Reserved FD adjusted from 100 to 15394 due to failures

 Amos:
 So this worker #2 got errors after reaching about 990 open FD (16K - 15394). Ouch.

 Note that all these socket opening operations are failing with the 'Too
many open files' error the OS sends back when limiting Squid to 990 or so
FD. This confirms that Squid is not mis-calculating where its limit is;
something in the OS is actually limiting the worker. The first one to hit
was a socket, but disk file accesses get the error soon after, so it is
likely the global OS limit rather than a particular FD type limit. That
990 usable FD is also suspiciously close to 1024 with a few % held spare
for emergency use (as Squid does when calculating its reservation value).

Amos, I don't understand how you deduced the 990 open FD from the error
messages above (adjusted from 100 to 15394)?  I would have deduced that
there was some internal limit of 100 (not 1000) FDs, and that squid was
re-adjusting to the maximum currently allowed (16K)?  Where is my logic
wrong, or what other information led to your conclusion?  It is important
for me to understand, as I think I have addressed the maximum file
descriptors:

/etc/security/limits.conf includes:
#   - Increase file descriptor limits for Squid
*       soft    nofile  65536
*       hard    nofile  65536
root    soft    nofile  65536
root    hard    nofile  65536

/etc/pam.d/common-session* includes:
# Squid requires this change to increase limit of file descriptors
session    required    pam_limits.so

After a reboot, if I login as root or squid, ulimit -Sn gives 65536
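
One further check (a sketch): the limit a login shell reports can differ
from what the running workers inherited, since processes started from init
scripts do not pass through PAM.  /proc/<pid>/limits shows the live value
per process:

  # sketch: confirm the limit the running squid workers actually got
  for pid in $(pgrep squid); do
      grep 'Max open files' /proc/$pid/limits
  done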

I included the following options to my squid ./configure script:
./configure  \
  --prefix=/usr \
  --localstatedir=/var \
  --libexecdir=${prefix}/lib/squid \
  --srcdir=. \
  --datadir=${prefix}/share/squid \
  --sysconfdir=/etc/squid \
  --with-default-user=proxy \
  --with-logdir=/var/log \
  --with-pidfile=/var/run/squid.pid \
  --enable-snmp \
  --enable-storeio=aufs,ufs \
  --enable-async-io \
  --with-maxfd=65536 \
  --with-filedescriptors=65536 \
  --enable-linux-netfilter \
  --enable-wccpv2
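
As a sanity check (a sketch only), the compiled-in options can be confirmed
from the resulting binary:

  # sketch: confirm the FD options were baked into the build
  squid -v | tr ' ' '\n' | grep -i filedescriptors
  # expected: --with-filedescriptors=65536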

Here is the output of mgr:info a short while after starting up again:

HTTP/1.1 200 OK
Server: squid/3.3.8
Mime-Version: 1.0
Date: Sun, 28 Jul 2013 06:16:09 GMT
Content-Type: text/plain
Expires: Sun, 28 Jul 2013 06:16:09 GMT
Last-Modified: Sun, 28 Jul 2013 06:16:09 GMT
Connection: close

Squid Object Cache: Version 3.3.8
Start Time: Sun, 28 Jul 2013 06:14:31 GMT
Current Time:   Sun, 28 Jul 2013 06:16:09 GMT
Connection information for squid:
Number of clients accessing cache:  20
Number of HTTP requests received:   1772
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   1078.8
Average ICP messages per minute since start:0.0
Select loop called: 598022 times, 1.093 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 1.6%, 60min: 1.6%
Hits as % of bytes sent:5min: 0.2%, 60min: 0.2%
Memory hits as % of hit requests:   5min: 37.0%, 60min: 37.0%
Disk hits as % of hit requests: 5min: 27.8%, 60min: 27.8%
Storage Swap size:  72074368 KB
Storage Swap capacity:   2.9% used, 97.1% free
Storage Mem size:   8640 KB
Storage Mem capacity:3.3% used, 96.7% free
Mean Object Size:   22.30 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.47928  0.47928
Cache Misses:  0.48649  0.48649
Cache Hits:0.02796  0.02796
Near Hits: 0.0  0.0
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.16304  0.16304
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:98.555 seconds
CPU Time:   

RE: [squid-users] Too many open files

2013-07-28 Thread Peter Retief
 Eliezer:
 I would assume that you set up your WCCP correctly.
 Do you use it in tunnel or route mode?
 In route mode you can easily end up with an endless routing loop (until
the TTL expires).

I think the wccp2 is set up correctly - I am using tunnel mode.  Here is the
output from one of the two routers diverting traffic to squid:

#sh ip wccp
Global WCCP information:
Router information:
Router Identifier:   x.x.x.x
Protocol Version:2.0

Service Identifier: web-cache
Number of Cache Engines: 1
Number of routers:   2
Total Packets Redirected:11017412
Redirect access-list:WCCP
Total Packets Denied Redirect:   24349434
Total Packets Unassigned:36794
Group access-list:   -none-
Total Messages Denied to Group:  0
Total Authentication failures:   0
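
For completeness, the tunnel side on the Squid host is set up roughly like
this (a sketch only; x.x.x.x is the router identifier above, while the
interface name, <squid-ip> and the intercept port 3129 are illustrative
assumptions, not taken from my actual config):

  # sketch: GRE endpoint for WCCP tunnel mode on the Squid box
  modprobe ip_gre
  ip tunnel add wccp0 mode gre remote x.x.x.x local <squid-ip> dev eth0
  ip link set wccp0 up
  # hand intercepted port-80 traffic to Squid's intercept port
  iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 \
      -j REDIRECT --to-port 3129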



Re: [squid-users] Too many open files

2013-07-28 Thread Amos Jeffries

On 28/07/2013 6:19 p.m., Peter Retief wrote:

Peter:
Do you mean you've patched the source code, and if so, how do I get
that patch?  Do I have to move from the stable trunk?

Amos:
Sorry yes that is what I meant and it can now be found here:


http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12957.patch

It should apply on the stable 3.3 easily, although I have not tested that.
NP: if you rebuild please go with the 3.3.8 security update release.

I have patched the file as documented and recompiled against the 3.3.8 branch.


Peter:
The first log occurrences are:
2013/07/23 08:26:13 kid2| Attempt to open socket for EUI retrieval failed: (24) Too many open files
2013/07/23 08:26:13 kid2| comm_open: socket failure: (24) Too many open files
2013/07/23 08:26:13 kid2| Reserved FD adjusted from 100 to 15394 due to failures

Amos:
So this worker #2 got errors after reaching about 990 open FD (16K - 15394). Ouch.

Note that all these socket opening operations are failing with the 'Too
many open files' error the OS sends back when limiting Squid to 990 or so
FD. This confirms that Squid is not mis-calculating where its limit is;
something in the OS is actually limiting the worker. The first one to hit
was a socket, but disk file accesses get the error soon after, so it is
likely the global OS limit rather than a particular FD type limit. That
990 usable FD is also suspiciously close to 1024 with a few % held spare
for emergency use (as Squid does when calculating its reservation value).

Amos, I don't understand how you deduced the 990 open FD from the error
messages above (adjusted from 100 to 15394)?


Squid starts with 16K of which 100 are reserved FD. When it adjusts that, 
the 16K limit is still the total, but the reserved count is raised to mark 
N sockets reserved/unavailable.


So 16384 - 15394 = 990 FD safe to use after adjustments caused by the error.



I would have deduced that there was some internal limit of 100 (not 1000)
FDs, and that squid was re-adjusting to the maximum currently allowed (16K)?


Yes, that is correct. However it is the reserved limit being raised.

Reserved is the number of FD which are configured as available but 
determined to be unusable. Think of it as the cordon around a danger zone 
for FD - if Squid strays into using that number of sockets again it can 
expect errors. Raising that count reduces Squid's operational FD resources 
by the amount raised.
Squid may still try to use some of them under peak load conditions, but 
will do so only if there is no other way to free up the safe in-use FD.


Due to that case for emergency usage, when Squid sets the reserved limit 
it does not set it exactly on the FD number which got error'd. It sets it 
2-3% into the safe FD count. So rounding 990 up by that slight amount we 
get 1024, which is a highly suspicious value.
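
Spelling the arithmetic out:

  16384 - 15394       = 990   FD left usable after the adjustment
  1024 x (1 - 0.033) ~= 990   i.e. 990 is 1024 less a ~3% emergency margin

which is why 990 points so strongly at a 1024 limit somewhere in the OS.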


Amos


RE: [squid-users] Too many open files

2013-07-28 Thread Peter Retief
 Amos:
 Squid starts with 16K of which 100 are reserved FD. When it adjusts that,
the 16K limit is still the total, but the reserved count is raised to mark
N sockets reserved/unavailable.
 So 16384 - 15394 = 990 FD safe to use after adjustments caused by the
error.

 Peter:
 I would have deduced that there was some internal limit of 100 (not 1000)
 FDs, and that squid was re-adjusting to the maximum currently allowed (16K)?

 Amos:
 Yes, that is correct. However it is the reserved limit being raised.
 Reserved is the number of FD which are configured as available but
determined to be unusable. Think of it as the cordon around a danger zone
for FD - if Squid strays into using that number of sockets again it can
expect errors. Raising that count reduces Squid's operational FD resources
by the amount raised.
 Squid may still try to use some of them under peak load conditions, but
will do so only if there is no other way to free up the safe in-use FD.
 Due to that case for emergency usage, when Squid sets the reserved limit
it does not set it exactly on the FD number which got error'd. It sets it
2-3% into the safe FD count. So rounding 990 up by that slight amount we
get 1024, which is a highly suspicious value.

I have managed to raise the per-process limit from 16K to 64K, and this is
reflected in the mgr:info statistics.  However, if I understand your logic
above, this is unlikely to be of benefit - I have to find where Ubuntu is
setting some limit of 1024.  Am I correct?  Is this a socket limit, rather
than a generic file descriptor limit?
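
For the record, these are the places I know of where a 1024 default can
survive (a sketch; Ubuntu 12.04 paths, and the init script name is an
assumption):

  # PAM applies limits.conf to login sessions only, not to init scripts
  grep pam_limits /etc/pam.d/common-session*
  # a daemon started from init keeps the default unless the script raises it
  grep ulimit /etc/init.d/squid 2>/dev/null
  # the limit the live workers actually hold
  for pid in $(pgrep squid); do grep 'Max open files' /proc/$pid/limits; done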



RE: [squid-users] Too many open files

2013-07-25 Thread Peter Retief

On 07/25/2013 09:25 AM, Peter Retief wrote:
 I have changed the limits in /etc/security/limits.conf to 65K, and I have 
 confirmed that the ulimits for root and squid are now 65K, but 
 squidclient mgr:info still reports a maximum of 16K per worker.
Eliezer:
Ubuntu?
Try adding the ulimit commands to the init.d script, to force the squid
running instances up to 65k.

It worked for me and it should work for you.
Do a restart, but first run a test instance with the squid -f command on
another port to confirm that I am right...
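
Something like this near the top of the init script, before squid is
launched (a sketch; the values are illustrative):

  # raise the FD limit before the daemon forks; limits.conf is applied
  # by PAM at login time and does not affect init-started processes
  ulimit -Hn 65536
  ulimit -n 65536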

I did reboot after raising the limits, and then before starting squid,
checked ulimit -Sn and ulimit -Hn for both the root user and the squid user.
Then after starting squid (running from squid -s, not the init script yet),
I did a squidclient mgr:info and saw 16K per process (actually I saw the
total of 98K for 6 workers, i.e. 6 x 16384 = 98304, as per Amos's comment
on the incorrect calculation, if I interpreted his comment correctly).




RE: [squid-users] Too many open files

2013-07-25 Thread Peter Retief

 To handle the load I have 6 workers, each allocated its own physical 
 disk (noatime).

 I have set ulimit -Sn 16384 and ulimit -Hn 16384, by setting 
 /etc/security/limits.conf as follows:

 #   - Increase file descriptor limits for Squid
 *       soft    nofile  16384
 *       hard    nofile  16384

 The squid is set to run as user squid.  If I login as root, then su 
 squid, the ulimits are set correctly.  For root, however, the ulimits 
 keep reverting to 1024.

 squidclient mgr:info gives:

  Maximum number of file descriptors:   98304
  Largest file desc currently in use:   18824
  Number of file desc currently in use: 1974

 Amos replied:

That biggest-FD value is too high for workers that only have 16K available
each.
I've just fixed the calculation there (it was adding together the values for
each biggest-FD instead of comparing with max()).

Do you mean you've patched the source code, and if so, how do I get that
patch?  Do I have to move from the stable trunk?


Note that if one of the workers is reaching the limit of available FD, then
you will get that message from that worker while the others run fine with
fewer FD consumed.
Can you display the entire and exact cache.log line which contains that
error message, please?

The first log occurrences are:
2013/07/23 08:26:13 kid2| Attempt to open socket for EUI retrieval failed: (24) Too many open files
2013/07/23 08:26:13 kid2| comm_open: socket failure: (24) Too many open files
2013/07/23 08:26:13 kid2| Reserved FD adjusted from 100 to 15394 due to failures
2013/07/23 08:26:13 kid2| '/share/squid/errors/en-za/ERR_CONNECT_FAIL': (24) Too many open files
2013/07/23 08:26:13 kid2| WARNING: Error Pages Missing Language: en-za
2013/07/23 08:26:13 kid2| WARNING! Your cache is running out of filedescriptors

Then later:
2013/07/23 10:00:11 kid2| WARNING! Your cache is running out of filedescriptors
2013/07/23 10:00:27 kid2| WARNING! Your cache is running out of filedescriptors

After that, the errors become prolific.

Thanks for the help.

Peter






Re: [squid-users] Too many open files

2013-07-25 Thread Eliezer Croitoru
On 07/25/2013 09:43 AM, Peter Retief wrote:
 Do you mean you've patched the source code, and if so, how do I get that
 patch?  Do I have to move from the stable trunk?
What version are you using?
Run `squid -v` to get the version etc.
I assume that, other than the RPMs I am releasing, there aren't many
updates for LTS / long-life distributions.

You might need to compile it yourself, but I think there is a small
repo for Debian and Ubuntu out there.

Eliezer


RE: [squid-users] Too many open files

2013-07-25 Thread Peter Retief
 Peter:
 The first log occurrences are:
 2013/07/23 08:26:13 kid2| Attempt to open socket for EUI retrieval failed: (24) Too many open files
 2013/07/23 08:26:13 kid2| comm_open: socket failure: (24) Too many open files
 2013/07/23 08:26:13 kid2| Reserved FD adjusted from 100 to 15394 due to failures

 Amos:
 So this worker #2 got errors after reaching about 990 open FD (16K - 15394). Ouch.

 Note that all these socket opening operations are failing with the 'Too
many open files' error the OS sends back when limiting Squid to 990 or so
FD. This confirms that Squid is not mis-calculating where its limit is;
something in the OS is actually limiting the worker. The first one to hit
was a socket, but disk file accesses get the error soon after, so it is
likely the global OS limit rather than a particular FD type limit. That
990 usable FD is also suspiciously close to 1024 with a few % held spare
for emergency use (as Squid does when calculating its reservation value).

Amos, any ideas where I should look to see where Ubuntu is restricting the
file descriptors?  I thought ulimit -Sn and ulimit -Hn would tell me how
many descriptors any child process should get?




Re: [squid-users] Too many open files

2013-07-25 Thread Eliezer Croitoru
On 07/25/2013 02:10 PM, Peter Retief wrote:
 Amos, any ideas where I should look to see where Ubuntu is restricting the
 file descriptors?  I thought ulimit -Sn and ulimit -Hn would tell me how
 many descriptors any child process should get?
Many things should happen and still they do not (this is what I know).
I think we can try to get some help on that from the Ubuntu team.

Don't just restart a server without making sure the traffic is fine.
Since you are using WCCP, I suggest you share the setup, and then we can
try to help you more later on if needed.

If the setup is right and in place, there should be no problem finding
the right place to look. Try this Ubuntu bug as a starter:
https://bugs.launchpad.net/ubuntu/+bug/672749

And note that there are other parts of Linux that apply ulimits:
http://serverfault.com/questions/235356/open-file-descriptor-limits-conf-setting-isnt-read-by-ulimit-even-when-pam-limi

I do not like to redirect, but it seems like the best choice now.

Also, I am assuming that you want to find the source of the problem, and
not just make it work?

I would assume that you set up your WCCP correctly.
Do you use it in tunnel or route mode?
In route mode you can easily end up with an endless routing loop (until
the TTL expires).

But I assume the problem was solved already?

Eliezer



Re: [squid-users] Too many open files

2013-07-24 Thread Eliezer Croitoru
On 07/24/2013 11:11 PM, Peter Retief wrote:
 The system is running 64-bit Ubuntu 12.04 and squid 3.3.6 compiled from
 source.
Sorry to ask, but:

What kernel are you using? Do you have any patches applied to it?

Eliezer


Re: [squid-users] Too many open files

2013-07-24 Thread Amos Jeffries

On 25/07/2013 8:11 a.m., Peter Retief wrote:

Hi

I am struggling with the following error: comm_open: socket failure: (24)
Too many open files

This happens after squid has been running for many hours.  I have a Xeon
server with 12 cores, 64GB RAM and 8 x 1TB disks.  The first two are in a
RAID-1, and the balance are managed as aufs caches.

The system is running 64-bit Ubuntu 12.04 and squid 3.3.6 compiled from
source.

I am running transparent proxy from two Cisco 7600 routers using wccp2.  The
purpose is to proxy international bandwidth (3 x 155Mbps links).

To handle the load I have 6 workers, each allocated its own physical disk
(noatime).

I have set ulimit -Sn 16384 and ulimit -Hn 16384, by setting
/etc/security/limits.conf as follows:

#   - Increase file descriptor limits for Squid
*       soft    nofile  16384
*       hard    nofile  16384

The squid is set to run as user squid.  If I login as root, then su
squid, the ulimits are set correctly.  For root, however, the ulimits keep
reverting to 1024.

squidclient mgr:info gives:

  Maximum number of file descriptors:   98304
  Largest file desc currently in use:   18824
  Number of file desc currently in use: 1974


That biggest-FD value is too high for workers that only have 16K 
available each.
I've just fixed the calculation there (it was adding together the values 
for each biggest-FD instead of comparing with max()).



Note that if one of the workers is reaching the limit of available FD, 
then you will get that message from that worker while the others run 
fine with fewer FD consumed.
Can you display the entire and exact cache.log line which contains that 
error message, please?


Amos


Re: [squid-users] too many open files / Queue congestion

2006-05-18 Thread Mark Elsen

Hello there.
I don't know if the two problems I'm facing are related, but here is an
excerpt of my cache.log:

2006/05/18 13:07:26| comm_open: socket failure: (24) Too many open files
2006/05/18 13:07:29| WARNING! Your cache is running out of filedescriptors
2006/05/18 13:09:37| squidaio_queue_request: WARNING - Queue congestion

What do these things mean? What is the solution for this faulty behaviour?




  - http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.4
  - You also need to increase the max. allowed open files per process;
See :

 /proc/sys/fs/file-max
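
To raise it (a sketch for a 2.6-era Linux):

  echo 104800 > /proc/sys/fs/file-max              # immediate, lost on reboot
  echo 'fs.file-max = 104800' >> /etc/sysctl.conf  # persistent across reboots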

M.


Re: [squid-users] too many open files / Queue congestion

2006-05-18 Thread Boniforti Flavio

Mark Elsen wrote:

[cut]


  - http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.4
  - You also need to increase the max. allowed open files per process;
See :

 /proc/sys/fs/file-max


I read that FAQ, but as I installed squid from a .deb package, I cannot 
follow that indication.

And here:

proxy:~# cat /proc/sys/fs/file-max
104800

Is this enough?

--
--
Boniforti Flavio
Provincia del Verbano-Cusio-Ossola
Ufficio Informatica

Tecnoparco del Lago Maggiore
Via dell'Industria, 25
28924 Verbania
--


Re: [squid-users] too many open files / Queue congestion

2006-05-18 Thread Mark Elsen



I read that FAQ, but as I installed squid from .deb package, I cannot
follow that indication.


If you need to increase FDs, as it seems you must in your case, then you
need to install SQUID manually and follow the FAQ guidelines.


And here:

proxy:~# cat /proc/sys/fs/file-max
104800
Is this enough?


- It should be; work on the FD-issue first.

M.


Re: [squid-users] too many open files / Queue congestion

2006-05-18 Thread Henrik Nordstrom
Thu 2006-05-18 at 18:15 +0200, Mark Elsen wrote:

- http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.4
- You also need to increase the max. allowed open files per process;
  See :
 
   /proc/sys/fs/file-max

file-max is the global limit for the whole system. Per-process limits
are set by ulimit.
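
The distinction in a quick sketch:

  cat /proc/sys/fs/file-max   # global ceiling, all processes together
  ulimit -n                   # per-process limit, often 1024 by default;
                              # this is the one Squid usually hits first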

Regards
Henrik




Re: [squid-users] too many open files / Queue congestion

2006-05-18 Thread Adrian Chadd
This may also be influenced by PAM and resource limits.
On my Ubuntu desktop:

# Sets up user limits according to /etc/security/limits.conf
# (Replaces the use of /etc/limits in old login)
session    required    pam_limits.so

I had to add this into limits.conf to get a decent amount of filedescriptors;
the default amount was locked down to 1024 fds per non-root process:

*       hard    nofile  8192
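
A quick way to confirm it took effect (a sketch; replace someuser with the
account squid runs as):

  # a fresh PAM session should now report the raised hard limit
  su - someuser -c 'ulimit -Hn'    # should print 8192, not 1024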

On Thu, May 18, 2006, Henrik Nordstrom wrote:
 Thu 2006-05-18 at 18:15 +0200, Mark Elsen wrote:
 
 - http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.4
 - You also need to increase the max. allowed open files per process;
   See :
  
/proc/sys/fs/file-max
 
 file-max is the global limit for the whole system. Per-process
 limits are set by ulimit.
 
 Regards
 Henrik




Re: [squid-users] Too many open files

2003-02-15 Thread Henrik Nordstrom
You have most likely forgotten to instruct your kernel to allow more open
files globally in the system.

This is not the per-process limits set by ulimit etc, but the global
limits for the whole system set in your kernel configuration.
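
On Linux that global limit is exposed through procfs (a sketch):

  cat /proc/sys/fs/file-max             # current global ceiling
  echo 65536 > /proc/sys/fs/file-max    # raise it, as root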

Regards
Henrik


HBK wrote:
 
 I have set up a squid cache with the following settings
 
 Squid Cache: Version 2.5.STABLE1-20030214
 configure options:  --prefix=/usr/local/squid --enable-async-io --enable-snmp
 
 File descriptors are also increased to 16384 with all the settings in place
 (ulimit, types.h file, etc.)
 
 Some times I get following errors in cache.log
 -
 2003/02/15 14:59:39| httpAccept: FD 10: accept failure: (24) Too many open
 files
 2003/02/15 14:59:39| httpAccept: FD 10: accept failure: (24) Too many open
 files
 2003/02/15 14:59:39| comm_open: socket failure: (24) Too many open files
 2003/02/15 14:59:39| httpAccept: FD 10: accept failure: (24) Too many open
 files
 -
 
 Please advise what I should do to resolve this.
 
 Thanks



Re: [squid-users] Too many open files

2003-02-15 Thread HBK
How can I instruct the kernel to allow more open files?

Regards
Hassan


