Re: [squid-users] "Reserved number of file descriptors" sudden increase

2012-10-23 Thread Amos Jeffries

On 23/10/2012 9:17 p.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:

This problem starts randomly and recovers randomly too (after restarting, 
rebooting, etc.). We restart httpd and it works fine for 2 minutes, then the 
problem starts again; we reboot the whole server and the problem appears again. 
Sometimes the problem disappears after removing the cache directory and 
re-creating it.

Do you know what could cause socket() and dup2() operations to start failing?



"

   The socket() function shall fail if:

   [EAFNOSUPPORT]
   The implementation does not support the specified address family.
   [EMFILE]
   No more file descriptors are available for this process.
   [ENFILE]
   No more file descriptors are available for the system.
   [EPROTONOSUPPORT]
   The protocol is not supported by the address family, or the
   protocol is not supported by the implementation.
   [EPROTOTYPE]
   The socket type is not supported by the protocol.

   The socket() function may fail if:

   [EACCES]
   The process does not have appropriate privileges.
   [ENOBUFS]
   Insufficient resources were available in the system to perform
   the operation.
   [ENOMEM]
   Insufficient memory was available to fulfill the request.


"

Since this is squid-3.0, the address-family errors do not apply. The rest of 
the system-level problems may still apply, though.
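
If you want to confirm which of those errno values your OS is actually 
returning, a quick standalone test along these lines may help. This is plain 
POSIX C for illustration only, not Squid code:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Create a TCP socket the same way a proxy would. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        /* EMFILE = per-process FD limit hit; ENFILE = system-wide limit. */
        fprintf(stderr, "socket() failed: errno=%d (%s)\n",
                errno, strerror(errno));
        return 1;
    }
    printf("socket() succeeded: fd=%d\n", fd);
    close(fd);
    return 0;
}

Run it (under the same user and ulimit as Squid) while the problem is 
occurring, and the errno value tells you which of the cases above you are in.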


Amos


RE: [squid-users] "Reserved number of file descriptors" sudden increase

2012-10-23 Thread RODRIGUEZ CEBERIO, Iñigo
This problem starts randomly and recovers randomly too (after restarting, 
rebooting, etc.). We restart httpd and it works fine for 2 minutes, then the 
problem starts again; we reboot the whole server and the problem appears again. 
Sometimes the problem disappears after removing the cache directory and 
re-creating it.

Do you know what could cause socket() and dup2() operations to start failing?

Thanks, Inigo

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, 23 October 2012 10:07
To: squid-users@squid-cache.org
Subject: Re: [squid-users] "Reserved number of file descriptors" sudden increase

On 23/10/2012 7:53 p.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:
> Thank you for replying so quickly. I'll upgrade my squid.
>
However, in this case we have 2673 of 4096 in use, and Squid is stuck 
because the reserved number of file descriptors rises from 100 to 1400.

Oh right yes, Squid will start limiting inbound accepted connections when it 
reaches the reserved limit.

The Reserved FD count starts at a default value of 100 and ONLY increases when
socket() and dup2() operations which create new sockets start to fail. 
When that happens it is a strong sign that the OS cannot handle that many 
sockets, so Squid limits itself by reserving the unused ones.

I don't see any code to reduce the number - which worries me. Maybe you hit a 
fluke occurrence of socket() errors and are now stuck with a large reserved set.

Amos

>
> The normal situation is
> File descriptor usage for squid:
>Maximum number of file descriptors:   4096
>Largest file desc currently in use:   3193
>Number of file desc currently in use: 2673
>Files queued for open:   0
>Available number of file descriptors: 1423
>Reserved number of file descriptors:   100
>Store Disk files open:   1
>
> When the problem starts the only change is
>
> Reserved number of file descriptors:   100 -> 1424
>
> Regards, Inigo.
>
>
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Tuesday, 23 
> October 2012 5:21
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] "Reserved number of file descriptors" sudden 
> increase
>
> On 23/10/2012 3:47 a.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:
>> Hello,
>>
>> I'm running squid 3.0.STABLE13 on a CentOS 5.2.
> Please upgrade; your Squid is no longer supported. The current Squid release is 
> version 3.2.3.
>
>>It works normally and then suddenly it collapsed. Messages appear in 
>> cache.log saying it has run out of file descriptors. Using squidclient I can 
>> see a change in this parameter, "Reserved number of file descriptors", from 
>> 100 to 1424. Here is the squidclient info about FDs:
>>
>> File descriptor usage for squid:
>>   Maximum number of file descriptors:   4096
>>   Largest file desc currently in use:   3193
>>   Number of file desc currently in use: 2673
>>   Files queued for open:   0
>>   Available number of file descriptors: 1423
>>   Reserved number of file descriptors:  1424
>>   Store Disk files open:   1
>>
> Why does that parameter rise from 100 to 1400 in just a few seconds? What's 
> going on? Any advice?
> 1400 does not matter. The 2673 is more important - this is the number of FDs 
> currently open and in use.
>
> It can rise in three situations:
>1) scanning the disk cache in a "DIRTY" scan to rebuild the index, file by 
> file. Requires opening every file on disk and can consume hundreds of FDs at 
> once for the one process.
>
>2) receiving lots of client traffic. Might be a normal peak in traffic, a 
> DoS, or a broken client hammering away repeating a request (usually seen with 
> auth rejections).
>
>3) a forwarding loop, where Squid is processing a request which instructs 
> it to connect to itself as upstream. This is best prevented by configuring 
> "via on".
>
> Amos



Re: [squid-users] "Reserved number of file descriptors" sudden increase

2012-10-23 Thread Amos Jeffries

On 23/10/2012 7:53 p.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:

Thank you for replying so quickly. I'll upgrade my squid.

However, in this case we have 2673 of 4096 in use, and Squid is stuck 
because the reserved number of file descriptors rises from 100 to 1400.


Oh right yes, Squid will start limiting inbound accepted connections 
when it reaches the reserved limit.


The Reserved FD count starts at a default value of 100 and ONLY increases 
when socket() and dup2() operations which create new sockets start to fail. 
When that happens it is a strong sign that the OS cannot handle that 
many sockets, so Squid limits itself by reserving the unused ones.


I don't see any code to reduce the number - which worries me. Maybe you 
hit a fluke occurrence of socket() errors and are now stuck with a large 
reserved set.
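
For illustration only, that accounting works roughly like the sketch below. 
This is NOT Squid's actual code; the variable names and the fixed 4096 ceiling 
are invented for the example:

#include <errno.h>
#include <sys/socket.h>

static int max_fds = 4096;      /* configured FD ceiling              */
static int reserved_fds = 100;  /* starts at 100 and only ever grows  */
static int current_fds = 0;     /* FDs currently open                 */

int open_new_socket(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 && (errno == EMFILE || errno == ENFILE)) {
        /* The OS refused a socket while we were still below our own
         * ceiling, so treat the remaining "free" FDs as unusable and
         * reserve them all. Note there is no code path that ever
         * shrinks reserved_fds again. */
        int unusable = max_fds - current_fds;
        if (unusable > reserved_fds)
            reserved_fds = unusable;
    } else if (fd >= 0) {
        ++current_fds;
    }
    return fd;
}

That would also explain the exact 1424 you saw: 4096 minus roughly the 2672 
FDs in use at the moment of the failure.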


Amos



The normal situation is
File descriptor usage for squid:
   Maximum number of file descriptors:   4096
   Largest file desc currently in use:   3193
   Number of file desc currently in use: 2673
   Files queued for open:   0
   Available number of file descriptors: 1423
   Reserved number of file descriptors:   100
   Store Disk files open:   1

When the problem starts the only change is

Reserved number of file descriptors:   100 -> 1424

Regards, Inigo.


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Tuesday, 23 October 2012 5:21
To: squid-users@squid-cache.org
Subject: Re: [squid-users] "Reserved number of file descriptors" sudden increase

On 23/10/2012 3:47 a.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:

Hello,
   
I'm running squid 3.0.STABLE13 on a CentOS 5.2.

Please upgrade; your Squid is no longer supported. The current Squid release is 
version 3.2.3.


   It works normally and then suddenly it collapsed. Messages appear in cache.log 
saying it has run out of file descriptors. Using squidclient I can see a change in 
this parameter, "Reserved number of file descriptors", from 100 to 1424. Here is 
the squidclient info about FDs:
   
File descriptor usage for squid:

  Maximum number of file descriptors:   4096
  Largest file desc currently in use:   3193
  Number of file desc currently in use: 2673
  Files queued for open:   0
  Available number of file descriptors: 1423
  Reserved number of file descriptors:  1424
  Store Disk files open:   1
   
Why does that parameter rise from 100 to 1400 in just a few seconds? What's going on? Any advice?

1400 does not matter. The 2673 is more important - this is the number of FDs 
currently open and in use.

It can rise in three situations:
   1) scanning the disk cache in a "DIRTY" scan to rebuild the index, file by 
file. Requires opening every file on disk and can consume hundreds of FDs at once 
for the one process.

   2) receiving lots of client traffic. Might be a normal peak in traffic, a 
DoS, or a broken client hammering away repeating a request (usually seen with 
auth rejections).

   3) a forwarding loop, where Squid is processing a request which instructs it to 
connect to itself as upstream. This is best prevented by configuring "via on".

Amos




RE: [squid-users] "Reserved number of file descriptors" sudden increase

2012-10-22 Thread RODRIGUEZ CEBERIO, Iñigo
Thank you for replying so quickly. I'll upgrade my squid.

However, in this case we have 2673 of 4096 in use, and Squid is stuck 
because the reserved number of file descriptors rises from 100 to 1400.

The normal situation is 
File descriptor usage for squid:
  Maximum number of file descriptors:   4096
  Largest file desc currently in use:   3193
  Number of file desc currently in use: 2673
  Files queued for open:   0
  Available number of file descriptors: 1423
  Reserved number of file descriptors:   100
  Store Disk files open:   1

When the problem starts the only change is 

Reserved number of file descriptors:   100 -> 1424

Regards, Inigo.


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, 23 October 2012 5:21
To: squid-users@squid-cache.org
Subject: Re: [squid-users] "Reserved number of file descriptors" sudden increase

On 23/10/2012 3:47 a.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:
> Hello,
>   
> I'm running squid 3.0.STABLE13 on a CentOS 5.2.

Please upgrade; your Squid is no longer supported. The current Squid release is 
version 3.2.3.

>   It works normally and then suddenly it collapsed. Messages appear in 
> cache.log saying it has run out of file descriptors. Using squidclient I can 
> see a change in this parameter, "Reserved number of file descriptors", from 
> 100 to 1424. Here is the squidclient info about FDs:
>   
> File descriptor usage for squid:
>  Maximum number of file descriptors:   4096
>  Largest file desc currently in use:   3193
>  Number of file desc currently in use: 2673
>  Files queued for open:   0
>  Available number of file descriptors: 1423
>  Reserved number of file descriptors:  1424
>  Store Disk files open:   1
>   
> Why does that parameter rise from 100 to 1400 in just a few seconds? What's 
> going on? Any advice?

1400 does not matter. The 2673 is more important - this is the number of FDs 
currently open and in use.

It can rise in three situations:
  1) scanning the disk cache in a "DIRTY" scan to rebuild the index, file by 
file. Requires opening every file on disk and can consume hundreds of FDs at 
once for the one process.

  2) receiving lots of client traffic. Might be a normal peak in traffic, a 
DoS, or a broken client hammering away repeating a request (usually seen with 
auth rejections).

  3) a forwarding loop, where Squid is processing a request which instructs it 
to connect to itself as upstream. This is best prevented by configuring "via 
on".

Amos


Re: [squid-users] "Reserved number of file descriptors" sudden increase

2012-10-22 Thread Amos Jeffries

On 23/10/2012 3:47 a.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:

Hello,
  
I'm running squid 3.0.STABLE13 on a CentOS 5.2.


Please upgrade; your Squid is no longer supported. The current Squid release 
is version 3.2.3.



  It works normally and then suddenly it collapsed. Messages appear in cache.log 
saying it has run out of file descriptors. Using squidclient I can see a change in 
this parameter, "Reserved number of file descriptors", from 100 to 1424. Here is 
the squidclient info about FDs:
  
File descriptor usage for squid:

 Maximum number of file descriptors:   4096
 Largest file desc currently in use:   3193
 Number of file desc currently in use: 2673
 Files queued for open:   0
 Available number of file descriptors: 1423
 Reserved number of file descriptors:  1424
 Store Disk files open:   1
  
Why does that parameter rise from 100 to 1400 in just a few seconds? What's going on? Any advice?


1400 does not matter. The 2673 is more important - this is the number of FDs 
currently open and in use.
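
As an aside, you can cross-check the 4096 figure against what the OS actually 
grants the process with the standard getrlimit() call. This is generic POSIX, 
nothing Squid-specific:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    /* RLIMIT_NOFILE = per-process limit on open file descriptors. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("FD limit: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}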


It can rise in three situations:
 1) scanning the disk cache in a "DIRTY" scan to rebuild the index, file 
by file. Requires opening every file on disk and can consume hundreds of 
FDs at once for the one process.


 2) receiving lots of client traffic. Might be a normal peak in 
traffic, a DoS, or a broken client hammering away repeating a request 
(usually seen with auth rejections).


 3) a forwarding loop, where Squid is processing a request which 
instructs it to connect to itself as upstream. This is best prevented by 
configuring "via on".
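
For reference, that is a single squid.conf directive (and, if I remember 
rightly, "on" is already the default):

# squid.conf
# Keep Via headers enabled so Squid can spot its own hostname in a
# request's Via chain and abort the forwarding loop instead of relaying it.
via on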


Amos