It appears that moving to 3.3.5-20130607-r12573 from 3.2.11-20130524-r11822
has eliminated my problem. I have seen a few unexplainable spikes in CPU
usage, but they haven't lasted long and squid has remained responsive.
I've been running 3.3.5-20130607-r12573 for just over two weeks without a
problem.
On 2013-05-28, Stuart Henderson wrote:
> On 2013-05-17, Alex Rousskov wrote:
>> On 05/17/2013 01:28 PM, Loïc BLOT wrote:
>>
>>> I have found the problem. In fact, it is the problem mentioned in my
>>> last mail. Squid's FD limit was reached, but squid doesn't
>>> mention every time the freeze appears that it's an FD-limit problem,
The FD limit is 16384. During the day I see peak utilization around
8,000. At night the utilization is less than 1,000. During the four
hours that the CPU rises from <10% to 100%, the FD utilization stays
below 1,000.
Again, I have not seen this problem under load, only while
squid is relatively idle.
On what OS?
Also, what is the output of ulimit -Ha and ulimit -Sa?
Eliezer
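Eliezer's request can be reproduced in a short shell session. A minimal sketch, assuming a POSIX-compatible shell; the squid user name and the su invocation in the comment are assumptions to adapt locally:

```shell
# Hard and soft resource limits for the current shell session:
ulimit -Ha
ulimit -Sa

# Only the open-files (FD) limits, which are what matter here:
ulimit -Hn
ulimit -Sn

# The same check as the squid user (user name is an assumption):
# su -s /bin/sh squid -c 'ulimit -Hn; ulimit -Sn'
```

Note that ulimit reports the limits of the current session, so it must be run as (or via su to) the user that actually starts squid.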
On 6/11/2013 6:32 PM, Mike Mitchell wrote:
I dropped the cache size to 150 GB instead of 300 GB. Cached object count
dropped
from ~7 million to ~3.5 million. After a week I saw one occurrence of the same
problem.
Hello Mike,
please look at the number of open file descriptors on the system, the
squid limit, and the squid user's limit. I had this problem on 3.2 and
3.3 because squid was at the FD limit. (Check the system FD limit for
squid: run ulimit -n as the squid user.)
--
Best regards,
Loïc BLOT,
UNIX systems, security and network expert
http://www.unix-experience.fr
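The checks Loïc suggests can be sketched as shell commands. A minimal sketch, assuming Linux with a /proc filesystem; the squid process name and the cache-manager command in the comment are assumptions to verify locally. The snippet falls back to the current shell's PID so it runs even without a squid process:

```shell
# Soft open-file limit for the current user:
ulimit -n

# Count the descriptors a process actually holds via /proc.
# Use squid's PID if it is running, otherwise fall back to this
# shell so the example is self-contained:
pid=$(pgrep -x squid | head -n 1)
pid=${pid:-$$}
echo "open FDs for PID $pid: $(ls "/proc/$pid/fd" | wc -l)"

# Squid's own view of its limit and usage (requires a running squid):
# squidclient mgr:info | grep -i 'file desc'
```

Comparing the /proc count against ulimit -n for the squid user shows how close the process is to the cap.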
I dropped the cache size to 150 GB instead of 300 GB. Cached object count
dropped
from ~7 million to ~3.5 million. After a week I saw one occurrence of the same
problem.
CPU usage climbed steadily over 4 hours from <10% to 100%, then squid became
unresponsive for 20 minutes. After that it pick
What is your squid FD limit, and what is the system ulimit (ulimit -n) for
the squid user?
--
Best regards,
Loïc BLOT,
UNIX systems, security and network expert
http://www.unix-experience.fr
On Wednesday, 29 May 2013 at 14:03 -0400, Ron Wheeler wrote:
> Have you looked at garbage collection as a possible source of the problem?
Have you looked at garbage collection as a possible source of the problem?
If you really have a 300 GB cache, that might take a long time to
process during GC.
You might want to post your GC settings to see if anyone has a
suggestion or can eliminate GC as the source of your problem.
The fac
I've hit something similar. I have four identically configured systems with
16K squid FD limit, 24 GB RAM, 300 GB cache directory. I've seen the same
failure randomly on all four systems. During the day the squid process handles
over 100 requests/second, with a peak FD usage around 8K FDs. In t
For me the problem is resolved.
It happens when squid reaches the maximum FD count: squid has more and
more requests to process, and then it becomes blocked and very, very
slow. I increased the system FD limit to 16K and the squid FD limit to
10K, and I haven't seen the problem since this change.
--
Best regards,
Loïc BLOT,
UNIX systems, security and network expert
http://www.unix-experience.fr
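The fix Loïc describes (system limit 16K, squid limit 10K) can be sketched as two config fragments. A sketch assuming Linux with pam_limits and Squid 3.2 or later, where the max_filedescriptors directive is available; file paths and the squid user name may differ on your system:

```
# /etc/security/limits.conf -- per-user open-file cap for the squid user:
squid  soft  nofile  16384
squid  hard  nofile  16384

# squid.conf -- keep squid's own cap below the system limit:
max_filedescriptors 10240
```

Limits set this way only apply to sessions started after the change, so restart squid and re-check ulimit -n as the squid user to confirm.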
On 2013-05-17, Alex Rousskov wrote:
> On 05/17/2013 01:28 PM, Loïc BLOT wrote:
>
>> I have found the problem. In fact, it is the problem mentioned in my
>> last mail. Squid's FD limit was reached, but squid doesn't
>> mention every time the freeze appears that it's an FD-limit
>> problem,
On 18/05/2013 8:14 a.m., Stuart Henderson wrote:
On 2013-05-17, Loïc BLOT wrote:
I have found the problem. In fact, it is the problem mentioned in my last
mail. Squid's FD limit was reached, but squid doesn't mention
every time the freeze appears that it's an FD-limit problem, so the
debugging was very difficult.
On 2013-05-17, Loïc BLOT wrote:
> I have found the problem. In fact, it is the problem mentioned in my last
> mail. Squid's FD limit was reached, but squid doesn't mention
> every time the freeze appears that it's an FD-limit problem, so the
> debugging was very difficult.
> Also, I think you