[squid-users] Re: high memory usage (squid 3.2.0)

2013-04-10 Thread M A Young

On Tue, 9 Apr 2013, Mr Dash Four wrote:

The system is an Atom-based machine with 2GB of memory. Even though squid starts
OK at first, it gradually balloons, and the current memory usage (according to
ps) is:


Older versions of squid 3 do have some known memory leaks which might be 
your problem here, so I suggest you upgrade to a current version of squid 
such as 3.3.3 or 3.2.9.


Michael Young


RE: [squid-users] Squid is crashing

2013-01-24 Thread M A Young

On Wed, 23 Jan 2013, Farooq Bhatti wrote:

For your information, I have installed the same version on two machines:
one is working fine for staff, but the machine serving students has a
problem with squid crashing.
In my further analysis I found the following in the cache.log file; it seems
users are deliberately using some techniques to make squid crash. Please
advise.


2013/01/19 11:20:21| WARNING: HTTP header contains NULL characters {Accept: */*
2013/01/19 16:46:40| squidaio_queue_request: WARNING - Queue congestion
2013/01/19 17:52:36| WARNING: Closing client 192.168.105.214 connection due to 
lifetime timeout
2013/01/19 21:59:50| WARNING: Forwarding loop detected for:
2013/01/19 23:06:39| squidaio_queue_request: WARNING - Disk I/O overloading


I doubt anything there is malicious. The NULL character problem is 
probably from a badly written application that a couple of us have seen 
evidence of, but in 3.1 and 3.2 at least it doesn't crash the server 
directly, just leaks memory (see 
http://bugs.squid-cache.org/show_bug.cgi?id=3567 ), so that probably isn't 
what is crashing your server.
The "Forwarding loop" warning could be from a misconfiguration at your 
end or from someone trying to do something silly, but it is probably 
something you can solve or prevent by suitable configuration in 
squid.conf, and is again unlikely to cause your crash.


I suspect your crash is triggered by some traffic that happens on the 
student side but not on the staff side. The backtrace from gdb might 
tell us more (run gdb /some/path/squid /some/path/corefile and issue the 
command bt; a non-interactive sketch is below), but you will probably need 
either to raise the issue with the LUSCA developers as has previously been 
suggested, or to build yourself a current version of squid such as 3.2.6 
(probably with the patch from Bug 3567 above added, as it isn't in the 3.2 
code yet), which might fix your crash or at least give recent enough 
information to allow proper debugging of your issue.
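
(A non-interactive sketch of getting that backtrace; the binary and core
file paths are placeholders to adjust for your installation:)

 gdb -batch -ex bt /usr/sbin/squid /path/to/corefile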


Michael Young


Re: [squid-users] Re: Maximum Resident Size

2013-01-15 Thread M A Young

On Tue, 15 Jan 2013, Amos Jeffries wrote:


On 14/01/2013 12:10 p.m., M A Young wrote:
It is probably a memory leak. Squid's memory management fools C++'s 
automatic memory cleanup, so it is prone to leaks if objects aren't 
explicitly cleaned. I know of one still unfixed in 3.2 which was in 3.1 as 
well, and there could easily be others.
Michael: can you point me at that one please? We have just about finished 
another round of purging memory leaks and other types of leaks in 3.2 and 
3.3.


It is bug 3567: squid leaks HttpRequest objects if presented with a request 
with NULL characters in the headers or an invalid HTTP version. It requires 
bad requests from the client, but unfortunately we seem to have a badly 
behaved app that some people use here.


Michael Young


[squid-users] Re: Maximum Resident Size

2013-01-13 Thread M A Young

On Sun, 13 Jan 2013, csn233 wrote:


On some machines, this number after service startup is small, whereas on
others it is larger than the total machine memory. I'm using exactly the
same squid.conf when making the comparison.

How is this number determined? Is this a maximum that the Squid
process will potentially grow to?

I have one machine where the squid process grows progressively, eventually
starts swapping, and requires a service restart when the Squid process RSS
nears machine memory. My cache_mem has been reduced to 1/16th of machine
memory, so this is not the cause of the problem.


It is probably a memory leak. Squid's memory management fools C++'s 
automatic memory cleanup, so it is prone to leaks if objects aren't 
explicitly cleaned. I know of one still unfixed in 3.2 which was in 3.1 as 
well, and there could easily be others.


Cachemgr's mem option might give you some idea of what objects are 
leaking, particularly if you have other caches to compare with; a sketch 
is below.
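
(For example, something like the following, assuming squidclient is
installed and the cache manager is reachable; the hostnames are
placeholders:)

 squidclient -h cache1.example.com mgr:mem > cache1.mem
 squidclient -h cache2.example.com mgr:mem > cache2.mem
 diff cache1.mem cache2.mem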


Michael Young


Re: [squid-users] 3.2 Log Rotate Problem

2012-12-28 Thread M A Young

On Thu, 27 Dec 2012, dweimer wrote:


On 2012-12-26 17:41, Amos Jeffries wrote:

This is likely bug 3712; the fix is still winding its way down to a
stable release.

As a workaround you can use the "stdio:" log module instead of the daemon
module.


Or to fix, apply the following patch:
 http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12510.patch
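
(A sketch of applying that by hand from the top of an unpacked source
tree; the -p0 level is an assumption, so try -p1 if the paths don't match:)

 wget http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12510.patch
 patch -p0 < squid-3-12510.patch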

Amos


Setting the following did indeed fix it.

access_log stdio:/var/log/squid/access.log squid

Any idea yet which release the fix for the new daemon log module will
appear in?


It is now in the 3.2 daily snapshots (r11740 or later), so it is almost 
certain to be in 3.2.6.


Michael Young


[squid-users] Problems accessing sites with very short DNS lifespan

2010-12-13 Thread M A Young
I have seen a couple of sites where access via squid is very slow. The 
issue seems to be that the time-to-live on their DNS records is 10 
seconds in one case and 30 seconds in another, which I think means that 
squid is rechecking DNS frequently enough to slow things down. I worked 
around one of these by putting entries in /etc/hosts (sketched below), but 
this isn't very scalable. Are there any alternatives to this, perhaps some 
way of setting a minimum timeout for DNS records in squid, so it can cope 
with these strangely configured sites?
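
(For reference, the /etc/hosts workaround looks like the sketch below, with
example values; and, as an unverified guess, squid.conf's negative_dns_ttl
is documented as also setting the lower cache limit on positive lookups,
which might serve as a minimum:)

 # /etc/hosts: pin one awkward site (address and name are examples)
 192.0.2.10   www.short-ttl.example.com

 # squid.conf: unverified guess at enforcing a minimum positive TTL
 negative_dns_ttl 5 minutes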


Michael Young


[squid-users] Re: HTTP/1.0 417 Expectation failed

2010-02-19 Thread M A Young

On Fri, 19 Feb 2010, Riccardo Castellani wrote:


I'm using Squid 2.7 STABLE3 and I noticed my software sends an HTTP request
containing:

POST http://www.inps.it/...
...
Expect: 100-Continue


Squid answers saying :


HTTP/1.0 417 Expectation failed
...
X-Squid-Error: ERR_INVALID_REQ 0


Does Squid use the Expect request header?!
How can I solve it?


In the 2.7 series (though I am not sure when it was introduced so you may 
need to upgrade) you can use the option

ignore_expect_100 on

which tells squid to ignore the header and hope the client continues 
regardless. It is somewhat broken behaviour, but if you are lucky it might 
fix your problem.


Michael Young


[squid-users] Re: strange squid 2.6S1 behavior

2006-07-25 Thread M A Young
On Mon, 24 Jul 2006, tino wrote:

> Sorry, this is my message log (I had turned off syslog before)
>
> Jul 24 15:38:32 tproxy (squid): xstrdup: tried to dup a NULL pointer!
> Jul 24 15:38:33 tproxy squid[2049]: Squid Parent: child process 2051 exited
> due to signal 6
>
> I thought this was a bug listed against Squid-2.6.PRE1?
> http://www.squid-cache.org/bugs/show_bug.cgi?id=1589
>
> Which patch should I add? I'm on 2.6.STABLE1, wccpv2+cttproxy

This could be bug 1684
http://www.squid-cache.org/bugs/show_bug.cgi?id=1684
If it is this bug, it has been fixed in the daily rebuild for about a
week.

Michael Young


Re: AW: [squid-users] Digest problem with 2.6STABLE1

2006-07-10 Thread M A Young
On Mon, 10 Jul 2006, Stolle, Martin wrote:

> I got the same problem: the cache digests aren't working any longer in my
> configuration. I tend to say that this might be a bug.
>
> A question to you:
> One thing changed in my configuration: I enabled --with-large-files with
> squid-2.6. Did you also change this parameter in your configure script?
> That would be the only explanation I can think of for why it wouldn't be a bug.

No, I was already running with --with-large-files before I upgraded; my
changes were to add --enable-epoll and --enable-ssl. So I think it must be
a real bug, and I have filed it in Bugzilla at
http://www.squid-cache.org/bugs/show_bug.cgi?id=1673

Michael Young


[squid-users] Digest problem with 2.6STABLE1

2006-07-06 Thread M A Young
I have just updated a couple of my caches to 2.6STABLE1, and noticed
that the other web caches are no longer getting digests from the updated
caches, but seem to be receiving a 404 response with the message "This
cache is currently building its digest." Have I broken something during
the upgrade, or is this a real bug?

Michael Young


[squid-users] Re: Squid go down by itself

2006-03-14 Thread M A Young
On Tue, 14 Mar 2006, Damian Mantelli (A.C.A.R.A) wrote:

> Hi, I have a problem. Everything was going OK on my SQUID server, but today
> I had an error. The Squid daemon went down by itself and I don't know why.
> I suspect the log files; for example, store.log reached 2048 MBytes.
>
> Can the log files make my Squid server go down?

Yes, squid will crash if your log files get to 2GB. A workaround is to
rotate your log files more frequently (sketched below), but it may also be
worth working out what traffic is causing the files to fill up in the first
place, and whether you can reasonably reduce it.
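
(A sketch of the rotation workaround, with example values; logfile_rotate
controls how many old log generations squid keeps, and the cron line
triggers a rotation every six hours so the files stay well under 2GB:)

 # squid.conf
 logfile_rotate 10

 # crontab entry (the squid binary path is an example)
 0 */6 * * * /usr/sbin/squid -k rotate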

Michael Young


Re: [squid-users] ip_conntrack tweaks

2005-06-13 Thread M A Young
On Mon, 13 Jun 2005, Paul Seaman wrote:

> I posted this because I run a squid proxy on a busy box. I am aware of the
> status of ip_conntrack as a non-Squid project. However, many people who run
> squid on this list do so on a busy box.

Strange, I have a much bigger squid box and don't recall seeing this
error. The one change I did make was to increase the
/proc/sys/net/ipv4/neigh/default/gc_thresh3 value, which stops neighbour
overflow errors.
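
(Along these lines, with an example value; the right threshold depends on
how many neighbour table entries the box actually accumulates:)

 # inspect the current limit
 cat /proc/sys/net/ipv4/neigh/default/gc_thresh3
 # raise it for the running system; add to /etc/sysctl.conf to persist
 sysctl -w net.ipv4.neigh.default.gc_thresh3=4096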

Michael Young


[squid-users] Re: Problem with unparseable HTTP header field

2005-02-18 Thread M A Young
On Fri, 18 Feb 2005, Ralf Hildebrandt wrote:

> When I surf to http://www.abstractserver.de/da2005/avi/e/Abs_revi.htm
> and enter any number/character and click "Submit my query", I get an
> error page ("Invalid Response": The HTTP Response message received from
> the contacted server could not be understood or was otherwise
> malformed).
See bug 1242
http://www.squid-cache.org/bugs/show_bug.cgi?id=1242
The issue is that with 2.5S8 (or a well-patched 2.5S7) squid has become less
tolerant of illegal behaviour from web servers in the headers they serve
before the contents of the web page. If you fetch that page by hand (e.g.
with wget -S, as sketched below) you can see the HTTP headers:
 HTTP/1.0 200 OK
 Server: Microsoft-IIS/3.0
 Date: Fri, 18 Feb 2005 19:54:50 GMT
 HTTP/1.1 200 OK
 content-type: text/html
 content-length: 2617
 Connection: Keep-Alive
which is difficult to make sense of if you actually try to understand it:
is the answer HTTP/1.1 or HTTP/1.0?
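
(For reference, a command along these lines shows the raw headers;
-O /dev/null just discards the page body:)

 wget -S -O /dev/null http://www.abstractserver.de/da2005/avi/e/Abs_revi.htm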

Michael Young


Re: [squid-users] Re: Abnormal End (Squid 2.5S8)

2005-02-16 Thread M A Young
On Wed, 16 Feb 2005, Henrik Nordstrom wrote:

> On Wed, 16 Feb 2005, M A Young wrote:
>
> > I suggest you make sure you have applied the post-2.5S8 major patch for
> > odd DNS responses. This supposedly affects earlier versions of squid as
> > well, but it seemed to cause many more crashes for us when we moved from
> > 2.5S7 to 2.5S8RC3; these have stopped now that we have applied this patch.
>
> There were also many other segfault errors corrected between RC3 and
> STABLE8, so it's hard to tell which of the bugs was causing your problems
> without having a backtrace of the segfault, but yes, the DNS patch is good
> to have.

I did have backtraces of the problem, and the crashes matched the
symptoms of the DNS crash, so I am pretty sure this was actually the
problem, though of course the other segfaults may have made it more likely
to occur.

Michael Young


[squid-users] Re: Abnormal End (Squid 2.5S8)

2005-02-16 Thread M A Young
On Wed, 16 Feb 2005, Awie wrote:

> After running for a few days, last night (2005/02/16 01:59:2) our Squid
> 2.5S8 ended abnormally again. Below is the report from cache.log.

I suggest you make sure you have applied the post-2.5S8 major patch for
odd DNS responses. This supposedly affects earlier versions of squid as
well, but it seemed to cause many more crashes for us when we moved from
2.5S7 to 2.5S8RC3; these have stopped now that we have applied this patch.

Michael Young