Hello list,
I finally upgraded my old squid-3.0.STABLE25 installations to
squid-3.1.10 / 3.1.11 on RedHat AS5.
But now I have some asserts:
2011/03/16 09:47:50| comm_old_accept: FD 2196: (22) Invalid argument
2011/03/16 09:47:50| FTP data connection from unexpected server ([::]), expecting
Current Stable Squid 2.7.STABLE9 or 3.1.11
Beta testers wanted for 3.2.0.5
[Attachment ftp_crash.patch deleted by Martin Pichlmaier/usr/cag]
Hello list,
I just wanted to post the results with valgrind.
Unfortunately the memcheck tool needs so much CPU that I could not
put a high load on the squid; the maximum was only about 5-10 req/s.
# ./squid -v
Squid Cache: Version 3.1.3
configure options: '--prefix=/appl' '--localstate=/var'
Thank you for your info, I will give it a try.
Martin
Marcus Kool marcus.k...@urlfilterdb.com wrote on 17.06.2010 16:15:09:
Martin,
Valgrind is a memory leak detection tool.
You need some developer skills to run it.
If you have a test environment with low load you may want
to give it a
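For anyone wanting to reproduce Martin's test, here is a rough sketch of running squid under valgrind's memcheck; the paths are illustrative (taken from the --prefix shown above), and as noted the slowdown is severe:

```shell
# Run squid in the foreground (-N) under memcheck; the leak
# report is written to the log file when the process exits.
valgrind --tool=memcheck --leak-check=full \
         --log-file=/tmp/squid-valgrind.log \
         /appl/sbin/squid -N -d 1
```

Squid's configure script also offers a --with-valgrind-debug option that adds hooks so memcheck sees through squid's memory pools.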
Hello,
I just wanted to report back the last tests:
After the memory cache is filled to 100%, squid (3.1.4 or 3.1.3)
still needs more memory over time when under load, about 1-2 GB a day.
memory_pools off did not change anything; the process size still rises.
The high CPU usage seems to start
Hello list,
I have a question regarding memory and CPU usage change from 3.0 to 3.1.
I have 4 forward proxies with ICAP (c-icap and clamav) and NTLMv2
authentication; all four proxies each have about 200-400 req/sec,
on RedHat AS5 64-bit servers with 16 GB of memory each, for about 15k to 30k users.
Amos Jeffries squ...@treenet.co.nz wrote on 15.06.2010 10:48:33:
martin.pichlma...@continental-corporation.com wrote:
Hello list,
I have a question regarding memory and CPU usage change from 3.0 to
3.1.
I have 4 forward proxies with ICAP (c-icap and clamav), NTLMv2
Hello,
core files are created when squid crashes.
It would make sense to find out why squid writes core dumps.
Some documentation:
http://wiki.squid-cache.org/SquidFaq/BugReporting
To prevent writing of core files set the core file limit to 0.
It could be ulimit -c 0 or something similar.
Also
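As a concrete example (plain POSIX shell, nothing squid-specific):

```shell
# Set the maximum core file size to 0 blocks for this shell and
# every process started from it, so no core dumps are written.
ulimit -c 0

# Verify the limit; prints 0
ulimit -c
```

To make this persistent for squid on RedHat you would typically set it in the init script that starts squid, or via /etc/security/limits.conf.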
Hi Mike,
you have to connect to the LDAP server on port 3268 (the Active Directory
global catalog) instead of the default port 389 (-h), and change the base DN
under which to search for the accounts (-b) to dc=domain,dc=com.
It should look like:
auth_param basic program /usr/lib64/squid/squid_ldap_auth -R -b
dc=domain,dc=com -D
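Pieced together, the whole line might look like the sketch below; the bind DN, password, filter and server name are placeholders, not values from the original mail:

```
auth_param basic program /usr/lib64/squid/squid_ldap_auth -R -b dc=domain,dc=com -D cn=squidbind,cn=Users,dc=domain,dc=com -w secret -f sAMAccountName=%s -h ldapserver.domain.com -p 3268
```

Searching the global catalog on port 3268 means accounts from any domain in the forest can be resolved, which is why it is preferred over 389 here.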
I had the same problems with squid-3.0.STABLExx (current 19) and saw up to
20 crashes a day.
Therefore I made a very crude workaround to avoid the crashes -- I do not
use disk cache in my config and
simply skip calling the routine scheduleDiskRead altogether by changing
the source code of
Thank you Amos,
your patch did the trick, it now works smoothly.
I didn't have time to test yesterday, so sorry for the late
response.
Martin
Amos Jeffries squ...@treenet.co.nz
27.07.2009 17:00
To
martin.pichlma...@continental-corporation.com
Cc
Squid squid-users@squid-cache.org
Hello all,
I just compiled squid-3.0.STABLE17 and it compiled fine.
Unfortunately I now get many warning messages in cache.log (still testing,
not yet in a production environment):
2009/07/27 15:11:26| HttpMsg.cc(157) first line of HTTP message is invalid
2009/07/27 15:11:28| HttpMsg.cc(157) first
I checked -- cached objects are not re-checked, at least not within two or
three hours.
But the memory usage is still higher even without icap while the cache is
still filling -- but this
may be due to the fact that I configured squid to cache objects only up to 1
MB, while icap scans larger objects, too.
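The 1 MB cap mentioned above would normally be set with a directive like the following (a sketch; the exact value used in the original setup is not shown in the mail):

```
# Keep only objects up to 1 MB in the memory cache
maximum_object_size_in_memory 1024 KB
```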
Hi Everybody,
I have a question regarding memory usage for squid. I have 4 proxies, each
has about 200-400 req/s and 2-5 MB/s with ntlm_auth and about 1000 lines
of acl,
squid version is 3.0.STABLE15 on Redhat AS 5 Linux.
They are busy servers and therefore have no disk cache but memory cache
Hi Dayo,
you have to recompile squid for this with the additional configure option
'--enable-arp-acl'.
There are some other constraints; read through the documentation (for
example the config file).
snip from config file version 3.0.STABLE15
# acl aclname arp mac-address ...
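Continuing the snippet, a usage sketch (the MAC address is made up for illustration):

```
# allow only a known machine, identified by its MAC address
acl knownmac arp 00:11:22:33:44:55
http_access allow knownmac
```

Note that ARP ACLs only work for clients on the same subnet as the proxy, since MAC addresses do not cross routers.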
Hello all,
some of my users complain that a page (www.bestjobs.ro) with cookies and
some other stuff sometimes hangs,
returns Connection reset by peer, and so on.
Some problems can be resolved by reloading the page, some cannot.
The pages that cause problems are not the normal ones but
Hi Guido,
thank you for your help and reply!
Somehow I missed that option when searching for that at
www.squid-cache.org.
I was looking for options with DNS in the name :-)
Regards,
Martin
Guido Serassio guido.seras...@acmeconsulting.it wrote on 15.04.2009
13:04:33:
Hi,
At 09.01