[squid-users] Is Squid 2.5 actively maintained?

2007-01-17 Thread Bernhard Erdmann
Hi,

is Squid version 2.5 actively maintained? Will security bugs be fixed?

Regards
Bernhard


Re: [squid-users] re-creating configure

2005-06-24 Thread Bernhard Erdmann
I patched squid-2.5.STABLE10 with icap-2.5.patch. How do I re-create 
configure now that the patch has modified configure.in?



Look here:
http://www.squid-cache.org/Doc/FAQ/FAQ.html#toc2.6


Hi, I can't find any instructions at this URL on how to re-create 
configure after configure.in has been patched.



And tell us, where did you get that patch?
I can't find it here:
http://www.squid-cache.org/Versions/v2/2.5/bugs/#STABLE10


The ICAP patch is on http://devel.squid-cache.org/icap/ and the patch 
URL is http://devel.squid-cache.org/cgi-bin/diff2/icap-2.5.patch?s2_5
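
For reference, re-creating configure after patching configure.in normally just 
means re-running the GNU autotools from the top of the source tree. A minimal 
sketch, assuming an autoconf version compatible with Squid 2.5 is installed 
(the exact ICAP configure switch below is an assumption, check ./configure --help):

  cd squid-2.5.STABLE10
  # regenerate ./configure from the patched configure.in
  autoconf
  # only if Makefile.am/aclocal.m4 were changed as well (assumption):
  #   aclocal && automake
  ./configure --enable-icap-support   # hypothetical flag, see ./configure --help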


Re: [squid-users] blocking sites

2005-06-24 Thread Bernhard Erdmann

Damian Forrester wrote:
I am trying to block some sites using my Squid proxy 2.5.STABLE. I 
configured the ACLs to deny them. How can I verify that the file that 
contains the blocked sites is being checked? I am still able to access 
these sites, and they should be blocked. Can someone advise?


Check the atime (access time) of the file using ls -lu.
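
To illustrate, a sketch assuming the list of blocked sites lives in 
/etc/squid/blocked_sites.txt (hypothetical path) and is referenced by a 
dstdomain ACL in squid.conf:

  # squid.conf (sketch): deny requests to the domains listed in the file
  acl blocked_sites dstdomain "/etc/squid/blocked_sites.txt"
  http_access deny blocked_sites

  # Squid reads the file at startup and on "squid -k reconfigure";
  # the file's access time then shows whether it was actually read:
  ls -lu /etc/squid/blocked_sites.txt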


[squid-users] re-creating configure

2005-06-23 Thread Bernhard Erdmann

Hi,

I patched squid-2.5.STABLE10 with icap-2.5.patch. How do I re-create 
configure now that the patch has modified configure.in?


Regards,
Bernhard


[squid-users] mem_node

2004-07-08 Thread Bernhard Erdmann
Hi,
what is meant by mem_node on cachemgr.cgi / Memory Utilization?
Regards


[squid-users] Memory usage and mem_node size

2004-07-07 Thread Bernhard Erdmann
Hi,
I have a Squid 2.5.STABLE5 running on Linux (RHL 8.0, 1.5 GB RAM) using 
28 GB of disk space and serving 5,000 users. It has been in use for more 
than a year now.

In the last few days, the squid process has grown drastically in size. 
Usually, squid uses around 400 MB of memory. At some point, it starts 
allocating more and more RAM. I've seen it using 1.2 GB of RAM and 
paging aggressively.

The proxy has 1.8 million objects on disk, so 150 MB for the metadata 
should be enough.

In cachemgr.cgi / Memory Utilization I can see mem_node growing bigger 
and bigger when process size increases.

What's mem_node's purpose? I've seen mem_node allocating 800 MB when the 
squid process had 1.2 GB.

Config:
cache_mem 100 MB
maximum_object_size 250 MB
cache_dir diskd /var/spool/squid-standard 28000 16 256
At first, I tried to limit memory pooling:
memory_pools_limit 64 MB
- no difference
Then I tried to switch off pooling:
memory_pools off
- no difference
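For anyone who wants to watch the mem_node pool over time, the same Memory 
Utilization data that cachemgr.cgi shows can be pulled from the command line; 
a sketch assuming squidclient is installed, Squid listens on localhost:3128 
and no cachemgr password is required:

  # dump the Memory Utilization table and pick out the mem_node pool
  squidclient -h localhost -p 3128 mgr:mem | grep mem_node

  # overall process/memory counters for comparison
  squidclient -h localhost -p 3128 mgr:info | grep -i memory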
I've seen segmentation violations in cache.log:
FATAL: Received Segment Violation...dying.
2004/07/07 10:15:14| storeDirWriteCleanLogs: Starting...
2004/07/07 10:15:31| Starting Squid Cache version 2.5.STABLE5 for 
i686-pc-linux-gnu...
2004/07/07 10:15:31| Process ID 17870
2004/07/07 10:15:31| With 8192 file descriptors available

FATAL: Received Segment Violation...dying.
2004/07/07 10:54:46| storeDirWriteCleanLogs: Starting...
2004/07/07 10:54:46| WARNING: Closing open FD   30
2004/07/07 10:54:46| 65536 entries written so far.
2004/07/07 10:54:46| 131072 entries written so far.
2004/07/07 10:54:46| 196608 entries written so far.
2004/07/07 10:54:46| 262144 entries written so far.
2004/07/07 10:54:47| 327680 entries written so far.
2004/07/07 10:54:47| 393216 entries written so far.
2004/07/07 10:54:48| 458752 entries written so far.
2004/07/07 10:54:48| 524288 entries written so far.
2004/07/07 10:54:48| 589824 entries written so far.
2004/07/07 10:54:48| 655360 entries written so far.
2004/07/07 10:54:48| 720896 entries written so far.
2004/07/07 10:54:49| 786432 entries written so far.
2004/07/07 10:54:49| 851968 entries written so far.
2004/07/07 10:54:49| 917504 entries written so far.
2004/07/07 10:54:49| 983040 entries written so far.
2004/07/07 10:54:49| 1048576 entries written so far.
2004/07/07 10:54:53| Starting Squid Cache version 2.5.STABLE5 for 
i686-pc-linux-gnu...

FATAL: Received Segment Violation...dying.
2004/07/07 12:37:52| storeDirWriteCleanLogs: Starting...
2004/07/07 12:37:52| WARNING: Closing open FD   30
2004/07/07 12:37:52| 65536 entries written so far.


[squid-users] request and byte hit ratio

2003-12-10 Thread Bernhard Erdmann
Hi,

how do I calculate the request hit ratio and the byte hit ratio?

http rhr = client_http.hits/client_http.requests?
http bhr = 1 - server.http.kbytes_in/client_http.kbytes_out ?
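
As a sketch, assuming those formulas are right, the raw counters can be pulled 
from the cache manager and the two ratios computed on the fly (assuming 
squidclient on localhost and no cachemgr password):

  squidclient mgr:counters | awk -F'= ' '
    /client_http.requests/   { req  = $2 }
    /client_http.hits/       { hits = $2 }
    /client_http.kbytes_out/ { out  = $2 }
    /server.http.kbytes_in/  { sin  = $2 }
    END {
      printf "request hit ratio: %.1f%%\n", 100 * hits / req
      printf "byte hit ratio:    %.1f%%\n", 100 * (1 - sin / out)
    }'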
Regards,
Bernie


[squid-users] 2 GB file size limit

2003-10-23 Thread Bernhard Erdmann
Hi,

Squid 2.5.STABLE1 on Red Hat Linux 8.0 + SGI XFS 1.2 just crashed because
access.log grew to 2 GB in size.

How can I build squid to handle file sizes larger than 2 GB properly?

Regards,
Bernie
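
For the record, the usual way to get a 64-bit off_t on 32-bit Linux is to 
build with the glibc large-file-support macros. A minimal sketch, assuming 
Squid 2.5's configure honours externally supplied CFLAGS (later Squid 
releases grew a dedicated --with-large-files switch):

  cd squid-2.5.STABLE1
  CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" \
    ./configure --prefix=/usr/local/squid
  make && make install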



RE: [squid-users] 2 GB file size limit

2003-10-23 Thread Bernhard Erdmann
 You can't, because you probably hit a
 restriction of your filesystem, not Squid.

Hi,

the filesystem (XFS) does support file sizes larger than 2 GB.

Regards,
Bernie



Re: [squid-users] 2 GB file size limit

2003-10-23 Thread Bernhard Erdmann
   What's the exact message in cache.log?

There is no message in cache.log.
When started with -X -N, squid prints "File size limit exceeded" at the
end.



Re: [squid-users] 2 GB file size limit

2003-10-23 Thread Bernhard Erdmann
 Why do you want 2 Gbyte log files? Surely you should either be rotating
 your log files, or else they contain information which nobody is ever
 going to be bothered to look through?

 I know it's not an answer to your question, but I've never understood
 the purpose of keeping such enormous quantities of logging information?
[...]


This 2 GB logfile was written over only four days; log rotation
was not configured properly.

If your Squid is just a little busier, hitting 2 GB after a single
day is no problem.
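
To keep the logs well below that size in the future, Squid's built-in rotation 
is enough; a sketch assuming a default /usr/local/squid installation and a 
root crontab:

  # squid.conf: keep 10 rotated generations of access.log/store.log/cache.log
  logfile_rotate 10

  # crontab entry: rotate the logs every night at 00:05
  5 0 * * *  /usr/local/squid/sbin/squid -k rotate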



Re: [squid-users] 2 GB file size limit

2003-10-23 Thread Bernhard Erdmann
Henrik Nordstrom wrote:
[...]
For various reasons the OS limits file sizes to 32 bits on 32-bit 
architectures such as Intel x86 and compatibles.
[...]

Henrik, many thanks for your excellent answer!



[squid-users] filtering java applets

2003-07-10 Thread Bernhard Erdmann
Hi,

I'd like to use Squid for filtering java applets.

Any idea how to realise it?

Yes, Squid is a proxy cache, not a policeman, but maybe someone knows of
an add-on, or of an HTTP proxy specialized in Java applet filtering that
could be chained upstream.
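
As a crude first step that should work with a stock Squid 2.5 (a hedged, 
untested sketch): deny replies whose MIME type looks like a Java applet, and 
optionally requests for .class/.jar URLs. This will not catch applets served 
with misleading Content-Types or wrapped inside archives.

  # squid.conf sketch
  acl java_mime rep_mime_type -i ^application/x-java ^application/java
  http_reply_access deny java_mime

  acl java_urls urlpath_regex -i \.class$ \.jar$
  http_access deny java_urls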

Regards
Bernie



Re: [squid-users] My ignorance or Squid lack this?

2003-06-24 Thread Bernhard Erdmann
Well, my feeling is that talking to your user and explaining to him why 
sucking bandwidth at 2 Mb/s is bad would help much more than relying on 
technical solutions.



[squid-users] Re: Squid stops when log fills up (was: Nessus vulnerability scan crashes squid)

2003-06-12 Thread Bernhard Erdmann
 The problem was the 2G filesize limit. The cache.log was filling up and
causing Squid to stop responding or even die entirely. The logrotate [...]
Hi,

the problem is: Squid stops working when it can't write its logs. This has 
bitten me too. Reaching the maximum file size or running out of filesystem 
space looks just the same to Squid.

Regards
Bernie


[squid-users] load balancing HTTP servers using Squid

2003-06-11 Thread Bernhard Erdmann
Hi,

I'm searching for a software load balancer for HTTP servers.

Scenario:
We have two web servers in Germany and two in the USA for German 
content. For US/English content there are two web servers in the USA and 
two in Germany. The setups are similar, so I'll concentrate on a single case.

Dream:
Requests should get dynamically balanced across the two German web servers. 
If one fails, the second gets each and every request. If both fail, 
users will be served by the US servers, via HTTP redirects or reverse 
proxying.

                     Internet
                        |
                        |
   German load balancer --- US load balancer
           |                        |
           |                        |
       firewall                 firewall
        /    \                   /    \
       /      \                 /      \
    www1      www2           www3      www4
(The load balancers never fail ;-))

Steps to a solution:
http://devel.squid-cache.org/rproxy/ seems to be a good starting point. 
What is its state? Can Squid-2.5-STABLE2 be used for this setup? Is 
Squid-3.0-DEV geared towards these requirements? 
http://www.squid-cache.org/mail-archive/squid-dev/200010/0321.html has a 
nice patch for 2.3-STABLE4, but how far has development gone?
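
As a sketch of what the accelerator side of this might look like, written in 
the later Squid 2.6-style syntax (cache_peer with originserver and 
round-robin; the host names are hypothetical), assuming Squid sits in front 
of the two German web servers:

  # squid.conf sketch, 2.6-style accelerator config
  http_port 80 accel vhost
  cache_peer www1.example.de parent 80 0 no-query originserver round-robin
  cache_peer www2.example.de parent 80 0 no-query originserver round-robin
  # if both German servers are down, fail over to the US site via an
  # external redirector or monitoring script (not shown here)

Whether the 2.5 rproxy branch accepts an equivalent configuration is exactly 
what I'm asking about.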

Regards
Bernie