[squid-users] cache_peer default and round-robin

2004-09-02 Thread lderuaz
Hello to all,

I am using Squid Version 2.5.STABLE6-20040729 and Samba 3.0.2a on Red Hat ES3.0.

Here is my squid architecture :

I've got two internal proxies which perform NTLM authentication of the
users.

Some other proxies inside the company (remote sites) are defined as parents
for the remote intranet.

One external Squid proxy is defined as the default parent and handles the
remaining traffic (internet).

I now have another server that I need to install as a failover for the
external Squid server.

Can I also define this second parent as default (in addition to the first one)?
Or should I use the round-robin option for the two external servers (without
default)?
Or can I use both (round-robin and default) for these servers?

Thanks in advance for your help.

Lionel

squid.conf:


acl remote_intranet dstdomain .company


cache_peer remote_proxy  parent 80 0 no-query
cache_peer_access remote_proxy allow remote_intranet

cache_peer external_proxy_1 parent 80 0 no-query default
cache_peer_access external_proxy_1 deny remote_intranet
cache_peer_access external_proxy_1 allow all
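
A sketch of what the failover setup asked about might look like if the
round-robin option is chosen (the peer name external_proxy_2 is hypothetical;
only external_proxy_1 appears above). With round-robin, requests are balanced
between the parents, and Squid stops selecting a parent it detects as dead,
which gives failover without the default flag:

```
cache_peer external_proxy_1 parent 80 0 no-query round-robin
cache_peer external_proxy_2 parent 80 0 no-query round-robin
cache_peer_access external_proxy_1 deny remote_intranet
cache_peer_access external_proxy_2 deny remote_intranet
cache_peer_access external_proxy_1 allow all
cache_peer_access external_proxy_2 allow all
```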


[squid-users] Squid 2.5STABLE5 and SNMP problem on Fedora Core 2

2004-09-02 Thread Endre Szekely-Bencedi
Hi list,

I have a problem I've been struggling with for the past two days. I am running
Webalizer for my Squid proxy, but the other day I found something on the
internet about MRTG and Squid, with some sample graphs, and I liked it a lot;
I knew I couldn't live without MRTG any more. :) So I tried to set it up
according to the MRTG and Squid FAQs. A further problem was that I was already
running a Squid I didn't want to ruin (although it's a test machine), so I
recompiled Squid with the --enable-snmp option. To make sure I am running the
new copy, I made a copy of the Squid startup script (Red Hat-alike) and edited
it, and also set mode 000 on my 'old' Squid binary and config files. So I have
the SNMP-enabled one as squid2.5 and the config as squid2.5.config.
Of course, SNMP won't work. I searched Google and found many, many questions
but not a single answer that was useful to me.

Okay, my squid conf looks like:
acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic all
snmp_incoming_address 0.0.0.0
snmp_outgoing_address 255.255.255.255

This is what I got by grepping the file for snmp and deleting the
commented lines, so all the relevant data should be here.

A full log of the squid -k reconfigure gives:
Sep  2 11:26:35 thor squid[19621]: Reconfiguring Squid Cache (version
2.5.STABLE5)...
Sep  2 11:26:35 thor squid[19621]: SmartFilter: Disabling SmartFilter,
freeing resources
Sep  2 11:26:35 thor squid[19621]: SmartFilter: Canceling communications
thread
Sep  2 11:26:35 thor squid[19621]: SmartFilter: Waiting for communications
thread to exit
Sep  2 11:26:35 thor squid[19621]: SmartFilter: Communications thread
exited
Sep  2 11:26:35 thor (squid)[19621]: FD 21 Closing HTTP connection
Sep  2 11:26:35 thor (squid)[19621]: FD 23 Closing ICP connection
Sep  2 11:26:35 thor (squid)[19621]: FD 24 Closing SNMP socket
Sep  2 11:26:35 thor (squid)[19621]: Cache dir '/var/squid/cache' size
remains unchanged at 1048576 KB
Sep  2 11:26:35 thor squid[19621]: DNS Socket created at 0.0.0.0, port
32814, FD 8
Sep  2 11:26:35 thor squid[19621]: Adding nameserver x.x.x.a from
/etc/resolv.conf
Sep  2 11:26:35 thor squid[19621]: Adding nameserver x.x.x.b from
/etc/resolv.conf
Sep  2 11:26:35 thor squid[19621]: Adding nameserver x.x.x.c from
/etc/resolv.conf
Sep  2 11:26:35 thor squid[19621]: Smartfilter: Initializing SmartFilter
Sep  2 11:26:35 thor squid[19621]: SmartFilter: SmartFilter Plugin Library
Version 4.0.0.00
Sep  2 11:26:36 thor squid[19621]: SmartFilter: Trying to start plugin
thread
Sep  2 11:26:36 thor squid[19621]: SmartFilter: Created communication
thread
Sep  2 11:26:36 thor squid[19621]: SmartFilter: SmartFilter init:
SmartFilter initialized.
Sep  2 11:26:36 thor squid[19621]: Accepting HTTP connections at 0.0.0.0,
port 80, FD 22.
Sep  2 11:26:36 thor squid[19621]: Accepting ICP messages at 0.0.0.0, port
3130, FD 24.
Sep  2 11:26:36 thor squid[19621]: Accepting SNMP messages on port 3401, FD
25.
Sep  2 11:26:36 thor squid[19621]: WCCP Disabled.
Sep  2 11:26:36 thor squid[19621]: Configuring Parent y.y.y.y/80/7
Sep  2 11:26:36 thor squid[19621]: Loaded Icons.
Sep  2 11:26:36 thor squid[19621]: Ready to serve requests.

The only things I changed are the x.x.x.x and y.y.y.y addresses; those are
valid addresses, I can browse through my Squid flawlessly, and the
SmartFilter plugin (web filtering) works as it should.

Testing SNMP looks like:
[EMAIL PROTECTED] etc]# snmpwalk -v 1 -c public 127.0.0.1:3401
.1.3.6.1.4.1.3495.1.1
Timeout: No Response from 127.0.0.1:3401
[EMAIL PROTECTED] etc]#

/var/log/message shows:
Sep  2 11:29:30 thor squid[19621]: Failed SNMP agent query from :
127.0.0.1.
Sep  2 11:29:35 thor last message repeated 5 times
Sep  2 11:30:00 thor squid[19621]: Failed SNMP agent query from : a.b.c.d.

The last two lines come from MRTG trying to query SNMP as well; a.b.c.d is
the same as localhost, it's the same machine. The first error comes from the
snmpwalk command.

Searching Google didn't give me any useful results, so either my problem is
somewhat special compared to other users', or it is so stupid that I'm the
only one having it (I know next to nothing about SNMP), so I'd appreciate
any input regarding this.

Also, should the snmp daemon or anything else be running? I have a feeling
that it doesn't have much to do with this, as Squid itself should reply.
Anyway, I have snmpd on my machine (unconfigured; I took a look at the config
and it looks pretty much encrypted to me).

Thanks.

Greetings,
Endre Szekely-Bencedi
GE GDC Security Leader
_
TATA Consultancy Services - Hungary
Science Park B, 1st floor
Irinyi Jozsef utca 4-20 Budapest 1117
Tel: +36 1 886 TATA  | +36 1 886 8022
Fax: +36 1 886 8001
Email: [EMAIL PROTECTED]
_


[squid-users] Two authentication schemes, NTLM and LDAP

2004-09-02 Thread Michael Pophal
Hi all,

My problem is that I have to provide two authentication schemes, LDAP and
NTLM. Unfortunately the user has no choice of which scheme to use, because
this is negotiated between browser and proxy. The strongest
authentication scheme wins -> NTLM. But some of my users only have
credentials in LDAP, others on the domain controller (NTLM).

I tried to give them the choice by running one proxy on two different ports,
separating the http_access lines by

acl NTLM_auth_port myport 
acl LDAP_auth_port myport 3334

http_access allow NTLM_auth_port NTLM_authenticated_user
http_access allow LDAP_auth_port LDAP_authenticated_user

but this doesn't help.

So the next step is to run two Squids on one machine. Here is my question:
is it feasible to share one disk cache between both Squids (I run
diskd)? I don't want a redundant disk cache.

If you have any good ideas about the above problem, I would very much
appreciate them!

Thanks !!

Regards,
  Michael




Re: [squid-users] querying the cache to see if it stores a specific object

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Thomas Haferlach wrote:
What is the simplest way to query a cache for example from a java
application to check if a certain URL is held by the cache or not?
See RFC2616, look for only-if-cached.
Regards
Henrik


Re: [squid-users] New generation redirector interface??

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, David Hubner wrote:
I heard there is a new redirector interface, apart from the old format of:
Url IP/FQDN Username Method
Could anyone tell me what it is and point me to any documentation? I could not find any on www.squid-cache.org.
The Squid-3 helper interface supports what the developers refer to as
"overlapping requests", where multiple requests are queued to the
redirector. The difference is an added request number in front of the
request:

requestnumber Url IP/FQDN Username Method

and the response needs to look like:

requestnumber url
The helper may reorder the replies as it pleases. The intention of this 
interface is to allow for a multithreaded lookup engine or similar, 
reducing the number of processes required by allowing one process to 
handle many concurrent requests.
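
A minimal sketch of the framing described above, for a single request/reply
round-trip (the example.com -> example.org rewrite rule is purely
illustrative, and a real helper would loop over stdin for its lifetime):

```shell
# Field 1 is the request number, field 2 the URL; the reply repeats the
# request number in front of the (possibly rewritten) URL, so replies can
# be matched up even when the helper reorders them.
printf '0 http://example.com/page 10.0.0.5/- - GET\n' |
awk '{ sub(/example\.com/, "example.org", $2); print $1, $2 }'
# prints: 0 http://example.org/page
```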

This is also available in the Basic and Digest authenticators, and the 
intention is to extend this to the NTLM authenticators as well.

Regards
Henrik


RE: [squid-users] Redirector

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Elsen Marc wrote:
My now getting older Compaq ML370 with a fairly old
Linux 2.2 kernel still laughs at our 1500-user base.
Much depends on the avg. HTTP reqs/sec Squid has to deal with.
In our case that is around 45 reqs/sec during office hours, which is
still rather mild for Squid.
Hehe.
Just to give some numbers from the other side of the spectrum: the eMARA 
reverse proxy / SSL accelerator from MARA Systems now handles, at peak rate, 
1100 HTTPS requests/s in the lab with full RSA handshakes, encryption and 
everything involved in terminating SSL connections. Admittedly with the help 
of special crypto hardware, as a P4 2.8 GHz CPU can only do about 200 RSA 
handshakes/s.

Regards
Henrik


Re: [squid-users] Two authentication schemes, NTLM and LDAP

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Michael Pophal wrote:
My problem is that I have to provide two authentication schemes, LDAP and
NTLM. Unfortunately the user has no choice of which scheme to use, because
this is negotiated between browser and proxy. The strongest
authentication scheme wins -> NTLM. But some of my users only have
credentials in LDAP, others on the domain controller (NTLM).
I tried to give them the choice by running one proxy on two different ports,
separating the http_access lines by
This is not possible with a single Squid instance. All the configured 
authentication schemes are active whenever authentication is requested.

What you can do is to set up two instances of Squid, one connected to the 
domain controller for both Basic and NTLM, the other connected to your 
LDAP server for only Basic.

So the next step is to run two squids on one machine. Here my question:
Is it feasible to share one disk cache between both squids (I run
diskd)? I don't want to have redundant disk cache.
No, each needs to have its own cache.
What you can do is to only have cache on one of them, and forward all 
requests from the other to the one with cache.

Regards
Henrik


[squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Rakesh Kumar Pant
I am using Squid 2.5 (STABLE 1) on Red Hat Linux 9.

I want to view separate upload (i.e. the length of data sent out of the
network) and download (i.e. the length of data received) details for each host.
Is it possible to fetch such information from access.log?

If yes, how can I do it?

If not, what are the other ways to determine the length of uploaded and
downloaded data?

Do I have to make changes to the firewall (iptables) or something in the
squid.conf file?

If anyone can help me I shall be highly obliged.
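
For the download side, the default (native) access.log can at least be summed
per client, since field 5 is the number of bytes Squid sent to the client. A
sketch with a made-up two-line sample log (upload volume is not in the default
format):

```shell
# Sum bytes delivered per client host from a native-format access.log.
# The sample lines are fabricated; field 3 is the client IP, field 5 the
# reply size in bytes.
cat > access.log.sample <<'EOF'
1094121600.123 250 10.0.0.5 TCP_MISS/200 1024 GET http://example.com/ - DIRECT/1.2.3.4 text/html
1094121601.456 80 10.0.0.5 TCP_HIT/200 512 GET http://example.com/a - NONE/- text/html
EOF
awk '{ bytes[$3] += $5 } END { for (h in bytes) print h, bytes[h] }' access.log.sample
# prints: 10.0.0.5 1536
```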

Sorry for mistakes



Regards
Rakesh Pant



RE: [squid-users] Redirector

2004-09-02 Thread Elsen Marc

> ...
> ...
> 
> Just to give some numbers from the other side of the spectrum: the eMARA
> reverse proxy / SSL accelerator from MARA Systems now handles, at peak
> rate, 1100 HTTPS requests/s in the lab with full RSA handshakes,
> encryption and everything involved in terminating SSL connections.
> Admittedly with the help of special crypto hardware, as a P4 2.8 GHz CPU
> can only do about 200 RSA handshakes/s.
 
Well, as they always say: it's not always quantity but quality that counts.
Today 2.5.STABLE6 gives me unmatched 'service uptime'; I am now nearing a
record period of 60 days of trouble-free HTTP proxy service.

M.


RE: [squid-users] Redirector

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Elsen Marc wrote:
Today 2.5.STABLE6 gives me unmatched 'service uptime'; I am now nearing a
record period of 60 days of trouble-free HTTP proxy service.
Thanks!
Regards
Henrik


Re: [squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Rakesh Kumar Pant wrote:
I want to view separate upload (i.e. the length of data sent out of the
network) and download (i.e. the length of data received) details for each host.
Is it possible to fetch such information from access.log?
Only if using a custom log format (a patch to 2.5 is required for this).
By default Squid only logs the amount of data sent to the client, not how 
much the client sent to the Internet.

Regards
Henrik


[squid-users] Recompile Squid : Procedure needed

2004-09-02 Thread Lilian . Gix
Hello,


I often have this in my cache.log:
WARNING! Your cache is running out of filedescriptors

I read that it is necessary to recompile Squid for this, so I'm looking for
the procedure.
My squid is 2.6Stable6 on Debian Linux.

Thanks.

L.G.


Re: [squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Rakesh Kumar Pant
Thanks for the info.

I am pretty new to all this. Can you guide me to the patch that is required
and how do I go about creating a custom log format.

Regards
Rakesh


- Original Message -
From: "Henrik Nordstrom" <[EMAIL PROTECTED]>
To: "Rakesh Kumar Pant" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, September 02, 2004 6:08 PM
Subject: Re: [squid-users] How to retrieve upload and download separately
from access.log


> On Thu, 2 Sep 2004, Rakesh Kumar Pant wrote:
>
> > I want to view separate upload (i.e. the length of data sent out of the
> > network) and download (i.e. the length of data received) details for
> > each host.
> > Is it possible to fetch such information from access.log?
>
> Only if using a custom log format (a patch to 2.5 is required for this).
>
> By default Squid only logs the amount of data sent to the client, not how
> much the client sent to the Internet.
>
> Regards
> Henrik
>



RE: [squid-users] Recompile Squid : Procedure needed

2004-09-02 Thread Elsen Marc
 
> 
> Hello,
> 
> 
> I often have this in my cache.log:
> WARNING! Your cache is running out of filedescriptors
> 
> I read that it is necessary to recompile Squid for this, so I'm looking
> for the procedure.
> My squid is 2.6Stable6 on Debian Linux
> 
> Thanks.
> 

  Read the 

INSTALL
 
  file that comes with the distribution.

  M.


Re: [squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Rakesh Kumar Pant
I have just upgraded to Squid 2.5 (STABLE 6) production release. Do I still
need a patch?

Regards
Rakesh
- Original Message -
From: "Henrik Nordstrom" <[EMAIL PROTECTED]>
To: "Rakesh Kumar Pant" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, September 02, 2004 6:08 PM
Subject: Re: [squid-users] How to retrieve upload and download separately
from access.log


> On Thu, 2 Sep 2004, Rakesh Kumar Pant wrote:
>
> > I want to view separate upload (i.e. the length of data sent out of the
> > network) and download (i.e. the length of data received) details for
> > each host.
> > Is it possible to fetch such information from access.log?
>
> Only if using a custom log format (a patch to 2.5 is required for this).
>
> By default Squid only logs the amount of data sent to the client, not how
> much the client sent to the Internet.
>
> Regards
> Henrik
>



AW: [squid-users] Recompile Squid : Procedure needed

2004-09-02 Thread Lilian . Gix
I installed it with the automatic Debian package installation program.

So I don't know if I have such a file.

 

-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 2 September 2004 14:56
To: Gix, Lilian (BR/PII3) *; [EMAIL PROTECTED]
Subject: RE: [squid-users] Recompile Squid : Procedure needed

 
> 
> Hello,
> 
> 
> I often have this in my cache.log:
> WARNING! Your cache is running out of filedescriptors
> 
> I read that it is necessary to recompile Squid for this, so I'm looking
> for the procedure.
> My squid is 2.6Stable6 on Debian Linux
> 
> Thanks.
> 

  Read the 

INSTALL
 
  file that comes with the distribution.

  M.


Re: [squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Rakesh Kumar Pant wrote:
I am pretty new to all this. Can you guide me to the patch that is required
and how do I go about creating a custom log format.
http://devel.squid-cache.org/
Regards
Henrik


Re: [squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Rakesh Kumar Pant wrote:
I have just upgraded to Squid 2.5 (STABLE 6) production release. Do I still
need a patch?
Yes.
It is a new feature, not a bug fix. Squid-2.5 is in its STABLE cycle, 
where no new features are allowed except to correct a security threat.

Regards
Henrik


RE: [squid-users] Recompile Squid : Procedure needed

2004-09-02 Thread Chris Perreault
http://squid-docs.sourceforge.net/latest/book-full.html#AEN220

A good document to browse through...

Chris 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 02, 2004 9:23 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: AW: [squid-users] Recompile Squid : Procedure needed

I installed it with the automatic Debian package installation program.

So I don't know if I have such a file.

 

-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, 2 September 2004 14:56
To: Gix, Lilian (BR/PII3) *; [EMAIL PROTECTED]
Subject: RE: [squid-users] Recompile Squid : Procedure needed

 
> 
> Hello,
> 
> 
> I often have this in my cache.log:
> WARNING! Your cache is running out of filedescriptors
> 
> I read that it is necessary to recompile Squid for this, so I'm looking
> for the procedure.
> My squid is 2.6Stable6 on Debian Linux
> 
> Thanks.
> 

  Read the 

INSTALL
 
  file that comes with the distribution.

  M.


Re: [squid-users] Recompile Squid : Procedure needed

2004-09-02 Thread Ralf Hildebrandt
* [EMAIL PROTECTED] <[EMAIL PROTECTED]>:
> I installed it with automatic Debian package installation program.
> 
> So I don't know if I have such file.

apt-get source squid
cd squid
change the parameter you need to change
dpkg-buildpackage
-- 
Ralf Hildebrandt (i.A. des IT-Zentrum)  [EMAIL PROTECTED]
Charite - Universitätsmedizin BerlinTel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-BerlinFax.  +49 (0)30-450 570-916
IT-Zentrum Standort Campus Mitte  AIM.  ralfpostfix


Re: [squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Rakesh Kumar Pant
Thanks Henrik for the link. I will work on it and will get back to you
tomorrow.


Regards

Rakesh

- Original Message -
From: "Henrik Nordstrom" <[EMAIL PROTECTED]>
To: "Rakesh Kumar Pant" <[EMAIL PROTECTED]>
Cc: "Squid Users" <[EMAIL PROTECTED]>
Sent: Thursday, September 02, 2004 7:00 PM
Subject: Re: [squid-users] How to retrieve upload and download separately
from access.log


> On Thu, 2 Sep 2004, Rakesh Kumar Pant wrote:
>
> > I have just upgraded to Squid 2.5 (STABLE 6) production release. Do I
> > still need a patch?
>
> Yes.
>
> It is a new feature, not a bug fix. Squid-2.5 is in its STABLE cycle,
> where no new features are allowed except to correct a security threat.
>
> Regards
> Henrik
>



[squid-users] Hacking ntlm_auth to allow squidGuard ACLs

2004-09-02 Thread Discussion Lists
Hi All,

First post here!

In the following article the author describes how to get Samba 3 and
Squid working.

http://www.informatikserver.at/modules.php?name=News&file=print&sid=2710

However, towards the end the author has a section called "Hacking ntlm_auth
to allow squidGuard ACLs".  He describes making the following change to
the source of ntlm_auth.c:

In source/utils/ntlm_auth.c locate the line:
x_fprintf(x_stdout, "AF %s\%s ", ntlmssp_state->domain,
ntlmssp_state->user);

And modify it to:
x_fprintf(x_stdout, "AF %s ", ntlmssp_state->user);

I came across this page because I was looking for a way to get squidGuard
to recognize NT users so that I can create exceptions for certain ones.
This way I can still proxy, and log the user's actions, but they won't have
their content filtered.  Will what this person describes accomplish that?
Has anyone done this?  If not, can anyone think of any negative
consequences?  Also, if this does work the way I think it will, should I
specify the username in squidGuard as "domain\user", or just "user"?
"domain\user" crashes squidGuard (probably because of the "\", I am
guessing).  Any ideas?

Thanks,
Mark


Re: [squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Rakesh Kumar Pant
Thanks Henrik, it seems to be a good option.

I have successfully applied the patch to (STABLE 6),
but the access.log that is being created is still the same.

I am checking for my mistakes and will let you know the
situation.

Regards
Rakesh


- Original Message -
From: "Henrik Nordstrom" <[EMAIL PROTECTED]>
To: "Rakesh Kumar Pant" <[EMAIL PROTECTED]>
Cc: "Squid Users" <[EMAIL PROTECTED]>
Sent: Thursday, September 02, 2004 7:00 PM
Subject: Re: [squid-users] How to retrieve upload and download separately
from access.log


> On Thu, 2 Sep 2004, Rakesh Kumar Pant wrote:
>
> > I have just upgraded to Squid 2.5 (STABLE 6) production release. Do I
> > still need a patch?
>
> Yes.
>
> It is a new feature, not a bug fix. Squid-2.5 is in its STABLE cycle,
> where no new features are allowed except to correct a security threat.
>
> Regards
> Henrik
>



Re: [squid-users] querying the cache to see if it stores a specific object

2004-09-02 Thread Thomas Haferlach
Yes, I stumbled upon that too, but I seem to have problems getting it to
work.

I telnet to our local Squid cache and then send it a typical GET request
for a URL I am sure it hasn't cached before.

GET http://somewhere.com/something.html HTTP/1.1
Host: somewhere.com
Cache-Control: only-if-cached

This is my request, but it always returns the page I am requesting, as if I
hadn't inserted the Cache-Control header.

Any ideas?

thanks,

Thomas


> On Thu, 2 Sep 2004, Thomas Haferlach wrote:
> 
> > What is the simplest way to query a cache for example from a java
> > application to check if a certain URL is held by the cache or not?
> 
> See RFC2616, look for only-if-cached.
> 
> Regards
> Henrik
> 




Re: AW: [squid-users] Recompile Squid : Procedure needed

2004-09-02 Thread Hendrik Voigtländer

[EMAIL PROTECTED] wrote:
I installed it with automatic Debian package installation program.
So I don't know if I have such file.
Not really the file INSTALL, but there is a lot of information in
/usr/share/doc, like this file:
debian:~# gunzip -c /usr/share/doc/squid/README.morefds.gz
More filedescriptors for squid
The old Linux 2.0.x kernel had support for a maximum of 256
filedescriptors per process. The squid FAQ talks about this,
and recommends use of a special patch for 2.0.x kernels.
Don't use that patch - use a 2.2.19 kernel or later, since the
recent 2.2.x kernels (and 2.4, ofcourse) have support for lots
of filedescriptors built in.
The Debian Squid package has a special patch included that makes
it possible for squid to use more than 1024 filedescriptors. You
can enable this by increasing SQUID_MAXFD in /etc/default/squid.
The /etc/init.d/squid script then sets the maximum number of
filedescriptors at startup using 'ulimit'. It also examines
the global file maximum in /proc/sys/fs/file-max and increases
that to (SQUID_MAXFD + 4096) if it is lower than that.
README.morefds  1.20  01-Oct-2001  [EMAIL PROTECTED]
Are you using testing/unstable? AFAIK the stable package is
squid  2.4.6-2woody2
debian:~# cat /etc/default/squid
#
# /etc/default/squid
#
# Max. number of filedescriptors to use. You can increase this on a busy
# cache to a maximum of (currently) 4096 filedescriptors. Default 1024.
SQUID_MAXFD=1024
That should do the trick.
Regards, Hendrik Voigtländer


[squid-users] SSL Reverse Proxy of multiple hosts

2004-09-02 Thread R. Benjamin Kessler
Hi All,

I'm trying to protect multiple web servers via a Squid reverse proxy 
(Version 2.5.STABLE5).  I've got the reverse proxy working for a single 
host but am having difficulty finding out how to configure reverse proxying 
for the other hosts.

I'd like to have something like the following:

public site1 xx.yy.133.201
public site2 xx.yy.133.202
public site3 xx.yy.133.203

all serviced by proxy1

internal site1 192.168.133.201
internal site2 192.168.133.202
internal site3 192.168.133.203

Do I have to run three different instances of Squid to do this?  If they're 
all xxx.foo.com, can I use a single "wild card" SSL certificate?

I thought I googled the answer to this once, but now I can't find it again; 
direct answers to the above and/or pointers to docs on the web would be much 
appreciated.

Thanks,

Ben



Re: [squid-users] How to retrieve upload and download separately from access.log

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Rakesh Kumar Pant wrote:
Thanks Henrik, it seems to be a good option.
I have successfully applied the patch to (STABLE 6),
but the access.log that is being created is still the same.
You need to specify your own log formats. See squid.conf.default.
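
A sketch of what such a custom format might look like with the logformat
patch applied (the format name and log path are made up; check
squid.conf.default in the patched build for the exact format codes). %>st is
the size of the request received from the client (upload) and %<st the size
of the reply sent back (download):

```
logformat updown %ts.%03tu %>a %>st %<st %rm %ru
access_log /var/log/squid/updown.log updown
```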
Regards
Henrik


Re: [squid-users] querying the cache to see if it stores a specific object

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, Thomas Haferlach wrote:
Yes, I stumbled upon that too, but I seem to have problems getting it to
work.
GET http://somewhere.com/something.html HTTP/1.1
Host: somewhere.com
Cache-Control: only-if-cached
this is my request but it always returns the page I am requesting as if I
hadn't inserted the Cache-Control header.
Works here.
What do the X-Cache headers indicate?
And what version of Squid is this? Support for only-if-cached in Squid is
over 6 years old, available since Squid-2.0.

Regards
Henrik


[squid-users] Linux Kongress 2004

2004-09-02 Thread Henrik Nordstrom
Last call for registering interest in this.
I need at least 2 confirmed persons intending to attend this Squid BOF 
session at the Linux Kongress to prepare the session.

So far it has been very silent, which either means that there are no Squid 
users attending the Linux Kongress, or that there is no interest in 
discussing what the future of Squid looks like or any other relevant Squid 
topic.

Regards
Henrik
-- Forwarded message --
Date: Sun, 22 Aug 2004 12:55:52 +0200 (CEST)
From: Henrik Nordstrom <[EMAIL PROTECTED]>
To: Squid Developers <[EMAIL PROTECTED]>,
Squid Users <[EMAIL PROTECTED]>
Subject: Linux Kongress 2004
I will be at Linux Kongress 2004 <http://www.linux-kongress.org/2004/>.
If there is interest I could try to arrange a BOF or similar informal
gathering about the current situation in Squid development, or any other
Squid topic which may interest you.

Please indicate your interest in this by email. Suggestions on what may be 
interesting to discuss are also welcome, so I know a little about what 
topics may interest you.

Regards
Henrik Nordstrom
Squid HTTP Proxy developer


Re: [squid-users] SSL Reverse Proxy of multiple hosts

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, R. Benjamin Kessler wrote:
I'd like to have something like the following:
public site1 xx.yy.133.201
public site2 xx.yy.133.202
public site3 xx.yy.133.203
all serviced by proxy1
internal site1 192.168.133.201
internal site2 192.168.133.202
internal site3 192.168.133.203
Do I have to run three different instances of squid to do this?
No, but you need one https_port specification per certificate, each 
bound to its public IP.

If they're all xxx.foo.com, can I use a single "wild card" SSL
certificate?
Then you can run them all on a single public IP address.
squid.conf:
https_port ...
https_port ...
https_port ...
httpd_accel_host your.primary.website
httpd_accel_port 80
httpd_accel_with_proxy on
acl port80 port 80
never_direct allow all
cache_peer server1 parent 80 0 no-query
acl site1 dstdomain www.site1.com
http_access allow site1 port80
cache_peer_access server1 allow site1
cache_peer server2 parent 80 0 no-query
acl site2 dstdomain www.site2.com
http_access allow site2 port80
cache_peer_access server2 allow site2
[etc].
Alternatively you can take out the cache_peer, cache_peer_access and 
never_direct lines and place the IP addresses of the web server for each 
accelerated web server into /etc/hosts.
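
For the /etc/hosts alternative, the mapping would presumably pair each
accelerated site name with the internal address from the original question,
e.g.:

```
# /etc/hosts on the proxy (site names are the illustrative ones above)
192.168.133.201  www.site1.com
192.168.133.202  www.site2.com
192.168.133.203  www.site3.com
```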

Regards
Henrik


[squid-users] Re: Linux Kongress 2004

2004-09-02 Thread Mar Matthias Darin
Hello, 

I took a look at the web site...  Way out of my area, but I am interested in 
the future of Squid... 

I am particularly interested in Squid's handling of non-cacheable content 
overrides.  I have spent the last 6 months trying to force alltheweb.com's 
thumbnail search (images) to be cached for 24 hours.  I have huge loads of 
traffic to the same pages in a single day...  This site is ransacking my 
(limited) bandwidth. 

My second interest is an affordable (SOHO) cache appliance running Squid on 
a LAN.

Henrik Nordstrom writes: 

Last call for registering interest in this. 

I need at least 2 confirmed persons intending to attend this Squid BOF 
session at the Linux Kongress to prepare the session. 

So far it has been very silent which either means that there is no Squid 
Users attending the Linux Kongress, or that there is no interest to 
discuss what the future of Squid looks like or any other relevant Squid 
topic.




RE: [squid-users] SSL Reverse Proxy of multiple hosts

2004-09-02 Thread R. Benjamin Kessler
Excellent help Henrik; thanks!

I do have another question: what's the best way to configure automatic
startup of Squid (i.e. what do I need to do so that I don't get prompted for
the PEM password for each of the certs on startup)?
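
One common approach (an assumption here, not something stated in this
thread) is to keep an unencrypted copy of the private key for Squid to load,
protected only by file permissions, so no passphrase prompt occurs at
startup. A sketch with OpenSSL, using a freshly generated demo key in place
of the real certificate key:

```shell
# Create a passphrase-protected demo key (stand-in for the real cert key).
openssl genrsa -aes256 -passout pass:demo -out key-with-pass.pem 2048

# Write an unencrypted copy that Squid can load unattended, and lock it
# down so only the owner can read it.
openssl rsa -in key-with-pass.pem -passin pass:demo -out key-nopass.pem
chmod 600 key-nopass.pem
```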

Thanks again.

Ben

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 02, 2004 6:36 PM
To: R. Benjamin Kessler
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] SSL Reverse Proxy of multiple hosts

On Thu, 2 Sep 2004, R. Benjamin Kessler wrote:

> I'd like to have something like the following:
>
> public site1 xx.yy.133.201
> public site2 xx.yy.133.202
> public site3 xx.yy.133.203
>
> all serviced by proxy1
>
> internal site1 192.168.133.201
> internal site2 192.168.133.202
> internal site3 192.168.133.203
>
> Do I have to run three different instances of squid to do this?

No, but you need one https_port specification per certificate, each 
bound to its public IP.

> If they're all xxx.foo.com, can I use a single "wild card" SSL 
> certificate?

Then you can run them all on a single public IP address.


squid.conf:


https_port ...
https_port ...
https_port ...

httpd_accel_host your.primary.website
httpd_accel_port 80
httpd_accel_with_proxy on

acl port80 port 80

never_direct allow all

cache_peer server1 parent 80 0 no-query
acl site1 dstdomain www.site1.com
http_access allow site1 port80
cache_peer_access server1 allow site1

cache_peer server2 parent 80 0 no-query
acl site2 dstdomain www.site2.com
http_access allow site2 port80
cache_peer_access server2 allow site2

[etc].


Alternatively you can take out the cache_peer, cache_peer_access and 
never_direct lines and place the IP addresses of the web server for each 
accelerated web server into /etc/hosts.


Regards
Henrik





Re: [squid-users] R: [squid-users] P2P

2004-09-02 Thread Richard
I may have misunderstood, but you want to know why Kazaa won't work via squid?

Squid is an HTTP proxy. The last time I used Kazaa it could only use a
SOCKS proxy or a direct connection, so Kazaa is unable to connect through
an HTTP proxy like squid.

Regards,
Richard

On Thu, 2 Sep 2004 08:43:31 +0200, Netmail <[EMAIL PROTECTED]> wrote:
> Squid doesn't block P2P programs.
> You need a firewall for that!
> 
> -Original Message-
> From: Pablo Gietz [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, September 1, 2004 17:56
> To: [EMAIL PROTECTED]
> Subject: [squid-users] P2P
> 
> Hi people
> 
> I have read many questions about how to block traffic from P2P programs
> like Kazaa, but I need to know why these programs don't work through our
> proxy? For example, Ares. Is there some specific configuration required
> in the proxy, or in the firewall?
> 
> Thanks
> 
> Pablo A. C. Gietz
> 
>


Re: [squid-users] Squid Slow

2004-09-02 Thread Richard
What's the difference in size between the two lists? Is it possible the
second list has a lot more sites listed, so squid is doing a lot more
work processing ACLs?

just a suggestion...

On Tue, 31 Aug 2004 09:52:48 -0600, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> 
> 
> Hi everybody
> 
> I still have the same problem with squid and squidGuard: when I use
> urlblacklist.com's file, squid works very slowly. If I swap my blacklist
> for squidGuard's blacklist, everything works fine.
> I have Slackware 10 and squid 2.5.STABLE6 with squidGuard 1.2.0.
> 
> Thanks
> 
>


RE: [squid-users] SSL Reverse Proxy of multiple hosts

2004-09-02 Thread Henrik Nordstrom
On Thu, 2 Sep 2004, R. Benjamin Kessler wrote:

> I do have another question; what's the best way to configure automatic
> startup of squid (i.e. what do I need to do so that I don't get prompted for
> the PEM password for each of the certs on startup?)

The simplest way to not be asked for SSL PEM passwords on startup is to 
store your SSL keys unencrypted:

openssl rsa -in encrypted.pem -out decrypted.pem
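A self-contained way to try this out (the key, file names, and the passphrase "demo" below are throwaway values generated just for the demonstration):

```shell
# Generate a throwaway passphrase-protected RSA key for the demo:
openssl genrsa -aes256 -passout pass:demo -out encrypted.pem 2048
# Strip the passphrase so squid can load the key without prompting:
openssl rsa -in encrypted.pem -passin pass:demo -out decrypted.pem
# The key is now stored in clear, so lock down its permissions:
chmod 600 decrypted.pem
```

Keep in mind the trade-off: anyone who can read decrypted.pem can impersonate the site, so restrictive file permissions matter.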
The other option, if you are using the Squid-2.5 SSL update patch, is to 
specify a program that supplies the password using the sslpassword_program 
directive.
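A minimal sketch of such a helper (the path /tmp/pempass.sh and the passphrase are assumptions for illustration; the echoed value must match the passphrase actually protecting your PEM key):

```shell
# Create a helper that prints the PEM passphrase on stdout for squid:
cat > /tmp/pempass.sh <<'EOF'
#!/bin/sh
echo "my-secret-passphrase"
EOF
# Make it executable and readable only by the user squid runs as:
chmod 700 /tmp/pempass.sh
/tmp/pempass.sh
```

Then point squid at it in squid.conf with: sslpassword_program /tmp/pempass.sh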

Regards
Henrik