Re: [squid-users] Re: my CPPUNIT is broken... ;-) ?

2006-03-18 Thread Henrik Nordstrom
Fri 2006-03-17 at 19:31 -0800, Linda W wrote:
 Mystery solved.
 
 My shell _expanded_ control sequences by default in echo
 (echo \1 becomes echo ^A).
 
 Apparently there are literals in the configure script like \\1 \\2 that
 were trying to echo a literal '\1' into a sed script.  Instead it was
 echoed in as a control-A.

Hmm.. so you have replaced /bin/sh with something other than a UNIX
shell? Or was it your /bin/echo being different?

 Am I misremembering, or aren't there systems where expanded echo is the default?

If so then the GNU autoconf people have not run into it yet..

Good that you found what it was, and a way around the problem. Even
better if you would enlighten us as to what you were using that caused
the problem, and how you worked around it.
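For readers who hit the same trap: whether a shell's built-in echo expands backslash escapes is implementation-defined, which is exactly why generated configure scripts can break. A minimal sketch (plain POSIX sh assumed) of the portable alternative:

```shell
# `echo '\1'` may print a literal \1 or a control-A, depending on the
# shell (bash with xpg_echo, dash, and ksh all behave differently).
# printf is defined to never expand escapes in its %s arguments:
printf '%s\n' '\1'          # always prints the two characters \ and 1

# To deliberately emit the control character, put the escape in the
# format string instead:
printf '\1' | od -An -c     # shows the single byte 001
```

This is why autoconf-generated scripts that feed `\1` into sed are sensitive to the system's echo behaviour.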

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] List objects in cache

2006-03-18 Thread Henrik Nordstrom
Fri 2006-03-17 at 17:48 -0800, Mike Leong wrote:

 Is there a way to get a list of all the objects in the cache w/ the full 
 url?

See the purge tool in the related software section.

 The cache manager cgi cuts off at the first question mark.

Not only that, it does not give you a URL at all on disk objects..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] DNS Long timeout problem

2006-03-18 Thread Dieter Bloms
Hi,

On Fri, Mar 17, Jonathan Pauli wrote:

 Is this a DNS timeout issue that can be changed in the squid config?

Log in to your squid box and type host hostname, replacing
hostname with the one which timed out.

If this takes a long time, you have to correct your DNS config.
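A concrete way to run the check Dieter describes (command names are the usual Linux ones; `getent` follows the same resolver path as most applications):

```shell
# Time a lookup from the Squid box itself.  Substitute the host that
# timed out for `localhost` when diagnosing a real problem:
time getent hosts localhost

# If lookups stall for seconds, look for dead nameserver entries --
# each unreachable server in resolv.conf adds its own timeout:
[ -f /etc/resolv.conf ] && grep '^nameserver' /etc/resolv.conf || true
```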


-- 
Regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


pgpTt8W04mJEy.pgp
Description: PGP signature


Re: [squid-users] Re: my CPPUNIT is broken... ;-) ?

2006-03-18 Thread Dieter Bloms
Hi,

On Fri, Mar 17, Linda W wrote:

 Based off SuSE9.3 with some updates; linux kernel 2.6.15.5 on pentium3;
 gcc=3.3.5 (20050117); glibc=2.3.4-23.4

Did you install some packages from another source?
SuSE 9.3 came with a 2.6.11 kernel.

--snip--
ftp pwd
257 /pub/linux/suse/ftp.suse.com/suse/i386/update/9.3/rpm/i586 is current 
directory.
ftp ls kernel-default-2.6.11.4-21.11.i586.rpm
227 Entering Passive Mode (134,76,11,100,192,75)
150 Opening ASCII mode data connection for /bin/ls.
-rw-r--r--1 emoenke  ftp  16984881 Feb 14 12:13 
kernel-default-2.6.11.4-21.11.i586.rpm
226 Transfer complete.
ftp
--snip--

Maybe you have also mixed in some packages from SuSE 9.3 and
SuSE 10.0. Try to compile it on a fresh SuSE 9.3 installation.

-- 
Regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


pgpEwFMbaAWuM.pgp
Description: PGP signature


[squid-users] purging variants

2006-03-18 Thread lawrence wang
I've seen a few posts explaining that Squid 2.5's Vary: support
doesn't work so well with PURGE, since it requires that you send the
exact headers along with the URL for that variant. I was wondering,
since this is a significant hassle, if anyone's written a patch that
makes Squid purge all variants under a given URL, something that would
then be usable with the existing third-party purge tool. And if not,
can anyone point me in the general direction of the code I might want
to start digging into to roll my own patch? Thanks in advance.
--Lawrence Wang


Re: [squid-users] Squid Active directory, Samba and Kerberos

2006-03-18 Thread Henrik Nordstrom
Sat 2006-03-18 at 10:12 +0530, Logu wrote:

 Thanks for your response D.R.  I would like to know what role
 kerberos plays when authenticating with the ntlm scheme.

None. NTLM is the Windows NT authentication method, supported by Active
Directory in parallel to its Kerberos authentication method.

 Is Active Directory a combination of kerberos and ldap ?

Yes, plus NT Domain, NTLM, NTLMv2, MS-CHAP and a bit more. Digest is
also optionally supported.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] purging variants

2006-03-18 Thread Henrik Nordstrom
Fri 2006-03-17 at 17:11 -0500, lawrence wang wrote:
 I was wondering,
 since this is a significant hassle, if anyone's written a patch that
 makes Squid purge all variants under a given URL,

The problem is that Squid-2.5 does not know the URLs of objects. If
it knew, it would do it.

The PURGE tool could be modified to do this, I suppose. It only needs
to be taught the vary algorithm used by 2.5 and to decode this into
suitable request headers as part of the purge. The required
information is found in a meta TLV header of the object.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Mark Stevens
Hi group, my first post so please be gentle :)

 I'm a sysadmin who has inherited a small cluster of squid servers,
the setup is as follows.

 4 x Squid Slave Accelerators that accel a master squid.

 1 x Master Squid running a custom made redirect script written in
perl that accel a Webserver .

 1 x Backend Webserver.

 Each slave is running 4 instances of squid accelerating separate sites.

 The master runs 4 instances of squid.

 The farm is constantly under a fair load - roughly half a million hits a day.

 The setup works fine; however, recently when the master server was
taken down for repair and brought back up again with the same
configuration, it failed to serve content for the busiest instance,
and every request returned a TCP_DENIED 403 error. The following
error was reported in cache.log:

 2006/03/18 06:04:52| WARNING: Forwarding loop detected for:
 GET /folder1/subfolder/subfolder/ HTTP/1.0
 If-Modified-Since: Sat, 14 Jan 2006 01:44:45 GMT
 Host: 192.168.0.10
 Accept: */*
 From: googlebot(at)googlebot.com
 User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1;
+http://www.google.com/bot.html)
 Accept-Encoding: gzip
 Via: 1.1 slave1.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver)

This has happened previously when the server rebooted. It is likely
that the master squid service is getting hammered by all slaves as
soon as it is brought back into service; could the fact that it's
under such heavy load as soon as it starts up be causing a problem in
Squid?

  Squid version:squid-2.5.STABLE10
  O/S: 5.8 Generic_117350-12 sun4u sparc SUNW,Ultra-80

  I have altered the output to respect the privacy of the client.


[squid-users] proxy.pac help

2006-03-18 Thread Raj
Hi All,

I am running Squid  2.5.STABLE10. All the clients in our company use
proxy.pac file in the browser settings. I need some help with the
proxy.pac file. At the moment I have the following configuration:

// Assign Proxy based on IP Address of Client
  if (isInNet(myIpAddress(), "172.16.96.0", "255.255.240.0")) return "PROXY
proxy03.au.ap.abnamro.com:3128; PROXY proxy04.au.ap.abnamro.com:3128";

If the source IP address is from that IP range, it should go to
proxy03 first, and if proxy03 is down it should go to proxy04. But
that is not happening: if proxy03 is down, it is not going to proxy04.
Is there any syntax error in the above config?

What is the correct syntax in proxy.pac file so that if proxy03 is
down it will go to proxy04?

Thanks.


[squid-users] Squid Ldap

2006-03-18 Thread Olsson Mattias
 
Hi!
 
I would like to use LDAP to auth proxy users (Win 2003). It's working
great except that I have to log in every time.
I have seen that the NT domain name can be removed with option -S, but
I can't get that to work. Please have a look and correct me :)
 
external_acl_type InetGroup %LOGIN /usr/sbin/squid_ldap_group -R -b
"ou=Users Accounts,dc=domain,dc=local" -D
"cn=Administrator,cn=Users,dc=domain,dc=local" -w password -f
"(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,ou=Global,ou=S
ecurity groups,dc=domain,dc=local))" -S -h ldap_server_ip
 
My client machines are in the same domain. Logging in with my user
name works, but IE appears to send domain\username by default...

Mvh / Kind regards
Mattias Olsson

Siemens Business Services AB
SE-171 95 Solna

Sweden
P: +46 8 730 6573 M:+46 70 629 1071
***



Re: [squid-users] purging variants

2006-03-18 Thread lawrence wang
I see. But maybe I've phrased this wrong... It seems like when the
purge tool runs, it does find all the different variants for a given
URL and runs requests against each of them; of course the variants
which require specific headers return 404's when those are not found
in the request. Perhaps there's a way to relax this check without
breaking anything else?

On 3/18/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 Fri 2006-03-17 at 17:11 -0500, lawrence wang wrote:
  I was wondering,
  since this is a significant hassle, if anyone's written a patch that
  makes Squid purge all variants under a given URL,

 Problem is that Squid-2.5 does not know the URLs of objects. If it knew
 it would do it.

 The PURGE tool could be modified to do this I suppose. Only needs to be
 taught the vary algorithm used by 2.5 and decode this into suitable
 request headers as part of the purge. The required information is found
 in a meta TLV header of the object.

 Regards
 Henrik


 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.2.2 (GNU/Linux)

 iD8DBQBEG9dY516QwDnMM9sRAjdnAJwNltMaGlowF39iOFmx4XdXk058rwCeMs04
 BFZR7HoLn+cnf3/Cc58fHmk=
 =U1ti
 -END PGP SIGNATURE-





[squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Mark Stevens
Sorry if this is a double post.

Squid version:squid-2.5.STABLE10
O/S: 5.8 Generic_117350-12 sun4u sparc SUNW,Ultra-80

Hi,

I'm a sysadmin who has inherited a small cluster of squid servers, the
setup is as follows.

4 x Squid Slave Accelerators that accel a master squid.

1 x Master Squid running a custom made redirect script written in perl
that accel a Webserver .

1 x Backend Webserver.

Each slave is running 4 instances of squid accelerating separate sites.

The master runs 4 instances of squid.

The farm is constantly under a fair load - roughly half a million hits a day.

The setup works fine; however, recently when the master server was
taken down for repair and brought back up again with the same
configuration, it failed to serve content for the busiest instance,
and every request returned a TCP_DENIED 403 error. The following
error was reported in cache.log:

2006/03/18 06:04:52| WARNING: Forwarding loop detected for:
GET /folder1/subfolder/subfolder/ HTTP/1.0
If-Modified-Since: Sat, 14 Jan 2006 01:44:45 GMT
Host: 192.168.0.10
Accept: */*
From: googlebot(at)googlebot.com
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1;
+http://www.google.com/bot.html)
Accept-Encoding: gzip
Via: 1.1 slave1.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver)

This has happened previously when the server rebooted. It is likely
that the master squid service is getting hammered by all slaves as
soon as it is brought back into service; could the fact that it's
under such heavy load as soon as it starts up be causing a problem in
Squid?



I have altered the output to respect the privacy of the client.


RE: [squid-users] Squid Ldap

2006-03-18 Thread Nick Duda

I use winbind with samba and use the directive default_domain=xxx to remove the 
domain from the users.

-Original Message- 
From: Olsson Mattias [mailto:[EMAIL PROTECTED] 
Sent: Sat 3/18/2006 8:22 AM 
To: squid-users@squid-cache.org 
Cc: 
Subject: [squid-users] Squid Ldap




Hi!

I would like to use LDAP to auth proxy users (Win 2003). It's working
great except that I have to log in every time.
I have seen that the NT domain name can be removed with option -S, but
I can't get that to work. Please have a look and correct me :)

external_acl_type InetGroup %LOGIN /usr/sbin/squid_ldap_group -R -b
"ou=Users Accounts,dc=domain,dc=local" -D
"cn=Administrator,cn=Users,dc=domain,dc=local" -w password -f
"(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,ou=Global,ou=S
ecurity groups,dc=domain,dc=local))" -S -h ldap_server_ip

My client machines are in the same domain. Logging in with my user
name works, but IE appears to send domain\username by default...

Mvh / Kind regards
Mattias Olsson

Siemens Business Services AB
SE-171 95 Solna

Sweden
P: +46 8 730 6573 M:+46 70 629 1071
***




-
Confidentiality note
The information in this email and any attachment may contain confidential and 
proprietary information of 
VistaPrint and/or its affiliates and may be privileged or otherwise protected 
from disclosure. If you are 
not the intended recipient, you are hereby notified that any review, reliance 
or distribution by others 
or forwarding without express permission is strictly prohibited and may cause 
liability. In case you have 
received this message due to an error in transmission, please notify the sender 
immediately and to delete 
this email and any attachment from your system.
-


Re: [squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Henrik Nordstrom
Sat 2006-03-18 at 13:47, Mark Stevens wrote:

 This has happened previously when the server rebooted, it is likely
 that the master squid service is getting hammered by all slaves  as
 soon as it is brought back into service, could the fact that it's
 under such heavy load as soon as it starts up be causing a problem in
 Squid?

No.

It is 99.9% certain to be a configuration error.

Forwarding loops occur when the configuration of how Squid should
route requests makes Squid send a request to itself.

Hmm.. you mentioned you are using a redirector to route the requests. If
so then make sure you have not enabled redirector_bypass (defaults off).
Also verify that the redirector is actually working.

Regards
Henrik



signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Mark Elsen
On 3/18/06, Mark Stevens [EMAIL PROTECTED] wrote:
 Sorry if this a double post.

 Squid version:squid-2.5.STABLE10
 O/S: 5.8 Generic_117350-12 sun4u sparc SUNW,Ultra-80

 Hi,

 I'm a sysadmin who has inherited a small cluster of squid servers, the
 setup is as follows.

 4 x Squid Slave Accelerators that accel a master squid.

 1 x Master Squid running a custom made redirect script written in perl
 that accel a Webserver .

 1 x Backend Webserver.

 Each slave is running 4 versions of squid accelerating separate sites.

 The master runs 4 instances of squid.

 The farm is constantly under a fair load - roughly half a million hits a day.

 The setup works fine, however, recently when the master server was
 taken down for repair, and brought back up again with the same
 configuration, it failed to  serve content for the

 busiest instance, and  every request returned is with a TCP_DENIED 403
 error. The following error was reported in the cache.log

 2006/03/18 06:04:52| WARNING: Forwarding loop detected for:

...

   http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.31

   M.


Re: [squid-users] Reverse proxy multiple sites with re-writing

2006-03-18 Thread Merton Campbell Crockett


On 17 Mar 2006, at 16:29 , Robin Bowes wrote:


I have a couple of questions:

1. Can squid do content rewriting?


Squid does not have the capability to rewrite content.  I'm not sure
that it can rewrite the URL in accelerator mode.  Apache can rewrite
the URL using mod_rewrite; however, it doesn't inherently provide the
ability to rewrite page content.  Instead of using mod_proxy to
return the retrieved content to the requestor with the rewritten URL,
you would want to use mod_php, mod_perl, or mod_python to implement a
routine to scan and rewrite the page content.
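A minimal sketch of the Apache arrangement described above; all hostnames are illustrative, certificate directives are omitted, and it assumes mod_rewrite and mod_proxy are loaded:

```apache
# Accelerator-style front end: rewrite the incoming URL and let
# mod_proxy (the [P] flag) fetch it from the backend on our behalf.
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on            # SSL certificate directives omitted here
    RewriteEngine On
    RewriteRule ^/(.*)$ http://backend.internal/$1 [P]
</VirtualHost>
```

Content rewriting would still have to happen in a handler (mod_php, mod_perl, or mod_python) as described, since neither mod_rewrite nor mod_proxy touches the response body.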


2. Would it be possible to proxy multiple such sites on one squid  
host?


I don't know.  I use Squid in its basic form to support client  
browsers.  I use Apache and mod_rewrite to implement a service  
similar to Squid's accelerator mode and to provide SSL encryption  
between the security perimeter and the browser.


Of course, I could be all wet as I haven't really looked at the  
changes in Squid since the late Nineties.


Merton Campbell Crockett
[EMAIL PROTECTED]





Re: [squid-users] proxy.pac help

2006-03-18 Thread Mark Elsen
 Hi All,

 I am running Squid  2.5.STABLE10. All the clients in our company use
 proxy.pac file in the browser settings. I need some help with the
 proxy.pac file. At the moment I have the following configuration:

 // Assign Proxy based on IP Address of Client
   if (isInNet(myIpAddress(), "172.16.96.0", "255.255.240.0")) return "PROXY
 proxy03.au.ap.abnamro.com:3128; PROXY proxy04.au.ap.abnamro.com:3128";

 If the source IP address is from that IP range, it should go to
 proxy03 first and if proxy03 is down it should go to proxy04. But that
 is not happening. If proxy03 is down, it is not going to proxy04. Is
 there any syntax error in the above config.

 What is the correct syntax in proxy.pac file so that if proxy03 is
 down it will go to proxy04?



 - Depending on the browser vendor, failover to the second proxy can
take a while:

* Did you wait long enough ?
* Compare Firefox versus IE (e.g.)

 M.


Re: [squid-users] purging variants

2006-03-18 Thread Henrik Nordstrom
Sat 2006-03-18 at 08:41 -0500, lawrence wang wrote:
 I see. But maybe I've phrased this wrong... It seems like when the
 purge tool runs, it does find all the different variants for a given
 URL and runs requests against each of them; of course the variants
 which require specific headers return 404's when those are not found
 in the request. Perhaps there's a way to relax this check without
 breaking anything else?

The PURGE tool must send the correct headers, or Squid won't know what
to do. It's not a check; it's how things work. Squid-2.5 does not know
what variants there are for a given URL, any more than it knows what
URLs there are in the cache. All it knows to do is: "OK, I now have
these request headers and URL given to me from the client; is there a
matching object?" Without the headers it cannot find or know the
variant. Without the headers, all Squid-2.5 finds is that the object
varies, but it has no means of finding the variants.

Note of warning: if you PURGE without the variant headers, then
Squid-2.5 forgets that the object varies, and the remaining cached
variants of the object cannot be reached until Squid has again learned
that the object varies by seeing a Vary header response from the
server. This means that if you purge without the headers, there is no
longer any way to purge the variants without first making a request
(which will be a cache miss) for the URL.

The PURGE tool could be modified to do this, I suppose. It only needs
to be taught the vary algorithm used by 2.5 and to decode this into
suitable request headers as part of the purge request sent to Squid.
The required information for reconstructing the headers is found in a
meta TLV header of the object and is already read by the purge tool;
only, it does not know the meaning of this information and
consequently does not make use of it today.
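As a rough illustration of the point above (the URL and header value are made up; the real tool would have to reconstruct them from the object's meta TLV data), purging one variant amounts to sending a PURGE request carrying the same varying headers a normal hit would carry:

```shell
# Build the raw PURGE request one would need to send for a single
# variant of an object that varies on Accept-Encoding:
purge_request() {
    # $1 = URL, $2 = the variant-selecting header line
    printf 'PURGE %s HTTP/1.0\r\n%s\r\n\r\n' "$1" "$2"
}

purge_request 'http://example.com/page' 'Accept-Encoding: gzip'
```

Squid must also be configured to accept the PURGE method at all (an acl of type method plus a matching http_access rule), or the request is rejected regardless of headers.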


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] Antivirus and squid

2006-03-18 Thread Philipp Snizek
Hi

I intend to use squid as a reverse proxy combined with antivirus.

This reverse proxy will protect a webmail server. I don't want users to
upload viruses and distribute them via this webmail user interface.
It would be great if there existed a squid/antivirus solution that scans
only uploaded/downloaded files, ignoring the other HTML traffic. It
would also be great if this AV solution had an API for ClamAV or
BitDefender. I couldn't find anything during my search this morning.
Maybe some of you have an idea?

Thanks in advance
Philipp
 



Re: [squid-users] Antivirus and squid

2006-03-18 Thread Sushil Deore


use HAVP

http://havp.sourceforge.net

-- Sushil.

On Sat, 18 Mar 2006, Philipp Snizek wrote:

 Hi

 I intend to use squid as a reverse proxy combined with antivirus.

 This reverse proxy will protect a webmail server. I don't want users to
 upload viruses and distribute them via this webmail user interface.
 It would be great if there existed a squid/antivirus solution that scans
 only uploaded/downloaded files, ignoring the other HTML traffic. It
 would also be great if this AV solution had an API for ClamAV or
 BitDefender. I couldn't find anything during my search this morning.
 Maybe some of you have an idea?

 Thanks in advance
 Philipp






Re: [squid-users] proxy.pac help

2006-03-18 Thread Bill Jacqmein
 Raj,

   The below should work, assuming isInNet is working properly.
   I would leave the if statement out and just start by returning
 the PROXY statements if possible. Eliminate systems by just not
 pointing them at the proxy.pac.

 Regards,

   Bill

 // Assign Proxy based on IP Address of Client
   if (isInNet(myIpAddress(), "172.16.96.0", "255.255.240.0")) {
     return "PROXY proxy03.au.ap.abnamro.com:3128; PROXY proxy04.au.ap.abnamro.com:3128";
   }



 On 3/18/06, Raj [EMAIL PROTECTED] wrote:
   Hi All,
 
  I am running Squid  2.5.STABLE10. All the clients in our company use
  proxy.pac file in the browser settings. I need some help with the
  proxy.pac file. At the moment I have the following configuration:
 
  // Assign Proxy based on IP Address of Client
    if (isInNet(myIpAddress(), "172.16.96.0", "255.255.240.0")) return "PROXY
  proxy03.au.ap.abnamro.com:3128; PROXY proxy04.au.ap.abnamro.com:3128";
 
  If the source IP address is from that IP range, it should go to
  proxy03 first and if proxy03 is down it should go to proxy04. But that
  is not happening. If proxy03 is down, it is not going to proxy04. Is
  there any syntax error in the above config.
 
  What is the correct syntax in  proxy.pac file so that if proxy03 is
  down it will go to proxy04?
 
  Thanks.
 



[squid-users] WCCP+ Squid Slowing internet browsing , how to improve it ?

2006-03-18 Thread Daniel EPEE LEA
Hi,

Squid-2.5-STABLE12 + ip_gre + WCCP + RHEL v4 U2 + 4 GB RAM + cache
dir to be 45 GB, but only 20 GB now

I have a high-volume network (/19).
I had to increase the number of file descriptors and rebuild squid.
Now it works OK.

But I notice a major slowness in browsing the internet, and sites
with streaming media take too much time to load. From some parts of my
network, I get an "Unable to reach website" answer.

This is my config,
---
iptables -nL -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source   destination
DNAT   tcp  --  [MyNet]/19 ![MyNet]/19 tcp dpt:80 to:[Cache IP]:3128

---
http_port [Cache IP]:3128
icp_port 3130
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 256 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 4096 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 8 KB
cache_dir ufs /usr/local/squid/var/cache 20240 16 256
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
mime_table /usr/local/squid/etc/mime.conf
pid_filename /var/run/squid.pid
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl myacl src [MyNET]
http_access allow myacl
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl our_networks src [MyNET]
http_access allow our_networks
http_access deny all
http_reply_access allow all
icp_access allow all
icp_access allow all
tcp_outgoing_address [CacheIP]
cache_mgr [EMAIL PROTECTED]
cache_effective_user squid
cache_effective_group squid
visible_hostname cache.domain.com
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
logfile_rotate 10
forwarded_for on
cachemgr_passwd *
snmp_port 3401
snmp_access deny all
wccp_router [Router IP]
wccp_version 4
wccp_outgoing_address [CacheIP]
coredump_dir /usr/local/squid/var/cache


How can I improve it, so that all the services are allowed without
restriction?
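Two quick things worth checking on a busy transparent cache of this kind before digging into the config (paths and interface names are the usual Linux ones and may differ on your box):

```shell
# File-descriptor headroom: the limit Squid inherited, and the
# system-wide counters (allocated / free / max):
ulimit -n
cat /proc/sys/fs/file-nr 2>/dev/null || true

# For WCCP setups, confirm the GRE tunnel is actually passing
# packets (interface may be gre0, wccp0, etc.):
ip -s tunnel show 2>/dev/null || true
```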

Thanks for your answers

Much regards,

--
Dan


[squid-users] Re: WCCP+ Squid Slowing internet browsing , how to improve it ?

2006-03-18 Thread Daniel EPEE LEA
Hello,

This is my Cache.log info

2006/03/18 22:19:54| clientReadRequest: FD 3476 Invalid Request
2006/03/18 22:19:57| parseHttpRequest: Unsupported method
'recipientid=105sessionid=2197

'
2006/03/18 22:19:57| clientReadRequest: FD 148 Invalid Request
2006/03/18 22:20:17| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:20:17| clientReadRequest: FD 3382 Invalid Request
2006/03/18 22:20:30| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:20:30| clientReadRequest: FD 2515 Invalid Request
2006/03/18 22:20:38| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:20:38| clientReadRequest: FD 1091 Invalid Request
2006/03/18 22:20:45| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:20:45| clientReadRequest: FD 382 Invalid Request
2006/03/18 22:20:52| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:20:52| clientReadRequest: FD 2548 Invalid Request
2006/03/18 22:21:12| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:21:12| clientReadRequest: FD 3150 Invalid Request
2006/03/18 22:21:36| parseHttpRequest: Unsupported method
'recipientid=155sessionid=2873

'
2006/03/18 22:21:36| clientReadRequest: FD 376 Invalid Request
2006/03/18 22:21:36| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:21:36| clientReadRequest: FD 460 Invalid Request
2006/03/18 22:21:38| parseHttpRequest: Unsupported method
'recipientid=155sessionid=2873

'
2006/03/18 22:21:38| clientReadRequest: FD 1655 Invalid Request
2006/03/18 22:21:39| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:21:39| clientReadRequest: FD 1655 Invalid Request
2006/03/18 22:22:10| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:22:10| clientReadRequest: FD 2515 Invalid Request
2006/03/18 22:22:27| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:22:27| clientReadRequest: FD 251 Invalid Request
2006/03/18 22:22:44| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:22:44| clientReadRequest: FD 776 Invalid Request
2006/03/18 22:22:51| parseHttpRequest: Unsupported method
'recipientid=114sessionid=914
2006/03/18 22:22:51| clientReadRequest: FD 1490 Invalid Request
2006/03/18 22:22:55| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:22:55| clientReadRequest: FD 2858 Invalid Request
2006/03/18 22:23:02| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:23:02| clientReadRequest: FD 674 Invalid Request
2006/03/18 22:23:16| parseHttpRequest: Unsupported method 'REGISTER'
2006/03/18 22:23:16| clientReadRequest: FD 45 Invalid Request


Regards,

Dan
On 3/18/06, Daniel EPEE LEA [EMAIL PROTECTED] wrote:
 Hi,

 Squid-2.5-STABLE12 + ip_gre + WCCP + RHEL v4 U2 + 4 GB RAM + cache
 dir to be 45 GB, but only 20 GB now

 I have a high volume network ( /19)
 I had to increase the number of file descriptors and rebuild squid.
 Now it works Ok,

 But I notice a major slowness in browsing the internet, and sites
 with streaming media take too much time to load. From some parts of my
 network, I get an "Unable to reach website" answer.

 This is my config,
 ---
 iptables -nL -t nat
 Chain PREROUTING (policy ACCEPT)
 target prot opt source   destination
 DNAT   tcp  --  [MyNet]/19 ![MyNet]/19 tcp dpt:80 to:[Cache IP]:3128

 ---
 http_port [Cache IP]:3128
 icp_port 3130
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 no_cache deny QUERY
 cache_mem 256 MB
 cache_swap_low 90
 cache_swap_high 95
 maximum_object_size 4096 KB
 minimum_object_size 0 KB
 maximum_object_size_in_memory 8 KB
 cache_dir ufs /usr/local/squid/var/cache 20240 16 256
 cache_access_log /var/log/squid/access.log
 cache_log /var/log/squid/cache.log
 cache_store_log /var/log/squid/store.log
 mime_table /usr/local/squid/etc/mime.conf
 pid_filename /var/run/squid.pid
 auth_param basic children 5
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours
 auth_param basic casesensitive off
 refresh_pattern ^ftp:   144020% 10080
 refresh_pattern ^gopher:14400%  1440
 refresh_pattern .   0   20% 4320
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443 563
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 563 # https, snews
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl myacl src [MyNET]
 http_access allow myacl
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 

Re: [squid-users] Re: my CPPUNIT is broken... ;-) ?

2006-03-18 Thread Linda W

Henrik Nordstrom wrote:

fre 2006-03-17 klockan 19:31 -0800 skrev Linda W:

Mystery solved.

My shell _expanded_ control sequences by default in echo (echo \1 becomes
echo ^A).


Apparently there are literals in the configure script like \\1 \\2 that
were trying to echo a literal '\1' into a sed script.  Instead it was
echoed in as a control-A.


Hmm.. so you have replaced /bin/sh with something else than a UNIX
shell? Or was it your /bin/echo being different?

---
It was the echo builtin in bash, compiled to adhere to
the System V standard, as in some implementations of ksh and
on other Unix systems.  See
http://ou800doc.caldera.com/en/man/html.1/echo.1.html.



Am I misremembering, or aren't there systems where expanded echo is the default?


If so then the GNU autoconf people have not run into it yet..

---
Well that could be because the feature was extended in BASH.
The original standard requires \0 before an octal number consisting of 1-3
digits.  This required \0 to invoke the special decoding.

Bash added the feature to allow dropping of the leading
0,  accepting strings: \0nnn, \nnn, and \xHH.  I'm guessing that
most bash users run in a shell that has expansion turned off by default or
this would have come up before.  I am leaning toward
thinking this is a case of Bash implementing an incompatible and conflicting
extension (by allowing the dropping of the leading 0 of an octal sequence).
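The two conventions can be seen side by side with POSIX printf, which uses the strict XSI rule for %b arguments but accepts the zero-less form in its format string (a minimal sketch; the value 101 is just an illustrative octal code for 'A'):

```shell
# printf's %b directive follows the XSI echo rules: octal needs the leading 0.
xsi=$(printf '%b' '\0101')   # \0nnn -> octal 101 -> 'A'
# printf format strings, by contrast, accept \nnn without the leading 0.
fmt=$(printf '\101')         # \nnn -> octal 101 -> 'A'
printf '%s %s\n' "$xsi" "$fmt"
```

An echo that likewise drops the leading-0 requirement collides with scripts that rely on \1 being passed through literally, which is exactly what broke configure's sed expressions.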


Good you found what it was, and a way around the problem. Even better if
you would enlighten us what it was you were using causing the problem,
and how you worked around it.

---
For now, I disabled expansion, since (as you note) it isn't compatible
with existing scripts like autoconf.   Meanwhile, I've submitted a suggestion
to go back to requiring the full prefix \0 before possible interpretation as
octal.  It seems cleanest if they require \0 before either an octal or hex
encoding, with hex using \0xH[H] and octal using \0N[N[N]].

Linda


Re: [squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Henrik Nordstrom
lör 2006-03-18 klockan 19:23 + skrev Mark Stevens:

 I will perform further testing against the redirect rules. However,
 what I am finding strange is that the problem only happens after
 downtime. To resolve the problem I used an alternative redirect_rules
 file with the same squid.conf file, and the looping errors go away.

How your redirector processes its rules or not is not a Squid
issue/concern. Squid relies on the redirector of your choice to do its
job.

Maybe your redirector is relying on some DNS lookups or something else
not yet available at the time you start Squid in the system bootup
procedure? I have seen people bitten by such issues in the past.
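If the redirector does depend on name resolution at startup, one workaround is to make the boot script wait until lookups succeed before launching Squid. A sketch (the hostname, the getent check, and the retry count are assumptions, not from this thread):

```shell
#!/bin/sh
# Retry a command until it succeeds, giving up after a fixed number of attempts.
wait_for() {
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1   # give up after ~30 seconds
    sleep 1
  done
}

# Hypothetical use in the boot script, before starting Squid:
#   wait_for getent hosts rules.example.com && squid -D
```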

Regards
Henrik




[squid-users] Non-cached pages with squid 2.5-stable13

2006-03-18 Thread Stefan Neufeind

Hi,

at the moment I am trying to run squid 2.5-stable13 from Fedora Core 4,
hand-patched with collapsed-forwarding support and epoll. Those two
additional features work quite well. But currently I see some
pages which unfortunately are not cached by squid. I wonder why - and
whether it might have to do with the Vary headers the webserver is sending.

A called script returns:

Date: ... (current date)
Server: Apache
Expires: ... (like date, approx 2min in the future)
Last-Modified: ... (shortly before Date)
Vary: Accept-Encoding
Content-Length: ...
Connection: close
Content-Type: text/html

The Vary-header is used to deliver gzip-compressed or non-compressed
content (compressed inside php) to the clients which do/don't support it.
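As a mental model of what Vary does to the cache (a conceptual sketch, not Squid's actual internal key format): the cache key must include the request's value for every header named in Vary, so gzip and non-gzip clients get separate cache entries.

```shell
# Illustrative only: derive a cache key from the URL plus the varied header value.
vary_key() {
  url=$1; accept_encoding=$2
  printf '%s|Accept-Encoding=%s' "$url" "$accept_encoding"
}

k_gzip=$(vary_key http://example.com/page gzip)
k_none=$(vary_key http://example.com/page "")
# The two variants are stored, and revalidated, independently.
```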

Though I _think_ everything should be fine, upon each request to squid
for this object squid includes an If-Modified-Since in its request
which is already more than 2 hours in the past - that might be the time
when squid was started and/or first tried to cache a copy of the page.

Both the squid and the webserver clocks are in sync. Is there a reason why
squid does not cache the content, and why it might be using an IMS that
far back in the past? Static content is cached fine - but that does not
include Vary or Expires headers. I've seen notes from (afaik) squid
2.5-stable11 that pages with Vary headers are now cached. Could it
be that in some special cases they are not yet?

By the way: The squid is running in httpd_accel mode with proxy, in
front of several webservers (which are in sync) defined via cache_peer.


Any hints to track this down would be welcome!


Yours sincerely,
  Stefan Neufeind




Re: [squid-users] Question

2006-03-18 Thread Bill Jacqmein
Might be easier to handle this as a policy matter instead of a technology
problem. Set up the AUP and have HR provide the muscle to get it acknowledged.

On 3/17/06, Richard J Palmer [EMAIL PROTECTED] wrote:
 I'm wondering if Squid can help in this situation...

 We have a setup where we want to set a range of PCs to use Squid to
 allow access to websites, etc.

 However, what we ideally want is for users, on their first web request
 to the internet, to be greeted with a page where they have to accept an AUP
 (in reality all I want is for a page to appear; once they have
 viewed it they can access any other sites they want, without future
 issues, at least for a set time if that is easier).

 Now I guess this could be done as some form of authentication, but I would
 be grateful for any thoughts here (or pointers if it has been discussed;
 I can't see anything obvious).

 I'm open to thoughts
 --
 Richard Palmer
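One technology-side sketch, not discussed in this thread: an external ACL helper that remembers which client IPs have already been shown the page, combined with deny_info to redirect first-time clients to the AUP. The helper path, page URL, and ttl are all hypothetical.

```
# squid.conf fragment (sketch): send first-time clients to the AUP page.
external_acl_type session ttl=60 %SRC /usr/local/bin/session-helper
acl existing_session external session
deny_info http://www.example.com/aup.html existing_session
http_access deny !existing_session
```

The deny_info page is returned when the last ACL on the matching deny line fails, which is what sends a client without a recorded session to the AUP.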




[squid-users] Re: echo enhancement leads to confused legacy script tools...

2006-03-18 Thread Linda W

Henrik Nordstrom wrote:

lör 2006-03-18 klockan 14:15 -0800 skrev Linda W:


Bash added the feature to allow dropping of the leading
0,  accepting strings: \0nnn, \nnn, and \xHH.  I'm guessing that
most bash users run in a shell that has expansion turned off by default or
this would have come up before.


the xpg_echo bash option..

Let's see what this does to configure, shall we.. oh, yes, it fails
miserably with this bash option set.

Please send this to the autoconf maintainers as well. Probably they can
add a rule detecting this kind of system and falling back on an
alternative, somewhat slower echo method..

Regards
Henrik


I believe bash is broken with regard to treating any number after
\ as an octal value.  The shell specifications require the leading
zero for an octal constant, and I don't think this problem would arise
if that were fixed.  I can forward the info to them anyway.
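The behaviour difference is easy to reproduce from bash itself (a minimal sketch; requires bash for shopt):

```shell
# Default bash: the echo builtin passes backslash sequences through literally.
plain=$(bash -c 'echo "\0101"')
# With xpg_echo set, the same echo decodes \0nnn as an octal escape.
xpg=$(bash -c 'shopt -s xpg_echo; echo "\0101"')
printf '%s vs %s\n' "$plain" "$xpg"
```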





Re: [squid-users] Non-cached pages with squid 2.5-stable13

2006-03-18 Thread Mark Elsen
On 3/19/06, Stefan Neufeind [EMAIL PROTECTED] wrote:
 Hi,

 at the moment I did try to run a squid 2.5-stable13 from Fedora Core 4,
 handpatched with collapsed-forwarding-support and epoll. Those two
 additional features work quite well. But currently I experience some
 pages which unfortunately are not cached by squid.
...
...

  http://www.ircache.net/cgi-bin/cacheability.py

  M.