Re: [squid-users] weight for what?

2009-04-30 Thread Amos Jeffries

Tech W. wrote:

Hi,

I saw this comment in squid.conf for cache_peer directive:

 use 'weight=n' to affect the selection of a peer
 during any weighted peer-selection mechanisms.
 The weight must be an integer; default is 1,
 larger weights are favored more.

so, what's "weighted peer-selection mechanisms"? Thanks.



  round-robin
  weighted-round-robin
  carp
  closest-only

All the peering methods that say 'weight' in their description or depend 
on network metrics for selection.
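
For illustration, a hedged sketch of weight in play (the hostnames are 
placeholders; weight=n only matters to the weighted selection methods 
listed above):

  # parent1 gets roughly twice the share that parent2 does
  cache_peer parent1.example.com parent 3128 3130 round-robin weight=2
  cache_peer parent2.example.com parent 3128 3130 round-robin weight=1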


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Pass the IP?

2009-04-30 Thread Amos Jeffries

detari...@aol.com wrote:

Hi everyone,

I was talking to my ISP admin the other day. Our network is configured 
so that everyone gets an external public IP on every port except port 
80, because he has Squid set up there to cache the pages and speed 
things up. He says we can't get public IPs on port 80, because all the 
traffic goes through Squid and it changes the IP.


Is there a way to configure Squid so that:
1.) it still works as a cache;
2.) it doesn't change the IP of the user as he surfs?

If so, I'd be glad if someone could point it out, so I could forward it 
to my admin. I just hate going to sites and finding that my IP is on 
some blacklist because someone on my network did something bad (or 
rather, some malicious software did it for them without them even 
knowing). I already have a public IP - please help me get it on port 80 
as well.


The easy way:
 Get your admin to enable the Via and X-Forwarded-For headers. Most 
blacklists I know of use them to determine the true IP behind a proxy.

 http://wiki.squid-cache.org/SquidFaq/SecurityPitfalls
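
For reference, a minimal squid.conf sketch of the easy way (both 
directives exist in current Squid releases and default to on, so this is 
mostly a matter of making sure nobody has turned them off):

  # announce the proxy hop and pass the real client IP upstream
  via on
  forwarded_for on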

The hard way:
  TPROXY IP address spoofing.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Parent Proxies Not Failing Over when Primary Parent is Down.

2009-04-30 Thread Amos Jeffries

Dean Weimer wrote:

-Original Message-
From: crobert...@gci.net [mailto:crobert...@gci.net] 
Sent: Thursday, April 30, 2009 2:13 PM

To: squid-users@squid-cache.org
Subject: Re: [squid-users] Parent Proxies Not Failing Over when Primary
Parent is Down.

Dean Weimer wrote:

I have a parent/child proxy configuration I have been testing. It is
working, except that some sites are not failing over to the second
parent when the primary parent goes down.

In the test scenario I have 2 parent proxies and one child proxy
server; the parents are each configured twice using an alias IP address.
This is done to load balance the majority of web traffic using round
robin, yet let sites that we have identified as not working correctly
with load balancing go out through a single parent proxy.
  


Since Squid 2.6 there has been a parent selection method called 
"sourcehash", which will keep a client-to-parent-proxy relationship 
until the parent fails.


I considered this, but was concerned that after a proxy server failure
the majority of my load would sit on one server, not taking advantage
of both links once the problem is resolved.


The load balanced traffic works as expected: the dead parent is
identified and ignored until it comes back online.  The traffic that
cannot be load balanced is all HTTPS (not sure whether HTTPS has
anything to do with the problem).  When I stop the parent proxy
10.50.20.7 (aka 10.52.20.7), the round-robin configuration is promptly
marked as dead.  However, a website that has already been connected to
and is in the NONBAL acl just returns the proxy error from the child:
"connect to (10.52.20.7) parent failed: connection denied".

Hmmm...  You might have to disable server_persistent_connections, or 
lower the value of persistent_request_timeout, to get a better response 
to a parent failure with your current setup.


I also considered this, but figured it would break some sites that are
working successfully with load balancing because they create a
persistent connection, and making the request timeout too low would
become annoying to the users.  Also, although the default is listed as 2
minutes, I noticed that even after as much as 5 minutes the connection
would not fail over.


  It will not mark the non-load-balanced parent dead; closing and
restarting the browser doesn't help.  It will change the status to dead,
however, if I connect to another site in the NONBAL acl.  Going back to
the first site, I can then connect, even though I have to log in again,
which is expected and is why these sites cannot be load balanced.

Does anyone have any ideas, short of writing some sort of test script,
that will cause the parent to be marked as dead when it fails, without
any user intervention?

Here is the cache peer configuration from the child proxy. FYI, I
added the 5 sec timeout to see if it had any effect; it didn't, except
for speeding up detection of the dead load-balanced proxy.

## Define Parent Caches
# Cache Peer Timeout
peer_connect_timeout 5 seconds
# Round Robin Caches
cache_peer 10.50.20.7 parent 8080 8181 name=DSL2BAL round-robin
cache_peer 10.50.20.6 parent 8080 8181 name=DSL1BAL round-robin
# Non Load Balanced caches
cache_peer 10.52.20.7 parent 8080 8181 name=DSL2
cache_peer 10.52.20.6 parent 8080 8181 name=DSL1

## Define Parent Cache Access rules
# Access Control Lists
acl NONBAL dstdomain "/usr/local/squid/etc/nonbal.dns.list"
# Rules for the Control Lists
cache_peer_access DSL2BAL allow !NONBAL
cache_peer_access DSL1BAL allow !NONBAL
cache_peer_access DSL2 allow NONBAL
cache_peer_access DSL1 allow NONBAL

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


Chris

I am currently doing some testing by creating access control lists for a
couple of nonexistent subdomains on our own domain.  These just fetch
the parent proxy's error page for a nonexistent domain, so the testing
shouldn't put unnecessary load on the internet links.  Each subdomain is
allowed through one of the non-balanced parents.  Accessing such a page
with my browser causes the parent to be marked dead.

I could look at writing a script that accesses these pages through the
child proxy every so many seconds to cause the parent to be marked as
dead.  It's kind of a hack, but hopefully it would keep the users from
having too much downtime in the event that one proxy goes down.


Since 2.6 there has also been a set of monitor* options to do this in 
various ways when ICP feedback is insufficient or not available.
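
For example (a sketch only: the probe URL is a placeholder, the interval 
and timeout are arbitrary, and the monitor* options are the Squid 
2.6/2.7 cache_peer ones):

  # fetch a test object through this parent every 30 seconds and mark
  # the peer dead when the probe fails or takes longer than 5 seconds
  cache_peer 10.52.20.7 parent 8080 8181 name=DSL2 monitorurl=http://example.com/alive.txt monitorinterval=30 monitortimeout=5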




It would probably be preferable, though, to query ICP directly and then
do a reconfigure on the child Squid to exclude that parent from its
configuration.  If anyone can tell me where to find information on how
to do an ICP query, that would save me some time and be greatly
appreciated.  In the meantime I will start searching or, worse yet, if
that fails, sniffing network traffic to write an application to mimic the
squid query.

Re: [squid-users] WCCP return method

2009-04-30 Thread Amos Jeffries

kgardenia42 wrote:

On 4/30/09, Ritter, Nicholas  wrote:

* WCCP supports a return method for packets which the web-cache
decides to reject/return.  Does squid support this?  I see that the
return method can be configured in squid but is the support for
returning actually there?

I dunno about this one.


Does anyone know the answer to this?  I'd just like to know what squid
can do when it comes to "return method".


Only what's documented.

http://www.squid-cache.org/Doc/config/wccp2_return_method/


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] squid + auth + safari + SSL = TCP_DENIED/407

2009-04-30 Thread Amos Jeffries

Gavin McCullagh wrote:

Hi,

one of our Mac people has been complaining that he can't get into certain
SSL sites.  I borrowed a Mac and found that this does indeed seem to be a
problem, though apparently not on all SSL sites (a login on www.bebo.com
is an example that does give the problem).  I'm not sure of this, but it
looks like it might be where there's a POST request over SSL.

I noticed this:

http://www2.tr.squid-cache.org/mail-archive/squid-users/200709/0109.html

so I tried turning off authentication and it worked.

I'm using squid-2.6-stable18, which I'm well aware is old.  Is this a bug
in Squid or Safari, or is it known for sure?  Does anyone know if an
upgrade to Squid would sort it out?

If not, I may have to put in an ACL either to allow:

 - all Macs to be unauthenticated 
 - all SSL to be unauthenticated

 - all requests with safari browser strings using SSL to be unauthenticated

or something like that.  Has anyone had to do this?  Is there a known "best
way"?

Thanks in advance,
Gavin



This one seems like a browser bug, as Henrik says in that post you found.

The only part Squid plays in any of this is to open a CONNECT tunnel and 
shove data bits between browser and server. Auth credentials, challenges, 
and POST content going through the tunnel are not touched by Squid in 
any way.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Recording traffic

2009-04-30 Thread Jeff Pang

Luis Daniel Lucio Quiroz:

Hi Squids,

Suppose I have a Squid configured with HTTPS for caching.  I wonder
whether it is possible to record the HTML traffic, and if so, how?




You may analyse it from the access.log or capture it with an external 
tool like iptraf.


Regards.


Re: [squid-users] Squid DENY access "www.Symantec.com"

2009-04-30 Thread Jeff Pang

panagiotis polychronopoulos wrote:


Hi to everyone
I have a problem: Squid does not permit access to the www.symantec.com 
portal because it cannot resolve the DNS name. How can I solve this mystery?
 


Use a working DNS server for Squid, or create a hosts entry for that domain name.
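
A sketch of both options (the resolver address is a placeholder, and the 
IP for the hosts entry would need to be obtained some other way):

  # squid.conf: query a resolver that can actually see the name
  dns_nameservers 192.168.0.1

  # or pin the name in the hosts file Squid reads (see the hosts_file
  # directive), with a line such as:
  #   203.0.113.10 www.symantec.com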

Regards.


[squid-users] weight for what?

2009-04-30 Thread Tech W.

Hi,

I saw this comment in squid.conf for cache_peer directive:

 use 'weight=n' to affect the selection of a peer
 during any weighted peer-selection mechanisms.
 The weight must be an integer; default is 1,
 larger weights are favored more.

so, what's "weighted peer-selection mechanisms"? Thanks.

Regards.





[squid-users] Recording username for secure connection

2009-04-30 Thread molybtek

We have squid running as an authenticating proxy using squid_ldap_auth. 
In the access log, for normal connections, it records the username for
most log entries. However, for secure connections, the username field is
blank. Is there any way to get Squid to record the username for those
secure connections as well? 



[squid-users] Recording username for tunnel connection

2009-04-30 Thread molybtek

We have squid running as an authenticating proxy using squid_ldap_auth. 
In the access log, for normal connections, it records the username for
most log entries. However, for secure connections, the username field is
blank. Is there any way to get Squid to record the username for those
secure connections as well? 



[squid-users] Squid DENY access "www.Symantec.com"

2009-04-30 Thread panagiotis polychronopoulos


Hi to everyone
I have a problem: Squid does not permit access to the www.symantec.com 
portal because it cannot resolve the DNS name. How can I solve this mystery?
 
any suggestion would help
thanks in advance
 


[squid-users] Pass the IP?

2009-04-30 Thread detariael

Hi everyone,

I was talking to my ISP admin the other day. Our network is configured 
so that everyone gets an external public IP on every port except port 
80, because he has Squid set up there to cache the pages and speed 
things up. He says we can't get public IPs on port 80, because all the 
traffic goes through Squid and it changes the IP.


Is there a way to configure Squid so that:
1.) it still works as a cache;
2.) it doesn't change the IP of the user as he surfs?

If so, I'd be glad if someone could point it out, so I could forward it 
to my admin. I just hate going to sites and finding that my IP is on 
some blacklist because someone on my network did something bad (or 
rather, some malicious software did it for them without them even 
knowing). I already have a public IP - please help me get it on port 80 
as well.


Regards,
Detariael



[squid-users] Recording traffic

2009-04-30 Thread Luis Daniel Lucio Quiroz
Hi Squids,

Suppose I have a Squid configured with HTTPS for caching.  I wonder
whether it is possible to record the HTML traffic, and if so, how?


TIA
LD


Re: [squid-users] WCCP return method

2009-04-30 Thread kgardenia42
On 4/30/09, Ritter, Nicholas  wrote:
>
> * WCCP supports a return method for packets which the web-cache
> decides to reject/return.  Does squid support this?  I see that the
> return method can be configured in squid but is the support for
> returning actually there?
>
> I dunno about this one.

Does anyone know the answer to this?  I'd just like to know what squid
can do when it comes to "return method".

Thanks.


RE: [squid-users] Parent Proxies Not Failing Over when Primary Parent is Down.

2009-04-30 Thread Dean Weimer
-Original Message-
From: crobert...@gci.net [mailto:crobert...@gci.net] 
Sent: Thursday, April 30, 2009 2:13 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Parent Proxies Not Failing Over when Primary
Parent is Down.

Dean Weimer wrote:
> I have a parent/child proxy configuration I have been testing. It is
working, except that some sites are not failing over to the second
parent when the primary parent goes down.
>
> In the test scenario I have 2 parent proxies and one child proxy
server; the parents are each configured twice using an alias IP address.
This is done to load balance the majority of web traffic using round
robin, yet let sites that we have identified as not working correctly
with load balancing go out through a single parent proxy.
>   

Since Squid 2.6 there has been a parent selection method called 
"sourcehash", which will keep a client-to-parent-proxy relationship 
until the parent fails.

I considered this, but was concerned that after a proxy server failure
the majority of my load would sit on one server, not taking advantage
of both links once the problem is resolved.

> The load balanced traffic works as expected: the dead parent is
identified and ignored until it comes back online.  The traffic that
cannot be load balanced is all HTTPS (not sure whether HTTPS has
anything to do with the problem).  When I stop the parent proxy
10.50.20.7 (aka 10.52.20.7), the round-robin configuration is promptly
marked as dead.  However, a website that has already been connected to
and is in the NONBAL acl just returns the proxy error from the child:
"connect to (10.52.20.7) parent failed: connection denied".

Hmmm...  You might have to disable server_persistent_connections, or 
lower the value of persistent_request_timeout, to get a better response 
to a parent failure with your current setup.

I also considered this, but figured it would break some sites that are
working successfully with load balancing because they create a
persistent connection, and making the request timeout too low would
become annoying to the users.  Also, although the default is listed as 2
minutes, I noticed that even after as much as 5 minutes the connection
would not fail over.

>   It will not mark the non-load-balanced parent dead; closing and
restarting the browser doesn't help.  It will change the status to dead,
however, if I connect to another site in the NONBAL acl.  Going back to
the first site, I can then connect, even though I have to log in again,
which is expected and is why these sites cannot be load balanced.
>
> Does anyone have any ideas, short of writing some sort of test script,
that will cause the parent to be marked as dead when it fails, without
any user intervention?
>
> Here is the cache peer configuration from the child proxy. FYI, I
added the 5 sec timeout to see if it had any effect; it didn't, except
for speeding up detection of the dead load-balanced proxy.
>
> ## Define Parent Caches
> # Cache Peer Timeout
> peer_connect_timeout 5 seconds
> # Round Robin Caches
> cache_peer 10.50.20.7 parent 8080 8181 name=DSL2BAL round-robin
> cache_peer 10.50.20.6 parent 8080 8181 name=DSL1BAL round-robin
> # Non Load Balanced caches
> cache_peer 10.52.20.7 parent 8080 8181 name=DSL2
> cache_peer 10.52.20.6 parent 8080 8181 name=DSL1
>
> ## Define Parent Cache Access rules
> # Access Control Lists
> acl NONBAL dstdomain "/usr/local/squid/etc/nonbal.dns.list"
> # Rules for the Control Lists
> cache_peer_access DSL2BAL allow !NONBAL
> cache_peer_access DSL1BAL allow !NONBAL
> cache_peer_access DSL2 allow NONBAL
> cache_peer_access DSL1 allow NONBAL
>
> Thanks,
>  Dean Weimer
>  Network Administrator
>  Orscheln Management Co

Chris

I am currently doing some testing by creating access control lists for a
couple of nonexistent subdomains on our own domain.  These just fetch
the parent proxy's error page for a nonexistent domain, so the testing
shouldn't put unnecessary load on the internet links.  Each subdomain is
allowed through one of the non-balanced parents.  Accessing such a page
with my browser causes the parent to be marked dead.

I could look at writing a script that accesses these pages through the
child proxy every so many seconds to cause the parent to be marked as
dead.  It's kind of a hack, but hopefully it would keep the users from
having too much downtime in the event that one proxy goes down.

It would probably be preferable, though, to query ICP directly and then
do a reconfigure on the child Squid to exclude that parent from its
configuration.  If anyone can tell me where to find information on how
to do an ICP query, that would save me some time and be greatly
appreciated.  In the meantime I will start searching or, worse yet, if
that fails, sniffing network traffic to write an application to mimic the
squid query.




Re: [squid-users] Writing Plugins for Squid

2009-04-30 Thread Parvinder Bhasin
Thanks Chris.  For some reason I never got the message in my mailbox.   
Perhaps something with my email filter.

Thanks a bunch.  Really appreciate it.

Cheers
Parvinder Bhasin

On Apr 30, 2009, at 12:02 PM, Chris Robertson wrote:


Parvinder Bhasin wrote:

Since I didn't get an answer to my last post,


You did get a response...

http://www.squid-cache.org/mail-archive/squid-users/200904/0736.html


I assume I have to code it myself.
Can someone point me to the right place where I can get some  
details on how to write plugins/helper apps for squid?


http://www.squid-cache.org/Doc/config/external_acl_type/



Thanks


Chris





Re: [squid-users] Parent Proxies Not Failing Over when Primary Parent is Down.

2009-04-30 Thread Chris Robertson

Dean Weimer wrote:

I have a parent/child proxy configuration I have been testing. It is working, 
except that some sites are not failing over to the second parent when the 
primary parent goes down.

In the test scenario I have 2 parent proxies and one child proxy server; the 
parents are each configured twice using an alias IP address.  This is done to 
load balance the majority of web traffic using round robin, yet let sites that 
we have identified as not working correctly with load balancing go out through 
a single parent proxy.
  


Since Squid 2.6 there has been a parent selection method called 
"sourcehash", which will keep a client-to-parent-proxy relationship 
until the parent fails.
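
As an illustrative sketch only (reusing the peer lines from the config 
below; sourcehash is a standard cache_peer option):

  # each client IP hashes to one parent and sticks with it until it dies
  cache_peer 10.52.20.7 parent 8080 8181 name=DSL2 sourcehash
  cache_peer 10.52.20.6 parent 8080 8181 name=DSL1 sourcehash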



The load balanced traffic works as expected: the dead parent is identified and 
ignored until it comes back online.  The traffic that cannot be load balanced 
is all HTTPS (not sure whether HTTPS has anything to do with the problem). 
When I stop the parent proxy 10.50.20.7 (aka 10.52.20.7), the round-robin 
configuration is promptly marked as dead.  However, a website that has already 
been connected to and is in the NONBAL acl just returns the proxy error from 
the child: "connect to (10.52.20.7) parent failed: connection denied".


Hmmm...  You might have to disable server_persistent_connections, or 
lower the value of persistent_request_timeout, to get a better response 
to a parent failure with your current setup.
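
A sketch of those two knobs (the values are examples, not 
recommendations; the defaults are on and 2 minutes respectively):

  # close server connections after each request, so a dead parent is
  # noticed on the next request instead of hiding behind a kept-alive
  # connection
  server_persistent_connections off
  # ...or just shorten how long an idle persistent connection is held
  persistent_request_timeout 30 seconds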



  It will not mark the non-load-balanced parent dead; closing and restarting 
the browser doesn't help.  It will change the status to dead, however, if I 
connect to another site in the NONBAL acl.  Going back to the first site, I can 
then connect, even though I have to log in again, which is expected and is why 
these sites cannot be load balanced.

Does anyone have any ideas, short of writing some sort of test script, that 
will cause the parent to be marked as dead when it fails, without any user 
intervention?

Here is the cache peer configuration from the child proxy. FYI, I added the 5 
sec timeout to see if it had any effect; it didn't, except for speeding up 
detection of the dead load-balanced proxy.

## Define Parent Caches
# Cache Peer Timeout
peer_connect_timeout 5 seconds
# Round Robin Caches
cache_peer 10.50.20.7 parent 8080 8181 name=DSL2BAL round-robin
cache_peer 10.50.20.6 parent 8080 8181 name=DSL1BAL round-robin
# Non Load Balanced caches
cache_peer 10.52.20.7 parent 8080 8181 name=DSL2
cache_peer 10.52.20.6 parent 8080 8181 name=DSL1

## Define Parent Cache Access rules
# Access Control Lists
acl NONBAL dstdomain "/usr/local/squid/etc/nonbal.dns.list"
# Rules for the Control Lists
cache_peer_access DSL2BAL allow !NONBAL
cache_peer_access DSL1BAL allow !NONBAL
cache_peer_access DSL2 allow NONBAL
cache_peer_access DSL1 allow NONBAL

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


Chris


RE: [squid-users] WCCP return method

2009-04-30 Thread Ritter, Nicholas


-Original Message-
From: kgardenia42 [mailto:kgardeni...@googlemail.com] 
Sent: Thursday, April 30, 2009 1:44 PM
To: squid-users@squid-cache.org
Subject: [squid-users] WCCP return method

My questions are :

* When the squid box has to proxy to the web-app, what is the
recommended way to stop a circular redirect i.e. I want to avoid the
squid box's proxied connection from getting intercepted by the
router's WCCP rules and looped back around to the squid box again.


Have the squid box on a dedicated sub-interface or physical interface,
run the "wccp redirect" statements on the local Ethernet interface and
use an "ip wccp redirect exclude" statement on the interface the squid
box is running on.

This is how I do it, and it works great.
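
In Cisco IOS terms the shape is roughly this (a sketch, not a tested 
config; the interface names are placeholders):

  ! LAN interface: intercept client web traffic
  interface FastEthernet0/0
   ip wccp web-cache redirect in
  ! interface the squid box sits on: never re-intercept its own requests
  interface FastEthernet0/1
   ip wccp redirect exclude in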



* WCCP supports a return method for packets which the web-cache
decides to reject/return.  Does squid support this?  I see that the
return method can be configured in squid but is the support for
returning actually there?

I dunno about this one.


Re: [squid-users] Writing Plugins for Squid

2009-04-30 Thread Chris Robertson

Parvinder Bhasin wrote:
Since I didn't get an answer to my last post, 


You did get a response...

http://www.squid-cache.org/mail-archive/squid-users/200904/0736.html


I assume I have to code it myself.
Can someone point me to the right place where I can get some details 
on how to write plugins/helper apps for squid?


http://www.squid-cache.org/Doc/config/external_acl_type/
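
A minimal sketch of wiring in an external ACL helper (the helper path 
and the choice of format tokens are illustrative, not a recipe):

  # launch the helper, feeding it the client IP and destination domain
  external_acl_type my_check %SRC %DST /usr/local/bin/my_helper
  acl custom external my_check
  http_access allow custom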



Thanks


Chris



Re: [squid-users] Squid for Apache Authentication

2009-04-30 Thread Chris Robertson

Tech W. wrote:

Hello,

I have Apache with directory authentication enabled.
For example, I have this auth config for Apache:


in httpd.conf:
<Directory /home/ftp/pub>
Options Indexes
AllowOverride AuthConfig
order allow,deny
allow from all
</Directory>


in .htaccess:
AuthName "shared files"
AuthType Basic
AuthUserFile /*/.password
Require valid-user


When someone accesses that directory on Apache, it opens a prompt window 
for entering a username and password.

But if I set a Squid reverse-proxy in front of Apache, when people access 
that same directory through Squid the prompt window is not opened, and 
people have no chance to enter a username and password.
  


Add "login=PASS" to your cache_peer line.


From http://www.squid-cache.org/Versions/v3/3.0/cfgman/cache_peer.html...


use 'login=PASS' if users must authenticate against
 the upstream proxy or in the case of a reverse proxy
 configuration, the origin web server.  This will pass
 the users credentials as they are to the peer.
 This only works for the Basic HTTP authentication scheme.
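
So the cache_peer line would look something like this (a sketch; the 
address, port and name are placeholders for wherever Apache listens):

  cache_peer 127.0.0.1 parent 80 0 no-query originserver login=PASS name=apache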




I'm using squid3.0-stable13 and Linux with kernel 2.6.
Please help. Thanks in advance.

Regards.


Chris


[squid-users] Parent Proxies Not Failing Over when Primary Parent is Down.

2009-04-30 Thread Dean Weimer
I have a parent/child proxy configuration I have been testing. It is working, 
except that some sites are not failing over to the second parent when the 
primary parent goes down.

In the test scenario I have 2 parent proxies and one child proxy server; the 
parents are each configured twice using an alias IP address.  This is done to 
load balance the majority of web traffic using round robin, yet let sites that 
we have identified as not working correctly with load balancing go out through 
a single parent proxy.

The load balanced traffic works as expected: the dead parent is identified and 
ignored until it comes back online.  The traffic that cannot be load balanced 
is all HTTPS (not sure whether HTTPS has anything to do with the problem). 
When I stop the parent proxy 10.50.20.7 (aka 10.52.20.7), the round-robin 
configuration is promptly marked as dead.  However, a website that has already 
been connected to and is in the NONBAL acl just returns the proxy error from 
the child: "connect to (10.52.20.7) parent failed: connection denied".  It 
will not mark the non-load-balanced parent dead; closing and restarting the 
browser doesn't help.  It will change the status to dead, however, if I connect 
to another site in the NONBAL acl.  Going back to the first site, I can then 
connect, even though I have to log in again, which is expected and is why these 
sites cannot be load balanced.

Does anyone have any ideas, short of writing some sort of test script, that 
will cause the parent to be marked as dead when it fails, without any user 
intervention?

Here is the cache peer configuration from the child proxy. FYI, I added the 5 
sec timeout to see if it had any effect; it didn't, except for speeding up 
detection of the dead load-balanced proxy.

## Define Parent Caches
# Cache Peer Timeout
peer_connect_timeout 5 seconds
# Round Robin Caches
cache_peer 10.50.20.7 parent 8080 8181 name=DSL2BAL round-robin
cache_peer 10.50.20.6 parent 8080 8181 name=DSL1BAL round-robin
# Non Load Balanced caches
cache_peer 10.52.20.7 parent 8080 8181 name=DSL2
cache_peer 10.52.20.6 parent 8080 8181 name=DSL1

## Define Parent Cache Access rules
# Access Control Lists
acl NONBAL dstdomain "/usr/local/squid/etc/nonbal.dns.list"
# Rules for the Control Lists
cache_peer_access DSL2BAL allow !NONBAL
cache_peer_access DSL1BAL allow !NONBAL
cache_peer_access DSL2 allow NONBAL
cache_peer_access DSL1 allow NONBAL

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



[squid-users] WCCP return method

2009-04-30 Thread kgardenia42
Hi,

I've been trying to get my head around a couple of concepts in WCCP,
and I think I'm missing something; lots of Googling hasn't helped
much so far.

Assuming the following setup :

* a LAN where the gateway is using WCCP2 (GRE) to catch traffic
destined for a given web-app ip address (using an access-list) and
forward it to a squid box
* the squid box is using iptables to catch traffic forwarded to it by
the router and redirect it to a local squid port

My questions are :

* When the squid box has to proxy to the web-app, what is the
recommended way to stop a circular redirect i.e. I want to avoid the
squid box's proxied connection from getting intercepted by the
router's WCCP rules and looped back around to the squid box again.

* WCCP supports a return method for packets which the web-cache
decides to reject/return.  Does squid support this?  I see that the
return method can be configured in squid but is the support for
returning actually there?

I'd be very grateful for your help.

Thanks.


[squid-users] Writing Plugins for Squid

2009-04-30 Thread Parvinder Bhasin
Since I didn't get an answer to my last post, I assume I have to code it  
myself.
Can someone point me to the right place where I can get some details  
on how to write plugins/helper apps for squid?


Thanks




Re: [squid-users] squid + auth + safari + SSL = TCP_DENIED/407

2009-04-30 Thread Banyan He
Gavin,

Can you attach the squid access log and openssl test output? Oh, and maybe
the error log too?

Thanks,

-- 
Banyan He
Network & Security Information System
ban...@rootong.com



On 5/1/09 12:11 AM, "Gavin McCullagh"  wrote:

> Hi,
> 
> one of our Mac people has been complaining that he can't get into certain
> SSL sites.  I borrowed a Mac and found that this does indeed seem to be a
> problem, though apparently not on all SSL sites (a login on www.bebo.com
> is an example that does give the problem).  I'm not sure of this, but it
> looks like it might be where there's a POST request over SSL.
> 
> I noticed this:
> 
> http://www2.tr.squid-cache.org/mail-archive/squid-users/200709/0109.html
> 
> so I tried turning off authentication and it worked.
> 
> I'm using squid-2.6-stable18, which I'm well aware is old.  Is this a bug in
> Squid or Safari, or is it known for sure?  Does anyone know if an upgrade
> to Squid would sort it out?
> 
> If not, I may have to put in an ACL either to allow:
> 
>  - all Macs to be unauthenticated
>  - all SSL to be unauthenticated
>  - all requests with safari browser strings using SSL to be unauthenticated
> 
> or something like that.  Has anyone had to do this?  Is there a known "best
> way"?
> 
> Thanks in advance,
> Gavin
> 
> 




[squid-users] squid + auth + safari + SSL = TCP_DENIED/407

2009-04-30 Thread Gavin McCullagh
Hi,

one of our Mac people has been complaining that he can't get into certain
SSL sites.  I borrowed a Mac and found that this does indeed seem to be a
problem, though apparently not on all SSL sites (a login on www.bebo.com
is an example that does give the problem).  I'm not sure of this, but it
looks like it might be where there's a POST request over SSL.

I noticed this:

http://www2.tr.squid-cache.org/mail-archive/squid-users/200709/0109.html

so I tried turning off authentication and it worked.

I'm using squid-2.6-stable18, which I'm well aware is old.  Is this a bug
in Squid or Safari, or is it known for sure?  Does anyone know if an
upgrade to Squid would sort it out?

If not, I may have to put in an ACL either to allow:

 - all Macs to be unauthenticated 
 - all SSL to be unauthenticated
 - all requests with safari browser strings using SSL to be unauthenticated

or something like that.  Has anyone had to do this?  Is there a known "best
way"?

Thanks in advance,
Gavin



Re: [squid-users] Getting error msgs when trying to start squid

2009-04-30 Thread Henrique M.


Amos Jeffries-2 wrote:
> 
> Henrique M. wrote:
>> 
>> Amos Jeffries-2 wrote:
>>>   acl localhost src 192.168.2.5 # 192.168.2.5 Server IP, 192.168.2.1
>>> Modem
>>> IP
>>>
>>> "localhost" is a special term used in networking to mean the IPs
>>> 127.0.0.1
>>> and sometimes ::1 as well. When defining an ACL for 'public' squid box
>>> IPs
>>> its better to use a different name. The localnet definition covers the
>>> same public IPs anyway so redefining it is not a help here.
>>>
>> 
>> So what do you suggest? Should I just erase this line or change it?
> 
> Make it back to:
>acl localhost src 127.0.0.1
> 
>> 
>> 
>> Amos Jeffries-2 wrote:
>>>   http_access allow all
>>>
>>> This opens the proxy to access from any source on the internet at all.
>>> Zero inbound security. Not good for a long-term solution. I'd suggest
>>> testing with that as a "deny all" to make sure we don't get a
>>> false-success.
>>>
>> 
>> Will do that. How about the "icp_access"? What does this command do?
>> Should
>> I leave it "allow all"?
> 
> Allows other machines which have your squid set as a cache_peer to send 
> ICP requests to you and get replies back. Current Squid defaults it to off 
> for extra security. Unless you need it, do: icp_access deny all
> 
>> 
>> 
>> joost.deheer wrote:
>>> Define "doesn't work". Clients get an error? Won't start? Something
>>> else?
>>>
>> 
>> Squid seems to start, but clients can't browse the internet. They get
>> the default error msg that the browser shows when it can't load the
>> website. This actually got me thinking: am I setting up the browser
>> correctly? I'm typing the server's IP for the proxy address and "3128"
>> for the proxy port; is that correct?
> 
> I believe so yes.
>   * Make sure it's set for HTTP, HTTPS, FTP, and Gopher, but not SOCKS 
> proxy settings (some may not be present).
> 
>   * Check the testing client machine can get to squid (ping or such).
> Check the cache.log to see if Squid is failing or busy at the time you 
> are checking.
> 
>   * make sure that squid is actually running and opened port 3128.
>"netstat -antup | grep 3128" or similar commands should say.
> 
>> 
>> 
>> joost.deheer wrote:
>>> You could also try to start the proxy with 'squid -N' to start squid as
>>> a
>>> console application instead of  in daemon mode. The  errors should then
>>> appear on your screen.
>>>
>> 
>> How should I do that? I tried to start squid with "/etc/init.d/squid -N
>> start" and "/etc/init.d/squid -N", but it didn't work.  I ended up
>> finding out that I could check squid's status, and to my surprise I got
>> the message "* squid is not running.".  So how do I start squid so it
>> will show me the error msgs on screen?
> 
> Just "squid -N -Y -d 1" shoudl work.  If not find the path to *bin/squid 
> and run with the full file path/name.
>   Usually "locate bin/squid" says where squid actually is.
> 
> Amos
> -- 
> Please be using
>Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
>Current Beta Squid 3.1.0.7
> 
> 

Thanks a lot Amos, Squid is now working on the server and on client machines.
I figured out that squid wasn't running because there were a few folders that
were not available for squid to use (probably cache folders).

This is actually something that I would like to understand. Does squid cache
files and webpages automatically or do I have to add a few command lines to
enable it?

How about the amount of RAM used by squid and the amount of disk
available for cache? Do I have to set these up or not? If not, what are
the default values?



[squid-users] Squid for Apache Authentication

2009-04-30 Thread Tech W.

Hello,

I have Apache with directory authentication enabled.
For example, I have this auth config for Apache:


in httpd.conf:
<Directory /home/ftp/pub>
Options Indexes
AllowOverride AuthConfig
order allow,deny
allow from all
</Directory>


in .htaccess:
AuthName "shared files"
AuthType Basic
AuthUserFile /*/.password
Require valid-user


When someone accesses that directory on Apache, it opens a prompt window 
for entering a username and password.

But if I set a Squid reverse-proxy in front of Apache, when people access 
that same directory through Squid the prompt window is not opened, and 
people have no chance to enter a username and password.

I'm using squid3.0-stable13 and Linux with kernel 2.6.
Please help. Thanks in advance.

Regards.






Re: [squid-users] Multiple different parents and no default

2009-04-30 Thread Markus Meyer
Amos Jeffries schrieb:
> 
> Oh understood. You were thinking outside Squid :)

Sometimes it's worth the risk...

> Well, balance that extra work by the clients (once per domain) against the
> extra work your own server does doing (and waiting for) the re-writing
> once per request.

Hmm, right. I should keep that in mind for when my proxies are dying. Just
in case, I'll keep a copy of your suggestion.

Cheers,
Markus





Re: [squid-users] External C program

2009-04-30 Thread Amos Jeffries

Julien Philibin wrote:

On Wed, Apr 29, 2009 at 11:15 PM, Amos Jeffries  wrote:

Very interesting Bharath !!!


Yes thank you. You have identified the issue and we can now tell Julien
exactly what he has to do.


What would be your advice to get my program working?!


Use fgets(). The scanf() family apparently does not handle EOF in the way
needed.

Thus to work your code must be:

 char line[8196];
 char ip[45];
 char url[8196];

 ip[0] = '\0';
 url[0] = '\0';

 while( fgets(line, 8196, stdin) != NULL ) {
     snscanf(sbuf, 8196, "%s %s", ip, url);
     /* happy joy */
 }

Amos



Hey that's smart! :)

I'm going to go for that and if things go wrong, I'll let you know ...


It is slightly wrong. The sbuf there should be 'line'.
I hope your compiler catches that also.

And please do use snscanf instead of scanf. It will save you from many 
security and segfault bugs over your coding time.
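
Put together, a minimal standalone helper loop might look like the 
sketch below. It assumes an external ACL style helper that reads one 
"ip url" pair per line and answers OK or ERR, and it uses standard 
sscanf with width limits since snscanf is internal to the Squid sources:

 #include <stdio.h>

 int main(void) {
     char line[8196];
     char ip[46];
     char url[8196];

     /* Squid writes one request per line; fgets returns NULL on EOF. */
     while (fgets(line, sizeof(line), stdin) != NULL) {
         ip[0] = url[0] = '\0';
         /* Width limits keep sscanf from overrunning the buffers. */
         if (sscanf(line, "%45s %8195s", ip, url) == 2) {
             /* ... inspect ip and url and decide the answer here ... */
         }
         /* Answer Squid and flush, or the reply sits in stdio's buffer. */
         printf("OK\n");
         fflush(stdout);
     }
     return 0;
 }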




Thank you everyone!

btw: Amos, any idea why I randomly get 127.0.0.1 instead of my real
IP in the logs?



As someone said earlier, 127.0.0.1 is one of the IPs assigned to your 
machine. It is a special IPv4 address reserved as "localhost". Every 
machine with networking has that same IP for private, non-Internet 
traffic.


Most machines will have two of these: 127.0.0.1 for IPv4 and ::1 for 
IPv6. They are identical in use and purpose within their own IP protocols.



Why you get it randomly I don't know. I expect it to show up 
consistently for requests the OS identifies as local-machine only, and 
never for requests the OS thinks are remote/global.


If your testing uses localhost:3128 as the proxy, it will connect to 
127.0.0.1 privately. If it uses the public IP, or a name resolving to the 
public IP, it will use a global public connection.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Next problem: squid ignoring my vary header

2009-04-30 Thread Amos Jeffries

Stefan Hartmann wrote:

Hi,

the next problem arising when trying content-compression: Squid seems to
ignore my "Vary: Accept-Encoding" header. From the logs (log_mime_hdrs
on, line breaks inserted for better reading):

1240918625.344  1 111.111.111.111 TCP_MEM_HIT/200 28399 GET
http://our.server.de/library/js/ajax_v4-6-21.js - NONE/-
application/x-javascript

[
Host: our.server.de
User-Agent: Nutscrape/1.0 (CP/M; 8-bit)
Cache-Control: max-age=259200
]
[
HTTP/1.0 200 OK
X-Powered-By: ASP.NET
Vary: Accept-Encoding
Expires: Tue, 28 Apr 2009 13:32:59 GMT
Date: Tue, 28 Apr 2009 11:32:59 GMT
Content-Type: application/x-javascript
Content-Length: 28115
Content-Encoding: gzip
X-Cache: HIT from accel3
Connection: close
]

No Accept-Encoding in the request headers, but I get a HIT from a
(previously cached) request with "Accept-Encoding: gzip", and so
Content-Encoding: gzip in the reply to my request without
Accept-Encoding (and yes, the response was gzipped).

Any idea? Could this be the result of a missing validator (ETag), so that
I will have to enable "broken_vary_encoding"?

Regards,
Stefan



I have heard this before here. What version of Squid is this?

broken_vary_encoding copes with servers sending gzip data with a plain 
text type.


The request you have shown does not specify any limits on the data type 
accepted back. Squid treats it as though anything the client's version of 
HTTP can receive is acceptable.


I believe if you added an Accept-Encoding: header with values other than 
the 'gzip' which Squid has stored, you would get back something else for 
that encoding.
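
For example, replaying the request from the log above with an explicit 
encoding preference (a hypothetical raw HTTP/1.0 request) should make 
Squid return or fetch the uncompressed variant:

 GET http://our.server.de/library/js/ajax_v4-6-21.js HTTP/1.0
 Host: our.server.de
 Accept-Encoding: identity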


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7



Re: [squid-users] Multiple different parents and no default

2009-04-30 Thread Amos Jeffries

Markus Meyer wrote:

Amos Jeffries schrieb:

Markus, if you are altering the URL anyway you might find this a simpler
 way to do the whole thing:

create two sub-domains:

Thanks, I'll keep that in mind. I prefer to use one domain for this. At
the moment there are only two webservers, but in the future there might
be seven or eight servers. That would mean eight DNS queries to contact
only one proxy.

Huh? No. dstdomain involves no DNS queries. The client still only does
one to find the RR it wants.


Ok, this I don't understand. When I have two subdomains which point to
one server, the clients have to make two DNS queries where it all could
work with one DNS query. And the fewer the queries, the faster the page
loads.


Oh understood. You were thinking outside Squid :)

Well, balance that extra work by the clients (once per domain) against 
the extra work your own server does doing (and waiting for) the 
re-writing once per request.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Multiple different parents and no default

2009-04-30 Thread Amos Jeffries

Markus Meyer wrote:

Amos Jeffries schrieb:

Markus, if you are altering the URL anyway you might find this a simpler
 way to do the whole thing:

create two sub-domains:

Thanks, I'll keep that in mind. I prefer to use one domain for this. At
the moment there are only two webservers, but in the future there might
be seven or eight servers. That would mean eight DNS queries to contact
only one proxy.

Huh? No. dstdomain involves no DNS queries. The client still only does
one to find the RR it wants.


Ok, this I don't understand. When I have two subdomains which point to
one server, the clients have to make two DNS queries where it all could
work with one DNS query. And the fewer the queries, the faster the page
loads.


Oh understood. You were thinking outside Squid :)

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Multiple different parents and no default

2009-04-30 Thread Markus Meyer
Amos Jeffries schrieb:
>>> Markus, if you are altering the URL anyway you might find this a simpler
>>>  way to do the whole thing:
>>>
>>> create two sub-domains:
>> Thanks, I'll keep that in mind. I prefer to use one domain for this. At
>> the moment there are only two webservers, but in the future there might
>> be seven or eight servers. That would mean eight DNS queries to contact
>> only one proxy.
> 
> Huh? No. dstdomain involves no DNS queries. The client still only does
> one to find the RR it wants.

Ok, this I don't understand. When I have two subdomains which point to
one server, the clients have to make two DNS queries where it all could
work with one DNS query. And the fewer the queries, the faster the page
loads.

Cheers,
Markus





Re: [squid-users] Multiple different parents and no default

2009-04-30 Thread Markus Meyer
Chris Robertson schrieb:
>>  With the above setup I tested with
>> requests for both webservers, http://myproxy/kalimba_img/blubba.jpg and
>> http://myproxy/jallah_img/whatever.jpg, and "myproxy" always asked the
>> first listed peer "jallah.image".
>>   
> 
> Here's what I would do to work around this quirk (since you don't want
> to have a separate externally accessible subdomain per server)...

Ok, I'm picky. But your suggestions are working like a charm ;)

> ...so you are rewriting the destination domain (which you can then use
> to control which cache_peer is accessed) and removing the extraneous
> directory.

I never would've guessed the right order in which to use the rules.
Thanks a lot.

>> Hope this time I did a better job in explaining...
>>   
> 
> Much.  Hopefully I did a decent job of crafting a workable solution.

Jep. *thumbsup*

Cheers,
Markus


