Re: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Amos Jeffries

Marcel Grandemange wrote:

Good day.

I'm wondering if anybody could shed some light on this for me.

I've had to downgrade a machine of mine due to bugs in stable10; however,
since the downgrade I'm noticing a HELL of a lot of TCP_SWAPFAIL_MISS/200
messages in access.log. And I do mean an extreme amount.

Any ideas?



Do you have the same 64/32 bit settings and --with-large-files on both 
builds?
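
(For what it's worth, each binary reports its own build options, so the two
can be compared directly; the path here is just a guess:

  /usr/local/sbin/squid -v

The output includes the full ./configure option list used for that build.)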


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


RE: [squid-users] acl deny versus acl allow?

2008-11-18 Thread Roger Thomas
Hi,
Ok, well you were all right!  Unfortunately I didn't know that the allow ACL
had to be above the deny.

I've used this and it works like a charm.

acl misc_allow_list url_regex -i /etc/squid/block/misc_allow.list
http_access allow misc_allow_list

acl misc_block_list url_regex -i /etc/squid/block/misc_block.list
http_access deny misc_block_list


Thanks all!

Roger


-Original Message-
From: Jeff Gerard [mailto:[EMAIL PROTECTED] 
Sent: 18 November 2008 07:31
To: squid-users@squid-cache.org
Subject: Re: [squid-users] acl deny versus acl allow?

My apologies...I misinterpreted what you said. I thought you meant deny
should not be used at all.

- Original Message -
From: Amos Jeffries 
Date: Monday, November 17, 2008 9:33 pm
Subject: Re: [squid-users] acl deny versus acl allow?
To: Jeff Gerard 
Cc: squid-users@squid-cache.org

 Jeff Gerard wrote:
  Can you clarify this? I have looked through the FAQ and there
  is plenty of reference to using deny and I can't see any
  mention of replacing deny with allow.

 
 You can write either:
 http_access deny something
 or
 http_access allow something
 
 not both on the same line.
 
 To quote straight from that FAQ page:
 
 Q: How do I allow my clients to use the cache?
 A: Define an ACL that corresponds to your client's IP addresses.
 Next, allow those clients in the 
 http_access list.
 
 For example:
 acl myclients src 172.16.5.0/24
 http_access allow myclients
 
 
 and more relevant to your stated example:
 
 
 Q: How do I implement an ACL ban list?
 A: ..., Another way is to deny access to specific servers which 
 are 
 known to hold recipes.
 
 For example:
 acl Cooking2 dstdomain www.gourmet-chef.com
 http_access deny Cooking2
 http_access allow all
 
 
 Amos
 
  Thanks
  
  The word 'deny' is fully replaced with the word 
 'allow'.
 
  Please read and understand the FAQ on ACL before continuing 
 with 
  your 
  testing:
  http://wiki.squid-cache.org/SquidFaq/SquidAcl
 
  Amos
  -- 
  Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2
 
  
  --- 
  Jeff Gerard
 
 
 -- 
 Please be using
 Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
 Current Beta Squid 3.1.0.2
 

--- 
Jeff Gerard



Re: [squid-users] acl deny versus acl allow?

2008-11-18 Thread Henrik Nordstrom
On Mon, 2008-11-17 at 15:25 +, Roger Thomas wrote:
 Hi,
 
 This is my first time posting to the mailing list, but I just wanted to know
 whether anyone knew how to do the below:
 
 I use the following to block a list of words from URLs:
 
 acl misc_block_list url_regex -i /etc/squid/block/misc_block.list
 http_access deny misc_block_list
 
 I am trying to allow certain words, so for example, the word sex is in the
 block list, but I want the word sussex to be allowed.
 I have created another file called misc_allow.list but I’m not sure how to
 tell it to allow.  I presumed something like this:
 
 acl misc_allow_list url_regex -i /etc/squid/block/misc_allow.list
 http_access allow deny misc_allow_list

Hint 1: You can negate acls with !

Hint 2: You only need a single deny line.
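
A minimal sketch of what the two hints add up to, reusing the ACL names from
your config:

acl misc_allow_list url_regex -i /etc/squid/block/misc_allow.list
acl misc_block_list url_regex -i /etc/squid/block/misc_block.list
http_access deny misc_block_list !misc_allow_list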

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] helper-protocol setting under Squid 3 for NTLM

2008-11-18 Thread Henrik Nordstrom
On Mon, 2008-11-17 at 08:48 -0800, Mark Krawec wrote:

 I'm running Squid 3-STABLE10 and Samba 3.2.4.
 
 My auth_param statement looks like:
 
 auth_param ntlm program /usr/local/squid/libexec/ntlm_auth -b dc01 dc02
 dc03

Ouch... see the mailing list discussions regarding this helper.

 authentication is working but in the past there was a helper-protocol
 setting
 
 under Squid 2 along the lines of:
 
 /usr/local/samba/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp.
 
 Is there an equivalent for Squid 3?

Exactly the same.
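
In other words, a sketch reusing the helper line quoted above unchanged:

auth_param ntlm program /usr/local/samba/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp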

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Henrik Nordstrom
On Tue, 2008-11-18 at 09:47 +0200, Marcel Grandemange wrote:
 Good day.
 
 I'm wondering if anybody could shed some light on this for me.
 
 I've had to downgrade a machine of mine due to bugs in stable10; however,
 since the downgrade I'm noticing a HELL of a lot of TCP_SWAPFAIL_MISS/200
 messages in access.log. And I do mean an extreme amount.
 
 Any ideas?

Sounds like your swap.state has become corrupted, maybe an old version not
matching the cache content.

Are there any messages in cache.log?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] About squid ICAP implementation

2008-11-18 Thread Henrik Nordstrom
On Sat, 2008-11-15 at 05:51 +0900, Takashi Tochihara wrote:

 I think to send Allow: 204 & Preview:, squid must buffer not the
 whole message, but the whole *Previewed* message. (part of the message)

Allow: 204 is not related to previews. It tells the ICAP server that
it's OK to respond with 204 at any time, even outside of the preview.

The preview is signalled by the Preview: header, and implicitly
requests the ICAP server to respond with 204 or 100 at the end of the
preview to continue the transaction, or possibly with a synthesised response
replacing the original message.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] problem with reply_body_max_size and external ACL

2008-11-18 Thread Henrik Nordstrom
On Fri, 2008-11-14 at 02:05 +1300, Amos Jeffries wrote:

 Based on this and a few other occurrences I'm beginning to suspect that 
 credential re-checks are missing on all reply controls.

Also, reply_body_max_size is a fast ACL lookup.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: FW: [squid-users] Squid Stops Responding Sporadically

2008-11-18 Thread Henrik Nordstrom
On Thu, 2008-11-13 at 19:40 +0200, Marcel Grandemange wrote:

 Under further investigation system log file presented following:
 
 Nov 13 19:37:21 thavinci kernel: pid 66367 (squid), uid 100: exited on
 signal 6 (core dumped)
 Nov 13 19:37:21 thavinci squid[66118]: Squid Parent: child process 66367
 exited due to signal 6
 Nov 13 19:37:24 thavinci squid[66118]: Squid Parent: child process 66370
 started

What does cache.log say?

Squid FAQ "Sending bug reports to the Squid team":
http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e67911becaabb8c95a34d576d

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid in chroot jail reconfigure/rotate FATAL errors: SOLVED

2008-11-18 Thread Henrik Nordstrom
On Fri, 2008-11-14 at 16:41 +0100, Rudi Vankemmel wrote:
 I have seen quite some postings indicating errors when issuing a
 squid -k reconfigure or squid -k rotate from within a chroot jail.

-k rotate should work fine in a chroot, but -k reconfigure requires a
bit of a dual filesystem layout and relaxed permissions to work.

The reason for this is that Squid permanently drops all root permissions
when chrooted, to prevent a possible chroot breakout in case of
compromise, but the config file is still read as root before chrooting
(another security measure, making it harder for a possible attacker to
gain access to sensitive config material).

To be able to use -k reconfigure you must set things up so that all config
files are accessible within the chroot as your cache_effective_user
(usually done by giving one of its groups read permission to the
files), and also accessible using the same path outside the chroot
(some symlinking is required for this).
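
A rough sketch, with purely hypothetical paths (chroot at /chroot/squid,
cache_effective_user squid, config kept under etc/squid inside the chroot):

  chgrp -R squid /chroot/squid/etc/squid    # readable by the effective user's group
  chmod -R g+rX  /chroot/squid/etc/squid
  ln -s /chroot/squid/etc/squid /etc/squid  # the same path also resolves outside the chroot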

Regards
Henrik



signature.asc
Description: This is a digitally signed message part


Re: [squid-users] squid_ldap_auth and passwords in clear text

2008-11-18 Thread Henrik Nordstrom
On Fri, 2008-11-14 at 10:31 -0600, Johnson, S wrote:

 I just got the squid_ldap_auth working ok on my segment but when
 watching the protocol analyzer I see that the auth requests against the
 AD are coming in as clear text passwords.  Is there any way we can
 encrypt the LDAP domain requests?

By AD do you refer to Microsoft AD? In that case, use NTLM authentication
instead of LDAP.

You can also TLS-encrypt the LDAP communication, but this does not
protect the credentials sent by browsers to Squid, only the
Squid-to-LDAP communication.
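
A sketch of the TLS variant, assuming your squid_ldap_auth build has the -Z
(StartTLS) option; the base DN, search filter and server name are made-up
placeholders:

auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -Z -b dc=example,dc=local -f sAMAccountName=%s -h dc01.example.local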

Regards
Henrik




signature.asc
Description: This is a digitally signed message part


RE: [squid-users] very basic question on enforcing use of proxy

2008-11-18 Thread Henrik Nordstrom
On Sat, 2008-11-15 at 14:24 -0800, Gregori Parker wrote:
 You could enforce a proxy.pac file via global policy, or, depending on
 your network equipment, you may be able to do policy-based routing
 (route by port) and/or even WCCP... there are several ways to get
 Squid in between your users and their HTTP traffic that I would
 recommend exploring before doing transparent-mode anything.

Both policy routing and WCCP are examples of how to configure the router
side of transparent interception.

Regards
Henrik



signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Re: squid_ldap_auth and passwords in clear text

2008-11-18 Thread Henrik Nordstrom
On Sun, 2008-11-16 at 10:48 -0800, Chuck Kollars wrote:

 Eavesdropping on all network traffic from any connection used to be a big 
 problem when network hubs repeated all traffic everywhere. Although Ethernet 
 has changed hugely, the old paranoia remains. Any modern device is 
 a switch (not a hub) and only directs traffic to the one port it's 
 destined for, so nobody else can eavesdrop.

It's usually almost as easy to eavesdrop on selected traffic in a
switched environment; it only requires a small amount of extra
preparation to get the traffic flowing in your direction.

 Of course even with switches you should take some reasonable precautions:
  1) Ensure whatever you do to get your sniffer to work is inaccessible to 
 users. 

Usually the steps taken by a network admin to run a sniffer are very
different from an attacker's. A serious network admin uses a dedicated
station for the purpose, connected to a mirror port on the switch; an
attacker uses a compromised station or server (or, in the rare case of
physical access, plugs his own gear into a free or borrowed network socket).

  2) Keep all network infrastructure physically inaccessible, perhaps by 
 locking the wiring closets.

That doesn't help when there is a compromised station on the network, unless
you both configure the switch to lock ports to MAC addresses and use smart
ARP filtering.

  3) Restrict (password protect and more) and monitor remote access to all 
 network infrastructure devices. 

As above.

  4) Keep all servers (Squid, etc.) physically inaccessible.

As above.

  5) Severely restrict (or disallow altogether) remote access to all servers 
 (ex: only SSH and never as root and only with a public/private key). 

Agreed.

  6) Avoid using those cheap mini-hubs (often 5-port) unless you're sure 
 your model really function as switches despite their name. 

Not sure it's very relevant... most do function as switches despite
their price, but just don't expect to be able to push a full matrix of
traffic over them.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Henrik Nordstrom
On Tue, 2008-11-18 at 21:14 +1300, Amos Jeffries wrote:

 Do you have the same 64/32 bit settings and --with-large-files on both 
 builds?

Didn't we make the cache and swap.state format large-files independent
in Squid-3?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid and Radius authentication

2008-11-18 Thread Henrik Nordstrom
On Wed, 2008-11-12 at 21:45 -0600, Johnson, S wrote:
 I'm trying to get the squid_radius_auth working and have tried to
 manually connect to my Microsoft radius server.  I cannot get an ok
 for a response when manually testing the connection.  Although, I can
 see the attempts in my Microsoft radius server log so I know I'm
 hitting it.  I have a feeling it's my configuration in my Microsoft
 radius server.  I've dug around and cannot find any articles on the
 setup for the radius server side; just the squid side (which again I
 think is working ok).  Does anyone have information on this or
 suggestions to try?

There isn't very much. The RADIUS server needs to be configured to accept
normal obfuscated plain-text authentication as defined in the RADIUS
protocol specifications (Access-Request with the User-Password
attribute), and both need to be configured with the same shared secret.

squid_radius_auth does not support synthesized CHAP-MD5 authentication.
Contributions adding such support are welcome; it may make it easier to
interoperate with some RADIUS servers, but probably not MS AD (what I
mean is squid_radius_auth calculating a CHAP response based on the
received plain-text credentials).
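
For the Squid side only, a sketch (file name, server name and secret are
placeholders; the helper is assumed to take -f pointing at a small config
file with server/secret lines):

  # /usr/local/squid/etc/squid_radius.conf
  #   server  radius.example.local
  #   secret  mysharedsecret

auth_param basic program /usr/local/squid/libexec/squid_radius_auth -f /usr/local/squid/etc/squid_radius.conf
auth_param basic children 5
auth_param basic realm Squid proxy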

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Marcel Grandemange
 Good day.
 
 I'm wondering if anybody could shed some light on this for me.
 
 I've had to downgrade a machine of mine due to bugs in stable10;
 however, since the downgrade I'm noticing a HELL of a lot of
TCP_SWAPFAIL_MISS/200
 messages in access.log. And I do mean an extreme amount.
 
 Any ideas?

Sounds like your swap.state has become corrupted, maybe an old version not
matching the cache content.

How and why would this happen? The box hasn't been powered off in months.
Also, it's the first time something like this has happened.
So far I'm guessing it was the upgrade to stable 10 that mucked things up.
Personally I've never had so many issues with any particular version of
Squid.


Are there any messages in cache.log?


Nothing really relevant.
The closest is:

2008/11/18 09:30:04| Version 1 of swap file without LFS support detected...
2008/11/18 09:30:04| Rebuilding storage in /mnt/cache1 (DIRTY)
2008/11/18 09:30:04| Version 1 of swap file without LFS support detected...
2008/11/18 09:30:04| Rebuilding storage in /mnt/cache1 (DIRTY)
2008/11/18 09:30:10| Version 1 of swap file without LFS support detected...
2008/11/18 09:30:10| Rebuilding storage in /mnt/cache2 (DIRTY)
2008/11/18 09:30:10| Version 1 of swap file without LFS support detected...
2008/11/18 09:30:10| Rebuilding storage in /mnt/cache2 (DIRTY)
2008/11/18 09:30:10| Version 1 of swap file without LFS support detected...
2008/11/18 09:30:10| Rebuilding storage in /usr/local/squid/cache (DIRTY)
2008/11/18 09:30:10| Using Least Load store dir selection
2008/11/18 09:30:10| Set Current Directory to /usr/local/squid/cache
2008/11/18 09:30:10| Version 1 of swap file without LFS support detected...
2008/11/18 09:30:10| Rebuilding storage in /usr/local/squid/cache (DIRTY)
2008/11/18 09:30:10| Using Least Load store dir selection
2008/11/18 09:30:10| Set Current Directory to /usr/local/squid/cache

And a crap load of...

2008/11/18 11:50:02| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=15122432downloa
ded=30408704left=2564653056corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:50:26| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=15187968downloa
ded=30408704left=2564505600corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:51:47| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=15417344downloa
ded=30408704left=2563948544corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:52:36| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=15613952downloa
ded=31457280left=2563489792corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:53:11| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=15695872downloa
ded=31457280left=2563227648corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:53:45| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=1592downloa
ded=31457280left=2562965504corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:53:56| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=15826944downloa
ded=31457280left=2562883584corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:56:20| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=16171008downloa
ded=32505856left=2561425408corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:56:59| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=16302080downloa
ded=32505856left=2560999424corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 HTTP/1.1}
2008/11/18 11:58:13| WARNING: unparseable HTTP header field {GET
/announce?info_hash=%5d%e3G%f5%00%05%8aN%bbQ%93R%40%ab%c5%0b6U%fd%21peer_id
=-UT1800-%25.%12c%26%95%b9%cc%ce%deH%9fport=45582uploaded=16531456downloa
ded=32505856left=2560655360corrupt=1048576key=F8BA4737numwant=200compac
t=1no_peer_id=1 

RE: FW: [squid-users] Squid Stops Responding Sporadically

2008-11-18 Thread Marcel Grandemange

 Under further investigation system log file presented following:
 
 Nov 13 19:37:21 thavinci kernel: pid 66367 (squid), uid 100: exited on signal 6 (core dumped)
 Nov 13 19:37:21 thavinci squid[66118]: Squid Parent: child process 66367 exited due to signal 6
 Nov 13 19:37:24 thavinci squid[66118]: Squid Parent: child process 66370 started

What does cache.log say?

Unfortunately I could not experiment and fault-find on this box for very long, as
it's a production machine.
I simply reverted to stable 9 and the issues went away.

However, I'm now experiencing different issues with the cache contents, which I
believe were caused by stable10.

Squid FAQ "Sending bug reports to the Squid team":
http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e67911becaabb8c95a34d576d

Regards
Henrik



RE: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Marcel Grandemange
 Good day.
 
 I'm wondering if anybody could shed some light on this for me.
 
 I've had to downgrade a machine of mine due to bugs in stable10; however,
 since the downgrade I'm noticing a HELL of a lot of TCP_SWAPFAIL_MISS/200
 messages in access.log. And I do mean an extreme amount.
 
 Any ideas?
 

Do you have the same 64/32 bit settings and --with-large-files on both 
builds?

Yup, identical: I used FreeBSD ports both to upgrade to stable 10 and to
downgrade, so the same config was used.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.2



Re: [squid-users] Multiple site example

2008-11-18 Thread Henrik Nordstrom

On Fri, 2008-11-14 at 13:24 -0800, Ramon Moreno wrote:

 How do I configure this parameter for 3 sites while using the same
 port? I am guessing, but would it be something like this:
 http_port 80 accel defaultsite=bananas.mysite.com vhost
 http_port 80 accel defaultsite=apples.mysite.com vhost
 http_port 80 accel defaultsite=oranges.mysite.com vhost

Just one of them. Pick your preferred one or, if you can't decide, use
just vhost alone.
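
In other words, a single line such as (whichever default you prefer):

http_port 80 accel defaultsite=bananas.mysite.com vhost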

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Henrik Nordstrom
On Tue, 2008-11-18 at 12:01 +0200, Marcel Grandemange wrote:

 How and why would this happen? The box hasn't been powered off in months.
 Also, it's the first time something like this has happened.
 So far I'm guessing it was the upgrade to stable 10 that mucked things up.
 Personally I've never had so many issues with any particular version of
 Squid.

As Amos already asked, were the two versions compiled in the same manner?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Amos Jeffries

Henrik Nordstrom wrote:

On Tue, 2008-11-18 at 21:14 +1300, Amos Jeffries wrote:

Do you have the same 64/32 bit settings and --with-large-files on both 
builds?


Didn't we make the cache and swap.state format large-files independent
in Squid-3?


Not 3.0 that I know of.
Certainly not different between stable9 and stable10.

The only piece of s10 that touched the filesystem would possibly have
reduced files being saved with negative lengths, not added unreadable
files anywhere.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] problem with reply_body_max_size and external ACL

2008-11-18 Thread Amos Jeffries

Henrik Nordstrom wrote:

On Fri, 2008-11-14 at 02:05 +1300, Amos Jeffries wrote:

Based on this and a few other occurrences I'm beginning to suspect that 
credential re-checks are missing on all reply controls.


Also, reply_body_max_size is a fast ACL lookup.



That would be it. Thanks Henrik.

Razvan, you may be able to get this to work then by adding the ACL test 
to an http_reply_access line as well.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] About squid ICAP implementation

2008-11-18 Thread Takashi Tochihara
Hi Henrik,

From: Henrik Nordstrom [EMAIL PROTECTED]
Subject: Re: [squid-users] About squid ICAP implementation
Date: Tue, 18 Nov 2008 09:34:51 +0100

 On Sat, 2008-11-15 at 05:51 +0900, Takashi Tochihara wrote:
 
  I think to send Allow: 204 & Preview:, squid must buffer not the
  whole message, but the whole *Previewed* message. (part of the message)
 
 Allow: 204 is not related to previews. It tells the ICAP server that
 it's OK to respond with 204 at any time, even outside of the preview.
 
 The preview is signalled by the Preview: header, and implicitly
 requests the ICAP server to respond with 204 or 100 at the end of the
 preview to continue the transaction, or possibly with a synthesised response
 replacing the original message.

You are right.

In the case where the client sends Preview & Allow: 204, and the server first
responds 100 Continue and then (as a result) responds 204 No
Content, squid must buffer the whole message.

I understand what you said. Thank you!

best regards,

-- Takashi Tochihara



[squid-users] MaxConn ACL Directive

2008-11-18 Thread Nyamul Hassan

Hi,

I want to detect if any of my clients are using NAT on their end and serving
multiple PCs.  While such detection is very difficult, I think the MaxConn
ACL directive seems to be a good way of minimizing the impact.  But I'm not
sure how many concurrent connections would be an acceptable value.  Could
you provide any suggestions?
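
For reference, the directive takes a single per-client threshold; a sketch
with 20 as a purely arbitrary placeholder value:

acl natted_clients maxconn 20      # matches clients with more than 20 open connections
http_access deny natted_clients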


Regards
HASSAN



[squid-users] squid 3.0 + POST method + reqmod

2008-11-18 Thread Philipp
Hi

I've been testing Squid's icap client (Squid 3.0Stable10) together with a
trial license of Kaspersky's kav4proxy version 5.5.51.

On specific websites I get a status 400 from the icap server when POST is
used together with icap reqmod.

Of course one could just deny the POST method for reqmod, or just run
respmod while disabling reqmod. So, there is a workaround.
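
For the record, a rough sketch of that workaround using Squid 3.0's ICAP
directives (the service name and URL below are made up, not kav4proxy's
real ones):

acl POSTreq method POST
icap_enable on
icap_service av_req reqmod_precache 0 icap://127.0.0.1:1344/av/reqmod
icap_class av_req_class av_req
icap_access av_req_class deny POSTreq
icap_access av_req_class allow all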

The issue is reproducible on these webpages:
http://www.jobs.ch/suche/Electronic-Mechanics-Engineering/72/0 and then
select something from the 'Select region' bar.
http://www.brack.ch -- click on the 'Anmelden' button

I made packet dumps of the failed reqmod and compared them to RFC 3507.
The client's reqmod looks sane to me. I do not understand why it results
in a status 400.

If interested I can attach the dumps in a later mail.

Thanks
Philipp






RE: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Marcel Grandemange
 How and why would this happen? The box hasn't been powered off in months.
 Also, it's the first time something like this has happened.
 So far I'm guessing it was the upgrade to stable 10 that mucked things up.
 Personally I've never had so many issues with any particular version of
 Squid.

As Amos already asked, were the two versions compiled in the same manner?

Yup, identical: I used FreeBSD ports both to upgrade to stable 10 and to
downgrade, so the same config was used.

Regards
Henrik



Re: [squid-users] Age header

2008-11-18 Thread mSQL dba



--- On Tue, 11/18/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:

 From: Henrik Nordstrom [EMAIL PROTECTED]

 
 Based on how long the object has been in the cache, and
 received Age and
 Date headers.
 

Thanks. If there is no received Age header, then how is the Age value calculated?
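
For reference, HTTP/1.1 (RFC 2616, section 13.2.3) computes it roughly as
below; with no received Age header, age_value is simply 0, so the result is
driven by the Date header plus the time spent in the cache:

  apparent_age           = max(0, response_time - date_value)
  corrected_received_age = max(apparent_age, age_value)    # age_value = 0 without an Age header
  corrected_initial_age  = corrected_received_age + (response_time - request_time)
  current_age            = corrected_initial_age + (now - response_time)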


  


[squid-users] Problems POST-Method on Squid 3

2008-11-18 Thread hdkutz
Hello List,
I'm having problems with my Squid 3 on CentOS.
If I try to use the POST method (e.g. webmail, Bugzilla) the proxy returns:

Read Timeout
No Error

I have no idea why this is happening.

Here's my Config:
snip
http_port 172.25.1.40:80
http_port 127.0.0.1:3128
hierarchy_stoplist cgi-bin ?
visible_hostname proxy.mycompany.com
coredump_dir /var/spool/squid
high_memory_warning 3000 MB
cachemgr_passwd secret all
cache_mgr [EMAIL PROTECTED]
memory_pools off
cache_mem 1024 MB
cache_swap_low 90
cache_swap_high 95
cache_effective_user squid
cache_dir ufs /var/spool/squid 20 16 256
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
pid_filename /var/log/squid/squid.pid
dns_defnames on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .   0   20% 4320
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255 
acl internal_if src 172.25.1.40/255.255.255.255 
acl kutz src 172.25.63.152/255.255.255.255 172.25.63.134/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 8443 # psync-https
acl SSL_ports port 12120#
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl QUERY urlpath_regex cgi-bin \?
acl snmppublic snmp_community public
acl mysys src 172.25.46.46/255.255.255.255
acl support.microsoft.com dstdomain support.microsoft.com
acl our_networks src 172.25.0.0/16 172.16.0.0/16 62.143.254.0/24 80.69.108.0/24
acl myspecial dstdomain .myspecial.com
acl ausnahme1 dst 172.25.22.198/32 172.25.46.206/32 172.25.46.218/32
acl ausnahme2 url_regex ^http://some.url.com$
acl ausnahme3 url_regex ^http://some.url.com$
acl ausnahme4 url_regex ^http://some.url.com$
acl ausnahme5 url_regex ^http://some.url.com$
acl ausnahme6 url_regex ^http://some.url.com$
acl ausnahme7 url_regex ^http://some.url.com$
acl ausnahme8 url_regex ^http://some.url.com$
acl ausnahme9 url_regex ^http://some.url.com$
acl ausnahmeA url_regex ^http://some.url.com$
acl ausnahmeB url_regex ^http://some.url.com$
acl ausnahmeC url_regex ^http://some.url.com$
acl ausnahmeD url_regex ^http://some.url.com$
acl ausnahmeE url_regex ^http://some.url.com$
acl ausnahmeF url_regex ^http://some.url.com$
acl ausnahmeG url_regex ^http://some.url.com$
always_direct allow myspecial
acl purge method PURGE
#broken_vary_encoding allow apache
acl apache rep_header Server ^Apache
request_header_access Accept-Encoding deny support.microsoft.com
http_access allow purge localhost internal_if
#http_access deny  purge
http_access allow manager localhost kutz mysys
#http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
no_cache deny ausnahme1 
no_cache deny ausnahme2 
no_cache deny ausnahme3 
no_cache deny ausnahme4 
no_cache deny ausnahme5 
no_cache deny ausnahme6
no_cache deny ausnahme7
no_cache deny ausnahme8
no_cache deny ausnahme9
no_cache deny ausnahmeA
no_cache deny ausnahmeB
no_cache deny ausnahmeC
no_cache deny ausnahmeD
no_cache deny ausnahmeE
no_cache deny ausnahmeF
no_cache deny ausnahmeG
cache deny QUERY
http_access allow our_networks
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access deny all
snmp_port 3401
snmp_access allow snmppublic kerpsys
snmp_access allow snmppublic localhost
snmp_access deny all
snmp_incoming_address 0.0.0.0 
snmp_outgoing_address 255.255.255.255
snip

-- 
Han Solo:
Wonderful girl! Either I'm going to kill her
or I'm beginning to like her.


RE: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Dean Weimer
You might want to run make showconfig under each version of the port and verify 
that none of the configuration options have changed on the new version of the 
port.
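
For example (the ports directory name is a guess; use whichever port you
actually built from):

  cd /usr/ports/www/squid30 && make showconfig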

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Marcel Grandemange [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 18, 2008 6:57 AM
To: 'Henrik Nordstrom'
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

 How and why would this happen? The box hasn't been powered off in months.
 Also, it's the first time something like this has happened.
 So far I'm guessing it was the upgrade to stable 10 that mucked things up.
 Personally I've never had so many issues with any particular version of
 Squid.

As Amos already asked, were the two versions compiled in the same manner?

Yup, identical: I used FreeBSD ports both to upgrade to stable 10 and to
downgrade, so the same config was used.

Regards
Henrik



[squid-users] customize logformat to see header

2008-11-18 Thread zulkarnain
Hi,

I'm trying to modify logformat to display the headers for the following websites. My
purpose is to be able to use the correct pattern for refresh_pattern. Here are
my rules:

acl googlevideo url_regex -i googlevideo\.com
acl kaspersky url_regex -i kaspersky\.com
acl kaspersky-labs url_regex -i kaspersky-labs\.com
acl metacafe url_regex -i metacafe\.com
acl apple url_regex -i phobos\.apple\.com
acl pornhub url_regex -i pornhub\.com

logformat squid %ts.%03tu %6tr %a %Ss/%03Hs %st %rm %ru %un %Sh/%A %mt
logformat analisa %{%H:%M:%S}tl %-13a %-6st %03Hs %-17Ss %-24mt %-6tr %ru 
*REQ* *C:%{Cache-Control}h *P:%{Pragma}h *LMS:
%{Last-Modified}h *REP* *C:%{Cache-Control}h *P:%{Pragma}h 
*LMS:%{Last-Modified}h *Exp:%{Expires}h

access_log /var/log/squid/analisa.log analisa googlevideo kaspersky 
kaspersky-labs metacafe apple pornhub
access_log /var/log/squid/access.log squid

The rules above did not work. The file analisa.log is empty even after I 
accessed several websites above. Did I miss something? Any help would be 
greatly appreciated.

Rgds,
Zul




  


Re: [squid-users] Someone's using my cache?

2008-11-18 Thread [EMAIL PROTECTED]
I just wanted to say thanks for the replies on this. I have not forgotten nor 
am I putting your help in the trash bin, I have simply become overwhelmed with 
other tasks at this point. I will get back to this thread as soon as possible 
and as soon as I can start working on it so that I can try the suggested input.

Thanks again.

Mike



Re: [squid-users] Regex Problem - Squid 3.0STABLE10

2008-11-18 Thread Jeff Gerard
sweet...had to compile a newer version of PCRE and do a bit of symbolic linking 
but got it working!

Thanks!

PS...I like how you set your reply-to address to squid-users :)

- Original Message -
From: Henrik K 
Date: Monday, November 17, 2008 11:25 pm
Subject: Re: [squid-users] Regex Problem - Squid 3.0STABLE10
To: squid-users@squid-cache.org

 On Mon, Nov 17, 2008 at 03:00:06PM -0800, Jeff Gerard wrote:
  Thanks so much...I'll definitely give this a try...but...
  
  apparently I'm not sure what to do here..
  
  Should I simply 
  set LDFLAGS="-lpcreposix -lpcre"
  then run my ./configure?
  or??
 
 Right..
 
 export LDFLAGS="-lpcreposix -lpcre"
 ./configure ...
 
 And of course make sure you have the PCRE library installed
 (libpcre3-dev for Debian, etc.).
 
 

--- 
Jeff Gerard


[squid-users] squid over socks?? is possible?

2008-11-18 Thread SA Alfonso Baqueiro
Is it possible to configure Squid to access the Internet using a SOCKS5 server?

The configuration does not have a direct option to do this, so I tried
using tsocks, but
Squid returns this to the browser:

The following error was encountered:

Zero Sized Reply

Any idea how to solve the problem? Any help appreciated.

--


Re: [squid-users] Re: R: [squid-users] Connection to webmail sitesproblem using more than one parent proxy

2008-11-18 Thread Chris Robertson

Amos Jeffries wrote:


ICP is yet another very different way of choosing the peer to send 
through. It's always on by default so needs to be turned off for the 
methods that break with ICP selection.


Care to expound on this?  What cache_peer selection methods break with 
ICP enabled?  I'm not seeing anything regarding this in the 
documentation (http://www.squid-cache.org/Doc/config/cache_peer/)...




Amos


Chris


Re: [squid-users] error 401 when going via squid ???

2008-11-18 Thread Chris Robertson

Kinkie wrote:

Could you try a more recent version of squid?
I don't think that 2.6S4 supports proxying content when the server
only offers ntlm authentication


For what it's worth, any 2.6 (or 2.7) release should perform the 
required connection pinning to proxy NTLM authentication...


http://www.squid-cache.org/Versions/v2/2.6/RELEASENOTES.html#toc1

1. Key changes from squid 2.5
...
Support for proxying of Microsoft Integrated Login (NTLM & Negotiate)
connection-oriented authentication schemes, enabling access to servers
or proxies using such authentication methods.


Chris



Re: [squid-users] acl allow???

2008-11-18 Thread Chris Robertson

Roger Thomas wrote:

Hi,

This is my first time posting to the mailing list, but I just wanted to know
whether anyone knew how to do the below:

I use the following to block a list of words from URLs:

acl misc_block_list url_regex -i /etc/squid/block/misc_block.list
http_access deny misc_block_list

I am trying to allow certain words, so for example, the word sex is in the
block list, but I want the word sussex to be allowed.
I have created another file called misc_allow.list but I’m not sure how to
tell it to allow.  I presumed something like this:

acl misc_allow_list url_regex -i /etc/squid/block/misc_allow.list
http_access allow misc_allow_list
  


This ACL allows ANYONE to use your proxy to get to URLs that match your 
misc_allow_list (unless they are blocked earlier).


Better would be combining the two acls in one http_access line...

http_access deny misc_block_list !misc_allow_list

... which reads "block any request where the URL matches a regular
expression found in /etc/squid/block/misc_block.list UNLESS it also
matches a regular expression in /etc/squid/block/misc_allow.list". Just
be mindful of how regex matching affects your proxy performance.



this doesn’t work though.  It says: 


If anyone can help, I would really appreciate it!

Thank you all in advance,

Regards,

Roger

[EMAIL PROTECTED]
  


Chris


Re: [squid-users] customize logformat to see header

2008-11-18 Thread Chris Robertson

zulkarnain wrote:

Hi,

I'm trying to modify logformat to display the headers for the following websites. My
purpose is to be able to use the correct pattern for refresh_pattern. Here are
my rules:

acl googlevideo url_regex -i googlevideo\.com
acl kaspersky url_regex -i kaspersky\.com
acl kaspersky-labs url_regex -i kaspersky-labs\.com
acl metacafe url_regex -i metacafe\.com
acl apple url_regex -i phobos\.apple\.com
acl pornhub url_regex -i pornhub\.com
  


Better to use dstdomain.

acl googlevideo dstdomain .googlevideo.com
acl kaspersky dstdomain .kaspersky.com
...


logformat squid %ts.%03tu %6tr %a %Ss/%03Hs %st %rm %ru %un %Sh/%A %mt
logformat analisa %{%H:%M:%S}tl %-13a %-6st %03Hs %-17Ss %-24mt %-6tr %ru *REQ* 
*C:%{Cache-Control}h *P:%{Pragma}h *LMS:
%{Last-Modified}h *REP* *C:%{Cache-Control}h *P:%{Pragma}h 
*LMS:%{Last-Modified}h *Exp:%{Expires}h

access_log /var/log/squid/analisa.log analisa googlevideo kaspersky 
kaspersky-labs metacafe apple pornhub
  


According to http://www.squid-cache.org/Doc/config/access_log/*,  the 
ACLs are ANDed together, just like with http_access lines.  The only way 
something is going to be logged with this format is if the domain 
matches all of your url_regex lines.  
http://gooGLevideo.compornhub.COMandKAPersky-labs.comMetacafe.com-anythinggoeshere-phobos.apple.com...



You'll need one access_log line for each of the ACLs.
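
For example, a sketch with one matching ACL per access_log line (separate
files are used here only to keep each log unambiguous; the names mirror the
ones above):

access_log /var/log/squid/analisa-googlevideo.log analisa googlevideo
access_log /var/log/squid/analisa-kaspersky.log analisa kaspersky
access_log /var/log/squid/analisa-metacafe.log analisa metacafe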


access_log /var/log/squid/access.log squid

The rules above did not work. The file analisa.log is empty even after I 
accessed several websites above. Did I miss something? Any help would be 
greatly appreciated.

Rgds,
Zul
  


Chris

*Will log to the specified file ... those entries which match ALL the 
acl's specified (which must be defined in acl clauses). If no acl is 
specified, all requests will be logged to this file.


Re: [squid-users] Problems POST-Method on Squid 3

2008-11-18 Thread Amos Jeffries
 Hello List,
 I'm having problems with my Squid 3 on CentOS.
 If I try to use the POST method (e.g. webmail, Bugzilla) the proxy returns:

 Read Timeout
 No Error

This error indicates a network issue below Squid. The remote server has
been sent, and has accepted, the request but has not sent back any reply within
15 minutes.

In my experience this has always been a PMTU error somewhere on the
Internet between Squid and the server, combined with someone blocking ICMP.

Amos


 I have no idea why this is happening.

 Here's my Config:
 snip
 http_port 172.25.1.40:80
 http_port 127.0.0.1:3128
 hierarchy_stoplist cgi-bin ?
 visible_hostname proxy.mycompany.com
 coredump_dir /var/spool/squid
 high_memory_warning 3000 MB
 cachemgr_passwd secret all
 cache_mgr [EMAIL PROTECTED]
 memory_pools off
 cache_mem 1024 MB
 cache_swap_low 90
 cache_swap_high 95
 cache_effective_user squid
 cache_dir ufs /var/spool/squid 20 16 256
 access_log /var/log/squid/access.log squid
 cache_log /var/log/squid/cache.log
 cache_store_log /var/log/squid/store.log
 pid_filename /var/log/squid/squid.pid
 dns_defnames on
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern .   0   20% 4320
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl internal_if src 172.25.1.40/255.255.255.255
 acl kutz src 172.25.63.152/255.255.255.255 172.25.63.134/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443  # https
 acl SSL_ports port 8443 # psync-https
 acl SSL_ports port 12120#
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl QUERY urlpath_regex cgi-bin \?
 acl snmppublic snmp_community public
 acl mysys src 172.25.46.46/255.255.255.255
 acl support.microsoft.com dstdomain support.microsoft.com
 acl our_networks src 172.25.0.0/16 172.16.0.0/16 62.143.254.0/24
 80.69.108.0/24
 acl myspecial dstdomain .myspecial.com
 acl ausnahme1 dst 172.25.22.198/32 172.25.46.206/32 172.25.46.218/32
 acl ausnahme2 url_regex ^http://some.url.com$
 acl ausnahme3 url_regex ^http://some.url.com$
 acl ausnahme4 url_regex ^http://some.url.com$
 acl ausnahme5 url_regex ^http://some.url.com$
 acl ausnahme6 url_regex ^http://some.url.com$
 acl ausnahme7 url_regex ^http://some.url.com$
 acl ausnahme8 url_regex ^http://some.url.com$
 acl ausnahme9 url_regex ^http://some.url.com$
 acl ausnahmeA url_regex ^http://some.url.com$
 acl ausnahmeB url_regex ^http://some.url.com$
 acl ausnahmeC url_regex ^http://some.url.com$
 acl ausnahmeD url_regex ^http://some.url.com$
 acl ausnahmeE url_regex ^http://some.url.com$
 acl ausnahmeF url_regex ^http://some.url.com$
 acl ausnahmeG url_regex ^http://some.url.com$
 always_direct allow myspecial
 acl purge method PURGE
 #broken_vary_encoding allow apache
 acl apache rep_header Server ^Apache
 request_header_access Accept-Encoding deny support.microsoft.com
 http_access allow purge localhost internal_if
 #http_access deny  purge
 http_access allow manager localhost kutz mysys
 #http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 no_cache deny ausnahme1
 no_cache deny ausnahme2
 no_cache deny ausnahme3
 no_cache deny ausnahme4
 no_cache deny ausnahme5
 no_cache deny ausnahme6
 no_cache deny ausnahme7
 no_cache deny ausnahme8
 no_cache deny ausnahme9
 no_cache deny ausnahmeA
 no_cache deny ausnahmeB
 no_cache deny ausnahmeC
 no_cache deny ausnahmeD
 no_cache deny ausnahmeE
 no_cache deny ausnahmeF
 no_cache deny ausnahmeG
 cache deny QUERY
 http_access allow our_networks
 http_access allow localhost
 http_access deny all
 http_reply_access allow all
 icp_access deny all
 snmp_port 3401
 snmp_access allow snmppublic kerpsys
 snmp_access allow snmppublic localhost
 snmp_access deny all
 snmp_incoming_address 0.0.0.0
 snmp_outgoing_address 255.255.255.255
 snip

 --
 Han Solo:
   Wonderful girl! Either I'm going to kill her
   or I'm beginning to like her.





Re: [squid-users] Regex Problem - Squid 3.0STABLE10

2008-11-18 Thread Amos Jeffries
 sweet...had to compile a newer version of PCRE and do a bit of symbolic
 linking but got it working!

 Thanks!

 PS...I like how you set your reply-to address to squid-users :)

Reply-All in the mailer. ;)

Amos


 - Original Message -
 From: Henrik K
 Date: Monday, November 17, 2008 11:25 pm
 Subject: Re: [squid-users] Regex Problem - Squid 3.0STABLE10
 To: squid-users@squid-cache.org

 On Mon, Nov 17, 2008 at 03:00:06PM -0800, Jeff Gerard wrote:
 Thanks so much...I'll definitely give this a try...but...
 
  apparently I'm not sure what to do here..
 
  Should I simply
  set LDFLAGS="-lpcreposix -lpcre"
  then run my ./configure?
  or??

 Right..

 export LDFLAGS="-lpcreposix -lpcre"
 ./configure ...

 And of course make sure you have the PCRE library installed
 (libpcre3-dev for Debian, etc.).



 ---
 Jeff Gerard





Re: [squid-users] customize logformat to see header

2008-11-18 Thread Amos Jeffries
 Hi,

 I'm trying to modify logformat to display the headers for the following
 websites. My purpose is to be able to use the correct pattern for
 refresh_pattern. Here are my rules:

 acl googlevideo url_regex -i googlevideo\.com
 acl kaspersky url_regex -i kaspersky\.com
 acl kaspersky-labs url_regex -i kaspersky-labs\.com
 acl metacafe url_regex -i metacafe\.com
 acl apple url_regex -i phobos\.apple\.com
 acl pornhub url_regex -i pornhub\.com

Please, use dstdomain for this type of matching. It's much faster than regex.


 logformat squid %ts.%03tu %6tr %a %Ss/%03Hs %st %rm %ru %un %Sh/%A %mt
 logformat analisa %{%H:%M:%S}tl %-13a %-6st %03Hs %-17Ss %-24mt %-6tr
 %ru *REQ* *C:%{Cache-Control}h *P:%{Pragma}h *LMS:
 %{Last-Modified}h *REP* *C:%{Cache-Control}h *P:%{Pragma}h
 *LMS:%{Last-Modified}h *Exp:%{Expires}h

 access_log /var/log/squid/analisa.log analisa googlevideo kaspersky
 kaspersky-labs metacafe apple pornhub
 access_log /var/log/squid/access.log squid

 The rules above did not work. The file analisa.log is empty even after I
 accessed several websites above. Did I miss something? Any help would be
 greatly appreciated.


The >h and <h bit goes before the {}.
For example:  %<h{Expires}


Amos




Re: [squid-users] Re: R: [squid-users] Connection to webmail sitesproblem using more than one parent proxy

2008-11-18 Thread Amos Jeffries
 Amos Jeffries wrote:

 ICP is yet another very different way of choosing the peer to send
 through. It's always on by default so needs to be turned off for the
 methods that break with ICP selection.

 Care to expound on this?  What cache_peer selection methods break with
 ICP enabled?  I'm not seeing anything regarding this in the
 documentation (http://www.squid-cache.org/Doc/config/cache_peer/)...


ICP is not compatible with:
  sourcehash
  userhash
  carp

It may 'unbalance' the following in a way favorable to higher response
speeds:
  round-robin
  weighted-round-robin
  icp
  closest-only
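
If one of those algorithms is wanted, ICP queries to that peer can be
switched off on the cache_peer line itself; a sketch with placeholder
host and ports:

cache_peer peer1.example.com parent 3128 0 carp no-query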

Amos




Re: [squid-users] Re: R: [squid-users] Connection to webmail sitesproblem using more than one parent proxy

2008-11-18 Thread Henrik Nordstrom
On Tue, 2008-11-18 at 12:37 -0900, Chris Robertson wrote:

 Care to expound on this?  What cache_peer selection methods break with 
 ICP enabled?  I'm not seeing anything regarding this in the 
 documentation (http://www.squid-cache.org/Doc/config/cache_peer/)...

None or all, depending on your viewpoint. ICP is one of the peer
selection algorithms and has the highest priority, so if the peer responds
to ICP then the peer selection is done by ICP.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part