Re: [squid-users] How to block teamviewer in squid

2008-10-16 Thread Malte Schröder
On Thu, 16 Oct 2008 09:01:48 +0530
Tharanga [EMAIL PROTECTED] wrote:

 Hi folks,
...
 did anyone succesfully block the team viewer access in squid acl.

I block it by its user-agent string: "DynGate".
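A minimal sketch of such a rule (the ACL name is arbitrary; the 'browser' ACL
type matches against the User-Agent request header, and the deny must come
before your allow rules):

acl teamviewer_agent browser -i DynGate
http_access deny teamviewer_agent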


Re: [squid-users] CARP setup

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 09:42 +0530, Paras Fadte wrote:
 Hi Henrik,
 
 In CARP setup, if one uses same weightage for all the parent caches
 how would the requests be handled ? will the requests be equally
 forwarded to all the parent caches ? if the weightages differ then
 won't all the requests be forwarded to a particular parent cache only
 which has the highest weightage ?

CARP is a hash algorithm. For each given URL there is one CARP parent
that is the designated one.

The weights control how large portion of the URL space is assigned to
each member.
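As an illustration (hostnames are placeholders), two CARP parents where the
second is assigned roughly twice as much of the URL space as the first:

cache_peer parent1.example.com parent 3128 0 carp weight=1
cache_peer parent2.example.com parent 3128 0 carp weight=2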

 Also if I do not use the proxy-only option in the squid which
 forwards the requests to parent caches, won't less number of requests
 be forwarded to parent caches since it will be already cached by squid
 in front of the parent caches?

Correct. And it's completely orthogonal to the use of CARP. As I said
most setups do not want to use proxy-only. proxy-only is only useful in
some very specific setups. These setups MAY be using CARP or some other
peering method, the choice of peering method is unrelated to proxy-only.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] How to block teamviewer in squid

2008-10-16 Thread Avinash Rao
I am using squid in Ubuntu 8.04 and it's already blocked (by default).
I am trying to connect to my Linux server from home through broadband,
but the connection is not going through.

First it said the screen is locked, then I got the screen unlocked,
then it said the password was accepted by the host machine, but the
connection was not established.

How do I use TeamViewer to connect to my server through squid?

On Thu, Oct 16, 2008 at 9:01 AM, Tharanga [EMAIL PROTECTED] wrote:

 Hi folks,

 I need to block TeamViewer (remote access software) on squid. I analysed the
 connection establishment: it goes through port 80 to the TeamViewer server (the IP
 is dynamic).

 TeamViewer client --port 80--> TeamViewer main server (dynamic
 IPs) --port 80--> TeamViewer server

 Did anyone successfully block TeamViewer access with a squid acl?

 Thanks,

 Tharanga Abeyseela



[squid-users] Data transfer limit

2008-10-16 Thread RM
I've tried searching through the archives for data transfer limits but
all I can find is stuff on bandwidth limiting through the use of delay
pools to restrict users to a specific transfer rate.

Here is my situation. I have a Squid server on the internet that users
around the world can connect to but it requires that they know their
own username and password (this is not an open proxy) in order to
connect. So I have this in my squid.conf:

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users

/etc/squid/passwd has a list of usernames and their associated
passwords. How can I limit each user to a specific amount of data
transfer per month, such as 100 GB? I do not want to limit the rate to
anything such as 64 kbps. I want my users to use my full 10 Mbps
connection if they can, but once they reach 100 GB of transferred data,
I want to disable them.

Is this possible?

Thanks


Re: [squid-users] Complicate ACL affect performance?

2008-10-16 Thread Henrik K
On Thu, Oct 16, 2008 at 01:56:59AM +0800, howard chen wrote:
 Hello,
 
 On Wed, Oct 15, 2008 at 10:14 PM, Henrik K [EMAIL PROTECTED] wrote:
  On Wed, Oct 15, 2008 at 03:42:20PM +0200, Henrik Nordstrom wrote:
 
   Any suggestion for having large ACL in a high traffic server?
 
  Avoid using regex based acls.
 
  It's fine if you use Perl + Regexp::Assemble to optimize them. And link
  Squid with PCRE. Sometimes you just need to block more specific URLs.
 
 
 
 What do you mean by link Squid with PCRE?

http://www.pcre.org/

When compiling, add -lpcreposix -lpcre to LDFLAGS. It overrides your
system regex library; it is faster, and I don't see any memory leaks anymore.
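Roughly, something like this when building Squid (illustrative only; the exact
configure options depend on your build):

LDFLAGS="-lpcreposix -lpcre" ./configure [your usual options]
make && make install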

If you want to read long regexps from include file, you need to
patch a bit: http://www.squid-cache.org/bugs/show_bug.cgi?id=2215
(src/cache_cf.c - strtokFile() - change all 256 to 65535)



Re: [squid-users] Complicate ACL affect performance?

2008-10-16 Thread Henrik Nordstrom
On ons, 2008-10-15 at 17:14 +0300, Henrik K wrote:
  Avoid using regex based acls.
 
 It's fine if you use Perl + Regexp::Assemble to optimize them. And link
 Squid with PCRE. Sometimes you just need to block more specific URLs.

No it's not. Even optimized regexes are several orders of magnitude more
expensive to evaluate than the structured acls.

The lookup time of dstdomain is logarithmic in the number of entries.

The lookup time of regex acls is linear in the number of entries.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Disk Space problem in a squid-proxy server

2008-10-16 Thread Angela Williams
Hi!
On Thursday 16 October 2008, [EMAIL PROTECTED] wrote:
 *
 This message has been scanned by IMSS NIT-Silchar

 Dear All Squid Users,

 I have a proxy server, where the df shows the following :

 [EMAIL PROTECTED] /]# df
 Filesystem   1K-blocks  Used Available Use% Mounted on
 /dev/mapper/PrimaryVol-root
   19838052   6484280  12329772  35% /
 /dev/mapper/PrimaryVol-var
   14855176  12631912   1456496  90% /var
 /dev/mapper/PrimaryVol-home
   34756272  29035612   3926612  89% /home
 /dev/mapper/PrimaryVol-tmp
4062912205720   3647480   6% /tmp
 /dev/sda1   101086 22511 73356  24% /boot
 tmpfs   253424 0253424   0% /dev/shm


 On executing the du command under /home it shows the following ( copying
 here a certain portion of the output of the command du):

 9304./squid/00/DA
 8748./squid/00/55
 3364./squid/00/C6
 10204   ./squid/00/E9
 6336./squid/00/62
 4996./squid/00/57
 8264./squid/00/16
 5428./squid/00/0A
 8972./squid/00/31
 6344./squid/00/DD
 6740./squid/00/68
 5168./squid/00/9D
 5004./squid/00/DC
 15868   ./squid/00/E4
 6464./squid/00/07
 8708./squid/00/A1
 4728./squid/00/51
 3588./squid/00/3B
 4468./squid/00/91
 4508./squid/00/D1
 5064./squid/00/D3
 7652./squid/00/BA
 5828./squid/00/B1
 3484./squid/00/88
 .


 My question, is can i delete the above folders to make more free space
 available under /home directory ?

Simple answer? No!

You might want to have a look at your squid.conf file and see how much
disk space is allocated for its cache. Look for a line like this default:
cache_dir ufs /var/cache/squid 100 16 256
The 100 is the cache size in megabytes in this default.

Remember that if you change the size it really means trashing the old cache
and having squid recreate a nice new empty one with squid -z.
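For example (path and size are just placeholders), a 5 GB cache would be:

cache_dir ufs /var/cache/squid 5000 16 256

then, with squid stopped and the old cache directory cleared, recreate the
swap directories with:

squid -z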

Please do not hijack an existing thread as many users with threaded news 
readers will not see it!

Cheers
Ang




-- 
Angela Williams Enterprise Outsourcing
Unix/Linux  Cisco spoken here! Bedfordview
[EMAIL PROTECTED]   Gauteng South Africa

Smile!! Jesus Loves You!!



RE: [squid-users] Data transfer limit

2008-10-16 Thread Alex Huxham
http://www.ledge.co.za/software/squint/squish/

Squish has been around for some time; I use it within an education
environment. It's a bit of a mess to set up, but once running it does the
job perfectly.

-Original Message-
From: RM [mailto:[EMAIL PROTECTED] 
Sent: 16 October 2008 07:57
To: squid-users@squid-cache.org
Subject: [squid-users] Data transfer limit

I've tried searching through the archives for data transfer limits but
all I can find is stuff on bandwidth limiting through the use of delay
pools to restrict users to a specific transfer rate.

Here is my situation. I have a Squid server on the internet that users
around the world can connect to but it requires that they know their
own username and password (this is not an open proxy) in order to
connect. So I have this in my squid.conf:

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users

/etc/squid/passwd has a list of usernames and their associated
passwords. How can I limit each user to a specific amount of data
transfer per month such as 100GB. I do not want to limit the rate to
anything such as 64kbps. I want my users to use my full 10Mbps
connection if they can but once they reach 100GB of transferred data,
I want to disable them.

Is this possible?

Thanks


[squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Anton
Hello!

I was trying for a few hours to make a certain site
(http://www.nix.ru) not cacheable - but squid always
gives me an object which is in the cache!

My steps:

acl DIRECTNIX url_regex ^http://www.nix.ru/$
no_cache deny DIRECTNIX
always_direct allow DIRECTNIX

- But anyway - until I PURGED by the squidclient the 
required page - it was in the STALE state in the log - 
TCP_REFRESH_HIT - and I saw the old data.

Could anyone please give a clue how to make a certain URL
regex non-cacheable, and how to make requests always go
direct to the origin server, even if the object is already
cached?

Regards,
Anton.


Re: [squid-users] Data transfer limit

2008-10-16 Thread Barry Irwin
Morning

To do this you need to implement some kind of quota system.  In essence
you need to rotate your logs (hourly, daily, etc.) and then put a
little script together that adds up the traffic associated with each
user account.  This can then be used to feed an ACL denying access.

The implementation of the ACL can be as simple as a text file, or, using
external ACLs, you can query a DB.
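A very rough sketch of the text-file variant (the file name is hypothetical;
your log-parsing script would rewrite it with one over-quota username per
line):

acl overquota proxy_auth "/etc/squid/overquota_users.txt"
http_access deny overquota
# keep this deny above the 'http_access allow ncsa_users' line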

Barry


RM wrote:
 I've tried searching through the archives for data transfer limits but
 all I can find is stuff on bandwidth limiting through the use of delay
 pools to restrict users to a specific transfer rate.
 
 Here is my situation. I have a Squid server on the internet that users
 around the world can connect to but it requires that they know their
 own username and password (this is not an open proxy) in order to
 connect. So I have this in my squid.conf:
 
 auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
 auth_param basic children 5
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours
 auth_param basic casesensitive off
 
 acl ncsa_users proxy_auth REQUIRED
 http_access allow ncsa_users
 
 /etc/squid/passwd has a list of usernames and their associated
 passwords. How can I limit each user to a specific amount of data
 transfer per month such as 100GB. I do not want to limit the rate to
 anything such as 64kbps. I want my users to use my full 10Mbps
 connection if they can but once they reach 100GB of transferred data,
 I want to disable them.
 
 Is this possible?
 
 Thanks


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Anton
Just realized that I have

reload_into_ims on

this was making me unable to refresh the given page
or site, since the refresh request was changed - but anyway -
it should not affect no_cache?

On Thursday 16 October 2008 14:28, Anton wrote:
 BTW Squid 2.6STABLE20 - TPROXY2

 On Thursday 16 October 2008 13:49, Anton wrote:
  Hello!
 
  was trying for a few hours to have a certain site
  (http://www.nix.ru) to be not cacheable - but squid
  always gives me an object which is in cache!
 
  My steps:
 
  acl DIRECTNIX url_regex ^http://www.nix.ru/$
  no_cache deny DIRECTNIX
  always_direct allow DIRECTNIX
 
  - But anyway - until I PURGED by the squidclient the
  required page - it was in the STALE state in the log -
  TCP_REFRESH_HIT - and I saw the old data.
 
  Could anyone please give a clue how to make a certain
  URL REGEX to be non-cacheable and to make a direct
  request always to a origin server, even if the object
  is already cached?
 
  Regards,
  Anton.


Re: [squid-users] Authentication Issue with Squid and mixed BASIC/NTLM auth

2008-10-16 Thread Amos Jeffries

Chris Natter wrote:

We were having issues with spell-check in 3.0, I haven't tried any of
the development builds to see if it was resolved though in a later
release. 



OWA spell-check just seems to hang when you attempt to spell-check an
email, or gives the try again later prompt. I saw some previous
postings on the archive of the mailing list, but most of them are very
outdated.

I'll have to build an RPM of squid 2.7 and check to see if that solves
both issues.


Ah, now that you mention it I vaguely recall the topic as it flew past a 
while back.


Yes, 2.7 is likely the most dependable to have both combos of fixes you 
need.


Without knowing the cause the spellcheck issue _may_ have been resolved 
in 3.1.  Both of the MS workarounds and 'unknown method' support are now 
present. If you have a spare moment and are inclined to test it please 
let us know the result. If you still hit bad news for 3.1, it's 
definitely a bug that needs looking into at some point.


Amos



Thanks for the help.

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 15, 2008 6:46 PM

To: Chris Natter
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Authentication Issue with Squid and mixed
BASIC/NTLM auth


Hey all,



I've got a tough situation I'm hoping someone can help me with.



We 'downgraded' from an old 3.0PRE build that a predecessor had setup

on a

reverse proxy, to squid 2.6.STABLE20. The proxy runs your standard OWA
over Reverse Proxy setup, with login=PASS to an OWA backend running

with

BASIC/NTLM auth. We have to have the NTLM for phones that sync with
ActiveSync.



It seems like something fundamental has changed in the way squid

handles

auth from 3.0 to squid 2.6. Using firefox on 2.6, I can auth with just
'USERNAME', with IE on 2.6 we have to type DOMAIN\USERNAME or
[EMAIL PROTECTED] now. Previously, with squid 3.0, just 'USERNAME' would

work

for auth.



While this seems trivial, anything harder than just 'USERNAME' boggles

a

lot of users. I'm assuming this has something to do with 'attempting

NTLM'

negotiation? Is there a way around it in squid 2.6?



The cleaner @DOMAIN handling was only added to Squid 2.7+ and 3.0+. You
will need an upgrade again to one of those versions at least.

What caused you to downgrade though? perhapse its been fixed now in 3.1?

Amos



--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] HTTPS traffic in normal transparent proxy

2008-10-16 Thread viveksnv


Thanks Henrik.

I tried with both types for blocking https://gmail.com.

My conf is

acl gmail1 url_regex gmail.com mail.google.com
and
acl gmail dstdomain  gmail.com mail.google.com

http_access deny gmail gmail1

Now https://gmail.com is blocked..

But all other https sites are not working..

Error in browser:

while retrieving the url (it displays an IP address):
protocol error..

In access.log

only one https request goes..

GET https://gmail.com

Regards
Vivek

On ons, 2008-10-15 at 10:23 -0400, [EMAIL PROTECTED] wrote:

My configuration is...

http_port 0.0.0.0:3128 transparent

https_port 0.0.0.0:3129 transparent
cert=/usr/local/squid-test/CA/servercert.pem
key=/usr/local/squid-test/CA/serverkey.pem

Iptable rules are:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
--to-port 3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT
--to-port 3129

In cache.log

Accepting transparently proxied HTTP connections at 0.0.0.0, port 3128, FD 12.
Accepting HTTPS connections at 0.0.0.0, port 3129, FD 13

In access.log while accessing https://gmail.com

TCP_MISS/200 2213 CONNECT gmail.com:443


This is not a transparently intercepted https request. This browser is
configured to use the proxy.

The https_port method will only work for transparently intercepted
requests, not when the browser is configured to use the proxy.

For this to work when the browser is configured to use the proxy you
need the sslbump feature available in the upcoming 3.1 release.


But problem is now gmail not blocked...

In http://gmail.com requests...it's blocked..


CONNECT requests are subject to the same http_access rules as http
access. If GET http://gmail.com is blocked but CONNECT gmail.com:443 is
not, then check your access rules. A guess without seeing your ruleset is
that you are using url_regex instead of dstdomain type acls..
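For example, something along these lines (covering both the GET and CONNECT
cases for the domains you mentioned):

acl gmail dstdomain .gmail.com .mail.google.com
http_access deny gmail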

Regards
Henrik










Re: [squid-users] Using Squid as a reverse-proxy to SSL origin?

2008-10-16 Thread Henrik Nordstrom
On ons, 2008-10-15 at 16:42 -0400, Todd Lainhart wrote:
 I've looked in the archives, site, and Squid book, but I can't find
 the answer to what I'm looking to do.  I suspect that it's not
 supported.

It is.

 My origin server accepts Basic auth over SSL (non-negotiable).  I'd
 like to stick a reverse proxy/surrogate in front of it for
 caching/acceleration, and have it accept non-SSL connections w/ Basic
 auth, directing those requests as https to the origin.  The origin's
 responses will be cached, to be used in subsequent GETs to the proxy.
 Both machines are in a closed IP environment.  Both use the same
 authentication mechanism.

The basic setup is a plain reverse proxy.
http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7fa129a6528d9a5c914f8dd5671668173e39e341

As the backend runs https you need to adjust the cache_peer line a bit
to enable ssl (port 443, and the ssl option).

When authentication is used you also need to tell Squid to trust the web
server with auth credentials

http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-c59962b21bb8e2a437beb149bcce3190ee1c03fd
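Putting those two pieces together, the relevant lines might look roughly like
this (hostnames are placeholders):

http_port 80 accel defaultsite=www.example.com
cache_peer origin.example.com parent 443 0 no-query originserver ssl login=PASS name=origin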

 I see that Squid 3.0 has an ssl-bump option, but I don't think that
 does what I described.  If it does, that's cool - I can change the
 requirement of the proxy to accept Basic/SSL.

sslbump is a different thing. Not needed for what you describe.


But you may need to use https:// to the reverse proxy as well. This is
done by using https_port instead of http_port (and requires a suitable
certificate). 
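A rough sketch of that variant (site name and certificate paths are
placeholders):

https_port 443 accel defaultsite=www.example.com cert=/etc/squid/cert.pem key=/etc/squid/key.pem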

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] How to block teamviewer in squid

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 09:01 +0530, Tharanga wrote:
 I need to block TeamViewer (remote access software) on squid. I analysed the
 connection establishment: it goes through port 80 to the TeamViewer server (the IP
 is dynamic).
 
 TeamViewer client --port 80--> TeamViewer main server (dynamic
 IPs) --port 80--> TeamViewer server
 
 Did anyone successfully block TeamViewer access with a squid acl?

What does access.log say with log_mime_hdrs on?

Regards
Henrik



signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 13:49 +0500, Anton wrote:
 Hello!
 
 was trying for a few hours to have a certain site 
 (http://www.nix.ru) to be not cacheable - but squid always 
 gives me an object which is in cache!
 
 My steps:
 
 acl DIRECTNIX url_regex ^http://www.nix.ru/$
 no_cache deny DIRECTNIX
 always_direct allow DIRECTNIX

This only matches the exact URL of root page of the server, not any
other objects on that web server (including any inlined objects or
stylesheets).

What you probably want is:

acl DIRECTNIX dstdomain www.nix.ru
no_cache deny DIRECTNIX

which matches the whole site.

always_direct is unrelated to caching. But if you want Squid to bypass
any peers you may have (cache_peer) then it's the right directive.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 14:34 +0500, Anton wrote:
 Just realized that i have 
 
 reload_into_ims on
 
 this was making me to be not able to refresh the given page 
 or site, since refresh request was changed - but anyway - 
 it should not affect no_cache? 

It doesn't.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Leonardo Rodrigues Magalhães



Anton escreveu:

Hello!

was trying for a few hours to have a certain site 
(http://www.nix.ru) to be not cacheable - but squid always 
gives me an object which is in cache!


My steps:

acl DIRECTNIX url_regex ^http://www.nix.ru/$
no_cache deny DIRECTNIX
always_direct allow DIRECTNIX
  


   your ACL is too complicated for a pretty simple thing ... it has the
'begins with' anchor (^) and the 'ends with' anchor ($) as well. And it
has a final slash too. So, it seems that it would match exclusively

http://www.nix.ru/

   and nothing else ... including NOT matching
'http://www.nix.ru/index.htm', 'http://www.nix.ru/logo.jpg' and so on.

   if you want hints on doing regexps, I would give you a precious
hint: don't try to complicate things.

acl DIRECTNIX url_regex -i www\.nix\.ru

   would do the job and would be much simpler to understand. And NEVER
forget the case-insensitive (-i) flag on regexes.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

My SPAM trap, do not email it
[EMAIL PROTECTED]






[squid-users] Disabling error pages

2008-10-16 Thread Robert Morrison

Hi,

I've found lots of references online (in this list's archives, other sites and 
the FAQ) to customising error pages in squid, but haven't yet found reference 
to removing error pages completely.

My squid box is running transparently. In the case of any errors I'd like it to 
simply return no content to users, so it is not so obvious that their access is 
being proxied.

Is this possible without editing source code? I think I saw reference to 
setting font color in error messages to the same as background, but I'd prefer 
something a little less hackish ;)

Thanks

R


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Anton
Thanks so much Henrik and Leonardo!
Looks like I should learn regexes, since I took $ as
meaning 'whatever comes after' rather than end of string :)
Now it logs as TCP_MISS.
Thanks so much again!

On Thursday 16 October 2008 15:45, Leonardo Rodrigues 
Magalhães wrote:
 Anton escreveu:
  Hello!
 
  was trying for a few hours to have a certain site
  (http://www.nix.ru) to be not cacheable - but squid
  always gives me an object which is in cache!
 
  My steps:
 
  acl DIRECTNIX url_regex ^http://www.nix.ru/$
  no_cache deny DIRECTNIX
  always_direct allow DIRECTNIX

 your ACL is too complicated for a pretty simple thing
 ... it has the 'begin with' flag (^) and has the 'end
 with' ($) flag as well. And it has a final slash too. So,
 it seems that would match exclusively

 http://www.nix.ru/

 and nothing else . including NOT matching
 'http://www.nix.ru/index.htm',
 'http://www.nix.ru/logo.jpg' and so on.

 if you wanna hints on doing regexps, i would give you
 a precious hint: don't try to complicate things.

 acl DIRECTNIX url_regex -i www\.nix\.ru

 would do the job and would be much simplier to
 understand. And NEVER forget the case inconditional (-i)
 flag on regex 


Re: [squid-users] Complicate ACL affect performance?

2008-10-16 Thread Henrik K
On Thu, Oct 16, 2008 at 10:10:23AM +0200, Henrik Nordstrom wrote:
 On ons, 2008-10-15 at 17:14 +0300, Henrik K wrote:
   Avoid using regex based acls.
  
  It's fine if you use Perl + Regexp::Assemble to optimize them. And link
  Squid with PCRE. Sometimes you just need to block more specific URLs.
 
 No it's not. Even optimized regexes is several orders of magnitude more
 complex to evaluate than the structured acls.
 
 The lookup time of dstdomain is logaritmic to the number of entries.
 
 The lookup time of regex acls is linear to the number of entries.

It's fine that you advocate avoiding regex, but a much better way is to
actually tell people what's wrong and how to use them efficiently if needed.

Of course you shouldn't have a separate regex for every URL. I suggest you
look at what Regexp::Assemble does.

Optimizing 1000 x www.foo.bar/randomstuff into a _single_
www.foobar.com/(r(egex|and(om)?)|fuba[rz]) regex is nowhere near linear.
Even if it's all random servers, there are only ~30 characters from which
branches are created.



[squid-users] Re-distributing the cache between multiple servers

2008-10-16 Thread James Cohen
Hi,

I have two reverse proxy servers using each other as neighbours. The
proxy servers are load balanced (using a least connections
algorithm) by a Netscaler upstream of them.

A small amount of URLs account for around 50% or so of the requests.

At the moment there's some imbalance in the hit rates on the two
caches because I brought up server A before server B and it's holding
the majority of the objects which make that 50% of request traffic.

I can see that clearing/expiring both caches should result in an equal
hit rate between the two servers.

Is this the only way of achieving this? I'm concerned now that if I
was to add a third server C into the cache pool it'd have an even
lower hit rate than on A or B.

I spent some time searching but wasn't able to find Squid
administration for dummies ;)

Thanks,

James


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Anton
BTW Squid 2.6STABLE20 - TPROXY2

On Thursday 16 October 2008 13:49, Anton wrote:
 Hello!

 was trying for a few hours to have a certain site
 (http://www.nix.ru) to be not cacheable - but squid
 always gives me an object which is in cache!

 My steps:

 acl DIRECTNIX url_regex ^http://www.nix.ru/$
 no_cache deny DIRECTNIX
 always_direct allow DIRECTNIX

 - But anyway - until I PURGED by the squidclient the
 required page - it was in the STALE state in the log -
 TCP_REFRESH_HIT - and I saw the old data.

 Could anyone please give a clue how to make a certain URL
 REGEX to be non-cacheable and to make a direct request
 always to a origin server, even if the object is already
 cached?

 Regards,
 Anton.


Re: [squid-users] Squid 3 HTTP accelerator not caching content

2008-10-16 Thread Tom Williams

Henrik Nordstrom wrote:

On tis, 2008-10-14 at 09:04 -0700, Tom Williams wrote:
  

Is authentication required to access the server? If so then the server
need to return Cache-Control: public on the content which is
non-private and should be cached.

Keep in mind that such content will be accessible directly from the
cache without using authentication.
  
  
Currently, basic authentication is required to access the site.  I 
wasn't aware of Cache-Control: public so I'll see about configuring 
the server to return it.  This is an HTTP header, correct?



It is.

Without it authenticated content is considered private, and not cached
by shared caches.

With public the content is considered public for all to access, not
really requiring authentication even if the request did include
authentication.
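As an illustration, such a response header could look like this (the max-age
value is arbitrary):

Cache-Control: public, max-age=3600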

Regards
Henrik
  


Ok, I was able to conduct a test using a public page (authentication is 
NOT required) and I started seeing SWAPOUT and RELEASE entries in 
store.log.  So, this confirms it was the authentication that was causing 
my problem.


I haven't made the HTTP header change yet but that will happen at some 
point. :)


Thanks!

Peace...

Tom


Re: [squid-users] Using Squid as a reverse-proxy to SSL origin?

2008-10-16 Thread Todd Lainhart
Thank you, Amos and Henrik.  I'll be testing this in 2.7/Stable 4 - I
assume that's OK (no significant fixes in 3.0 in this area that I
should take advantage of)?

Could I do the same thing with SSL to the reverse proxy?  That is, the
reverse proxy is the endpoint for the client, gets the creds, becomes
the endpoint for the server, decrypts and caches the origin response,
and then serves cached content encrypted back to the client?  I would
guess this falls into man-in-the-middle style ugliness, is
out-of-bounds for SSL and so wouldn't be supported.  But then again I
was wrong about my original use-case not being supported :-) .

  -- Todd

On Thu, Oct 16, 2008 at 6:15 AM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On ons, 2008-10-15 at 16:42 -0400, Todd Lainhart wrote:
 I've looked in the archives, site, and Squid book, but I can't find
 the answer to what I'm looking to do.  I suspect that it's not
 supported.

 It is.

 My origin server accepts Basic auth over SSL (non-negotiable).  I'd
 like to stick a reverse proxy/surrogate in front of it for
 caching/acceleration, and have it accept non-SSL connections w/ Basic
 auth, directing those requests as https to the origin.  The origin's
 responses will be cached, to be used in subsequent GETs to the proxy.
 Both machines are in a closed IP environment.  Both use the same
 authentication mechanism.

 The basic setup is a plain reverse proxy.
 http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7fa129a6528d9a5c914f8dd5671668173e39e341

 As the backend runs https you need to adjust the cache_peer line a bit
 to enable ssl (port 443, and the ssl option).

 When authentication is used you also need to tell Squid to trust the web
 server with auth credentials

 http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-c59962b21bb8e2a437beb149bcce3190ee1c03fd

 I see that Squid 3.0 has an ssl-bump option, but I don't think that
 does what I described.  If it does, that's cool - I can change the
 requirement of the proxy to accept Basic/SSL.

 sslbump is a different thing. Not needed for what you describe.


 But you may need to use https:// to the reverse proxy as well. This is
 done by using https_port instead of http_port (and requires a suitable
 certificate).

 Regards
 Henrik



Re: [squid-users] Squid 3 HTTP accelerator not caching content

2008-10-16 Thread Tom Williams

Amos Jeffries wrote:

Tom Williams wrote:

Amos Jeffries wrote:

So, I setup my first Squid 3.0STABLE9 proxy in HTTP accelerator mode
over the weekend.  Squid 3 is running on the same machine as the web
server and here are my HTTP acceleration related config options:

http_port 80 accel vhost
cache_peer 192.168.1.19 parent 8085 0 no-query originserver login=PASS


Here are the cache related options:

cache_mem 64 MB
maximum_object_size_in_memory 50 KB
cache_replacement_policy heap LFUDA
cache_dir aufs /mnt/drive3/squid-cache 500 32 256

As described in this mailing list thread:

http://www2.gr.squid-cache.org/mail-archive/squid-users/199906/0756.html 



all of the entries in my store.log have RELEASE as the action:

1223864638.986 RELEASE -1 A1FE29E96A44936155BB873BDC882B12  200 1223864638 -1 375007920 text/html 2197/2197 GET http://aaa.bbb.ccc.ddd/locations/

Here is a snipet from the cache.log file:

2008/10/12 21:23:36| Done reading /mnt/drive3/squid-cache swaplog (0
entries)
2008/10/12 21:23:36| Finished rebuilding storage from disk.
2008/10/12 21:23:36| 0 Entries scanned
2008/10/12 21:23:36| 0 Invalid entries.
2008/10/12 21:23:36| 0 With invalid flags.
2008/10/12 21:23:36| 0 Objects loaded.
2008/10/12 21:23:36| 0 Objects expired.
2008/10/12 21:23:36| 0 Objects cancelled.
2008/10/12 21:23:36| 0 Duplicate URLs purged.
2008/10/12 21:23:36| 0 Swapfile clashes avoided.
2008/10/12 21:23:36|   Took 0.01 seconds (  0.00 objects/sec).
2008/10/12 21:23:36| Beginning Validation Procedure
2008/10/12 21:23:36|   Completed Validation Procedure
2008/10/12 21:23:36|   Validated 25 Entries
2008/10/12 21:23:36|   store_swap_size = 0
2008/10/12 21:23:37| storeLateRelease: released 0 objects
2008/10/12 21:24:07| Preparing for shutdown after 2 requests
2008/10/12 21:24:07| Waiting 30 seconds for active connections to 
finish

2008/10/12 21:24:07| FD 14 Closing HTTP connection
2008/10/12 21:24:38| Shutting down...
2008/10/12 21:24:38| FD 15 Closing ICP connection
2008/10/12 21:24:38| aioSync: flushing pending I/O operations
2008/10/12 21:24:38| aioSync: done
2008/10/12 21:24:38| Closing unlinkd pipe on FD 12
2008/10/12 21:24:38| storeDirWriteCleanLogs: Starting...
2008/10/12 21:24:38|   Finished.  Wrote 0 entries.
2008/10/12 21:24:38|   Took 0.00 seconds (  0.00 entries/sec).
CPU Usage: 0.041 seconds = 0.031 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:3644 KB
Ordinary blocks: 3511 KB  8 blks
Small blocks:   0 KB  1 blks
Holding blocks:  1784 KB  9 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 132 KB
Total in use:5295 KB 145%
Total free:   132 KB 4%
2008/10/12 21:24:38| aioSync: flushing pending I/O operations
2008/10/12 21:24:38| aioSync: done
2008/10/12 21:24:38| aioSync: flushing pending I/O operations
2008/10/12 21:24:38| aioSync: done
2008/10/12 21:24:38| Squid Cache (Version 3.0.STABLE9): Exiting 
normally.


I'm running on RedHat EL 5.   With Squid running, I can access the
website just fine and pages load without problems or issues.  It's 
just

nothing is being cached.

This is my first time configuring Squid as a HTTP accelerator so I
probably missed something when I set it up.  Any ideas on what 
might be

wrong?

Thanks in advance for your time and assistance!  :)




Q)  Do you have any of the routing access controls (http_access,
never_direct, cache_peer_access, cache_peer_domain) which make squid 
pass

the accelerated requests back to the web server properly?
  


I have the default http_access options except I have http_access 
allow all at the end of them:


Ouch. You have a semi-open proxy.
If anyone identifies your public IP they can point a domain's DNS at 
your IP and have it accelerated, or even configure your IP and port 80 as 
their proxy and browse through it. A firewall or NAT layer cannot prevent 
this happening.


You should at the very least be limiting requests to the domains you 
are serving.
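A minimal sketch of that restriction (the domain is a placeholder), placed
ahead of any broad allow rules:

acl our_sites dstdomain www.example.com
http_access allow our_sites
http_access deny all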


I prefer a config like the one listed: 
http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7fa129a6528d9a5c914f8dd5671668173e39e341 



Thanks for this information.  I knew my configuration wasn't secure but 
at this point, I don't leave the proxy running except for when I'm 
working on it.  I'll review the config above and will use it as my test 
config from here on out.  :)


Thanks!

Peace...

Tom


[squid-users] squidnt.com, warning

2008-10-16 Thread Mr Lyphifco
It seems that the site http://squidnt.com/ is trying to masquerade as an
official website for Mr Serassio's Windows port of Squid. It doesn't
explicitly state this, but the wording of the site contents strongly implies
such a thing.

Also it was entered into a new Wikipedia article on SquidNT as the homepage:

  http://en.wikipedia.org/w/index.php?title=SquidNT&action=history

I suspect blog-spam of some sort.



[squid-users] recovering an object from the cache -- trimming off the squid header

2008-10-16 Thread lartc
hi all,

i've googled, but have been unable to find a simple sed command, or
otherwise to recover an object sitting in the web cache.

i know the filename(s) in the cache, however, there's a squid header
on top of a binary file(s), and I don't know how to recover just the
binary portion.

original files were accidentally deleted, so these are the only
remaining copy -- i've backed them up, but can't figure out how to trim
off the header properly and just get the original file ...

hope you can help

thanks 

charles





[squid-users] Squid and WCCP hardware placement

2008-10-16 Thread Johnson, S

I'm working on getting this working but I'm unclear on the hardware placement 
for each of the devices.

Is it:

A)
Workstation --- Cisco --- Squid --- internet
               (WCCP)     (NAT)

B)
Workstation --- Cisco (WCCP)
                  |
                Squid --- internet
                 (NAT)

C)
Workstation --- Cisco --- Internet
                  | (WCCP)
                Squid

D) or???

Thanks a bunch.



RE: [squid-users] Authentication Issue with Squid and mixed BASIC/NTLM auth

2008-10-16 Thread Chris Natter
Hmmm, strange. I tested 2.7STABLE4, but it doesn't seem to be stripping
the DOMAIN, it will still accept only DOMAIN\USERNAME. Perhaps I'm
missing something?

I also tested squid-3.1-20081016, built with a spec file adopted from a
squid3.0STABLE7 Redhat package:

configure \
   --exec_prefix=/usr \
   --bindir=%{_sbindir} \
   --libexecdir=%{_libdir}/squid \
   --localstatedir=/var \
   --datadir=%{_datadir} \
   --sysconfdir=/etc/squid \
   --disable-dependency-tracking \
   --enable-arp-acl \
   --enable-auth=basic,digest,ntlm,negotiate \
   --enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL \
   --enable-cache-digests \
   --enable-cachemgr-hostname=localhost \
   --enable-delay-pools \
   --enable-digest-auth-helpers=password \
   --enable-epoll \
   --enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group \
   --enable-icap-client \
   --enable-ident-lookups \
   --enable-linux-netfilter \
   --enable-ntlm-auth-helpers=SMB,fakeauth \
   --enable-referer-log \
   --enable-removal-policies=heap,lru \
   --enable-snmp \
   --enable-ssl \
   --enable-storeio=aufs,coss,diskd,,ufs \
   --enable-useragent-log \
   --enable-wccpv2 \
   --with-default-user=squid \
   --with-filedescriptors=16384 \
   --with-dl \
   --with-openssl=/usr/kerberos \
   --with-pthreads

And it looks like NTLM could be broken (I don't want to make
assumptions). I was unable to pass credentials in either the
DOMAIN\USERNAME or USERNAME format to OWA through squid. It also forced
an NTLM prompt for Firefox that I had to escape out of before I could
authenticate with BASIC auth.

I wasn't able to test spell-check as I couldn't authenticate to the OWA
server. 

Thanks!
-Chris
-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 16, 2008 5:37 AM
To: Chris Natter
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Authentication Issue with Squid and mixed
BASIC/NTLM auth

Chris Natter wrote:
 We were having issues with spell-check in 3.0, I haven't tried any of
 the development builds to see if it was resolved though in a later
 release. 
 
 OWA spell-check just seems to hang when you attempt to spell-check an
 email, or gives the try again later prompt. I saw some previous
 postings on the archive of the mailing list, but most of them are very
 outdated.
 
 I'll have to build an RPM of squid 2.7 and check to see if that solves
 both issues.

Ah, now that you mention it I vaguely recall the topic as it flew past a

while back.

Yes, 2.7 is likely the most dependable to have both combos of fixes you 
need.

Without knowing the cause the spellcheck issue _may_ have been resolved 
in 3.1.  Both of the MS workarounds and 'unknown method' support are now

present. If you have a spare moment and are inclined to test it please 
let us know the result. If you still hit bad news for 3.1, its 
definitely a bug that needs looking into at some point.

Amos

 
 Thanks for the help.
 
 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, October 15, 2008 6:46 PM
 To: Chris Natter
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Authentication Issue with Squid and mixed
 BASIC/NTLM auth
 
 Hey all,



 I've got a tough situation I'm hoping someone can help me with.



 We 'downgraded' from an old 3.0PRE build that a predecessor had setup
 on a
 reverse proxy, to squid 2.6.STABLE20. The proxy runs your standard
OWA
 over Reverse Proxy setup, with login=PASS to an OWA backend running
 with
 BASIC/NTLM auth. We have to have the NTLM for phones that sync with
 ActiveSync.



 It seems like something fundamental has changed in the way squid
 handles
 auth from 3.0 to squid 2.6. Using firefox on 2.6, I can auth with
just
 'USERNAME', with IE on 2.6 we have to type DOMAIN\USERNAME or
 [EMAIL PROTECTED] now. Previously, with squid 3.0, just 'USERNAME' would
 work
 for auth.



 While this seems trivial, anything harder than just 'USERNAME'
boggles
 a
 lot of users. I'm assuming this has something to do with 'attempting
 NTLM'
 negotiation? Is there a way around it in squid 2.6?

 
 The cleaner @DOMAIN handling was only added to Squid 2.7+ and 3.0+.
You
 will need an upgrade again to one of those versions at least.
 
 What caused you to downgrade though? perhapse its been fixed now in
3.1?
 
 Amos


-- 
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Squid and WCCP hardware placement

2008-10-16 Thread lartc
Hi,

No reason (unless there's something I don't get) to use NAT or WCCP at
the workstation level. WCCP should be configured at the Cisco box (option C
only) such that it forwards requests to the web through the squid box.
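On the Squid side (2.6 or later) the WCCPv2 part of squid.conf might look
roughly like this (the router address is a placeholder, and the Cisco box
needs a matching 'ip wccp web-cache' configuration):

wccp2_router 192.168.1.1
wccp2_forwarding_method 1   # 1 = GRE encapsulation
wccp2_return_method 1
wccp2_service standard 0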

cheers

charles

On Thu, 2008-10-16 at 12:56 -0500, Johnson, S wrote:
 I'm working on getting this working but I'm unclear on the hardware placement 
 for each of the devices.
 
 Is it:
 
 A)
 Workstation-Cisco-Squid--internet
 (WCCP)(NAT)
 
 B)
 Workstation-Cisco (WCCP)
 |
Squid---internet
 (NAT)
 
 C)
 Workstation-Cisco-Internet
 |(WCCP)
Squid
 
 D) or???
 
 Thanks a bunch.




Re: [squid-users] Disabling error pages

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 13:02 +0100, Robert Morrison wrote:

 I've found lots of references online (in this list's archives, other
 sites and the FAQ) to customising error pages in squid, but haven't
 yet found reference to removing error pages completely.

You can't. Once the request has reached the proxy the proxy must respond
with something. If it fails retrieving the requested object the polite
thing is to respond with an error message explaining what happened and
what the user can do to fix the problem.

If you do not want to be polite to the users then you MAY change the
error pages to just a blank page with no visible content, but there
still needs to be some kind of response.

 Is this possible without editing source code? I think I saw reference
 to setting font color in error messages to the same as background, but
 I'd prefer something a little less hackish ;)

Yes. Just replace the error pages with a file containing just the
following line:

<!-- %s -->

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Re-distributing the cache between multiple servers

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 14:39 +0100, James Cohen wrote:
 I have two reverse proxy servers using each other as neighbours. The
 proxy servers are load balanced (using a least connections
 algorithm) by a Netscaler upstream of them.

Ok.

 A small amount of URLs account for around 50% or so of the requests.

Ok.

 At the moment there's some imbalance in the hit rates on the two
 caches because I brought up server A before server B and it's holding
 the majority of the objects which make that 50% of request traffic.

This should even out very quickly, unless you are using proxy-only in
the peering relation..

If you are using proxy-only then it will take longer time as it then
takes much longer for the active content to get replicated on the
servers.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid and WCCP hardware placement

2008-10-16 Thread Rhino

B.
cheers
-Ryan


Johnson, S wrote:

I'm working on getting this working but I'm unclear on the hardware placement 
for each of the devices.

Is it:

A)
Workstation-Cisco-Squid--internet
(WCCP)(NAT)

B)
Workstation-Cisco (WCCP)
|
   Squid---internet
(NAT)

C)
Workstation-Cisco-Internet
|(WCCP)
   Squid

D) or???

Thanks a bunch.




Re: [squid-users] recovering an object from the cache -- trimming off the squid header

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 19:06 +0200, lartc wrote:
 hi all,
 
 i've googled, but have been unable to find a simple sed command, or
 otherwise to recover an object sitting in the web cache.
 
 i know the filename(s) in the cache, however, there's a squid header
 on top of a binary file(s), and I don't know how to recover just the
 binary portion.

See the purge tool. It knows how to do this.

Found in the related software section.

Regards
Henrik



Re: [squid-users] squidnt.com, warning

2008-10-16 Thread Guido Serassio

Hi,

At 18.01 16/10/2008, Mr Lyphifco wrote:

It seems that the site http://squidnt.com/ is trying to masquerade as an
official website for Mr Serassio's Windows port of Squid. It doesn't
explicitly state this, but the wording of the site contents strongly implies
such a thing.

Also it was entered into a new Wikipedia article on SquidNT as the homepage:

  http://en.wikipedia.org/w/index.php?title=SquidNT&action=history

I suspect blog-spam of some sort.


Thanks for your report.

The squidnt.com site seems deliberately incomplete.

SquidNT was the name of the Windows port project of Squid 2.5.
Starting from Squid 2.6 STABLE4, Windows is an official Squid 2
platform, and the official sources can be compiled on Windows without
changes. So SquidNT is the name of a completed project.


I think that the Wikipedia page and the Squid FAQ page should be more
accurate about this.
So I have just updated the Wiki page:
http://wiki.squid-cache.org/SquidFaq/AboutSquid#head-500ddc367517c94cdf5cc49cb26868ab64becf63

Could you please update the Wikipedia page again?

Thanks

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



Re: [squid-users] squidnt.com, warning

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 17:01 +0100, Mr Lyphifco wrote:
 It seems that the site http://squidnt.com/ is trying to masquerade as
 an
 official website for Mr Serassio's Windows port of Squid. It doesn't
 explicitly state this, but the wording of the site contents strongly
 implies
 such a thing.
 
 Also it was entered into a new Wikipedia article on SquidNT as the
 homepage:
 
   http://en.wikipedia.org/w/index.php?title=SquidNT&action=history
 
 I suspect blog-spam of some sort.

I would agree. The site is completely anonymous about who is behind the
content, and I have never heard of the name registered as owner
of the domain (additionally the domain owner is registered with a UK
address but a US phone number.. which is a bit odd imho).

But I do suspect the Wikipedia user who created the Wikipedia article is
the same person. The Wikipedia article was created before the first blog
post (Wikipedia article created 19 July, first blog post is from 26
July).

I have added a warning comment on their download page.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] wbinfo_group.pl ?? return a error cannot run ..

2008-10-16 Thread Phibee Network Operation Center

Hi

We have a problem with our new squid server:
when we add wbinfo_group.pl, it can't start it:


2008/10/14 06:07:39| Starting Squid Cache version 3.0.STABLE7 for 
i386-redhat-linux-gnu...

2008/10/14 06:07:39| Process ID 26104
2008/10/14 06:07:39| With 1024 file descriptors available
2008/10/14 06:07:39| DNS Socket created at 0.0.0.0, port 53027, FD 7
2008/10/14 06:07:39| Adding domain proxy.phibee.net from /etc/resolv.conf
2008/10/14 06:07:39| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2008/10/14 06:07:39| helperStatefulOpenServers: Starting 15 'ntlm_auth' 
processes

2008/10/14 06:07:39| helperOpenServers: Starting 15 'ntlm_auth' processes
2008/10/14 06:07:39| helperOpenServers: Starting 5 'wbinfo_group.pl' 
processes
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.

2008/10/14 06:07:39| User-Agent logging is disabled.
2008/10/14 06:07:39| Referer logging is disabled.
2008/10/14 06:07:39| Unlinkd pipe opened on FD 42
2008/10/14 06:07:39| Local cache digest enabled; rebuild/rewrite every 
3600/3600 sec

2008/10/14 06:07:39| Swap maxSize 512 KB, estimated 393846 objects
2008/10/14 06:07:39| Target number of buckets: 19692
2008/10/14 06:07:39| Using 32768 Store buckets
2008/10/14 06:07:39| Max Mem  size: 16384 KB
2008/10/14 06:07:39| Max Swap size: 512 KB
2008/10/14 06:07:39| Version 1 of swap file with LFS support detected...
2008/10/14 06:07:39| Rebuilding storage in /var/spool/squid (CLEAN)
2008/10/14 06:07:39| Using Least Load store dir selection
2008/10/14 06:07:39| Current Directory is /etc
2008/10/14 06:07:39| Loaded Icons.
2008/10/14 06:07:39| Accepting  HTTP connections at 0.0.0.0, port 8080, 
FD 44.

2008/10/14 06:07:39| Accepting ICP messages at 0.0.0.0, port 3130, FD 45.
2008/10/14 06:07:39| HTCP Disabled.
2008/10/14 06:07:39| Ready to serve requests.
2008/10/14 06:07:39| Done reading /var/spool/squid swaplog (1 entries)
2008/10/14 06:07:39| Finished rebuilding storage from disk.
2008/10/14 06:07:39| 1 Entries scanned
2008/10/14 06:07:39| 0 Invalid entries.
2008/10/14 06:07:39| 0 With invalid flags.
2008/10/14 06:07:39| 1 Objects loaded.
2008/10/14 06:07:39| 0 Objects expired.
2008/10/14 06:07:39| 0 Objects cancelled.
2008/10/14 06:07:39| 0 Duplicate URLs purged.
2008/10/14 06:07:39| 0 Swapfile clashes avoided.
2008/10/14 06:07:39|   Took 0.02 seconds ( 48.57 objects/sec).
2008/10/14 06:07:39| Beginning Validation Procedure
2008/10/14 06:07:39|   Completed Validation Procedure
2008/10/14 06:07:39|   Validated 27 Entries
2008/10/14 06:07:39|   store_swap_size = 12
2008/10/14 06:07:40| storeLateRelease: released 0 objects
2008/10/14 06:08:05| externalAclLookup: 'AD_Group' queue overload 
(ch=0xb9b05bd0)
2008/10/14 06:08:05| externalAclLookup: 'AD_Group' queue overload 
(ch=0xb9b05bd0)


and now we have a:

2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.


If I run wbinfo_group.pl manually, it works fine ... I run on Fedora 9
and my conf is:

external_acl_type AD_Group %LOGIN /usr/lib/squid/wbinfo_group.pl

The ownership of /usr/lib/squid/wbinfo_group.pl is squid.squid and it is
executable by all.

Anyone know where the problem is and how I can resolve it?

thanks for your help
jerome






[squid-users] newbie: configuring squid to always check w/origin server

2008-10-16 Thread dukehoops

Hi,

I am a squid (and http 1.1 headers to be honest) newbie who'd really
appreciate help w/squid config and header attributes on the following:

I have a server serving images that change dynamically (same URL invoked at
different times may return different images). I would like the following
behavior:
1. browser sends image request to squid

2. on receiving request from browser, squid always validates with origin
server using an Etag (or simply asks for image if no Etag exists)

3. origin server compares Etag received from squid and either responds with
either
a) yes, you're up to date and sends no image data, or 
b) sends image date with new Etag (to be used by squid during subsequent
requests in #2 above).

4. if 3a) then squid returns to browser image date from own cache.
if 3b) squid updates own cache with what origin server returned and returns
image to browser

FWIW, origin servers are several in a cluster behind a virtual domain

Possibly naive (and hopefully easy) questions:
1. With what headers should the origin server respond in 3a) and 3b)? In
latter case, it seems like something like Cache-Control: must-revalidate,
not sure whether to use s-maxage=0 and/or maxage=0
2. What params should be used in squid config?

thanks a lot!
-nikita


-

Nikita Tovstoles
vside.com





Re: [squid-users] wbinfo_group.pl ?? return a error cannot run ..

2008-10-16 Thread Henrik Nordstrom

On tor, 2008-10-16 at 22:26 +0200, Phibee Network Operation Center
wrote:
 Hi
 
 We have a problems with our new squid server,
 when we want add wbinfo_group.pl, he can't start it :
 

 2008/10/14 06:07:39| WARNING: Cannot run 
 '/usr/lib/squid/wbinfo_group.pl' process.

Is wbinfo_group.pl executable from a shell running as your
cache_effective_user? (not the same as testing from the root account..)
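For example, something like this (assuming the effective user is 'squid'):

su -s /bin/sh -c '/usr/lib/squid/wbinfo_group.pl' squid

then type a 'username groupname' line and see whether it answers OK or ERR
rather than failing outright.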

Do you have SELINUX enabled? Check your system logs in case it's SELINUX
denying Squid from running wbinfo_group.pl.

Regards
Henrik



signature.asc
Description: This is a digitally signed message part


Re: [squid-users] newbie: configuring squid to always check w/origin server

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 16:12 -0700, dukehoops wrote:
 1. With what headers should the origin server respond in 3a) and 3b)? In
 latter case, it seems like something like Cache-Control: must-revalidate,
 not sure whether to use s-maxage=0 and/or maxage=0

You probably do not need or want must-revalidate, it's a quite hard
directive. max-age is sufficient I think.

You only need must-revalidate (in addition to max-age) if it's
absolutely forbidden to use the last known version when/if revalidation
fails to contact the web server for some reason.

You only need s-maxage if you want to assign different cache criteria
to shared caches such as Squid and browsers, for example enabling
browsers to cache the image longer than Squid.
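As an illustration, a 3b)-style response might carry something like the
following (values are placeholders), so that Squid revalidates with
If-None-Match on each request and the origin can answer 304 for case 3a):

Cache-Control: max-age=0
ETag: "abc123"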

 2. What params should be used in squid config?

Preferably nothing specific for this since you have control over the web
server..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] wbinfo_group.pl ?? return a error cannot run ..

2008-10-16 Thread Chris Robertson

Phibee Network Operation Center wrote:

Hi

We have a problems with our new squid server,
when we want add wbinfo_group.pl, he can't start it :


2008/10/14 06:07:39| Starting Squid Cache version 3.0.STABLE7 for 
i386-redhat-linux-gnu...

2008/10/14 06:07:39| Process ID 26104
2008/10/14 06:07:39| With 1024 file descriptors available
2008/10/14 06:07:39| DNS Socket created at 0.0.0.0, port 53027, FD 7
2008/10/14 06:07:39| Adding domain proxy.phibee.net from /etc/resolv.conf
2008/10/14 06:07:39| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2008/10/14 06:07:39| helperStatefulOpenServers: Starting 15 
'ntlm_auth' processes

2008/10/14 06:07:39| helperOpenServers: Starting 15 'ntlm_auth' processes
2008/10/14 06:07:39| helperOpenServers: Starting 5 'wbinfo_group.pl' 
processes
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.
2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.

2008/10/14 06:07:39| User-Agent logging is disabled.
2008/10/14 06:07:39| Referer logging is disabled.
2008/10/14 06:07:39| Unlinkd pipe opened on FD 42
2008/10/14 06:07:39| Local cache digest enabled; rebuild/rewrite every 
3600/3600 sec

2008/10/14 06:07:39| Swap maxSize 512 KB, estimated 393846 objects
2008/10/14 06:07:39| Target number of buckets: 19692
2008/10/14 06:07:39| Using 32768 Store buckets
2008/10/14 06:07:39| Max Mem  size: 16384 KB
2008/10/14 06:07:39| Max Swap size: 512 KB
2008/10/14 06:07:39| Version 1 of swap file with LFS support detected...
2008/10/14 06:07:39| Rebuilding storage in /var/spool/squid (CLEAN)
2008/10/14 06:07:39| Using Least Load store dir selection
2008/10/14 06:07:39| Current Directory is /etc
2008/10/14 06:07:39| Loaded Icons.
2008/10/14 06:07:39| Accepting  HTTP connections at 0.0.0.0, port 
8080, FD 44.

2008/10/14 06:07:39| Accepting ICP messages at 0.0.0.0, port 3130, FD 45.
2008/10/14 06:07:39| HTCP Disabled.
2008/10/14 06:07:39| Ready to serve requests.
2008/10/14 06:07:39| Done reading /var/spool/squid swaplog (1 entries)
2008/10/14 06:07:39| Finished rebuilding storage from disk.
2008/10/14 06:07:39| 1 Entries scanned
2008/10/14 06:07:39| 0 Invalid entries.
2008/10/14 06:07:39| 0 With invalid flags.
2008/10/14 06:07:39| 1 Objects loaded.
2008/10/14 06:07:39| 0 Objects expired.
2008/10/14 06:07:39| 0 Objects cancelled.
2008/10/14 06:07:39| 0 Duplicate URLs purged.
2008/10/14 06:07:39| 0 Swapfile clashes avoided.
2008/10/14 06:07:39|   Took 0.02 seconds ( 48.57 objects/sec).
2008/10/14 06:07:39| Beginning Validation Procedure
2008/10/14 06:07:39|   Completed Validation Procedure
2008/10/14 06:07:39|   Validated 27 Entries
2008/10/14 06:07:39|   store_swap_size = 12
2008/10/14 06:07:40| storeLateRelease: released 0 objects
2008/10/14 06:08:05| externalAclLookup: 'AD_Group' queue overload 
(ch=0xb9b05bd0)
2008/10/14 06:08:05| externalAclLookup: 'AD_Group' queue overload 
(ch=0xb9b05bd0)


and now we have a:

2008/10/14 06:07:39| WARNING: Cannot run 
'/usr/lib/squid/wbinfo_group.pl' process.


If I run wbinfo_group.pl manually, it works fine. I run Fedora 9 
and my conf is:


external_acl_type AD_Group %LOGIN /usr/lib/squid/wbinfo_group.pl



Permissions on /usr/lib/squid/wbinfo_group.pl are squid:squid with execute (x) for all

Does anyone know where the problem is and how I can resolve it?


Try typing setenforce 0 and then starting Squid.  Perhaps it's an 
SELinux issue.
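
One way to test that theory (assuming a Fedora box with the standard SELinux
and auditd tools installed; the commands are the usual ones, not taken from
this thread):

  getenforce                              # Enforcing / Permissive / Disabled?
  setenforce 0                            # switch to permissive until reboot
  service squid restart                   # does wbinfo_group.pl start now?
  grep wbinfo /var/log/audit/audit.log    # look for the AVC denial

If Squid starts cleanly in permissive mode, the denial recorded in audit.log
is what needs a policy exception.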




thanks for your help
jerome


Chris



[squid-users] Cannot get squid 2.6 in reverse-proxy to not send cache when peer is dead

2008-10-16 Thread samk
See Thread at: http://www.techienuggets.com/Detail?tx=56772 Posted on behalf of 
a User

All,

I really need help here, and this has got to be a real simple problem, just not 
easy to lay out for you all.

I am using Squid 2.6 as a reverse proxy for our webservers.
Our webservers get rebooted every night, and during that downtime, we send 
users to a sorry server.
We are using a Cisco CSS device to route the traffic to the sorry server, when 
it detects that the webservers are down.

The problem I am having is that when the webservers go down, the Squid server 
is delivering content from its cache instead of a 404.
This is causing half-loaded webpages instead of the SORRY SERVER page.


###CONFIG##
cache_peer 12.xxx.xxx.xxx parent 80 0 no-query originserver name=foo
acl sites_foo dstdomain www1.foobar.com www.foobar.com
cache_peer_access foo allow sites_foo
cache_peer_access foo deny all

acl foo_networks src 12.xxx.xxx.xxx/27
http_access allow foo_networks

http_port 12.xxx.xxx.xxx:80 accel defaultsite=www1.foobar.com





Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Amos Jeffries
 Thanks so much Henrik and Leonardo!
 Looks like I should learn regexes, since I took $ as meaning
 'whatever comes after' rather than end of string :)
 Now it logs as TCP_MISS.
 Thanks so much again!

If you need to match just the domain, it's better to use the 'dstdomain'
ACL type instead of regex.  Squid processes them much faster and they can
still do wildcard sub-domains (.nix.ru).
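
For example, a dstdomain version of the ACL from this thread might look like
this (Squid 2.6 syntax assumed; newer releases spell 'no_cache' as 'cache'):

  acl DIRECTNIX dstdomain .nix.ru
  no_cache deny DIRECTNIX
  always_direct allow DIRECTNIX

The leading dot makes it match www.nix.ru and any other sub-domain of nix.ru.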

Amos



 On Thursday 16 October 2008 15:45, Leonardo Rodrigues
 Magalhães wrote:
 Anton escreveu:
  Hello!
 
  was trying for a few hours to have a certain site
  (http://www.nix.ru) to be not cacheable - but squid
  always gives me an object which is in cache!
 
  My steps:
 
  acl DIRECTNIX url_regex ^http://www.nix.ru/$
  no_cache deny DIRECTNIX
  always_direct allow DIRECTNIX

 your ACL is too complicated for a pretty simple thing
 ... it has the 'begin with' flag (^) and has the 'end
 with' ($) flag as well. And it has a final slash too. So,
 it seems that would match exclusively

 http://www.nix.ru/

 and nothing else, including NOT matching
 'http://www.nix.ru/index.htm',
 'http://www.nix.ru/logo.jpg' and so on.

 if you want hints on doing regexps, I would give you
 a precious hint: don't try to complicate things.

 acl DIRECTNIX url_regex -i www\.nix\.ru

 would do the job and would be much simpler to
 understand. And NEVER forget the case-insensitive (-i)
 flag on regexes.





Re: [squid-users] squidnt.com, warning

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 21:16 +0200, Guido Serassio wrote:

 Please, do you can update again the Wikipedia page ?

Done.

Regards
Henrik




Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-10-16 Thread BUI18
Sorry it took a while to get back.  Not sure how to interpret X-Cache and 
X-Cache-Lookup.

Here's the header info from Fiddler:

Request Header

GET /server1/websites/data/folder/myvideofile.vid HTTP/1.1

Client
Accept: */*
Transport
Host: ftp.mydomain.com
Proxy-Connection: Keep-Alive

Response Header

HTTP/1.0 200 OK
Content-Length: 1775372
Content-Type: video/jpeg
Last-Modified: Tue, 02 Sep 2008 23:57:25 GMT
Accept-Ranges: none
ETag: 8020b2a557dc91:3ecc
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Date: Thu, 16 Oct 2008 22:52:49 GMT
X-Cache: MISS from squid.mydomain.com
X-Cache-Lookup: HIT from squid.mydomain.com:3128
Via: 1.0 squid.mydomain.com:3128 (squid/2.6.STABLE14)
Proxy-Connection: keep-alive





- Original Message 
From: Amos Jeffries [EMAIL PROTECTED]
To: BUI18 [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org; Itzcak Pechtalt [EMAIL PROTECTED]
Sent: Wednesday, September 24, 2008 6:17:22 AM
Subject: Re: [squid-users] Object becomes STALE: refresh_pattern min and max

BUI18 wrote:
 Hi - Thanks for responding.  URL for video file never changes.
 

What release of Squid?

Did you check the Expires header properly from the transfer rather than 
from the (apparently untrustworthy) info in the store log?


 I did some more checking in the Squid logs and this is what I noticed:
 
 File Properties of video file (Pacific Daylight Time (PDT))
 
 Created On: Monday, September 22, 2008, 8:59:35 AM
 
 Modified On: Monday, September 22, 2008, 8:59:35 AM
 
 Accessed On: Today, September 24, 2008, 3:53:12 AM
 
 ***
 Wget Grabs File (Time in India Standard Time (IST))
 
 --04:38:35--  http://ftp.mydomain.com/websites/data/myvideofile.vid
  = `/WGET/Temp/myvideofile.vid'
 04:38:54 (93.91 KB/s) - `/WGET/Temp/myvideofile.vid' saved [1791244/1791244]
 
 The access.log confirms initial pre-fetch by wget.
 
 1222124934.241  18968 192.168.200.4 TCP_MISS/200 1791684 GET 
 http://ftp.mydomain.com/websites/data/myvideofile.vid - DIRECT/69.43.136.41 
 video/jpeg
 
 UTC = Mon, 22 Sep 2008 23:08:54 GMT
 
 The store.log shows a write from memory to disk:
 
 1222124934.241 SWAPOUT 00 00057B65 1E18E35BDC9307C6BC3FBEFD5B4120A3  200 
 1222124765 1222099175-1 video/jpeg 1791244/1791244 GET 
 http://ftp.mydomain.com/websites/data/myvideofile.vid
 
 UTC = Mon, 22 Sep 2008 23:08:54 GMT
 
 ***
 
 Then Store.log shows release or removal from cache:
 
 153725.068 RELEASE 00 00057B65 605FAC36E93B0CDE81902BBC6C5EC71A  200 
 1222124765 1222099175-1 video/jpeg 1791244/-279 GET 
 http://ftp.mydomain.com/websites/data/myvideofile.vid
 
 UTC = Wed, 24 Sep 2008 10:55:25 GMT
 
 Notice the -1 for expiration header (I do not set one on the object).  My min 
 age is 5 days so I'm not sure why the object would be released from cache in 
 less than 2 days.
 
 If the object was released from cache, when the user tried to access the file, 
 Squid reports TCP_REFRESH_MISS, which to me means that it was found in cache 
 but when it sends an If-Modified-Since request, it thinks that the file has 
 been modified (which it was not, as seen by the lastmod date indicated in the 
 store.log below).
 
 ***
 
 User accessed file (access.log):
 
 153742.005  17275 192.168.200.52 TCP_REFRESH_MISS/200 1791688 GET 
 http://ftp.mydomain.com/websites/data/myvideofile.vid - DIRECT/69.43.136.41 
 video/jpeg
 
 UTC = Wed, 24 Sep 2008 10:55:42 GMT
 
 Then store.log shows a write to disk
 
 153742.005 SWAPOUT 00 00088336 1E18E35BDC9307C6BC3FBEFD5B4120A3  200 
 153575 1222099175-1 video/jpeg 
 1791244/1791244 GET http://ftp.mydomain.com/websites/data/myvideofile.vid
 
 UTC = Wed, 24 Sep 2008 10:55:42 GMT
 datehdr: Wed, 24 Sep 2008 10:55:55 GMT
 lastmod: Mon, 22 Sep 2008 15:59:35 GMT
 
 Anyone with ideas on why this behavior occurs?
 
 thanks
 
 
 
 
 
 - Original Message 
 From: Itzcak Pechtalt [EMAIL PROTECTED]
 To: Squid Users squid-users@squid-cache.org
 Sent: Wednesday, September 24, 2008 4:35:59 AM
 Subject: Re: [squid-users] Object becomes STALE: refresh_pattern min and max
 
 On Wed, Sep 24, 2008 at 1:39 PM, BUI18 [EMAIL PROTECTED] wrote:
 Hi -

 I have squid box with tons of disk for the cache_dir
 (hundreds of GB).  I use wget to perform some pre-fetching of large
 video files.  I've set the min and max age to 5 days and 7 days (in
 minutes).  And although I have plenty of disk space available, I still
 receive TCP_REFRESH_MISS for files that had been pre-fetched and later
 accessed the same day.  Does anyone know why Squid would consider it as
 STALE?  I thought that by setting the min value for refresh_pattern for
 the video file would guarantee freshness.  Not only does the cache
 consider it STALE, it then goes and pre-fetches a new copy even though
 I know that the 
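
For reference, a refresh_pattern matching the 5-day minimum / 7-day maximum
described above (values in minutes; the .vid extension comes from the logs
earlier in the thread and the 20% figure is arbitrary) might look roughly like:

  refresh_pattern -i \.vid$ 7200 20% 10080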

[squid-users] delaypoll

2008-10-16 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
Can anyone give me a squid.conf for delay pools?
I want to create:

user 192.168.1.1 - 192.168.1.100 up/down 12k/24k allow all files/website
user 192.168.1.50 - 192.168.1.80 up/down 12k/24k allow only to open
yahoo.com and google.com
user 192.168.1.100 - 192.168.1.200 up/down 12k/12k only allow to open meebo
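
A minimal class-1 sketch for the first group only (values are bytes per
second and purely illustrative; Squid's delay pools shape proxy-to-client
traffic only, and the per-site restrictions would be separate http_access
rules, not shown here):

  acl group1 src 192.168.1.1-192.168.1.100/32
  delay_pools 1
  delay_class 1 1
  delay_parameters 1 24000/24000
  delay_access 1 allow group1
  delay_access 1 deny all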


-- 
-=-=-=-=


Re: [squid-users] squidnt.com, warning

2008-10-16 Thread Amos Jeffries
 On tor, 2008-10-16 at 17:01 +0100, Mr Lyphifco wrote:
 It seems that the site http://squidnt.com/ is trying to masquerade as
 an
 official website for Mr Serassio's Windows port of Squid. It doesn't
 explicitly state this, but the wording of the site contents strongly
 implies
 such a thing.

 Also it was entered into a new Wikipedia article on SquidNT as the
 homepage:

   http://en.wikipedia.org/w/index.php?title=SquidNT&action=history

 I suspect blog-spam of some sort.

 I would agree. The site seems completely anonymous about who is behind the
 content, and I have never heard of the name registered as owner
 of the domain (additionally, the domain owner is registered with a UK
 address but a US phone number, which is a bit odd imho).

 But I do suspect the wikipedia user who created the wikipedia article is
 the same. The wikipedia article was created before the first blog
 post (Wikipedia article created 19 July, first blog post is from 26
 July).

 I have added a warning comment on their download page.

 Regards
 Henrik


Which appears to have been moderated out of existence.
At least the three comments now present are all by 'admin' advertising
their downloads.

Amos




Re: [squid-users] Disabling error pages

2008-10-16 Thread Amos Jeffries
 On tor, 2008-10-16 at 13:02 +0100, Robert Morrison wrote:

 I've found lots of references online (in this list's archives, other
 sites and the FAQ) to customising error pages in squid, but haven't
 yet found reference to removing error pages completely.

 You can't. Once the request has reached the proxy the proxy must respond
 with something. If it fails retrieving the requested object the polite
 thing is to respond with an error message explaining what happened and
 what the user can do to fix the problem.

 If you do not want to be polite to the users then you MAY change the
 error pages to just a blank page with no visible content, but there
 still needs to be some kind of response.

 Is this possible without editing source code? I think I saw reference
 to setting font color in error messages to the same as background, but
 I'd prefer something a little less hackish ;)

 Yes. Just replace the error pages with a file containing just the
 following line:

 <!-- %s -->

 Regards
 Henrik


Also see TCP_RESET
http://www.squid-cache.org/Versions/v3/3.0/cfgman/deny_info.html
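
For illustration, resetting the connection instead of sending an error page
for a hypothetical ACL called 'blocked' (the names are examples, not from
this thread) might look like:

  acl blocked dstdomain .example.com
  http_access deny blocked
  deny_info TCP_RESET blocked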

Amos




Re: [squid-users] Re-distributing the cache between multiple servers

2008-10-16 Thread Amos Jeffries
 Hi,

 I have two reverse proxy servers using each other as neighbours. The
 proxy servers are load balanced (using a least connections
 algorithm) by a Netscaler upstream of them.

 A small amount of URLs account for around 50% or so of the requests.

 At the moment there's some imbalance in the hit rates on the two
 caches because I brought up server A before server B and it's holding
 the majority of the objects which make that 50% of request traffic.

 I can see that clearing/expiring both caches should result in an equal
 hit rate between the two servers.

 Is this the only way of achieving this? I'm concerned now that if I
 was to add a third server C into the cache pool it'd have an even
 lower hit rate than on A or B.

 I spent some time searching but wasn't able to find Squid
 administration for dummies ;)


Sounds like one of the expected side effects of the sibling 'proxy-only'
setting. If the squids were allowed to cache data received from their
siblings in one of these setups, the hits would balance out naturally.
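
As a sketch (hostnames and ports are placeholders, not from this thread), the
difference is whether the sibling line carries proxy-only:

  cache_peer squid-b.example.com sibling 3128 3130 proxy-only   # hits stay only on B
  cache_peer squid-b.example.com sibling 3128 3130              # A keeps its own copy too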

Amos



[squid-users] Header Stripping of Header type other

2008-10-16 Thread WRIGHT Alan [UK]
Hi Folks,
I have had a look at the wiki and the docs and need a bit more help.

I am trying to look for and strip a request header X-MSISDN:

I could use an ACL with request_header_access other deny, but this will
strip some other headers too, which is not acceptable.

Is there a way to create custom header fields for stripping?

Regards

Alan




Re: [squid-users] Update Accelerator, Squid and Windows Update Caching

2008-10-16 Thread Richard Wall
On Fri, Oct 10, 2008 at 12:30 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Richard Wall wrote:

 Hi,

 I've been reading through the archive looking for information about
 squid 2.6 and windows update caching. The FAQ mentions problems with
 range offsets but it's not really clear which versions of Squid this
 applies to.

 All versions. The FAQ was the result of my experiments mid last year. With
 some tweaks made early this year since Vista came out.
 We haven't done intensive experiments with Vista yet.

Hi Amos,

I'm still investigating Windows Update caching (with 2.6.STABLE17/18)

First of all, I have been doing some tests to try and find out the
problem with Squid and Content-Range requests.
 * I watch the squid logs as a vista box does its automatic updates
and I can see that *some* of its requests use ranges. (so far I have
only seen these when it requests .psf files...some of which seem to be
very large files...so the range request makes sense) See:
http://groups.google.hr/group/microsoft.public.windowsupdate/browse_thread/thread/af5db07dc2db9713

# zcat squid.log.192.168.1.119.2008-10-16.gz | grep
multipart/byteranges | awk '{print $7}' | uniq | while read URL; do
echo $URL; wget --spider $URL 2>&1 | grep Length; done
http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/10/windows6.0-kb956390-x86_2d03c4b14b5bad88510380c14acd2bffc26436a7.psf
Length: 91,225,471 (87M) [application/octet-stream]
http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/05/windows6.0-kb950762-x86_0cc2989b92bc968e143e1eeae8817f08907fd715.psf
Length: 834,868 (815K) [application/octet-stream]
http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/03/windows6.0-kb948590-x86_ed27763e42ee2e20e676d9f6aa13f18b84d7bc96.psf
Length: 755,232 (738K) [application/octet-stream]
http://www.download.windowsupdate.com/msdownload/update/software/crup/2008/09/windows6.0-kb955302-x86_1e40fd3ae8f95723dbd76f837ba096adb25f3829.psf
Length: 7,003,447 (6.7M) [application/octet-stream]
...

 * I have found that curl can make range requests so I've been using
it to test how Squid behaves and it seems to do the right thing. eg:
 - First ask for a range : The correct range is returned X-Cache: MISS
 - Repeat the range request :  The correct range is returned X-Cache: MISS
 - Request the entire file: The entire file is correctly returned X-Cache: MISS
 - Repeat the request: X-Cache: HIT
 - Repeat the previous range request: X-Cache: HIT
 - Request a different range: X-Cache: HIT

curl --range 1000-1002 --header Pragma: -v -x http://127.0.0.1:3128
http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/05/windows6.0-kb950762-x86_0cc2989b92bc968e143e1eeae8817f08907fd715.psf
> /dev/null

Looking back through the archive I find this conversation from 2005:
http://www.squid-cache.org/mail-archive/squid-users/200504/0669.html

...but the behaviour there sounds like a result of setting:
range_offset_limit -1
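
For reference, the combination usually quoted for caching large ranged
downloads (the directive names are real, the values are illustrative, and it
costs extra bandwidth on every partial request) is something like:

  range_offset_limit -1
  quick_abort_min -1 KB
  maximum_object_size 100 MB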

Seems to me that Squid should do a good job of Windows Update caching.
There is another thread discussing how to override MS update cache
control headers:
http://www.squid-cache.org/mail-archive/squid-users/200508/0596.html

but I don't see anything evil in the server response headers
today. I guess the client may be sending no-cache headers...I'll
double check that later.

Is there some other case that I'm missing?

 I'm going to experiment, but if anyone has any positive or
 negative experience of Squid and windows update caching, I'd be really
 interested to hear from you.

 In case Squid cannot do windows update caching by itself, I'm also
 looking at integrating Update Accelerator
 (http://update-accelerator.advproxy.net/) script with standard squid
 2.6 and wondered if anyone else had any experience of this.
 The update accelerator script is just a perl wrapper around wget which
 is configured as a Squid url_rewrite_program. It's not clear to me
 what this script is doing that Squid wouldn't do by itself.

 Strange indeed.

I got update accelerator working with Squid but I'm still not
convinced that it's necessary (see above).

-RichardW.