[squid-users] sslbump + DynamicSslCert + url_rewrite_program + NTLM authentication

2011-02-10 Thread Yonah Russ
Hi,

I've been using Squid 2.6/7 for a while as a redirecting proxy for
developers to preview their changes as if they are looking at
production websites.
Now I need to support rewriting SSL requests as well and this has
brought me to investigate Squid 3.2/3.1
As both of these seem very new and a lot seems to have changed, I'm
hoping you can help point me in the best direction.

I understand that 3.2 has the DynamicSslCert feature and that a patch
exists for 3.1 as well; which would be the preferred way to implement
this for semi-production/internal users?
Is there any way to restrict which sites get bumped and which do not?
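
For reference, restricting which sites are bumped is done with ssl_bump
ACL rules in both versions. A minimal squid.conf sketch of the 3.2-era
syntax, where the CA paths and the domain list are purely illustrative
assumptions:

# hedged sketch (squid-3.2 era); paths and domains are illustrative
http_port 3128 ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB \
    cert=/etc/squid/ssl/ca.pem key=/etc/squid/ssl/ca-key.pem
acl preview_sites dstdomain .example.com
# bump only the preview sites; tunnel everything else untouched
ssl_bump allow preview_sites
ssl_bump deny all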

I also understand that redirect_program has been replaced with
url_rewrite_program, but the interface seems to be fairly backwards
compatible; any gotchas to look out for?
Will the url_rewrite_program have access to the decrypted https
request? If so, will the rewrite program be able to rewrite the
request and still send it over HTTPS?
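
For reference, the interface is the classic redirector protocol: the
helper reads one request per line (URL client_ip/fqdn ident method) and
writes back either a rewritten URL or a blank line for no change. A
minimal sketch of such a helper, with purely illustrative hostnames:

#!/bin/sh
# minimal url_rewrite_program sketch; the hostnames are illustrative
while read url rest; do
    case "$url" in
        http://www.example.com/*)
            # point production URLs at the preview host
            echo "http://preview.example.com/${url#http://www.example.com/}"
            ;;
        *)
            # blank line = leave the URL unchanged
            echo ""
            ;;
    esac
done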

Have there been changes in Active Directory integration for proxy
authentication? Currently I'm using NTLM and Basic
authentication+winbind, but not without issues.

I understand there are some changes regarding SMP. Currently I run
multiple instances of Squid with different configurations (http_port,
redirect_program). Can I consolidate this at all with the newer versions?
I'd be interested in sharing the authentication helpers, but still
having different http/https ports and rewrite configurations.

Thanks in advance,
Yonah


[squid-users] netdbExchangeHandleReply: corrupt data, aborting

2011-02-10 Thread Alex Sharaz

Sent this out a while back.

Don't think I got any replies.

Anyway, still happening, but now with squid 3.1.10/3.1.11.

I'd like to do a phased upgrade to 3.1.x but don't want to try it if
I'm still getting these netdb errors.

Rgds
Alex


Hi,

For a while now I've been running a squid 2.7.STABLE7 service here (just
upgraded to STABLE9) and thought I'd try out the 3.1.4 build on my test
web cache. Although the test cache is linked into my production cache
cluster as a sibling, the university accesses the cache service via a
ServerIron hardware load balancer which balances traffic over all my
2.7.STABLE9 boxes. I access the test cache directly.

Since this morning, when I upgraded to 3.1.4,
I've been seeing the following in the 3.1.4 cache.log file:


2010/06/21 12:14:12| storeLateRelease: released 0 objects
2010/06/21 12:14:33| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 12:14:33| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 12:14:41| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 12:18:59| Detected DEAD Sibling: wwwcache3-east.hull.ac.uk
2010/06/21 12:18:59| Detected DEAD Sibling: wwwcache4-east.hull.ac.uk
2010/06/21 12:18:59| Detected DEAD Sibling: wwwcache1-west.hull.ac.uk
2010/06/21 12:18:59| Detected DEAD Sibling: wwwcache3-west.hull.ac.uk
2010/06/21 12:18:59| Detected DEAD Sibling: wwwcache1-east.hull.ac.uk
2010/06/21 12:18:59| Detected REVIVED Sibling: wwwcache1-west.hull.ac.uk
2010/06/21 12:18:59| Detected REVIVED Sibling: wwwcache3-west.hull.ac.uk
2010/06/21 12:18:59| Detected REVIVED Sibling: wwwcache3-east.hull.ac.uk
2010/06/21 12:18:59| Detected REVIVED Sibling: wwwcache1-east.hull.ac.uk
2010/06/21 12:18:59| Detected REVIVED Sibling: wwwcache4-east.hull.ac.uk
2010/06/21 12:54:11| NETDB state saved; 821 entries, 3 msec
2010/06/21 12:54:45| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 12:54:45| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 12:54:52| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 13:11:28| Detected DEAD Sibling: wwwcache4-east.hull.ac.uk
2010/06/21 13:11:28| Detected DEAD Sibling: wwwcache2-west.hull.ac.uk
2010/06/21 13:11:28| Detected REVIVED Sibling: wwwcache4-east.hull.ac.uk
2010/06/21 13:11:28| Detected REVIVED Sibling: wwwcache2-west.hull.ac.uk
2010/06/21 13:40:18| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 13:40:25| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 13:40:26| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 13:55:15| NETDB state saved; 821 entries, 3 msec


Don't think I've seen this before. Web cache configs available if
necessary. Anyone else trying to mix 2.7 and 3.1 siblings?

Rgds


[squid-users] Assertion failed message then squid restart on 3.1.10 and 3.1.11

2011-02-10 Thread Alex Sharaz

Hi,

Looking for hints as to how to resolve the above problem.

Occasionally I get

2011/02/10 09:25:42| assertion failed: htcp.cc:1350: sz = 0
2011/02/10 09:25:52| Starting Squid Cache version 3.1.11 for x86_64-unknown-linux-gnu...


Messages appearing in my cache.log.

The server in question is a test box that is linked into my production
(2.7.STABLE9) group of caches. I'd like to move to the 3.1 branch
from 2.7 but am reluctant to do so while it occasionally breaks.


Any pointers as to how I might resolve the above?

I'm running squid on a 64-bit Ubuntu (10.04) box with the following
config:


#!/bin/bash
ulimit -SHn 24576
./configure --enable-snmp --enable-basic-auth-helpers=PAM \
  --enable-cachemgr-hostname=slb-realsrv1-east --enable-htcp \
  --enable-cache-digests --enable-async-io --prefix=/usr/local/squid \
  --with-pthreads --enable-removal-policies --enable-ssl \
  --with-openssl=/usr/local/ssl --enable-linux-netfilter --with-large-files \
  --with-maxfd=24576 --with-dl --enable-icmp --enable-poll \
  --disable-ident-lookups --enable-truncate --enable-delay-pools \
  --disable-ipv6 --disable-loadable-modules


Thanks
Alex




Re: [squid-users] url blocking

2011-02-10 Thread Amos Jeffries

On 10/02/11 18:25, Zartash . wrote:


So is there any way to block %?



If it actually exists in the URL (not just the browser display version),
using '%' in the pattern will match it. Block with that ACL.
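
A minimal sketch of that, where the ACL name is an arbitrary choice:

acl raw_percent url_regex %
http_access deny raw_percent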



If it's encoding something then no, you can't block it directly. It's a
URL wire-level encoding byte.


You could decode the %xx code and figure out what character it is 
hiding. Match and block on that.


Or, if you don't care what character it's encoding, use the '.' regex
control to match any single byte.




Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4


Re: [squid-users] urlParse: Illegal character in hostname

2011-02-10 Thread Winfield Henry
Ah, I did set that, forgot to undo it, and never made the connection.

Thank you. 

 On 2/9/2011 at 4:58 PM, in message
4362f9aa484687ab1128437ff2264...@mail.treenet.co.nz, Amos Jeffries
squ...@treenet.co.nz wrote:
 On Wed, 09 Feb 2011 08:03:12 -0500, Winfield Henry wrote:
 Hello,
  
 I have seen lots of references to the above in the mailing list. It is
 stated that it is not a problem with squid, rather the client's proxy
 technique.
 I am fine with that, but am wondering how these may be excluded from the
 log files? The hostnames it refers to are some strange encoded characters.
  
 Thanks,
 W
 
 The message is displayed because you chose to enable the check_hostnames
 feature. These are the security warnings that feature produces.
 
 Where possible please track down how and why these mangled invalid URLs
 are appearing and try to get them fixed.
 
 Amos



Re: [squid-users] Assertion failed message then squid restart on 3.1.10 and 3.1.11

2011-02-10 Thread Amos Jeffries

On 11/02/11 01:57, Alex Sharaz wrote:

Hi,

Looking for hints as to how to resolve the above problem.

Occasionally I get

2011/02/10 09:25:42| assertion failed: htcp.cc:1350: sz = 0
2011/02/10 09:25:52| Starting Squid Cache version 3.1.11 for
x86_64-unknown-linux-gnu...

Messages appearing in my cache.log.

The server in question is a test box that is linked into my production
(2.7.STABLE9) group of caches. I'd like to move to the 3.1 branch from
2.7 but am reluctant to do so while it occasionally breaks.

Any pointers as to how I might resolve the above?

I'm running squid on a 64-bit Ubuntu (10.04) box with the following config:

#!/bin/bash
ulimit -SHn 24576
./configure --enable-snmp --enable-basic-auth-helpers=PAM \
--enable-cachemgr-hostname=slb-realsrv1-east --enable-htcp \
--enable-cache-digests --enable-async-io --prefix=/usr/local/squid \
--with-pthreads --enable-removal-policies --enable-ssl \
--with-openssl=/usr/local/ssl --enable-linux-netfilter --with-large-files \
--with-maxfd=24576 --with-dl --enable-icmp --enable-poll \
--disable-ident-lookups --enable-truncate --enable-delay-pools \
--disable-ipv6 --disable-loadable-modules

Thanks
Alex



Ouch, quite nasty. I've applied patches to handle this to 3.HEAD. It 
should apply easily to earlier releases as well.


When the mirrors update it will be available at 
http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11220.patch
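
Applying a changeset to a release tree is the usual patch routine; a
sketch, assuming it applies at -p0 (the --dry-run checks first):

cd squid-3.1.11
wget http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11220.patch
patch -p0 --dry-run < squid-3-11220.patch && patch -p0 < squid-3-11220.patch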


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4


Re: [squid-users] Questions on SQUID peering/mesh

2011-02-10 Thread Matus UHLAR - fantomas
 On 01/02/11 17:06, Pandu Poluan wrote:
 I have 2 questions regarding SQUID peering:

 Q1: Should I use ICP or HTCP?

 On 01.02.11 19:00, Amos Jeffries wrote:
 If you have a choice HTCP.
 The packets are slightly bigger than ICP (they contain HTTP headers not
 just URLs) but the false-positives are much lower and thus routing
 choices are better.

 On 09/02/11 22:52, Matus UHLAR - fantomas wrote:
 what if we use cache digests?

On 10.02.11 17:22, Amos Jeffries wrote:
 Then you are using digests not ICP or HTCP  :-P

So, if squid fetched a digest from a sibling, it won't send ICP or HTCP to it?

 CD has more false positives than ICP but less lag on the real matches
 and less background bandwidth consumption.

Of course. My question now is whether they can benefit from all of those...
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
If Barbie is so popular, why do you have to buy her friends? 


Re: [squid-users] netdbExchangeHandleReply: corrupt data, aborting

2011-02-10 Thread Amos Jeffries

On 11/02/11 01:56, Alex Sharaz wrote:

Sent this out a while back.

Don't think I got any replies.

Anyway, still happening, but now with squid 3.1.10/3.1.11.

I'd like to do a phased upgrade to 3.1.x but don't want to try it if
I'm still getting these netdb errors

Rgds
Alex


Hi,

For a while now I've been running a squid 2.7.STABLE7 service here (just
upgraded to STABLE9) and thought I'd try out the 3.1.4 build on my test
web cache. Although the test cache is linked into my production cache
cluster as a sibling, the university accesses the cache service via a
ServerIron hardware load balancer which balances traffic over all my
2.7.STABLE9 boxes. I access the test cache directly.

Since this morning, when I upgraded to 3.1.4,
I've been seeing the following in the 3.1.4 cache.log file:


snip

2010/06/21 13:40:18| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 13:40:25| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 13:40:26| netdbExchangeHandleReply: corrupt data, aborting
2010/06/21 13:55:15| NETDB state saved; 821 entries, 3 msec


Don't think I've seen this before. Web cache configs available if
necessary. Anyone else trying to mix 2.7 and 3.1 siblings?

Rgds


This message is not fatal to Squid's operation. It just means something
unknown was encountered in the exchange and the remaining input data had
to be discarded.


The whole NetDB transfer is open to machine architecture mismatch
problems and 64-bit/32-bit problems, along with unknown record-type coding.
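
If the noise is a problem while mixing versions, the transfer can be
switched off per peer with the no-netdb-exchange cache_peer option; a
sketch, where the ports are illustrative assumptions:

# stop requesting the ICMP RTT (NetDB) database from a mixed-version sibling
cache_peer wwwcache1-east.hull.ac.uk sibling 3128 3130 proxy-only no-netdb-exchange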


Is it showing up just on the squid-3 boxes or on the 2.x boxes as well?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4


Re: [squid-users] url blocking

2011-02-10 Thread Marcus Kool

ufdbGuard is a URL filter for Squid that does exactly what Zartash needs.
It transforms codes like %xx to their respective characters and does
URL matching based on the normalised/translated URLs.
It also supports regular expressions, Google SafeSearch enforcement, and more.
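
Hooking it into Squid is the usual url_rewrite_program arrangement; a
sketch, where the helper path and child count are assumptions to adjust
for your install:

# helper path is an assumption; check where your ufdbGuard install put it
url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient
url_rewrite_children 16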

Marcus


Amos Jeffries wrote:

On 10/02/11 18:25, Zartash . wrote:


So is there any way to block %?



If it actually exists in the URL (not just the browser display version),
using '%' in the pattern will match it. Block with that ACL.



If it's encoding something then no, you can't block it directly. It's a
URL wire-level encoding byte.


You could decode the %xx code and figure out what character it is 
hiding. Match and block on that.


Or, if you don't care what character it's encoding, use the '.' regex
control to match any single byte.




Amos


Re: [squid-users] Questions on SQUID peering/mesh

2011-02-10 Thread Amos Jeffries

On 11/02/11 03:28, Matus UHLAR - fantomas wrote:

On 01/02/11 17:06, Pandu Poluan wrote:

I have 2 questions regarding SQUID peering:

Q1: Should I use ICP or HTCP?



On 01.02.11 19:00, Amos Jeffries wrote:

If you have a choice HTCP.
The packets are slightly bigger than ICP (they contain HTTP headers not
just URLs) but the false-positives are much lower and thus routing
choices are better.



On 09/02/11 22:52, Matus UHLAR - fantomas wrote:

what if we use cache digests?


On 10.02.11 17:22, Amos Jeffries wrote:

Then you are using digests not ICP or HTCP  :-P


So, if squid fetched a digest from a sibling, it won't send ICP or HTCP to it?



CD and ICP certainly work together. I believe CD and HTCP would work as 
well.




CD has more false positives than ICP but less lag on the real matches
and less background bandwidth consumption.


Of course. My question now is whether they can benefit from all of those...


Well CD + HTCP if you wanted to.

The lookup queries of HTCP are essentially just ICP with the HTTP
headers attached, so the gains come from the remote peer being able
to determine its yes/no reply from things like the expiry headers,
Vary: and ETag matching, or running the cache ACLs on it.
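
Enabling HTCP is a per-peer choice via the htcp option on cache_peer.
A sketch, with an illustrative hostname and the standard ports:

# 3128 = peer HTTP port, 4827 = standard HTCP port; hostname illustrative
cache_peer sibling.example.com sibling 3128 4827 htcp proxy-only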


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4


[squid-users] squid + sslbump + [c-icap] + [squidclamav/havp] + clamav

2011-02-10 Thread Alessandro Baggi
Hi list, for many years I've used squid-2.7.STABLE7 for proxying,
content filtering and virus scanning, but it was not able to scan HTTPS
traffic for viruses. Now, compiling a package for my system, I've seen
that in the 3.1.x versions there is the ssl-bump option to get HTTPS
traffic treated as HTTP traffic.


in my squid.conf I have:

...
..
ssl_bump allow localnet
always_direct allow all

http_port 172.16.2.8:3128 ssl-bump cert=/etc/squid/cert/cert.crt
key=/etc/squid/cert/key.key



My first question is: how can I tell whether ssl-bump works? In
access.log I always get CONNECT/DIRECT for HTTPS connections. Is this
normal, or is my ssl-bump config not working?


Then, my squidclamav version is 6.x and uses c-icap, and I've configured
squid for ICAP as:


icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav
adaptation_access service_resp allow all

For HTTP connections all works fine, but with HTTPS connections there
are always CONNECT/DIRECT entries.

On http://wiki.squid-cache.org/Features/SslBump it says:

Squid-in-the-middle decryption and encryption of straight *CONNECT* and 
transparently redirected SSL traffic, using configurable client- and 
server-side certificates. While decrypted, the traffic can be inspected 
using ICAP.


So at this point, ssl-bump should permit squidclamav to see files
(decrypted) over HTTPS?


If yes, then I have a misconfiguration; can you point me in the right
direction? (If you need my squid.conf I can post it.)


Thanks in advance.


Re: [squid-users] squid + sslbump + [c-icap] + [squidclamav/havp] + clamav

2011-02-10 Thread Marcus Kool

There seems to be a misconception about what sslbump can and cannot do.

sslbump can only decrypt SSL connections.
sslbump cannot decrypt all other types of traffic that use the
HTTPS port and CONNECT method.
So, for example, it cannot decrypt Skype traffic, and files
containing a virus can still enter the network.

Marcus

Alessandro Baggi wrote:
Hi list, for many years I've used squid-2.7.STABLE7 for proxying,
content filtering and virus scanning, but it was not able to scan HTTPS
traffic for viruses. Now, compiling a package for my system, I've seen
that in the 3.1.x versions there is the ssl-bump option to get HTTPS
traffic treated as HTTP traffic.


in my squid.conf I have:

...
..
ssl_bump allow localnet
always_direct allow all

http_port 172.16.2.8:3128 ssl-bump cert=/etc/squid/cert/cert.crt
key=/etc/squid/cert/key.key



My first question is: how can I tell whether ssl-bump works? In
access.log I always get CONNECT/DIRECT for HTTPS connections. Is this
normal, or is my ssl-bump config not working?


Then, my squidclamav version is 6.x and uses c-icap, and I've configured
squid for ICAP as:


icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav

adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav

adaptation_access service_resp allow all

For HTTP connections all works fine, but with HTTPS connections there
are always CONNECT/DIRECT entries.

On http://wiki.squid-cache.org/Features/SslBump it says:

Squid-in-the-middle decryption and encryption of straight *CONNECT* and 
transparently redirected SSL traffic, using configurable client- and 
server-side certificates. While decrypted, the traffic can be inspected 
using ICAP.


So at this point, ssl-bump should permit squidclamav to see files
(decrypted) over HTTPS?


If yes, then I have a misconfiguration; can you point me in the right
direction? (If you need my squid.conf I can post it.)


Thanks in advance.




[squid-users] problem using squid as proxy server to load balance reverse-proxies

2011-02-10 Thread Sri Rao
Hi,

I am trying to set up squid as an SSL proxy to load balance between
reverse-proxies.  I believe the config is right, but what is happening
is that squid gets the CONNECT request and connects to the reverse
servers on the right port, but forwards the CONNECT request instead of
connecting to them as the originserver.  I am pasting the config as it
is right now.  I am using localhost as test reverse proxies just for
testing.  It also doesn't seem to fail over to the next peer when the
first one it selects returns an error (HTTP error code or
connection failure), even though I have retry_on_error set.


Thanks for your help!

Sri


pid_filename /var/run/squid_sptest.pid
debug_options ALL,1 44,9 26,9 17,9 3,9 5,9 15,9 33,9 39,9 61,9 21,5
http_port 127.0.0.1:7174

hierarchy_stoplist cgi-bin ?

retry_on_error on

refresh_pattern .                 0     0%      0
refresh_pattern ^ftp:             1440  20%     10080
refresh_pattern ^gopher:          1440  0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .                 0     20%     4320


acl sp_test myport 7174
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8

acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network

acl SSL_ports port 443
acl CONNECT method CONNECT

http_access allow sp_test localhost CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny CONNECT !SSL_ports
http_access deny all

cache_peer 127.0.0.1 parent 8174 0 originserver proxy-only no-query
round-robin weight=2  default
cache_peer 127.0.0.12 parent 8174 0 originserver proxy-only no-query
round-robin weight=1

cache_peer_access 127.0.0.1 allow sp_test
cache_peer_access 127.0.0.12 allow sp_test
cache_peer_access 127.0.0.1 deny all
cache_peer_access 127.0.0.12 deny all

never_direct allow sp_test

cache deny all


[squid-users] (null):// instead of http://, what would cause this?

2011-02-10 Thread Dean Weimer
I have a reverse proxy running 3.1.10, and noticed a few odd lines in the 
access log while searching them for some other info.  I was wondering if anyone 
knew what would cause some entries like these?  There are only 13 lines out of 
22,000+ requests to this server today, and I haven't heard any complaints from 
users, just thought the entries were odd.

1297353864.628  0 10.200.129.50 NONE/400 3030 GET (null)://...snip... - 
NONE/- text/htm

The clients are on WAN connections of various speeds, and these could just be
caused by network errors on the WAN connections; I just thought I would
check and see if anyone else had seen these, and whether it's something I
should investigate further in case there is an application issue causing this.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


Re: [squid-users] (null):// instead of http://, what would cause this?

2011-02-10 Thread Eliezer

I got almost the same thing, but on a forward proxy.

It's getting null and then the address, like:

nullhttp://...

I don't remember the exact line because it appeared 5 times and then was gone.



On 10/02/2011 22:38, Dean Weimer wrote:


I have a reverse proxy running 3.1.10, and noticed a few odd lines in the 
access log while searching them for some other info.  I was wondering if anyone 
knew what would cause some entries like these?  There are only 13 lines out of 
22,000+ requests to this server today, and I haven't heard any complaints from 
users, just thought the entries were odd.

1297353864.628  0 10.200.129.50 NONE/400 3030 GET (null)://...snip...  - 
NONE/- text/htm

The clients are on WAN connections of various speeds, and these could just be
caused by network errors on the WAN connections; I just thought I would
check and see if anyone else had seen these, and whether it's something I
should investigate further in case there is an application issue causing this.

Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co
   


[squid-users] Please ignore, testing email server

2011-02-10 Thread Jose Nathaniel G. Nengasca
thanks




[squid-users] Problem with HTTP 1.1 replies

2011-02-10 Thread Packet Racer
Hopefully this is the right list for this question:

Currently running squid 2.6.STABLE21-3 (the RedHat distributed one),
and having problems with a specific site that makes use of HTTP
1.1.  The issue can be boiled down to this:

The site loads a page and asks the browser NOT to cache it.  Then it
asks the browser to reload it a 2nd and 3rd time.  The 2nd and 3rd
load depend on the browser pulling fresh data from the server.  When
it works, it looks like this:

*** Client request #1:
GET http://www.[...snipped...].com/boost-gzip-cookie-test.html HTTP/1.0
Accept: */*
Referer: http://www.[...snipped...].com/
Accept-Language: en-us
UA-CPU: x86
Connection: Keep-Alive
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; [...snipped...])
Cookie: has_js=1; cmTPSet=Y
Host: www.[...snipped...].com

*** Server reply #1 (headers only shown):
HTTP/1.1 200 OK
Date: Tue, 08 Feb 2011 03:48:57 GMT
Server: Apache/2.2.3 (Red Hat)
Last-Modified: Thu, 03 Feb 2011 21:32:11 GMT
ETag: 17c801c-15a-49b677f912cc0
Accept-Ranges: bytes
Content-Length: 460
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Sun, 19 Nov 1978 05:00:00 GMT
X-Header: Boost Citrus 1.8
Connection: close
Content-Type: text/html; charset=utf-8

*** Client request #2:
GET http://www.[...snipped...].com/boost-gzip-cookie-test.html HTTP/1.0
Accept: image/gif, image/x-xbitmap, [...snipped...], application/xaml+xml, */*
Accept-Language: en-us
UA-CPU: x86
Connection: Keep-Alive
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; [...snipped...])
Host: www.[...snipped...].com
Cookie: has_js=1; cmTPSet=Y

*** Server reply #2:
HTTP/1.1 200 OK
Date: Tue, 08 Feb 2011 03:48:57 GMT
Server: Apache/2.2.3 (Red Hat)
Last-Modified: Thu, 03 Feb 2011 21:32:11 GMT
ETag: 17c801c-15a-49b677f912cc0
Accept-Ranges: bytes
Content-Length: 460
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Sun, 19 Nov 1978 05:00:00 GMT
X-Header: Boost Citrus 1.8
Connection: close
Content-Type: text/html; charset=utf-8

Request and Reply #3 are exactly the same as #2.  As you can see, the
site depends on the browser honoring the Cache-Control header, which
is an HTTP 1.1 construct.

When traffic goes through Squid, however, what you get is this:

*** Client request #1:
GET http://www.[...snipped...].com/boost-gzip-cookie-test.html HTTP/1.0
Accept: */*
Referer: http://www.[...snipped...].com/
Accept-Language: en-us
UA-CPU: x86
Proxy-Connection: Keep-Alive
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; [...snipped...])
Cookie: has_js=1; cmTPSet=Y
Proxy-Authorization: [...snipped...]
Host: www.[...snipped...].com

*** Reply #1:
HTTP/1.0 200 OK
Date: Tue, 08 Feb 2011 03:08:16 GMT
Server: Apache/2.2.3 (Red Hat)
Last-Modified: Thu, 03 Feb 2011 21:32:11 GMT
ETag: 17c801c-15a-49b677f912cc0
Accept-Ranges: bytes
Content-Length: 346
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Sun, 19 Nov 1978 05:00:00 GMT
X-Header: Boost Citrus 1.8
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
X-Cache: MISS from [...snipped...]
X-Cache-Lookup: MISS from [...snipped...]:3128
Via: 1.0 [...sniped...]:3128 (squid/2.6.STABLE21)
Proxy-Connection: keep-alive

*** Client request #2:
GET http://www.[...snipped...].com/boost-gzip-cookie-test.html HTTP/1.0
Accept: image/gif, image/x-xbitmap, image/jpeg, [...snipped...],
application/xaml+xml, */*
Accept-Language: en-us
UA-CPU: x86
Proxy-Connection: Keep-Alive
If-Modified-Since: Thu, 03 Feb 2011 21:32:11 GMT; length=346
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; [...snipped...])
Cookie: has_js=1; cmTPSet=Y
Proxy-Authorization: [...snipped...]
Host: www.[...snipped...].com

*** Reply #2:
HTTP/1.0 304 Not Modified
Date: Tue, 08 Feb 2011 03:08:16 GMT
Server: Apache/2.2.3 (Red Hat)
ETag: 17c801c-15a-49b677f912cc0
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
X-Cache: MISS from [...snipped...]
X-Cache-Lookup: MISS from [...snipped...]:3128
Via: 1.0 [...snipped...]:3128 (squid/2.6.STABLE21)
Proxy-Connection: keep-alive

Well, now the browser is sending If-Modified-Since: and Squid says
"Not Modified".  That kind of breaks the subsequent pages that load
from that site.  It seems to me that IE7 and IE8 (the two I tested
with) do not honor the Cache-Control header if they see an HTTP/1.0
response.

So, the question is:  What are the possible solutions that I can implement?

Things like changing the browser or asking the site to stop doing the
boost-gzip-cookie-test are not viable solutions.  I'm thinking about
an upgrade to 2.7 or 3.1, but that will take some time to plan and
test.  Plus, I'm not sure that an upgrade will fix the problem,
anyway.  Anyone know?

Ideally I'm hoping that there's some way to tell Squid not to modify
the server responses when the request asks for a
boost-gzip-cookie-test.html.  Is there?  Or maybe to insert a 

[squid-users] Re: Problem with HTTP 1.1 replies

2011-02-10 Thread Packet Racer
Well, in reply to my own message...

Somebody just pointed me to a setting in Internet Explorer called "Use
HTTP 1.1 through proxy connections".  It's under Internet Options >
Advanced, about halfway down the list of options, under the "HTTP 1.1
settings" section.

That fixed the problem for me and my users.

I'll still be upgrading to a newer version, but now I can take the
time to plan it properly.


On Thu, Feb 10, 2011 at 4:19 PM, Packet Racer mrpacketra...@gmail.com wrote:

 Hopefully this is the right list for this question:

 Currently running squid 2.6.STABLE21-3 (the RedHat distributed one),
 and having problems with a specific site that makes use of HTTP
 1.1.  The issue can be boiled down to this:

 The site loads a page and asks the browser NOT to cache it.  Then it
 asks the browser to reload it a 2nd and 3rd time.  The 2nd and 3rd
 load depend on the browser pulling fresh data from the server.  When
 it works, it looks like this:
 []


Re: [squid-users] Problem with HTTP 1.1 replies

2011-02-10 Thread Amos Jeffries

On 11/02/11 15:19, Packet Racer wrote:

Hopefully this is the right list for this question:

Currently running squid 2.6.STABLE21-3 (the RedHat distributed one),
and having problems with a specific site that makes use of HTTP
1.1.  The issue can be boiled down to this:

The site loads a page and asks the browser NOT to cache it.  Then it
asks the browser to reload it a 2nd and 3rd time.  The 2nd and 3rd
load depend on the browser pulling fresh data from the server.  When
it works, it looks like this:

*** Client request #1:
GET http://www.[...snipped...].com/boost-gzip-cookie-test.html HTTP/1.0
Accept: */*
Referer: http://www.[...snipped...].com/
Accept-Language: en-us
UA-CPU: x86
Connection: Keep-Alive
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; [...snipped...])
Cookie: has_js=1; cmTPSet=Y
Host: www.[...snipped...].com

*** Server reply #1 (headers only shown):
HTTP/1.1 200 OK
Date: Tue, 08 Feb 2011 03:48:57 GMT
Server: Apache/2.2.3 (Red Hat)
Last-Modified: Thu, 03 Feb 2011 21:32:11 GMT
ETag: 17c801c-15a-49b677f912cc0
Accept-Ranges: bytes
Content-Length: 460
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Sun, 19 Nov 1978 05:00:00 GMT
X-Header: Boost Citrus 1.8
Connection: close
Content-Type: text/html; charset=utf-8

*** Client request #2:
GET http://www.[...snipped...].com/boost-gzip-cookie-test.html HTTP/1.0
Accept: image/gif, image/x-xbitmap, [...snipped...], application/xaml+xml, */*
Accept-Language: en-us
UA-CPU: x86
Connection: Keep-Alive
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; [...snipped...])
Host: www.[...snipped...].com
Cookie: has_js=1; cmTPSet=Y

*** Server reply #2:
HTTP/1.1 200 OK
Date: Tue, 08 Feb 2011 03:48:57 GMT
Server: Apache/2.2.3 (Red Hat)
Last-Modified: Thu, 03 Feb 2011 21:32:11 GMT
ETag: 17c801c-15a-49b677f912cc0
Accept-Ranges: bytes
Content-Length: 460
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Sun, 19 Nov 1978 05:00:00 GMT
X-Header: Boost Citrus 1.8
Connection: close
Content-Type: text/html; charset=utf-8

Request and Reply #3 are exactly the same as #2.  As you can see, the
site depends on the browser honoring the Cache-Control header, which
is an HTTP 1.1 construct.

When traffic goes through Squid, however, what you get is this:

*** Client request #1:
GET http://www.[...snipped...].com/boost-gzip-cookie-test.html HTTP/1.0
Accept: */*
Referer: http://www.[...snipped...].com/
Accept-Language: en-us
UA-CPU: x86
Proxy-Connection: Keep-Alive
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; [...snipped...])
Cookie: has_js=1; cmTPSet=Y
Proxy-Authorization: [...snipped...]
Host: www.[...snipped...].com

*** Reply #1:
HTTP/1.0 200 OK
Date: Tue, 08 Feb 2011 03:08:16 GMT
Server: Apache/2.2.3 (Red Hat)
Last-Modified: Thu, 03 Feb 2011 21:32:11 GMT
ETag: 17c801c-15a-49b677f912cc0
Accept-Ranges: bytes
Content-Length: 346
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Sun, 19 Nov 1978 05:00:00 GMT
X-Header: Boost Citrus 1.8
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
X-Cache: MISS from [...snipped...]
X-Cache-Lookup: MISS from [...snipped...]:3128
Via: 1.0 [...sniped...]:3128 (squid/2.6.STABLE21)
Proxy-Connection: keep-alive

*** Client request #2:
GET http://www.[...snipped...].com/boost-gzip-cookie-test.html HTTP/1.0
Accept: image/gif, image/x-xbitmap, image/jpeg, [...snipped...],
application/xaml+xml, */*
Accept-Language: en-us
UA-CPU: x86
Proxy-Connection: Keep-Alive
If-Modified-Since: Thu, 03 Feb 2011 21:32:11 GMT; length=346
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; [...snipped...])
Cookie: has_js=1; cmTPSet=Y
Proxy-Authorization: [...snipped...]
Host: www.[...snipped...].com

*** Reply #2:
HTTP/1.0 304 Not Modified
Date: Tue, 08 Feb 2011 03:08:16 GMT
Server: Apache/2.2.3 (Red Hat)
ETag: 17c801c-15a-49b677f912cc0
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
X-Cache: MISS from [...snipped...]
X-Cache-Lookup: MISS from [...snipped...]:3128
Via: 1.0 [...snipped...]:3128 (squid/2.6.STABLE21)
Proxy-Connection: keep-alive

Well, now the browser is sending If-Modified-Since: and Squid says
"Not Modified".  That kind of breaks the subsequent pages that load


No. The browser is sending If-Modified-Since, Squid is passing this to 
the server. The server sends back Not Modified.


Squid is obeying the no-store (do not store the response) and the
no-cache (do not generate or alter the reply based on local proxy
cache storage).




from that site.  It seems to me that IE7 and IE8 (the two I tested
with) do not honor the Cache-Control header if they see an HTTP/1.0
response.


See above. The website test is relying on a specific, inefficient and
somewhat broken mode of HTTP being used. Its own server is configured to
use HTTP properly and generate more efficient replies.




So, the question is:  What 

Re: [squid-users] problem using squid as proxy server to load balance reverse-proxies

2011-02-10 Thread Amos Jeffries

On 11/02/11 09:00, Sri Rao wrote:

Hi,

I am trying to set up squid as an SSL proxy to load balance between
reverse-proxies.  I believe the config is right, but what is happening


What you have set up is a forward proxy load balancer which only permits
management and binary-over-HTTP tunneled traffic from its localhost
machine IP.



is that squid gets the CONNECT request and connects to the reverse
servers on the right port, but forwards the CONNECT request instead of
connecting to them as the originserver.  I am pasting the config as it
is right now.  I am using localhost as test reverse proxies just for
testing.  It also doesn't seem to fail over to the next peer when the
first one it selects returns an error (HTTP error code or
connection failure), even though I have retry_on_error set.


This would be an artifact of the special handling CONNECT requests have.

Your goal of having an SSL proxy directly opposes the use of CONNECT,
since CONNECT is a binary-over-HTTP tunnel.


I suggest going back to your first stated criterion, "setup squid as a
ssl proxy", and getting that going.


This means using the https_port directive (NOT http_port!) with a
server SSL certificate. Squid will then be an SSL proxy.

 * Problem 2 is then how to get browsers etc to send traffic to it.

Since your third criterion is to pass traffic to reverse proxies, it
implies that this is to be a front-end reverse-proxy itself.
 If that is correct, then set up the https_port with the reverse-proxy
accel options and do a standard reverse-proxy to two backends
configuration.
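
A sketch of that shape, where the certificate paths, site name, and
backend addresses are illustrative assumptions:

# front-end SSL reverse-proxy terminating TLS for two backends
https_port 443 accel cert=/etc/squid/cert.pem key=/etc/squid/key.pem \
    defaultsite=www.example.com
cache_peer 10.0.0.1 parent 8174 0 originserver round-robin name=backend1
cache_peer 10.0.0.2 parent 8174 0 originserver round-robin name=backend2
cache_peer_access backend1 allow all
cache_peer_access backend2 allow all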


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4


[squid-users] simplest way to block (and drop) 1 'user'(computer) using 1 specific 'URL' ??

2011-02-10 Thread Linda Walsh





I purchased a little toaster-sized HP home-server that I haven't fully made
use of, but that does have an annoying feature.  It's **constantly** sending
messages to a ms-server.  Maybe it's some sort of I'm alive pulse, but it's
annoyingly filling up my squid log, and always using up/interrupting
normal traffic in __minor__ amounts as it constantly does an HTTP version of
a ping that runs *almost* all the time.

Here's a snippet from a 'cooked' log format I use to give me a quick
view into what's going on w/squid:
  +0.19   182ms; ln=1579 (8.5K/8.4K)   TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
  +0.18   173ms; ln=1579 (8.9K/8.9K)   TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
  +0.17   164ms; ln=1579 (9.4K/9.3K)   TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
  +0.20   191ms; ln=1579 (8.1K/8.0K)   TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
  +0.15   145ms; ln=1579 (10.6K/10.5K) TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]

---

It just keeps going like this -- occasionally it will stop for a few minutes,
but most of the time it's doing these little several-K requests.

Is there an easy way in squid to say: if requester = 'home-server' and
request address = 'http://sqm.microsoft.com/sqm/Windows/sqmserver.dll',
then DROP the request (and issue nothing in the log)?


There are cruder methods of shutting it up. One time, since it is
going through the proxy server to get to the outside world, I just threw
in an ipchains rule to ignore it altogether.  Fast, but a bit crude.  I
don't want to cut off all internet access -- just that one constant
droning request that goes on and on... (filling logs, but most of
all, always reducing my full bandwidth).


What a pain in the butt!

Talk about products that 'phone home'... This one whines to home about 5
times/second!  LAME!


I currently have no other filtering going on in my squid files, so I'm
not really sure where to start.  Do I need to write an external helper
and filter all traffic through it?  That sounds like overkill -- and I'd
really rather not slow down traffic from other stations -- I already
get too many 'sorry, but your browser is configured to use a proxy which
is not responding' messages as it is -- and ***I'M THE ONLY
USER!!!***...  (Very sad when 1 user can overwhelm a proxy server
designed to handle hundreds, if not thousands, of users...  But that's a
question for another day -- like after I've pulled the latest source and
tried it to see if it is fixed... ;-))



Thanks!

Linda Walsh



Re: [squid-users] simplest way to block (and drop) 1 'user'(computer) using 1 specific 'URL' ??

2011-02-10 Thread Amos Jeffries

On 11/02/11 17:22, Linda Walsh wrote:





I purchased a little toaster-sized HP home-server that I haven't fully made
use of, but that does have an annoying feature. It's **constantly** sending
messages to a ms-server. Maybe it's some sort of I'm alive pulse, but it's
annoyingly filling up my squid log, and always using up/interrupting
normal traffic in __minor__ amounts as it constantly does an HTTP
version of a ping that runs *almost* all the time.

Here's a snippet from a 'cooked' log format I use to give me a quick
view into what's going on w/squid:
+0.19 182ms; ln=1579 (8.5K/8.4K) TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
+0.18 173ms; ln=1579 (8.9K/8.9K) TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
+0.17 164ms; ln=1579 (9.4K/9.3K) TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
+0.20 191ms; ln=1579 (8.1K/8.0K) TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
+0.15 145ms; ln=1579 (10.6K/10.5K) TCP_MISS/403 Home-Server [POST http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - HIER_DIRECT/sqm.microsoft.com text/html ]
---

It just keeps going like this -- occasionally it will stop for a few minutes,
but most of the time it's doing these little several-K requests.
Is there an easy way in squid to say: if requester = 'home-server' and
request address = 'http://sqm.microsoft.com/sqm/Windows/sqmserver.dll',
then DROP the request (and issue nothing in the log)?

There are cruder methods of shutting it up. One time, since it is
going through the proxy server to get to the outside world, I just threw
in an ipchains rule to ignore it altogether. Fast, but a bit crude. I
don't want to cut off all internet access -- just that one constant
droning request that goes on and on... (filling logs, but most of
all, always reducing my full bandwidth).

What a pain in the butt!

Talk about products that 'phone home'... This one whines to home about 5
times/second! LAME!

I currently have no other filtering going on in my squid files, so I'm
not really sure where to start. Do I need to write an external helper
and filter all traffic through it? That sounds like overkill -- and I'd
really rather not slow down traffic from other stations -- I already
get too many 'sorry, but your browser is configured to use a proxy which
is not responding' messages as it is -- and ***I'M THE ONLY
USER!!!***... (Very sad when 1 user can overwhelm a proxy server
designed to handle hundreds, if not thousands, of users... But that's a
question for another day -- like after I've pulled the latest source and
tried it to see if it is fixed... ;-))



That 403 is Squid or something upstream blocking the requests. So the
speed of calls is likely due to badly programmed retries.


You could block this in Squid with:
  acl SQM dstdomain sqm.microsoft.com
  http_access deny SQM

and prevent logging of its requests with
  access_log none SQM

But neither of those will help with the bandwidth consumption between 
Squid and the problem box. Likely only finding out the cause of the 
call-home and killing it will do that.


These may help with the latter:

http://www.neowin.net/forum/topic/439244-what-are-these-sqm-files/page__st__30__p__589093549#entry589093549

http://www.neowin.net/forum/topic/439244-what-are-these-sqm-files/page__st__30__p__588689642#entry588689642

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4


Re: [squid-users] (null):// instead of http://, what would cause this?

2011-02-10 Thread Amos Jeffries

On 11/02/11 09:38, Dean Weimer wrote:

I have a reverse proxy running 3.1.10, and noticed a few odd lines in the 
access log while searching them for some other info.  I was wondering if anyone 
knew what would cause some entries like these?  There are only 13 lines out of 
22,000+ requests to this server today, and I haven't heard any complaints from 
users, just thought the entries were odd.

1297353864.628  0 10.200.129.50 NONE/400 3030 GET (null)://...snip...  - 
NONE/- text/htm

The clients are on WAN connections of various speeds, and these could just be
caused by network errors on the WAN connections; I just thought I would
check and see if anyone else had seen these, and whether it's something I
should investigate further in case there is an application issue causing this.



Looks a lot like http://bugs.squid-cache.org/show_bug.cgi?id=2976

The URL scheme handling and display is a bit complex. I've been working
on un-twisting it for a while now, which hopefully will resolve this.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4


[squid-users] Re: simplest way to block (and drop) 1 'user'(computer) using 1 specific 'URL' ??

2011-02-10 Thread Linda W

Amos Jeffries wrote:
That 403 is Squid or something upstream blocking the requests. So the
speed of calls is likely due to badly programmed retries.



	Not squid -- I kept wondering why it would keep hammering month
after month on an addr that supposedly doesn't work -- unless it
really does, and the other end is programmed to return a 403 so it
looks like no information is being transferred, but the exact contents
could vary -- I just haven't been interested enough to find out.




You could block this in Squid with:
  acl SQM dstdomain sqm.microsoft.com
  http_access deny SQM

and prevent logging of its requests with
  access_log none SQM

But neither of those will help with the bandwidth consumption between 
Squid and the problem box. Likely only finding out the cause of the 
call-home and killing it will do that.

---

Will try the above.  Thanks!



These may help with the latter:


	Will check them out, but it's the out-of-domain bandwidth that is
scarce.  Inside, it's on a 1G switched network, so it's not really
noticeable.



Re: [squid-users] problem using squid as proxy server to load balance reverse-proxies

2011-02-10 Thread Sri Rao
Hi Amos,

Thanks for the quick reply!


 I am trying to set up squid as an SSL proxy to load balance between
 reverse-proxies.  I believe the config is right, but what is happening

 What you have set up is a forward proxy load balancer which only permits
 management and binary-over-HTTP tunneled traffic from its localhost machine
 IP.

That is actually what I want.  I want to do binary-over-HTTP from the
localhost to the reverse-proxy servers.  When the forward proxy tries
to connect to the origin server directly it does a tunnelConnect, but
even though I have set originserver for the cache_peers, it seems to
just forward the CONNECT instead of doing a tunnelConnect.  I thought
originserver should force squid to treat the cache_peers as if they
were web servers?


 is that squid gets the CONNECT request and connects to the reverse
 servers on the right port, but forwards the CONNECT request instead of
 connecting to them as the originserver.  I am pasting the config as it
 is right now.  I am using localhost as test reverse proxies just for
 testing.  It also doesn't seem to fail over to the next peer when the
 first one it selects returns an error (HTTP error code or
 connection failure), even though I have retry_on_error set.

 This would be an artifact of the special handling CONNECT requests have.

 Your goal of having an SSL proxy directly opposes the use of CONNECT,
 since CONNECT is a binary-over-HTTP tunnel.

 I suggest going back to your first stated criterion, "setup squid as a
 ssl proxy", and getting that going.

I would rather not have to maintain certs as I will have several of
these squid proxies.

 This means using the https_port directive (NOT http_port!) with a
 server SSL certificate. Squid will then be an SSL proxy.
  * Problem 2 is then how to get browsers etc to send traffic to it.

 Since your third criterion is to pass traffic to reverse proxies, it implies
 that this is to be a front-end reverse-proxy itself.
  If that is correct, then set up the https_port with the reverse-proxy accel
 options, and do a standard reverse-proxy to two backends configuration.

Thanks for the info...will definitely keep this in mind.

Sri