[squid-users] tunnelConnectTimeout(): tunnelState->servers is NULL

2012-07-23 Thread Dean Weimer
I have a Squid server that has been running for some time and all of a sudden 
started having problems. This server runs as both a forward and a reverse 
proxy on different ports.  The reverse proxy part seems to be responding fine, 
but the forward proxy has suddenly started logging errors in the cache.log file about 
tunnelState->servers being NULL.  The parent server appears to be running fine, and is serving 
requests for a few hundred clients that access it directly without issue.

Of course the full log message is exactly what the subject of this email 
message is.
tunnelConnectTimeout(): tunnelState->servers is NULL

The parent server is running 3.1.20 and this server is on 3.1.18; does anyone have 
any idea what would cause this type of behavior to start happening?

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] Block IP based lookups

2012-04-23 Thread Dean Weimer
-Original Message-

Is it possible to block all IP based lookups from the browser with squid acls?

If I assume you mean matching requests made to an IP address, such as
http://192.168.1.1/, instead of to a hostname like
http://www.example.com, the following works quite well.

# Match By IP Requests
acl BYIP dstdom_regex ^[0-9\.:]*$
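
A minimal sketch of how this ACL might then be applied (the placement of the deny rule is an assumption, not part of the original reply):

# Deny requests made directly to an IP address
http_access deny BYIP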

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] forward and reverse proxy with squid 3.2

2011-09-16 Thread Dean Weimer
 -Original Message-
 From: Erich Titl [mailto:erich.t...@think.ch]
 Sent: Friday, September 16, 2011 3:35 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] forward and reverse proxy with squid 3.2
 
 Hi Folks
 
 I need to replace my squid proxy running on Debian Lenny, because the version provided does not handle SSL.
 
 I managed, with some tweaks to the makefile (especially for the link phase), to compile 3.2.0.11; the configuration changes, though, appear to make it impossible to run a normal and a reverse proxy in the same instance.
 
 I copied most of the configuration files from the old installation, hoping they would not be too different.
 
 My new installation runs fine as a normal proxy; as soon as I include the reverse proxy configuration, everything is sent to the peer mentioned there.
 
 ####################################################
 # squid reverse proxy settings
 # content shamelessly adapted from
 # http://wiki.squid-cache.org/ConfigExamples/Reverse/SslWithWildcardCertifiate
 # Copyleft 2009 erich.t...@think.ch
 ####################################################
 
 http_port 80 accel
 
 # peer servicedesk
 cache_peer servicedesk.ruf.ch parent 80 0 no-query originserver name=servicedesk
 
 acl sites_server_1 dstdomain servicedesk.ruf.ch
 cache_peer_access servicedesk allow sites_server_1
 http_access allow sites_server_1
 ####################################################
 
 It appears that the cache_peer directive now takes precedence.
 
 cheers
 
 Erich

Erich,
I ran into this when switching to the 3.x branch from 2.x: you
need to answer on a second port for the forward proxy requests. This
setup works in 3.1.x; I haven't tried it in the 3.2.x versions, but I
believe it should work there as well.

http_port 80 accel
http_port 3128
# If using https on reverse proxy as well
https_port 443 accel cert=/usr/local/squid/etc/certs/chain.crt key=/usr/local/squid/etc/certs/cert.key options=NO_SSLv2 cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

Make sure to include the proper access list entries so that you don't
open the forward proxy to the world when allowing access to the reverse
proxy port; a sketch of such entries follows.  The server will answer on HTTP and HTTPS on ports 80 and
443 and direct those requests to the parent server; when connected to on port
3128 it will function as a standard forward proxy service for your
internal users.
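
A minimal sketch of such entries, assuming an internal client range of 10.0.0.0/8 (the range and ACL names are examples, not from my running config):

# Only internal clients may use the forward proxy port
acl localnet src 10.0.0.0/8
acl forward_port myport 3128
http_access allow forward_port localnet
http_access deny forward_port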

Dean


RE: [squid-users] forward and reverse proxy with squid 3.2

2011-09-16 Thread Dean Weimer
 -Original Message-
 From: Erich Titl [mailto:erich.t...@think.ch]
 Sent: Friday, September 16, 2011 8:28 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] forward and reverse proxy with squid 3.2
 
 Hi Dean
 
 at 16.09.2011 15:12, Dean Weimer wrote:
  -Original Message-
  From: Erich Titl [mailto:erich.t...@think.ch]
  Sent: Friday, September 16, 2011 3:35 AM
  To: squid-users@squid-cache.org
  Subject: [squid-users] forward and reverse proxy with squid 3.2
 
  Hi Folks
 
  I need to replace my squid proxy running on Debian Lenny, because the version provided does not handle SSL.
 
  I managed, with some tweaks to the makefile (especially for the link phase), to compile 3.2.0.11; the configuration changes, though, appear to make it impossible to run a normal and a reverse proxy in the same instance.
 
  I copied most of the configuration files from the old installation, hoping they would not be too different.
 
  My new installation runs fine as a normal proxy; as soon as I include the reverse proxy configuration, everything is sent to the peer mentioned there.
 
 
  ####################################################
  # squid reverse proxy settings
  # content shamelessly adapted from
  # http://wiki.squid-cache.org/ConfigExamples/Reverse/SslWithWildcardCertifiate
  # Copyleft 2009 erich.t...@think.ch
  ####################################################
 
  http_port 80 accel
 
  # peer servicedesk
  cache_peer servicedesk.ruf.ch parent 80 0 no-query originserver name=servicedesk
 
  acl sites_server_1 dstdomain servicedesk.ruf.ch
  cache_peer_access servicedesk allow sites_server_1
  http_access allow sites_server_1
 
  ####################################################
 
  It appears that the cache_peer directive now takes precedence.
 
  cheers
 
  Erich
 
  Erich,
  I ran into this when switching to the 3.x branch from 2.x, you
  need to answer on a second port for the forward proxy requests, this
  setup works in 3.1.x, I haven't tried it in 3.2.x versions, but I
  believe this should work in it as well.
 
  http_port 80 accel
  http_port 3128
  # If using https on reverse proxy as well
  https_port 443 accel cert=/usr/local/squid/etc/certs/chain.crt key=/usr/local/squid/etc/certs/cert.key options=NO_SSLv2 cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2
 
 I have a forward proxy defined on 8080 and it works well until I include the reverse proxy configuration. Then everything goes to the cache peer defined for that vhost. What does your cache peer look like?
 
 Thanks
 
 Erich


Perhaps it's the cache_peer_domain lines that you need. I have sanitized
these entries; I am actually using a vhost configuration with multiple
peers on port 80, and a single peer on HTTPS.

cache_peer 1.1.1.1 parent 80 0 proxy-only no-query originserver name=HTTPPEER
cache_peer_domain HTTPPEER www.domain.com
cache_peer 1.1.1.1 parent 443 0 ssl no-query originserver name=HTTPSPEER
cache_peer_domain HTTPSPEER www.domain.com

My forward proxy is also using a parent cache, which makes the ACLs and
rules likely quite a bit different, but I don't appear to have any
allow/deny rules for the parent peers used in the reverse proxy settings.
So it looks like cache_peer_domain is doing all the work in deciding
what goes to the parents via the reverse proxy function, and what goes
to the forward parent server.  The only ACLs and rules I have set up are
those allowing and denying access to the forward proxy port; an
equivalent cache_peer_access form is sketched below.
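
For reference, the same routing could likely be expressed with cache_peer_access instead of cache_peer_domain; a sketch using the sanitized names from above:

acl site_www dstdomain www.domain.com
cache_peer_access HTTPPEER allow site_www
cache_peer_access HTTPPEER deny all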


RE: [squid-users] Reverse Proxy and Externally Generated Wildcard SSL Certificates

2011-02-14 Thread Dean Weimer
John,
I believe what you need to do is export the certificates from the IIS 
servers; they will be saved in a .pfx file, which is the PKCS12 format.  
OpenSSL can convert these into the PEM format that Squid supports; these 
commands will give you the desired output.

Exports the Certificate:
openssl pkcs12 -in server.pfx -out server.crt -nodes -nokeys -clcerts

Exports the Private Key (Note will not be encrypted, store in safe place):
openssl pkcs12 -in server.pfx -out server.key -nodes -nocerts -clcerts

The openssl man page and the pkcs12 man page will have more information about 
these options if you need them.
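
To sanity-check the exported PEM files before pointing Squid at them, something along these lines should work (a sketch; file names as above):

openssl x509 -in server.crt -noout -subject -enddate
openssl rsa -in server.key -check -noout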

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: John Gardner [mailto:john.gard...@southtyneside.gov.uk]
 Sent: Sunday, February 13, 2011 2:13 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Reverse Proxy and Externally Generated Wildcard SSL
 Certificates
 
 Hi everyone.  I've got a query about running Squid as a Reverse Proxy that I
 hope someone can answer.
 
 Over the past year, I've been tasked with introducing several Squid servers
 into our organisation; most of them so far have been internal caching
 proxies, but I'm now at the stage where I need to implement a Reverse
 Proxy (RP) in our DMZ.
 
 We're going to offload the SSL onto the RP using a Wildcard SSL Certificate
 and during testing I used the advice here:
 http://wiki.squid-cache.org/ConfigExamples/Reverse/SslWithWildcardCertifiate.  This was
 great to test everything and worked well.  However, now I'm ready to put
 this into a Production environment and I have to deal with the fact that we
 are fundamentally a Windows house.
 
 They have already procured wildcard SSL certificates from Verisign, where
 the original CSR was generated on a Windows server, sent off to the CA
 (Verisign), and then the wildcard certificate was returned to us.  My question
 is quite simple: how do I import the wildcard certificate into openssl on the
 RP server?  All the examples I've seen online assume that you're generating
 the CSR on the proxy server itself, but I don't have that luxury unfortunately.
 
 I know this is more of an OpenSSL question rather than pure Squid question,
 I was just hoping that someone on the list has already done this and can give
 me some advice.
 
 Thanks in advance.
 
 John
 
 



RE: [squid-users] Reverse Proxy and Externally Generated Wildcard SSL Certificates

2011-02-14 Thread Dean Weimer
 -Original Message-
 From: John Gardner [mailto:john.gard...@southtyneside.gov.uk]
 Sent: Monday, February 14, 2011 8:25 AM
 To: Dean Weimer; squid-users@squid-cache.org
 Subject: RE: [squid-users] Reverse Proxy and Externally Generated
Wildcard
 SSL Certificates
 
 John,
  I believe what you need to do is export the certificates from the IIS
 servers; they will be saved in a .pfx file, which is the PKCS12 format.
 OpenSSL can convert these into the PEM format that Squid supports; these
 commands will give you the desired output.
 
 Exports the Certificate:
 openssl pkcs12 -in server.pfx -out server.crt -nodes -nokeys -clcerts
 
 Exports the Private Key (Note will not be encrypted, store in safe place):
 openssl pkcs12 -in server.pfx -out server.key -nodes -nocerts -clcerts
 
 The openssl man page and the pkcs12 man page will have more information
 about these options if you need them.
 
 Dean
 
 Thanks for the help, but I've just found out that the CSR (and therefore
 private key) were all generated from a Juniper VPN Appliance and so now all
 bets are off :-/
 
 Cheers
 

They may already be stored in PEM format then; the JunOS that runs on
most Juniper devices was originally derived from FreeBSD, and as such its
SSL implementation is likely based on OpenSSL (of course, that's just a
guess).  I haven't worked on any Juniper devices myself, so I am of no
help in figuring out how to export them.
If they were generated on the Juniper VPN appliance, is that device
already doing HTTPS offloading for you?  If it is, you might not get the
desired benefit from moving that to a Squid proxy server; perhaps just
placing the proxy between the VPN appliance and the backend web server
to utilize the cache would give you the desired outcome without needing
to move the SSL.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] (null):// instead of http://, what would cause this?

2011-02-11 Thread Dean Weimer
After converting the log timestamps from Unix epoch to date-time format, they did 
indeed line up with a reconfigure I issued to adjust some ACLs.  At least now I 
know I don't have an application issue to track down before it becomes a bigger 
problem.
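
For anyone needing the same conversion, a one-liner along these lines does the job (a sketch, assuming perl is available; the log path is an example):

perl -pe 's/^(\d+)(\.\d+)?/scalar localtime($1)/e' /usr/local/squid/var/logs/access.log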

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Thursday, February 10, 2011 11:10 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] (null):// instead of http://, what would cause 
 this?
 
 On 11/02/11 09:38, Dean Weimer wrote:
  I have a reverse proxy running 3.1.10, and noticed a few odd lines in the
 access log while searching them for some other info.  I was wondering if
 anyone knew what would cause some entries like these?  There are only 13
 lines out of 22,000+ requests to this server today, and I haven't heard any
 complaints from users, just thought the entries were odd.
 
 1297353864.628      0 10.200.129.50 NONE/400 3030 GET (null)://...snip... - NONE/- text/htm
 
 The clients are on WAN connections of various speeds, and these could just be caused by network errors on those connections; I just thought I would check and see if anyone else had seen these, and whether it's something I should investigate further in case there is an application issue causing this.
 
 
 Looks a lot like http://bugs.squid-cache.org/show_bug.cgi?id=2976
 
 The URL scheme handling and display is a bit complex. I've been working
 on un-twisting it for a while now. Which hopefully will resolve this.
 
 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.11
Beta testers wanted for 3.2.0.4


[squid-users] (null):// instead of http://, what would cause this?

2011-02-10 Thread Dean Weimer
I have a reverse proxy running 3.1.10, and noticed a few odd lines in the 
access log while searching them for some other info.  I was wondering if anyone 
knew what would cause some entries like these?  There are only 13 lines out of 
22,000+ requests to this server today, and I haven't heard any complaints from 
users; I just thought the entries were odd.

1297353864.628      0 10.200.129.50 NONE/400 3030 GET (null)://...snip... - NONE/- text/htm

The clients are on WAN connections of various speeds, and these could just be 
caused by network errors on those connections; I just thought I would check and 
see if anyone else had seen these, and whether it's something I should 
investigate further in case there is an application issue causing this.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] Reverse Proxy for multiple SSL sites on same server

2011-01-14 Thread Dean Weimer
I am struggling with a setup where I am adding a parent web server behind my 
reverse proxy that has multiple SSL sites running under the same name but on 
different ports.  The site on the default port 443 works, but I can't get it to 
forward to the parent for the second site running on port 444.  The server is 
already running several SSL sites on 443 using a UCC SSL cert with subject 
alternative names.

Here are the relevant parts of the setup:

https_port 10.50.20.10:443 accel cert=/usr/local/squid/etc/certs/server.crt key=/usr/local/squid/etc/certs/server.key defaultsite=www.mydomain.com vhost options=NO_SSLv2 cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2
https_port 10.50.20.10:444 accel cert=/usr/local/squid/etc/certs/server.crt key=/usr/local/squid/etc/certs/server.key defaultsite=secure.mydomain.com:444 vhost options=NO_SSLv2 cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

acl ssl_secure proto HTTPS
acl securesite444 url_regex -i ^https://secure.mydomain.com:444/
acl securesite url_regex -i ^https://secure.mydomain.com/
acl parentserver dst 10.20.10.62/32

http_access deny securesite444 !ssl_secure
http_access allow securesite444 ssl_secure
http_access deny securesite !ssl_secure
http_access allow securesite ssl_secure
http_access allow parentserver ssl_secure
http_access deny ssl_secure

cache_peer 10.20.10.62 parent 444 0 ssl no-query originserver name=parent444 sslcapath=/usr/local/share/certs sslflags=DONT_VERIFY_PEER
cache_peer_domain parent444 secure.mydomain.com
cache_peer_access parent444 allow securesite444 ssl_secure

cache_peer 10.20.10.62 parent 443 0 ssl no-query originserver name=parent sslcapath=/usr/local/share/certs sslflags=DONT_VERIFY_PEER
cache_peer_domain parent secure.mydomain.com
cache_peer_access parent allow securesite ssl_secure


The logs show that both SSL listening ports were started and both parents were 
configured; however, when accessing https://secure.mydomain.com:444/ it reports 
that it was unable to select a source.

2011/01/14 13:49:51| Accepting HTTPS connections at 10.50.20.10:443, FD 71.
2011/01/14 13:49:51| Accepting HTTPS connections at 10.50.20.10:444, FD 72.
2011/01/14 13:49:51| Configuring Parent 10.20.10.62/443/0
2011/01/14 13:49:51| Configuring Parent 10.20.10.62/444/0
2011/01/14 13:49:51| Ready to serve requests.
-BEGIN SSL SESSION PARAMETERS-
MIGMAgEBAgIDAQQCAC8EIBe26zUEsTBKHRt+Bvw3c9j5XNAArlUDi0Zq6qSncolM
BDCuSmhFVdKHBuflZ2nY/N1UPGY8syDnGlUyDEIQdwFdMveOyawuMJmqeVePI2NI
eKOhBgIETTCo5aIEAgIBLKQCBACmGQQXb3JzY2hlbG5oci5vcnNjaGVsbi5jb20=
-END SSL SESSION PARAMETERS-
2011/01/14 13:49:57| Failed to select source for 'https://secure.mydomain.com:444/'
2011/01/14 13:49:57|   always_direct = 0
2011/01/14 13:49:57|    never_direct = 0
2011/01/14 13:49:57|        timedout = 0

Does anyone have any idea what I am missing in the parent configuration or 
access rule list that is not allowing the reverse proxy to find and use the 
parent server?

Thanks,
 Dean Weimer


RE: [squid-users] RE: RE : [squid-users] [Squid 3.1.9] SSL Reverse PROXY - Insecure Renegotiation Supported

2010-11-16 Thread Dean Weimer
Hi Amos,

Glad to hear from you; I have already tried and retried this one, but no change... 
this is freaky and I'm tired :)

I will continue tomorrow; I think I need to find a guide to compile squid with 
non-system ssl libraries/headers.

Otherwise, is there a way to know which openssl squid was compiled with? 
Because every time, squid will run correctly in SSL mode... :-/

Many thanks,

Sebastian

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Monday, November 15, 2010 11:55 PM
To: Sébastien WENSKE
Cc: Dean Weimer; squid-users@squid-cache.org
Subject: RE: [squid-users] RE: RE : [squid-users] [Squid 3.1.9] SSL Reverse 
PROXY - Insecure Renegotiation Supported

On Mon, 15 Nov 2010 21:33:40 +0000, Sébastien WENSKE sebast...@wenske.fr wrote:
I think this should be
  --with-openssl=/usr/src/openssl/openssl-1.0.0a/

 
 I'm lost ... I need to fix this issue before implementing this in my 
 company ...


Sébastien,

If it helps, my system had openssl installed with the following options.

./config --prefix=/usr/local --openssldir=/usr/local/etc/ssl -fPIC shared
make
make install

Squid had the following options for enabling openssl

--enable-ssl --with-openssl=/usr/local

In your squid source directory, look for the config.log Amos mentioned; in 
it, the following lines should indicate which path it found your openssl 
libraries under.

configure:26112: checking openssl/err.h usability
configure:26129: g++ -c -g -O2 -I/usr/local/include  conftest.cpp >&5
configure:26136: $? = 0
configure:26150: result: yes
configure:26154: checking openssl/err.h presence
configure:26169: g++ -E -I/usr/local/include  conftest.cpp
configure:26176: $? = 0
configure:26190: result: yes
configure:26223: checking for openssl/err.h
configure:26232: result: yes
configure:26112: checking openssl/md5.h usability
configure:26129: g++ -c -g -O2 -I/usr/local/include  conftest.cpp >&5
configure:26136: $? = 0
configure:26150: result: yes
configure:26154: checking openssl/md5.h presence
configure:26169: g++ -E -I/usr/local/include  conftest.cpp
configure:26176: $? = 0
configure:26190: result: yes
configure:26223: checking for openssl/md5.h
configure:26232: result: yes
configure:26112: checking openssl/ssl.h usability
configure:26129: g++ -c -g -O2 -I/usr/local/include  conftest.cpp >&5
configure:26136: $? = 0
configure:26150: result: yes
configure:26154: checking openssl/ssl.h presence
configure:26169: g++ -E -I/usr/local/include  conftest.cpp
configure:26176: $? = 0
configure:26190: result: yes
configure:26223: checking for openssl/ssl.h
configure:26232: result: yes
configure:26112: checking openssl/x509v3.h usability
configure:26129: g++ -c -g -O2 -I/usr/local/include  conftest.cpp >&5
configure:26136: $? = 0
configure:26150: result: yes
configure:26154: checking openssl/x509v3.h presence
configure:26169: g++ -E -I/usr/local/include  conftest.cpp
configure:26176: $? = 0
configure:26190: result: yes
configure:26223: checking for openssl/x509v3.h
configure:26232: result: yes

From examining these paths on mine, and looking under the source build 
directory for openssl-1.0.0a, it looks like Amos is indeed correct that the 
path for your system should be --with-openssl=/usr/src/openssl/openssl-1.0.0a.  
Also verify that /usr/src/openssl/openssl-1.0.0a/include/openssl does indeed 
exist on your system and that it contains the *.h files shown in the config.log 
output listed above (they should actually be linked files under the source tree, 
but that shouldn't matter).

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] [Squid 3.1.9] SSL Reverse PROXY - Insecure Renegotiation Supported

2010-11-15 Thread Dean Weimer
 -Original Message-
 From: Sébastien WENSKE [mailto:sebast...@wenske.fr]
 Sent: Monday, November 15, 2010 8:44 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] [Squid 3.1.9] SSL Reverse PROXY - Insecure
 Renegotiation Supported
 
 Hello guys,
 
 I have set up a squid as SSL reverse proxy, it works very fine.
 
 I have checked SSL security against Qualys and they report me that the
 server is vulnerable to MITM attacks because it supports insecured
 renegotiation
 
 
 There is my SSL relating configuration:
 
 https_port xx.xx.xx.xx:443 cert=/etc/squid/ssl/RapidSSL_xxx.xxx.xx.crt key=/etc/squid/ssl/RapidSSL_xxx.xxx.xx.key options=NO_SSLv2 cipher=RSA:HIGH:!eNULL:!aNULL:!LOW:!RC4+RSA:!RC2+RSA:!EXP:!ADH accel ignore-cc defaultsite=xxx..xx vhost
 [...]
 cache_peer 10.x.x.x parent 80 0 front-end-https=on name=sw01 no-query originserver default login=PASS no-digest
 [...]
 ssl_unclean_shutdown on
 [...]
 
 
 Is it openssl related or squid configuration?
 
 
 Many Thanks,
 
 Sebastian

I have squid compiled from source against OpenSSL 1.0.0a, with the following 
options set:

https_port x.x.x.x:443 accel cert=xxx.crt key=xxx.key defaultsite=xxx..xxx vhost options=NO_SSLv2 cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2
sslproxy_options NO_SSLv2
sslproxy_cipher ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

It passes the entire test from our PCI (Payment Card Industry) site 
certification scans.  The options and ciphers are set both on the https_port 
line and on the individual sslproxy lines; I'm not sure whether both, or only 
one, are required.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



RE: [squid-users] RE: RE : [squid-users] [Squid 3.1.9] SSL Reverse PROXY - Insecure Renegotiation Supported

2010-11-15 Thread Dean Weimer
 -Original Message-
 From: Sébastien WENSKE [mailto:sebast...@wenske.fr]
 Sent: Monday, November 15, 2010 11:29 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] RE: RE : [squid-users] [Squid 3.1.9] SSL Reverse PROXY
 - Insecure Renegotiation Supported
 
 Thanks Dean,
 
 I have tried to compile with openssl 1.0.0a, but I get the same result... even with the sslproxy_ directives.
 
 Can you check your server on https://www.ssllabs.com/ssldb/index.html, just to see?
 
 In my case:
 
 browser --- HTTPS ---> reverse proxy (squid 3.1.9) ---> HTTP ---> OWA 2010 (IIS 7.5)
 
 Maybe I missed something; how can I see which version of openssl is used in squid?


Here is the information I got back, minus the certificate section; the overall 
score was a 91.  When you compile with openssl, make sure to use 
--with-openssl=[DIR] to specify your path, so that you hit the version 
you installed and not the local system libraries, as they may differ.  Though 
it would be best to update the local system libraries as well, if possible.

Protocols
TLS 1.2                     No
TLS 1.1                     No
TLS 1.0                     Yes
SSL 3.0                     Yes
SSL 2.0+ Upgrade Support    Yes
SSL 2.0                     No


Cipher Suites (sorted; server has no preference)
TLS_RSA_WITH_IDEA_CBC_SHA (0x7)             128
TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)         128
TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (0x41)    128
TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (0x84)    128
TLS_RSA_WITH_SEED_CBC_SHA (0x96)            128
TLS_RSA_WITH_3DES_EDE_CBC_SHA (0xa)         168
TLS_RSA_WITH_AES_256_CBC_SHA (0x35)         256


Miscellaneous
Test date                   Mon Nov 15 18:49:14 UTC 2010
Test duration               102.430 seconds
Server signature            Microsoft-IIS/6.0
Session resumption          Yes
Renegotiation               Secure Renegotiation Supported
Strict Transport Security   No
TLS Version Tolerance       0x0304: 0x301; 0x0399: 0x301; 0x0499: fail
PCI compliant               Yes
FIPS-ready                  No

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] ACL problem, can not get never_direct to work.

2010-11-11 Thread Dean Weimer
I think I am going nuts, because I can't see what I am doing wrong here.  I am 
trying to send a group of domains through a parent proxy, because the proxy 
forwarding them doesn't have direct access to the websites.  These ACL lines are 
before any others in the configuration, but the domains are still trying to go 
direct.

# The Parent Configuration
cache_peer 10.50.20.6 parent 8080 8181 name=PROXY3 no-query no-digest

#The ACL lines
acl InternalDNS dstdomain /usr/local/squid/etc/internal.dns.acl

## Put this in once to verify the above ACL was actually working for the domains
## http_access deny InternalDNS
## With the above uncommented, I got access denied as expected

## Here is where I am doing something wrong, that I cannot figure out
never_direct allow InternalDNS
always_direct allow !InternalDNS
cache_peer_access PROXY3 allow InternalDNS
cache_peer_access PROXY3 deny all


All sites in the ACL still attempt to go direct instead of forwarding to the 
parent

squid -k parse shows no errors

squid -k reconfigure was run; output from the cache.log shows the parent was 
configured:
2010/11/11 16:43:04| Configuring Parent 10.50.20.6/8080/8181
2010/11/11 16:43:04| Loaded Icons.
2010/11/11 16:43:04| Ready to serve requests.

No errors are present after this in the cache.log, but the access.log still 
shows the sites going direct:
1289494760.992   5408 10.100.10.9 TCP_MISS/000 0 GET http://www.orscheln.com/ - DIRECT/www.orscheln.com -

When I had the http_access deny line in to verify the domains were correctly 
being seen by the ACL:
1289493703.745      0 10.100.10.9 TCP_DENIED/403 2540 GET http://www.orscheln.com/ - NONE/- text/html

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] client_side_request.cc messages in cache.log

2010-11-04 Thread Dean Weimer
I just set up a new site through my reverse proxy running Squid 3.1.9, and 
though it's working fine, I am receiving the following message every time a URL 
on the new site is accessed.

2010/11/04 10:39:32| client_side_request.cc(1047) clientRedirectDone: redirecting body_pipe 0x8016a1e38*1 from request 0x802637800 to 0x802242000

The URL in question is an HTTPS URL, and is passed through a self-written URL 
rewrite program (written in Python); I have verified that the processes are not 
crashing or causing any internal errors when rewriting this URL.  The 
application is a vendor-provided ASP.NET application running on IIS 6.0.  So 
far it's only available to internal users for testing, so there isn't a heavy 
load for this URL on the proxy yet.  There isn't any perceivable difference in 
performance between the reverse proxy and accessing the site directly (though I 
wouldn't expect to see the performance advantages of Squid with the current 
load on the backend server being next to nothing at this point), so whatever is 
causing the message doesn't seem to be affecting performance.

I am concerned that this message may be a sign of a more serious problem once 
the server is placed under a larger load.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



[squid-users] forward and reverse proxy in 3.1.x https forward proxy failing

2010-11-01 Thread Dean Weimer
I had an older machine, still running 3.0.STABLE12, that was functioning as a 
forward and reverse proxy using port 80 for both, and as a reverse proxy for 
one site on port 443.  The machine sits in a DMZ; the forward proxy only 
directs traffic to web sites for machines connected through WAN connections, 
and it functions as a reverse proxy for those machines when connecting to a 
couple of internal sites.  This machine had a hardware failure last night, and 
I was forced to put in place the newer machine, which already had the software 
installed but wasn't configured or tested yet.

The problem I am having is that this machine, running squid 3.1.9, functions 
fine as both forward and reverse proxy for HTTP websites, and is working for 
the reverse HTTPS site, though I had to use the sslproxy_cert_error ACL method 
to bypass a cert error; even though the cert is valid, it's not accepting it.  
That's a minor problem, though, as it's functioning.  The more pressing problem 
is that the HTTPS forward proxy is not working; the logs show an error every 
time, stating a CONNECT method was received on an accelerator port.

2010/11/01 12:26:43| clientProcessRequest: Invalid Request
2010/11/01 12:26:44| WARNING: CONNECT method received on http Accelerator port 80
2010/11/01 12:26:44| WARNING: for request: CONNECT armmf.adobe.com:443 HTTP/1.0
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1.1.4322)
Host: armmf.adobe.com
Content-Length: 0
Proxy-Connection: Keep-Alive
Pragma: no-cache

Is using the same port for both forward HTTP & HTTPS not allowed anymore while 
also using it for a reverse proxy?

I tried adding the new allow-direct option to my http_port line with no change 
in behavior.

Current line is:
http_port 10.40.1.254:80 accel vhost allow-direct

Anyone have any ideas as to what I am doing wrong here?


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co
 Phone: (660) 269-3448
 Fax: (660) 269-3950




RE: [squid-users] forward and reverse proxy in 3.1.x https forward proxy failing

2010-11-01 Thread Dean Weimer
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Monday, November 01, 2010 3:57 PM
 To: Dean Weimer
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] forward and reverse proxy in 3.1.x https forward
 proxy failing
 
 On Mon, 1 Nov 2010 12:41:44 -0500, Dean Weimer dwei...@orscheln.com
 wrote:
  I had an older machine, still running 3.0.STABLE12, that was functioning as a forward and reverse proxy using port 80 for both, and as a reverse proxy for one site on port 443.  The machine sits in a DMZ; the forward proxy only directs traffic to web sites for machines connected through WAN connections, and functions as a reverse proxy for those machines when connecting to a couple of internal sites.  This machine had a hardware failure last night and I was forced to put in place the newer machine, which already had the software installed but wasn't configured or tested yet.
 
  The problem I am having is that this machine running squid 3.1.9 functions fine as both forward and reverse for http websites, and is working for the reverse HTTPS site, though I had to use the sslproxy_cert_error acl method to bypass a cert error; even though the cert is valid, it's not accepting it.  That's a minor problem though, as it's functioning.  The more pressing problem is that HTTPS forward proxy is not working; the logs show an error every time stating a CONNECT method was received on an accelerator port.
 
  2010/11/01 12:26:43| clientProcessRequest: Invalid Request
  2010/11/01 12:26:44| WARNING: CONNECT method received on http Accelerator port 80
  2010/11/01 12:26:44| WARNING: for request: CONNECT armmf.adobe.com:443 HTTP/1.0
  User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1.1.4322)
  Host: armmf.adobe.com
  Content-Length: 0
  Proxy-Connection: Keep-Alive
  Pragma: no-cache
 
  Is using the same port for both forward HTTP & HTTPS not allowed while using it for a reverse proxy anymore?
 
 It's never been allowed. The ability in older Squid was a bug.
 You will need a separate http_port line for the two modes if you want
 CONNECT tunnels.
 
 It's a good idea to keep each of the four modes (forward, reverse,
 intercept and transparent) on separate http_port. From 3.1 onwards this is
 being enforced where possible.
 
 Amos

Thanks for the reply, Amos.  I had come to that conclusion myself, about it not 
working anyway; I didn't realize it was a bug that allowed it in the old version, 
though.  I had already configured an additional port and verified that it worked 
shortly after sending the first post.  The majority of our PCs' browsers are set 
to use a configuration script, and that has been corrected with the new port.  
We have one application that has it in an INI file, which will be delivered in 
our nightly polling process.  Now we just have to find the machines that are 
incorrectly set with a manual proxy setting and have them updated.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] One slow Website Through Proxy

2010-09-23 Thread Dean Weimer
Thanks Amos; guess I learned something simple that I should have already known: 
when troubleshooting these things, always capture packets on both sides of 
Squid.  I was only looking at the data between the client PC and Squid.  Had I 
looked at the packets on the other side of Squid, I more than likely would have 
caught this one.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Wednesday, September 22, 2010 10:31 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] One slow Website Through Proxy
 
 On Wed, 22 Sep 2010 16:00:32 -0400, Chad Naugle
 chad.nau...@travimp.com
 wrote:
 I am not sure what is causing the issue, but in my own test, IE8 performed SLOOOOW by far (using the PROD proxy), where under Firefox 3.5.13 (using my DEV proxy) the site was almost instantly available while IE8 was STILL loading the same page.  After the first load, my PROD proxy under IE8 loaded considerably faster, but not anywhere close to as fast as with Firefox 3.5.13 on the first attempt.
 
 
  -
  Chad E. Naugle
  Tech Support II, x. 7981
  Travel Impressions, Ltd.
 
 
 
 >>> Dean Weimer dwei...@orscheln.com 9/22/2010 3:13 PM >>>
  I am running squid 3.1.8, and have one website that pauses for about 1 to 2 minutes before loading.  The website is www.pb.com (Pitney Bowes).  There are no errors logged in the cache.log file, and nothing unusual in the access.log file.  I have even done network packet captures and don't see anything unusual.  The website responds fine when bypassing the proxy, and every other website appears to be fine through the proxy server.
 
  I have tested with both IE and Firefox, using my default wpad.dat script
  with auto detect and manually specifying the proxy server with no
 change.
  And even tried turning HTTP/1.1 through proxy servers on and off at the
  browser, nothing seems to affect its behavior.
 
  Can any of you confirm whether or not this website is slow through your
  setups, or have any idea what could be causing this issue?
 
 
 The www.pb.com domain times out while resolving AAAA DNS records instead
 of returning an NXDOMAIN or SERVFAIL response.  The default DNS timeout is 2
 minutes, after which Squid will use the A results to fetch the page.
 
 Amos


[squid-users] One slow Website Through Proxy

2010-09-22 Thread Dean Weimer
I am running squid 3.1.8, and have one website that pauses for about 1 to 2 
minutes before loading.  The website is www.pb.com (Pitney Bowes).  There are no 
errors logged in the cache.log file, and nothing unusual in the access.log 
file.  I have even done network packet captures and don't see anything unusual.  
The website responds fine when bypassing the proxy, and every other website 
appears to be fine through the proxy server.

I have tested with both IE and Firefox, using my default wpad.dat script with 
auto-detect and manually specifying the proxy server, with no change.  I even 
tried turning HTTP/1.1 through proxy servers on and off in the browsers; nothing 
seems to affect its behavior.

Can any of you confirm whether or not this website is slow through your setups, 
or have any idea what could be causing this issue? 

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co
 Phone: (660) 269-3448
 Fax: (660) 269-3950



[squid-users] WCCP and parent authentication

2010-08-17 Thread Dean Weimer
I know that when using squid as an intercepting proxy it can't do authentication, 
since the clients don't know it's there; but do any of you know whether you can 
use it with a parent proxy that requires authentication?

The specific scenario I am considering is Squid in a DMZ, with WCCPv2 used in 
conjunction with a Cisco ASA 5520 firewall and an external (Websense filtering) 
proxy that requires authentication; both NTLM and basic authentication are 
supported.

Clients
   |
Cisco ASA5520 --WCCPv2--> Squid 3.1.6 (in DMZ) -- Secondary Internet Connection -- Parent Proxy Service
   |
Internet

We are currently using auto-detect, but we continually run into applications 
that don't recognize auto-detect, or sometimes don't even have the ability to 
read a configuration script.  I am trying to come up with a way to alleviate 
the users' issues without losing our local cache, while keeping the HR and 
Legal departments happy by continuing to filter websites with content that 
some could find offensive, as well as blocking unsafe (malware/spyware) 
websites.
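
If fixed credentials were acceptable, the parent line might look something like this (a hypothetical sketch; the host name and credentials are placeholders, and a single shared account would of course weaken per-user filtering):

cache_peer proxy.example.com parent 8080 0 no-query default login=username:password
never_direct allow all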


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] Squid 3.1.5.1 --disable-ipv6 possibly not working?

2010-08-03 Thread Dean Weimer
The base system still has IPv6 support; however, some of the BIND DNS servers I 
am using do not, which causes a server failure when attempting to do an IPv6 
AAAA name resolution request.  This was causing some problems with configuring 
a parent server by DNS name on some other systems that are now in production; 
disabling IPv6 in squid fixed those problems.  I figured 3.1.6 would be out 
before I was ready to put this system into production use, and thought doing its 
configuration and testing with 3.1.5.1 wouldn't hurt until then.  Guess I 
could have waited one more day to start testing and I wouldn't have run into 
this problem; 3.1.6 is compiling on this system now.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Monday, August 02, 2010 6:51 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.1.5.1 --disable-ipv6 possibly not working?
 
 On Mon, 2 Aug 2010 15:25:49 -0500, Dean Weimer dwei...@orscheln.com
 wrote:
  I just built a new proxy server running FreeBSD 7.3 and Squid 3.1.5.1
  compile with the following options.
 
 snip
 
 Yes the 3.1.5.1 package has some IPv6 bugs in IPv4-only systems. Thus the
 .1 (beta status).
 These have been resolved to the best of my knowledge in the followup 3.1.6
 package which is available now.
 
 If you were using --disable-ipv6 for reasons of custom kernel builds with
 stack customization or IPv6 being disabled in the system and failovers not
 working, those problems have also fixed in the 3.1.6 package.
 
 Amos



[squid-users] Squid 3.1.5.1 --disable-ipv6 possibly not working?

2010-08-02 Thread Dean Weimer
I just built a new proxy server running FreeBSD 7.3 and Squid 3.1.5.1, compiled 
with the following options.

./configure \
--prefix=/usr/local/squid \
--enable-pthreads \
--enable-ssl \
--with-openssl=/usr/local \
--enable-async-io \
--enable-underscores \
--enable-storeio=ufs,aufs \
--enable-delay-pools \
--disable-ipv6

After launching it, I could not get to any websites; I just received the squid 
error (22) Invalid Argument.  No errors were logged in the cache.log, and the 
access log only showed a normal entry for a MISS request; below is one of the 
examples.

1280795483.002    184 10.100.10.3 TCP_MISS/503 3676 GET http://www.yahoo.com/ - DIRECT/www.yahoo.com text/html

While trying to find the problem, I also noticed the following output in the 
cache.log.

2010/08/02 21:12:50| Accepting ICP messages at [::]:8181, FD 10.
2010/08/02 21:12:50| Accepting SNMP messages on [::]:3401, FD 12.

So I used squid -v to verify that I did indeed compile it with --disable-ipv6.

/usr/local/squid/sbin/squid -v
Squid Cache: Version 3.1.5.1
configure options:  '--prefix=/usr/local/squid' '--enable-pthreads' '--enable-ssl' '--with-openssl=/usr/local' '--enable-async-io' '--enable-underscores' '--enable-storeio=ufs,aufs' '--enable-delay-pools' '--disable-ipv6' --with-squid=/usr/local/squid-3.1.5.1 --enable-ltdl-convenience

Sure enough, it was there.  I added the configuration option tcp_outgoing_address, 
set it to my IPv4 address, and everything started working.  Am I correct in 
thinking that there is something broken in the 3.1.5.1 build with 
--disable-ipv6, or am I missing something else here?
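
For reference, the workaround was a single line of this form (the address shown is an example, not the real one):

tcp_outgoing_address 10.100.10.1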
 

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



RE: [squid-users] cachemanager

2010-06-24 Thread Dean Weimer
 -Original Message-
 From: Philippe Dhont [mailto:philippe.dh...@gems-group.com]
 Sent: Thursday, June 24, 2010 9:28 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] cachemanager
 
 Hi,
 
 I installed cachemanager and when I go to the URL I have to fill in a manager name and a password.
 The password I know (it is defined in my squid.conf), but I don't know about the manager name.
 I tried several names but it's not working.
 How do I know what name to use?
 
 Thnx, Ph.

Anything you want; it just uses the name for logging.
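
The password itself is set in squid.conf with the cachemgr_passwd directive, along these lines (a sketch; 'secret' is a placeholder):

cachemgr_passwd secret all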



[squid-users] ctx: enter level

2010-06-08 Thread Dean Weimer
I have 2 proxy servers that I have updated to 3.1.4, and I have since been seeing 
errors in the cache.log of the following form:

ctx: enter level # '?'

# is replaced with a number, which appears to have started at zero and increments 
each time the error occurs.
? is replaced with varying text; sometimes it's nothing, sometimes it's the 
requested page, and sometimes it's strange characters like Ø.

These servers are both outbound web proxies functioning as children of a hosted 
web-filtering proxy server.

I upgraded from 3.1.1, compiled with the same options and running the same 
configuration files; the error was not occurring prior to the upgrade.

Compiled options:
./configure \
--prefix=/usr/local/squid \
--enable-pthreads \
--enable-ssl \
--with-openssl=/usr/local \
--enable-async-io \
--enable-underscores \
--enable-storeio=ufs,aufs \
--enable-delay-pools \
--disable-ipv6 \

Both systems are running FreeBSD 7.1.

Any ideas? 

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



RE: [squid-users] Compile Error on FreeBSD 8.0 with Squid 3.1.2 & 3.1.3

2010-06-08 Thread Dean Weimer
As of 3.1.4, this compiles fine on FreeBSD 8.0 and 7.1.  However, the FreeBSD 
7.2 system I have still has this problem with 3.1.4.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Tuesday, May 04, 2010 6:04 PM
 To: Kinkie
 Cc: Dean Weimer; squid-users@squid-cache.org; Squid Developers
 Subject: Re: [squid-users] Compile Error on FreeBSD 8.0 with Squid 3.1.2 & 3.1.3
 
 On Tue, 4 May 2010 17:53:31 +0200, Kinkie gkin...@gmail.com wrote:
 It's already fixed in trunk.
 Amos, please import changes in revnos 10428 & 10431. Only impacts
 systems running bdb-4 but not 1.85.
 
 10428 is already in and not related to dbh.
 
 Did you mean 10432 which reverts a portion of 10431?
 
 Amos
 
 
 Kinkie
 
  On Tue, May 4, 2010 at 5:44 PM, Dean Weimer dwei...@orscheln.com
 wrote:
  I have run into the following compile error on both squid 3.1.2 and
  squid 3.1.3 on FreeBSD 8.0 using these options for
  ./configure \
   --prefix=/usr/local/squid \
   --enable-pthreads \
   --enable-ssl \
   --with-openssl=/usr/local \
   --enable-async-io \
   --enable-underscores \
   --enable-storeio=ufs,aufs \
   --enable-delay-pools \
   --disable-ipv6
  Squid 3.1.1 compiles fine on this system; has anyone else run into this
  issue, or have any ideas as to the cause?
 
  Making all in session
  gcc -DHAVE_CONFIG_H  -I../../.. -I../../../include -I../../../src -I../../../include  -I.   -I/usr/local/include -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations -Wcomments -Werror -D_REENTRANT -Wall -g -O2 -MT squid_session.o -MD -MP -MF .deps/squid_session.Tpo -c -o squid_session.o squid_session.c
  cc1: warnings being treated as errors
  squid_session.c: In function 'init_db':
  squid_session.c:62: warning: implicit declaration of function 'dbopen'
  squid_session.c:62: warning: assignment makes pointer from integer without a cast
  squid_session.c: In function 'shutdown_db':
  squid_session.c:71: error: too few arguments to function 'db->close'
  squid_session.c: In function 'session_active':
  squid_session.c:81: warning: passing argument 2 of 'db->get' from incompatible pointer type
  squid_session.c:81: error: too few arguments to function 'db->get'
  squid_session.c:85: warning: passing argument 2 of 'db->del' from incompatible pointer type
  squid_session.c:85: error: too few arguments to function 'db->del'
  squid_session.c: In function 'session_login':
  squid_session.c:103: warning: passing argument 2 of 'db->put' from incompatible pointer type
  squid_session.c:103: error: too few arguments to function 'db->put'
  squid_session.c: In function 'session_logout':
  squid_session.c:111: warning: passing argument 2 of 'db->del' from incompatible pointer type
  squid_session.c:111: error: too few arguments to function 'db->del'
  *** Error code 1
 
  Stop in /usr/local/squid-3.1.3/helpers/external_acl/session.
  *** Error code 1
 
  Stop in /usr/local/squid-3.1.3/helpers/external_acl.
  *** Error code 1
 
  Stop in /usr/local/squid-3.1.3/helpers.
  *** Error code 1
 
  Stop in /usr/local/squid-3.1.3.
 
  Thanks,
   Dean Weimer
   Network Administrator
   Orscheln Management Co
 
 


RE: [squid-users] ctx: enter level

2010-06-08 Thread Dean Weimer
Most of the time, another one of them; in the case below, it seems to have lost 
the connection to the parent proxy.  Here's the full log since the last 
reconfigure, which was done earlier today simply to add a host to a bypass filter list.

2010/06/08 11:01:36| Ready to serve requests.
2010/06/08 12:50:15| ctx: enter level 620: 'lost'
2010/06/08 12:50:15| ctx: enter level 621: 'lost'
2010/06/08 12:50:15| ctx: enter level 622: 'lost'
2010/06/08 12:50:15| ctx: enter level 623: 'lost'
2010/06/08 12:50:15| ctx: enter level 624: 'lost'
2010/06/08 12:50:15| ctx: enter level 625: 'lost'
2010/06/08 12:50:15| ctx: enter level 626: 'lost'
2010/06/08 12:50:15| ctx: enter level 627: 'lost'
2010/06/08 12:50:15| ctx: enter level 628: 'lost'
2010/06/08 12:50:15| ctx: enter level 629: 'lost'
2010/06/08 12:50:15| ctx: enter level 630: 'lost'
2010/06/08 12:50:15| ctx: enter level 631: 'lost'
2010/06/08 12:50:15| ctx: enter level 632: 'lost'
2010/06/08 12:50:15| ctx: enter level 633: 'lost'
2010/06/08 12:50:15| ctx: enter level 634: 'lost'
2010/06/08 12:50:15| ctx: enter level 635: 'lost'
2010/06/08 12:50:15| ctx: enter level 636: 'lost'
2010/06/08 12:50:15| ctx: enter level 637: 'lost'
2010/06/08 12:50:15| ctx: enter level 638: 'lost'
2010/06/08 12:50:15| ctx: enter level 639: 'lost'
2010/06/08 12:50:15| TCP connection to snip/8081 failed

Here's an entry from our other server, this one has a lower load in request 
rate, but has a similar load in bandwidth.
2010/06/08 10:19:51| ctx: enter level 102: 
'http://cdn.cloudfiles.mosso.com/c71692/templates/coveritlive/images/sound.gif'
2010/06/08 10:19:51| ctx: enter level 103: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 104: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 105: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 106: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 107: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 108: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 109: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 110: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 111: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 112: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 113: 'Ø8^Q^A^H'
2010/06/08 10:19:51| ctx: enter level 114: 
'http://tag.admeld.com/match?admeld_adprovider_id=310&external_user_id=a8c5a9a8-5e05-11df-b422-00163e03bb53&cb=mdmn2c&redirect=http://ib.adnxs.com/pxj?bidder=12&cb=yno94e&action=setuid('a8c5a9a8-5e05-11df-b422-00163e03bb53');redir=http://tap.rubiconproject.com/oz/feeds/triggit/tokens?token=a8c5a9a8-5e05-11df-b422-00163e03bb53&expires=180'

I probably should add that both of these proxy servers running 3.1.4 are 
parents to a single 3.1.1 proxy server.  Both servers running 3.1.4 have no disk 
cache; system status shows memory consumption and other stats all within 
normal ranges.



Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


 -Original Message-
 From: Henrik Nordström [mailto:hen...@henriknordstrom.net]
 Sent: Tuesday, June 08, 2010 1:00 PM
 To: Dean Weimer
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] ctx: enter level
 
 Tue 2010-06-08 at 10:35 -0500, Dean Weimer wrote:
  I have 2 proxy servers that I have updated to 3.1.4, and have since been
 seeing errors in the cache.log with the following:
 
  ctx: enter level # '?'
 
  # is replaced with a number, appears to have started at zero and
 increments each time the error occurs.
  ? is replaced with varying text; sometimes it's nothing, sometimes it's the
  requested page, and sometimes it's strange characters like Ø.
 
 What is the next message?
 
 Regards
 Henrik



RE: [squid-users] Compile Error on FreeBSD 8.0 with Squid 3.1.2 & 3.1.3

2010-06-08 Thread Dean Weimer
 -Original Message-
 From: Henrik Nordström [mailto:hen...@henriknordstrom.net]
 Sent: Tuesday, June 08, 2010 1:01 PM
 To: Dean Weimer
 Cc: squid-users@squid-cache.org; Squid Developers
 Subject: RE: [squid-users] Compile Error on FreeBSD 8.0 with Squid 3.1.2 & 3.1.3
 
 Tue 2010-06-08 at 11:13 -0500, Dean Weimer wrote:
  As of 3.1.4, this compiles fine on FreeBSD 8.0 and 7.1.  However, the FreeBSD 7.2 system I have still has this problem with 3.1.4.
 
 Odd.
 
 What error do you get?
 
 Regards
 Henrik

Compile options:
./configure \
--prefix=/usr/local/squid \
--enable-pthreads \
--enable-ssl \
--with-openssl=/usr/local \
--enable-async-io \
--enable-underscores \
--enable-storeio=ufs,aufs \
--enable-delay-pools \
--disable-ipv6

End of output from make:
Making all in session
gcc -DHAVE_CONFIG_H  -I../../.. -I../../../include -I../../../src -I../../../include  -I.   -I/usr/local/include -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations -Wcomments -Werror -D_REENTRANT -Wall -g -O2 -MT squid_session.o -MD -MP -MF .deps/squid_session.Tpo -c -o squid_session.o squid_session.c
cc1: warnings being treated as errors
squid_session.c: In function 'init_db':
squid_session.c:61: warning: implicit declaration of function 'dbopen'
squid_session.c:61: warning: assignment makes pointer from integer without a cast
squid_session.c: In function 'shutdown_db':
squid_session.c:70: error: too few arguments to function 'db->close'
squid_session.c: In function 'session_active':
squid_session.c:80: warning: passing argument 2 of 'db->get' from incompatible pointer type
squid_session.c:80: error: too few arguments to function 'db->get'
squid_session.c:84: warning: passing argument 2 of 'db->del' from incompatible pointer type
squid_session.c:84: error: too few arguments to function 'db->del'
squid_session.c: In function 'session_login':
squid_session.c:102: warning: passing argument 2 of 'db->put' from incompatible pointer type
squid_session.c:102: error: too few arguments to function 'db->put'
squid_session.c: In function 'session_logout':
squid_session.c:110: warning: passing argument 2 of 'db->del' from incompatible pointer type
squid_session.c:110: error: too few arguments to function 'db->del'
*** Error code 1

Stop in /usr/local/squid-3.1.4/helpers/external_acl/session.
*** Error code 1

Stop in /usr/local/squid-3.1.4/helpers/external_acl.
*** Error code 1

Stop in /usr/local/squid-3.1.4/helpers.
*** Error code 1

Stop in /usr/local/squid-3.1.4.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] ctx: enter level

2010-06-08 Thread Dean Weimer
 -Original Message-
 From: Henrik Nordström [mailto:hen...@henriknordstrom.net]
 Sent: Tuesday, June 08, 2010 2:12 PM
 To: Dean Weimer
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] ctx: enter level
 
 Tue 2010-06-08 at 13:55 -0500, Dean Weimer wrote:
  Most of the time another one of them, in the case below its seems to have
 lost the connection to the parent proxy, here's the full log since the last
 reconfigure done earlier today simply to add a host to a bypass filter list.
 
  2010/06/08 11:01:36| Ready to serve requests.
 [..]
  2010/06/08 12:50:15| ctx: enter level 635: 'lost'
  2010/06/08 12:50:15| ctx: enter level 636: 'lost'
  2010/06/08 12:50:15| ctx: enter level 637: 'lost'
  2010/06/08 12:50:15| ctx: enter level 638: 'lost'
  2010/06/08 12:50:15| ctx: enter level 639: 'lost'
  2010/06/08 12:50:15| TCP connection to snip/8081 failed
 
 Please file a bug report. Looks like the ctx debugging trace have gone
 wrong somewhere, not unwinding properly.
 
   http://bugs.squid-cache.org/
 
 The ctx is a internal tool to try to provide correct context to debug
 messages.
 
 It's pretty safe to ignore this issue until it's fixed. But may
 eventually cause some instabilities due to invalid memory reference in
 the printout.
 
 Regards
 Henrik

Filed:
http://bugs.squid-cache.org/show_bug.cgi?id=2945

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] fail-safe and load balancing with reverse proxy

2010-06-02 Thread Dean Weimer
Try using peer_connect_timeout.  You can lower the timeout so it fails over 
faster; a one-line sketch follows.
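
A minimal example (the 5-second value is an arbitrary assumption; squid's default is 30 seconds):

peer_connect_timeout 5 seconds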
 
Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: Király László [mailto:k...@mail.madalbal.hu]
 Sent: Wednesday, June 02, 2010 3:14 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] fail-safe and load balancing with reverse proxy
 
 Okay, I compile a brand new 3.1.4 squid with --icmp-enable option.
 
 I added also to the squid.conf:
 ---
 pinger_program /usr/local/squid/libexec/pinger
 query_icmp on
 test_reachability on
 ---
 
 It didn't help. :S
 
  Hi List,
 
  I use a squid3-3.0.STABLE8 reverse proxy on a debian system.
  It makes forward queries to web server, which is accessible from 2
  public ips.
 
  My peer config:
  ---
  cache_peer x.y.z.57 parent 80 0 no-query no-digest no-netdb-exchange originserver name=parent1 round-robin login=PASS weight=16
  cache_peer a.b.c.118 parent 80 0 no-query no-digest no-netdb-exchange originserver name=parent2 round-robin login=PASS weight=1
  ---
 
  I would like to do a fail-safe connection to the web server.
 
   It's working, but if one of the public ips isn't accessible, there
  are some "Connection timed out (110)" proxy messages until the parent is
  detected as dead, while the proxy tries to query the offline parent.
 
  How can I eliminate this thing?
   Why doesn't squid resend the query to the other parent?
 
  I cannot set ICP queries while the parent is a simple web server.
  Is there a way to make better dead peer detection?
 
   Can I do this with icmp queries?
 
  Best regards,
  László Király
 --- End of Original Message ---



RE: [squid-users] Google SSL searches

2010-05-28 Thread Dean Weimer
Henrik,
In cases like this, is it better to use the port 443 acl like you did 
in your example, instead of the proto HTTPS option?  Just curious if the port 
acl is faster or has some other advantage over the proto acl.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co
 Phone: (660) 269-3448
 Fax: (660) 269-3950


 -Original Message-
 From: Henrik Nordström [mailto:hen...@henriknordstrom.net]
 Sent: Thursday, May 27, 2010 5:58 PM
 To: Dave Burkholder
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Google SSL searches
 
 Thu 2010-05-27 at 15:35 -0400, Dave Burkholder wrote:
 
  Is there some way to specify via a Squid ACL that requests via port 443 to
  google.com are blocked, but requests to google.com via port 80 are
 allowed?
 
 acl https port 443
 acl google dstdomain google.com
 http_access deny https google
 
 Regards
 Henrik



RE: [squid-users] Google SSL searches

2010-05-28 Thread Dean Weimer
Ah, good to know, thanks for the info.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: Henrik Nordström [mailto:hen...@henriknordstrom.net]
 Sent: Friday, May 28, 2010 10:00 AM
 To: Dean Weimer
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] Google SSL searches
 
 Fri 2010-05-28 at 09:46 -0500, Dean Weimer wrote:
  Henrik,
  In cases like this, is it better to use the port 443 acl like you did
  in your example, instead of the proto HTTPS option?  Just curious if
  the port acl is faster or has some other advantage over the proto acl.
 
 The proto acl won't work in this case as the protocol is not known on
 CONNECT requests. All Squid knows about https requests when running as a
 proxy is that the client wants to connect to hostname:port.
 
 proto https acls only work when Squid is operating as a proxy at the
 https level, i.e. when used as a reverse proxy or when using sslbump.
 
 Regards
 Henrik
 
 



[squid-users] Compile Error on FreeBSD 8.0 with Squid 3.1.2 3.1.3

2010-05-04 Thread Dean Weimer
I have run into the following compile error on both squid 3.1.2 and squid 3.1.3 
on FreeBSD 8.0 using these options for
./configure \
 --prefix=/usr/local/squid \
 --enable-pthreads \
 --enable-ssl \
 --with-openssl=/usr/local \
 --enable-async-io \
 --enable-underscores \
 --enable-storeio=ufs,aufs \
 --enable-delay-pools \
 --disable-ipv6
Squid 3.1.1 compiles fine on this system; has anyone else run into this issue, 
or have any ideas as to the cause?

Making all in session
gcc -DHAVE_CONFIG_H  -I../../.. -I../../../include -I../../../src  
-I../../../include  -I.   -I/usr/local/include -Wall -Wpointer-arith 
-Wwrite-strings -Wmissing-prototypes -Wmissing-declarations -Wcomments -Werror 
-D_REENTRANT -Wall -g -O2 -MT squid_session.o -MD -MP -MF 
.deps/squid_session.Tpo -c -o squid_session.o squid_session.c
cc1: warnings being treated as errors
squid_session.c: In function 'init_db':
squid_session.c:62: warning: implicit declaration of function 'dbopen'
squid_session.c:62: warning: assignment makes pointer from integer without a 
cast
squid_session.c: In function 'shutdown_db':
squid_session.c:71: error: too few arguments to function 'db->close'
squid_session.c: In function 'session_active':
squid_session.c:81: warning: passing argument 2 of 'db->get' from incompatible 
pointer type
squid_session.c:81: error: too few arguments to function 'db->get'
squid_session.c:85: warning: passing argument 2 of 'db->del' from incompatible 
pointer type
squid_session.c:85: error: too few arguments to function 'db->del'
squid_session.c: In function 'session_login':
squid_session.c:103: warning: passing argument 2 of 'db->put' from incompatible 
pointer type
squid_session.c:103: error: too few arguments to function 'db->put'
squid_session.c: In function 'session_logout':
squid_session.c:111: warning: passing argument 2 of 'db->del' from incompatible 
pointer type
squid_session.c:111: error: too few arguments to function 'db->del'
*** Error code 1

Stop in /usr/local/squid-3.1.3/helpers/external_acl/session.
*** Error code 1

Stop in /usr/local/squid-3.1.3/helpers/external_acl.
*** Error code 1

Stop in /usr/local/squid-3.1.3/helpers.
*** Error code 1

Stop in /usr/local/squid-3.1.3.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



RE: [squid-users] UDP errors after upgrade to 3.1.1

2010-04-08 Thread Dean Weimer
-Original Message-
From: donovan jeffrey j [mailto:dono...@beth.k12.pa.us] 
Sent: Thursday, April 08, 2010 7:37 AM
To: Amos Jeffries
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] UDP errors after upgrade to 3.1.1


snip

no pid file in the 3.1.1 build.

I check my other copies and they all made the PID file in
/usr/local/squid/var/logs/squid.pid
nothing is in my 3.1.1

cat: /usr/local/squid/var/logs/squid.pid: No such file or directory

The machines I have installed 3.1.1 on want to place the pid file in
/usr/local/squid/var/run/squid.pid

Unfortunately the install doesn't appear to create that directory; simply
do a mkdir /usr/local/squid/var/run (make sure it's owned by your squid
user).  Then either kill and restart squid, or manually create a
squid.pid file with the process id in it.

Alternatively you could use the pid_filename directive to point it to
another location.
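For example, a sketch pointing it back at the old location (path is illustrative):

pid_filename /usr/local/squid/var/logs/squid.pid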


RE: [squid-users] cache_peer using DNS name

2010-04-01 Thread Dean Weimer
I don't have IPv6 capability, but on this test system I just did a quick 
install and Squid does have the default IPv6 setup as does the O/S (FreeBSD 
7.2).  I will recompile with --disable-ipv6 and see if the problem goes away.  
Not sure if they have a AAAA record for the hostname; I get a server fail 
response when trying against the DNS servers I have configured on the system.  
The Bind DNS servers I am hitting do have IPv6 disabled.  I have recompiled 
Squid with the --disable-ipv6 option and set my cache_peer line back to the 
domain name.  I will let you know if this resolves the problem, after the new 
configuration is running long enough to know.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, March 31, 2010 5:17 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] cache_peer using DNS name

Henrik Nordström wrote:
 Wed 2010-03-31 at 14:41 -0500, Dean Weimer wrote:
 I found it listed in the 3.0.PRE3 bugs; here is the link that I found, and it is 
 listed as fixed.
 
 And it is fixed. That was a typo which made Squid always use the name=
 instead of the host when figuring out how to connect to the peer.
 Obvious error, and long time gone (fixed in 2003, long before 3.0 was
 released in 2007).
 

Does the peer have AAAA records and you have no IPv6 connectivity?

This looks like one of the effects of our failover bug. Compounded by 
the fact the peer name is looked up so often.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.1


RE: [squid-users] cache_peer using DNS name

2010-04-01 Thread Dean Weimer
I have not run into the problem since disabling IPv6 this morning, using 
the DNS name for the cache_peer with the name= option set on the line.
Looks like you got it right, Amos; thanks a bunch for your help.

Dean

-Original Message-
From: Dean Weimer [mailto:dwei...@orscheln.com] 
Sent: Thursday, April 01, 2010 9:54 AM
To: Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] cache_peer using DNS name

I don't have IPv6 capability, but on this test system I just did a quick 
install and Squid does have the default IPv6 setup as does the O/S (FreeBSD 
7.2).  I will recompile with --disable-ipv6 and see if the problem goes away.  
Not sure if they have a AAAA record for the hostname; I get a server fail 
response when trying against the DNS servers I have configured on the system.  
The Bind DNS servers I am hitting do have IPv6 disabled.  I have recompiled 
Squid with the --disable-ipv6 option and set my cache_peer line back to the 
domain name.  I will let you know if this resolves the problem, after the new 
configuration is running long enough to know.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, March 31, 2010 5:17 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] cache_peer using DNS name

Henrik Nordström wrote:
 Wed 2010-03-31 at 14:41 -0500, Dean Weimer wrote:
 I found it listed in the 3.0.PRE3 bugs; here is the link that I found, and it is 
 listed as fixed.
 
 And it is fixed. That was a typo which made Squid always use the name=
 instead of the host when figuring out how to connect to the peer.
 Obvious error, and long time gone (fixed in 2003, long before 3.0 was
 released in 2007).
 

Does the peer have AAAA records and you have no IPv6 connectivity?

This looks like one of the effects of our failover bug. Compounded by 
the fact the peer name is looked up so often.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.1


[squid-users] cache_peer using DNS name

2010-03-31 Thread Dean Weimer
I am working on testing a hosted web filter solution, which involves chaining 
our internal squid proxy to the hosted web filter proxy server.  I was seeing 
very poor performance and found several "TCP connection to 
filters.dnsdomainname.com/8081 failed" entries in the log.  I discovered that 
changing the line to the IP address stopped this problem.  Further searching 
found a bug in 3.0 where using a DNS name for a parent and the name= option on a 
cache_peer line caused it to try to look up the name= value instead of the DNS 
name.  I went back and removed the name= option and set our line back to the DNS 
domain name.  The TCP connection errors are gone now.

I am running version 3.1.1 here is the relevant part of the configuration.

always_direct allow nonfilter
never_direct allow all
# Original Configuration, appears to work sometimes, but frequent connection 
errors
# cache_peer filters.dnsdomainname.com parent 8081 0 name=webfilter no-query 
default login=PASS no-digest connect-timeout=10 connection-auth=on

## Second Try, works, but need to use DNS name in case they change their IP
## cache_peer 192.168.1.1 parent 8081 0 name=webfilter no-query default 
login=PASS no-digest connect-timeout=10 connection-auth=on

### Third try works, and is acceptable, but would be easier if I could use the 
name= option
cache_peer filters.dnsdomainname.com parent 8081 0 no-query default login=PASS 
no-digest connect-timeout=10 connection-auth=on


Everything works this way, but I thought I would throw this out there, in case 
someone else is struggling with the same problem.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



RE: [squid-users] cache_peer using DNS name

2010-03-31 Thread Dean Weimer
I found it listed in the 3.0.PRE3 bugs; here is the link that I found, and it is 
listed as fixed.

http://ftp.isu.edu.tw/pub/Unix/Proxy/Squid/Versions/v3/3.0/bugs/index.html#squid-3.0.PRE3-accel_cache_peer_name

However, if this exact problem were occurring I would never have gotten out at 
all; since taking the name= option off I have discovered that the errors were 
apparently just occurring less often.  Maybe it's just a coincidence that it got 
better when I changed it.  I have changed the rule back to the IP address to see 
if it continues without error over a longer period of time.  I have ruled out 
any network connection problems being the cause (at least on my end).  Let me 
know if there are any debug options I should enable to give you more information.

-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Wednesday, March 31, 2010 2:24 PM
To: Dean Weimer
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] cache_peer using DNS name

Wed 2010-03-31 at 12:25 -0500, Dean Weimer wrote:
 I am working on testing a hosted web filter solution, which involves
 chaining our internal squid proxy to the hosted web filter proxy
 server.  I was seeing very poor performance and found several "TCP
 connection to filters.dnsdomainname.com/8081 failed" entries in the
 log.  I discovered that changing the line to the IP address stopped
 this problem.  Further searching found a bug in 3.0 where using a DNS
 name for a parent and the name= option on a cache_peer line caused it
 to try to look up the name= value instead of the DNS name.  I went
 back and removed the name= option and set our line back to the DNS
 domain name.  The TCP connection errors are gone now.

Very Odd..

is there an open bug report on this?

I don't see any trace of this happening from reading the sources. There
is a very clear distinction between host and name.

Regards
Henrik



RE: [squid-users] Reverse and SSL cert

2010-03-31 Thread Dean Weimer
You can export the certificates from most Microsoft programs into PKCS12
format; it will have a .pfx extension.  Then you can use OpenSSL to
convert that to PEM format.  Look at the openssl man page for pkcs12
for more info on how to do the conversion.
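For example, a minimal sketch (file names are placeholders; -nodes writes the
private key unencrypted, so protect the output file):

openssl pkcs12 -in exported.pfx -out cert.pem -nodes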

-Original Message-
From: Andrea Gallazzi [mailto:andrea.galla...@live.com] 
Sent: Wednesday, March 31, 2010 3:23 PM
To: Squid Mailing List
Subject: [squid-users] Reverse and SSL cert

After a little problem I installed squid 3.1.1 with openssl on my ubuntu 
server 9.10.

Now I have my ssl certificate (.cer) on my exchange server, but squid (or
openssl?) requires a .pem certificate.

I have doubts about this.

Is the certificate the same as the exchange one?
(If yes) Will the same certificate be installed on squid and on exchange?
How do I make the .pem certificate for squid?

thanks


-- 
Andrea Gallazzi
http://andreagx.blogspot.com


RE: [squid-users] Having issue with reverse proxy and SSL

2010-03-26 Thread Dean Weimer
Nick,
Both http://some.url.com/ and https://some.url.com/ satisfy your
acl acl_http dstdomain some.url.com as the destination domain is the
same in both cases.  Not sure if this is the best way to handle it but
if you changed your acls to use url_regex instead and used the following
it should work.

acl acl_http url_regex -i ^http://some.url.com
acl acl_ssl url_regex -i ^https://some.url.com

Dean

-Original Message-
From: Nick Duda [mailto:nd...@vistaprint.com] 
Sent: Friday, March 26, 2010 12:21 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Having issue with reverse proxy and SSL

Hi all,

I've got a reverse proxy setup but something is wrong with my config. I
want a certain HTTP request to go to one cache_peer and
the exact same request, but over HTTPS, to go to another cache_peer.
Right now it's always hitting the same cache_peer.

Squid Cache: Version 2.6.STABLE18
configure options:  '--enable-snmp' '--enable-storeio=aufs'
'--enable-ssl'

http_port 80 accel vhost
https_port 443 accel vhost cert=/path/to/cert.pem
key=/path/to/server.key

cache_peer secure.someurl.com parent 443 0 no-query originserver ssl
name=ssl sslflags=DONT_VERIFY_PEER
cache_peer 192.168.1.10 parent 80 0 no-query originserver name=http

acl acl_http dstdomain some.url.com
acl acl_ssl dstdomain some.url.com

cache_peer_access http allow acl_http
cache_peer_access ssl allow acl_ssl

http_access allow acl_http
http_access allow acl_ssl


Wouldn't that config send the request to the correct cache_peer
depending on whether it came in via SSL or HTTP? It's the same URL, but either
HTTP or HTTPS always sends it to the cache_peer with the name=http

Thoughts?

Nick


RE: [squid-users] Having issue with reverse proxy and SSL

2010-03-26 Thread Dean Weimer
I believe so; you can also place them in a separate file, one
expression per line

Example:
A file /usr/local/squid/etc/acl_http could be as follows:
^http://some.url.com
^http://some.url2.com
^http://some.url3.com

Squid configuration line would be as follows:
acl acl_http url_regex -i /usr/local/squid/etc/acl_http


Though I think I remember something about external files not working
correctly in some cases with url_regex, I may be completely
mistaken, or the problem may have been fixed.  The best thing to do is test
it; if the setup isn't live it's a quick, easy test to see if it works.
Also, I should note that the -i is there to ignore case;
depending on your setup you may not want to use it.
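One quick way to test, as a sketch (proxy host, port, and URL are illustrative):
check that the config parses, then send a request through the proxy and watch
access.log.

/usr/local/squid/sbin/squid -k parse
squidclient -h 127.0.0.1 -p 3128 http://some.url.com/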

-Original Message-
From: Nick Duda [mailto:nd...@vistaprint.com] 
Sent: Friday, March 26, 2010 1:25 PM
To: Dean Weimer; squid-users@squid-cache.org
Subject: RE: [squid-users] Having issue with reverse proxy and SSL

Using regex can I have multiple domains?

i.e.

acl acl_http url_regex -i ^http://some.url.com ^http://some.url2.com
^http://some.url3.com


- Nick



-Original Message-
From: Dean Weimer [mailto:dwei...@orscheln.com] 
Sent: Friday, March 26, 2010 2:17 PM
To: Nick Duda; squid-users@squid-cache.org
Subject: RE: [squid-users] Having issue with reverse proxy and SSL

Nick,
Both http://some.url.com/ and https://some.url.com/ satisfy your
acl acl_http dstdomain some.url.com as the destination domain is the
same in both cases.  Not sure if this is the best way to handle it but
if you changed your acls to use url_regex instead and used the following
it should work.

acl acl_http url_regex -i ^http://some.url.com
acl acl_ssl url_regex -i ^https://some.url.com

Dean

-Original Message-
From: Nick Duda [mailto:nd...@vistaprint.com] 
Sent: Friday, March 26, 2010 12:21 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Having issue with reverse proxy and SSL

Hi all,

I've got a reverse proxy setup but something is wrong with my config. I
want a certain HTTP request to go to one cache_peer and
the exact same request, but over HTTPS, to go to another cache_peer.
Right now it's always hitting the same cache_peer.

Squid Cache: Version 2.6.STABLE18
configure options:  '--enable-snmp' '--enable-storeio=aufs'
'--enable-ssl'

http_port 80 accel vhost
https_port 443 accel vhost cert=/path/to/cert.pem
key=/path/to/server.key

cache_peer secure.someurl.com parent 443 0 no-query originserver ssl
name=ssl sslflags=DONT_VERIFY_PEER
cache_peer 192.168.1.10 parent 80 0 no-query originserver name=http

acl acl_http dstdomain some.url.com
acl acl_ssl dstdomain some.url.com

cache_peer_access http allow acl_http
cache_peer_access ssl allow acl_ssl

http_access allow acl_http
http_access allow acl_ssl


Wouldn't that config send the request to the correct cache_peer
depending on whether it came in via SSL or HTTP? It's the same URL, but either
HTTP or HTTPS always sends it to the cache_peer with the name=http

Thoughts?

Nick


RE: [squid-users] Reverse Proxy SSL Options

2010-03-19 Thread Dean Weimer
On 18.03.10 13:12, Dean Weimer wrote:
 We have multiple websites using a certificate that has subject 
 alternative names set to use SSL for the multiple domains.  That part 
 is working fine, and traffic will pass through showing valid 
 certificates.  However, I need to disable it from answering with weak 
 ciphers and SSLv2 to pass the scans.

check https_port options cipher= and options=

for the latter you can play with openssl ciphers.
I use (not on squid), DEFAULT:!EXP
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I feel like I'm diagonally parked in a parallel universe. 

Thanks for the info; that almost worked.  I added the following entries.

sslproxy_options NO_SSLv2
sslproxy_cipher
ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

I stole the cipher options from an apache server that was passing the
PCI scans.  This still caused it to fail the scans.

When I entered the same configuration on the https_port line, however, it
worked.

Example(IP and domain name has been changed):
https_port 192.168.1.2:443 accel
cert=/usr/local/squid/etc/certs/test.crt
key=/usr/local/squid/etc/certs/test.key defaultsite=www.default.com
vhost options=NO_SSLv2
cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

Do the sslproxy_* lines only affect squid's outbound connections to
the back end servers?
Or are both settings possibly required?  In the successful test scan I
had both set.

I am willing to test some other options if anyone wants me to.  I have
until Tuesday before the system needs to be live; it's currently only
accessible to internal clients with a hosts file entry and is being
tested with a Rapid7 Nexpose scanner.
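As a quick external check, something like this should fail to negotiate once
SSLv2 and the weak ciphers are disabled (a sketch; the IP is from the example
above, and the -ssl2 flag exists only in OpenSSL builds that still include
SSLv2 support):

openssl s_client -connect 192.168.1.2:443 -ssl2
openssl s_client -connect 192.168.1.2:443 -cipher LOW:EXP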

Thanks,
Dean Weimer



[squid-users] Reverse Proxy SSL Options

2010-03-18 Thread Dean Weimer
I am trying to set up a reverse proxy to serve multiple websites;
everything is working fine except that so far in the testing process I
have discovered that it is not passing the PCI scans that we are
required to pass.

We have multiple websites using a certificate that has subject
alternative names set to use SSL for the multiple domains.  That part is
working fine, and traffic will pass through showing valid
certificates.  However, I need to disable it from answering with weak
ciphers and SSLv2 to pass the scans.

I found the sslproxy_options and the sslproxy_cipher directives; I would
assume that these are what I would use to fix this problem.  However,
there is nothing in the documentation that says where to place these in
the configuration file or what arguments they accept.

It would be greatly appreciated if someone could direct me to some
documentation on how to set these options.
 
Thanks
Dean Weimer


[squid-users] Help with extension_methods

2010-01-25 Thread Dean Weimer
I found some errors in my cache.log file this afternoon.  I have tracked them 
down to a development machine and know that they occurred while the developer 
working on the machine was doing a build out of Plone, which did in the end 
succeed, so I am not sure this is a huge concern, but I would rather not have 
the errors in the future if it can be fixed.

There were several entries like this in the access.log:
1264442419.041  0 10.20.147.34 NONE/400 1806 NONE 
error:unsupported-request-method - NONE/- text/html

That corresponded to entries like this in the cache.log:
2010/01/25 12:03:35| clientParseRequestMethod: Unsupported method attempted by 
10.20.147.34: This is not a bug. see squid.conf extension_methods
2010/01/25 12:03:35| clientParseRequestMethod: Unsupported method in request 
'_z___'

I checked on the extension_methods directive, but am a little confused as to 
what to enter for the method.  To possibly solve this issue, would I just use 
the following configuration line?
extension_methods _z___

If anyone could point me in the right direction to find some resources on this 
issue it would be greatly appreciated.  I tried searching but didn't find any 
information on _z___ on the web.  I am currently running squid 3.0.STABLE21.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



RE: [squid-users] Re: Check your disk space?!

2009-12-29 Thread Dean Weimer
Check what your Operating system reports on the disk volume, perhaps something 
else is being written to that disk.  I even made the mistake once of taking a 
snapshot for temporary backup purposes of my cache volume during testing and 
forgot to delete it.  Needless to say once that server went live it ran out of 
disk space in a hurry and took me a little while to figure out where all that 
disk space went.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: Heinz Diehl [mailto:h...@fancy-poultry.org]
 Sent: Tuesday, December 29, 2009 9:59 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Re: Check your disk space?!
 
 On 29.12.2009, Heinz Diehl wrote:
 
 []
 
 Forgot to mention:
 the whole cache took around 10 GB of space on the harddisk as the error
 occured, so the harddisk can not be filled up.
 



RE: [squid-users] SSL Reverse Proxy testing With Invalid Certificate, can it be done.

2009-10-05 Thread Dean Weimer
 -Original Message-
 From: Henrik Nordstrom [mailto:hen...@henriknordstrom.net]
 Sent: Monday, October 05, 2009 4:48 AM
 To: Dean Weimer
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] SSL Reverse Proxy testing With Invalid
 Certificate, can it be done.
 
 Fri 2009-09-25 at 10:57 -0500, Dean Weimer wrote:
 
  2009/09/25 11:38:07| SSL unknown certificate error 18 in...
  2009/09/25 11:38:07| fwdNegotiateSSL: Error negotiating SSL
 connection on FD 15: error:14090086:SSL
 routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
(1/-1/0)
 
 This is your Squid trying to use SSL to connect to the requested
 server.
 Not related to the http_port certificate settings.
 
 validation requirements on peer certificates is set in cache_peer.
 
 Regards
 Henrik

I was running Squid 3.0.STABLE19 on the test system.  Here are the
configuration lines from the original test. At one point I had added
cert lines on the cache_peer before realizing that those were only for
use when certificate authentication was needed on the parent.  I can't
remember for sure if the log was copied from when I had those options on
or not; I still had an invalid certificate error after removing them, but
it may have been a different error number.

https_port 443 accel cert=/usr/local/squid/etc/certs/server.crt
key=/usr/local/squid/etc/certs/server.key defaultsite=mysite vhost

cache_peer 1.2.3.4 parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=secure_mysite

My production server is a couple of revisions behind, currently running
STABLE17; it will be updated to 19 this coming weekend.  I did not test
it with the fake certificate.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] SSL Reverse Proxy testing With Invalid Certificate, can it be done.

2009-09-29 Thread Dean Weimer
 -Original Message-
 From: Chris Robertson [mailto:crobert...@gci.net]
 Sent: Monday, September 28, 2009 4:16 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] SSL Reverse Proxy testing With Invalid
 Certificate, can it be done.
 
 Dean Weimer wrote:
 I am trying to set up a test with an SSL reverse proxy on an intranet
 site. I currently have a fake self-signed certificate and the server is
 answering on the HTTP side just fine, and answering on HTTPS;
 however I get a (92) protocol error returned from the proxy when trying
 to access it through HTTPS.
 
  I have added the following lines for the HTTPS option
 
  https_port 443 accel cert=/usr/local/squid/etc/certs/server.crt
 key=/usr/local/squid/etc/certs/server.key defaultsite=mysite vhost
 
  cache_peer 10.20.10.76 parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=secure_mysite
 
  From the log I can see the error is caused by the invalid
 certificate.
 
  2009/09/25 11:38:07| SSL unknown certificate error 18 in...
  2009/09/25 11:38:07| fwdNegotiateSSL: Error negotiating SSL
 connection on FD 15: error:14090086:SSL
 routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
(1/-1/0)
 
  Is there a way that I can tell it to go ahead and trust this fake
 certificate during testing while I wait for the actual, valid
 certificate to be issued?
 
 
 Perhaps http://www.squid-cache.org/Doc/config/sslproxy_flags/
 
 
  Thanks,
   Dean Weimer
   Network Administrator
   Orscheln Management Co
 
 
 Chris

I didn't see that one, though I have the real certificate now and
everything is working with it.  I figure the sslflags on the cache peer
settings should accomplish the same thing, but they didn't seem to make
a difference whether I included them or not.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] SSL Reverse Proxy testing With Invalid Certificate, can it be done.

2009-09-25 Thread Dean Weimer
I am trying to set up a test with an SSL reverse proxy on an intranet site.  I 
currently have a fake self-signed certificate; the server is answering on the 
HTTP side just fine, and answering on HTTPS, however I get a (92) 
protocol error returned from the proxy when trying to access it through HTTPS.

I have added the following lines for the HTTPS option

https_port 443 accel cert=/usr/local/squid/etc/certs/server.crt 
key=/usr/local/squid/etc/certs/server.key defaultsite=mysite vhost

cache_peer 10.20.10.76 parent 443 0 no-query originserver ssl 
sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=secure_mysite

From the log I can see the error is caused by the invalid certificate.

2009/09/25 11:38:07| SSL unknown certificate error 18 in...
2009/09/25 11:38:07| fwdNegotiateSSL: Error negotiating SSL connection on FD 
15: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
failed (1/-1/0)

Is there a way that I can tell it to go ahead and trust this fake certificate 
during testing while I wait for the actual, valid certificate to be 
issued?


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] Antwort: [squid-users] Squid 3.0.STABLE17 is available

2009-07-29 Thread Dean Weimer
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Monday, July 27, 2009 10:01 AM
 To: martin.pichlma...@continental-corporation.com
 Cc: Squid
 Subject: Re: [squid-users] Antwort: [squid-users] Squid 3.0.STABLE17
is
 available
 
 Amos Jeffries wrote:
  martin.pichlma...@continental-corporation.com wrote:
  Hello all,
 
  I just compiled squid-3.0.STABLE17 and it compiled fine.
  Unfortunately I now get many warning messages in cache.log (still
  testing, not yet in productive environment):
  2009/07/27 15:11:26| HttpMsg.cc(157) first line of HTTP message is
  invalid
  2009/07/27 15:11:28| HttpMsg.cc(157) first line of HTTP message is
  invalid
  2009/07/27 15:11:37| HttpMsg.cc(157) first line of HTTP message is
  invalid
  2009/07/27 15:11:40| HttpMsg.cc(157) first line of HTTP message is
  invalid
  2009/07/27 15:11:41| HttpMsg.cc(157) first line of HTTP message is
  invalid
 
  It seems that nearly every URL I try to access gives that warning
  message,
  for example www.arin.net, www.ripe.net, www.hp.com,
  www.arin.net, even www.squid-cache.org and so on.
  Are nearly all pages in the internet invalid or is the if-query or
  rather the function incorrect?
  The lines that produce the above warning are new in STABLE17...
 
  HttpMsg.cc -- lines 156 to 160:
      if (!sanityCheckStartLine(buf, hdr_len, error)) {
          debugs(58, 1, HERE << "first line of HTTP message is invalid");
          // NP: sanityCheck sets *error
          return false;
      }
 
 
  Oh dear. I missed a bit in the upgrade. Thanks.
  This attached patch should quieten it down to only the real errors.
 
  Amos
 
 
 Oh foey. forget that patch. It pasted badly.
 
 Here is the real one.
 
 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE17
Current Beta Squid 3.1.0.12

Amos,
Was this fixed on the 3.0.STABLE17 that's on the download site?
Or do I still need to run this patch if I downloaded it today before
installing it?


RE: [squid-users] Parent Proxy's, Not Failing over when Primary Parent is Down.

2009-05-01 Thread Dean Weimer
Thanks Amos, I was looking at the 3.0 page for cache_peer definition since I am 
running 3.0 STABLE14, so I never saw those monitor options.  I am not running 
anything that requires the 3.0 branch so I could switch to 2.7 to solve this 
problem.  I would like to know if there are plans to include these options 
under the 3.x branches in the future?  As I would prefer my configuration 
doesn't depend on an option that will not be available in the foreseeable 
future.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, April 30, 2009 11:39 PM
To: Dean Weimer
Cc: crobert...@gci.net; squid-users@squid-cache.org
Subject: Re: [squid-users] Parent Proxy's, Not Failing over when Primary Parent 
is Down.

Dean Weimer wrote:
 -Original Message-
 From: crobert...@gci.net [mailto:crobert...@gci.net] 
 Sent: Thursday, April 30, 2009 2:13 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Parent Proxy's, Not Failing over when Primary
 Parent is Down.
 
 Dean Weimer wrote:
 I have a current Parent child proxy configuration I have been testing,
 its working with the exception of some sites not failing over to second
 parent when primary parent goes down.
 In the test scenario I have 2 parent proxies, and one child proxy
 server, the parents are each configured twice using an alias IP address.
 This is done to load balance using round robin for the majority of web
 traffic yet allow some sites that we have identified to not work
 correctly with load balancing to go out a single parent proxy.
   
 
 Since Squid 2.6 there has been a parent selection method called 
 sourcehash, which will keep a client-to-parent-proxy relationship 
 until the parent fails.
 
 I considered this, but was concerned that after a failed proxy server,
 the majority of my load would be on one server, and not taking advantage
 of both links when the problem is resolved.
 
 The load balanced traffic works as expected, the dead parent is
 identified and ignored until it comes back online.  The traffic that
 cannot be load balanced is all using HTTPS (not sure HTTPS has anything
 to do with the problem or not), when I stop the parent proxy 10.50.20.7
 (aka 10.52.20.7) the round-robin configuration is promptly marked as
 dead.  However a website that has already been connected to that is in
 the NONBAL acl just returns the proxy error from the child giving a
 connect to (10.52.20.7) parent failed connection denied.
 
 Hmmm...  You might have to disable server_persistent_connections, or 
 lower the value of persistent_request_timeout to have a better response 
 rate to a parent failure with your current setup.
 
 Also considered this, but figured it would break some sites that are
 working successfully with load balancing because they create a
 persistent connection, and making the request timeout too low would
 become annoying to the users.  Also, as the default is listed at 2
 minutes, I noticed that even after as much as 5 minutes the
 connection would not fail over.
 
   It will not mark the non load balanced parent dead; closing and
 restarting the browser doesn't help.  It will change the status to dead
 however if I connect to another site in the NONBAL acl.  Going back to
 the first site, I can then connect, even though I have to log in again,
 which is expected and why these sites cannot be load balanced.
 Does anyone have any ideas, short of writing some sort of test script
 that will cause the parent to be marked as dead if it fails, without any
 user intervention?
 Here is the cache peer configuration from the child proxy. FYI, I
 added the 5 sec timeout to see if it had any effect, and it didn't with
 the exception of speeding up the detection of the dead load balanced
 proxy.
 ## Define Parent Caches
 # Cache Peer Timeout
 peer_connect_timeout 5 seconds
 # Round Robin Caches
 cache_peer 10.50.20.7 parent 8080 8181 name=DSL2BAL round-robin
 cache_peer 10.50.20.6 parent 8080 8181 name=DSL1BAL round-robin
 # Non Load Balanced caches
 cache_peer 10.52.20.7 parent 8080 8181 name=DSL2
 cache_peer 10.52.20.6 parent 8080 8181 name=DSL1

 ## Define Parent Cache Access rules
 # Access Control Lists
 acl NONBAL dstdomain /usr/local/squid/etc/nonbal.dns.list
 # Rules for the Control Lists
 cache_peer_access DSL2BAL allow !NONBAL
 cache_peer_access DSL1BAL allow !NONBAL
 cache_peer_access DSL2 allow NONBAL
 cache_peer_access DSL1 allow NONBAL

 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co
 
 Chris
 
 I am currently doing some testing by creating access control lists for a
 couple of nonexistent subdomains on our own domain.  This then just
 accesses the error page from the parent proxy for the nonexistent domain, so
 it shouldn't put an unnecessary load on the internet links during testing.
 I am then allowing each one through one of the non-balanced parents.  By
 accessing that page with my browser it causes the parent to be marked dead.

RE: [squid-users] Parent Proxy's, Not Failing over when Primary Parent is Down.

2009-04-30 Thread Dean Weimer
-Original Message-
From: crobert...@gci.net [mailto:crobert...@gci.net] 
Sent: Thursday, April 30, 2009 2:13 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Parent Proxy's, Not Failing over when Primary
Parent is Down.

Dean Weimer wrote:
 I have a current Parent child proxy configuration I have been testing,
its working with the exception of some sites not failing over to second
parent when primary parent goes down.

 In the test scenario I have 2 parent proxies, and one child proxy
server, the parents are each configured twice using an alias IP address.
This is done to load balance using round robin for the majority of web
traffic yet allow some sites that we have identified to not work
correctly with load balancing to go out a single parent proxy.
   

Since Squid 2.6 there has been a parent selection method called 
sourcehash, which will keep a client-to-parent-proxy relationship 
until the parent fails.

I considered this, but was concerned that after a failed proxy server,
the majority of my load would be on one server, and not taking advantage
of both links when the problem is resolved.

 The load balanced traffic works as expected, the dead parent is
identified and ignored until it comes back online.  The traffic that
cannot be load balanced is all using HTTPS (not sure HTTPS has anything
to do with the problem or not), when I stop the parent proxy 10.50.20.7
(aka 10.52.20.7) the round-robin configuration is promptly marked as
dead.  However a website that has already been connected to that is in
the NONBAL acl just returns the proxy error from the child, giving a
"connect to (10.52.20.7) parent failed, connection denied" message.

Hmmm...  You might have to disable server_persistent_connections, or 
lower the value of persistent_request_timeout to have a better response 
rate to a parent failure with your current setup.

Also considered this, but figured it would break some sites that are
working successfully with load balancing because they create a
persistent connection, and making the request timeout too low would
become annoying to the users.  Also, as the default is listed at 2
minutes, I noticed that even after as much as 5 minutes the
connection would not fail over.

   It will not mark the non load balanced parent dead; closing and
restarting the browser doesn't help.  It will change the status to dead
however if I connect to another site in the NONBAL acl.  Going back to
the first site, I can then connect, even though I have to log in again,
which is expected and why these sites cannot be load balanced.

 Does anyone have any ideas, short of writing some sort of test script
that will cause the parent to be marked as dead if it fails, without any
user intervention?

 Here is the cache peer configuration from the child proxy. FYI, I
added the 5 sec timeout to see if it had any effect, and it didn't with
the exception of speeding up the detection of the dead load balanced
proxy.

 ## Define Parent Caches
 # Cache Peer Timeout
 peer_connect_timeout 5 seconds
 # Round Robin Caches
 cache_peer 10.50.20.7 parent 8080 8181 name=DSL2BAL round-robin
 cache_peer 10.50.20.6 parent 8080 8181 name=DSL1BAL round-robin
 # Non Load Balanced caches
 cache_peer 10.52.20.7 parent 8080 8181 name=DSL2
 cache_peer 10.52.20.6 parent 8080 8181 name=DSL1

 ## Define Parent Cache Access rules
 # Access Control Lists
 acl NONBAL dstdomain /usr/local/squid/etc/nonbal.dns.list
 # Rules for the Control Lists
 cache_peer_access DSL2BAL allow !NONBAL
 cache_peer_access DSL1BAL allow !NONBAL
 cache_peer_access DSL2 allow NONBAL
 cache_peer_access DSL1 allow NONBAL

 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co

Chris

I am currently doing some testing by creating access control lists for a
couple of nonexistent subdomains on our own domain.  This then just
accesses the error page from the parent proxy for the nonexistent domain, so
it shouldn't put an unnecessary load on the internet links during testing.
I am then allowing each one through one of the non-balanced parents.  By
accessing that page with my browser it causes the parent to be marked
dead.

I could look at writing a script to access these pages through the child
proxy every so many seconds to cause the parent to be marked as dead.
It's kind of a hacked solution, but hopefully it would keep the users
from having too much down time in the event that one proxy goes down.

It would probably be preferable though to query ICP directly and then do
a reconfigure on the child squid to exclude that parent from its
configuration.  If anyone can tell me where to find the information on
how to do an ICP query, that would save me some time and be greatly
appreciated.  In the meantime I will start searching, or worse yet, if
that fails, sniffing network traffic to write an application to mimic the
squid query.




RE: [squid-users] Using Squid as a proxy to change network devices' properties instead of web broswers'?

2009-04-15 Thread Dean Weimer
Interesting, I saw this and thought that it might solve some problems I have been 
having with applications that import settings from the browser but don't work 
with auto detect.  I thought I would try this on Vista; of course it doesn't 
exist there, but there is a replacement.

In Vista (of course you have to run as admin):
To Display current setting:
netsh winhttp show proxy
To import from IE:
netsh winhttp import proxy source=ie
(Does anyone know if you can use a different source?)
To manually set it:
netsh winhttp set proxy myproxy:port "local;localsite1;localsite2;..."
To Set back to direct:
netsh winhttp reset proxy

Also I noticed that it imports no proxy if you are set to use a script or 
automatically detect; the proxycfg in XP still pulls the manual configuration 
even after I set it to auto detect.  It was set to manual configuration the 
first time I ran the command, so it appears not to look at the current settings 
but at what is in the registry for the manual configuration, whether or 
not it is currently enabled.

In XP:
To Display Current Settings:
proxycfg -d
To Import from IE:
proxycfg -u
To Manually Set:
proxycfg -p myproxy:port "local;localsite1;localsite2;..."

Looks like under my environment I will have to use the manual set options to 
possibly solve the issue.  The main problem I have found is that Java doesn't 
seem to work correctly if the browser is configured for auto detect; it will 
work, however, if the browser is set to use a specific configuration script or 
a manually configured proxy.  Both of these options, however, do require the user 
to change settings if they have a laptop and try to use it outside of our 
network.
I guess if this command fixes the problem I can look at writing a startup script 
to detect whether they are on our local LAN and set it to direct or a manual 
proxy depending on the result, then push this script to clients with group 
policy.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, April 15, 2009 7:32 AM
To: Phillip Pi
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Using Squid as a proxy to change network devices' 
properties instead of web broswers'?

Phillip Pi wrote:
 Hello.
 
 I got Squid v2.7 stable 6 installed and working in a Windows XP Pro. SP2 
 machine, with its IIS, as a proxy server. I can make clients' web 
 browsers (e.g., IE and Firefox in Windows XP), go through this proxy 
 server with no problems.
 
 I am wondering if I can use Squid to do the same proxy for network 
 devices (e.g., onboard network). I would like to be able to set up PCs' 
 Internet access instead of web browsers.
 
 Thank you in advance. :)

The use of Squid as an HTTP proxy is limited only by individual app or device 
capabilities.

On windows XP the command proxycfg -u IIRC is sufficient to get the 
MS-produced apps using the same settings as IE, whether they are proxy 
or not.

I've heard tell of people using ActiveDirectory to push out proxy 
settings to all machines in a controlled network environment, mayhap an 
expert on that will say how if you need it.

Other devices and apps you will have to check out individually and see 
what can be done.

As a fallback for the really limited apps there is always interception 
at the network gateway device. Though this has a whole other set of 
problems and should only be considered as a last resort.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
   Current Beta Squid 3.1.0.7


RE: [squid-users] Using Squid as a proxy to change network devices' properties instead of web broswers'?

2009-04-15 Thread Dean Weimer
That would solve this problem, but by forcing the use of a proxy, we get better 
control of the web traffic.  It also allows us to use group policy to block 
access to setting the proxy for users not allowed to browse the web, without 
jumping through the hoops required to set up authentication on the proxy server.  We 
can't just block access to IE, because these users do need access to intranet 
applications.  Currently there are only a couple of users that have laptops and 
access sites that have this problem; the others are on desktops, and setting 
them to use the configuration script is a one-time deal.  Even these users are a 
very small percentage, probably only around 2% of all users.
Setting up a transparent proxy with authentication to stop the users not 
allowed internet access would have an impact on the other 98% of users who work 
just fine with the auto detect settings.  Of course if Sun just implemented an 
auto detect option in the Java Runtime Environment proxy settings, all my 
problems would just go away.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Hunter Fuller [mailto:hackmies...@gmail.com] 
Sent: Wednesday, April 15, 2009 11:25 AM
To: Dean Weimer; squid-users@squid-cache.org
Subject: Re: [squid-users] Using Squid as a proxy to change network devices' 
properties instead of web broswers'?

You can't do transparent proxying here?
-hackmiester
Too short? http://five.sentenc.es/



2009/4/15 Dean Weimer dwei...@orscheln.com:
 Interesting, I saw this and thought that it might solve some problems I have 
 been having with applications that import settings from the browser but 
 don't work with auto detect.  I thought I would try this on Vista; of course 
 it doesn't exist there, but there is a replacement.

 In Vista (of course you have to run as admin):
 To Display current setting:
 netsh winhttp show proxy
 To import from IE:
 netsh winhttp import proxy source=ie
 (Does anyone know if you can use a different source?)
 To manually set it:
 netsh winhttp set proxy myproxy:port "local;localsite1;localsite2;..."
 To Set back to direct:
 netsh winhttp reset proxy

 Also I noticed that it imports no proxy if you are set to use a script or 
 automatically detect; the proxycfg in XP still pulls the manual configuration 
 even after I set it to auto detect.  It was set to manual configuration the 
 first time I ran the command, so it appears not to look at the current 
 settings but at what is in the registry for the manual configuration, 
 whether or not it is currently enabled.

 In XP:
 To Display Current Settings:
 proxycfg -d
 To Import from IE:
 proxycfg -u
 To Manually Set:
 proxycfg -p myproxy:port "local;localsite1;localsite2;..."

 Looks like under my environment I will have to use the manual set options to 
 possibly solve the issue.  The main problem I have found is that Java doesn't 
 seem to work correctly if the browser is configured for auto detect; it will 
 work, however, if the browser is set to use a specific configuration script, 
 or a manually configured proxy.  Both of these options, however, do require the 
 user to change settings if they have a laptop and try to use it outside of 
 our network.
 I guess if this command fixes the problem I can look at writing a startup 
 script to detect whether they are on our local LAN and set it to direct or 
 a manual proxy depending on the result, then push this script to clients with 
 group policy.

 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Wednesday, April 15, 2009 7:32 AM
 To: Phillip Pi
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Using Squid as a proxy to change network devices' 
 properties instead of web broswers'?

 Phillip Pi wrote:
 Hello.

 I got Squid v2.7 stable 6 installed and working in a Windows XP Pro. SP2
 machine, with its IIS, as a proxy server. I can make clients' web
 browsers (e.g., IE and Firefox in Windows XP), go through this proxy
 server with no problems.

 I am wondering if I can use Squid to do the same proxy for network
 devices (e.g., onboard network). I would like to be able to set up PCs'
 Internet access instead of web browsers.

 Thank you in advance. :)

 The use of Squid as an HTTP proxy is limited only by individual app or device
 capabilities.

 On windows XP the command proxycfg -u IIRC is sufficient to get the
 MS-produced apps using the same settings as IE, whether they are proxy
 or not.

 I've heard tell of people using ActiveDirectory to push out proxy
 settings to all machines in a controlled network environment, mayhap an
 expert on that will say how if you need it.

 Other devices and apps you will have to check out individually and see
 what can be done.

 As a fallback for the really limited apps there is always interception
 at the network gateway device. Though this has a whole other set of 
 problems and should only be considered as a last resort.

[squid-users] Implications of Disabling via headers

2009-04-13 Thread Dean Weimer
I have a problem with a website that doesn't like going through a 
parent-child proxy setup; if you access the site pointing the client directly 
at the parent proxy it opens just fine.  However, when the client accesses the 
website using the child proxy the page fails to load.  I have no control over 
the website and have sent a request to the site's support to help resolve 
the issue.  While waiting to hear back from them, I was wondering if 
disabling the via headers would potentially help, but wasn't sure of the 
consequences that doing so would have.
The eventual configuration in this scenario is to have 2 parents with a 
single child, one server can easily handle the number of clients we have, but 
we want to use the 2 parents to handle load balancing on multiple internet 
connections.  I have already used ACLs to send this website along with others I 
know have problems with multiple source IPs in a single session, through a 
single parent so that they only have failover and not load balancing.  This has 
been verified to work on all the other sites that I know clients need that have 
this problem.  I have verified by use of a packet sniffer that this site is 
correctly trying to go out a single parent proxy server, and I am considering 
disabling the via header to see if that resolves the issue.
In addition to any possible problems with disabling the via headers, 
would it be better to do it on the parent proxies or on the child proxy server, 
if it doesn't have to be done on both?  If it's of any consequence, I do have 
the forwarded_for directive set to off on the parents and the child proxy 
server.
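For reference, the directives in question would look like this (a sketch; via 
is the standard squid.conf directive for suppressing the Via header):

via off
forwarded_for off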

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



[squid-users] url_regex help

2009-01-02 Thread Dean Weimer
I have an internal web server that some users are accessing through a proxy; 
it's an older server that badly needs to be replaced, but until the developers 
get around to migrating the applications I have to solve a problem.  Basically 
it is serving some Excel and PDF files, but the server responds with a 304 
Not Modified response even though the files have been updated, causing squid to 
serve the cached file instead of the updated file.

I was able to use an ACL with the dstdomain option and a no_cache deny 
line to stop it from caching the server entirely.  However, as this machine is 
quite slow, I would like to still cache the html and images, as those work 
correctly.  While using the url_regex lines below to keep just the Excel and PDF 
files from being cached, I am still getting some TCP_MEM_HIT entries in the 
access logs for these files.  I probably should mention that I disabled the disk 
cache for now on this system while figuring this problem out; all actual web 
requests are forwarded through another proxy that is still caching on disk, and 
only the internal web applications go direct.

Here's what I have; anyone have an idea where I went wrong?
I am running Squid 3.0.STABLE9 on FreeBSD 6.2.
acl NOCACHEPDF url_regex -i ^http://hostname.\*pdf$
acl NOCACHEXLS url_regex -i ^http://hostname.\*xls$
no_cache deny NOCACHEPDF NOCACHEXLS

I have used cat combined with awk and grep to check the pattern matching on the 
access logs with:
cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e 
^http://hostname.\*pdf$
cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e 
^http://hostname.\*xls$

This correctly matches all the entries I want and none that I don't want to 
stop caching.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] url_regex help

2009-01-02 Thread Dean Weimer
That worked, thanks a lot; I used your advice on a single rule instead of two as 
well.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Guillaume Smet [mailto:guillaume.s...@gmail.com] 
Sent: Friday, January 02, 2009 3:19 PM
To: Dean Weimer
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] url_regex help

On Fri, Jan 2, 2009 at 5:13 PM, Dean Weimer dwei...@orscheln.com wrote:
 Here's what I have; anyone have an idea where I went wrong?
 I am running Squid 3.0.STABLE9 on FreeBSD 6.2.
 acl NOCACHEPDF url_regex -i ^http://hostname.\*pdf$
 acl NOCACHEXLS url_regex -i ^http://hostname.\*xls$
 no_cache deny NOCACHEPDF NOCACHEXLS

 I have used cat combined with awk and grep to check the pattern matching on 
 the access logs with:
 cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e 
 ^http://hostname.\*pdf$
 cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e 
 ^http://hostname.\*xls$

The \ character before the * is only necessary to prevent your shell
from expanding the wildcard because you didn't use quotes to escape
your regexp.

In your Squid conf file, you just need:
acl NOCACHEPDF url_regex -i ^http://hostname.*pdf$
acl NOCACHEXLS url_regex -i ^http://hostname.*xls$

But if I were you, I'd use:
acl NOCACHEXLS url_regex -i ^http://hostname/.*\.xls$
acl NOCACHEPDF url_regex -i ^http://hostname/.*\.pdf$
which is more precise and more correct IMHO.

Or shorter:
acl NOCACHE url_regex -i ^http://hostname/.*\.(pdf|xls)$

-- 
Guillaume


RE: RES: [squid-users] block https requests

2008-12-17 Thread Dean Weimer
The host is still known from the request header and is not encrypted in https; 
only the data in the body of the request and reply is encrypted.  If the headers 
were encrypted, a proxy would never be able to direct the request to the origin 
server.

Here is a direct copy from a raw TCP data capture of a login to my home web 
server.
CONNECT www.myhostinghome.net:443 HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.4) 
Gecko/2008102920 Firefox/3.0.4
Proxy-Connection: keep-alive
Host: www.myhostinghome.net
HTTP/1.0 200 Connection established
...II-`.9..$Q6z...j...D ..q...
@.8b.7oF.D.
...9.8...5.E.D.3.2.A./.
.
[...snip...]

This is the reason you won't find any forms on a decent secure site using the 
GET method, as the data submitted will still be visible to anyone in the middle.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Matus UHLAR - fantomas [mailto:uh...@fantomas.sk] 
Sent: Wednesday, December 17, 2008 11:02 AM
To: squid-users@squid-cache.org
Subject: Re: RES: [squid-users] block https requests

On 16.12.08 13:51, Ricardo Augusto de Souza wrote:
  I AM used to block sites using:
 
 
 acl bad_sites dstdomain /etc/squid/bad_sites.txt
 
 http_access deny bad_sites
 
   
 
 With this my users cannot access all domains listed in
 /etc/squid/bad_sites.txt using http but they can access using https.

squid does not see what's in https requests, they are encrypted. That's what
the s means (secure): only client and server know what's inside, nobody
else.

you can disable the CONNECT method to those hosts. You may need to disable
CONNECT to IP addresses as well.
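
A rough sketch (the raw-IP regex is approximate, and these lines must come
before your allow rules):

acl CONNECT method CONNECT
# deny https tunnels to the blocked domains
http_access deny CONNECT bad_sites
# optionally, deny CONNECT to raw IP addresses as well
acl by_ip dstdom_regex ^[0-9.:]+$
http_access deny CONNECT by_ip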

Or you may do an MITM attack and use sslbump (which means https won't be
secure anymore for your clients). Clients will detect it: they will see a
certificate mismatch (since you won't be able to provide anyone's
certificate but your own).

 How do I solve this?

disable https?
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Support bacteria - they're the only culture some people have. 


RE: [squid-users] Netscape to Squid conversion Issues

2008-12-09 Thread Dean Weimer
You might try configuring squid as a reverse proxy for a web server that is 
actually hosting your proxy.pac file; I have never tried this, but I think it 
would work.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Jan Welker [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 09, 2008 8:04 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Netscape to Squid conversion Issues

Here's a situation we're facing and I'm curious if anyone has some
insight into how we might approach this problem.

We currently have approximately  pcs, a very large portion of which
are configured in one of two ways.

A. Netscape browsers with manual proxy servers set up for http and
https as proxy.host.net:8080
B. Netscape browsers with automatic proxy configuration with URL setup
as proxy.host.net:8080 (note they're the same).

This setup runs fine when pointing to the netscape admin-server/proxy
server configuration.

The problem I'm having is when I point one of the automatically configured
pcs to one of the boxes running SQUID. At startup, the user
receives a message saying the automatic configuration has failed and
on the squid server I see the following access.log entry.

10.49.0.145 - - [30/Apr/2001:16:28:40 -0400] GET / HTTP/0.0 400 1094 NONE:NONE

From the docs, it's clear that I need to provide a proxy.pac file
telling the users what their automatic configuration should be. The
problem I'm having is how to provide this info and provide
filtering/caching all from the same port?

Having all the users change their configuration to point to another
port or host isn't an attractive option (120+ sites, 6000 pcs likely
to be touched). If I must do that, I'd much prefer to cut over to
transparent proxying so we don't face this problem again in the future
and it's trivial for the end users to reconfigure.

Any insight would be greatly appreciated.

Jan


RE: [squid-users] Netscape to Squid conversion Issues

2008-12-09 Thread Dean Weimer
What version of squid are you running, and can you include the configuration 
lines that you used to configure the reverse proxy settings?

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Jan Welker [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 09, 2008 9:40 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Netscape to Squid conversion Issues

I do have a webserver hosting the pac file. I set the default site to
be that webserver. I do not get it to work. I followed this manual:
http://wiki.squid-cache.org/SquidFaq/ReverseProxy


On Tue, Dec 9, 2008 at 4:26 PM, Dean Weimer [EMAIL PROTECTED] wrote:
 You might try configuring squid as a reverse proxy for a web server actually 
 hosting your proxy.pac file, I have never tried this, but I think it would 
 work.

 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co

 -Original Message-
 From: Jan Welker [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, December 09, 2008 8:04 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Netscape to Squid conversion Issues

 [...snip...]



RE: [squid-users] Netscape to Squid conversion Issues

2008-12-09 Thread Dean Weimer
http_port 192.168.0.15:8080 accel defaultsite=fu.company.com
cache_peer 172.0.0.1 parent 8080 0 no-query originserver

You're pointing it at its loopback address on port 8080; that would still be 
squid. You need to change that to the IP address and port of the web server.  
You can run the web server on the same host if you like, but it would have to be 
on a different port.  Also, I noticed in the earlier message that your auto 
configuration script isn't named in the URL: if it's 
http://proxy.host.net:8080/proxy.pac and you are only specifying 
http://proxy.host.net:8080, you need to make sure the web server is set 
to serve proxy.pac as the default index page.  A sketch of what I mean follows.
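
Something like this is what I mean (the web server address and port below are made up):

http_port 192.168.0.15:8080 accel defaultsite=proxy.host.net
# the web server that actually hosts proxy.pac, not squid itself
cache_peer 192.168.0.20 parent 80 0 no-query originserver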

I hope this is a proof-of-concept test configuration, because if you have 6,000 
clients, 1GB of cache really isn't going to help you very much.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Jan Welker [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 09, 2008 10:27 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Netscape to Squid conversion Issues

I am using Squid 2.6STABLE18.

My config:

acl all src 0.0.0.0/0.0.0.0

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 8443 # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl Safe_ports port 8443# special web
acl purge method PURGE
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost

acl company src 192.168.0.0/16
http_access allow company

http_access deny all

icp_access allow all
http_port 192.168.0.15:8080 accel defaultsite=fu.company.com
cache_peer 172.0.0.1 parent 8080 0 no-query originserver

hierarchy_stoplist cgi-bin ?

cache_dir ufs /var/spool/squid 1024 16 256

access_log /var/log/squid/access.log squid

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY


refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

extension_methods REPORT MERGE MKACTIVITY CHECKOUT

visible_hostname prod-iproxy.company.com

hosts_file /etc/hosts

coredump_dir /var/spool/squid



#
Thanks,
Jan




On Tue, Dec 9, 2008 at 5:02 PM, Dean Weimer [EMAIL PROTECTED] wrote:
 What version of squid are you running, and can you include the configuration 
 lines that you used to configure the reverse proxy settings?

 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co

 -Original Message-
 From: Jan Welker [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, December 09, 2008 9:40 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Netscape to Squid conversion Issues

 I do have a webserver hosting the pac file. I set the default site to
 be that webserver. I do not get it to work. I followed this manual:
 http://wiki.squid-cache.org/SquidFaq/ReverseProxy


 [...snip...]

RE: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

2008-11-18 Thread Dean Weimer
You might want to run make showconfig under each version of the port and verify 
that none of the configuration options have changed in the new version of the 
port.
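
For example, from the port's directory (the path below is a guess; adjust for your ports tree):

cd /usr/ports/www/squid30
make showconfig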

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Marcel Grandemange [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 18, 2008 6:57 AM
To: 'Henrik Nordstrom'
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Downgrade from 3.0stable10 to 3.0stable9

 How and why would this happen? The box hasn't been powered off in months.
 Also, this is the first time something like this has happened.
 So far I'm guessing it was the upgrade to stable 10 that mucked things up.
 Personally I've never had so many issues with any particular version of 
 squid.

As Amos already asked, was the two versions compiled in the same manner?

Yup, identical; I used FreeBSD ports to upgrade to stable 10, and the downgrade
used the same config.

Regards
Henrik



RE: [squid-users] Squid on VMWare ESX

2008-10-06 Thread Dean Weimer
I have two installations on ESX 3.5 Update 2 currently in testing, one running 
on Solaris and the other on Ubuntu, both on the 3.0 branch.  They are running 
with no disk cache, however, and point at parent proxies.  I was concerned about 
how our iSCSI SAN would handle the cache, as it is recommended not to run it on 
RAID.  I am planning to test that as well; I just have to get a few other 
projects finished first.  I have run into no problems with either installation 
so far.  They seem to handle live migration between servers with only a slight 
slowdown during the move.  The load when I go into production will be around 
500 users as well, though I have only had about 25 users pointed at the test 
installations.  I have considered doing a FreeBSD install, as that's what I have 
been using for a long time on physical hardware, but since it is not officially 
supported by VMware, I have been hesitant to try it for fear that it might hurt 
the ESX server's performance.
I hope this helps you some.  I still have a decent amount of testing to do 
before I would be willing to say it works and performs great, but so far it's 
been good.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co.

-Original Message-
From: Altrock, Jens [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 06, 2008 6:20 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid on VMWare ESX

Are there any concerns/problems using Squid on VMware ESX server 3.5? We
got about 500 Users, so there shouldn't be that much load on that
machine. Maybe someone tested that and could just report how it works.

Regards

Jens


[squid-users] Cache Peers and Load Balancing

2008-09-29 Thread Dean Weimer
I am looking at implementing a new proxy configuration using multiple peers 
and load balancing.  I have been looking through the past archives, but I 
haven't found the answers to some questions I have.

Here is what I am trying to accomplish:
Have 3 parent proxy servers, each connected to a DSL line, I will call them 
PARENT1 (1.1.1.1), PARENT2 (2.2.2.2) and PARENT3 (3.3.3.3).
One child proxy that the users will connect to, configured with null storage, I 
will call it CHILD (4.4.4.4)
I want to keep it simple and not include other load balancers; probably not the 
best performance, but the easiest to deploy and maintain.  This of course leaves 
the cache_peer options as my method for load balancing.

Obviously the simplest method would be:
cache_peer 1.1.1.1 parent 3128 3130 round-robin
cache_peer 2.2.2.2 parent 3128 3130 round-robin
cache_peer 3.3.3.3 parent 3128 3130 round-robin
cache_peer_access 1.1.1.1 allow all
cache_peer_access 2.2.2.2 allow all
cache_peer_access 3.3.3.3 allow all
The problem with this is that some websites require a consistent source IP as 
part of their session state; for those, it would be required to add the 
sourcehash option:
cache_peer 1.1.1.1 parent 3128 3130 round-robin sourcehash
cache_peer 2.2.2.2 parent 3128 3130 round-robin sourcehash
cache_peer 3.3.3.3 parent 3128 3130 round-robin sourcehash
cache_peer_access 1.1.1.1 allow all
cache_peer_access 2.2.2.2 allow all
cache_peer_access 3.3.3.3 allow all
The question here is what problems source hash brings to the table.  For 
starters, I know it is persistent and doesn't change once established unless 
a condition changes.  But what happens when PARENT1 goes down?  I would expect 
the hashes associated with it to be load balanced round-robin between PARENT2 
and PARENT3.  What happens when PARENT1 comes back into service?  Will the 
original hashes associated with it resume using it, or will they stay for the 
foreseeable future on the new parent?
Now this brings another possible option to the table: the websites that need a 
persistent source IP are limited; currently only 8 that I know of are in use by 
the users.  So there is the possibility of using source hash only with sites 
you know need it.  This of course adds maintenance overhead, but it can be 
handled by adding a secondary address to each parent and using an access list.
acl HASHNEEDED dstdomain /usr/local/squid/etc/sourcehash.list
cache_peer 1.1.1.1 parent 3128 3130 round-robin
cache_peer 11.11.11.11 parent 3128 3130 round-robin sourcehash
cache_peer 2.2.2.2 parent 3128 3130 round-robin
cache_peer 22.22.22.22 parent 3128 3130 round-robin sourcehash
cache_peer 3.3.3.3 parent 3128 3130 round-robin
cache_peer 33.33.33.33 parent 3128 3130 round-robin sourcehash
cache_peer_access 1.1.1.1 allow !HASHNEEDED
cache_peer_access 11.11.11.11 allow HASHNEEDED
cache_peer_access 2.2.2.2 allow !HASHNEEDED
cache_peer_access 22.22.22.22 allow HASHNEEDED
cache_peer_access 3.3.3.3 allow !HASHNEEDED
cache_peer_access 33.33.33.33 allow HASHNEEDED
This should give me the best of both previous options, at the cost of increased 
maintenance, and of user calls when they come across a new site that doesn't 
work without the source hash.

Now the other question is whether or not I should configure the 3 parent 
servers as siblings of each other.
Would doing so break the source hash?
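
For reference, what I mean by siblings would be something like this on PARENT1
(untested; same ports as above):

cache_peer 2.2.2.2 sibling 3128 3130 proxy-only
cache_peer 3.3.3.3 sibling 3128 3130 proxy-only
# proxy-only: don't keep a local copy of objects fetched from a sibling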

Please let me know if any of you have suggestions that are completely different, 
keeping in mind that I would like to stick entirely within squid and not utilize 
other technologies.  Feel free to tell me I am completely wrong about how squid 
works with the above configurations; I am pretty much a complete newbie when it 
comes to the cache_peer options.  Since there seems to be a lack of information 
on this on the web page and wiki (please forgive me if it's already there and I 
just didn't search for the right term to find it), I will gladly do my best to 
put everything I learn from this project on the squid wiki to help people in 
the future.


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



[squid-users] prefer_direct configuration

2008-07-08 Thread Dean Weimer
I am trying to set up a new proxy server at a remote location which has both a 
T1 link to our main office and a DSL connection to the internet.  The DSL 
connection has a much larger download capacity than the T1, so it's preferable 
to use it for web browsing, but I would like the proxy server to automatically 
route traffic through the T1 and use our proxy servers here as parents in the 
event that the DSL fails while the T1 line is still up.

I have added the proxy servers at our main office using cache_peer entries, 
and defined the icp_port.

cache_peer 10.50.20.5 parent 8080 8181
cache_peer 10.50.20.4 parent 8080 8181
icp_port 8181

Then added the prefer_direct on entry.
prefer_direct on

I tested by manually entering a false route on the remote proxy server for one 
website.  It does load, but only after waiting for a timeout on each and every 
request (packet traces appear to show 4 attempts for each before falling back 
to the parent cache).  Since this covers not only the html files but also the 
requests for each image, and any subsequent links from the same web site 
continue to follow this behavior, the end result is that a small page with a 
few images takes anywhere from 2 to 4 minutes to complete.
Is there a way to adjust the timeouts, and perhaps have it cache the path for a 
period of time after one failure before trying again?
Or is my method of testing flawed, and causing this behavior?
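
For what it's worth, the knobs I had in mind look like this (values here are made up):

# time squid allows each connect attempt before retrying or failing over
connect_timeout 10 seconds
# a peer that sends no ICP replies within this window is declared dead
dead_peer_timeout 30 seconds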
 
Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



RE: [squid-users] RE: performances ... again

2008-06-06 Thread Dean Weimer
Could you possibly give us the pac script you are using?  I once tried the 
approach of "if the host doesn't resolve, use the proxy, else go direct", since 
internal clients can't resolve outside DNS.  That caused a very similar symptom 
to what you are seeing, as clients had to wait for local DNS timeouts before 
going through the proxy on every page.
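
The pattern I mean looked roughly like this (the proxy address below is made up):

function FindProxyForURL(url, host) {
    // Internal names resolve on the local DNS, so go direct for them;
    // anything that fails to resolve is assumed external and sent to the proxy.
    // The catch: isResolvable() blocks until the local lookup times out for
    // every external host, which is what made every page wait.
    if (isResolvable(host))
        return "DIRECT";
    return "PROXY proxy.example.com:8080";
}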

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Ionel GARDAIS [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 06, 2008 12:55 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: performances ... again

We are using a pac script, mostly with Windows XP clients ...

@Dean : the second page does not load faster than the first one. 
Browser response times are much better early in the morning (with fewer 
connections, obviously). When a blank page takes too much time to load, 
doing a refresh or re-entering the URL in the address bar often unlocks 
page loading, and the page is then displayed with a good response time.

I'll try to investigate if DNS is involved and maybe find a workaround 
to the pac autoconfiguration to do transparent proxy.

Ionel


Ritter, Nicholas wrote:
  I had a problem similar to this at another job site a couple of years ago. 
 The clients were windows xp machines, and they were using wpad/pac style 
 configuration. The fix was transparent caching.

 -Nick


 -Original Message-
 From: Dean Weimer [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, June 05, 2008 1:53 PM
 To: GARDAIS Ionel; squid-users@squid-cache.org
 Subject: [squid-users] Re:[squid-users] performances ... again

 [...snip...]

[squid-users] RE : [squid-users] performances ... again

2008-06-06 Thread Dean Weimer
Your DNS responses were similar to what I saw on those same domains.  But how is 
squid querying DNS?  It can be set to use different servers than the host DNS 
servers that dig would be using.

Do you have any of the following options set in your squid.conf?  If so, what 
are they set to?  (A syntax sketch follows the list.)

DNS OPTIONS
-----------

* check_hostnames
* allow_underscore
* cache_dns_program
* dns_children
* dns_retransmit_interval
* dns_timeout
* dns_defnames
* dns_nameservers
* hosts_file
* dns_testnames
* append_domain
* ignore_unknown_nameservers
* ipcache_size
* ipcache_low
* ipcache_high
* fqdncache_size
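
For reference, the syntax for a few of these looks like the following (values are only examples):

dns_nameservers 10.0.0.1 10.0.0.2
dns_timeout 30 seconds
ipcache_size 1024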

Also, if you haven't already, set up cachemgr.cgi and look at the general 
runtime information page to see what median service times are being reported 
for DNS lookups.  Also look at the IP cache statistics; that will show you all 
cached domains, which should not show the delay when accessed if it is purely a 
DNS issue causing the performance hit.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: GARDAIS Ionel [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 06, 2008 2:56 PM
To: Henrik Nordstrom
Cc: Squid Users
Subject: [squid-users] RE : [squid-users] performances ... again

Okay ...
It's been the hardest 20 minutes of the day: finding a few domain names that 
should not already have been accessed and cached by our DNS.

Well, from Paris, France, time given by dig stats :
- mana.pf (French Polynesia, other side of the Earth, satellite link) : around 
700ms
- aroundtheworld.com, astaluego.com, apple.is, dell.nl, Volvo.se : between 100 
and 150ms
- nintendo.co.jp, Yamaha.co.jp, pioneer.co.jp : around 300ms

Cached entries are returned in less than 1ms.

Ionel


-Message d'origine-
De : Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Envoyé : vendredi 6 juin 2008 21:05
À : GARDAIS Ionel
Cc : Squid Users
Objet : Re: [squid-users] performances ... again

On fre, 2008-06-06 at 14:37 +0200, Ionel GARDAIS wrote:
 I got a user (whom I can trust) who uses an explicit proxy configuration:
 there are no improvements.

Ok. Then it's at the proxy, or the DNS servers it uses.

Remember that to diagnose DNS slowness you need to query for hosts and
domains which have not yet been visited, as the DNS server also caches a
lot. Lookups of already visited domains/hosts are not valid as proof that
the DNS is fine.

 I tried to avoid use of calls which cause DNS lookups (hence the 
 host.match() and host.indexOf() ).

Good.

Regards
Henrik


[squid-users] Re:[squid-users] performances ... again

2008-06-05 Thread Dean Weimer
How are your browsers configured to use the proxy? Manual, wpad script, 
transparent?
 
Could be a problem with discovering proxy settings.

What about a second page on the same server, i.e. http://some.domain.com then 
http://some.domain.com/nextpage.html?  It could be a DNS response issue; perhaps 
your first DNS server is timing out and the clients have to wait for the second 
to respond.  If the second page comes up right away, that would be a good 
indicator of this, as Squid would have cached the DNS lookup from the first 
request.

12Mb/s is a decent chunk of bandwidth that most servers are not going to max 
out; I wouldn't expect to see it pegged all the time, so averaging under 2Mb/s 
is not in itself cause for concern.  The fact that you hit the full rate on 
large downloads means the link is performing well.

I am seeing about 180ms median response time on misses, 5ms median response 
time on hits, and 87ms on DNS lookups.  The server is running a 2GHz CPU and 
1GB of RAM, with an average of 900 req/min, and services about 500 clients 
connected behind 2 T1 lines.  Both lines consistently run at 1.2 to 1.5Mb/s 
from 7am to 6pm when most users are at work.  Disk cache is 8GB on the same 
disk as the system, which is actually a pair of hardware-mirrored Ultra 160 
10K SCSI disks (not ideal, as I have learned a lot more since I first built 
this system), but the performance is excellent, so I haven't found cause to 
change it.  The server is running FreeBSD 5.4; the squid cache and logs are 
installed on their own mount point using the UFS file system, that mount point 
is on a single disk slice encompassing the entire hard drive, and to top it off 
the file system runs at about 90% of capacity, yet another no-no.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: GARDAIS Ionel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 05, 2008 12:11 PM
To: chris brain; squid-users@squid-cache.org
Subject: [squid-users] RE : [squid-users] Re:[squid-users] performances ... 
again

Hi Chris,

The internet link is not congested.
As I wrote, we use less than 2Mb/s of the 12Mb/s we can reach (but yes, upload 
seems to be limited to 512Kb/s, somewhere around the maximum of the line, and 
this might be a bottleneck).
When downloading large files (from tens to hundreds of megabytes), the whole 
12Mb/s is used (showing a 1100KB/s download speed).

After rereading my post, I saw that I did not finish a line :
[...] cache-misses median service times are around 200ms and cache-hits are 
around 3ms but we often see a 10-second lag for browser to start loading the 
page.

Ionel


-Message d'origine-
De : chris brain [mailto:[EMAIL PROTECTED] 
Envoyé : jeudi 5 juin 2008 18:34
À : squid-users@squid-cache.org
Objet : [squid-users] Re:[squid-users] performances ... again

Hi Ionel,

Your performance doesn't look that bad. Our stats roughly work out to be:

1000+ users
NTLM auth
Average HTTP requests per minute since start:   2990.8
with max 30% hits (so your hits look comparable to ours).
Our cache miss service time averages about 160ms
and cache hit service time about 10ms,
running an IBM blade, P4 3GHz CPU, 1GB RAM, mirrored drive.

Our links can get quite congested and we don't get complaints about the 
performance.

Are you having internet link performance issues?  Are you monitoring it 
(snmp/netflow)?

chris 






RE: [squid-users] Basic Config Question

2008-05-29 Thread Dean Weimer
I run squid in a DMZ and have no problem getting usage information from it.  
The only issue I could see a firewall causing is if your firewall is using NAT 
(Network Address Translation) or PAT (Port Address Translation): then you could 
not determine which machine a request came from, unless you looked fast enough, 
while the firewall still had the translation defined.  In our case the inside 
hosts are exempted from translation when accessing the Squid server; however, 
these are DHCP addresses, so they don't really mean too much, as the PC that 
received a given address can change.  Basically, it depends on the firewall, 
its configuration, and which usage information you want, as to whether or not 
it would cause a problem.  If you do bypass the firewall, I would recommend 
installing a software-based firewall, or using one already built into your 
Squid host operating system, to protect your Squid server.

If this is indeed the point your consultant was trying to make, I must agree 
with Squidly: you may need a better consultant, since he/she should have been 
able to explain this easily as the reason.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co.

-Original Message-
From: Joel Jaeggli [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 29, 2008 11:24 AM
To: Squidly
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Basic Config Question

Squidly wrote:
 I have a consultant telling me that I need to have my squid server
 dual homed and bypassing my firewall for squid to be able to properly
 report usage. Is this the case? Is there some other reason this config
 is required?

reporting and connectivity are separate issues.

measuring octets between the cache and the internet and the cache and 
the clients ought to be easy enough, or you need a better consultant.




RE: [squid-users] Squid virtual ips problem

2008-05-20 Thread Dean Weimer
I believe you need to use tcp_outgoing_address 
(http://www.squid-cache.org/Versions/v3/3.0/cfgman/tcp_outgoing_address.html).  
Glad you asked this; I had never actually thought about it, but I think it is 
also just what I need to solve a problem I have with some websites and our T1 
load balancer, by forcing traffic to them through a virtual IP that bypasses 
the load balancer.

This should get the behavior you are after. 

acl machine1 src 192.168.10.50/32
acl machine2 src 192.168.10.60/32
acl outbound1 myip 192.168.10.2/32
acl outbound2 myip 192.168.10.3/32
tcp_outgoing_address 192.168.10.2 machine1
tcp_outgoing_address 192.168.10.2 outbound1
tcp_outgoing_address 192.168.10.3 machine2
tcp_outgoing_address 192.168.10.3 outbound2
tcp_outgoing_address 192.168.10.1

  All requests from 192.168.10.50 and .60 will go out through 192.168.10.2 and 
.3 respectively. Requests made to 192.168.10.2 and .3 will leave via the address 
they came in through. Requests made to 192.168.10.1 will go out through 
192.168.10.1, using the default rule at the end.
I haven't done enough recently with acls in squid to know for sure which rule 
takes precedence when multiple rules match.  You may have to change the order 
of the rules around to make them behave exactly as you want.


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co.

-Original Message-
From: marpel78 [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 20, 2008 11:33 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid virtual ips problem


Hi all.
I have a big problem with squid on a linux box.

My server has three ips: 192.168.10.1 on eth0, plus 192.168.10.2 (virtual
eth0:1) and 192.168.10.3 (eth0:2).

Squid is listening on 192.168.10.1, .2 and .3, port 8080.

My problem is that I would like to make a selection based on source ip.

If I get a request from 192.168.10.50 I would like squid to use 192.168.10.2
to go to the internet.
If I get a request from 192.168.10.60 I would like squid to use 192.168.10.3
to go to the internet.

But my squid only uses its physical address 192.168.10.1 to go to the internet,
even when my clients use 192.168.10.2 or 192.168.10.3 as the proxy.

I tried to use iptables + squid + ip route but it does not work.
Any suggestions please??
Thanks
-- 
View this message in context: 
http://www.nabble.com/Squid-virtual-ips-problem-tp17344754p17344754.html
Sent from the Squid - Users mailing list archive at Nabble.com.



RE: [squid-users] Unable to access support.microsoft.com

2008-05-14 Thread Dean Weimer
Scott,
If you want to test this without modifying your proxy configuration, you 
can uncheck the HTTP 1.1 options on the Internet Explorer Options Advanced tab. 
Also, does it work through Firefox or another browser?  Firefox reports a 
content encoding error when accessing support.microsoft.com when this problem 
is present.  If these steps show that this is indeed the problem, you then need 
to discover why your other subnets are working; are they in fact bypassing 
the proxy?

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 13, 2008 11:12 PM
To: Thompson, Scott (WA)
Cc: Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] Unable to access support.microsoft.com

 Thx Amos, but why would it work from every other subnet but the one I am
 on?
 They all go out the same proxy server

 Scott

It could be one of the other causes entirely; that's just the most common one 
seen with support.microsoft.com recently.
 - TCP-level stuff like window scaling, ECN, and PMTU discovery comes in
second place and produces similar errors.
 - configuration loops (302s) may have a similar effect.

The biggest clue is the exact content of the error page. A squid-generated
page you can at least track through cache.log.
If it's one of the IE-generated ones, then it's probably a low-level issue
with IE either handling the page info or connecting to squid.

Amos


 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, 14 May 2008 10:37 AM
 To: Thompson, Scott (WA)
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Unable to access support.microsoft.com


 Hi all
 A strange one and probably something stupid at my end but for some
 reason I cannot access support.microsoft.com
 In IE all I get is a 'cannot display the webpage' error, nothing from
 Squid to indicate an error.

 support.microsoft.com have a broken HTTP/1.1 server.

 Squid 3.0, 2.5, or an early 2.6?
 2.x - Upgrade to a more recent release.
 3.x - A header hack to your config is available to bypass this.

   # Fix broken sites by removing Accept-Encoding header
   acl broken dstdomain ...
   request_header_access Accept-Encoding deny broken
   # NP: don't forget to remove it again when you upgrade out of 3.0

 For all the guff see
   http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/

 Amos







[squid-users] Unable to Access support.microsoft.com through Squid

2008-05-02 Thread Dean Weimer
I have recently been unable to browse support.microsoft.com through our squid 
proxy servers.

Investigation into the issue leads me to believe that Microsoft is responding 
with gzip transfer encoding.

Firefox Reports a content encoding error:
Content Encoding Error
The page you are trying to view cannot be shown because it uses an invalid or 
unsupported form of compression.

Internet Explorer reports that I am not connected to the internet or that the 
web server is not responding.
I was able to find a workaround for IE by un-checking the "use HTTP 1.1" 
options under the advanced tab.

We have 2 proxy servers configured: one is running FreeBSD 5.4 with 
Squid-2.5.STABLE13, the other is running FreeBSD 6.2 with Squid-2.6.STABLE9.  
Both have the same problem.  After some reading, I found that this issue should 
have been fixed in Squid-2.6+ and Squid-3.1+.  Since 3.1 is not out in 
production state yet, 2.6 seems to be the way to go; is STABLE9 not new enough 
to have the fix in it?  I already have 3.0 compiled and was ready to put it in 
place, but I have no problem switching to the latest version of 2.6 if that 
will fix this problem.

Has anyone else run into this issue and found a solution to the problem?

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


RE: [squid-users] Unable to Access support.microsoft.com through Squid

2008-05-02 Thread Dean Weimer
Thanks for your help Mick, this solved the problem.

Also, after seeing this I was able to figure out that in squid 3.0 you can use 
request_header_access in place of header_access; see below.
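
In other words, with Mick's acl the 3.0 form is:

acl broken dstdomain support.microsoft.com
request_header_access Accept-Encoding deny broken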

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Michael Graham [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 02, 2008 10:37 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Unable to Access support.microsoft.com through Squid

 Has anyone else ran into this issue, and found a solution to the problem?  

I do

# Fix broken sites by removing Accept-Encoding header
acl broken dstdomain support.microsoft.com
acl broken dstdomain .digitalspy.co.uk
header_access Accept-Encoding deny broken

The problem is that sending Accept-Encoding causes these sites to reply with 
a header that they're not supposed to use (Transfer-Encoding: chunked).

Cheers,
Mick