[squid-users] ACL all squid3

2011-03-01 Thread Voy User
I know questions about the 'all' splay tree warning have been asked on the list
before, and I found the reply at
http://www.mail-archive.com/squid-users@squid-cache.org/msg57540.html

However, my question is slightly different.
I am using squid3 with debian lenny.

I am using squid3 with webmin (yeah, I know a lot of people don't like webmin).
The webmin squid module hasn't been updated for squid3, so it doesn't know
about the 'all' acl being inbuilt. So if I do not have the 'all' acl in
squid.conf and try to add rules using webmin, I don't see 'all' in the list of
ACLs it offers for 'Allow' and 'Deny'.

I have a few options:
1) Move back to an older version of squid. I would prefer not to do this, but
if I had to, what is the newest version of squid I can use which doesn't have
the 'all' acl built in, and what is the best way to define 'all'? Until now I
have been defining it as

acl all src 0.0.0.0/0.0.0.0

That definition is from the visolve squid3 docs (which have probably not been updated):
http://www.visolve.com/squid/squid30/accesscontrols.php#Recommended_Minimum_acl_Configuration

2) Continue using squid3 and define the 'all' acl:

acl all src 0.0.0.0/0.0.0.0

This gives me the following warning:
--
Restarting Squid HTTP Proxy 3.0: squid3 Waiting.done.
2011/03/01 13:13:33 WARNING: '0.0.0.0/0.0.0.0' is a subnetwork of 
'0.0.0.0/0.0.0.0'
2011/03/01 13:13:33 WARNING: because of this '0.0.0.0/0.0.0.0' is ignored to 
keep splay tree searching predictable
2011/03/01 13:13:33 WARNING: You should probably remove '0.0.0.0/0.0.0.0' from 
the ACL named 'all'
2011/03/01 13:13:33 squid.conf line 2575: http_access allow
2011/03/01 13:13:33 aclParseAccessLine: Access line contains no ACL's, skipping


From the warning it appears that squid just skips this line and continues.
This should work fine, because the webmin squid module sees 'all' in the conf
file and hence starts showing it in the list of ACLs.
Does anyone see any problem with this?

Or is there a better way with squid3?




[squid-users] icap and https

2011-03-01 Thread arielf
Hello,

I am trying to use Squid as a proxy so that traffic goes through an ICAP
service I built and continues to the intended site. I will have several clients
(browsers) accessing several server sites.
I need help configuring HTTPS correctly :(

I tried testing out my configuration using a browser at IP 9.148.16.192.
I used the Firefox FoxyProxy plugin to direct HTTP traffic to 9.148.26.247:3128
and HTTPS to 3129 (the machine/ports where my squid is listening; checked this
with netstat).

I started testing two sites, one HTTP and the other HTTPS:
1. http://mydomain.com/MyCRM/index.php
2. https://9.148.26.247:8443/ - this site runs on Tomcat, which I configured
with mykey.jks

When I start, I get all OK messages:
2011/03/01 08:23:40| Accepting  HTTP connections at [::]:3128, FD 15.
2011/03/01 08:23:40| Accepting HTTPS connections at [::]:3129, FD 16.
2011/03/01 08:23:40| HTCP Disabled.
2011/03/01 08:23:40| Configuring Parent 9.148.16.192/3129/0

When I try site 1 (HTTP) all seems to work fine.
However, when I try site 2, I get an error:
2011/03/01 08:37:54| clientNegotiateSSL: Error negotiating SSL connection on
FD 12: error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy
request (1/-1)

Where am I going wrong?
many thanks, Ariel :)

my config is below:
#
# configure https port
#
https_port 3129 key=/root/security/mykey.key.pem
cert=/root/security/mycert.crt.pem vhost
cache_peer 9.148.16.192 parent 3129 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER name=securePeer1
cache_peer_access securePeer1 allow all

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localnet
http_access allow localhost
always_direct allow all
http_access allow all

# Squid normally listens to port 3128
http_port 3128

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

icap_log /var/log/squid/icap.log icap_squid
icap_enable on
icap_send_client_ip on
icap_service_failure_limit -1
icap_service_revival_delay 30
icap_service myservice respmod_precache bypass=0
icap://127.0.0.1:1344/myservice
adaptation_access myservice allow all

request_header_access Accept-Encoding deny all
append_domain .haifa.ibm.com

-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icap-and-https-tp3329449p3329449.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] ACL all squid3

2011-03-01 Thread Amos Jeffries

On 01/03/11 21:04, Voy User wrote:

I know questions about the 'all' splay tree warning have been asked on the list
before, and I found the reply at
http://www.mail-archive.com/squid-users@squid-cache.org/msg57540.html

However, my question is slightly different.
I am using squid3 with debian lenny.

I am using squid3 with webmin (yeah, I know a lot of people don't like webmin).
The webmin squid module hasn't been updated for squid3, so it doesn't know
about the 'all' acl being inbuilt. So if I do not have the 'all' acl in
squid.conf and try to add rules using webmin, I don't see 'all' in the list of
ACLs it offers for 'Allow' and 'Deny'.

I have a few options

snip

2) Continue using squid3 and define the 'all' acl:

acl all src 0.0.0.0/0.0.0.0

This gives me the following warning:
--
Restarting Squid HTTP Proxy 3.0: squid3 Waiting.done.
2011/03/01 13:13:33 WARNING: '0.0.0.0/0.0.0.0' is a subnetwork of 
'0.0.0.0/0.0.0.0'
2011/03/01 13:13:33 WARNING: because of this '0.0.0.0/0.0.0.0' is ignored to 
keep splay tree searching predictable
2011/03/01 13:13:33 WARNING: You should probably remove '0.0.0.0/0.0.0.0' from 
the ACL named 'all'


Yes it can be ignored if you must. It will just make a lot of noise on 
every start, restart and reconfigure.


The safe way to define it for all squid versions 2.6+ is:

   acl all src all



2011/03/01 13:13:33 squid.conf line 2575: http_access allow
2011/03/01 13:13:33 aclParseAccessLine: Access line contains no ACL's, skipping


This part is a separate issue. Something has screwed up your http_access 
line. That should have a WARNING on it as well.
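
(For illustration: the log shows the skipped line is literally

   http_access allow

with no ACL names after the action; restoring whatever name was deleted,
e.g. 'http_access allow all', stops that warning.)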


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] icap and https

2011-03-01 Thread Amos Jeffries

On 01/03/11 21:49, arielf wrote:

Hello,

I am trying to use Squid as a proxy so that traffic goes through an ICAP
service I built and continues to the intended site. I will have several clients
(browsers) accessing several server sites.
I need help configuring HTTPS correctly :(

I tried testing out my configuration using a browser at IP 9.148.16.192.
I used the Firefox FoxyProxy plugin to direct HTTP traffic to 9.148.26.247:3128
and HTTPS to 3129 (the machine/ports where my squid is listening; checked this
with netstat).

I started testing two sites, one HTTP and the other HTTPS:
1. http://mydomain.com/MyCRM/index.php
2. https://9.148.26.247:8443/ - this site runs on Tomcat, which I configured
with mykey.jks

When I start, I get all OK messages:
2011/03/01 08:23:40| Accepting  HTTP connections at [::]:3128, FD 15.
2011/03/01 08:23:40| Accepting HTTPS connections at [::]:3129, FD 16.
2011/03/01 08:23:40| HTCP Disabled.
2011/03/01 08:23:40| Configuring Parent 9.148.16.192/3129/0

When I try site 1 (HTTP) all seems to work fine.
However, when I try site 2, I get an error:
2011/03/01 08:37:54| clientNegotiateSSL: Error negotiating SSL connection on
FD 12: error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy
request (1/-1)

Where am I going wrong?


The wrong step is using https_port to receive traffic from the browser.
That port type is for receiving SSL/TLS-encrypted connections, and none of
the popular browsers support such encryption on the link between themselves
and proxies.


The browser wraps https:// inside a plain-text HTTP method called 
CONNECT and sends it to the Squid port. The encrypted part goes through 
a tunnel the CONNECT creates.
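
For illustration (a generic sketch, not captured from this thread), the
plain-text request the browser sends for your site 2 looks roughly like:

   CONNECT 9.148.26.247:8443 HTTP/1.1
   Host: 9.148.26.247:8443

Only after the proxy answers 200 Connection established does the TLS
handshake start inside that tunnel.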


This error message about negotiating is due to https_port failing to 
decrypt the non-encrypted CONNECT.


In order to break into the CONNECT requests you will need the ssl-bump 
mode enabled on the normal http_port. Then send both HTTP and HTTPS 
traffic to the same proxy port via regular browser proxy settings.
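
A minimal sketch of that setup, reusing the certificate paths from your
config (and assuming a squid built with SSL support):

   http_port 3128 ssl-bump key=/root/security/mykey.key.pem cert=/root/security/mycert.crt.pem
   ssl_bump allow all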


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


[squid-users] opening a port for a specific destination

2011-03-01 Thread a bv
Hi,

I want to open an unusual port to a specific destination IP for
internal users' access.
Can you provide me with the configuration steps?

Regards


Re: [squid-users] Squid 3.1 reverse proxy to OWA on IIS7

2011-03-01 Thread Gordon McKee

Hi

Okay - sorry, I am just using our website as a test - it is on the same
server as the exchange box and is reverse proxied.  Browse the site and you
will see what I mean (how slow it is).  Something is going on causing the
images to be sent really slowly.  www.optimalprofit.com is the website and
www.optimalprofit.com/owa is the exchange domain.  The exchange login page
should be really fast - it takes about 4 min to load.  If I browse to the
site internally it is really fast.

I am kind of clutching at straws as to what is wrong.  Text comes down fast
and images are really slow.  Squid worked a treat with version 2.6, but 2.7,
3.0 and 3.1 all make the reverse proxy really slow.

Many thanks

Gordon

-Original Message- 
From: Amos Jeffries

Sent: Monday, February 28, 2011 9:38 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1 reverse proxy to OWA on IIS7

On Mon, 28 Feb 2011 16:18:27 -, Gordon McKee wrote:

Hi

The GET / HTTP/1.1 returns:

GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close


:) I hope not. That is the initial request.



and the GET /images/op-hwynit-ad1.gif HTTP/1.1 to pull an image
file returns:

HTTP/1.0 200 OK
Content-Type: image/gif
Content-Encoding: gzip
Last-Modified: Wed, 08 Dec 2004 15:34:12 GMT
Accept-Ranges: bytes
ETag: a0d3e25d3bddc41:0
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Mon, 28 Feb 2011 16:13:28 GMT
Content-Length: 264171
X-Cache: MISS from kursk.gdmckee.home
Via: 1.0 kursk.gdmckee.home (squid/3.1.11)
Connection: close

I have tried the telnet codes to access the OWA folder and the
scripts come back very fast and the images take forever.  Not sure
what is going wrong.


It's 258 KB after compression and not being cached. Size may have
something to do with it if the scripts are much smaller.


Amos 





Re: [squid-users] opening a port for a specific destination

2011-03-01 Thread Amos Jeffries

On 01/03/11 22:34, a bv wrote:

Hi,

I want to open an unusual port to a specific destination IP for
internal users' access.
Can you provide me with the configuration steps?

Regards


The default Squid configuration already allows most unusual website
ports to be accessed, including any port above 1024.


What you ask is also a trivially obvious configuration. I think it would 
be best if you learned how to operate your Squid rather than being 
handed the answer immediately.


Documentation on how to write access controls for Squid can be found 
here: http://wiki.squid-cache.org/SquidFaq/SquidAcl.

Section one covers the ACL test types available and how to define them.
Section two covers how to use those tests on access lines to do things.
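
For example, the define-then-use pattern those sections describe looks like
this (a generic sketch with a hypothetical ACL name, not a rule from this
thread):

   acl example_site dstdomain .example.com
   http_access allow example_site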

Or, for even simpler explanations, see the new Squid-3 beginner's guide
available for sale at
https://www.packtpub.com/squid-proxy-server-31-beginners-guide/book


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] ACL all squid3

2011-03-01 Thread Voy User
 - Original Message -
 From: Amos Jeffries
 Sent: 03/01/11 02:41 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] ACL all squid3
 
  2011/03/01 13:13:33 squid.conf line 2575: http_access allow
  2011/03/01 13:13:33 aclParseAccessLine: Access line contains no ACL's, 
  skipping
 
 This part is a separate issue. Something has screwed up your http_access 
 line. That should have a WARNING on it as well.


Thank you for your reply. The http_access warning was caused by something I 
accidentally
deleted when I last edited squid.conf.



Re: [squid-users] opening a port for a specific destination

2011-03-01 Thread Amos Jeffries

On 01/03/11 23:05, a bv wrote:

Thanks,

I already did some configuration on it, but sometimes I get my mind
mixed up, so I want to go over my question again.

First: do I have to define my port as a Safe port?


The default Safe_ports ACL already has ports 1025-65535, which includes
yours.  If you have removed that then you will need to add the port back in.



Second: if I do the first one, will all the clients be able to access
anywhere with that port?


Yes. Normally all clients can access websites regardless of whether they
are served by Java, SOAP or AJAX services on alternative ports.



Third: if so, what should my ACL be?

acl myweirdport port
http_access allow myweirdport x.y.z.t

x.y.z.t is the destination IP which I would like the clients to access with
that port?


You need an ACL to define each test: one for the IP, one for the port.

To limit the port access to only that IP...
 Adding the port to Safe_ports will make it generally not rejected.
 Then you must add a *deny* access rule and use ! (meaning 'not')
before the IP ACL to reject other IPs going there.
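
A minimal sketch of those rules (the port number is a placeholder; x.y.z.t
stands for your destination IP as in the question):

   acl Safe_ports port 12345              # placeholder: stops the deny !Safe_ports rule rejecting it
   acl weirdport port 12345               # placeholder port
   acl weirddst dst x.y.z.t
   http_access deny weirdport !weirddst   # reject that port everywhere except x.y.z.t

The deny line needs to come before your general allow rules.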


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


[squid-users] Re: icap and https

2011-03-01 Thread arielf
Many thanks Amos,

I followed your advice; unfortunately I'm not there yet. This is what I did,
please see where I went wrong now.

I reconfigured squid to use ssl-bump and configured both the HTTP and HTTPS
sites in Firefox FoxyProxy to port 3128.
In squid.conf I removed the https section and added:
http_port 3128 ssl-bump key=/root/security/mykey.key.pem
cert=/root/security/mycert.crt.pem
ssl_bump allow all

It started OK, but failed again when I tried to access the HTTPS site:
2011/03/01 11:03:51| Accepting  bumpy HTTP connections at [::]:3128, FD 15.
2011/03/01 11:03:51| HTCP Disabled.
2011/03/01 11:03:51| Squid modules loaded: 0
2011/03/01 11:03:51| Adaptation support is off.
2011/03/01 11:03:51| Ready to serve requests.
2011/03/01 11:03:52| storeLateRelease: released 0 objects
-BEGIN SSL SESSION PARAMETERS-
MHECAQECAgMBBAIANQQgOETLtr/8z9TaMvWhjyT6g3ZmAB87r+AjuOx7AmD8NvQE
MPMyqntXd1ZJwAebb4K+5KKX0f8vnMlQjjFo7kWuK1xJHQZnnu5YBONvcuyIbDj7
yKEGAgRNbRkcogQCAgEspAIEAA==
-END SSL SESSION PARAMETERS-
2011/03/01 11:04:44| SSL unknown certificate error 20 in
/C=IL/ST=NA/L=Haifa/O=IBM/OU=HRL/CN=Magen
2011/03/01 11:04:44| fwdNegotiateSSL: Error negotiating SSL connection on FD
13: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate
verify failed (1/-1/0)

After reading other posts with a similar error I added:
http_port 3128 ssl-bump key=/root/security/mykey.key.pem
cert=/root/security/mycert.crt.pem clientca=/root/security/myCertCA.crt.pem

Again it started OK, but failed with a different error when trying to proxy
an HTTPS site:

2011/03/01 11:10:31| Accepting  bumpy HTTP connections at [::]:3128, FD 15.
2011/03/01 11:10:31| HTCP Disabled.
2011/03/01 11:10:31| Squid modules loaded: 0
2011/03/01 11:10:31| Adaptation support is off.
2011/03/01 11:10:31| Ready to serve requests.
2011/03/01 11:10:32| storeLateRelease: released 0 objects
2011/03/01 11:11:08| clientNegotiateSSL: Error negotiating SSL connection on
FD 12: error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer did not
return a certificate (1/-1)

Again, please help: what did I do wrong now?
Many thanks, Ariel.

-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icap-and-https-tp3329449p3329673.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] squid non-accel default website

2011-03-01 Thread Nils Hügelmann
Hi Amos,

are there any news about this?


Thanks,

Nils Hügelmann

 On Wed, 12 May 2010 23:02:08 +0200, Nils Hügelmann n...@huegelmann.info
 wrote:
  Hi Henrik,
 
  thanks for the answer, a fallback feature for direct requests would be
  great :-)
 
  regards
  nils
 
  On 12.05.2010 22:38, Henrik Nordström wrote:
  On Tue 2010-05-11 17:04 +0200, Nils Hügelmann wrote:
 
   
  At the current state, it shows an 'invalid URL ... while trying to
  retrieve the URL: /' error on direct access, which prevents using URL
  rewriters (and deny_info too?!), so how to do this?...
 
  You can't.
 
  The reason is that Squid really needs to know if a request is being
  proxied or accelerated, as it has an impact on how the request should be
  processed, and HTTP requires web servers (including accelerators) to
  also know how to process requests using the full URL.
 
  Can't you move the proxy to a separate port, freeing up port 80 to be
  used as a web server?
 
  But yes, I guess we could add support for a fallback mode when seeing an
  obvious webserver request on a proxy port instead of bailing out with
  'invalid request'.
 

 FYI:
  There are some security holes opened when defaulting to intercept or
 accel mode on supposedly forward traffic.
 Mandriva has supplied captive-portal 'splash' pages for 3.2 that can be
 sent instead of the current invalid response page. If anyone can spare the
 time to implement a bit of polish, please let me know; there are only two
 smallish alterations needed to make this happen for 3.2.

 Amos



Re: [squid-users] Frustrating Invalid Request Reply

2011-03-01 Thread Ümit Kablan
Hi,

2011/2/28 Amos Jeffries squ...@treenet.co.nz:
 On Mon, 28 Feb 2011 16:51:54 +0200, Ümit Kablan wrote:

 Hi, Sorry for the late reply,

 snip

 Enter the full phrase and hit enter: [192.168.1.10 -> 192.168.1.120]

 GET /search?hl=trsource=hpbiw=1280bih=897q=ertexaq=2aqi=g10aql=oq=ertfp=3405898bc8895081tch=1ech=1psi=_LBrTd6iFM-o8QPm5P3tDA12989033090755safe=active HTTP/1.1
 Host: www.google.com.tr
 Proxy-Connection: keep-alive
 Referer: http://www.google.com.tr/
 Accept: */*
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
 AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.224
 Safari/534.10
 Accept-Encoding: gzip,deflate,sdch
 Accept-Language: tr-TR,tr;q=0.8,en-US;q=0.6,en;q=0.4
 Accept-Charset: ISO-8859-9,utf-8;q=0.7,*;q=0.3
 Cookie: NID=44=WDrVJT3IHROI8LLhYljiGzpNonvug9envnNeEoo6qdVxw1B1eHwarlfgZgODzoTsj7i7QGza5luXEqgQuFx7eWduz3Pcc-8IFrLp8tTyIaJC9VgyXEyQAv0qBQD3Dxm9; PREF=ID=e5ce72ddfd5e542a:U=0163fee991eaa35b:FF=0:TM=1298386459:LM=1298903279:S=6Sakp_hgUHZXMW1W

 [192.168.1.120 -> 192.168.1.10]

 HTTP/1.0 400 Bad Request
 Server: squid/2.7.STABLE8
 Date: Mon, 28 Feb 2011 14:30:43 GMT
 Content-Type: text/html
 Content-Length: 2044
 X-Squid-Error: ERR_INVALID_REQ 0
 X-Cache: MISS from kiemserver
 X-Cache-Lookup: NONE from kiemserver:3128
 Via: 1.0 kiemserver:3128 (squid/2.7.STABLE8)
 Connection: close

 The last part is the weird part. It crops the full URL, and squid thinks it
 is talking directly to the origin, as you already said. Or I am missing
 something obvious.


 I'm still convinced this is some form of configuration mistake somewhere.
 Let's step through this piece by piece in detail and see if anything appears.

Hard to stay sane but OK :-)

 Which browser are you using to test with?
  What proxy settings are entered into its control panel?

I tried it with Mozilla Firefox 3.6.13 by entering 192.168.1.10 port
3128 in Preferences > Network > Configuration. I configured Internet
Explorer via Tools > Internet Options > Connections > Local Network
Configuration and typed in the proxy IP and port. Google Chrome acquires
the options from the system, so it is the same as IE.


 What does the client hosts file contain?
 What does the client resolv.conf or equivalent Windows network connection
 settings contain as gateway router, domain, and DNS servers?

The client is Windows, configured to use a static IP of 192.168.1.120 with a
255.255.255.0 subnet mask and 192.168.1.1 gateway (and DNS). The hosts
equivalent is name-to-address mapping, I assume, in which I found nothing
(except 127.0.0.1 -> localhost, I guess).

I sometimes think that the javascript makes an explicit request which
leads to that misinterpretation by the browsers. I have no strong
clues about it, though. And I also want to ask if we can make some
workaround at the proxy layer without involving the browsers. As I
previously said: can Squid fix such bad requests by concatenating other
fields from the HTTP request to build the correct URL?


 Amos


-- 
Ümit


RE: [squid-users] RDP, Certificates and Squid

2011-03-01 Thread Damian Teasdale
Putting it above the Internet Denied ACL worked. Thanks for the help.

Thanks

Damian Teasdale


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: February/23/2011 2:07 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] RDP, Certificates and Squid

 On Wed, 23 Feb 2011 13:55:54 -0500, Chad Naugle wrote:
 I am not certain about my response, but I have some ideas.

 - Your ACL ordering, as is often the case, is most likely to blame.
 Squid applies ACLs in order, top-down, and checks each ACL in that
 order when http_access is being applied.
 - I believe the ACL blocking access may be the 'PURGE' ACL, since the
 server could be sending them no-cache headers. -- I may need
 clarification on this behavior from another person, but you can attempt
 to comment it out to see if this is true, or add something such as
 http_access allow PURGE GoDaddy.

 Not PURGE; that is just a method-type ACL, albeit a performance-sapping
 one.

 - Any of your explicit src / dstdomain allows will not log usernames
 returned by the InternetUsers ACL.
 - Does the Internet_Denied and/or FacebookUsers nt_groups involve a
 login prompt, or blind authentication?
 - All explicit allows / denies should be placed _before_ authentication
 routines.


 :) it's pretty much always ordering.

 In this case the block is a 407, so look for things which require
 authentication to be tested.


 ...

 Damian Teasdale 2/23/2011 1:27 PM 
 This is the whole list from what I can tell.

 snip

 acl InternetDenied external nt_group Internet_Denied
 acl FacebookUsers external nt_group FacebookUsers

 These are missing their external_acl_type definition, but something
 called nt_group is a safe bet that it's doing a login.

 snip
 acl InternetUsers proxy_auth REQUIRED

 And this glaring auth ACL.

 snip

 http_access deny InternetDenied

 ... AND the first thing Squid does is check one of those nt_group ACLs.

  ** This is very, very likely the problem.
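
 (A minimal sketch of that reordering, using the ACL names from this config,
 and matching the fix Damian confirms at the top of the thread:

    http_access allow GoDaddy
    http_access deny InternetDenied

 With the explicit allow first, GoDaddy requests never trigger the
 nt_group lookup.)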


 no_cache deny Itrade

 NP: time to remove the no_ bit off the front of that directive.

 http_access allow PURGE localhost
 http_access deny PURGE
 http_access allow GC
 http_access allow Facebook FacebookUsers

 ... somewhat later facebook users are checked, but only if they are
 visiting facebook.
 This auth ACL will not be the problem.

 http_access deny Facebook
 http_access allow Blackberry
 http_access allow Citrix
 http_access allow WindowsUpdate
 http_access allow BusinessObjects
 http_access allow MapInfo
 http_access allow MindLeaders
 http_access allow DiscoverLink
 http_access allow Knotia
 http_access allow Chep
 http_access allow Auditors
 http_access allow pdr
 http_access allow GoDaddy
 http_access allow InternetUsers

 ... then finally anyone who can login is permitted.


 # And finally deny all other access to this proxy
 http_access deny all

 Thanks

 Damian Teasdale


 snip

 The Oppenheimer Group  CONFIDENTIAL

 NP: Posted to a public mailing list archived in perpetuity.


 Amos



The Oppenheimer Group  CONFIDENTIAL

This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the email by you is prohibited.


[squid-users] Reverse proxy: quota limit by IP

2011-03-01 Thread Vincent BLANQUE
Hi everybody,

I would like to limit connections to my webserver by defining policies
on the data volume downloaded and the connection time, per IP, per month.
Do you think it is possible to implement this with Squid as a reverse
proxy? Is it easy?

My server runs Django. As the user needs to be authenticated, maybe the
best solution is to define a time limit through the session time, but what
about the data volume downloaded?

thx,

Vincent


Re: [squid-users] squid non-accel default website

2011-03-01 Thread Amos Jeffries

On Tue, 01 Mar 2011 16:43:40 +0100, Nils Hügelmann wrote:

Hi Amos,

are there any news about this?


The splash page template has been added to 3.2, and the langpack already
includes setup instructions for several popular browsers.


The code change to send it on non-proxy requests has not been done yet.

A secondary change to make squid look up its first available generic 
listening port instead of using a hard-coded 3128 for use in that 
template has also not yet been done.


Amos




On Wed, 12 May 2010 23:02:08 +0200, Nils Hügelmann n...@huegelmann.info
wrote:
 Hi Henrik,

 thanks for the answer, a fallback feature for direct requests would be
 great :-)

 regards
 nils

 On 12.05.2010 22:38, Henrik Nordström wrote:
 On Tue 2010-05-11 17:04 +0200, Nils Hügelmann wrote:

 At the current state, it shows an 'invalid URL ... while trying to
 retrieve the URL: /' error on direct access, which prevents using URL
 rewriters (and deny_info too?!), so how to do this?...

 You can't.

 The reason is that Squid really needs to know if a request is being
 proxied or accelerated, as it has an impact on how the request should
 be processed, and HTTP requires web servers (including accelerators)
 to also know how to process requests using the full URL.

 Can't you move the proxy to a separate port, freeing up port 80 to be
 used as a web server?

 But yes, I guess we could add support for a fallback mode when seeing
 an obvious webserver request on a proxy port instead of bailing out
 with 'invalid request'.


FYI:
 There are some security holes opened when defaulting to intercept or
accel mode on supposedly forward traffic.
Mandriva has supplied captive-portal 'splash' pages for 3.2 that can be
sent instead of the current invalid response page. If anyone can spare the
time to implement a bit of polish, please let me know; there are only two
smallish alterations needed to make this happen for 3.2.

Amos




Re: [squid-users] Frustrating Invalid Request Reply

2011-03-01 Thread Amos Jeffries

On Tue, 1 Mar 2011 17:56:02 +0200, Ümit Kablan wrote:

Hi,

2011/2/28 Amos Jeffries squ...@treenet.co.nz:

On Mon, 28 Feb 2011 16:51:54 +0200, Ümit Kablan wrote:


Hi, Sorry for the late reply,


snip


Enter the full phrase and hit enter: [192.168.1.10 -> 192.168.1.120]

GET /search?hl=trsource=hpbiw=1280bih=897q=ertexaq=2aqi=g10aql=oq=ertfp=3405898bc8895081tch=1ech=1psi=_LBrTd6iFM-o8QPm5P3tDA12989033090755safe=active HTTP/1.1
Host: www.google.com.tr
Proxy-Connection: keep-alive
Referer: http://www.google.com.tr/
Accept: */*
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.224
Safari/534.10
Accept-Encoding: gzip,deflate,sdch
Accept-Language: tr-TR,tr;q=0.8,en-US;q=0.6,en;q=0.4
Accept-Charset: ISO-8859-9,utf-8;q=0.7,*;q=0.3
Cookie: NID=44=WDrVJT3IHROI8LLhYljiGzpNonvug9envnNeEoo6qdVxw1B1eHwarlfgZgODzoTsj7i7QGza5luXEqgQuFx7eWduz3Pcc-8IFrLp8tTyIaJC9VgyXEyQAv0qBQD3Dxm9; PREF=ID=e5ce72ddfd5e542a:U=0163fee991eaa35b:FF=0:TM=1298386459:LM=1298903279:S=6Sakp_hgUHZXMW1W

[192.168.1.120 -> 192.168.1.10]

HTTP/1.0 400 Bad Request
Server: squid/2.7.STABLE8
Date: Mon, 28 Feb 2011 14:30:43 GMT
Content-Type: text/html
Content-Length: 2044
X-Squid-Error: ERR_INVALID_REQ 0
X-Cache: MISS from kiemserver
X-Cache-Lookup: NONE from kiemserver:3128
Via: 1.0 kiemserver:3128 (squid/2.7.STABLE8)
Connection: close

The last part is the weird part. It crops the full URL, and squid thinks it
is talking directly to the origin, as you already said. Or I am missing
something obvious.



I'm still convinced this is some form of configuration mistake somewhere.
Let's step through this piece by piece in detail and see if anything appears.


Hard to stay sane but OK :-)


Which browser are you using to test with?
 What proxy settings are entered into its control panel?


I tried it with Mozilla Firefox 3.6.13 by entering 192.168.1.10 port
3128 in Preferences > Network > Configuration. I configured Internet
Explorer via Tools > Internet Options > Connections > Local Network
Configuration and typed in the proxy IP and port. Google Chrome acquires
the options from the system, so it is the same as IE.


Good.

 Clicking 'use HTTP settings for all protocols' as well?





What does the client hosts file contain?
What does the client resolv.conf or equivalent Windows network connection
settings contain as gateway router, domain, and DNS servers?


The client is Windows, configured to use a static IP of 192.168.1.120 with a
255.255.255.0 subnet mask and 192.168.1.1 gateway (and DNS). The hosts
equivalent is name-to-address mapping, I assume, in which I found nothing
(except 127.0.0.1 -> localhost, I guess).



Good.


Okay, next steps ... (please check these answers in case something has 
been forgotten or overlooked)


 Are there any NAT, NAPT, Port Forwarding, or Connection Sharing settings
on the client box?
   if so, what are they?

 Same question again for the LAN router?

 Same question again for the Squid box?

 Also, is there any black-box filtering device or service between the
client and Squid boxes?


 Is there any Web Security firewall on the client box (i.e. Symantec or
McAfee filters)?
   if so, what are its outward proxy relay settings?




I sometimes think that the javascript makes an explicit request which
leads to that misinterpretation by the browsers. I have no strong


JS does all sorts of stuff. In your case it appears to be the only 
working traffic though. The google click-search requests are JS 
background connections.



clues about it, though. And I also want to ask if we can make some
workaround at the proxy layer without involving the browsers. As I
previously said: can Squid fix such bad requests by concatenating other
fields from the HTTP request to build the correct URL?




A temporary workaround is to set transparent on the port. This will 
fill your logs with NAT lookup failures though and still get nowhere 
towards finding the real solution or what has gone wrong.
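
That workaround is a one-line port option (a sketch for the squid 2.7 shown
in your reply headers; the option was renamed 'intercept' in later 3.x
releases):

   http_port 3128 transparent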


Amos


Re: [squid-users] Reverse proxy: quota limit by IP

2011-03-01 Thread Amos Jeffries

On Tue, 1 Mar 2011 18:14:45 -0300, Vincent BLANQUE wrote:

Hi everybody,

I would like to limit connections to my webserver by defining policies
on the data volume downloaded and the connection time, per IP, per month.
Do you think it is possible to implement this with Squid as a reverse
proxy? Is it easy?


No. Squid is not a good place to define quotas.



My server runs Django. As the user needs to be authenticated, maybe the
best solution is to define a time limit through the session time, but
what about the data volume downloaded?

thx,

Vincent


The main purposes of having squid as a reverse proxy are to reduce
backend volume and add scalability. Placing quotas voids both of those
benefits.


Also, to complicate matters, HTTP is stateless and the concept of time
in squid is dynamic, so sessions do not exist.


The best way to run quotas is with an external system that keeps track
of requests and updates visitors' permissions live. Squid can integrate
with such a system using its streaming access_log modules and
external_acl_type ACLs.
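
A minimal sketch of the external_acl_type half of that (the helper path and
ACL name are hypothetical; the helper is assumed to print OK while the
client is within quota and ERR once it is over):

   external_acl_type quota_check ttl=60 %SRC /usr/local/bin/quota-helper
   acl within_quota external quota_check
   http_access deny !within_quota

The byte counts themselves would come from the access log feeding the
external system.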


Amos