Re: [squid-users] Reverse proxy with LDAP authentication

2008-09-27 Thread Amos Jeffries

Andrew Struiksma wrote:

Here is the main part of my config:

http_port 80 defaultsite=site.company.org
https_port 443 cert=/etc/ssl/certs/company.org.cert \
    key=/etc/ssl/certs/company.org.key \
    defaultsite=site.company.org

cache_peer site.company.org parent 443 0 no-query \
    originserver ssl sslflags=DONT_VERIFY_PEER name=myAccel

acl our_sites dstdomain site.company.org
acl all src 0.0.0.0/0.0.0.0

auth_param basic program /usr/lib/squid/ldap_auth \
    -R -b dc=company,dc=org -D cn=squid_user,cn=Users,dc=company,dc=org \
    -w password -f sAMAccountName=%s -h 192.168.1.2
auth_param basic children 5
auth_param basic realm Our Site
auth_param basic credentialsttl 5 minutes

acl ldap_users proxy_auth REQUIRED

http_access allow ldap_users
http_access allow our_sites

If I understand you correctly that should be:

 http_access allow our_sites ldap_users
 http_access deny all


cache_peer_access myAccel allow our_sites

Andrew


That config should do it.
Perhaps also add a never_direct allow our_sites to prevent
non-peered traffic.
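
Putting those pieces together, the relevant part of the config would be roughly (a sketch only; keep these above any broader allow rules):

  acl our_sites dstdomain site.company.org
  acl ldap_users proxy_auth REQUIRED
  http_access allow our_sites ldap_users
  http_access deny all
  cache_peer_access myAccel allow our_sites
  never_direct allow our_sites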


OK. I'll add in those options. Currently, if a user connects on port 80 they
are not forwarded to port 443 until after logging in and actually clicking a
link on the website. They are then prompted to log in a second time on port 443.
Can Squid redirect to port 443 immediately, before login, or do I need to set up
Apache to do this?


Ah, now it sounds like you want one thing and your config is
doing another.


Fortunately this is easy to do:

At the top of the config, after http_port 80, add these:

  acl port80 myport 80
  deny_info https://site.company.org port80
  http_access deny port80

That will cause squid itself to send a 3xx 'moved' fake error message for
all port-80 requests. The user's browser will then automatically
re-connect on port 443 before being asked to log in.


NP: for anyone else trying to copy this: it only works on one domain 
name at a time. Needs adjustment for virtual-hosted setups.
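
As a rough sketch of that adjustment (the example.com names are invented here, not from the original mail), each hosted domain gets its own dstdomain acl and deny_info target, since the deny_info chosen is normally the one attached to the last acl on the matching deny line:

  acl port80 myport 80
  acl siteA dstdomain www.site-a.example.com
  acl siteB dstdomain www.site-b.example.com
  deny_info https://www.site-a.example.com/ siteA
  deny_info https://www.site-b.example.com/ siteB
  http_access deny port80 siteA
  http_access deny port80 siteB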




Can I add an ACL to permit users from certain IP ranges to access the site
without having to authenticate to LDAP? I'm thinking about sending all users
through Squid but I don't want to force users on our LAN to have to
authenticate.



Yes. Just chain the acl names properly. An http_access allow line before 
one that requires auth should do it.


http_access lines are checked top-down and the first to match causes the allow/deny.
They can be thought of as boolean expressions:
 http_access allow/deny if a AND b AND c AND d
 OR
 http_access allow/deny if a AND b AND !d   (! being NOT)
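
As a concrete sketch of that ordering (the 192.168.0.0/24 range and the lan_users acl name are placeholders for your LAN):

  acl lan_users src 192.168.0.0/24
  http_access allow our_sites lan_users
  http_access allow our_sites ldap_users
  http_access deny all

The first rule that fully matches wins, so LAN clients are allowed before the rule that would trigger the LDAP login prompt is ever reached.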


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


RE: [squid-users] Expires: vs. Cache-Control: max-age

2008-09-27 Thread Markus Karg
According to the HTTP/1.1 specification, the precedence is not determined by
which header is used, but by the value: the shorter age is to be taken.

Regards
Markus
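
For reference, the header combination being discussed looks something like this (the values are illustrative only):

  Date: Sat, 27 Sep 2008 12:00:00 GMT
  Expires: Fri, 26 Sep 2008 00:00:00 GMT
  Cache-Control: max-age=86400

i.e. an Expires time already in the past alongside a max-age that would still leave the object fresh.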

 -Original Message-
 From: Chris Woodfield [mailto:[EMAIL PROTECTED]
 Sent: Friday, 26 September 2008 23:46
 To: Squid Users
 Subject: [squid-users] Expires: vs. Cache-Control: max-age
 
 Hi,
 
 Can someone confirm whether Expires: or Cache-control: max-age
 parameters take precedence when both are present in an object's
 headers? My assumption would be Cache-control: max-age would be
 preferred, but we're seeing some behavior that suggests otherwise.
 
 Specifically, we're seeing Expires: headers in the past resulting in
 refresh checks against our origin even when a Cache-Control: max-age
 header is present and the cached object should be fresh per that
 metric.
 
 What we're seeing is somewhat similar to bug 2430, but I want to make
 sure what we're seeing isn't expected behavior.
 
 Thanks,
 
 -Chris


Re: [squid-users] multiple web ports squid not working?

2008-09-27 Thread Amos Jeffries

jason bronson wrote:

I've got an issue where I have multiple ports: one webserver is on port
80 and one is on 21080.
Anyhow, 21080 works fine.
Port 80 from the outside world doesn't work at all; I get a blank
index.php file returned from the browser to download.

So I ran tcpdump on port 80 and I see connections coming in, but squid
is not writing anything to the logs, even with full debugging.

I ran wget from my squid server to see if it can talk with the
webserver and it returns the 21080 webserver page???

What bothers me is I'd think at this point the outside world would at
least see the 21080 server, not a blank index file returned, and I'd
think something would write to squid's logs.

Please, if anyone knows what I'm doing wrong, shoot me a hint!

I'm running
/usr/local/squid/sbin/squid -v
Squid Cache: Version 2.7.STABLE3
configure options:


heres my configuration

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.108.0.0/24  # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl Safe_ports port 3128
acl Safe_ports port 21080
acl CONNECT method CONNECT
http_access allow all


Absent any limits on the peers or direct access, this proxy is open for
abuse by anyone on the web.



http_access allow manager localhost
http_access deny manager
http_access allow localnet
http_access deny all
icp_access allow localnet
icp_access deny all



http_port 80 accel defaultsite=64.132.59.237
http_port 21080 accel defaultsite=64.132.59.237


defaultsite= does not mean what you think.
It is the full domain name to be used if the client omits the required Host:
header.


Unless you expect clients to access your website as
http://64.132.59.237/index.php, that setting is incorrect.


DNS should be pointing your domain name at Squid.
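
For example, with www.example.com standing in for the real public host name that DNS points at this Squid box, something along these lines is the usual accelerator setup:

  http_port 80 accel defaultsite=www.example.com
  http_port 21080 accel defaultsite=www.example.com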



hierarchy_stoplist cgi-bin ?
access_log /usr/local/squidserver/var/logs/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
negative_ttl 0 seconds
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
visible_hostname 127.0.0.1
coredump_dir /usr/local/squidserver/var/cache
cache_peer 10.108.50.39 parent 21080 0 no-query originserver name=mybox
cache_peer 10.108.30.82 parent 80 0 no-query originserver name=webapps
cache_peer_access webapps allow all
cache_peer_access mybox allow all



cache_peer_access webapps deny all
cache_peer_access mybox deny all


These last two cache_peer_access lines are irrelevant given the ones above.

Given the order of the peer definitions, with both having allow all:
 * 10.108.50.39 will see nearly all requests arriving on its port 21080.
 * 10.108.30.82 will see few if any.
 * Squid will have requests arriving at both ports.
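
One way to split the traffic, sketched here with invented acl names, would be to route on the port each request arrived on:

  acl on_port80 myport 80
  acl on_port21080 myport 21080
  cache_peer_access webapps allow on_port80
  cache_peer_access webapps deny all
  cache_peer_access mybox allow on_port21080
  cache_peer_access mybox deny all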

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Cannot Access Site w/ Squid 2.6 Stable 3 Transparent Mode

2008-09-27 Thread Amos Jeffries

Brodsky, Jared S. wrote:

Hi all,

I am running Squid 2.6 Stable 3 in Transparent mode and none of my users
can access msnbc.com from behind our cache.


I see from the config you are using tproxy.  I'd recommend upgrading to 
tproxy v4.1+ and Squid 3.1 as soon as convenient. It has just had 
quite a few fixes and is being rolled out successfully at some high-load sites.


It's up to you though. We expect formal 3.1 test releases within weeks.


tcp_outgoing_address 10.100.1.2 has undefined network behavior here. It 
works against the way tproxy operates, so tproxy behavior under those 
config conditions may be unexpected.


acl adzapports myport 81 also has undefined behavior, as tproxy-intercepted 
requests work with whatever dst IP:port the client originally requested, 
not the Squid listening port.
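
If the intention was just to limit rewriting to ordinary client web traffic, an acl on the requested destination port (a sketch, not from the original mail) may be closer to what intercepted requests actually carry:

  acl client_web port 80
  url_rewrite_access deny !adzapmethods
  url_rewrite_access allow client_web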




 The cache box itself
has no problem reaching the site via wget, lynx, or telnet.  The strange
part is that if you have a direct url to one of their CSS files it loads
fine when behind squid. I can also telnet into msnbc.com from machines
behind the proxy as well.  I have added into my conf file the following
which had no effect:

acl msnbc dstdomain .msnbc.msn.com
cache deny msnbc

I have tried this with no luck as well  
http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses#head-699d810035c0
99c8b4bff21e12bb365438a21027

Note: msnbc.com redirects to www.msnbc.msn.com.  
We can get to msn.com just fine, as well as cnbc.com.  I think there is

a problem w/ my conf file with the rewrite statements I have in
conjunction w/ how msnbc redirects their traffic.  I have attached my
conf file below.

Any help would be greatly appreciated.


http_port 81 transparent tproxy
http_port 3128
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem  525 MB
cache_swap_low 93
cache_swap_high 95
maximum_object_size 300 MB
maximum_object_size_in_memory  100 MB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /var/spool/squid/ 20480 16 256
access_log /var/log/squid/access.log
log_fqdn on
ftp_user [EMAIL PROTECTED]
ftp_list_width 64
hosts_file /etc/hosts
acl adzapports myport 81
acl adzapmethods method HEAD GET
url_rewrite_access deny !adzapmethods
url_rewrite_access allow adzapports
refresh_pattern ^ftp:       1440    20%     10080   reload-into-ims
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern .           0       20%     4320    reload-into-ims
refresh_pattern cgi-bin 0   0%  0
refresh_pattern \?  0   0%  0
refresh_pattern .   0   20% 4320
refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
quick_abort_min 64 KB
quick_abort_max 512 KB
quick_abort_pct 50
range_offset_limit 1 MB
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 873 # rsync
acl purge method PURGE
acl CONNECT method CONNECT
refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private
quick_abort_min -1 KB
acl youtube dstdomain .youtube.com
cache allow youtube
hierarchy_stoplist cgi-bin ?
cache allow all
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl gtn_lan src 10.1.1.0/24
acl gtn_lan2 src 10.100.1.0/24
http_access allow gtn_lan
http_access allow gtn_lan2
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
tcp_outgoing_address 10.100.1.2
log_access deny localhost
log_access allow all
cache_mgr [EMAIL PROTECTED]
mail_from [EMAIL PROTECTED]
cache_effective_group proxy
httpd_accel_no_pmtu_disc on
append_domain .greatertalent.com
memory_pools_limit 64 MB
via off
forwarded_for off
snmp_port 3401
acl snmp_public snmp_community public
acl snmp_probes src 10.1.1.0/24
acl snmp_probes src 10.100.1.0/24
snmp_access allow snmp_public localhost snmp_probes
snmp_access deny all
strip_query_terms off
coredump_dir /var/spool/squid
pipeline_prefetch on




Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Recommendations for URL filtering

2008-09-27 Thread Amos Jeffries

Johnson, S wrote:

Anyone have recommendations for a URL filtering list through squid?



Yes: don't.

Or, if you do, use a well-maintained one, such as SURBL.

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


[squid-users] why get a miss?

2008-09-27 Thread Jeff Peng
Hello,


I'm running Squid-3.0.9. When I try to cache a page with a '?' in the URL, I
always get a MISS. The server's response headers are:

(Status-Line)     HTTP/1.0 200 OK
Connection        close
Content-Encoding  gzip
Content-Type      text/html; charset=gbk
Date              Sat, 27 Sep 2008 11:43:50 GMT
Server            Apache/2.0.54 (Unix) PHP/5.2.6
Vary              Accept-Encoding
Via               1.0 cache.mysite.org (Squid/3.0.9)
X-Cache           MISS from cache.mysite.org
X-Powered-By      PHP/5.2.6


I have commented out these two lines in squid.conf:

#hierarchy_stoplist cgi-bin ?
#refresh_pattern (cgi-bin|\?)   0   0%  0


But why does squid still give TCP_MISS in the response? Thanks.


Re: [squid-users] why get a miss?

2008-09-27 Thread Amos Jeffries
Jeff Peng wrote:
 Hello,
 
 
 I'm running Squid-3.0.9. When I try to cache a page with a '?' in the URL, I
 always get a MISS. The server's response headers are:
 
 (Status-Line) HTTP/1.0 200 OK
 Connection        close
 Content-Encoding  gzip
 Content-Type  text/html; charset=gbk
 Date  Sat, 27 Sep 2008 11:43:50 GMT
 ServerApache/2.0.54 (Unix) PHP/5.2.6
 Vary  Accept-Encoding
 Via   1.0 cache.mysite.org (Squid/3.0.9)
 X-Cache   MISS from cache.mysite.org
 X-Powered-By  PHP/5.2.6
 
 
 I have commented out these two lines in squid.conf:
 
 #hierarchy_stoplist cgi-bin ?
 #refresh_pattern (cgi-bin|\?)  0   0%  0

That last one is not about whether an object may be cached; it is only about
detecting when an object already in the cache is expected to be stale. It's
needed to maintain safety and protect against dynamic pages being served from
cache when they should not be.

The response headers you show have no Expires: header and no age
information at all. They are thus very unsafe to cache.

It looks like you have control over the website. Adjust the PHP to send
Expires: or Cache-Control: max-age=N  information.
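
For example, something like the following in the response (an hour of freshness chosen arbitrarily here) gives Squid explicit age information to work with:

  Cache-Control: max-age=3600

or an Expires: date set a fixed interval after the Date: header.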

If that info is set properly and it is still not caching, we'll have to see
the rest of your config to tell why not.

Amos
-- 
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Recommendations for URL filtering

2008-09-27 Thread Marcus Kool

Or use a commercial URL filter from URLfilterDB.

Marcus


Amos Jeffries wrote:

Johnson, S wrote:

Anyone have recommendations for a URL filtering list through squid?



Yes: don't.

Or, if you do, use a well-maintained one, such as SURBL.

Amos


[squid-users] In SF from October 1 - 7

2008-09-27 Thread Adrian Chadd
G'day everyone,

I'll be in San Francisco (ish area) from October 1 to October 7. Drop
me a line if you're interested in catching up for an impromptu Squid
related evening event sometime then.



Adrian