[squid-users] cache_peer

2011-02-11 Thread Tim Bateson
Hi,
I am using squid 2.7 and would like to know if it is possible to map 2
acl groups to a particular cache_peer.
Our ACLs are defined using external_acl_type and acl as follows:
external_acl_type groupn children=10 ttl=200 %LOGIN /usr/lib/squid/wbinfo_group.pl
acl unrestrictedusers external groupn grp1
acl restrictedusers external groupn grp2

Can anyone confirm whether what I want is possible? If not, I will have
to run 2 squid servers, with each set of users mapped to its own
cache_peer parent.
Thanks,
Tim


Re: [squid-users] cache_peer

2011-02-11 Thread Michael Hendrie

On 11/02/2011, at 8:21 PM, Tim Bateson wrote:

 Hi,
 I am using squid 2.7 and would like to know if it possible to map 2
 acl groups to a particular cache_peer.
 Our acls are mapped using the extern_acl and acl as follows.
 external_acl_type groupn children=10 ttl=200 %LOGIN /usr/lib/squid/wbinfo_group.pl
 acl unrestrictedusers external groupn grp1
 acl restrictedusers external groupn grp2
 

Check out the cache_peer_access tag. You can use your ACL elements to
allow/deny access to certain cache_peers:
http://www.squid-cache.org/Doc/config/cache_peer_access/
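For the setup described above, a minimal squid.conf sketch could look like the following (the parent hostnames and ports are assumptions, not from the thread):

```
# two parent proxies, one per user group (hypothetical hosts)
cache_peer parent1.example.com parent 3128 0 no-query name=unrestricted_peer
cache_peer parent2.example.com parent 3128 0 no-query name=restricted_peer

# route each external-ACL group to its own parent
cache_peer_access unrestricted_peer allow unrestrictedusers
cache_peer_access unrestricted_peer deny all
cache_peer_access restricted_peer allow restrictedusers
cache_peer_access restricted_peer deny all

# force traffic through the parents rather than going direct
never_direct allow all
```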

 Can anyone confirm what I want is possible. If not I will have to run
 2 squid servers with each set of users getting mapped to their own
 cache_peer parent.
 Thanks,
 Tim



Re: [squid-users] problem using squid as proxy server to load balance reverse-proxies

2011-02-11 Thread Amos Jeffries

On 11/02/11 19:25, Sri Rao wrote:

Hi Amos,

Thanks for the quick reply!



I am trying to set up squid as an SSL proxy to load balance between
reverse-proxies.  I believe the config is right but what is happening


What you have set up is a forward proxy load balancer which only permits
management and binary-over-HTTP tunneled traffic from its localhost machine
IP.


That is actually what I want.  I want to do binary-over-HTTP from the
localhost to the reverse-proxy servers.  When the forward proxy tries
to connect to the origin server directly it does a tunnelConnect but
even though I have set originserver for the cache_peers it seems to
just forward the CONNECT instead of doing a tunnelConnect.  I thought
originserver should force squid to treat the cache_peers as if they
were web servers?



It should. You seem to have found a bug there. I've added a fix for that 
now.


A secondary problem in your config was never_direct allow sp_test -
since sp_test always matches, direct tunnel setup (tunnelConnect) is not
permitted.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4


Re: [squid-users] Questions on SQUID peering/mesh

2011-02-11 Thread Matus UHLAR - fantomas
 On 01.02.11 19:00, Amos Jeffries wrote:
 If you have a choice HTCP.
 The packets are slightly bigger than ICP (they contain HTTP headers not
 just URLs) but the false-positives are much lower and thus routing
 choices are better.
[...]
 CD has more false positives than ICP but less lag on the real matches
 and less background bandwidth consumption.

so HTCP should provide even more benefit in addition to cache digests...

 CD and ICP certainly work together. I believe CD and HTCP would work as  
 well.

 On 11/02/11 03:28, Matus UHLAR - fantomas wrote:
 of course. My question now is, if they can benefit from all of those...

On 11.02.11 03:43, Amos Jeffries wrote:
 Well CD + HTCP if you wanted to.

 The lookup queries of HTCP are essentially just ICP with the HTTP  
 headers attached. So the gains are achieved by the remote peer being  
 able to determine its yes/no reply from things like the expiry headers,  
 Vary: and ETag matching, or running the cache ACLs on it.

I was more asking if squid sends HTCP queries when it already uses
digests. I guess the answer is yes, and it does that to avoid false
positives...
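For reference, enabling HTCP between digest-capable peers is a one-line change per peer in squid.conf. A sketch with hypothetical peer names:

```
# sibling peer queried via HTCP (port 4827) instead of ICP;
# cache digests, if compiled in, are exchanged automatically
cache_peer sibling1.example.net sibling 3128 4827 htcp

# use "no-digest" on a peer line to opt that peer out of digest exchange
cache_peer sibling2.example.net sibling 3128 4827 htcp no-digest
```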
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Spam is for losers who can't get business any other way.


Re: [squid-users] sslbump + DynamicSslCert + url_rewrite_program + NTLM authentication

2011-02-11 Thread Amos Jeffries

On 11/02/11 01:40, Yonah Russ wrote:

Hi,

I've been using Squid 2.6/7 for a while as a redirecting proxy for
developers to preview their changes as if they are looking at
production websites.
Now I need to support rewriting SSL requests as well, and this has
brought me to investigate Squid 3.2/3.1.
As both of these seem very new and a lot seems to have changed, I'm
hoping you can help point me in the best direction.

I understand that 3.2 has the DynamicSSLCert feature and that a patch
exists for 3.1 as well - which would be the preferred way to implement
this for semi-production/internal users?
Is there any way to restrict which sites get bumped and which do not?


Yes.
http://www.squid-cache.org/Doc/config/ssl_bump/
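For example, a sketch using the 3.1-era allow/deny form of the directive (the domain list is an assumption):

```
# bump (decrypt) only selected destination domains
acl bump_sites dstdomain .dev.example.com
ssl_bump allow bump_sites
ssl_bump deny all
```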



I also understand that redirect_program has been replaced with
url_rewrite_program, but the interface seems to be fairly backwards
compatible - any gotchas to look out for?


No. Same old problems. No significant changes there. Just additional 
error checking and reporting around mangled URLs and redirect status 
codes for certain requests.



Will the url_rewrite_program have access to the decrypted https
request? If so, will the rewrite program be able to rewrite the
request and still send it over HTTPS?


Good question. I don't know the answer though, sorry.

Though I think the answer is probably yes, the side effects are likely 
to be even worse than with HTTP since the SSL is closely tied to the URL 
and domain as realm.




Have there been changes in Active Directory integration for proxy
authentication? Currently I'm using NTLM and Basic
authentication+winbind, but not without issues.


On the NTLM auth side:
 * Some HTTP/1.1 improvements that make NTLM work better, though still
with problems. The later the version, the better the background
connection stability.
 * Microsoft have officially obsoleted NTLM and encourage Kerberos
rollout. So do we. 3.2 will now use Kerberos on peer links as well.


On the Basic auth side:
 * 3.2 has had a large set of bug fixes



I understand there are some changes regarding SMP. Currently I run
multiple instances of Squid with different configurations(http_port,
redirect_program). Can I consolidate this any with the newer versions?


Yes. 3.2 has configuration options to make control and configuration of 
multiple instances MUCH easier.
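A sketch of that 3.2-style consolidation (port numbers are assumptions; the `if ${process_number}` conditionals are the 3.2 mechanism for per-instance settings):

```
# one squid.conf driving two worker processes
workers 2

# per-worker listening ports via the process_number macro
if ${process_number} = 1
http_port 8080
endif
if ${process_number} = 2
http_port 8081
endif
```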



I'd be interested in sharing the authentication helpers, but still
having different http/https ports and rewrite configurations.


Child processes and caches are not yet shared. Pretty much everything 
else can be shared or separated as you wish.


NP: if you want to go with 3.2, I'm about to release 3.2.0.5 within a
few days.


Amos
--


RE: [squid-users] (null):// instead of http://, what would cause this?

2011-02-11 Thread Dean Weimer
After converting the log timestamps from Unix time to date/time format, they did 
indeed line up with a reconfigure I issued to adjust some ACLs.  At least now I 
know I don't have an application issue to track down before it becomes a bigger 
problem.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Thursday, February 10, 2011 11:10 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] (null):// instead of http://, what would cause 
 this?
 
 On 11/02/11 09:38, Dean Weimer wrote:
  I have a reverse proxy running 3.1.10, and noticed a few odd lines in the
 access log while searching them for some other info.  I was wondering if
 anyone knew what would cause some entries like these?  There are only 13
 lines out of 22,000+ requests to this server today, and I haven't heard any
 complaints from users, just thought the entries were odd.
 
  1297353864.628  0 10.200.129.50 NONE/400 3030 GET (null)://...snip... 
   -
 NONE/- text/htm
 
   The clients are on WAN connections of various speeds, and these could just
  be simply caused by network errors on the WAN connections; just thought I
  would check and see if anyone else had seen these and if it's something that I
  should investigate further in case there is an application issue causing this.
 
 
  Looks a lot like http://bugs.squid-cache.org/show_bug.cgi?id=2976
  
  The URL scheme handling and display is a bit complex. I've been working
  on un-twisting it for a while now, which hopefully will resolve this.
 
 Amos
 --


Re: [squid-users] squid + sslbump + [c-icap] + [squidclamav/havp] + clamav [SOLVED]

2011-02-11 Thread Alessandro Baggi

On 10/02/2011 21:10, Alessandro Baggi wrote:

On 10/02/2011 20:02, Marcus Kool wrote:

can only decrypt SSL connections.
sslbump cannot decrypt all other types of traffic that use the
HTTPS port and CONNECT method.
So, for example, it cannot decrypt Skype traffic and files
containing a virus can still enter the network. 
Thanks for the reply, but I want to scan for viruses in HTTPS web traffic;
I don't want programs that use port 443 for other purposes, only the web.
Sorry to ask another time, but on http://wiki.squid-cache.org/Features/SslBump I 
read:
Squid-in-the-middle decryption and encryption of straight *CONNECT* 
and transparently redirected SSL traffic, using configurable client- 
and server-side certificates. While decrypted, the traffic can be 
inspected using ICAP.


At this point, what's the meaning of "While decrypted, the traffic can 
be inspected using ICAP"?


On squidclamav site we can find:

Release v5.4 is out, here are the changes:

...

- Add support for scanning SSL encrypted traffic with the new Squid
  feature sslBump. Thanks to Jean DERAM for the patch.
...


Thanks in advance


Hi list. The problem was solved; I had a misconfiguration with permissions. 
Now https traffic is scanned.


Thanks to all.
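For readers landing on this thread: the glue between squid and squidclamav is an ICAP service definition in squid.conf. A rough 3.1-style sketch (the service name and URL are assumptions; check the squidclamav documentation for the exact service path):

```
# enable ICAP and register squidclamav as a response-modification service
icap_enable on
icap_service svc_clamav respmod_precache bypass=0 icap://127.0.0.1:1344/squidclamav
adaptation_access svc_clamav allow all
```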


Re: [squid-users] problem using squid as proxy server to load balance reverse-proxies

2011-02-11 Thread Sri Rao
Hi Amos,



 I am trying to set up squid as an SSL proxy to load balance between
 reverse-proxies.  I believe the config is right but what is happening

 What you have set up is a forward proxy load balancer which only permits
 management and binary-over-HTTP tunneled traffic from its localhost
 machine
 IP.

 That is actually what I want.  I want to do binary-over-HTTP from the
 localhost to the reverse-proxy servers.  When the forward proxy tries
 to connect to the origin server directly it does a tunnelConnect but
 even though I have set originserver for the cache_peers it seems to
 just forward the CONNECT instead of doing a tunnelConnect.  I thought
 originserver should force squid to treat the cache_peers as if they
 were web servers?


 It should. You seem to have found a bug there. I've added a fix for that
 now.

Where can I grab the fix?

 A secondary problem in your config was never_direct allow sp_test - since
 sp_test always matches, direct tunnel setup (tunnelConnect) is not permitted.

Yeah, I never want it to go direct to the origin.  I want it to connect
to the peers, but as the originserver, which should still be
tunnelConnect, right?

Thanks,

Sri


Re: [squid-users] problem using squid as proxy server to load balance reverse-proxies

2011-02-11 Thread Amos Jeffries

On 12/02/11 06:37, Sri Rao wrote:

Hi Amos,




I am trying to set up squid as an SSL proxy to load balance between
reverse-proxies.  I believe the config is right but what is happening


What you have set up is a forward proxy load balancer which only permits
management and binary-over-HTTP tunneled traffic from its localhost
machine
IP.


That is actually what I want.  I want to do binary-over-HTTP from the
localhost to the reverse-proxy servers.  When the forward proxy tries
to connect to the origin server directly it does a tunnelConnect but
even though I have set originserver for the cache_peers it seems to
just forward the CONNECT instead of doing a tunnelConnect.  I thought
originserver should force squid to treat the cache_peers as if they
were web servers?



It should. You seem to have found a bug there. I've added a fix for that
now.


Where can I grab the fix?


It will be here when the mirrors next update:
http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10230.patch




A secondary problem in your config was never_direct allow sp_test - since
sp_test always matches, direct tunnel setup (tunnelConnect) is not permitted.


Yeah, I never want it to go direct to the origin.  I want it to connect
to the peers, but as the originserver, which should still be
tunnelConnect, right?


Hmm, I think I finally get what you are trying to do. :)
And no, Squid's handling of CONNECT is not smart enough to do CONNECT 
properly to origins when the origin is a cache_peer without direct TCP 
access from Squid.



 tunnelConnect is Squid acting as a gateway and converting the CONNECT into 
a TCP tunnel directly connected to the destination server. Similar 
to the way SSH would, for example. Bytes are shuffled, but Squid sees none 
of them.

Like so:
   client--(CONNECT)--Squid --(direct TCP)--some host

 Using cache_peer is Squid passing an HTTP request (which just happens to be 
CONNECT) on unchanged for another proxy cache_peer to process. The 
tunnel data is just a regular HTTP body entity to Squid, the same as a POST 
with data going both ways between the client and cache_peer.

Like so:
   client--(CONNECT)--Squid--(CONNECT)--Other proxy--(direct TCP)--some host


inside the tunnel:
client --(binary)-- some host


In your case you have the peer origin's hostname in the CONNECT 
destination, yes? So allowing CONNECT to go direct will go there.
 I think you should be doing a never_direct deny of everything *except* 
CONNECT requests to your internal origins.
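A sketch of that suggestion in squid.conf terms (the domain ACL is an assumption; adjust it to the real internal origin names):

```
# hypothetical ACL matching the internal origin servers
acl internal_origins dstdomain .internal.example.com
acl CONNECT method CONNECT

# permit direct tunnelConnect only for CONNECTs to the internal origins;
# everything else must be forwarded via the cache_peers
never_direct deny CONNECT internal_origins
never_direct allow all
```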


Amos
--


[squid-users] Squid Cache - hangs after a few minutes

2011-02-11 Thread justin hyland
Im trying to get multiple squid servers to act as front-end web
servers for my main central apache web server, here is my setup so
far...

I have changed the IP of the apache server that this sends traffic to,
to 123.123.123.123, fyi
Code:

# egrep -v ^# squid.conf | sed -e '/^$/d'
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
icp_access allow all
http_port 80 accel defaultsite=123.123.123.123 vhost
cache_peer 123.123.123.123 parent 80 0 no-query originserver name=myAccel
cache_peer_access myAccel allow all
hierarchy_stoplist cgi-bin ?
cache_dir ufs /var/spool/squid 2000 16 256
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
always_direct allow all
coredump_dir /var/spool/squid

This works wonders... for about 4 minutes. Then the requests go from
half a second per page load, to 5 to 10 seconds, then 30 or 40 seconds,
then it won't even process. tail -f on the access log shows that it's not
even hitting apache any longer on the central server, so it's like
squid freezes up. Any ideas?

I have turned off the firewall on the squid server as well as the
central apache server, and it still doesn't help much. I read through
http://squidproxy.wordpress.com/2007...s-are-hanging/ and did all of
it, to no avail.

P.S. I doubt this is a connection issue between the servers, as the
website WITH squid loads just as fast as apache for a few minutes,
then slowly grinds to a halt.


Re: [squid-users] Dynamic content section in page

2011-02-11 Thread Terry.
2011/2/9 Andy Nagai ana...@wernerpublishing.com:
 Our site consists of mostly articles. Each article has user-entered comments
 at the bottom. The comment section cannot be cached: users need to see their
 comments right after they enter them. The problem is I need the rest of the
 page cached. Is the only way to solve this to load the page from cache and
 then use Ajax to fill in the comment section, making that portion dynamic?


Either Ajax or iFrame can handle that.
We have used Ajax for this.
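If the Ajax-loaded comments come from their own URL path, that path can also be explicitly excluded from the cache on the squid side. A sketch assuming a hypothetical /comments path:

```
# never cache the dynamically fetched comment fragment
acl comments urlpath_regex ^/comments
cache deny comments
```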



-- 
Free SmartDNS Hosting:
http://DNSbed.com/


[squid-users] Configuring SQUID in Windows to authenticate with Active Directory

2011-02-11 Thread Liyanage, Lakshman
Hello All,
I am new to Squid and hence require some help.
I have Squid 2.7 STABLE8 installed on Windows Server 2008 R2. I am now trying 
to configure it to use MS Active Directory. I have the following lines in the 
.conf file:
-
auth_param basic program c:/squid/libexec/squid_ldap_auth -R -b 
dc=ad-mycompany,dc=domain,dc=com -D 
cn=admin,cn=Users,dc=ad-mycompany,dc=domain,dc=com -w password -f 
sAMAccountName=%s -h myipnumber
auth_param basic children 5
auth_param basic realm My_Company
auth_param basic credentialsttl 5 minute
--
When I try to start Squid, Windows throws "Error 1067: The process terminated 
unexpectedly" at me.  I have a web server/service running on ports 80 and 443.
What am I missing here?
Many many thanks for your help

Lakshman
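[Not from the thread, but a standard first diagnostic is to run the LDAP helper by hand with the exact arguments from squid.conf: Basic-auth helpers read "username password" lines on stdin and answer OK or ERR, so a crash or error message here usually explains the service failure. The username/password below are placeholders; the other arguments are copied from the config above.]

```
c:/squid/libexec/squid_ldap_auth -R -b "dc=ad-mycompany,dc=domain,dc=com" -D "cn=admin,cn=Users,dc=ad-mycompany,dc=domain,dc=com" -w password -f "sAMAccountName=%s" -h myipnumber
someuser somepassword
```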

Re: [squid-users] Squid Cache - hangs after a few minutes

2011-02-11 Thread Luis Daniel Lucio Quiroz
On Friday, 11 February 2011 15:47:05, justin hyland wrote:
 Im trying to get multiple squid servers to act as front-end web
 servers for my main central apache web server, here is my setup so
 far...
 
 I have changed the IP of the apache server that this sends traffic to,
 to 123.123.123.123, fyi
 Code:
 
 # egrep -v ^# squid.conf | sed -e '/^$/d'
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow all
 icp_access allow all
 http_port 80 accel defaultsite=123.123.123.123 vhost
 cache_peer 123.123.123.123 parent 80 0 no-query originserver name=myAccel
 cache_peer_access myAccel allow all
 hierarchy_stoplist cgi-bin ?
 cache_dir ufs /var/spool/squid 2000 16 256
 access_log /var/log/squid/access.log squid
 cache_log /var/log/squid/cache.log
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern .               0       20%     4320
 acl apache rep_header Server ^Apache
 broken_vary_encoding allow apache
 always_direct allow all
 coredump_dir /var/spool/squid
 
 This works wonders... for about 4 minutes. Then the requests go from
 half a second per page load, to 5 to 10 seconds, then 30 or 40 seconds,
 then it won't even process. tail -f on the access log shows that it's not
 even hitting apache any longer on the central server, so it's like
 squid freezes up. Any ideas?
 
 I have turned off the firewall on the squid server as well as the
 central apache server, and it still doesn't help much. I read through
 http://squidproxy.wordpress.com/2007...s-are-hanging/ and did all of
 it, to no avail.
 
 P.S. I doubt this is a connection issue between the servers, as the
 website WITH squid loads just as fast as apache for a few minutes,
 then slowly grinds to a halt.


Run squid in the foreground with full debugging:

squid -X -N

and wait; you will see the debug output.