Re: [squid-users] problem with snmp
On Wed, December 12 2007, 17:21, Adrian Chadd wrote: On Wed, Dec 12, 2007, [EMAIL PROTECTED] wrote: What about adding the snmp version there.. snmpwalk -m /usr/share/squid/mib.txt -v2c -c public localhost:3405 .1.3.6.1.4.1.3495.1.3.1 Yeah, thanks :) it works, so the information on wiki.squid-cache.org is outdated. Where in the Wiki? I'll go update it. Here: http://wiki.squid-cache.org/SquidFaq/SquidSnmp?highlight=%28snmp%29 * SquidFaq * SquidSnmp This is the part that isn't working: You can test if your Squid supports SNMP with the snmpwalk program (snmpwalk is part of the NET-SNMP project). Note that you have to specify the SNMP port, which in Squid defaults to 3401. snmpwalk -p 3401 hostname communitystring .1.3.6.1.4.1.3495.1.1 Regards, -- Tomasz
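For quick side-by-side reference, here are the old FAQ invocation and the one that works with current net-snmp (the port 3405 above is the poster's snmp_port; 3401 is Squid's default — adjust to your own, and note both require a running Squid with snmp_port enabled):

```
# Old FAQ form: newer net-snmp releases dropped the -p port option
# in favour of a host:port argument, so this no longer works as written:
#   snmpwalk -p 3401 hostname communitystring .1.3.6.1.4.1.3495.1.1

# Working form: state the protocol version and community explicitly
snmpwalk -m /usr/share/squid/mib.txt -v2c -c public localhost:3401 .1.3.6.1.4.1.3495.1.1
```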
Re: [squid-users] Compressing Object
On tis, 2007-12-11 at 21:58 +0900, Adrian Chadd wrote: There's some preliminary experimental work done with squid-3, but it was done a while ago and I'm not sure what the timeframe is for getting that to work. I forget where that work is too; I think it's somewhere on devel.squid-cache.org. It is, and is fairly up to date (October). It's not doing Transfer-Encoding but dynamic Content-Encoding recoding. It's scheduled for 3.1, but needs a review first to make sure we don't make the same ETag mistake as mod_deflate, which I suspect we do... Regards Henrik
Re: [squid-users] ProxyAuth credentials size limit
I suggest filing a squid bugzilla bug. I'm not sure what the limits are but I bet they're compile-time at the moment. On Wed, Dec 12, 2007, Glenn Zazulia wrote: Hi, I'm using Squid 2.6.STABLE17 on Redhat Windows, configured in a chain of peers with custom auth external acl helpers that manipulate the proxyauth credentials/header. This works fine when the user:passwd string is less than 256 bytes (prior to base64-encoding), but I noticed that squid truncates anything larger than that. I didn't find any stated header size limit in the RFCs (2616, etc.), and I'm wondering if this is an arbitrary, static limit imposed by squid? If at all possible, I need to increase that limit to 1 KB, and I'm wondering if this can be done without patching the source. I didn't see anything obvious in the config file or the docs. Thanks for any assistance that you could provide. Glenn Zazulia -- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support - - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
[squid-users] Delay pools cached resources
Hi! Is it possible to delay only new resources, but not delay resources that were already cached? I mean, if e.g. I downloaded some images before, I want them to be displayed in the browser immediately (not depending on the browser's cache). Thanks! -- Alexey Vatchenko http://www.bsdua.org E-mail: [EMAIL PROTECTED] JID: [EMAIL PROTECTED]
Re: [squid-users] NTLM auth popup boxes
Adrian Chadd wrote: On Sat, Nov 03, 2007, Elvar wrote: Hello all, I am currently running squid-2.6.14 on FreeBSD 6-STABLE and Squid is configured to authenticate users to the Active Directory database via the NTLM plugin. The problem I'm having is that approximately every other day, or sometimes sooner or sometimes longer, users start getting a popup box asking for auth credentials. Normally this is not the case as it's handled automatically in the background. I'm forced to restart the squid proxy server to resolve this. One thing I notice is that every time it happens the number of squid child processes is greater than the number listed in squid.conf. Currently I'm set at 'auth_param ntlm children 150'. I'm not sure what is causing this login popup box, but it's really upsetting my users and I need to figure out a solution. Has anyone else experienced this? Anyone have any suggestions? A couple of possibilities: * Samba can't keep up with your request rate * Squid is blocking and missing out on processing the NTLM authentication results I suggest a few things: * How busy is the cache? Do you have graphs? If not, compile with snmp support and start graphing whatever you can * Look at your load and see if you're better off with aufs than ufs; aufs won't block (as much!) and should free Squid up to handle the helper replies quicker; * I've seen this happen in "back from lunch" enterprise situations where a few hundred people come back and fire up their browsers at the same time, overloading the NTLM authentication mechanism. Henrik's authentication IP caching patch (ntlm_ip_cache? I forget now) seems to do the trick but it comes with certain use restrictions. This depends on how busy your caches are; see point 1. Adrian Hi Adrian, Based on your suggestion to try and monitor how busy Squid is, I followed the directions at http://www.squid-cache.org/~wessels/squid-rrd/ to produce some graphs. Have you by chance played with this monitoring setup? 
I have the graphs displaying but no actual data inside the graphs. Regards, Elvar squid.conf listed below Kind regards Elvar

Begin squid.conf
acl localnet src 192.168.0.0/16
http_port 192.168.0.1:3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl all src 0.0.0.0/0.0.0.0
cache_dir ufs /usr/local/squid/cache 500 16 256
access_log /usr/local/squid/logs/access.log squid
#cache_log none
cache_log /usr/local/squid/logs/cache.log
cache_store_log none
emulate_httpd_log off
log_mime_hdrs on
check_hostnames off
auth_param ntlm keep_alive on
auth_param ntlm program /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --require-membership-of=S-1-5-21-2590255907-4225717938-1771017636-2445
auth_param ntlm children 150
#auth_param ntlm max_challenge_reuses 0
#auth_param ntlm max_challenge_lifetime 5 minutes
#auth_param basic program /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
#auth_param basic children 5
#auth_param basic realm WT
#auth_param basic credentialsttl 2 hours
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
### Needed for Windows Update to work ###
acl windowsupdate dstdomain .windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain .download.windowsupdate.com
acl windowsupdate dstdomain .c.microsoft.com
acl windowsupdate dstdomain .download.microsoft.com
http_access allow windowsupdate localnet
##
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl AuthorizedUsers proxy_auth REQUIRED
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all AuthorizedUsers
http_access deny all
http_reply_access allow all
icp_access allow all
cache_effective_user squid
visible_hostname example.com
logfile_rotate 20
coredump_dir /usr/local/squid/cache
# End squid.conf
RE: [squid-users] Webmail sites - allow access
Hi Group, I am using squid to block pretty much all web access other than work-related sites. However, I need to open up some of the popular webmail sites. Um, can you see the self-contradiction in that? Popular webmail sites are naturally non-work. If you operate via email you should have company servers to handle that. Well, we don't. And that is a whole other can of worms that doesn't need to be opened on this forum. I was able to get hotmail to work properly with the following ACL. But I am having problems with gmail.com and mail.yahoo. acl WebmailSites dstdomain .ard.yahoo.com .login.yahoo.com .mail.yahoo.com .gmail.com .mail.google.com .google.com/accounts .google.ca/accounts These are NOT dstdomain entries. dstdomain quite naturally matches ONLY a _domain_. The /accounts part is a URL path. Thank-you. I'll experiment with that. .hotmail.com .live.com .passport.com For the gmail site, it won't seem to take the two /accounts entries at all. The yahoo site partially works, but I get 'unable to load javascript' errors. Which would be natural if the javascript sub-includes are located elsewhere. So my best bet is to scan all http headers when logging in, reading mail, and logging out? And then include any and all unique domains and subdomains in the acl? Has anyone got these to work? Care to share your ACLs with me? Thanks in advance! Davan Wong Amos
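As a hedged sketch of the dstdomain/URL-path split being discussed — the domains are the poster's, but dividing them between dstdomain (hosts only) and a url_regex ACL (for the /accounts paths) is illustrative, not a tested rule set:

```
# domain-only matches belong in dstdomain
acl WebmailSites dstdomain .ard.yahoo.com .login.yahoo.com .mail.yahoo.com
acl WebmailSites dstdomain .gmail.com .mail.google.com
acl WebmailSites dstdomain .hotmail.com .live.com .passport.com
# path components need a url_regex ACL instead of dstdomain
acl WebmailPaths url_regex -i ^http://www\.google\.(com|ca)/accounts/
http_access allow WebmailSites
http_access allow WebmailPaths
```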
RE: [squid-users] Problem with router
Thanks for your answer, I'll try. All that we are is the result of what we have thought. -Original message- From: Amos Jeffries [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 12, 2007 17:08 To: humberto CC: squid-users@squid-cache.org Subject: Re: [squid-users] Problem with router Hi all; I have SQUID 2.6.STABLE1 with wccp. In a Cisco router I receive this message: Please try 2.6.STABLE17. Amos 3745-STGO#show ip wccp web-cache de WCCP Cache-Engine information: Web Cache ID: 0.0.0.0 Protocol Version: 0.4 State: Usable Initial Hash Info: Assigned Hash Info: Hash Allotment: 256 (100.00%) Packets Redirected: 454 Connect Time: 00:05:12 And navigation is not permitted. Regards Humberto All that we are is the result of what we have thought.
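For anyone comparing configurations, a minimal WCCPv1 pairing for Squid 2.6 looks roughly like the sketch below. The router address and interface name are placeholders, and the GRE tunnel needed on the Squid host to receive redirected packets is omitted; Squid 2.6 also offers wccp2_* directives if the router speaks WCCPv2:

```
# squid.conf (Squid 2.6, WCCPv1) -- router IP is a placeholder
wccp_router 192.168.1.1
wccp_version 4

! Cisco IOS side -- interface is a placeholder
ip wccp web-cache
interface FastEthernet0/0
 ip wccp web-cache redirect in
```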
[squid-users] How to throttle video streams
Dear All, Is it possible to throttle video streams such as youtube, google video, etc. using delay pools or any other way? How do I detect them? For instance, a youtube stream does not have an extension or MIME type by which I could tell it's a stream. Detect it by URL? Any ideas are welcome :) Thanks in advance, Dominik
Re: [squid-users] ProxyAuth credentials size limit
Yes, I think so, and I suspect that it's not a simple configuration parameter either but hard-coded. Without knowing where in the source to look for this, I searched all files for explicit, static 256 byte buffers, and I found quite a few. I'll file a bug. Thanks. Glenn -- On 12/13/2007 05:33 AM, Adrian Chadd wrote: I suggest filing a squid bugzilla bug. I'm not sure what the limits are but I bet they're compile-time at the moment. On Wed, Dec 12, 2007, Glenn Zazulia wrote: Hi, I'm using Squid 2.6.STABLE17 on Redhat Windows, configured in a chain of peers with custom auth external acl helpers that manipulate the proxyauth credentials/header. This works fine when the user:passwd string is less than 256 bytes (prior to base64-encoding), but I noticed that squid truncates anything larger than that. I didn't find any stated header size limit in the RFCs (2616, etc.), and I'm wondering if this is an arbitrary, static limit imposed by squid? If at all possible, I need to increase that limit to 1 KB, and I'm wondering if this can be done without patching the source. I didn't see anything obvious in the config file or the docs. Thanks for any assistance that you could provide. Glenn Zazulia
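The buffer hunt described above is easy to script; this is just an illustrative grep (the pattern, sample file, and source-tree path are assumptions, not the actual Squid source layout):

```shell
# Look for fixed-size 256-byte char buffers in C source.
pattern='char[[:space:]]+[A-Za-z_]+\[256\]'

# Demonstrate against a small sample file:
printf 'static char login[256];\nint unrelated;\n' > /tmp/buf_sample.c
grep -En "$pattern" /tmp/buf_sample.c

# Against a real tree it would be something like:
#   grep -rnE "$pattern" squid-2.6.STABLE17/src/
```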
[squid-users] File cache squid
Is the OS file cache of any importance to squid? And by that I mean quite simply, HOW important is the OS file cache to squid? Paul Cocker IT Systems Administrator TNT Post is the trading name for TNT Post UK Ltd (company number: 04417047), TNT Post (Doordrop Media) Ltd (00613278), TNT Post Scotland Ltd (05695897),TNT Post North Ltd (05701709) and TNT Post South West Ltd (05983401). Emma's Diary and Lifecycle are trading names for Lifecycle Marketing (Mother and Baby) Ltd (02556692). All companies are registered in England and Wales; registered address: 1 Globeside Business Park, Fieldhouse Lane, Marlow, Buckinghamshire, SL7 1HY.
[squid-users] Help with minimalistic pass-thru squid proxy
Hi list, I am using a squid proxy to route http traffic through a separate router on my network. I am getting some traffic in my /var/log/squid/cache.log file: 2007/12/13 13:46:54| urlParse: Illegal character in hostname 'www.hostname1.com?404=y' 2007/12/13 14:01:44| urlParse: Illegal character in hostname 'www.hostname2.com?404=y' 2007/12/13 14:08:18| urlParse: Illegal character in hostname 'hostname3.com?404=y' 2007/12/13 14:13:37| urlParse: Illegal character in hostname 'hostname.com?404=y' I'm looking at the source of these URLs when this happens and the URL is not malformed. What I really want, though, is to turn off any kind of URL checking like this. I just want squid to pass any HTTP requests through unchecked and unaltered if possible. I also want to turn off all URL logging and caching. Can someone suggest how to do this in the squid config? Also, if there are better suggestions (other than squid, that is) I would be glad to consider them. Thanks.
Re: [squid-users] File cache squid
The OS file cache is Very important for most IO operations for most applications - including Squid. On Thu, 2007-12-13 at 17:49 +, Paul Cocker wrote: Is the OS file cache of any importance to squid? And by that I mean quite simply, HOW important is the OS file cache to squid? Paul Cocker IT Systems Administrator
Re: [squid-users] How to throttle video streams
Dominik Zalewski wrote: Dear All, Is it possible to throttle video streams such as youtube, google video etc... using delay pools or by any other way? How to detect them? For instance a youtube stream does not have an extension or MIME type by which I can tell it's a stream. Detect it by URL? acl flashVideo rep_mime_type video/flv video/x-flv Any ideas are welcome :) Thanks in advance, Dominik Chris
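One way that ACL might be wired into a delay pool, sketched with illustrative numbers. A caveat: delay pool assignment normally happens at request time, so whether a reply ACL like rep_mime_type actually takes effect in delay_access depends on your Squid version; treat this as a starting point, not a guarantee:

```
acl flashVideo rep_mime_type video/flv video/x-flv
delay_pools 1
delay_class 1 1
# aggregate bucket: ~64 KB/s refill, 256 KB burst (numbers are illustrative)
delay_parameters 1 64000/262144
delay_access 1 allow flashVideo
delay_access 1 deny all
```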
[squid-users] Adjusting Parent Cache weight based on acl
Hello Everyone, Although I haven't reached the stage yet of needing the following feature, I thought I might as well start talking about it soon. I would like to suggest (if there isn't already a way of doing this) the following idea for Squid: adjusting a parent cache's weight based on ACLs. What this means is the following: I have a main proxy server called (let's say) main_proxy. I have two sibling proxy servers called child1_proxy and child2_proxy. Child1 and 2 proxies both have their own internet links of different sizes (one is ADSL and the other an E1). Now, to balance requests between them is simple: just add them with the same weight. To use one for a set of users etc. is simple. What I would like to do is dynamically control the weight of each cache, based on ACLs. Let's say Client A is an exec and needs high speed caching; I want some requests to go over the ADSL and some over the E1. Now I would like to do this during some time window or something else... all doable with the current ACLs, but what if I want to change the proxy based on system/network load or some external factor? Or I want to say something like: when Client A requests it from username user and from IP a.b.c.d (say a dial-up), then decrease the weight of the ADSL proxy. I hope this is making sense, since I feel like I haven't really carried the idea across correctly. Thanks, Pieter De Wit
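Dynamic, load-driven reweighting isn't a current Squid feature as far as this thread goes, but static ACL-driven peer selection is; a sketch using the poster's peer names (the exec subnet is invented for illustration):

```
# static weights: favour the E1 link 2:1 overall
cache_peer child1_proxy parent 3128 3130 weight=1   # ADSL link
cache_peer child2_proxy parent 3128 3130 weight=2   # E1 link

# hypothetical subnet for the exec users
acl execs src 10.0.0.0/24
# pin execs to the E1 peer, everyone else may use either
cache_peer_access child2_proxy allow all
cache_peer_access child1_proxy deny execs
cache_peer_access child1_proxy allow all
```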
[squid-users] force caching (or High availability config)
Hi, I have squid configured as a transparent proxy in front of an application server (ApS). Data generated by ApS gets updated infrequently, and sometimes ApS gets slow doing its internal housecleaning. What I want to do is for Squid to fudge response times a bit by timing out connections to ApS after, say, 20s and using cached data instead (even if it's outdated). This would also help with ApS reboots, so that data is available at all times regardless of the responsiveness or availability of ApS. Looking through documentation and Google searches didn't bring up any relevant information. I do realize that this violates HTTP and is not widely applicable, but in my situation I can live with the consequences (I think). -- Dmitry Makovey Web Systems Administrator Athabasca University (780) 675-6245
Re: [squid-users] force caching (or High availability config)
On Dec 13, 2007, at 3:57 PM, Dmitry S. Makovey wrote: Hi, I have squid configured as a transparent proxy in front of an application server (ApS). Data generated by ApS gets updated infrequently and sometimes ApS gets slow doing its internal housecleaning. What I want to do is for Squid to fudge response times a bit by timing out connections to ApS after, say, 20s and using cached data instead (even if it's outdated). This would also help with ApS reboots so that data is available at all times regardless of responsiveness or availability of ApS. Looking through documentation and Google searches didn't bring up any relevant information. I do realize that this violates HTTP and is not widely applicable but in my situation I can live with consequences (I think). This is actually a feature we've been interested in as well. As far as I know, there's no way to do this in Squid right now, though it was discussed before by one of my co-workers, and apparently there was a similar feature being developed; I don't know if that ever made it into the mainline code or not. I'm sure one of the developers can comment. What we've done instead is leverage offline mode, so that if the application servers get themselves into a state where they won't reply in a timely manner, the caches are automatically toggled into offline mode by a watchdog daemon. That might, depending on your configuration and your ability to monitor your application server's state, be an option you can consider in lieu of doing it entirely in Squid. --Dave Systems Administrator Zope Corp. 540-361-1722 [EMAIL PROTECTED]
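A watchdog along those lines could be as small as the following cron-driven sketch. Everything here is hypothetical: the health URL, the state file, and the 20-second budget. Note that the cache manager's offline_toggle action flips the mode on every call, so the script has to remember what it last did:

```shell
#!/bin/sh
# Hypothetical watchdog: flip Squid into offline mode while the origin stalls.
# Assumes squidclient can reach the local cache manager.
STATE=/var/run/squid_offline_state
HEALTH_URL=http://aps.example.internal/health   # placeholder

if curl -m 20 -sf "$HEALTH_URL" >/dev/null; then
    # origin healthy: leave offline mode if we had previously enabled it
    if [ -e "$STATE" ]; then
        squidclient mgr:offline_toggle >/dev/null && rm -f "$STATE"
    fi
else
    # origin slow or down: enter offline mode, but only once
    if [ ! -e "$STATE" ]; then
        squidclient mgr:offline_toggle >/dev/null && touch "$STATE"
    fi
fi
```

Run from cron every minute or so; the state file keeps repeated runs from toggling the mode back and forth.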
Re: [squid-users] force caching (or High availability config)
: I have squid configured as a transparent proxy in front of application server : (ApS). Data generated by ApS gets updated infrequently and sometimes ApS gets : slow doing its internal housecleaning. What I want to do is for Squid : to fudge response times a bit by timing out connections to ApS after, say : 20s and using cached data instead (even if it's outdated). This would also : help with ApS reboots so that data is available at all times regardless of : responsiveness or availability of ApS. http://www.nabble.com/read_timeout-and--22fwdServerClosed-3A-re-forwarding-22-to13888907.html#a13961255 Summary: not currently possible with 2.6, may work in 2.HEAD. Note that if the origin server is down (i.e. does not accept the connection at all) then 2.6 will return stale content, as long as that stale content had a Last-Modified header when it was cached (see caveats in bug#1098 and bug#2119) -Hoss
Re: [squid-users] ProxyAuth credentials size limit
Here's the bugzilla link to the bug: http://www.squid-cache.org/bugs/show_bug.cgi?id=2139 On 12/13/2007 10:38 AM, Glenn Zazulia wrote: Yes, I think so, and I suspect that it's not a simple configuration parameter either but hard-coded. Without knowing where in the source to look for this, I searched all files for explicit, static 256 byte buffers, and I found quite a few. I'll file a bug. Thanks. Glenn -- On 12/13/2007 05:33 AM, Adrian Chadd wrote: I suggest filing a squid bugzilla bug. I'm not sure what the limits are but I bet they're compile-time at the moment. On Wed, Dec 12, 2007, Glenn Zazulia wrote: Hi, I'm using Squid 2.6.STABLE17 on Redhat Windows, configured in a chain of peers with custom auth external acl helpers that manipulate the proxyauth credentials/header. This works fine when the user:passwd string is less than 256 bytes (prior to base64-encoding), but I noticed that squid truncates anything larger than that. I didn't find any stated header size limit in the RFCs (2616, etc.), and I'm wondering if this is an arbitrary, static limit imposed by squid? If at all possible, I need to increase that limit to 1 KB, and I'm wondering if this can be done without patching the source. I didn't see anything obvious in the config file or the docs. Thanks for any assistance that you could provide. Glenn Zazulia
Re: [squid-users] Invoked sites by allowed websites.
Do you know how I would allow access based on the referer? I'm searching for how to do this and would like to try it out. On Dec 12, 2007, at 6:52 PM, Adrian Chadd wrote: On Wed, Dec 12, 2007, Cody Jarrett wrote: I'm using squid 2.6 and have it configured to block all websites except for a few that I specify are ok. The problem I'm having is, several sites that are fine to access, such as kbb.com, have content invoked from other sites. So when I view kbb.com for example, the page is missing most of its content and looks really messed up in firefox, and this happens with other sites. Is there some way to allow access to approved sites, and further sites that are invoked? There's no easy way for squid (or any proxy, really!) to properly determine "further sites that are invoked". You could possibly allow access based on referrer URL as well - which should show up as having been referred by your list of approved URLs - but referrer URLs can't be trusted as anyone can just fake them. Adrian

http_port 10.1.0.1:3128
http_port 127.0.0.1:3128
visible_hostname server.blah.com
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_dir ufs /var/spool/squid 400 16 256
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
#allow only the sites listed in the following file
acl goodsites dstdom_regex /etc/squid/allowed-sites.squid
http_access allow goodsites
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
acl lan_network src 10.1.1.0/24
#deny http access to all other sites
http_access deny lan_network
http_access deny itfreedom_network
http_access allow localhost
http_access deny all
acl to_lan_network dst 10.1.45.0/24
http_access allow to_lan_network
http_reply_access allow all
icp_access allow all
-- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support - - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
Re: [squid-users] problem with snmp
On Wed, December 12 2007, 17:21, Adrian Chadd wrote: On Wed, Dec 12, 2007, [EMAIL PROTECTED] wrote: What about adding the snmp version there.. snmpwalk -m /usr/share/squid/mib.txt -v2c -c public localhost:3405 .1.3.6.1.4.1.3495.1.3.1 Yeah, thanks :) it works, so the information on wiki.squid-cache.org is outdated. Where in the Wiki? I'll go update it. Here: http://wiki.squid-cache.org/SquidFaq/SquidSnmp?highlight=%28snmp%29 * SquidFaq * SquidSnmp This is the part that isn't working: You can test if your Squid supports SNMP with the snmpwalk program (snmpwalk is part of the NET-SNMP project). Note that you have to specify the SNMP port, which in Squid defaults to 3401. snmpwalk -p 3401 hostname communitystring .1.3.6.1.4.1.3495.1.1 Regards, -- Tomasz Thanks to you for catching this. The wiki is now updated. Amos
Re: [squid-users] ProxyAuth credentials size limit
On Thu, Dec 13, 2007, Glenn Zazulia wrote: Here's the bugzilla link to the bug: http://www.squid-cache.org/bugs/show_bug.cgi?id=2139 Thanks! Adrian -- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
Re: [squid-users] Invoked sites by allowed websites.
On Thu, Dec 13, 2007, Cody Jarrett wrote: Do you know how I would allow access based on the referer? I'm searching for how to do this and would like to try it out. acl aclname referer_regex [-i] regexp ... adrian On Dec 12, 2007, at 6:52 PM, Adrian Chadd wrote: On Wed, Dec 12, 2007, Cody Jarrett wrote: I'm using squid 2.6 and have it configured to block all websites except for a few that I specify are ok. The problem I'm having is, several sites that are fine to access, such as kbb.com, have content invoked from other sites. So when I view kbb.com for example, the page is missing most of its content and looks really messed up in firefox, and this happens with other sites. Is there some way to allow access to approved sites, and further sites that are invoked? There's no easy way for squid (or any proxy, really!) to properly determine "further sites that are invoked". You could possibly allow access based on referrer URL as well - which should show up as having been referred by your list of approved URLs - but referrer URLs can't be trusted as anyone can just fake them. Adrian

http_port 10.1.0.1:3128
http_port 127.0.0.1:3128
visible_hostname server.blah.com
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_dir ufs /var/spool/squid 400 16 256
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
#allow only the sites listed in the following file
acl goodsites dstdom_regex /etc/squid/allowed-sites.squid
http_access allow goodsites
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
acl lan_network src 10.1.1.0/24
#deny http access to all other sites
http_access deny lan_network
http_access deny itfreedom_network
http_access allow localhost
http_access deny all
acl to_lan_network dst 10.1.45.0/24
http_access allow to_lan_network
http_reply_access allow all
icp_access allow all
-- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support - - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
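Combined with the poster's existing goodsites list, the referer_regex ACL type might be used like this (the kbb.com regex is illustrative, and as noted in the thread, Referer headers are client-supplied and trivially faked, so this is convenience rather than security):

```
acl goodsites dstdom_regex /etc/squid/allowed-sites.squid
# allow sub-requests whose Referer is one of the approved sites
acl fromgood referer_regex -i ^https?://([^/]+\.)?kbb\.com/
http_access allow goodsites
http_access allow fromgood
http_access deny all
```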
Re: [squid-users] NTLM auth popup boxes
On Thu, Dec 13, 2007, Elvar wrote: Based on your suggestion to try and monitor how busy Squid is I followed the directions at http://www.squid-cache.org/~wessels/squid-rrd/ to produce some graphs. Have you by chance played with this monitoring setup? I have the graphs displaying but no actual data inside the graphs. I haven't played with Duane's RRD stuff. Have you run create.sh and set up poll.pl to run every 5 minutes? -- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
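Assuming the squid-rrd scripts were unpacked to /usr/local/squid-rrd (the path is a guess), the polling half is just a crontab entry; create.sh is run once by hand first to create the RRD files:

```
# crontab entry: poll the cache every five minutes
*/5 * * * * /usr/local/squid-rrd/poll.pl
```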
RE: [squid-users] Webmail sites - allow access
Hi Group, I am using squid to block pretty much all web access other than work-related sites. However, I need to open up some of the popular webmail sites. Um, can you see the self-contradiction in that? Popular webmail sites are naturally non-work. If you operate via email you should have company servers to handle that. Well, we don't. And that is a whole other can of worms that doesn't need to be opened on this forum. I was able to get hotmail to work properly with the following ACL. But I am having problems with gmail.com and mail.yahoo. acl WebmailSites dstdomain .ard.yahoo.com .login.yahoo.com .mail.yahoo.com .gmail.com .mail.google.com .google.com/accounts .google.ca/accounts These are NOT dstdomain entries. dstdomain quite naturally matches ONLY a _domain_. The /accounts part is a URL path. Thank-you. I'll experiment with that. .hotmail.com .live.com .passport.com For the gmail site, it won't seem to take the two /accounts entries at all. The yahoo site partially works, but I get 'unable to load javascript' errors. Which would be natural if the javascript sub-includes are located elsewhere. So my best bet is to scan all http headers when logging in, reading mail, and logging out? And then include any and all unique domains and subdomains in the acl? Essentially yes, with some exceptions where the . wildcard is suitable, as you use above. Amos Has anyone got these to work? Care to share your ACLs with me? Thanks in advance! Davan Wong Amos
[squid-users] allow audio on sites in squid
hi guys, Is there a rule to be able to detect most of the audio and video, mostly podcasts, and let them pass through squid? I currently have squid STABLE16 set up in my environment, and we have internal sites that play back .wav files that were automatically loaded in the browser via Windows Media Player. But for some weird reason it gets blocked and people aren't able to hear them. I'm basically looking for a general rule that will pass these MIME types through the proxy. regards, beavis
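If a reply-filtering rule is what's eating the media, a hedged sketch of an explicit allow for common audio/video MIME types follows. The ACL name is invented, the type list is illustrative, and this is only relevant if http_reply_access filtering is configured at all — if it isn't, the block is happening somewhere else (an http_access rule or a content filter):

```
# let common media reply types through before any deny rules
acl mediaReplies rep_mime_type -i ^audio/ ^video/ ^application/x-mplayer2
http_reply_access allow mediaReplies
# keep the default-open behaviour for everything else
http_reply_access allow all
```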
Re: [squid-users] NTLM auth popup boxes
Adrian Chadd wrote: On Thu, Dec 13, 2007, Elvar wrote: Based on your suggestion to try and monitor how busy Squid is I followed the directions at http://www.squid-cache.org/~wessels/squid-rrd/ to produce some graphs. Have you by chance played with this monitoring setup? I have the graphs displaying but no actual data inside the graphs. I haven't played with Duane's RRD stuff. Have you run create.sh and set up poll.pl to run every 5 minutes? I finally got it working. Turns out it was an access-denied issue to the cache itself. I must admit, those are some pretty nice graphs. Let's hope this helps me with finding the overall issue causing those darn popup boxes. The users are pretty frustrated... :) Thanks, Elvar