Re: [squid-users] Transparent Proxy and iTunes/WinAmp
When I try to connect via my browser, this is what I see:

ERROR: The requested URL could not be retrieved

While trying to process the request:

GET /stream/1038 HTTP/1.1
Host: scfire-nyk-aa03.stream.aol.com:80
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive

The following error was encountered:

* Invalid Response

The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed. Your cache administrator is webmaster.

Generated Sat, 05 Apr 2008 04:48:09 GMT by localhost (squid/3.0.PRE5)

Thanks for your help.

On Wed, Apr 2, 2008 at 6:22 PM, Tim Bates <[EMAIL PROTECTED]> wrote:
> I've never had that happen at my place, and I've been running a transparent
> proxy for quite some time.
>
> Could it maybe be that the client is not sending all the headers required?
> What happens if you try to connect to the same streams with a browser
> (Shout/Ice cast streams should load a web page about them)?
>
> TB
>
> Adam Goldberg wrote:
> > Hi --
> >
> > For some reason, whenever clients try to connect to music streams on
> > port 80 through my transparent proxy, they receive the error:
> >
> > HTTP/1.0 502 Bad Gateway
> >
> > I wonder what's going on here. I wonder if iptables can somehow
> > detect the difference between a browser request and a music client
> > request, although they both run on port 80. Or perhaps I need to change
> > something in squid.conf?
> >
> > Thanks for your help,
> > Adam
[squid-users] Mime Content-type missing in access.log.
Hi,

The content-type field in access.log is always the "-" character. The only time I saw "text/html" is when the request is denied.

Linux Mandriva 2008.0
Squid Cache: Version 2.6.STABLE16
configure options: '--build=i586-mandriva-linux-gnu'

some squid.conf parameters:
emulate_httpd_log off
logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %
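The truncated logformat line above appears to be the stock "squid" format, whose last token is %mt, the reply's MIME content type. A sketch of a full definition follows; the format name and log path are examples, and %mt will still log "-" when the reply carries no Content-Type header:

```
logformat withmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
access_log /var/log/squid/access.log withmime
```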
[squid-users] Block file upload
Is it possible to stop people from uploading files using Squid, i.e. is there some way to do an outbound MIME type ACL? I have added these two lines to my squid.conf:

acl fileupload req_mime_type -i ^multipart/form-data$
http_access deny fileupload

Here is my complete conf:

#===
cache_mem 32 MB
cache_mgr [EMAIL PROTECTED]
cache_dir ufs /var/spool/squid 2000 16 256
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
visible_hostname gateway
cache_effective_user squid
cache_effective_group squid
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl CONNECT method CONNECT
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443 563     # https, snews
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
# MIME Filter for File Upload ==
acl fileupload req_mime_type -i ^multipart/form-data$
http_access deny to_localhost
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
# === Block File Upload
http_access deny fileupload all
http_reply_access deny fileupload all
coredump_dir /var/spool/squid
#===

But it does not work! Has anyone used this ACL before, and does anyone have a sample from their conf file?
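A likely culprit for the ACL above: browsers send the upload header as "Content-Type: multipart/form-data; boundary=...", so a regex anchored with a trailing $ never matches. A hedged fix is to drop the anchor (and keep the deny line before any allow rule that could match the same clients):

```
acl fileupload req_mime_type -i ^multipart/form-data
http_access deny fileupload
```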
RE: [squid-users] Unable to access a website through Suse/Squid.
Fri 2008-04-04 13:56 -0400, Terry Dobbs wrote:
> Thanks so much, the advmss worked like a charm. How do I make it so this
> route stays there? When I restart networking it seems to vanish.

Some things first: you should figure out whether the MTU problem is local or remote. As it's mostly you having issues, I would suspect it's local. In that case you should set a lower MSS on the default route to make TCP/IP work better.

How are you connected to the Internet? ADSL with PPPoE, or some other tunneling method which has a lower MTU than the default 1500?

How to set the routing is quite distribution dependent, and I am not very familiar with SuSE. But on the good side, you can use iptables to achieve the same thing, or maybe rules in your router.

Regards
Henrik
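Henrik's iptables alternative can be sketched like this; the rule lasts only until reboot unless saved by your distribution's firewall scripts, and the 496 value is just the one from the route example in this thread:

```
# Clamp TCP MSS on forwarded connections to the path MTU:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu
# ...or pin an explicit value instead:
# iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
#          -j TCPMSS --set-mss 496
```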
Re: [squid-users] newbie syslog.conf and coss questions
Amos Jeffries wrote:
> B. Cook wrote:
>> Morning all,
>> 2) regarding syslog.conf and file rotation
>> I know I can set up -k rotate from cron and that will rotate squid's
>> current cache_log, but how would I compress it?
>
> To compress it you would need a wrapper script which runs the -k rotate,
> then compresses the resulting access_log.0 into a new filename (usually
> dated files are a good idea).

For posterity's sake, here's the script that I use:

#!/bin/sh
# Tell squid to rotate logs
/usr/local/squid/sbin/squid -k rotate
# Situate ourselves in the log directory
cd /usr/local/squid/logs/
# Move the old logs (overwriting number 5)
mv -f access.log.4.gz access.log.5.gz
mv access.log.3.gz access.log.4.gz
mv access.log.2.gz access.log.3.gz
mv access.log.1.gz access.log.2.gz
mv access.log.0.gz access.log.1.gz
# Compress the most recent squid log
/bin/gzip access.log.0

It's nothing fancy, but it gets the job done. Just make sure your logfile_rotate (in squid.conf) is not set to 0.

>> I have looked back through the gmane archives of squid-users and I do
>> not see anyone that answered this question directly.. and even a google
>> search didn't turn up much of anything useful.
>> Or am I just missing something basic with a syslog.conf parameter?
>> thanks in advance and I am sorry if another form of these questions
>> were asked/answered; I couldn't find anything like them.
>> again thanks in advance

Chris
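As a variant on the numbered scheme above, here is a sketch of the "dated files" idea Amos mentions, written as a shell function so the log directory stays configurable; the squid -k rotate call is left commented out since it only makes sense on a live proxy, and all paths are examples:

```shell
# Rotate-and-archive sketch: after `squid -k rotate` produces
# access.log.0, stamp it with today's date and compress it.
rotate_dated() {
    logdir=$1                      # e.g. /usr/local/squid/logs
    stamp=$(date +%Y%m%d)
    # /usr/local/squid/sbin/squid -k rotate   # uncomment on a real proxy
    cd "$logdir" || return 1
    if [ -f access.log.0 ]; then
        mv access.log.0 "access.log.$stamp"
        gzip -f "access.log.$stamp"
    fi
}
```

Dated archives never overwrite each other, so no renumbering chain is needed; old files can be pruned with find -mtime instead.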
Re: [squid-users] COSS problem on Squid 2.6.Stable19
da duy wrote:
> Dear Squid-Users,
> I'm currently having a problem installing squid on Ubuntu 7.10. I've
> compiled it with these options:
> sudo ./configure --enable-storeio=coss,ufs,aufs,diskd -with-large-files
> --enable-delay-pools --enable-snmp --enable-removal-policies=heap,lru
> --enable-auth=ntlm,basic --enable-external-acl-helpers=ip_user,ldap_group
>
> I create the cache file first with this command:
> sudo dd if=/dev/sda bs=1048576 count=5000 of=/usr/local/squid/var/cache

So this file is owned by root. Likely it's not group or world writable (it shouldn't be), so Squid can't modify it.

> and I add these lines to squid.conf:
> cache_dir coss /usr/local/squid/var/cache 5000 block-size=512 max-size=524288
> cache_swap_log /usr/local/squid/%s
>
> When I run sudo squid -k parse everything is fine, but when I run
> sudo squid -z I get this:
> 2008/04/04 21:46:04| Creating Swap Directories
> FATAL: Failed to create COSS stripe /usr/local/squid/var/cache

Yup. Give the cache_effective_user ownership of this file and try again.

> Squid Cache (Version 2.6.STABLE19): Terminated abnormally.
> CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
>
> I've tried googling but could not get an explanation of the case. Any
> help would really be appreciated.
> Thanks and regards,
> Yudi

Chris
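The ownership fix Chris describes, as concrete commands; "proxy" here is a placeholder for whatever cache_effective_user your squid.conf sets (source builds default to "nobody"):

```
sudo chown proxy:proxy /usr/local/squid/var/cache
sudo squid -z    # retry creating the COSS stripe
```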
RE: [squid-users] Unable to access a website through Suse/Squid.
Thanks so much, the advmss worked like a charm. How do I make it so this route stays there? When I restart networking it seems to vanish.

-----Original Message-----
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 04, 2008 1:13 PM
To: Terry Dobbs
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Unable to access a website through Suse/Squid.

tor 2008-04-03 klockan 12:36 -0400 skrev Terry Dobbs:
> Also, the second command gives me an error and says "mss" is garbage.

Sorry, should be advmss:

/sbin/ip route add 63.148.24.5 via your.internet.gateway advmss 496

To replace an already existing route, use replace instead of add.

Regards
Henrik
Re: [squid-users] ICAP: fake user and new icap header X-Authenticated-Groups
I've experience with Windows 2003 ADS (also Windows NT domain) and Squid 2.5/2.6. I read Windows groups and manage them with several ACLs. It works without problem.

Environment:
OS: CentOS 4.4/5.1
Samba: 3.0.xx
Squid: 2.5/2.6

By the way, I've been suffering BC for many years and I hate it.

--- Arno _ <[EMAIL PROTECTED]> wrote:
> Hello,
> In my configuration I have 2 Bluecoat proxies talking to a webwasher
> via ICAP. And I also have a squid 3.0 for my tests and as a backup of
> the Bluecoat. But my squid is not doing any authentication; I can't
> and don't want to.
>
> So, to be able to make it work with the ICAP (webwasher), I need to
> send username and user group to it.
>
> Is there any actual way of sending fake information, or should I
> create a new icap-fake-client-username and icap-fake-client-group in
> the icap config part of squid.conf?
> Anyone interested, or will it be just for me?
>
> That will let me (you, anyone) have a mixed environment with
> authenticated proxies and some others not (can be for automated
> systems or whatever you want).
>
> regards,
> arno
RE: [squid-users] Unable to access a website through Suse/Squid.
Thu 2008-04-03 12:36 -0400, Terry Dobbs wrote:
> Also, the second command gives me an error and says "mss" is garbage.

Sorry, should be advmss:

/sbin/ip route add 63.148.24.5 via your.internet.gateway advmss 496

To replace an already existing route, use replace instead of add.

Regards
Henrik
Re: [squid-users] Pegging CPU with epoll_wait
Henrik Nordstrom wrote:
>> restarting squid (maybe a few hours or a few days), it starts pegging
>> the CPU at 100%. Running strace on the squid processes scrolls:
>> epoll_wait(3, {}, 256, 0) = 0
>> as fast as my screen will scroll. Restarting squid makes it settle
>> down again for a while.
> ... then file a bug report and attach your cache.log file.

It happened again, and bug #2296 has been filed. FYI, squidclient reports:

# squidclient -p 80 mgr:events
HTTP/1.0 200 OK
Server: squid/2.6.STABLE6
Date: Fri, 04 Apr 2008 16:41:22 GMT
Content-Type: text/plain
Expires: Fri, 04 Apr 2008 16:41:22 GMT
Last-Modified: Fri, 04 Apr 2008 16:41:22 GMT
X-Cache: MISS from revproxy.bryanlgh.org
X-Cache-Lookup: MISS from revproxy.bryanlgh.org:80
Via: 1.0 revproxy.bryanlgh.org:80 (squid/2.6.STABLE6)
Connection: close

Last event to run: storeClientCopyEvent

Operation                  Next Execution         Weight  Callback Valid?
storeClientCopyEvent       0.00 seconds           0       yes
storeClientCopyEvent       0.00 seconds           0       yes
storeClientCopyEvent       0.00 seconds           0       yes
MaintainSwapSpace          0.449644 seconds       1       N/A
ipcache_purgelru           5.780025 seconds       1       N/A
fqdncache_purgelru         9.699383 seconds       1       N/A
storeDirClean              13.604307 seconds      1       N/A
statAvgTick                43.007845 seconds      1       N/A
peerClearRR                102.739505 seconds     0       yes
peerClearRR                102.739505 seconds     0       yes
peerClearRR                102.739505 seconds     0       yes
peerClearRR                102.739505 seconds     0       yes
peerClearRR                102.739505 seconds     0       yes
peerClearRR                102.739505 seconds     0       yes
peerClearRR                102.739505 seconds     0       yes
peerRefreshDNS             2868.335948 seconds    1       N/A
User Cache Maintenance     3102.819286 seconds    1       N/A
storeDigestRebuildStart    3103.252835 seconds    1       N/A
storeDigestRewriteStart    3103.278672 seconds    1       N/A
peerDigestCheck            141057.508629 seconds  1       yes

Ben Hollingsworth, Systems Programmer, BryanLGH Health System, Lincoln, NE
Re: [squid-users] Dub with access.log rotation ...
Squid, like much other Linux software, has its logs rotated by logrotate, the daemon responsible for many log rotations. You should modify logrotate.conf or /etc/logrotate.d/squid, which holds the logrotate configuration for squid.

--- Ramiro Sabastta <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I installed squid on a Debian box.
>
> Everything is working well, but I have an issue that I can't solve.
>
> The access.log file is always closed and a new one opened at 6 A.M.
>
> I tried to change that with the squid -k rotate option, including this
> line in the crontab file:
>
> 0 0 * * * /usr/sbin/squid -k rotate
>
> but with this configuration the file is closed at 0 AM and at 6 AM too.
>
> What can I do to force the rotation only at 0 AM (and not at 6 AM)?
>
> Thanks a lot ...
>
> Kind regards !!
>
> Ramiro
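If the Debian package did register squid with logrotate, the file to edit usually looks something like this sketch (paths and retention here are illustrative, not Debian's exact shipped file):

```
/var/log/squid/*.log {
    daily
    rotate 5
    compress
    missingok
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}
```

Removing or adjusting that file stops the 6 AM rotation, leaving only the midnight cron entry.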
Re: [squid-users] Access denied without password request
[EMAIL PROTECTED] wrote:
> Hello
> I need to deny access to anywhere except www.rbc.ru. I wrote this acl:
>
> acl acl-pupkin proxy_auth pupkin
> acl acl-pupkin-allow dstdom_regex rbc.ru

Ew, regex. Use this instead:
acl acl-pupkin-allow dstdomain rbc.ru

> http_access allow acl-pupkin acl-pupkin-allow
> http_access deny acl-pupkin
>
> It's working. But on that site, as on a lot of others, there are some
> banners, counters, etc. which point to other sites. And the proxy asks
> for a password on any navigation within the allowed site. Is it
> possible to fully deny access without any permanent password requests?

Yes, using any of the many other ACL types which do not involve auth. You will have to pick the criteria yourself. Here is a useful list of the ACL types available in current squid releases:

http://www.squid-cache.org/Versions/v2/2.6/cfgman/acl.html
http://www.squid-cache.org/Versions/v3/3.0/cfgman/acl.html

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4
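One non-auth arrangement along the lines Amos suggests, as a sketch; the leading dot makes dstdomain match subdomains too, and the list would need extending with whatever banner/counter domains the site pulls in:

```
acl rbc dstdomain .rbc.ru
http_access allow rbc
http_access deny all
```

With this, requests to rbc.ru need no credentials at all, so no password prompts appear; everything else is simply denied.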
Re: [squid-users] newbie syslog.conf and coss questions
B. Cook wrote:
> Morning all,
> I am running squid 2.6.stable18 from FreeBSD ports. (FreeBSD 6.x and 7.x)
> I have two questions..
> 1) regarding coss.
> I have enabled coss as shown in the squid faq and all seems to be working
> wonderfully. I had some questions about the cache_swap_log so I was
> reading squid.conf.default in my squid dir (/usr/local/etc/squid, again
> from FreeBSD ports..) I found that it said that cache_swap_log was
> mandatory.. I also saw

That is flat wrong.
cache_swap_log - debugging log, useful only if you have filesystem problems you need to debug.
cache_log - cache operational log. Needed for critical errors and general problem warnings.
access_log - needed to keep track of squid-processed requests.

None of which are mandatory. But the last two are recommended for a variety of reasons, administrative usefulness being a big one.

> 2) regarding syslog.conf and file rotation
> I know I can set up -k rotate from cron and that will rotate squid's
> current cache_log, but how would I compress it?

To compress it you would need a wrapper script which runs the -k rotate, then compresses the resulting access_log.0 into a new filename (usually dated files are a good idea).

> I have looked back through the gmane archives of squid-users and I do
> not see anyone that answered this question directly.. and even a google
> search didn't turn up much of anything useful.
> Or am I just missing something basic with a syslog.conf parameter?
> thanks in advance and I am sorry if another form of these questions
> were asked/answered; I couldn't find anything like them.
> again thanks in advance

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4
Re: [squid-users] Dub with access.log rotation ...
Ramiro Sabastta wrote:
> Hi,
> I installed squid on a Debian box.
> Everything is working well, but I have an issue that I can't solve.
> The access.log file is always closed and a new one opened at 6 A.M.
> I tried to change that with the squid -k rotate option, including this
> line in the crontab file:
> 0 0 * * * /usr/sbin/squid -k rotate
> but with this configuration the file is closed at 0 AM and at 6 AM too.
> What can I do to force the rotation only at 0 AM (and not at 6 AM)?

It sounds a lot like you have the squid logs registered with logrotate (check for a /etc/logrotate.d/squid file). This runs around 6-ish every day. Otherwise it may be another crontab elsewhere doing it.

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4
[squid-users] Dub with access.log rotation ...
Hi,

I installed squid on a Debian box.

Everything is working well, but I have an issue that I can't solve.

The access.log file is always closed and a new one opened at 6 A.M.

I tried to change that with the squid -k rotate option, including this line in the crontab file:

0 0 * * * /usr/sbin/squid -k rotate

but with this configuration the file is closed at 0 AM and at 6 AM too.

What can I do to force the rotation only at 0 AM (and not at 6 AM)?

Thanks a lot ...

Kind regards !!

Ramiro
[squid-users] newbie syslog.conf and coss questions
Morning all,

I am running squid 2.6.stable18 from FreeBSD ports. (FreeBSD 6.x and 7.x)

I have two questions..

1) regarding coss.

I have enabled coss as shown in the squid faq and all seems to be working wonderfully. I had some questions about the cache_swap_log so I was reading squid.conf.default in my squid dir (/usr/local/etc/squid, again from FreeBSD ports..) I found that it said that cache_swap_log was mandatory.. I also saw that FreeBSD needs to have VFS_AIO in the kernel or kldload aio.

I have built squid with aufs as well.. is building aufs better/worse than using the kernel module AIO? I would think a kernel module would offer better performance?

I didn't have aio loaded but I did have aufs built, and I didn't have the cache_swap_log.. and it all seemed to work. (this is a log excerpt from my local testing machine)

2008/04/04 08:46:14| Finished rebuilding storage from disk.
2008/04/04 08:46:14|  6580 Entries scanned
2008/04/04 08:46:14|  0 Invalid entries.
2008/04/04 08:46:14|  0 With invalid flags.
2008/04/04 08:46:14|  6558 Objects loaded.
2008/04/04 08:46:14|  0 Objects expired.
2008/04/04 08:46:14|  0 Objects cancelled.
2008/04/04 08:46:14|  0 Duplicate URLs purged.
2008/04/04 08:46:14|  0 Swapfile clashes avoided.
2008/04/04 08:46:14| Took 13.3 seconds ( 492.4 objects/sec).
2008/04/04 08:46:14| Beginning Validation Procedure
2008/04/04 08:46:14| COSS: /backups/stripe: Rebuild Completed

Again, if this is mandatory.. why did it start (and seemingly work) without it? And what will or should I notice with it?

2) regarding syslog.conf and file rotation

I know I can set up -k rotate from cron and that will rotate squid's current cache_log, but how would I compress it?

I have looked back through the gmane archives of squid-users and I do not see anyone that answered this question directly.. and even a google search didn't turn up much of anything useful.

Or am I just missing something basic with a syslog.conf parameter?

thanks in advance, and I am sorry if another form of these questions was asked/answered; I couldn't find anything like them.

again thanks in advance
[squid-users] ICAP: fake user and new icap header X-Authenticated-Groups
Hello,

In my configuration I have 2 Bluecoat proxies talking to a webwasher via ICAP. And I also have a squid 3.0 for my tests and as a backup of the Bluecoat. But my squid is not doing any authentication; I can't and don't want to.

So, to be able to make it work with the ICAP (webwasher), I need to send username and user group to it.

Is there any actual way of sending fake information, or should I create a new icap-fake-client-username and icap-fake-client-group in the icap config part of squid.conf? Anyone interested, or will it be just for me?

That will let me (you, anyone) have a mixed environment with authenticated proxies and some others not (can be for automated systems or whatever you want).

regards,

arno
Re: [squid-users] Limiting download size
Tag Name: reply_body_max_size
Usage: reply_body_max_size (KB)
Description:
This option specifies the maximum size of a reply body. It can be used to prevent users from downloading very large files, such as MP3s and movies. The reply size is checked twice. First, when we get the reply headers, we check the content-length value. If the content-length value exists and is larger than this parameter, the request is denied and the user receives an error message that says "the request or reply is too large." If there is no content-length and the reply size exceeds this limit, the client's connection is just closed and they will receive a partial reply.
Default: reply_body_max_size 0
If this parameter is set to zero (the default), there will be no limit imposed.

piyush joshi escreveu:
> Dear All,
> I want to only allow users of the squid server to download files from
> the internet which are less than 5 MB in size, but I do not know how to
> do this. Please help me so that I can add this feature to my squid
> server.
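A sketch for the 5 MB limit asked about. Beware that the units and syntax shifted between releases: the excerpt above says KB, while Squid 2.6 takes bytes followed by an ACL list, so check the squid.conf.default shipped with your version before copying this:

```
# 5 MB = 5242880 bytes; Squid 2.6-style syntax with an ACL list
acl all src 0.0.0.0/0.0.0.0
reply_body_max_size 5242880 allow all
```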
[squid-users] Access denied without password request
Hello

I need to deny access to anywhere except www.rbc.ru. I wrote this acl:

acl acl-pupkin proxy_auth pupkin
acl acl-pupkin-allow dstdom_regex rbc.ru
http_access allow acl-pupkin acl-pupkin-allow
http_access deny acl-pupkin

It's working. But on that site, as on a lot of others, there are some banners, counters, etc. which point to other sites. And the proxy asks for a password on any navigation within the allowed site. Is it possible to fully deny access without any permanent password requests?

with best regards,
AGP
[squid-users] ym through squid-2.6.STABLE6-5.el5_1.2
Hello squid experts,

I am using squid with basic pam_mysql auth. All is working fine. I have only one problem when our users try to use Yahoo Messenger (YM). From time to time they are suddenly disconnected and have to log in again. I am doing some QoS on our squid server, and of course all proxy traffic is shaped. There are a few users which have dedicated bandwidth, and the rest all go into shared bandwidth. I know that YM is buggy and consumes bandwidth (with advertisements). With pidgin, all is working fine, but our users complain about its silly design.

So, my question is: is there any quick fix to solve the YM relogin problem? I tried configuring their IE browsers to use "HTTP 1.1" and "HTTP 1.1 through proxy connections", but that doesn't seem to fix the described problem (if not enough bandwidth is available, they are disconnected when using YM). Any ideas?

Regards,
Alx
[squid-users] COSS problem on Squid 2.6.Stable19
Dear Squid-Users,

I'm currently having a problem installing squid on Ubuntu 7.10. I've compiled it with these options:

sudo ./configure --enable-storeio=coss,ufs,aufs,diskd -with-large-files --enable-delay-pools --enable-snmp --enable-removal-policies=heap,lru --enable-auth=ntlm,basic --enable-external-acl-helpers=ip_user,ldap_group

I create the cache file first with this command:

sudo dd if=/dev/sda bs=1048576 count=5000 of=/usr/local/squid/var/cache

and I add these lines to squid.conf:

cache_dir coss /usr/local/squid/var/cache 5000 block-size=512 max-size=524288
cache_swap_log /usr/local/squid/%s

When I run sudo squid -k parse everything is fine, but when I run sudo squid -z I get this:

2008/04/04 21:46:04| Creating Swap Directories
FATAL: Failed to create COSS stripe /usr/local/squid/var/cache
Squid Cache (Version 2.6.STABLE19): Terminated abnormally.
CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0

I've tried googling but could not get an explanation of the case. Any help would really be appreciated.

Thanks and regards,
Yudi
Re: [squid-users] what to block from the proxy to speed up?
I'm a little unclear on your goals - you wish to speed up your net access by blocking things?

A couple of things to think about:
* is your line congested?
* are DNS lookups working correctly?
* have you looked at things like ACK prioritisation on your router/firewall?

If you want to continue on your current course, have a look at adzap - it replaces many of the common banners/ads with blank images served off your local LAN. Quick, simple, and easy to implement.

Barry

Rakotomandimby Mihamina wrote, On 2008/04/04 07:41 AM:
> Hi,
> I want to speed up the web surfing of my LAN. I have a squid running on
> the gateway, not transparent (people choose to use the proxy or not).
> I would like to know if there is a list of url patterns to "block" in
> order to have more fluent surfing. For example, I already "blocked":
> - googlesyndication
> - google-analytics
> I should also do the same for Xiti and other popular slowness tools.
> But maybe there is already a working list on the net... Would you know?
> Thanks.
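The URL-pattern blocking described above can be sketched in squid.conf like this; the pattern list is just the examples from this thread, not a vetted blocklist:

```
acl ads dstdom_regex -i googlesyndication google-analytics xiti
http_access deny ads
```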
[squid-users] per user quota
Hi folks, I would like to know if there is any plan for adding a per-user quota (per hour/day) to a future squid version.