Re: [squid-users] High availability based on Squid process

2008-04-16 Thread ??????????? ???????

BJ Tiemessen wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Look at Linux HA (www.linux-ha.org), very nice software.

BJ

Nick Duda wrote:
| I might need to take this elsewhere, but curious if anyone is doing
this already.
|
| I need to have a failover Squid proxy server in the event the primary
goes down... when I say down, I mean Squid is not working. Are there any
linux high-availability (fault tolerance) software solutions that would
fail over if the squid process is not running?
|
| - Nick

- --
BJ Tiemessen
eSoft Inc.
303-444-1600 x3357
[EMAIL PROTECTED]
www.eSoft.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFIBO9oxD4S8yzNNMMRAqTHAJ0fXLOxQgA1ney43aoNh19MjwBjegCfQ10I
l3AEH0WOEf7bhxRUJ+BxkKM=
=fXMa
-END PGP SIGNATURE-

For example, a script that checks whether the squid process is running
or not. Run this script from cron; see the sketch below.
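A minimal sketch along those lines (the init-script path, process name
and script location are assumptions for a typical Linux install):

  #!/bin/sh
  # squid watchdog: restart squid if the process has died
  if ! pidof squid >/dev/null 2>&1; then
      logger "squid watchdog: squid process not found, restarting"
      /etc/init.d/squid start
  fi

Run it every minute from cron, e.g.:

  * * * * * /usr/local/sbin/squid-watchdog.sh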


Re: [squid-users] Accessing cachemgr.cgi

2008-04-16 Thread hdkutz
On Tue, Apr 15, 2008 at 01:53:46PM -0800, Chris Robertson wrote:
 hdkutz wrote:
 Hello List,
 pretty new to squid 3.0.
 Tried to configure cachemgr.cgi.
 Problem:
 Squid is not listening on its standard port 3128.
 It is configured to listen on port 80.
 The Apache webserver is configured to use port 3128.
 If I try to access http://proxy:3128/cgi-bin/cachemgr.cgi I get
 snip
 connect 127.0.0.1:80: (111) Connection refused
 snip
 
 snipy
 [EMAIL PROTECTED] etc]# grep manager squid.conf
 acl manager proto cache_object
 http_access allow manager localhost 
 http_access deny manager
 [EMAIL PROTECTED] etc]# grep localhost squid.conf
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 http_access allow manager localhost
 http_access allow localhost
 [EMAIL PROTECTED] etc]# grep 127.0.0.1 cachemgr.conf 
 127.0.0.1
 127.0.0.1:80
 snipy
 
 Am I missing something?
   
 
 My guess would be that either you have specified an IP address on the 
 port line of your squid.conf, which forces Squid to only bind to the 
 interface where that IP is assigned, or something is preventing local 
 communication (be it SELinux, firewall rules...).
 
 Chris
Thanks for your suggestion.
No SELinux, no firewall rules.
You are right: squid was only listening on one IP.
Reconfigured squid to listen on 127.0.0.1:80 as well.
Now I get:
snip
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: cache_object://127.0.0.1/

The following error was encountered:

* Access Denied.

  Access control configuration prevents your request from being allowed at
this time. Please contact your service provider if you feel this is incorrect. 

Your cache administrator is webmaster.
Generated Wed, 16 Apr 2008 08:04:05 GMT by proxy (squid/3.0.STABLE4) 
snip
It seems to me that an ACL is missing.
But acl localhost is already there (see above).
Is this ACL missing something?

Cheers,
ku
-- 
Jabba the Hutt:
Bring me Solo and the Wookiee! They will all suffer
for this outrage.


Re: [squid-users] How to check the cache_peer sibling is working?

2008-04-16 Thread Henrik Nordstrom
ons 2008-04-16 klockan 12:24 +0800 skrev John Lui:
 in squid.default.conf it says that setting icp_query_timeout 0 (the
 default) will automatically determine an optimal ICP query timeout
 value based on the round-trip time.
 Doesn't that mean icp_query_timeout 0 is better than icp_query_timeout 500?

When round-trip measurement works, yes. It doesn't if the servers are
too close to each other (too many rounding errors).
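In that case pinning the timeout is a one-line change; a minimal sketch
(the value is in milliseconds):

  icp_query_timeout 500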

In Squid-3 there is a lower limit. I thought this was in Squid-2 as well,
but apparently it has been lost.. (now resurrected)

Regards
Henrik



[squid-users] Only one outgoing connection for an incoming connection

2008-04-16 Thread Roman Aspetsberger

Hello.



I would like to set up the following environment:

Client -- ProxyIN -- Squid -- ProxyOUT -- Webserver

The problem is that ProxyIN adds some request headers which ProxyOUT
needs. So, is there a way to configure Squid so that it opens only one
outgoing connection per incoming connection and passes on all original
request header fields?





Greets,

Roman



[squid-users] cross-domain in Active Directory 2003 with Squid

2008-04-16 Thread Martin . Steiner
Hello!

I have already spent two weeks trying to install Squid 2.6.STABLE18 for 
Windows. What I want is the following:

I created a group in the Active Directory with the name InternetUsers, 
group scope Domain local, group type Security. The Domain local group 
scope is mandatory because we have AD trusts with other divisions, and 
users from those domains need to log in to the Internet through my 
Squid. An example:

User in this group:

mydomain1\testuser
mydomain2\testuser
mydomain3\testuser

Result of my configuration:

Only the mydomain1 users can log in successfully with the proxy settings. 
The others get a DENIED from squid. So please, can somebody help 
me with my specific problem?

Here are my settings and configurations:

My System:

Windows Server 2003 Standard Edition SP2
2.3 GHZ
512 MB-RAM
8 GByte - HDD
no other services are running
is in domain mydomain1
(Is installed on VMWare ESX-Server)

AD-Server:

Active Directory 2003

Squid Configuration:

Installed the Squid service with these cmd instructions:
C:\squid\sbin\squid.exe -i -f C:/squid/etc/squid.conf -n Squid1
and
C:\squid\sbin\squid.exe -z -f C:/squid/etc/squid.conf
to create the cache.

After that I changed the squid.conf file:

auth_param basic program C:/squid/libexec/squid_ldap_auth.exe -R -b 
dc=stec-01,dc=s-tec -D cn=Administrator,cn=Users,dc=stec-01,dc=s-tec 
-w password -f sAMAccountName=%s -h 172.27.208.59 -p 3268
auth_param basic children 5
auth_param basic realm Squid Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

external_acl_type InetGroup %LOGIN C:/squid/libexec/squid_ldap_group.exe 
-R -b dc=mydomain,dc=at -D cn=Administrator,cn=Users,dc=mydomain,dc=at 
-w password -f 
(&(objectclass=person)(sAMAccountName=%v)(memberof=CN=%a,OU=Groups,DC=mydomain,DC=at)) 
-h 172.27.208.59 -p 3268

acl localMAGNA dstdomain .mydomain1.at .mydomain2.at .mydomain3.at
acl localnet proxy_auth REQUIRED
acl ProxyUsers external InetGroup InternetUsers

http_access allow localMAGNA
http_access allow ProxyUsers

First I tried to make this work with LDAP; the same happens with ntlm.

Thank you very much in advance for your help.

With kind regards
Martin


Re: [squid-users] Only one outgoing connection for an incoming connection

2008-04-16 Thread Amos Jeffries

Roman Aspetsberger wrote:

Hello.

I would like to set up the following environment:

Client -- ProxyIN -- Squid -- ProxyOUT -- Webserver

The problem is that ProxyIN adds some request headers which ProxyOUT
needs. So, is there a way to configure Squid so that it opens only one
outgoing connection per incoming connection and passes on all original
request header fields?


What you are describing sounds like the default behaviour of Squid.

As long as ProxyIN/OUT are obeying the standards and prefixing their 
custom headers with X-, squid has no reason to strip them out on the 
way through.
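The one thing worth checking is that squid.conf isn't actively denying 
them; a minimal sketch of the kind of rule that would break it (2.6 
syntax, hypothetical header name):

  # a rule like this would strip the header ProxyOUT needs -
  # make sure no such line exists for your custom headers
  header_access X-ProxyIN-Token deny all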


Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] control bandwidth usage on internal LAN?

2008-04-16 Thread Dave Augustus
On Sunday 09 March 2008 9:49:49 pm Chuck Kollars wrote:
 How can I prioritize traffic on my _internal_ LAN (or
 to use different words the _other_ side of Squid)?

OK

 The first request for a very large file uses some
 amount of drop bandwidth which I can control with
 things like delay_pools. But the second request is
 answered out of the cache, at disk speed, and
 saturates my LAN. I'd much rather respond smartly to
 the 20 other users by making the large file go to the
 back of the line. How can I turn down the priority of
 large file responses from the cache?


What you need is QoS. Check out Zeroshell.net.

We installed this 2 months ago and never looked back. We use it in transparent 
bridge mode.

Dave


[squid-users] Marking Cached traffic..

2008-04-16 Thread Stephan Viljoen

HI There,

I was wondering whether it's possible to mark cached traffic with a different 
TOS than uncached traffic. I need to come up with a way of passing cached 
traffic through our bandwidth manager without taxing the end user for it, 
basically giving them the full benefit of the proxy server.


Thanks in advance
-steph 






[squid-users] authentication and banners

2008-04-16 Thread Serj A. Androsov
Hello all,

I use basic authentication and want to open full access to some site,
blabla.com.

The blabla.com page can contain images from different sites. The
authentication window appears because these banners come from
different site URLs.

Can someone explain how I can solve this?


Re: [squid-users] Marking Cached traffic..

2008-04-16 Thread Adrian Chadd
On Wed, Apr 16, 2008, Stephan Viljoen wrote:
 HI There,
 
 I was wondering whether it's possible to mark cached traffic with a 
 different TOS than uncached traffic. I need to come up with a way of 
 passing cached traffic through our bandwidth manager without taxing the end 
 user for it, basically giving them the full benefit of the proxy server.

There's the http://zph.bratcheda.org/ stuff.

I'm probably going to roll it into my private tree after I've stabilised
the codebase. Someone's asked me about it.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Clock sync accuracy importance?

2008-04-16 Thread K K
Have you considered running one of the machines as an NTP server and
having the others sync their clocks to it?

On 4/14/08, Jon Drukman [EMAIL PROTECTED] wrote:
 Should I throw an Expires header in there?

Yes, explicit 'Expires' headers help squid make smarter decisions.

If you know an object is going to be good indefinitely (e.g. a GIF for
a logo), then setting a very long expiration date will ensure squid
doesn't bother checking with the origin server.
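In Apache terms that's mod_expires; a minimal sketch (assuming the
module is enabled; the type and lifetime are only examples):

  ExpiresActive On
  ExpiresByType image/gif "access plus 1 year"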

You might want to also reconsider 'Cache-Control: max-age=300'


Kevin


[squid-users] Squid Problem

2008-04-16 Thread Michael J. Perrone
Hello,

My company uses Squid as a proxy server for select users within our
organization. This solution has been functioning without incident for
approximately 1.5 years. Recently (within the past three weeks), we have
been experiencing problems where the software fails to serve webpages.
Approximately 60-90 seconds after a web request, the software reports a
DNS error and times out. The only fix we can initiate is to restart the
fedora 7 server on which Squid resides. The error message is confusing
because we can successfully resolve hosts from the machine with its IP
settings (in other words, I do not believe it is a DNS issue). HELP!
This is becoming a real problem for us.



Also, the following is an excerpt from the cache.log. These log entries
coincide with the issues we are experiencing. The following is only an
example; each time the problem occurs, the client and the web request are
unique.

WARNING: Closing client 192.168.2.231 connection due to lifetime timeout
2008/04/16 10:10:48|  cld.countrywide.com:443
2008/04/16 10:10:48| WARNING: Closing client 192.168.0.136 connection
due to lifetime timeout
2008/04/16 10:10:48|
http://oascentral.yellowpages.com/RealMedia/ads/adstream_sx.ads/anywho.c
om/[EMAIL PROTECTED]CAT=null
2008/04/16 10:10:48| WARNING: Closing client 192.168.2.59 connection due
to lifetime timeout
2008/04/16 10:10:48|  http://realestate.yahoo.com/Homevalues
2008/04/16 10:10:48| WARNING: Closing client 192.168.0.90 connection due
to lifetime timeout
2008/04/16 10:10:48|  tmsservice.calyxsoftware.com:443

Any help is greatly appreciated!


Michael J. Perrone
Network Engineer / Systems Administrator
Foundation Financial Group, LLC

904-861-1740 - Direct
866-659-3200 - Ext. 3740 - Toll Free 
904-861-1702 - Fax

PRIVILEGED AND CONFIDENTIAL: This communication, including attachments,
is for the exclusive use of addressee and may contain proprietary,
confidential and/or privileged information. If you are not the intended
recipient, any use, copying, disclosure, dissemination or distribution
is strictly prohibited. If you are not the intended recipients, please
notify the sender immediately by return e-mail, delete this
communication and destroy all copies.




[squid-users] Re: Clock sync accuracy importance?

2008-04-16 Thread Jon Drukman

K K wrote:

Have you considered running one of the machines as an NTP server, have
the others sync their clock to that?


no, one of the machines is shared hosting so i don't have access to run 
my own ntpd on it.



Yes, explicit 'Expires' headers help squid make smarter decisions.

If you know an object is going to be good indefinitely (e.g. a GIF for
a logo), then setting a very long expiration date will ensure squid
doesn't bother checking with the origin server.

You might want to also reconsider 'Cache-Control: max-age=300'


reconsider in what way?  the pages i am most interested in 
cache-controlling are news hub pages, and they should be good for 5 
minutes, tops.  otherwise the cached version is in danger of falling too 
far behind the 'real' news feed.


i guess i don't really understand the difference between doing Expires: 
now plus 5 minutes (in apache speak) and Cache-Control: max-age=300
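my best guess, sketched as one response carrying both (hypothetical 
values): max-age is relative to when the response was generated, so 
clock skew between origin and cache shouldn't matter much, while 
Expires is an absolute date that needs the clocks to roughly agree:

  HTTP/1.1 200 OK
  Date: Wed, 16 Apr 2008 08:00:00 GMT
  Cache-Control: max-age=300
  Expires: Wed, 16 Apr 2008 08:05:00 GMT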


-jsd-



Re: [squid-users] Squid Problem

2008-04-16 Thread Kinkie
On Wed, Apr 16, 2008 at 6:57 PM, Michael J. Perrone
[EMAIL PROTECTED] wrote:
 Hello,

  My company uses Squid as a proxy server for select users within our
  organization. This solution has been functioning without incident for
  approximately 1.5 years. Recently (within the past three weeks), we have
  been experiencing problems where the software fails to serve webpages.
  Approximately 60-90 seconds after a web request, the software reports a DNS
  error and times out. The only fix we can initiate is to restart the
  fedora 7 server on which Squid resides. The error message is confusing
  because we can successfully resolve hosts on the machine with its IP
  settings (in other words, I do not believe it is a DNS issue). HELP!
  This is becoming a real problem for us.

What does the cachemgr DNS page say?

-- 
 /kinkie


Re: [squid-users] Squid2-only plugin from Secure Computing

2008-04-16 Thread Alex Rousskov

On Wed, 2008-04-16 at 11:08 +0800, Adrian Chadd wrote:
 On Wed, Apr 16, 2008, Adam Carter wrote:
 
I think SmartFilter patches the squid source, so it is tied to specific
versions. It certainly adds another option to the configure script.
You can download it for free from SecureComputing's website and have a
look. Sorry I can't be more helpful but I'm not a developer.
   
Smartfilter 4.2.1 works with squid 2.6-17.
   
http://www.securecomputing.com/index.cfm?skey=1326
  
   FYI: We have started talking to Secure Computing regarding Squid3
   compatibility of the SmartFilter plugin. I will keep you updated.
  
  Thanks Alex, good to hear. Hopefully you can come up with a model that will 
  allow us to apply squid bugfixes without compromising SecureComputing 
  support.
 
 Well, we could also talk to them about rolling their existing patches
 into the Squid-2 codebase.

I doubt their existing patches should be included in Squid sources
because they are relatively large and very SmartFilter-specific. It
would be rather unfair to ask Squid developers to maintain that code,
and there are better alternatives to simply dumping parts of custom
filters into Squid (e.g., ICAP and eCAP).

Alex.




Re: [squid-users] Squid2-only plugin from Secure Computing

2008-04-16 Thread Adrian Chadd
On Wed, Apr 16, 2008, Alex Rousskov wrote:

  Well, we could also talk to them about rolling their existing patches
  into the Squid-2 codebase.
 
 I doubt their existing patches should be included in Squid sources
 because they are relatively large and very SmartFilter-specific. It
 would be rather unfair to ask Squid developers to maintain that code,
 and there are better alternatives to simply dumping parts of custom
 filters into Squid (e.g., ICAP and eCAP).

Hm, I haven't looked at the patchset yet - I assume it's in the 53 meg download?

Much like any other submission, it might need some tidying up to be more
suitable. I'm reasonably surprised they haven't approached anyone about trying
to include this work in the mainline Squid tree (to make their lives easier!)
before now.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] authentication and banners

2008-04-16 Thread Кабиольский Евгений

Serj A. Androsov wrote:

Hello all,

I use basic authentication and want to open full access to some site,
blabla.com.

The blabla.com page can contain images from different sites. The
authentication window appears because these banners come from
different site URLs.

Can someone explain how I can solve this?
  

You have something like this in your config:
acl disallowed_users proxy_auth user1
acl banned url_regex -i path_to_file
http_access deny disallowed_users banned

IMHO a better way is to use a redirector (like SquidGuard or squidred) to 
strip the banners out of the page.
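Alternatively, a sketch of the ACL route, assuming the banner hosts can
be listed (the banner domains here are hypothetical):

  # let blabla.com and its banner hosts through without authentication
  acl openhosts dstdomain .blabla.com .bannerhost1.example .bannerhost2.example
  http_access allow openhosts
  # everyone else must authenticate
  acl authusers proxy_auth REQUIRED
  http_access allow authusers
  http_access deny all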


Re: [squid-users] squid-2 fork - cacheboy (again)

2008-04-16 Thread Alex Rousskov
On Wed, 2008-04-16 at 11:22 +0800, Adrian Chadd wrote:

 Well, I'm at that point again, where I'd like to do some large-scale work to
 the Squid-2 tree to fix a whole range of longstanding performance and codebase
 issues to help the codebase move forward. Unfortunately this clashes with the
 general Squid project direction of developing Squid-3.

 So I've decided to fork the Squid-2 project again into another cacheboy
 derivative separate from the Squid project. I'm going to pursue a different
 set of short-term and medium-term goals whilst focusing on maintaining the
 relative maturity of the Squid-2 codebase.

 I wish everyone working on Squid-3 the best of luck for the future.

I also would like to wish Adrian good luck with his fork.

For the record, we did ask Adrian to consider continuing to work within
the Squid project, without placing any restrictions on his activities
(so all the work Adrian mentions above would be accepted), but obviously
Adrian thinks he must fork.

Personally, I do not see enough technical reasons for this fork, but all
my attempts to clarify the motivation behind it or prevent it have
failed.

Alex.




[squid-users] Can I find a performance document for Squid?

2008-04-16 Thread JXu

Hi All,

Is there a performance document for Squid? For example, what is Squid's
throughput?


Thanks,

Forrest



[squid-users] YouTube and other streaming media (caching)

2008-04-16 Thread Ray Van Dolson
Hello all, I'm beginning to implement a Squid setup and am in
particular looking to cache Youtube as it is a significant chunk of our
traffic and we don't want to outright block it (yet).

I'm using squid-2.6.STABLE6 from RHEL 5.1 (latest errata).  I've been
reading around a lot and am seeking a bit of clarification on the
current status of caching youtue and potentially other streaming media.
Specifically:

  * Adrian mentions support for Youtube caching in 2.7 -- which seems
to correspond with this changeset:
  
  http://www.squid-cache.org/Versions/v2/2.7/changesets/11905.patch

Which would seem to be only a configuration file change.  Is there
any reason Youtube caching won't work correctly in my 2.6 version
with a similar setup (and the rewriting script as well I guess)?

  * If there are additional changes to 2.7 codebase that make youtube
caching possible, are they insignificant enough that they could
easily be backported to 2.6?  I'm trying to decide how I will
convince Red Hat to incorporate this as I doubt they'll want to
move to 2.7.  The alternative of course is to build from source, which I
am open to.

My config file is as follows:

  http_port 3128
  append_domain .esri.com
  acl apache rep_header Server ^Apache
  broken_vary_encoding allow apache
  maximum_object_size 4194240 KB
  maximum_object_size_in_memory 1024 KB
  access_log /var/log/squid/access.log squid
  refresh_pattern ^ftp:       1440    20%     10080
  refresh_pattern ^gopher:    1440    0%      1440
  refresh_pattern .           0       20%     4320

  acl all src 0.0.0.0/0.0.0.0
  acl esri src 10.0.0.0/255.0.0.0
  acl manager proto cache_object
  acl localhost src 127.0.0.1/255.255.255.255
  acl to_localhost dst 127.0.0.0/8
  acl SSL_ports port 443
  acl Safe_ports port 80  # http
  acl Safe_ports port 21  # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70  # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl CONNECT method CONNECT
  # Some Youtube ACL's
  acl youtube dstdomain .youtube.com .googlevideo.com .video.google.com 
.video.google.com.au
  acl youtubeip dst 74.125.15.0/24 64.15.0.0/16
  cache allow youtube
  cache allow youtubeip
  cache allow esri

  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost
  http_access allow esri
  http_access deny all
  http_reply_access allow all
  icp_access allow all
  coredump_dir /var/spool/squid

  # YouTube options.
  refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private
  quick_abort_min -1 KB

  # This will block other streaming media.  Maybe we don't want this, but using
  # it for now.
  hierarchy_stoplist cgi-bin ?
  acl QUERY urlpath_regex cgi-bin \?
  cache deny QUERY

I see logfile entries (and cached objects) that indicate my youtube
videos are being saved to disk.  However they are never HIT, even when
the same server is used.  I wonder if the refresh_pattern needs to be
updated?  The GET requests for the video do not have a .flv in their
filename.  What does refresh_pattern match against?  The request
URL?  The resulting MIME type?

That's it for now. :)  Thanks in advance.

Ray


Re: FW: [squid-users] Squid Problem

2008-04-16 Thread Kinkie
On Wed, Apr 16, 2008 at 7:33 PM, Michael J. Perrone
[EMAIL PROTECTED] wrote:
[...]
  when I type the command squidclient mgr:info this is
  the result...   Is this how I get to the cache manager DNS page (the
  machine is not running apache...)?

  [EMAIL PROTECTED] ~]# squidclient mgr:info
  client: ERROR: Cannot connect to localhost:3128: Connection refused

squidclient -h <host> -p <port> -U admin -W <squidmgr_password>
cache_object://<host>/<page>

where you have to replace the bracketed values with the relevant ones. The
squidmgr_password can be found in your squid.conf file (cachemgr_passwd).
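For example, against a local squid on the default port (the password
'secret' is hypothetical):

  squidclient -h localhost -p 3128 -U admin -W secret mgr:info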

Please keep the list in Cc, so that others may jump in and help if they wish to.
-- 
 /kinkie


Re: [squid-users] Squid2-only plugin from Secure Computing

2008-04-16 Thread Henrik Nordstrom
ons 2008-04-16 klockan 11:08 +0800 skrev Adrian Chadd:

 Well, we could also talk to them about rolling their existing patches
 into the Squid-2 codebase.

No, it's some small glue patches and a large binary blob...

Regards
Henrik



Re: [squid-users] Accessing cachemgr.cgi

2008-04-16 Thread Chris Robertson

hdkutz wrote:

Thanks for your suggestion.
No SELinux, no firewall rules.
You are right: squid was only listening on one IP.
Reconfigured squid to listen on 127.0.0.1:80 as well.
Now I get:
snip
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: cache_object://127.0.0.1/

The following error was encountered:

* Access Denied.

  Access control configuration prevents your request from being allowed at
this time. Please contact your service provider if you feel this is incorrect. 


Your cache administrator is webmaster.
Generated Wed, 16 Apr 2008 08:04:05 GMT by proxy (squid/3.0.STABLE4) 
snip

It seems to me that an ACL is missing.
But acl localhost is already there (see above).
Is this ACL missing something?

Cheers,
ku
  


Now the other http_access rules (and their order) become important.  You 
might benefit from perusing the FAQ section on ACLs 
(http://wiki.squid-cache.org/SquidFaq/SquidAcl), especially the 
subsection on troubleshooting ACLs 
(http://wiki.squid-cache.org/SquidFaq/SquidAcl#head-57ab8844e9060937c4a654e1aa7568f87cb25aef).
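For the manager requests specifically, the usual working order is (a
minimal sketch; these lines must come before any broader allow/deny
rules):

  acl manager proto cache_object
  acl localhost src 127.0.0.1/255.255.255.255
  http_access allow manager localhost
  http_access deny manager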


Chris


Re: [squid-users] YouTube and other streaming media (caching)

2008-04-16 Thread Adrian Chadd
The problem with caching Youtube (and other CDN content) is that
the same content is found at lots of different URLs/hosts. This
unfortunately means you'll end up caching multiple copies of the
same content and (almost!) never see hits.

Squid-2.7 -should- be quite stable. I'd suggest just running it from
source. Hopefully Henrik will find some spare time to roll 2.6.STABLE19
and 2.7.STABLE1 soon so 2.7 will appear in distributions.



Adrian

On Wed, Apr 16, 2008, Ray Van Dolson wrote:
 Hello all, I'm beginning to implement a Squid setup and am in
 particular looking to cache Youtube as it is a significant chunk of our
 traffic and we don't want to outright block it (yet).
 
 I'm using squid-2.6.STABLE6 from RHEL 5.1 (latest errata).  I've been
 reading around a lot and am seeking a bit of clarification on the
 current status of caching youtube and potentially other streaming media.
 Specifically:
 
   * Adrian mentions support for Youtube caching in 2.7 -- which seems
 to correspond with this changeset:
   
   http://www.squid-cache.org/Versions/v2/2.7/changesets/11905.patch
 
 Which would seem to be only a configuration file change.  Is there
 any reason Youtube caching won't work correctly in my 2.6 version
 with a similar setup (and the rewriting script as well I guess)?
 
   * If there are additional changes to 2.7 codebase that make youtube
 caching possible, are they insignificant enough that they could
 easily be backported to 2.6?  I'm trying to decide how I will
 convince Red Hat to incorporate this as I doubt they'll want to
 move to 2.7.  Alternate of course is to build from source which I
 am open to.
 
 My config file is as follows:
 
   http_port 3128
   append_domain .esri.com
   acl apache rep_header Server ^Apache
   broken_vary_encoding allow apache
   maximum_object_size 4194240 KB
   maximum_object_size_in_memory 1024 KB
   access_log /var/log/squid/access.log squid
   refresh_pattern ^ftp:       1440    20%     10080
   refresh_pattern ^gopher:    1440    0%      1440
   refresh_pattern .           0       20%     4320
 
   acl all src 0.0.0.0/0.0.0.0
   acl esri src 10.0.0.0/255.0.0.0
   acl manager proto cache_object
   acl localhost src 127.0.0.1/255.255.255.255
   acl to_localhost dst 127.0.0.0/8
   acl SSL_ports port 443
   acl Safe_ports port 80  # http
   acl Safe_ports port 21  # ftp
   acl Safe_ports port 443 # https
   acl Safe_ports port 70  # gopher
   acl Safe_ports port 210 # wais
   acl Safe_ports port 1025-65535  # unregistered ports
   acl Safe_ports port 280 # http-mgmt
   acl Safe_ports port 488 # gss-http
   acl Safe_ports port 591 # filemaker
   acl Safe_ports port 777 # multiling http
   acl CONNECT method CONNECT
   # Some Youtube ACL's
   acl youtube dstdomain .youtube.com .googlevideo.com .video.google.com 
 .video.google.com.au
   acl youtubeip dst 74.125.15.0/24 64.15.0.0/16
   cache allow youtube
   cache allow youtubeip
   cache allow esri
 
   http_access allow manager localhost
   http_access deny manager
   http_access deny !Safe_ports
   http_access deny CONNECT !SSL_ports
   http_access allow localhost
   http_access allow esri
   http_access deny all
   http_reply_access allow all
   icp_access allow all
   coredump_dir /var/spool/squid
 
   # YouTube options.
   refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private
   quick_abort_min -1 KB
 
   # This will block other streaming media.  Maybe we don't want this, but 
 using
   # it for now.
   hierarchy_stoplist cgi-bin ?
   acl QUERY urlpath_regex cgi-bin \?
   cache deny QUERY
 
 I see logfile entries (and cached objects) that indicate my youtube
 videos are being saved to disk.  However they are never HIT, even when
 the same server is used.  I wonder if the refresh_pattern needs to be
 updated?  The GET requests for the video do not have a .flv in their
 filename.  What does refresh_pattern match against?  The request
 URL?  The resulting MIME type?
 
 That's it for now. :)  Thanks in advance.
 
 Ray

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] YouTube and other streaming media (caching)

2008-04-16 Thread Ray Van Dolson
On Thu, Apr 17, 2008 at 08:11:51AM +0800, Adrian Chadd wrote:
 The problem with caching Youtube (and other CDN content) is that
 the same content is found at lots of different URLs/hosts. This
 unfortunately means you'll end up caching multiple copies of the
 same content and (almost!) never see hits.
 
 Squid-2.7 -should- be quite stable. I'd suggest just running it from
 source. Hopefully Henrik will find some spare time to roll 2.6.STABLE19
 and 2.7.STABLE1 soon so 2.7 will appear in distributions.

Thanks Adrian.  FYI I got this to work with 2.7 (latest) based on the
instructions you provided earlier.  Here is my final config and the
perl script used to generate the storage URL:

  http_port 3128
  append_domain .esri.com
  acl apache rep_header Server ^Apache
  broken_vary_encoding allow apache
  maximum_object_size 4194240 KB
  maximum_object_size_in_memory 1024 KB
  access_log /usr/local/squid/var/logs/access.log squid

  # Some refresh patterns including YouTube -- although YouTube probably needs 
to
  # be adjusted.
  refresh_pattern ^ftp:       1440    20%     10080
  refresh_pattern ^gopher:    1440    0%      1440
  refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private
  refresh_pattern ^http://sjl-v[0-9]+\.sjl\.youtube\.com 10080 90% 99 ignore-no-cache override-expire ignore-private
  refresh_pattern get_video\?video_id 10080 90% 99 ignore-no-cache override-expire ignore-private
  refresh_pattern youtube\.com/get_video\? 10080 90% 99 ignore-no-cache override-expire ignore-private
  refresh_pattern .           0       20%     4320

  acl all src 0.0.0.0/0.0.0.0
  acl esri src 10.0.0.0/255.0.0.0
  acl manager proto cache_object
  acl localhost src 127.0.0.1/255.255.255.255
  acl to_localhost dst 127.0.0.0/8
  acl SSL_ports port 443
  acl Safe_ports port 80  # http
  acl Safe_ports port 21  # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70  # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl CONNECT method CONNECT
  # Some Youtube ACL's
  acl youtube dstdomain .youtube.com .googlevideo.com .video.google.com 
.video.google.com.au
  acl youtubeip dst 74.125.15.0/24 
  acl youtubeip dst 64.15.0.0/16
  cache allow youtube
  cache allow youtubeip
  cache allow esri

  # These are from http://wiki.squid-cache.org/Features/StoreUrlRewrite
  acl store_rewrite_list dstdomain mt.google.com mt0.google.com mt1.google.com 
mt2.google.com
  acl store_rewrite_list dstdomain mt3.google.com
  acl store_rewrite_list dstdomain kh.google.com kh0.google.com kh1.google.com 
kh2.google.com
  acl store_rewrite_list dstdomain kh3.google.com
  acl store_rewrite_list dstdomain kh.google.com.au kh0.google.com.au 
kh1.google.com.au
  acl store_rewrite_list dstdomain kh2.google.com.au kh3.google.com.au

  # This needs to be narrowed down quite a bit!
  acl store_rewrite_list dstdomain .youtube.com

  storeurl_access allow store_rewrite_list
  storeurl_access deny all

  storeurl_rewrite_program /usr/local/bin/store_url_rewrite

  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost
  http_access allow esri
  http_access deny all
  http_reply_access allow all
  icp_access allow all
  coredump_dir /usr/local/squid/var/cache

  # YouTube options.
  quick_abort_min -1 KB

  # This will block other streaming media.  Maybe we don't want this, but using
  # it for now.
  hierarchy_stoplist cgi-bin ?
  acl QUERY urlpath_regex cgi-bin \?
  cache deny QUERY

And here is the store_url_rewrite script.  I added some logging:

  #!/usr/bin/perl

  use IO::File;
  use IO::Socket::INET;
  use IO::Pipe;

  $| = 1;

  $fh = new IO::File("/tmp/debug.log", "a");

  $fh->print("Hello!\n");
  $fh->flush();

  while (<>) {
      chomp;
      #print LOG "Orig URL: " . $_ . "\n";
      $fh->print("Orig URL: " . $_ . "\n");
      if (m/kh(.*?)\.google\.com(.*?)\/(.*?) /) {
          print "http://keyhole-srv.google.com" . $2 . ".SQUIDINTERNAL/" . $3 . "\n";
          # print STDERR "KEYHOLE\n";
      } elsif (m/mt(.*?)\.google\.com(.*?)\/(.*?) /) {
          print "http://map-srv.google.com" . $2 . ".SQUIDINTERNAL/" . $3 . "\n";
          # print STDERR "MAPSRV\n";
      } elsif (m/^http:\/\/([A-Za-z]*?)-(.*?)\.(.*)\.youtube\.com\/get_video\?video_id=([^&]+).* /) {
          print "http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $4 . "\n";
          $fh->print("http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $4 . "\n");
          $fh->flush();
      } elsif
[squid-users] How do I DOS-proof my cache

2008-04-16 Thread David Young

Hey Squid users :)

We had a problem recently where a user with a misconfigured download  
accelerator was able to bring our proxy to its knees, downloading an  
80MB driver about 100 times in parallel. We temporarily solved the  
problem by stopping the download accelerator, but this makes me aware  
of how vulnerable our proxy is to heavy DOS-type attacks.


I've read a bit about the partial object caching expected in 3.1,  
range_offset, and half-closed clients. Can anybody share some ideas  
for making a squid cache more resilient to this kind of abuse / attack?


Thanks!
- David






Re: [squid-users] Marking Cached traffic..

2008-04-16 Thread Amos Jeffries
 HI There,

 I was wondering whether it's possible to mark cached traffic with a
 different
 TOS than uncached traffic. I need to come up with a way of passing cached
 traffic through our bandwidth manager without taxing the end user for it,
 basically giving them the full benefit of the proxy server.

Not at present. It should not be too hard to add, though.

For now squid just has a combination of tcp_outgoing_tos to set the TOS on
all of squid's outgoing traffic, and delay_pools to bandwidth-manage the
MISSes within squid. Cache HITs don't get passed through the delay pools,
so they have the effect you want.
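A minimal sketch of that combination (the LAN range and the rate are
assumptions):

  acl lan src 10.0.0.0/8
  # mark squid's outgoing (MISS) traffic so the bandwidth manager can see it
  tcp_outgoing_tos 0x20 lan
  # one aggregate bucket: MISS traffic for the LAN capped at ~512 KB/s
  delay_pools 1
  delay_class 1 1
  delay_parameters 1 524288/524288
  delay_access 1 allow lan
  delay_access 1 deny all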

Amos




Re: [squid-users] YouTube and other streaming media (caching)

2008-04-16 Thread Adrian Chadd
On Wed, Apr 16, 2008, Ray Van Dolson wrote:

 And here is the store_url_rewrite script.  I added some logging:

Cool!

 Could likely remove the last elsif block at this point as it's catching
 on the previous one now.  But this is working great!  Probably some
 tuning yet to be done.  Maybe someone could update the wiki with the
 new regexp syntax.

I'm keeping a slightly updated version of this stuff in my customer
site. That way I can (try!) to keep on top of changes in the rules and
notify customers when they need to update their scripts. The last thing
I want to see is 100 different versions of my youtube caching hackery
installed in places and causing trouble in the future.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] How do I DOS-proof my cache

2008-04-16 Thread Amos Jeffries
 Hey Squid users :)

 We had a problem recently where a user with a misconfigured download
 accelerator was able to bring our proxy to its knees, downloading an
 80MB driver about 100 times in parallel. We temporarily solved the
 problem by stopping the download accelerator, but this makes me aware
 of how vulnerable our proxy is to heavy DOS-type attacks.

 I've read a bit about the partial object caching expected in 3.1,
 range_offset, and half-closed clients. Can anybody share some ideas
 for making a squid cache more resilient to this kind of abuse / attack?


The 'maxconn' ACL is available in all squid versions to protect against
this type of client.
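A minimal sketch (the limit of 20 is an assumption; tune it to your
clients):

  # deny a client IP once it already has 20 connections open
  acl manyconn maxconn 20
  http_access deny manyconn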

The collapsed forwarding feature of 2.x designed to cope with wider DDoS
still needs someone with time to port it into 3.x.
http://wiki.squid-cache.org/Features/CollapsedForwarding

Amos



Re: [squid-users] How do I DOS-proof my cache

2008-04-16 Thread David Young

Hi Amos,

Unfortunately the maxconn ACL is not suitable in our circumstance,
since we service several clients who are behind NAT'd IPs, so there
may be as many as 50 real browsers behind a single IP. The collapsed
forwarding option looks interesting; I'll keep an eye on that, thanks :)


- David




On 17/04/2008, at 2:39 PM, Amos Jeffries wrote:


The 'maxconn' ACL is available in all squid to protect against this  
type

of client.

The collapsed forwarding feature of 2.x designed to cope with wider  
DDoS

still needs someone with time to port it into 3.x.
http://wiki.squid-cache.org/Features/CollapsedForwarding

Amos




Re: [squid-users] Can I find a performance document for Squid?

2008-04-16 Thread Alex Rousskov
On Wed, 2008-04-16 at 13:10 -0700, JXu wrote:

 Is there a performance document for Squid? For example, what is Squid's
 throughput?

Squid performance varies a lot depending on your hardware and deployment
scenario. Some Squids can do thousands of transactions per second
(hundreds of Mbits/sec). Others do hundreds of transactions per second
(tens of Mbits/sec). Yet others crawl at tens of transactions per second
due to slow disk caching or other reasons. Lower numbers are more
typical.

If you can provide specifics, somebody on the list may be able to post
numbers from a similar environment.

You can also benchmark a Squid prototype to get an estimate of its
performance on given hardware, with a given configuration.

Alex.




Re: [squid-users] How do I DOS-proof my cache

2008-04-16 Thread Amos Jeffries

David Young wrote:

Hi Amos,

Unfortunately the maxconn ACL is not suitable in our circumstance, since 
we service several clients who are behind NAT'd IPs, so there may be as 
many as 50 real browsers behind a single IP. The collapsed forwarding 
option looks interesting; I'll keep an eye on that, thanks :)


- David



Right. Well, with IPv4 either you or the customer using NAT is now 
screwed. You can protect your business by limiting their IP or you can 
remain at the mercy of their future expansions.

The middle ground on this is to use a combination of ACLs to lift the 
maxconn cap for NAT clients higher than for other clients. Or to roll 
out IPv6 web access with Squid-3.1 as I have.
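A sketch of that middle ground (the gateway addresses and both limits
are assumptions):

  acl nat_gw src 192.0.2.1 192.0.2.2     # the known NAT gateway IPs
  acl conn20 maxconn 20
  acl conn200 maxconn 200
  # ordinary clients capped at 20 connections, NAT gateways at 200
  http_access deny !nat_gw conn20
  http_access deny nat_gw conn200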


FYI: The IPv6 experience here has not been bad, the only major hurdle I 
have encountered by going dual-stack is general-traffic transit to the 
nearest v6-native network.


Amos





On 17/04/2008, at 2:39 PM, Amos Jeffries wrote:


The 'maxconn' ACL is available in all squid to protect against this type
of client.

The collapsed forwarding feature of 2.x designed to cope with wider DDoS
still needs someone with time to port it into 3.x.
http://wiki.squid-cache.org/Features/CollapsedForwarding

Amos





--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4