[squid-users] Re: LiveCD type install for transparent caching of YouTube, etc?

2008-03-16 Thread Kinkie
On Sun, Mar 16, 2008 at 3:40 AM, Paul Bryson <[EMAIL PROTECTED]> wrote:
> Kinkie wrote:
>  > A quick googling brought me to a VMWare virtual appliance at
>  > http://www.vmware.com/appliances/directory/57.
>  >
>  > I've recorded your suggestion on http://wiki.squid-cache.org/WishList.
>  >
>  > Are you willing to help the squid project by starting this activity?
>
>  Your Google Fu is good, I didn't see that.  That is a pretty cool idea,
>  I just wish it were suitable for a stand alone box.  At least it gives
>  the opportunity to test out the Squid feature set.
>
>  Unfortunately I'm not a developer by any stretch of the imagination, so
>  there isn't much I can do in the way of actually "developing".  However,
>  if you like I could flesh out various ideas on a Features page.

This is not a code-writing activity; it's rather about having
experience in one specific usage scenario and being willing to
share it with others.

ANY help is welcome and actively sought. The wiki Features pages
(http://wiki.squid-cache.org/Features) is an excellent starting point,
but it's by no means comprehensive. Any itch needing a scratch can
actually end up as a development project, or as documentation in the
KnowledgeBase or ConfigurationExample.


-- 
 /kinkie


Re: Re: [squid-users] Cache url's with '?' question marks

2008-03-16 Thread Adrian Chadd
Erk, HTML email, that's not even done right! *smacks your client* :)

Uhm, start by looking at the header logging feature (log_mime_hdrs in
squid.conf) to log request and reply headers.

I'd then use those client requests to create test cases to feed to Squid.




Adrian

On Fri, Mar 14, 2008, Saul Waizer wrote:
> 
> Adrian,
> QUERY ACL has been removed for over a week, can you give me some pointers 
> as far as looking at the MISSes
> Thanks!
> 
> On Fri, Adrian Chadd <[EMAIL PROTECTED]> sent:
> 
> Caching dynamic content doesn't work "like that".
> 
> Firstly, removing the QUERY ACL gives you the ability to cache dynamic
> content that returns an explicit lifetime.
> 
> You need to look at all of those MISSes and see why Squid isn't caching
> them. It's hard to tell from where I'm sitting.
> 
> Adrian
> 
> On Fri, Mar 14, 2008, Saul Waizer wrote:
> 
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA1
> > 
> > Amos,
> > 
> > I've implemented the example you sent on Dynamic Content but so far I
> > regret to say that no improvement has been made on the hit ratio.
> > 
> > I added the following to my squid.conf:
> > 
> > refresh_pattern (/cgi-bin/|\?) 0 0% 0
> > refresh_pattern . 0 20% 4320
> > acl mydomain dstdomain .mydomain.com
> > cache allow mydomain
> > 
> > my stats look something like this:
> > 
> > 67.5103% TCP_MISS/200
> > 6.07349% TCP_HIT/200
> > 4.55681% TCP_MEM_HIT/200
> > 1.59761% TCP_IMS_HIT/304
> > 
> > Any help is appreciated.
> > 
> > Thanks
> > 
> > Amos Jeffries wrote:
> > > Adrian Chadd wrote:
> > >> G'day,
> > >>
> > >> Just remove the QUERY ACL and the cache ACL line using "QUERY" in it.
> > >> Then turn on header logging (log_mime_hdrs on) and see if the replies
> > >> to the dynamically generated content are actually giving caching info.
> > >>
> > >> Adrian
> > > 
> > > http://wiki.squid-cache.org/ConfigExamples/DynamicContent
> > > 
> > > Amos
> > > 
> > >>
> > >> On Fri, Feb 29, 2008, Saul Waizer wrote:
> > > Hello List,
> > > 
> > > I am having problems trying to cache images/content that comes from a
> > > URL containing a question mark ('?').
> > > 
> > > Background:
> > > I am running squid Version 2.6.STABLE17 on FreeBSD 6.2 as a reverse
> > > proxy to accelerate content hosted in America served in Europe.
> > > 
> > > The content comes from an application that uses TOMCAT, so a URL
> > > requesting dynamic content would look similar to this:
> > > 
> > > http://domain.com/storage/storage?fileName=/.domain.com-1/usr/14348/image/thumbnail/th_8837728e67eb9cce6fa074df7619cd0d193_1_.jpg
> > > 
> > > Such a request always results in a MISS, with a log entry similar
> > > to this:
> > > 
> > > TCP_MISS/200 8728 GET http://domain.com/storage/storage? -
> > > FIRST_UP_PARENT/server_1 image/jpg
> > > 
> > > I've added this to my config: acl QUERY urlpath_regex cgi-bin as you can
> > > see below, but it makes no difference, and I tried adding this:
> > > acl QUERY urlpath_regex cgi-bin \? and for some reason ALL requests
> > > result in a MISS.
> > > 
> > > Any help is greatly appreciated.
> > > 
> > > My squid config looks like this: (obviously real IPs were changed)
> > > 
> > > # STANDARD ACL'S ###
> > > acl all src 0.0.0.0/0.0.0.0
> > > acl manager proto cache_object
> > > acl localhost src 127.0.0.1/255.255.255.255
> > > acl to_localhost dst 127.0.0.0/8
> > > # REVERSE CONFIG FOR SITE #
> > > http_port 80 accel vhost
> > > cache_peer 1.1.1.1 parent 80 0 no-query originserver name=server_1
> > > acl sites_server_1 dstdomain domain.com
> > > # REVERSE ACL'S FOR OUR DOMAINS ##
> > > acl ourdomain0 dstdomain www.domain.com
> > > acl ourdomain1 dstdomain domain.com
> > > http_access allow ourdomain0
> > > http_access allow ourdomain1
> > > http_access deny all
> > > icp_access allow all
> > > # HEADER CONTROL ###
> > > 

Re: [squid-users] [help] setting up firewall policy for transparent (single-homed host) proxy

2008-03-16 Thread Rachmat Hidayat Al Anshar

Hi Indunil :)

First of all, thanks a zillion for your help before.
I implemented your suggested rules and they are
working; my squid box has become transparent ;-)

But there is another problem bothering me. Those
rules work for HTTP traffic; however, I also have to
redirect FTP traffic. Could you help me solve this?
Are there any additional rules I have to add for
this FTP traffic redirection, or what?

I don't really understand these lines of rules:
> iptables -t mangle -A PREROUTING -j MARK --set-mark 3 -p tcp --dport 80
> ip rule add fwmark 3 table 2

Could you explain the iptables MARK jump target, the
--set-mark flag, and how they interconnect with
ip rule, fwmark, and the routing table?

Thanks in advance
Rachmat Hidayat Al Anshar



--- Indunil Jayasooriya <[EMAIL PROTECTED]> wrote:

> > All iptables rules here are implemented on the firewall-box.
> > I have also checked the access.log of squid, guys,
> > but there is nothing logged. :'(
> > It looks like the firewall-box didn't manage to redirect all web
> > services to the squid-box.
> 
> To redirect all web traffic (i.e. port 80) to the squid
> server, your clients' gateway should be the IP of the
> firewall. Pls check it.
> 
> And also, check the DNS server entries in the clients'
> PCs. If they have been set,
> 
> then, when clients browse the internet (i.e. accessing
> destination port 80), it should be redirected to the squid box.
> 
> Now, your clients' gateway is the ip address of the
> firewall.
> So, on your firewall box
> 
> add below lines.
> 
> 
> iptables -t mangle -A PREROUTING -j ACCEPT -p tcp --dport 80 -s squid-box
> iptables -t mangle -A PREROUTING -j MARK --set-mark 3 -p tcp --dport 80
> ip rule add fwmark 3 table 2
> ip route add default via squid-box dev eth1 table 2
> iptables -t nat -A POSTROUTING -o eth0 -s squid-box -j SNAT --to-source 1.2.3.4
> 
> 1.2.3.4 is the external IP of the firewall (i.e. the IP that
> connects to your ISP router).
> 
> Next step is, pls log in to your squid box.
> 
> On squid box.
> 
> add below rule
> 
> iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
> 
> How can I solve this out...
> 
> This is something easy. Anyway, pls try the above rules again. If
> there is no luck, pls draw your network diagram again. You have drawn
> one before, but it is not so clear. While you draw your network
> diagram, pls add your local IPs (bogus IPs). If you have any external
> IPs (internet IPs), pls write them in 1.2.3.4 format. Then it would
> be easier for me when I write the rules.
> 
> 
> -- 
> Thank you
> Indunil Jayasooriya
> 



  





[squid-users] getting getpwnam basic authentication working

2008-03-16 Thread p cooper
I've volunteered to set up one machine with 4 logins plus content
filtering/time-based ACLs for the 2 children, to replace (and improve
on) my sister's dying WinXP machine.
I want to use basic authentication (less work for me, I think), and
none of them are computer literate enough to mess around at all (well
- not yet).

OS = Gentoo Linux

I've compiled Squid Version 2.6.STABLE18 with configure options:
'--enable-basic-auth-helpers=getpwnam'

bits of my squid conf

hepworth ~ # grep  ^[A-Za-z] /usr/local/squid/etc/squid.conf
auth_param basic program /usr/local/squid/libexec/getpwname_auth /etc/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl passwd proxy_auth
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow  passwd
http_access deny all
icp_access allow all
http_port 3128
logformat squid  %tl  %Ss/%03Hs  %rm %ru %ul   %mt
access_log /var/log/squid/access.log squid

But the proxy won't let me through when I enter the username and Unix
login password.

hepworth ~ # tail -n 3 /var/log/squid/access.log
 16/Mar/2008:12:08:44 +  TCP_DENIED/407  GET
http://en-us.start2.mozilla.com/firefox?client=firefox-a&rls=org.mozilla:en-US:official
andrew   text/html
 16/Mar/2008:12:08:57 +  TCP_DENIED/407  GET
http://en-us.start2.mozilla.com/firefox?client=firefox-a&rls=org.mozilla:en-US:official
andrew   text/html
 16/Mar/2008:12:09:00 +  TCP_DENIED/407  GET
http://en-us.start2.mozilla.com/favicon.ico -   text/html
hepworth ~ #







Re: [squid-users] Vary object loop

2008-03-16 Thread Adrian Chadd
On Fri, Mar 14, 2008, Alex Rousskov wrote:

> > I think it actually is a bug in the Vary handling in Squid-3.
> > The condition:
> > 
> > if (!has_vary || !entry->mem_obj->vary_headers) {
> > if (vary) {
> > /* Oops... something odd is going on here.. */
> > 
> > .. needs to be looked at.
> 
> But it is not the condition getting hit according to Aurimas' log, is
> it?

There are two log messages with the same text, which is confusing
things.

Aurimas, in varyEvaluateMatch() in client_side.cc, change the first debugs()
statement here:


    if (!has_vary || !entry->mem_obj->vary_headers) {
        if (vary) {
            /* Oops... something odd is going on here.. */
            debugs(33, 1, "varyEvaluateMatch: Oops. Not a Vary object on second attempt, '" <<
                   entry->mem_obj->url << "' '" << vary << "'");
            safe_free(request->vary_headers);
            return VARY_CANCEL;
        }

.. replace that "second attempt" with "first attempt", recompile, and see what's
logged.




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] Pound or Squid

2008-03-16 Thread Dwyer, Simon
Hey everyone,

I am currently running Pound in the DMZ as the reverse proxy but squid as a
normal proxy.  They are running on the same box and all that.  

I just wanted to know if anyone here had any experience with both and could
recommend moving my Pound setup to Squid to simplify things?

Thanks in advance,

Simon Dwyer


Re: [squid-users] HTML NTLM and 2.6 not behaving

2008-03-16 Thread Adrian Chadd
G'day,

I'd start by grabbing tcpdump/ethereal/wireshark and sniffing the traffic
on the Squid-2.5 and Squid-2.6 servers. Remember to snapshot the entire
packet with tcpdump (-s 1518) if you want to use tcpdump to capture
a pcap file that you can then read in ethereal/wireshark on another box.

Enabling the header logging in Squid may help too (log_mime_hdrs on) but
it's not always that helpful for debugging authentication issues.

Then compare the request and reply headers from both Squid-2.5 and Squid-2.6
to see what sort of differences you see. If there aren't any differences
(ie, the origin server gets -exactly- the same request and returns -exactly-
the same reply) then there's something stranger going on.

Take all of this info, whack it in a bugzilla report 
(http://bugs.squid-cache.org/)
and wait for a volunteer to help. :0



Adrian

On Fri, Mar 14, 2008, NOCTECH noctech wrote:
> Having a rather difficult to fathom problem with a user logging into
> some external Outlook WebAccess webmail server.  I've read a bunch of
> posts about the problems with NTLM and Squid <= 2.5, but this one is
> stumping me.
> 
> A little bit about our setup -- using multiple squid and dg boxes and a
> WCCP router to transparently cache/filter the web.
> 
> Most of our squid caches are 2.6, but we still have two remaining that
> are version 2.5 that we're phasing out.  The odd thing is, the login
> seems to work correctly with squid 2.5 and incorrectly with 2.6, which
> is exactly backwards of what I expect.  When I proxy directly to one of
> the squid 2.6 boxes, specifically:
> 
> Squid Cache: Version 2.6.STABLE18
> configure options:  '--prefix=/usr' '--sysconfdir=/etc/squid'
> '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--localstatedir=/var'
> '--libexecdir=/usr/sbin' '--datadir=/usr/share/squid'
> '--mandir=/usr/share/man' '--with-maxfd=4096' '--disable-useragent-log'
> '--enable-ssl' '--with-openssl' '--disable-ident-lookups'
> '--enable-poll' '--enable-truncate' '--enable-gnuregex'
> '--enable-async-io' '--with-pthreads' '--with-aio' '--with-dl'
> '--enable-storeio=aufs,diskd,ufs,coss,null'
> '--enable-removal-policies=heap,lru' '--enable-kill-parent-hack'
> '--enable-forw-via-db' '--enable-linux-netfilter' '--enable-underscores'
> '--enable-x-accelerator-vary'
> 
> I get a login box (in firefox) that reads:
> Enter username and password for "" at http://mail.example.com
> 
> When I put in the credentials and click OK, the box just keeps coming
> back.  When I click cancel, I get a different login box:
> Enter username and password for "mail.example.com" at
> http://mail.example.com
> 
> and the login works.
> 
> If I proxy directly to one of the 2.5 boxes:
> Squid Cache: Version 2.5.STABLE4
> configure options:  --disable-useragent-log --enable-ssl --with-openssl
> --disable-ident-lookups --enable-poll --enable-truncate
> --enable-gnuregex --enable-async-io --with-pthreads --with-aio --with-dl
> --enable-storeio=aufs,diskd,ufs,coss,null
> --enable-removal-policies=heap,lru --enable-kill-parent-hack
> --enable-forw-via-db --enable-linux-netfilter --enable-underscores
> --enable-x-accelerator-vary
> 
> It goes directly to the second login box.
> 
> Any thoughts?  Any information I can provide to be helpful?
> 
> Sean
> 
> 
> 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Pound or Squid

2008-03-16 Thread Amos Jeffries

Dwyer, Simon wrote:
> Hey everyone,
> 
> I am currently running Pound in the DMZ as the reverse proxy but squid as a
> normal proxy.  They are running on the same box and all that.
> 
> I just wanted to know if anyone here had any experience with both and could
> recommend moving my Pound setup to Squid to simplify things?
> 
> Thanks in advance,
> 
> Simon Dwyer


From 2.6, squid can easily handle multi-mode configurations and do both 
jobs.
All you need to do is add more http_port and cache_peer lines to enable 
the reverse-proxy part in squid. That should let you drop Pound entirely 
and free up those CPU cycles spent running it.


From my own experience here I would only add one recommendation: put the 
reverse-proxy bits of squid above the standard forward-proxy restrictions.


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] LiveCD type install for transparent caching of YouTube, etc?

2008-03-16 Thread Henrik Nordstrom
On Fri, 2008-03-07 at 14:07 -0600, Paul Bryson wrote:
> I have been looking for some sort of easy to install Squid transparent 
> caching proxy.  Something like KnoppMyth (http://mysettopbox.tv/) but 
> just for Squid.  Boot to a CD that has you partition/format your 
> harddrives, installs the OS plus Squid with sane default settings.  If 
> there is a web interface, all the better.

There is the CacheMARA product from MARA Systems. But it needs a bit of
updating to work with current hardware (and current Squids), and
additionally the free version isn't quite ready yet (my fault). Also
this is perhaps a bit too stripped down for some people's needs, with the
OS image only a few MB in size and very locked down.

Adrian is also working on an appliance-style Squid installation. I'll let
him describe what he is doing if he wants to.

> And as we want to be able to cache YouTube, Google Maps, Google Earth, 
> Windows Updates, etc, we need the rewrite rules in the 2.7 branch of 
> Squid.  We were thinking of using a good sized hard disk for cache 
> storage, to reduce the internet bandwidth hit as much as possible.

Then you will need to roll your own, as you won't find a ready
distribution with all this just yet..

> Is there any such type of beast, or a quick path to such a thing?

Start from something resembling what you want in terms of how the CD
behaves and with reasonable tools for rolling your own, then add Squid
and remove what else you don't want to have.

But as long as you are likely to tweak and upgrade the system I think
you will be happier with a more normal OS install. Adding Squid and
transparent interception isn't very hard.

The weakest point is perhaps the management GUI. There aren't very many to
choose from. Webmin is the most complete, but even that is lacking many
features and is also lagging quite far behind.

Regards
Henrik



RE: [squid-users] Reverse proxy IP not passing through

2008-03-16 Thread Henrik Nordstrom
On Fri, 2008-03-14 at 16:28 -0400, saul waizer wrote:

> Recompile squid with this option if you haven't done it so far "
> --enable-follow-x-forwarded-for"
> 
> Add these lines to your squid.conf:
> 
> forwarded_for on
> follow_x_forwarded_for allow all

No, this is a quite different thing. This makes Squid pick up the client
IP from those headers added by Squid, for use in Squid's access
controls, logging etc.

The X-Forwarded-For header is always added by Squid unless you
explicitly disable it. To make use of the header you need to configure
your web application to look for the header instead of the source IP
(HTTP_X_FORWARDED_FOR instead of REMOTE_ADDR in terms of CGI, but
beware of significant syntax differences in the data).



With Squid-2.6 & later it IS possible to install Squid in a manner where
the original client IP is fully transparent. This requires that Squid is
running on the router/gateway between the clients and your web server
(or a complex WCCP or policy-routing setup making the routers divert all
such traffic via the proxy), and that Squid is running on a Linux
server patched with TPROXY support. 

Regards
Henrik



[squid-users] Squid Future (was Re: [squid-users] Squid-2, Squid-3, roadmap)

2008-03-16 Thread Adrian Chadd
Just to summarise the discussion, both public and private.

* Squid-3 is receiving the bulk of the active core Squid developers' focus;
* Squid-2 won't be actively developed at the moment by anyone outside
  of paid commercial work;
* I've been asked (and agreed at the moment) to not push any big changes to
  Squid-2.

If your organisation relies on Squid-2 and you haven't any plans to migrate
to Squid-3, then there are a few options.

* Discuss migrating to Squid-3 with the Squid-3 developers, see what can be 
done.
* Discuss commercial Squid-2 support/development with someone (eg Xenion/me).
* Migrate away from Squid to something else.

Obviously all of us would prefer that users wouldn't migrate away from Squid in
general, so if the migration to Squid-3 isn't on your TODO list for whatever
reason then it's in your best interests -right now- to discuss this out in the
open.

If you don't think that Squid as a project is heading in a direction that is
useful for you, then it's in your best interests -right now- to discuss this
with the Squid development team rather than ignoring the issue or discussing it
privately. I'd prefer open discussions which everyone can view and contribute
towards.

If there's enough interest in continuing the development of Squid-2 along
my Roadmap or any other plan then I'm interested in discussing this with you.
If the interest is enough to warrant beginning larger changes to Squid-2 to
support features such as IPv6, threading and improved general performance
then I may reconsider my agreement with the Squid-3 developers (and deal with
whatever pain that entails.)

At the end of the day, I'd rather see something that an increasing number of 
people
on the Internet will use and - I won't lie here - whatever creates a self 
sustaining
project, both from community and financial perspectives.





Adrian



Re: [squid-users] Squid Future (was Re: [squid-users] Squid-2, Squid-3, roadmap)

2008-03-16 Thread Robert Collins
On Mon, 2008-03-17 at 10:18 +0900, Adrian Chadd wrote:

> At the end of the day, I'd rather see something that an increasing number of 
> people
> on the Internet will use and - I won't lie here - whatever creates a self 
> sustaining
> project, both from community and financial perspectives.

I agree with this. FWIW I see squid 2 and 3 as very similar to apache
1.x and 2.x - apache 2 took a _long_ time to be considered an 'upgrade'
by _all_ users, and squid3 has been in the same boat.

I don't think that the amount of work to make squid3 better for all
users is insurmountable by the community, and I think that continuing
the polish on squid3 is the best way forward. YMMV of course :).

-Rob
-- 
GPG key available at: .




RE: [squid-users] Squid Future (was Re: [squid-users] Squid-2, Squid-3, roadmap)

2008-03-16 Thread Nick Duda
The only reason I haven't upgraded beyond the current stable 2.6 code is that 
some third-party companies (like Secure Computing, which we use as a Squid plugin) 
only support certain versions of squid. I haven't even played with 3.0 because 
of this. I think squid hands down is an amazing proxy software and I will 
continue to keep using it going forward. We use our proxies as content 
filtering devices as well... so we need the support of both.

Your comments about apache are dead on...

- Nick

-Original Message-
From: Robert Collins [mailto:[EMAIL PROTECTED]
Sent: Sunday, March 16, 2008 9:25 PM
To: Adrian Chadd
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Future (was Re: [squid-users] Squid-2, 
Squid-3, roadmap)

On Mon, 2008-03-17 at 10:18 +0900, Adrian Chadd wrote:

> At the end of the day, I'd rather see something that an increasing
> number of people on the Internet will use and - I won't lie here -
> whatever creates a self sustaining project, both from community and financial 
> perspectives.

I agree with this. FWIW I see squid 2 and 3 as very similar to apache 1.x and 
2.x - apache 2 took a _long_ time to be considered an 'upgrade'
by _all_ users, and squid3 has been in the same boat.

I don't think that the amount of work to make squid3 better for all users is 
insurmountable by the community, and I think that continuing the polish on 
squid3 is the best way forward. YMMV of course :).

-Rob
--
GPG key available at: .


RE: [squid-users] Squid Future (was Re: [squid-users] Squid-2,Squid-3, roadmap)

2008-03-16 Thread Adam Carter
My 2c WRT 2 vs 3 etc.:

- We currently run commercial proxies and are looking to replace them with 
squid boxes, however recent list discussion is making me a little nervous. I 
would have used 2.6 for performance (we need to support 10K users) and for
- Secure Computing's Smartfilter, which currently runs on 2.6-17. Do the Squid3 
devs have any contact with Secure Computing about Smartfilter coming to v3? Has 
there been any contact about v2 in the past?

Also, while I think MP would be nice, it's easy enough to load balance across 
multiple boxes with proxy.pac or WCCP or a layer 7 switch, so it's not a killer 
feature IMO. I'd be much more interested in having 3 brought up to parity with 2 
rather than working on extra features.

Rgs,
Adam


Re: [squid-users] Caching files from Amazon S3

2008-03-16 Thread Robert Collins
On Sun, 2008-03-16 at 14:04 +0900, Adrian Chadd wrote:
> Annoyingly, why the hell is the request from the client a range request?
> Squid can't easily cache those unless it somehow fetches the entire object
> first.

FWIW, -3 has about 60% of the work needed to cache fragments done. What's
missing is a store that can handle them. 

Rob
-- 
GPG key available at: .




Re: [squid-users] [help] setting up firewall policy for transparent (single-homed host) proxy

2008-03-16 Thread Indunil Jayasooriya
On Sun, Mar 16, 2008 at 4:18 PM, Rachmat Hidayat Al Anshar
<[EMAIL PROTECTED]> wrote:
>
>  Hi Indunil :)
>
>  First of all, thanks a zillion for your help before.
>  I implemented your suggested rules and they are
>  working; my squid box has become transparent ;-)
>
>  But there is another problem bothering me. Those
>  rules work for HTTP traffic; however, I also have to
>  redirect FTP traffic. Could you help me solve this?
>  Are there any additional rules I have to add for
>  this FTP traffic redirection, or what?

Squid is NOT an FTP proxy. If you use OpenBSD's PF, it has rules for an
FTP proxy, since ftp-proxy runs on localhost.
But on Linux I do not know of such a thing. So you will have to add the
iptables rules below to access FTP sites from the clients.

I assume your client network is 192.168.0.0/24 and the external IP is
1.2.3.4 (the IP that connects to the ISP router).


/sbin/modprobe -a ip_conntrack_ftp ip_nat_ftp

iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

iptables -A FORWARD -p tcp -s 192.168.0.0/24 --dport 21 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j SNAT
--to-source 1.2.3.4


>
>  I don't really understand these lines of rules:
>
> > iptables -t mangle -A PREROUTING -j MARK --set-mark 3 -p tcp --dport 80

The above rule marks packets destined to port 80 with the value 3,
before routing takes place. That is why it says PREROUTING.

>  > ip rule add fwmark 3 table 2

Then, those packets marked with the value 3 are looked up in routing
table 2 instead of the main routing table.

That's it.


-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] Vary object loop

2008-03-16 Thread Alex Rousskov
On Mon, 2008-03-17 at 06:25 +0900, Adrian Chadd wrote:
> On Fri, Mar 14, 2008, Alex Rousskov wrote:
> 
> > > I think it actually is a bug in the Vary handling in Squid-3.
> > > The condition:
> > > 
> > > if (!has_vary || !entry->mem_obj->vary_headers) {
> > > if (vary) {
> > > /* Oops... something odd is going on here.. */
> > > 
> > > .. needs to be looked at.
> > 
> > But it is not the condition getting hit according to Aurimas' log, is
> > it?
> 
> There are two log messages with the same text, which is confusing
> things.

The messages are indeed poorly written, but they are different:

varyEvaluateMatch: Oops. Not a Vary object on second attempt,
varyEvaluateMatch: Oops. Not a Vary match on second attempt,

That is why I said that it was the other, arguably less suspicious,
condition being hit here.

Thanks,

Alex.


On Fri, 2008-03-14 at 14:58 +, Aurimas Mikalauskas wrote:
> The next question is about Vary header. I get absolutely amazing
> amount of these errors in cache.log:
> 
> 2008/03/14 10:46:54| clientProcessHit: Vary object loop!
> 2008/03/14 10:46:54| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt, 'http://some.url' 'accept-encoding'
> 2008/03/14 10:46:55| clientProcessHit: Vary object loop!
> 2008/03/14 10:46:55| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt, 'http://some.other.url'
> 'accept-encoding="gzip,%20deflate"'
> 
> A rough number:
> # grep -c 'Vary object loop' cache.log && wc -l squid_access.log
> 244816
> 1842602 squid_access.log
> 
> Any idea what kind of loop that is and how to avoid it?




Re: [squid-users] Squid Future (was Re: [squid-users] Squid-2, Squid-3, roadmap)

2008-03-16 Thread Adrian Chadd
On Sun, Mar 16, 2008, Nick Duda wrote:
> The only reason I haven't upgraded beyond the current stable 2.6 code is that 
> some third-party companies (like Secure Computing, which we use as a Squid 
> plugin) only support certain versions of squid. I haven't even played with 
> 3.0 because of this. I think squid hands down is an amazing proxy software 
> and I will continue to keep using it going forward. We use our proxies as 
> content filtering devices as well... so we need the support of both.

There's no dialogue as far as I'm aware between the "Squid developers" as a 
whole and
Secure Computing. I haven't any idea about specific developers, but I haven't 
noticed
anything about Secure Computing on the squid-dev list.

I'm sure we'd be open to it as a whole.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] Re: LiveCD type install for transparent caching of YouTube, etc?

2008-03-16 Thread Paul Bryson

Kinkie wrote:

> This is not a code-writing activity; it's rather about having
> experience in one specific usage scenario and being willing to
> share it with others.


Then I will add what I can to the wiki when I get a chance.  That really 
is the limit of my abilities.



Atamido



[squid-users] Re: LiveCD type install for transparent caching of YouTube, etc?

2008-03-16 Thread Paul Bryson

Henrik Nordstrom wrote:

> There is the CacheMARA product from MARA Systems. But it needs a bit of
> updating to work with current hardware (and current Squids), and
> additionally the free version isn't quite ready yet (my fault).


Looks like an interesting product.  I've become a big fan of these 
appliance devices built on open source software.  We are currently using 
a Barracuda Web Filter, and it works well, except that we don't have the 
fine grained control over the hows and whats of caching.  (Really it's 
an on/off option.)



> Also this is perhaps a bit too stripped down for some people's needs, with
> the OS image only a few MB in size and very locked down.


That is in many people's eyes typically a bonus.  I wouldn't personally 
want X running on a box like that.  The OS, (lots of hardware support), 
Squid, Apache + a simple configuration webpage, and possibly something 
like Dan's Guardian.  Footprint of the image shouldn't be too big.



> Adrian is also working on an appliance-style Squid installation. I'll let
> him describe what he is doing if he wants to.


I spoke with Adrian briefly a while ago, and he certainly does have some 
interesting bits.  Though, as I mentioned, lots of hardware support is 
probably the key here.


> > And as we want to be able to cache YouTube, Google Maps, Google Earth,
> > Windows Updates, etc, we need the rewrite rules in the 2.7 branch of
> > Squid.  We were thinking of using a good sized hard disk for cache
> > storage, to reduce the internet bandwidth hit as much as possible.


> Then you will need to roll your own, as you won't find a ready
> distribution with all this just yet..


I just mentioned those as things that would require the 2.7 branch.  We 
could write our own, and probably would for a number of sites.  For the 
major sites, we aren't averse to the idea of paying someone to keep our 
rules updated.



> Start from something resembling what you want in terms of how the CD
> behaves and with reasonable tools for rolling your own, then add Squid
> and remove what else you don't want to have.


Unfortunately I am a Windows admin professionally, so my Linux is pretty 
weak.  I spent several years using KnoppMyth (MythTV) and Trixbox 
(Asterisk), but the amount of stuff I really needed the command line for 
is pretty small.  And a lot of stuff I've had to ask my brother about 
(who is a Linux admin professionally).  I do believe in using the right 
tools for the job, and Microsoft's caching solution just seems terrible. 
 Squid's on the other hand is pretty much the standard in the industry 
that everything else is compared against.


So, I'm not going to be able to roll my own solution by myself, or 
realistically even 10% of the work myself.



> The weakest point is perhaps the management GUI. There aren't very many to
> choose from. Webmin is the most complete, but even that is lacking many
> features and is also lagging quite far behind.


Yeah, management is almost always the weakest point in open source 
projects.  Especially when it's a package that isn't shipped as an 
entire system.  Really don't know what to do about that.  I know some 
PHP, but just enough to alter/fix things, not write them from scratch.



Atamido



Re: [squid-users] Caching files from Amazon S3

2008-03-16 Thread Abidoon Nadeem

Kindly explain in more detail. I do not understand your response.
- Original Message - 
From: "Robert Collins" <[EMAIL PROTECTED]>

To: "Adrian Chadd" <[EMAIL PROTECTED]>
Cc: "Abidoon Nadeem" <[EMAIL PROTECTED]>; 
Sent: Monday, March 17, 2008 7:09 AM
Subject: Re: [squid-users] Caching files from Amazon S3




Re: [squid-users] Re: LiveCD type install for transparent caching of YouTube, etc?

2008-03-16 Thread Daniel Rose

Paul Bryson wrote:

> So, I'm not going to be able to roll my own solution by myself, or
> realistically even 10% of the work myself.


You might surprise yourself.  I moved from Windows to Linux, and while much of 
it is hard, especially on the desktop, squid is not hard.

I think the most important concern for you trying out squid on Linux would be 
getting the right distribution.  Ubuntu has 'server' support, Red Hat sells 
really good support contracts, and CentOS has a strong community and is 
basically the same software as Red Hat.

You should be able to select squid as an optional package during the 
installation, so it should work right out of the box.




--
Daniel Rose
National Library of Australia