Re: [squid-users] squid + dansguardian + auth

2010-02-15 Thread Bruno Santos

Hi !

Yes, I was careful to check the SPEC file to see if there was such an option,
and it is present:
--enable-follow-x-forwarded-for

The problem, I guess, is that when dansguardian forwards the IP to squid, instead of
giving the original IP, it goes with the local IP.
But, with other options enabled, I get an HTML response - 400 Bad Request.

[squid-users] Recent Polygraph results for Squid?

2010-02-15 Thread Oborn, Keith
 

Hi all -

We used to be a heavy user of commercial forward proxy/cache products,
but with the demise of the NetApp line we stopped that activity.

However, I'm now looking for an initial steer on a good large-scale
forward proxy setup (multi-gigabit rates, hundreds of thousands of
users). Unfortunately, our Polygraph rig was also scrapped some time ago
(boxes died), and it will take time and resources to build a new one.

Sadly, the isp-caching and web-polygraph lists seem to be dead
nowadays.

I'd be very interested in any numbers at all - most particularly on
recent Sun kit, as it looks as if ZFS is a good bet for Squid. I must
admit I always hankered after testing a proxy/cache on a Thumper (X4540)
because of the huge spindle density.

At present, any numbers that get me within like a factor of two of
actual performance on any modern X86 server hardware would be great (and
perhaps any idea if using Sun T-series helps - does Squid like lots of
threads?) - that will enable us to decide whether it is worth setting up
to run our own detailed tests. I will, of course, post any test results
we produce if we go down that road.






Re: [squid-users] How can I cache most content

2010-02-15 Thread Marcus Kool

acl blockanalysis01 dstdomain .scorecardresearch.com .google-analytics.com
acl blockads01 dstdomain .rad.msn.com ads1.msn.com ads2.msn.com ads3.msn.com ads4.msn.com
acl blockads02 dstdomain .adserver.yahoo.com pagead2.googlesyndication.com
http_access deny blockanalysis01
http_access deny blockads01
http_access deny blockads02

There are thousands of websites serving ads.
Just try to block those that generate the most traffic and
you will save bandwidth.  Trying to block more does not
help much, so look at your Squid log.

All ACLs need to be defined in the right order.
See also http://wiki.squid-cache.org/SquidFaq/SquidAcl

Marcus


Landy Landy wrote:

 If you are desperate for bandwidth I suggest to block
 ads (e.g. a.rad.msn.com) and 'user behaviour analysis'
 (e.g. scorecardresearch.com).
 Furthermore, you may consider blocking mp3 files.
 Depending on what type of users you have, this can save
 a lot of bandwidth.

 Blocking is not an option. I have a small WISP and
 blocking stuff won't work for us, though I'm blocking p2p by
 default for everyone except those who call me and ask if
 it can be enabled for magicjack and other services.

 Blocking scorecardresearch.com is invisible to users since
 it just produces a 1x1 pixel gif that is unused for the visual
 display. Blocking it makes their internet experience faster.
 There are many websites that do this type of 'user behaviour
 analysis'.

 Blocking advertisements is debatable.
 Maybe your users want to see them, or maybe
 they want a faster internet. I have seen
 Squid log files where the microsoft ad + yahoo ad
 + google ads took 25% of the total bandwidth.
 Just check on your server what percentage the URLs for
    rad.msn.com
    adserver.yahoo.com
    pagead2.googlesyndication.com
 consume.

 If that's the case I don't mind blocking it. Please guide me on how to do it.

 Thanks.





[squid-users] Problem with getting through squid in vmware

2010-02-15 Thread Michael Neumeier
Hello list,

I am facing a simple problem getting through squid. First of all, I want to explain my
setup:

I have a Windows 7 host machine with the IP 10.255.0.0/24. On this Windows 
machine, I have VMWare 6 installed. In this VMWare, I am running Debian 5.0.2 
32bit with squid 3.0.Stable-3+lenny2. The IP of this VM is 192.168.157.155

Now, I want a Firefox running on the host to connect through the squid running in
the VM to the internet. Therefore, I added the following acl / rules:

acl test1 src 10.255.0.0/24
http_access allow test1

And in Firefox I configured the IP 192.168.157.155 with port 3128 for all
proxy settings. But Firefox tells me that the proxy server denied access. I
tried to ping the VM from the Windows 7 host - and that works perfectly. So as a
next step, I thought that I had made general errors in the squid config, and
therefore I used the following lines in squid.conf:

acl all src 0.0.0.0/0.0.0.0
http_access allow all

Even after a restart, the access doesn't work. Can anyone help me here please?
(And yes, I am new to squid...)

Regards,
Dennis

[squid-users] auth-problem with website on server with port 8080

2010-02-15 Thread Ralf.Lutz
Hi,

I have a problem with our newly configured proxy authenticating against an AD via
Kerberos.

We visit a website zfl.bsz-bw.de (sorry, it's password-protected, you can't
test it, but if you want to see anything, just contact me) that works fine.
There we click a link that is redirected to pollux.bsz-bw.de and then
redirected to pollux.bsz-bw.de:8080.

The URL with the port 8080 does not finish loading; the response of the GET is a
407, but nothing more happens. After a few minutes, parts of the site are
loaded from safebrowsing-cache.google.com (I can see this with HTTPfox), but
the site doesn't work, of course.

Do you understand what I mean? I can't describe the problem exactly.

It seems that the Kerberos auth against the AD doesn't work with the site on
port 8080. Could this be?

I have our old squid here, too. It uses NTLM to auth against the AD. With this
proxy, the website with port 8080 works well (but I don't want to use the old
squid anymore).

I have squid-3.0.STABLE9-1.el5 running on CentOS 5.4.

Can anyone help me?

Best Regards,
Ralf



Re: [squid-users] Problem with getting through squid in vmware

2010-02-15 Thread Dieter Bloms
Hi Michael,

On Mon, Feb 15, Michael Neumeier wrote:

 I have a Windows 7 host machine with the IP 10.255.0.0/24. On this
 Windows machine, I have VMWare 6 installed. In this VMWare, I am
 running Debian 5.0.2 32bit with squid 3.0.Stable-3+lenny2. The IP of
 this VM is 192.168.157.155

I think your windows host comes with an address from the range
192.168.157.0/24; please verify with tcpdump.

You have to insert your own http_access lines before the
http_access deny all line.
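
Something like this minimal squid.conf ordering sketch should work,
assuming your host really does get a 192.168.157.0/24 address (verify
first; the range here is an assumption):

acl vmnet src 192.168.157.0/24    # assumed VMware host-only network
http_access allow vmnet           # must come before the deny
http_access deny all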


--

Gruß

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Re: [squid-users] auth-problem with website on server with port 8080

2010-02-15 Thread Ralf.Lutz
For testing I've now made the following entries in squid.conf before the
auth entries, so that I should connect to the site without authentication:

acl zfl url_regex bsz-bw
acl zfl1 port 80 8080
http_access allow zfl zfl1

and this does not work either. So it seems that the authentication is not the
problem, right? The working squid with NTLM is also another squid version, so
that could be a factor too.

Does anyone have an idea?

Kind Regards,
Ralf


Re: [squid-users] cache manager access from web

2010-02-15 Thread Matus UHLAR - fantomas
On 14.02.10 01:32, J. Webster wrote:
 Would that work with:
 http_access deny manager CONNECT !SSL_ports

no, the manager is not fetched by a CONNECT request (unless something is
broken).

you need the https_port directive and an acl of type myport, then allow manager
only on the https port. That should work.

note that you should access the manager directly, not through the proxy.
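
A minimal sketch of that idea (the port number and certificate paths are
hypothetical; https_port requires Squid built with SSL support):

https_port 3129 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
acl mgr_port myport 3129
http_access allow manager mgr_port
http_access deny manager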

 
  Date: Sat, 13 Feb 2010 20:58:11 +0100
  From: uh...@fantomas.sk
  To: squid-users@squid-cache.org
  Subject: Re: [squid-users] cache manager access from web
 
  On 11.02.10 10:46, J. Webster wrote:
  I have changed the config and can now login to the cache manager.
  This was in the conf already:
  http_access deny CONNECT !SSL_ports
 
  So, the issue remains whether allowing password access to the cache 
  manager is enough.
  How else can this be made more secure? I guess not if the only way for me 
  to access it is through a public IP address.
 
  I think allowing manager only on https_port should work and help...
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Windows found: (R)emove, (E)rase, (D)elete


[squid-users] Configuring Squid to proxy by protocol (only http)?

2010-02-15 Thread Bill Stephens
All,

My institution has a proxy.pac configuration that proxies HTTP traffic
but not HTTPS. This works fine in a browser.  When I try to configure
Java to use the proxy it will connect to HTTP URLs just fine and barf
on HTTPS because the proxy changes the protocol on secure requests to
HTTP and our Web Services do not like that.

Can a Squid proxy be configured as follows?
1. HTTP traffic: forward to existing proxy
2. HTTPS traffic: direct connect

Thanks,
Bill S.
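
For what it's worth, a minimal squid.conf sketch of that split, assuming
the existing proxy is reachable at oldproxy.example.com:8080 (a
hypothetical name; CONNECT is the ACL Squid's default config defines for
HTTPS tunnels):

cache_peer oldproxy.example.com parent 8080 0 no-query default
acl CONNECT method CONNECT
always_direct allow CONNECT    # HTTPS tunnels go direct
never_direct allow all         # plain HTTP goes via the parent proxy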


Re: [squid-users] auth-problem with website on server with port 8080

2010-02-15 Thread Ralf.Lutz
The problem is solved, it had nothing to do with squid or kerberos-auth.
The port 8080 was blocked at the firewall from the new proxy, the old
proxy was allowed.

Regards,
Ralf



[squid-users] RE: Advisory SQUID-2010:2 - Remote Denial of Service issue in HTCP

2010-02-15 Thread Andy Litzinger
Does the HTCP port have to be open towards the attacker or can the attacker 
exploit the bug through a squid listening port?  i.e. If I have a firewall in 
front of squid (reverse proxy) that only allows port 80/443 in from the web and 
HTCP is bound to some other port am I at risk from attackers outside my 
firewall?

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, February 12, 2010 6:30 AM
To: squid-annou...@squid-cache.org; Squid
Subject: Advisory SQUID-2010:2 - Remote Denial of Service issue in HTCP

__

 Squid Proxy Cache Security Update Advisory SQUID-2010:2
__

Advisory ID:        SQUID-2010:2
Date:               February 12, 2010
Summary:            Remote Denial of Service issue in HTCP
Affected versions:  Squid 2.x,
                    Squid 3.0 - 3.0.STABLE23
Fixed in version:   Squid 3.0.STABLE24
__

 http://www.squid-cache.org/Advisories/SQUID-2010_2.txt
__

Problem Description:

  Due to incorrect processing Squid is vulnerable to a denial of
  service attack when receiving specially crafted HTCP packets.

__

Severity:

  This problem allows any machine to perform a denial of service
  attack on the Squid service when its HTCP port is open.

__

Updated Packages:

  This bug is fixed by Squid version 3.0.STABLE24.

  In addition, patches addressing these problems can be found in
  our patch archives.

Squid 2.7:
  http://www.squid-cache.org/Versions/v2/2.7/changesets/12600.patch

Squid 3.0:
http://www.squid-cache.org/Versions/v3/3.0/changesets/3.0-ADV-2010_2.patch


  If you are using a prepackaged version of Squid then please refer
  to the package vendor for availability information on updated
  packages.

__

Determining if your version is vulnerable:

  All Squid-3.0 releases without htcp_port in their configuration
  file (the default) are not vulnerable.

  Squid-3.1 releases are not vulnerable.

  For unpatched Squid-2.x and Squid-3.0 releases: if your cache.log
  contains a line with "Accepting HTCP messages on port ..." when run
  with debug level 1 (debug_options ALL,1), your Squid is
  vulnerable.

  Alternatively, for unpatched Squid-2.x and Squid-3.0 releases:
  if the command
        squidclient mgr:config | grep htcp_port
  displays a non-zero HTCP port, your Squid is vulnerable.

__

Workarounds:

  For Squid-2.x:
   * Configuring htcp_port 0 explicitly

  For Squid-3.0:
   * Ensuring that any unnecessary htcp_port settings left in
     squid.conf after upgrading to 3.0 are removed.

__

Contact details for the Squid project:

  For installation / upgrade support on binary packaged versions
  of Squid: Your first point of contact should be your binary
  package vendor.

  If you install and build Squid from the original Squid sources
  then the squid-users@squid-cache.org mailing list is your primary
  support point. For subscription details see
  http://www.squid-cache.org/Support/mailing-lists.html.

  For reporting of non-security bugs in the latest STABLE release
  the squid bugzilla database should be used
  http://www.squid-cache.org/bugs/.

  For reporting of security sensitive bugs send an email to the
  squid-b...@squid-cache.org mailing list. It's a closed list
  (though anyone can post) and security related bug reports are
  treated in confidence until the impact has been established.

__

Credits:

  The vulnerability was discovered by Kieran Whitbread.

__

Revision history:

  2010-02-12 14:11 GMT Initial Release
__
END



Re: [squid-users] Creating ip exception

2010-02-15 Thread Jose Ildefonso Camargo Tolosa
Hi!

Remember ACLs are evaluated top to bottom, and the first one to match will
be used, so just add an allow for the IPs you want whitelisted
before the ACL that blocks the pages.
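
A minimal sketch of that ordering, with hypothetical addresses for the
two PCs:

acl fb_pcs src 192.168.0.10 192.168.0.11   # the two allowed machines (assumed IPs)
acl facebook dstdomain .facebook.com
http_access allow fb_pcs facebook
http_access deny facebook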

I hope this helps,

Ildefonso Camargo


On Mon, Feb 15, 2010 at 12:34 AM, Martin Connell
mconn...@richmondfc.com.au wrote:
 Dear Squid,

 I am a new squid user, and I've been delegated the task of creating a couple
 of exceptions based on IP address.

 So basically, we have our squid set up so certain sites are banned for all
 users, facebook etc. However there are 2 PCs we want to have access
 specifically to facebook for work purposes. Can you please point me in the
 right direction as to how I would go about this? I've been trying to google
 this; I know I need to edit the squid.conf file, but after looking through
 that file I'm not too sure how to do this. Any help would be much appreciated.







Re: [squid-users] squid + dansguardian + auth

2010-02-15 Thread Jose Ildefonso Camargo Tolosa
Hi!

I really don't understand why you people are so insistent on the
x-forwarded-for thing. It has nothing to do with authentication,
unless you use IP as part of your ACLs, of course.

Now, I repeat:

authplugin = '/etc/dansguardian/authplugins/proxy-basic.conf'
authplugin = '/etc/dansguardian/authplugins/proxy-digest.conf'
authplugin = '/etc/dansguardian/authplugins/proxy-ntlm.conf'

That's an excerpt from the dansguardian.conf file.  Do you have these enabled?

I hope this helps,

Ildefonso Camargo

On Mon, Feb 15, 2010 at 5:47 AM, Bruno Santos bvsan...@hal.min-saude.pt wrote:

 Hi !

 Yes, I was careful to check the SPEC file to see if there was such an option,
 and it is present:
 --enable-follow-x-forwarded-for

 The problem, I guess, is that when dansguardian forwards the IP to squid, instead of
 giving the original IP, it goes with the local IP.
 But, with other options enabled, I get an HTML response - 400 Bad Request.


[squid-users] Problem with transparent mode to a certain site

2010-02-15 Thread John Lauro
Hello,

I can't access hotels.com going through a transparent squid box.  However,
with a browser set to use the same squid box as a manual proxy configuration
it works fine.  Squid version 3.1.0.15.  Just running this as a semi
test/production setup. Our configuration isn't completely standard, as we are
going through haproxy in addition to squid. Can others with squid running in
transparent mode confirm or deny a problem with this site?  Unfortunately I
did not write the exact error down (took it out of the loop for now), but it
was something about an unsupported compression method in the browser (running
on Firefox 3.5.7).

 






Re: [squid-users] setting up different filtering based on port number

2010-02-15 Thread Al - Image Hosting Services

Hi,

On Mon, 15 Feb 2010, Amos Jeffries wrote:

On Sun, 14 Feb 2010 18:21:25 -0600 (CST), Al - Image Hosting Services
az...@zickswebventures.com wrote:

 Hi,

 I know that this is a little bit off topic for this list, but I asked on
 the squidguard list and they said that I need to run 2 instances of squid.
 I know that squid can listen on 2 ports very easily, and I have setup
 squid to listen on 2 different ports. Port 8080 uses squidguard to filter,
 but port 8081 doesn't. What I would really like to be able to do is to
 have less restrictive filtering on port 8081. For example, I would like to
 block youtube on port 8080, but not on port 8081. Still I would like to be
 able to block porn on port 8081. Could someone give me some assistance on
 how to do this or point me to a how-to?

 Best Regards,
 Al

Use of the myport ACL type and url_rewrite_access to prevent things
being sent to the squidguard re-writer.

http://www.squid-cache.org/Doc/config/url_rewrite_access/


I should have explained that differently, so I will give it another try.

This is what I have in my squid.conf now:

acl custom-auth proxy_auth REQUIRED
acl mysite dstdomain .zickswebventures.com
acl portA myport 8080
acl portB myport 8081
url_rewrite_access allow portA
url_rewrite_program /bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 3
http_access allow mysite
http_access allow custom-auth all
http_access deny all

It works perfectly: requests sent to portA are filtered and requests
sent to portB are not, but I need to add sort of an intermediate level
of filtering.


Solution 1: It looks like squidguard can filter based on IP. If I created
a portC in squid.conf, should I be able to add this to my squidguard.conf:

 src portC {
     ip 0.0.0.0:8082
 }

 src portA {
     ip 0.0.0.0:8080
 }

My question is, does squid pass the port along with the IP address to
squidguard? If it does, then is my config wrong or does squidguard just
not know what to do with the port information?


Solution 2: Call 2 instances of squidguard with a different config. 
Although, I don't know if this is possible without knowing more about how 
squid passes information to squidguard.


Solution 3: Create a blocklist within squid of maybe 5 to 30 sites, so my
squid.conf would look like:


acl custom-auth proxy_auth REQUIRED
acl mysite dstdomain .zickswebventures.com
acl block dstdomain .facebook.com .twitter.com
acl portA myport 8080
acl portB myport 8081
acl portB myport 8082

url_rewrite_access allow portA portB
url_rewrite_program /bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 3
http_access allow mysite
http_access allow custom-auth all
http_access deny all

Of course, the blank line is where I would need to tell squid to redirect
to zickswebventures.com/blocked.html if it sees one of the URLs being
blocked, but only on portA. Could this be done?


Best Regards,
Al




[squid-users] RE: Advisory SQUID-2010:2 - Remote Denial of Service issue in HTCP

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 09:19:40 -0800, Andy Litzinger
andy.litzin...@theplatform.com wrote:
 Does the HTCP port have to be open towards the attacker or can the
 attacker exploit the bug through a squid listening port?  i.e. If I have a
 firewall in front of squid (reverse proxy) that only allows port 80/443 in
 from the web and HTCP is bound to some other port, am I at risk from
 attackers outside my firewall?

As long as the attacker can get a packet into the HTCP listener port they
can crash Squid.

NP: that differs from the http_port.

A firewall preventing external access to the HTCP port drops the severity.
But it might still be exploited by internal machines, so the Squid is still
vulnerable.

Also note, Squid passes these messages on _unchanged_ to its peers
regardless of its own handling, so making one gateway Squid immune does not
protect those behind it.
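
If HTCP is not actually needed, a minimal mitigation sketch (htcp_port 0
is the documented Squid-2.x workaround; the iptables line assumes the
default HTCP port of 4827/udp and is only an illustration):

htcp_port 0

# or, until patched, block it at the host firewall:
iptables -A INPUT -p udp --dport 4827 -j DROP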

Amos



Re: [squid-users] Problem with transparent mode to a certain site

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 13:22:02 -0500, John Lauro
john.la...@covenanteyes.com wrote:
 Hello,
 
 I can't access hotels.com going through a transparent squid box.  However,
 with a browser set to use the same squid box as a manual proxy configuration
 it works fine.  Squid version 3.1.0.15.  Just running this as a semi
 test/production setup. Our configuration isn't completely standard, as we are
 going through haproxy in addition to squid. Can others with squid running in
 transparent mode confirm or deny a problem with this site?  Unfortunately I
 did not write the exact error down (took it out of the loop for now), but it
 was something about an unsupported compression method in the browser (running
 on Firefox 3.5.7).

http://redbot.org/?uri=http://www.hotels.com/

The error "The resource doesn't send Vary consistently." will screw most
browsers at random times when viewing through any proxy.

Amos



Re: [squid-users] setting up different filtering based on port number

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 14:23:12 -0600 (CST), Al - Image Hosting Services
az...@zickswebventures.com wrote:
 Hi,

 On Mon, 15 Feb 2010, Amos Jeffries wrote:
 On Sun, 14 Feb 2010 18:21:25 -0600 (CST), Al - Image Hosting Services
 az...@zickswebventures.com wrote:
 snip

 Use of the myport ACL type and url_rewrite_access to prevent things
 being sent to the squidguard re-writer.

 http://www.squid-cache.org/Doc/config/url_rewrite_access/

 I should have explained that differently, so I will give it another try.

 This is what I have in my squid.conf now:

 acl custom-auth proxy_auth REQUIRED
 acl mysite dstdomain .zickswebventures.com
 acl portA myport 8080
 acl portB myport 8081
 url_rewrite_access allow portA
 url_rewrite_program /bin/squidGuard -c /etc/squid/squidGuard.conf
 url_rewrite_children 3
 http_access allow mysite
 http_access allow custom-auth all
 http_access deny all

 It works perfectly: requests sent to portA are filtered and requests
 sent to portB are not, but I need to add sort of an intermediate level
 of filtering.

 Solution 1: It looks like squidguard can filter based on IP. If I created
 a portC in squid.conf, should I be able to add this to my squidguard.conf:

   src portC {
       ip 0.0.0.0:8082
   }

   src portA {
       ip 0.0.0.0:8080
   }

 My question is, does squid pass the port along with the IP address to
 squidguard? If it does, then is my config wrong or does squidguard just
 not know what to do with the port information?

Squid will never pass the IP 0.0.0.0 to squidguard. All IPs handled are
routable, so I expect that will never match properly.

 Solution 2: Call 2 instances of squidguard with a different config.
 Although, I don't know if this is possible without knowing more about
 how squid passes information to squidguard.

 Solution 3: Create a blocklist within squid of maybe 5 to 30 sites, so
 my squid.conf would look like:

 acl custom-auth proxy_auth REQUIRED
 acl mysite dstdomain .zickswebventures.com
 acl block dstdomain .facebook.com .twitter.com
 acl portA myport 8080
 acl portB myport 8081
 acl portB myport 8082

 url_rewrite_access allow portA portB
 url_rewrite_program /bin/squidGuard -c /etc/squid/squidGuard.conf
 url_rewrite_children 3
 http_access allow mysite
 http_access allow custom-auth all
 http_access deny all

 Of course, the blank line is where I would need to tell squid to
 redirect to zickswebventures.com/blocked.html if it sees one of the
 URLs being blocked, but only on portA. Could this be done?

You seem to misunderstand how ACLs work.

Read this:
  http://wiki.squid-cache.org/SquidFaq/SquidAcl#Common_Mistakes

Then consider this:
  url_rewrite_access allow portA
  url_rewrite_access allow portB !block
  url_rewrite_access deny all


Or better yet do a real HTTP redirection by Squid instead:

  deny_info http://zickswebventures.com/blocked.html block
  http_access deny portA block


Amos


Re: [squid-users] squid + dansguardian + auth

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 13:15:35 -0430, Jose Ildefonso Camargo Tolosa
ildefonso.cama...@gmail.com wrote:
 Hi!
 
 I really don't understand why you people are so insistent on the
 x-forwarded-for thing. It has nothing to do with authentication,
 unless you use IP as part of your ACLs, of course.


You mean such little 'unimportant' things as "http_access allow
our_networks" or "http_access deny all"?


XFF defines the route of transfer. Security ACLs define the trusted secure
zone. Combined, the XFF provides the true origin client for end-server
access authorization (and sometimes IP spoofing) across any hierarchy.

The hierarchy in this case is client -> DG -> Squid -> untrusted.

Some (many?) websites use it to identify individual client sources across
translation technologies such as NAT, intercepting proxies and CDN
hierarchies, where the IP addresses are altered and multiple clients
otherwise appear to all come from the same source.

In the case of Squid+DansGuardian, _every single request_ comes out the
other end as sourced from 127.0.0.1 / localhost.
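
For the ACL case, a minimal sketch of letting Squid trust the XFF header
added by a DansGuardian on the same box, assuming a build with
--enable-follow-x-forwarded-for (the network range is an assumption):

follow_x_forwarded_for allow localhost   # trust XFF only from the local DG
acl our_networks src 192.168.0.0/24      # assumed LAN range
http_access allow our_networks
http_access deny all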

Amos



Re: [squid-users] Recent Polygraph results for Squid?

2010-02-15 Thread Luis Daniel Lucio Quiroz
On Monday 15 February 2010 05:35:47, Oborn, Keith wrote:
 Hi all -
 
 We used to be a heavy user of commercial forward proxy/cache products,
 but with the demise of the NetApp line we stopped that activity.
 
 However, I'm now looking for an initial steer on a good large-scale
 forward proxy setup (multi-gigabit rates, hundreds of thousands of
 users). Unfortunately, our Polygraph rig was also scrapped some time ago
 (boxes died), and it will take time and resources to build a new one.
 
 Sadly, the isp-caching and web-polygraph lists seem to be dead
 nowadays.
 
 I'd be very interested in any numbers at all - most particularly on
 recent Sun kit, as it looks as if ZFS is a good bet for Squid. I must
 admit I always hankered after testing a proxy/cache on a Thumper (X4540)
 because of the huge spindle density.
 
 At present, any numbers that get me within like a factor of two of
 actual performance on any modern X86 server hardware would be great (and
 perhaps any idea if using Sun T-series helps - does Squid like lots of
 threads?) - that will enable us to decide whether it is worth setting up
 to run our own detailed tests. I will, of course, post any test results
 we produce if we go down that road.
 
 
 
I don't know if my implementation is bigger than yours, but we have
3 Linux x86 boxes (Xeon 2.3 GHz + 32 GB RAM each) supporting 5000 users with the
following traffic characteristics:

object mean size: 17 KB
hits/min: 4500 (each server)

If you are worried about Squid capacity, take my data as a reference.

LD


Re: [squid-users] Recent Polygraph results for Squid?

2010-02-15 Thread Brett Lymn
On Mon, Feb 15, 2010 at 11:35:47AM -, Oborn, Keith wrote:
  
 I'd be very interested in any numbers at all - most particularly on
 recent Sun kit, as it looks as if ZFS is a good bet for Squid. I must
 admit I alway hankered after testing a proxy/cache on a Thumper (X4540)
 because of the huge spindle density.
 

Our main squid proxies are a couple of Sun Fire V445s; peak http
requests are 12.5k/min, median service time < 100ms.  We run our
caches on ZFS, two spindles only.  We cut over from UFS because we
kept running into fragmentation problems on UFS - squid would claim
the disk was full but df would say not.

 At present, any numbers that get me within like a factor of two of
 actual performance on any modern X86 server hardware would be great (and
 perhaps any idea if using Sun T-series helps - does Squid like lots of
 threads?) 

Squid itself is single threaded so it probably would not benefit from
a T-series by itself, but if you have a lot of
authenticators/redirectors/other ancillaries it may work well. Or
maybe set up multiple instances of squid, or even resort to
slicing up the machine into a few LDOMs to present multiple smaller
machines in the same chassis - that may keep the squid config simpler.
Never tested squid on a T-series myself though.
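
A minimal sketch of the multiple-instances idea, assuming one config
file per instance (paths and ports are hypothetical; each instance needs
its own port, PID file, logs and cache_dir):

# /etc/squid/squid-a.conf  (squid-b.conf would use 3129, etc.)
http_port 3128
pid_filename /var/run/squid-a.pid
access_log /var/log/squid/access-a.log
cache_dir ufs /cache/squid-a 10000 16 256

# start each instance against its own config:
squid -f /etc/squid/squid-a.conf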

-- 
Brett Lymn




[squid-users] icap_preview_size units

2010-02-15 Thread Luis Daniel Lucio Quiroz
Hi all

when using this parameter:
icap_preview_size 128

what does it mean exactly: 128 bytes, KB, MB or %?
I couldn't figure it out.  The fact is that when tcpdumping I see more than 128,
under 3.0.19+


tia

LD
 


Re: [squid-users] Problem with getting through squid in vmware

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 15:07:50 +0100, Dieter Bloms sq...@bloms.de wrote:
 Hi Michael,
 
 On Mon, Feb 15, Michael Neumeier wrote:
 
 I have a Windows 7 host machine with the IP 10.255.0.0/24. On this
 Windows machine, I have VMWare 6 installed. In this VMWare, I am
 running Debian 5.0.2 32bit with squid 3.0.Stable-3+lenny2. The IP of
 this VM is 192.168.157.155
 
 I think your windows host comes with an address from the range
 192.168.157.0/24; please verify with tcpdump.
 
 You have to insert your own http_access lines before the 
 http_access deny all line
 

Or alternatively the squid access.log during your allow all test should
have recorded the real client IP.

Amos


Re: [squid-users] cache manager access from web

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 15:32:30 +0100, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:
 On 14.02.10 01:32, J. Webster wrote:
 Would that work with:
 http_access deny manager CONNECT !SSL_ports
 
 no, the manager is not fetched by CONNECT request (unless something is
 broken).
 
 you need the https_port directive and an acl of type myport, then allow
 manager only on the https port. That should work.
 
 note that you should access manager directly not using the proxy.
 

You may (or may not) hit a problem after trying that because the cache mgr
access uses its own protocol,
cache_object:// not https://.  An SSL tunnel with mgr access going through
it should not have that problem, but one never knows.

Amos

 
  Date: Sat, 13 Feb 2010 20:58:11 +0100
  From: uh...@fantomas.sk
  To: squid-users@squid-cache.org
  Subject: Re: [squid-users] cache manager access from web
 
  On 11.02.10 10:46, J. Webster wrote:
  I have changed the config and can now login to the cache manager.
  This was in the conf already:
  http_access deny CONNECT !SSL_ports
 
  So, the issue remains whether allowing password access to the cache
  manager is enough.
  How else can this be made more secure? I guess not if the only way
  for me to access it is through a public IP address.
 
  I think allowing manager only on https_port should work and help...


Re: [squid-users] Configuring Squid to proxy by protocol (only http)?

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 09:45:27 -0500, Bill Stephens grape...@gmail.com
wrote:
 All,
 
 My institution has a proxy.pac configuration that proxies HTTP traffic
 but not HTTPS. This works fine in a browser.  When I try to configure
 Java to use the proxy it will connect to HTTP URLs just fine and barf
 on HTTPS because the proxy changes the protocol on secure requests to
 HTTP and our Web Services do not like that.

A badly broken proxy by the sounds of it.

 
 Can a Squid proxy be configured as follows?
 1. HTTP traffic: forward to existing proxy
 2. HTTPS traffic: direct connect

This is a Java problem at the core. It sounds like your Java can't
interpret PAC files. See about fixing that first; a version of Java that
can do HTTP stuff properly may come with a lot of other useful fixes.

HTTPS was designed specifically to prevent man-in-middle attacks such as
interception proxies. You require administrative control over the domains
being visited or the client computers doing the connecting to get around
the security errors thrown up by HTTPS. You will also require Squid-3.1
sslbump feature probably.

Your best bet though is getting the broken proxy fixed or replaced with
something that knows HTTP.

Amos


Re: [squid-users] auth-problem with website on server with port 8080

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 14:54:41 +0100, ralf.l...@heidelberg.de wrote:
 Hi,
 
 I have a problem with our new configured proxy authenticating against an
 AD via kerberos.
snip
 
 I have squid-3.0.STABLE9-1.el5 running on centos 5.4
 
 Can anyone help me ?

I note that you are calling your 3.0.STABLE9 "new".
It's quite old now; please do your best to get a more recent 3.0.

Amos


Re: [squid-users] icap_preview_size units

2010-02-15 Thread Amos Jeffries
On Mon, 15 Feb 2010 18:41:10 -0600, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:
 Hi all
 
 when using this parameter
 icap_preview_size 128
 
 what does it mean exactly: 128 bytes, KB, MB or %?
 I couldn't figure it out.  The fact is that when tcpdumping I see more than
 128, under 3.0.19+

It is the _default_ preview size. IIRC, the server itself can alter the
preview size during the protocol config exchange.
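
So the value is a byte count, but only the default offer. A minimal
sketch of the relevant squid.conf lines (the ICAP service URL is
hypothetical):

icap_enable on
icap_preview_enable on
icap_preview_size 128   # bytes offered; the ICAP server may negotiate another size
icap_service svc_req reqmod_precache 1 icap://127.0.0.1:1344/reqmod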

Amos


[squid-users] Re: Squid with Dansguardian is killing apt-get and Spybot updates

2010-02-15 Thread tcygne

How are Squid and DansGuardian chained together? How does that fit with
the firewall interception rules?

I'm not sure what you are asking. The proxy/filter doesn't seem to have any
firewall installed. The traffic is rerouted to the filter by the dd-wrt
router box at 192.168.1.1 using the following commands.

#!/bin/sh
PROXY_IP=192.168.1.2
PROXY_PORT=8080
LAN_IP=`nvram get lan_ipaddr`
LAN_NET=$LAN_IP/`nvram get lan_netmask`
iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT
iptables -t nat -I PREROUTING -i br0 -s 192.168.1.5 -j ACCEPT

The final command allows 192.168.1.5 to bypass the filter. This would be the
only device from which apt-get and spybot updates work. (Never mind how
one device can do both of those things.) It looks like all traffic is
rerouted to port 8080 (dansguardian answers), so maybe it isn't hitting squid
at all, and this isn't a squid issue. ;-( I'm not real slick with iptables,
but maybe the router box is dropping all non-port-80 traffic except for
device 192.168.1.5? More than likely apt and spybot use https, so what would
be the iptables rule to allow all traffic on port 443 to bypass the filter?




[squid-users] delay pool

2010-02-15 Thread Adnan Shahzad
Dear All,

I want to configure a per-user quota, meaning 2 GB of internet access per day. Can I do
it with delay pools? But in a delay pool, how? And my 2nd question: is the delay
pool bucket per day, per week, or per month?


Kindly reply

Regards

Adnan


[squid-users] Re: slow performance 1 user : 3.1.0.16 on default config

2010-02-15 Thread Andres Salazar
Hello,

This time we can see I followed the original config. A page like
cnn.com takes about 60 seconds to load. Without the proxy it takes 10
seconds.

CentOS 5.4 fresh and clean base install with compiling tools.
Then I installed openssl-devel.
./configure; make; make install - and the same problem with:
./configure --enable-ssl; make; make install

This is a dual Atom with 1 GB RAM. I will have to test on another machine tomorrow.

The cache log says:
2010/02/15 23:41:34| Starting Squid Cache version 3.1.0.16 for
i686-pc-linux-gnu...
2010/02/15 23:41:34| Process ID 10523
2010/02/15 23:41:34| With 1024 file descriptors available
2010/02/15 23:41:34| Initializing IP Cache...
2010/02/15 23:41:34| DNS Socket created at [::], FD 5
2010/02/15 23:41:34| Adding nameserver 16.10.55.10 from /etc/resolv.conf
2010/02/15 23:41:34| Adding nameserver 16.30.3.13 from /etc/resolv.conf
2010/02/15 23:41:34| Adding domain localhost from /etc/resolv.conf
2010/02/15 23:41:34| Unlinkd pipe opened on FD 10
2010/02/15 23:41:34| Store logging disabled
2010/02/15 23:41:34| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2010/02/15 23:41:34| Target number of buckets: 1008
2010/02/15 23:41:34| Using 8192 Store buckets
2010/02/15 23:41:34| Max Mem  size: 262144 KB
2010/02/15 23:41:34| Max Swap size: 0 KB
2010/02/15 23:41:34| Using Least Load store dir selection
2010/02/15 23:41:34| Set Current Directory to /usr/local/squid/var/cache
2010/02/15 23:41:35| Loaded Icons.
2010/02/15 23:41:35| Accepting  HTTP connections at [::]:3128, FD 11.
2010/02/15 23:41:35| HTCP Disabled.
2010/02/15 23:41:35| Squid modules loaded: 0
2010/02/15 23:41:35| Ready to serve requests.
2010/02/15 23:41:35| storeLateRelease: released 0 objects
2010/02/15 23:42:51| Preparing for shutdown after 186 requests
2010/02/15 23:42:51| Waiting 0 seconds for active connections to finish
2010/02/15 23:42:51| FD 11 Closing HTTP connection
2010/02/15 23:42:53| Shutting down...
2010/02/15 23:42:53| basic/auth_basic.cc(97) done: Basic
authentication Shutdown.
2010/02/15 23:42:53| Closing unlinkd pipe on FD 10
2010/02/15 23:42:53| storeDirWriteCleanLogs: Starting...
2010/02/15 23:42:53|   Finished.  Wrote 0 entries.
2010/02/15 23:42:53|   Took 0.00 seconds (  0.00 entries/sec).
CPU Usage: 0.429 seconds = 0.298 user + 0.131 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:4092 KB
Ordinary blocks: 3960 KB 85 blks
Small blocks:   0 KB  1 blks
Holding blocks:  2140 KB 11 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 131 KB
Total in use:6100 KB 149%
Total free:   131 KB 3%
2010/02/15 23:42:53| Open FD READ/WRITE5 DNS Socket
2010/02/15 23:42:53| Open FD READ/WRITE8 Waiting for next request
2010/02/15 23:42:53| Open FD READ/WRITE9 Reading next request
2010/02/15 23:42:53| Open FD READ/WRITE   12 Reading next request
2010/02/15 23:42:53| Open FD READ/WRITE   13 i.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   14 Reading next request
2010/02/15 23:42:53| Open FD READ/WRITE   15 i.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   16 www.cnnaudience.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   17 i.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   18 Waiting for next request
2010/02/15 23:42:53| Open FD READ/WRITE   19 i.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   20 Waiting for next request
2010/02/15 23:42:53| Open FD READ/WRITE   21 Waiting for next request
2010/02/15 23:42:53| Open FD READ/WRITE   22 Waiting for next request
2010/02/15 23:42:53| Open FD READ/WRITE   23 i.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   24 i.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   25 i.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   26 i.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   27 mail.google.com:443
2010/02/15 23:42:53| Open FD READ/WRITE   28 content.dl-rms.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   29 chatenabled.mail.google.com:443
2010/02/15 23:42:53| Open FD READ/WRITE   30
symbolcomplete.marketwatch.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   31 es.optimost.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   33 Waiting for next request
2010/02/15 23:42:53| Open FD READ/WRITE   34 i2.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   35 i2.cdn.turner.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   36 cnn.dyn.cnn.com idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   37 b.scorecardresearch.com
idle connection
2010/02/15 23:42:53| Open FD READ/WRITE   38 Reading next request
2010/02/15 23:42:53| Open FD READ/WRITE   39 mail.google.com:443
2010/02/15 

Re: [squid-users] Re: Squid with Dansguardian is killing apt-get and Spybot updates

2010-02-15 Thread Amos Jeffries

tcygne wrote:

How are Squid and DansGuardian chained together? How does that fit with
the firewall interception rules?


I'm not sure what you are asking. The proxy/filter doesn't seem to have any
firewall installed. The traffic is rerouted to the filter by the dd-wrt
router box at 192.168.1.1 using the following commands.



Ah, okay. You sound a little confused about your own network structure 
but managed to answer my question anyway :) well done.


What you have is this:

 Client -> WRT -> DansGuardian -> Squid -> WRT -> Internet
(and back)


The WRT iptables is the firewall (even though it's on a different box).


#!/bin/sh
PROXY_IP=192.168.1.2
PROXY_PORT=8080
LAN_IP=`nvram get lan_ipaddr`
LAN_NET=$LAN_IP/`nvram get lan_netmask`
iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT


... passes packets between internal machines without involving the proxy 
box.



iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT


... passes all other port 80 to the proxy, except stuff from the proxy 
box itself. Specifically to DG on the proxy box.



iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP


... SNAT's everything from the local network to some IP belonging to the
WRT.
 I assume (and hope) that it is mapping internal IPs to some globally
routable IP, not just making all traffic seem to be coming from 192.168.1.1.




iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT


... lets stuff going to DG on the proxy box through.


iptables -t nat -I PREROUTING -i br0 -s 192.168.1.5 -j ACCEPT


I'm a little suspicious about that "iptables -t nat -I PREROUTING -i br0
-s 192.168.1.5 -j ACCEPT".




the final command allows 192.168.1.5 to bypass the filter. This would be the
only device from which apt-get and spybot updates work. (Never mind how


... the proxy box also is in that state.


one device can do both of those things.) It looks like all traffic is
rerouted to port 8080 (dansguardian answers), so maybe it isn't hitting squid
at all, and this isn't a squid issue. ;-( I'm not real slick with iptables,
but maybe the router box is dropping all non-port-80 traffic except for
device 192.168.1.5? More than likely apt and spybot use https, so what would
be the iptables rule to allow all traffic on port 443 to bypass the filter?



It should already be bypassing the filter. Only port-80 is handled 
specially. At most you may need:

 iptables -I FORWARD -i br0 -p tcp -s $LAN_NET --dport 443 -j ACCEPT


Regarding the HTTP breakage, try adding
  iptables -t nat -I POSTROUTING -j MASQUERADE

... if that does not fix the proxy access out again then look at
DansGuardian and see if it's passing stuff to Squid.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] Re: slow performance 1 user : 3.1.0.16 on default config

2010-02-15 Thread Amos Jeffries

Andres Salazar wrote:

Hello,

This time we can see I followed the original config. A page like
cnn.com takes about 60 seconds to load. Without the proxy it takes 10
seconds.


That sort of matches the relative number of parallel connections modern 
browsers will open to proxies vs to web servers.


You need to read this:

http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/

 

Note that if you’re behind a proxy (at work, etc.) your download 
characteristics change. If web clients behind a proxy issued too many 
simultaneous requests an intelligent web server might interpret that as 
a DoS attack and block that IP address. Browser developers are aware of 
this issue and throttle back the number of open connections.


In Firefox the network.http.max-persistent-connections-per-proxy setting 
has a default value of 4. If you try the Max Connections test page while 
behind a proxy it loads painfully slowly opening no more than 4 
connections at a time to download 180 images. IE8 drops back to 2 
connections per server when it’s behind a proxy, so loading the Max 
Connections test page shows an upperbound of 60 open connections. Keep 
this in mind if you’re comparing notes with others – if you’re at home 
and they’re at work you might be seeing different behavior because of a 
proxy in the middle.




Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] delay pool

2010-02-15 Thread Amos Jeffries

Adnan Shahzad wrote:

Dear All,

I want to configure Per user quota, Mean 2 GB per day internet access. Can I do 
it with delay pools? But in delay pool how And my 2nd question is delay 
pool bucket is for day or for week or month?


Delay pools work in seconds. Being old code, it also has some numeric
32-bit limits hanging around.


What you can do with it is assign a per-user bandwidth speed. Squid will 
police it for your HTTP traffic.


Quota stuff is quite hard in squid and still requires some custom code 
using helpers to manage the bandwidth used and do all the accounting. 
Squid only does allow/deny control at that point.
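
For the bandwidth-speed part, a minimal sketch (not a quota; the network
range and rates are hypothetical, rates being restore/max in bytes):

acl lan src 192.168.0.0/24
delay_pools 1
delay_class 1 2                       # class 2: aggregate bucket + one bucket per client IP
delay_parameters 1 -1/-1 32000/64000  # no aggregate limit; ~32 KB/s per client, 64 KB burst
delay_access 1 allow lan
delay_access 1 deny all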


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] Toproxy - cache rapidshare ??

2010-02-15 Thread Amos Jeffries

Ariel wrote:

Hello, list. I will write in Spanish and translate it with google.
For some time I have been reading info on tproxy to implement it on a debian lenny.
The specific query is as follows:
With TPROXY enabled, is it possible to cache sites like rapidshare,
megaupload, etc., or only to have these sites bypass the cache, so that they
see the source IP address (the client's) and not the cache's (squid's)?


Cacheability is a property of the reply objects. It has nothing to do 
with the client IP address.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] How to set up Squid reverse proxy for facebook?

2010-02-15 Thread Amos Jeffries

fulan Peng wrote:

Hi, Squid gurus!

I am struggling to set up Squid for facebook. At www.facebook.com ,
when you click login it will go to https://login.facebook.com. After
successful login, it will go back to www.facebook.com.  I used Apache
mod_proxy_html to rewrite the url, but still it won't work. Now I am
trying to use Squid to cache all subdomains of facebook in https.  Any
hint or suggestion will be appreciated!


Sigh.

Dear fulan Peng,

  what do you expect will happen to your customers when they attempt to
log in to their facebook account and get back a session ID for someone
else's account?


Or better yet, when they see someone else's purchase receipt after
entering their own credit card details?



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] RE: images occasionally don't get through

2010-02-15 Thread Amos Jeffries

Folkert van Heusden wrote:

To help the debugging I also found a URL that is accessible to everyone:

failed:
--
192.168.0.90 - - [12/Feb/2010:15:28:21 +] GET
http://www.ibm.com/common/v15/main.css HTTP/1.0 200 10015
http://www-03.ibm.com/systems/hardware/browse/linux/?c=serversintron=Linux
2001t=ad Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
TCP_MEM_HIT:DIRECT


http://redbot.org/?uri=http%3A%2F%2Fwww.ibm.com%2Fcommon%2Fv15%2Fmain.css


In short: The website is screwed.


1) The resource doesn't send Vary consistently.

... this means that the following might have happened:

 A fetches the compressed version of main.css -> gets a copy of main.css.

 B fetches the non-compressed version of main.css -> Squid finds a stored
copy of main.css. The Vary: rules say nothing about compression, so Squid
passes it back.



2) The ETag doesn't change between representations.

... means the following might have happened:

 A already had a copy of main.css. It's non-compressed and the server
called it ETag:XYZ.


 B fetches a compressed copy of object main.css from Squid -> the
server returns the compressed version and calls it ETag:XYZ.

   Squid stores it under the name main.css:(ETag:XYZ).

 A requests a new copy of main.css (ETag:XYZ) -> Squid finds its stored
copy of main.css:(ETag:XYZ) and passes it back.



Going by the sizes of what's working, I'd guess #2 as most likely. But
depending on the specific Vary: brokenness it might be either.
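
For reference, a consistent origin response would look something like
this sketch, with a stable Vary header and a distinct ETag per encoding:

HTTP/1.1 200 OK
Vary: Accept-Encoding
Content-Encoding: gzip
ETag: "main-css-v15-gzip"

with the identity (non-compressed) variant carrying a different ETag,
e.g. "main-css-v15".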



Your working test works because Ctrl+Reload sends special controls which
bypass any stored copies Squid might have and go directly back to the
web server. Note that you have now poisoned the cache for any other
machines which were previously working with the copies Squid used to store.
 They will now need to Ctrl+Reload to see the page properly, and thus
poison it for you...


The only person who can fix this is the website author or webserver admin.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] cache manager access from web

2010-02-15 Thread Amos Jeffries

J. Webster wrote:

Would that work with:
http_access deny manager CONNECT !SSL_ports



No. (The other replies explain why).

To encrypt the requests you need to setup an https_port to receive 
encrypted traffic, and then some form of SSL tunnel to do the encryption 
(the tools bundled with Squid do not encrypt or decrypt).


After the above, the http_access rules apply as they would for plain 
HTTP traffic.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


[squid-users] questions with squid-3.1

2010-02-15 Thread Jeff Peng
I just downloaded the squid-3.1 source and compiled & installed it on an
ubuntu linux box.
There are two questions around it:

1. # sbin/squid -k kill
squid: ERROR: No running copy

Though squid is running there, squid -k kill shows "No running copy".
I think this is because squid can't find its pid file, so where is
the default pid file?

2. # sbin/squid -D
2010/02/16 15:02:41| WARNING: -D command-line option is obsolete.

-D is obsolete - why, and what is the corresponding option in
squid-3.1?


Thanks.

-- 
Jeff Peng
Email: jeffp...@netzero.net
Skype: compuperson