Re: [squid-users] Question about ACLs and http_access in Squid 3

2008-10-24 Thread Amos Jeffries

Tom Williams wrote:
Ok, now that I've basically got Squid 3 configured as an HTTP 
accelerator, I have a question about ACL rules and http_access.


Here is the basic config:  I've got two web servers behind a load 
balancer.   The idea is to have Squid serve as an HTTP accelerator for 
Apache so it will cache static content (like global site graphics), 
leaving Apache to deal with traffic that requires database access.


Here are my configuration lines:

acl directIP dst aaa.bbb.ccc.ddd/32
acl website dstdomain .mydomain.com

#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow directIP
http_access allow website

# And finally deny all other access to this proxy
http_access deny all


Now, when I point my browser at:

http://aaa.bbb.ccc.ddd/

I get an access denied 403 error page from Squid.

If I point my browser at:

http://www.mydomain.com/

It works just fine.  www.mydomain.com resolves to the aaa.bbb.ccc.ddd
IP address.


Why does the domain work yet the IP doesn't?  What am I missing?



All of the actual acceleration bits :)
  http://wiki.squid-cache.org/SquidFaq/ReverseProxy
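
In short, a minimal sketch (the backend address, port and peer name here
are illustrative, not taken from your setup; adjust to wherever Apache or
the balancer actually listens):

  http_port 80 accel defaultsite=www.mydomain.com vhost
  cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=backend
  cache_peer_access backend allow website
  cache_peer_access backend deny all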


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


[squid-users] How do I configure Keepalive-Timeout?

2008-10-24 Thread keisuke.hamanaka

Hello, I have a question.

I'd like to configure Keepalive-Timeout.
But I can't find a Keepalive section in the squid.conf file.

Does the persistent_request_timeout tag mean Keepalive-Timeout?

If so, can I choose KeepAlive on or KeepAlive off for each destination site?
And can I choose KeepAlive on or KeepAlive off on the client side and
server side?





[squid-users] Ignoring query string from url

2008-10-24 Thread nitesh naik
Hi All,

Is there a way to ignore the query string in a URL so that objects are cached
without the query string?  I am using an external Perl program to strip the
query string from the URL, which is slowing down response time. I have started
1500 processes of the redirect program.

If I run Squid without the redirect program to strip the query string, the
response is much faster, but all the requests go to the origin server.

The Perl program to strip the query string is:

#!/usr/bin/perl -p
BEGIN { $|=1 }       # unbuffered output, required for a Squid helper
s|(.*)\?(.*)|$1|;    # greedy match: strips from the last '?' to end of line
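
It is hooked into squid.conf roughly like this (the helper path here is
illustrative):

  url_rewrite_program /usr/local/bin/strip_query.pl
  url_rewrite_children 1500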

Regards
Nitesh


Re: [squid-users] How do I configure Keepalive-Timeout?

2008-10-24 Thread Amos Jeffries

[EMAIL PROTECTED] wrote:

Hello, I have a question.

I'd like to configure Keepalive-Timeout.
But I can't find a Keepalive section in the squid.conf file.

Does the persistent_request_timeout tag mean Keepalive-Timeout?

If so, can I choose KeepAlive on or KeepAlive off for each destination site?
And can I choose KeepAlive on or KeepAlive off on the client side and
server side?



Keep-alive is determined by your browser or the web server per site.

Squid only has these:
http://www.squid-cache.org/Versions/v3/3.1/cfgman/persistent_request_timeout.html
http://www.squid-cache.org/Versions/v3/3.1/cfgman/pconn_timeout.html

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Trouble getting kerberos auth working with squid 3.0

2008-10-24 Thread Steven Cardinal
Thanks Henrik,

That was my issue with Firefox - it now authenticates just fine. I've
been unable to get IE (6.0.2900.2180.xpsp_sp2_gdr.080814-1233) to
authenticate. I know this isn't a squid-specific thing, but any ideas
what setting in IE may be responsible for this? If not, no problem. I
appreciate your rapid response on my main issue.

Regards,

Steve

On Thu, Oct 23, 2008 at 3:03 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On tor, 2008-10-23 at 14:25 -0400, Steven Cardinal wrote:
 I see no sign on my DCs of any failed authentication. A tcpdump trace
 on my workstation shows no attempts from my Windows PC to perform any
 kerberos authentication. If I try running the command line specified
 in the squid.conf, I get:

 Then your browsers do not trust the proxy with kerberos authentication.
 Verify that you have configured the proxy by name and not IP in the
 browser proxy settings. To be exact the proxy name needs to match both a
 name that the browser trusts with Kerberos authentication AND a server
 kerberos ticket (or whatever those are called, kept in the keytab,
 kerberos is not a strong field of mine..)

 I'm guessing, however, that squid_kerb_auth can't be run just like
 that.

 Correct. You need to speak base64 encoded GSSAPI wrapped in Microsoft
 Negotiate SSP protocol format wrapped in the Squid NTLM/Negotiate
 protocol to it..

 Any ideas where I should look? I set my keytab file to be
 world-readable as a test and that didn't help.

 It seems you don't even get that far.. the very first steps are not
 dependent on the helper, only the browser. Only when the browser agrees to
 send the initial negotiation packet is the helper called. Until then
 all that happens is that Squid says that authentication is required to
 continue and that the Negotiate SSP authentication protocol is supported.
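
 Roughly, the visible part of that handshake looks like this (host and
 token are illustrative):

   GET http://example.com/ HTTP/1.1         (no Proxy-Authorization yet)

   HTTP/1.0 407 Proxy Authentication Required
   Proxy-Authenticate: Negotiate

   GET http://example.com/ HTTP/1.1
   Proxy-Authorization: Negotiate YIIF...   (base64 GSSAPI token)

 Only when that last request arrives is the token handed to
 squid_kerb_auth.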

 Regards
 Henrik



Re: [squid-users] Trouble getting kerberos auth working with squid 3.0

2008-10-24 Thread Malte Schröder
Hello,
IE6 does not support the Negotiate authentication scheme for proxies.
It supports it only against web servers.

Regards
Malte

On Fri, 24 Oct 2008 07:38:57 -0400
Steven Cardinal [EMAIL PROTECTED] wrote:

 Thanks Henrik,
 
 That was my issue with Firefox - it now authenticates just fine. I've
 been unable to get IE (6.0.2900.2180.xpsp_sp2_gdr.080814-1233) to
 authenticate. I know this isn't a squid-specific thing, but any ideas
 what setting in IE may be responsible for this? If not, no problem. I
 appreciate your rapid response on my main issue.
 
 Regards,
 
 Steve
 
 On Thu, Oct 23, 2008 at 3:03 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
  On tor, 2008-10-23 at 14:25 -0400, Steven Cardinal wrote:
  I see no sign on my DCs of any failed authentication. A tcpdump trace
  on my workstation shows no attempts from my Windows PC to perform any
  kerberos authentication. If I try running the command line specified
  in the squid.conf, I get:
 
  Then your browsers do not trust the proxy with kerberos authentication.
  Verify that you have configured the proxy by name and not IP in the
  browser proxy settings. To be exact the proxy name needs to match both a
  name that the browser trusts with Kerberos authentication AND a server
  kerberos ticket (or whatever those are called, kept in the keytab,
  kerberos is not a strong field of mine..)
 
  I'm guessing, however, that squid_kerb_auth can't be run just like
  that.
 
  Correct. You need to speak base64 encoded GSSAPI wrapped in Microsoft
  Negotiate SSP protocol format wrapped in the Squid NTLM/Negotiate
  protocol to it..
 
  Any ideas where I should look? I set my keytab file to be
  world-readable as a test and that didn't help.
 
  It seems you don't even get that far.. the very first steps are not
  dependent on the helper, only the browser. Only when the browser agrees to
  send the initial negotiation packet is the helper called. Until then
  all that happens is that Squid says that authentication is required to
  continue and that the Negotiate SSP authentication protocol is supported.
 
  Regards
  Henrik
 
 


-- 
---
Malte Schröder
[EMAIL PROTECTED]
ICQ# 68121508
---





RE: [squid-users] Problems with downloads

2008-10-24 Thread Osmany Goderich
I solved the problem.

It was the range_offset_limit -1 KB line that was not letting squid resume 
downloads. I set it back to 0 KB as it is by default and voilà!!! Everything 
back to normal!!

Thank you very much for your support. This is one of the best mailing lists.

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 23 October 2008 14:07
To: Osmany Goderich
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] Problems with downloads

On tor, 2008-10-23 at 14:34 -0500, Osmany Goderich wrote:
 Hi everyone,
 
 I have Squid3.0STABLE9 installed on a CentOS5.2_x86_64 system. I have 
 problems with downloads, especially large files. Usually downloads are 
 slow in my network because of the amount of users I have but I dealt 
 with it using download accelerators like “FlashGET”. Now the downloads 
 get interrupted and they never resume and I don’t know why.

Can you try downgrading to 2.7 to see if that makes any difference? If it 
does, please file a bug report.

Also check your cache.log for any errors.

  I can’t seem to find
 a pattern as to when or why the downloads get interrupted. I don’t 
 know if I explained myself well enough. I’m suspecting that there is 
 something wrong with all the configurations I did to tune the cache 
 effectiveness.

There isn't much you can do wrong at this level.

Regards
Henrik



Re: [squid-users] Ignoring query string from url

2008-10-24 Thread Matus UHLAR - fantomas
On 24.10.08 13:40, nitesh naik wrote:
 Is there a way to ignore the query string in a URL so that objects are cached
 without the query string?  I am using an external Perl program to strip the
 query string from the URL, which is slowing down response time. I have started
 1500 processes of the redirect program.
 
 If I run Squid without the redirect program to strip the query string, the
 response is much faster, but all the requests go to the origin server.

Pardon? Different query strings can lead to different responses. Do you want
squid to still produce the same page of results when you google for
different things?

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
10 GOTO 10 : REM (C) Bill Gates 1998, All Rights Reserved!


RE: [squid-users] Problems with downloads

2008-10-24 Thread Henrik Nordstrom
On tor, 2008-10-23 at 15:54 -0500, Osmany Goderich wrote:

 I had squid2.6STABLE6-5 before and I upgraded it thinking it was a bug in 
 that release. Should I still downgrade to 2.7?

Yes.

Regards
Henrik





RE: [squid-users] Problems with downloads

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 08:31 -0500, Osmany Goderich wrote:

 It was the range_offset_limit -1 KB line that was not letting squid
 resume downloads. I set it back to 0 KB as it is by default and
 voilà!!! Everything back to normal!!

Good.

range_offset_limit -1 says Squid should NEVER resume download, and
instead always download the complete file.

To use this you must also disable quick_abort, telling Squid to always
continue downloading the requested object when the client has
disconnected.

quick_abort_min -1 KB


But be warned that both these settings can cause Squid to waste
excessive amounts of bandwidth on data which will perhaps never be
requested by any client..

Also depending on the Squid version range_offset_limit -1 may result in
significant delays or even timeouts if the client requests a range far
into the requested file. Not sure what the status in Squid-3 is wrt
this.
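
Putting the two together, the whole-object-fetching variant would be
something like this (use only with the caveats above in mind):

  range_offset_limit -1 KB
  quick_abort_min -1 KB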

Regards
Henrik




Re: [squid-users] How do I configure Keepalive-Timeout?

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 16:52 +0900, [EMAIL PROTECTED] wrote:
 Hello, I have a question.
 
 I'd like to configure Keepalive-Timeout.
 But I can't find a Keepalive section in the squid.conf file.
 
 Does the persistent_request_timeout tag mean Keepalive-Timeout?

Yes. It sets the timeout for idle client connections. How long Squid
waits after the last received request before it closes the connection.

 If so, can I choose KeepAlive on or KeepAlive off for each destination
 site?

No. It's global.

 And can I choose KeepAlive on or KeepAlive off on the client side and
 server side?

Yes. Both the on/off and the timeout are separate for client and server.

client-squid:

client_persistent_connections
persistent_request_timeout

squid-server:

server_persistent_connections
pconn_timeout


These set the upper limits enforced by Squid. Clients and servers
also have their own settings, which may further limit persistent
connection lifetime or use.
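
For example (values illustrative; check squid.conf.default for your
version's defaults):

  client_persistent_connections on
  persistent_request_timeout 2 minutes
  server_persistent_connections on
  pconn_timeout 1 minute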

Regards
Henrik




Re: [squid-users] Ignoring query string from url

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 13:40 +0530, nitesh naik wrote:

 Is there a way to ignore the query string in a URL so that objects are cached
 without the query string?  I am using an external Perl program to strip the
 query string from the URL, which is slowing down response time. I have started
 1500 processes of the redirect program.

Then switch to the concurrent helper protocol with only one or two
helper processes.. it requires a minimal change in the helper to support
the new request/response format. This significantly speeds things up, as
Squid then batches several requests to the helper, reducing the amount of
context switching.

See url_rewrite_concurrency. The protocol change is the same as for the
auth_param concurency parameter:

request:

  channel url method ...[newline]

response:

  channel new-url[newline]
or
  channel[newline]

That is, responses need to echo back the same channel identifier as the
request had.

Requests may be answered out of order if one likes.
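
A minimal concurrent version of the query-string-stripping helper could
look like this (a sketch, assuming url_rewrite_concurrency is enabled):

  #!/usr/bin/perl
  # Concurrent Squid url_rewrite helper: strips query strings.
  use strict;
  use warnings;
  $| = 1;                              # unbuffered output, required for helpers
  while (defined(my $line = <STDIN>)) {
      chomp $line;
      # Request format: channel-id URL [other fields...]
      my ($channel, $url) = split /\s+/, $line;
      $url =~ s/\?.*$//;               # drop the query string
      print "$channel $url\n";         # echo the channel-id back with the new URL
  }

With that in place, url_rewrite_children can drop to 1 or 2, with
url_rewrite_concurrency set to, say, 100.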

Regards
Henrik




RES: [squid-users] How can I block a https site?

2008-10-24 Thread Ricardo Augusto de Souza
I am still not able to block https sites.
I tested all you suggested here.
I am using a transparent proxy. I am redirecting all outgoing traffic to
port 80 to squid port 3128. If I redirect port 443 to squid I won't be
able to access ANY https site.

I just wanna block a *FEW* https sites like I AM ALREADY doing using


acl bleh dstdomain "/some/file/"
http_access deny bleh




-Original Message-
From: Matus UHLAR - fantomas [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 23 October 2008 08:20
To: squid-users@squid-cache.org
Subject: Re: [squid-users] How can I block a https site?

 Matus UHLAR - fantomas wrote:
 On 21.10.08 16:23, Alejandro Bednarik wrote:
  You can also use url_regex -i
 
  acl bad_sites url_regex -i /etc/squid/bad_sites.txt
  http_access deny bad_sites
 
 using regexes is very ineffective and may lead to problems if you don't
 account for:
 - dot matching ANY character
 - regex matching the middle of the string, not just the end of it (like
   dstdomain does)

On 22.10.08 23:45, Amos Jeffries wrote:
  - URL parts often included in regex not occurring in CONNECT requests.
  - neither the http(s):// part.

No, but it can match different hosts it should not match.

 .imo.im

will block e.g. www.limolimo.com

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
   One OS to rule them all, One OS to find them, 
One OS to bring them all and into darkness bind them 


Re: RES: [squid-users] How can I block a https site?

2008-10-24 Thread Matt Harrison
Ricardo Augusto de Souza wrote:
 I am still not able to block https sites.
 I tested all you sugested here.
 I am using transparent proxy. I am redirecting all outgoing traffic to
 port 80 to squid port 3128. If i redirect 443 port to squid i wont be
 able to access ANY https site.

I'm no squid expert but unless the https traffic is actually going
through squid it isn't up to squid to block it.

If you can get squid to proxy your https traffic then it will probably
be able to block it, if not, you will have to use some other software to
block the https sites.

HTH

Matt


Re: RES: [squid-users] How can I block a https site?

2008-10-24 Thread Marcus Kool

Ricardo,

You cannot do it with a transparent proxy.
If you want Squid to handle https traffic, you must
use Squid in a non-transparent setup.
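
With a configured (non-transparent) proxy the browser sends
CONNECT somesite:443, so your existing dstdomain ACL applies;
a sketch (domain illustrative):

  acl blocked_ssl dstdomain .blockedsite.example
  http_access deny CONNECT blocked_ssl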

-Marcus


Ricardo Augusto de Souza wrote:

I am still not able to block https sites.
I tested all you suggested here.
I am using a transparent proxy. I am redirecting all outgoing traffic to
port 80 to squid port 3128. If I redirect port 443 to squid I won't be
able to access ANY https site.

I just wanna block a *FEW* https sites like I AM ALREADY doing using


acl bleh dstdomain "/some/file/"
http_access deny bleh




-Original Message-
From: Matus UHLAR - fantomas [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 23 October 2008 08:20

To: squid-users@squid-cache.org
Subject: Re: [squid-users] How can I block a https site?


Matus UHLAR - fantomas wrote:
On 21.10.08 16:23, Alejandro Bednarik wrote:

You can also use url_regex -i

acl bad_sites url_regex -i /etc/squid/bad_sites.txt
http_access deny bad_sites

using regexes is very ineffective and may lead to problems if you don't
account for:
- dot matching ANY character
- regex matching the middle of the string, not just the end of it (like
  dstdomain does)

On 22.10.08 23:45, Amos Jeffries wrote:
- URL parts often included in regex not occurring in CONNECT requests.
- neither the http(s):// part.

No, but it can match different hosts it should not match.

.imo.im

will block e.g. www.limolimo.com



[squid-users] Squid.Conf Needed For Proxy to Proxy Cache (Not Via ICP)

2008-10-24 Thread wiskbroom

Hello;

I am looking to force all requests sent to an internal proxy to another 
internal proxy.  The two proxies are separated via a WAN link and each one is 
managed by a different admin.  I am not able to use ICP.

I will not be able to resolve via DNS any of the URLs parsed by my internal 
Squid proxy. I would prefer to have a method whereby I just redirect my traffic 
to the other proxy, without having to add the final dest URL that I want 
traffic sent to.

The reason for the proxy in the first place is due to a restrictive ACL that 
limits us to just a handful of IPs originating from our site.  This is entirely 
security related and not due to any licensing, etc.

I read the reverse proxy paper, and while it addresses some of what I need, it 
does not address all of it.

Does anyone have a squid.conf that would address this requirement?

I am running Squid 2.7.STABLE5


Many thanks all in advance,

.vp






[squid-users] HTTP status - in http_log file

2008-10-24 Thread Strauss, Christopher
 I am running Squid version 2.6.STABLE20 as a proxy server on
 2.2.20-gentoo-r3 Linux. I am seeing HTTP status code - in the http_log
 file:
 216.82.93.201 - - [24/Oct/2008:07:32:49 -0400] GET
 http://www.galco.com/scripts/cgiip.exe/wa/wcat/catalog.htm? HTTP/1.1 - -
 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR
 2.0.50727; .NET CLR 3.0.04506; .NET CLR 3.5.21022)
 The problem is very sporadic, and I have not been able to reliably
 reproduce it so I'm not sure what the client is seeing. Any ideas why I
 would be getting this invalid HTTP status?
 
 Chris Strauss
 [EMAIL PROTECTED]
 



[squid-users] WARNING: Median response time is 57448 milliseconds: Why?

2008-10-24 Thread Linda W

I see a lot of these messages in my squid warning log...

Specifically, in filtering off the date, and sort+uniq+counting, I see:

var/log# grep "Median response" warn | cut -c36-90 | sort | uniq -c
    107  WARNING: Median response time is 57448 milliseconds
      1  WARNING: Median response time is 6996 milliseconds
      1  WARNING: Median response time is 7384 milliseconds


  This is out of 471 lines in this 'warn' log -- over 20% are warnings from
squid that things are taking a lot longer than I would think they should.

Usually, it doesn't behave this way...

Notably, I have my DNS timeout set to 1 minute (so the above are all
slightly under my DNS lookup time).
I also notice that the default timeout waiting for a connection is
about 1 minute.

Could it really be that I have this many timeouts ... ?  1 minute seems like
a pathological case -- not something I'd consider 'normal'...

Ideas?  Suggestions?   Do other people get these types of warnings often?

Thanks (My DSL is 3Mb/768, and it runs close to that rate when I time it...)

-linda



Re: [squid-users] Squid.Conf Needed For Proxy to Proxy Cache (Not Via ICP)

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 14:32 -0400, [EMAIL PROTECTED] wrote:

 I am looking to force all requests sent to an internal proxy to another 
 internal proxy.  The two proxies are separated via a WAN link and each one is 
 managed by different admins.  I am not able to use ICP.

 Does anyone have a squid.conf that would address this requirement?

http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-c050a0a0382c01fbfb9da7e9c18d58bafd4eb027
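
In short, that boils down to something like (peer hostname and port
illustrative):

  cache_peer other-proxy.example.net parent 3128 0 no-query default
  never_direct allow all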

Regards
Henrik




Re: [squid-users] HTTP status - in http_log file

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 14:46 -0400, Strauss, Christopher wrote:
  I am running Squid version 2.6.STABLE20 as a proxy server on
  2.2.20-gentoo-r3 Linux. I am seeing HTTP status code - in the http_log
  file:

It means the request was aborted before there was any form of response.

Regards
Henrik




Re: [squid-users] WARNING: Median response time is 57448 milliseconds: Why?

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 11:52 -0700, Linda W wrote:
 I see a lot of these messages in my squid warning log...
 
 Specifically, in filtering off the date, and sort+uniq+counting, I see:
 
 var/log# grep "Median response" warn | cut -c36-90 | sort | uniq -c
     107  WARNING: Median response time is 57448 milliseconds
       1  WARNING: Median response time is 6996 milliseconds
       1  WARNING: Median response time is 7384 milliseconds

This can happen naturally if you at some time have only very few users
and those mostly perform downloads or other long running requests.

But if seen during normal load with mostly interactive browsing requests
then something is wrong.

So it depends on when you got these warnings and how Squid was being
used at the time.

Regards
Henrik




RE: [squid-users] HTTP status - in http_log file

2008-10-24 Thread Strauss, Christopher
Thanks for your reply, Henrik.
Has this always been the way squid handles these aborted requests? The
reason I'm asking is that we've been using squid as a proxy for over a year,
but I didn't start seeing any - status codes until just recently, around
the same time we updated to 2.6.STABLE20 from 2.6.STABLE3.
- Chris

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Friday, October 24, 2008 3:29 PM
To: Strauss, Christopher
Cc: 'squid-users@squid-cache.org'
Subject: Re: [squid-users] HTTP status - in http_log file


On fre, 2008-10-24 at 14:46 -0400, Strauss, Christopher wrote:
  I am running Squid version 2.6.STABLE20 as a proxy server on
  2.2.20-gentoo-r3 Linux. I am seeing HTTP status code - in the http_log
  file:

It means the request was aborted before there was any form of response.

Regards
Henrik



RE: [squid-users] HTTP status - in http_log file

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 15:51 -0400, Strauss, Christopher wrote:
 Thanks for your reply, Henrik.
 Has this always been the way squid handles these aborted requests?

As far as I can remember yes.

Regards
Henrik




RE: [squid-users] Squid.Conf Needed For Proxy to Proxy Cache (Not Via ICP)

2008-10-24 Thread wiskbroom

Doh! That was like a lightning strike on my head :-)

Many thanks as always Henrik!

.vp

 Subject: Re: [squid-users] Squid.Conf Needed For Proxy to Proxy Cache (Not 
 Via ICP)
 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 CC: squid-users@squid-cache.org
 Date: Fri, 24 Oct 2008 21:27:49 +0200

 On fre, 2008-10-24 at 14:32 -0400, [EMAIL PROTECTED] wrote:

 I am looking to force all requests sent to an internal proxy to another 
 internal proxy. The two proxies are separated via a WAN link and each one is 
 managed by different admins. I am not able to use ICP.

 Does anyone have a squid.conf that would address this requirement?

 http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-c050a0a0382c01fbfb9da7e9c18d58bafd4eb027

 Regards
 Henrik


[squid-users] Re: WARNING: Median response time is 57448 milliseconds: Why?

2008-10-24 Thread Linda W

Henrik Nordstrom wrote:

On fre, 2008-10-24 at 11:52 -0700, Linda W wrote:
 I see a lot of these messages in my squid warning log...
 (count=107)  WARNING: Median response time is 57448 milliseconds

This can happen naturally if you at some time have only very few users
and those mostly perform downloads or other long running requests.

But if seen during normal load with mostly interactive browsing requests
then something is wrong.

So it depends on when you got these warnings and how Squid was being
used at the time.


   Well...definitely, the huge number of users on this 'squid' [ME] could
affect that...and, indeed,  over the time that log was gathered I know
I've done some multi-megabyte (multi-100MB, maybe a GB or 2) downloads...

BUT---this sure is misleading and confuses the heck out of poor
ignorami like me, who think of a response time as something along the lines
of (srchost) ping -> (remotehost echo: YO!) -> (srchost: YO!)

I.e. not a response time for 1 entire request, but a response time for
the smaller and vastly more interactive sub-packets that comprise the
data of a larger request.

I.e. -- I hear/get/grok what you are saying -- but more useful for me,
as a real 'warning', would be something that tells me:

   hey, anal-retentive-response monitor (that would be me when I'm
   looking at these things), I sent a request, and it took 57448 ms for me
   to get back ANY answer from the other end -- even 1 byte... 


You know..like it took 57 seconds for the other side to send back anything.
Sure, it makes mondo sense if the time it takes me to download a 1GB image
is greater than 57 seconds...that's common sense -- but not something I'm
going to worry about as long as my host-to-host response time is somewhat
in a normal range.


If all of a sudden all of my small(ish) single requests start taking 4 seconds
just to get back __anything__, THEN I'm thinking -- uhoh---line problems or
something that requires my attention...but long response times when downloading
a large 1G nature movie (ok, more likely some boring distro image) -- that's
not even something I'd care about seeing.

On the other hand -- a good warning stat -- if I am downloading large 'nature'
movies (of distros! what else were you thinking?!) I wouldn't mind a warning
if my _combined_ download rates (through squid) slowed down to some bogus
fraction of my line's capacity.


For example, say my line normally can download at 250KBytes/s.  If my squid
proxy detected that all download rates combined were only reaching 20KB/s,
that would be something _potentially_ worthy of a log warning message.

But just long finish times on a single download...not so interesting --
especially if my max-link speed is relatively slow relative to the normal
objects I might be downloading.

I can only see this warning getting worse over time, as larger downloads
(images, HD-videos, nature-movies...etc.) get longer and higher def, while my DSL
stays stuck in the dark ages (making the slowskies (a turtle family in Comcast
commercials) happy, but...)


p.s. nature movies aren't usually my cup-of-tea, but thought I'd mention them as
I know many downloading netizens like nature movies of some particular type or
another.. :-)


Thanks for the clarification...but is it possible to provide some other
warning thresholds?


Merci Beaucoup,
Linda
(not that I'm French, but who doesn't like Paris of one sort or another;
:-)...yeah, I'm in a sillier than normal mood today, but c'est la vie)




[squid-users] Re: WARNING: Median response time is 57448 milliseconds: Why?

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 13:32 -0700, Linda W wrote:

 BUT---this sure is misleading and confuses the heck out of poor
 ignorami like me, who think of a response time as something along the lines
 of (srchost)ping - (-remotehost-echo: YO! -)- (srchost: YO!)

It's the median response time of all responses completed within a 5
minute period, so if there is ANY traffic at around the time when the
download finished then Squid won't even notice the download response
time. The problem only arises if there is no other traffic at the time.

Note: The median of 1,2,2,3,4,5,10 is 3

Regards
Henrik




[squid-users] headers say HIT, logs say MISS, payload is truncated...

2008-10-24 Thread Neil Harkins
Hi. I'm seeing periodic odd behavior from one of our squid 2.6 STABLE18
(and STABLE22) boxes during the peak hours when the squid is busiest,
but not off-peak, and no other signs of a capacity limit except the
occasional queue congestion warning. About 15% of the requests
for one url we're externally monitoring (and presumably others) come
back with the following headers:

Content-Length: 140493
Content-Type: text/html; charset=utf-8
Age: 1
X-Cache: HIT from oak-tp-squid008

but the client only receives 33075 bytes of the payload,
which is really 140493 bytes, and then times out.
In the squid access log:

2008/10/24-14:37:06-0700(PDT)  45626 10.17.2.26 TCP_MISS/200 33075 GET 
(logformat foo %{%Y/%m/%d-%H:%M:%S%z(%Z)}tl %6tr %a %Ss/%03Hs %st %rm)

I am certain that log msg corresponds to that request,
because of unique ids in a custom header.

We are using collapsed_forwarding here. I haven't tried disabling it yet.

Unfortunately, since the problem appears to be load-related, I've been
unable to reproduce it for a tcpdump or run squid in debug mode thus far.

Any ideas how this could be happening?

-neil


Re: [squid-users] Question about ACLs and http_access in Squid 3

2008-10-24 Thread Tom Williams

Amos Jeffries wrote:

Tom Williams wrote:
Ok, now that I've basically got Squid 3 configured as an HTTP 
accelerator, I have a question about ACL rules and http_access.


Here is the basic config:  I've got two web servers behind a load 
balancer.   The idea is to have Squid serve as an HTTP accelerator 
for Apache so it will cache static content (like global site 
graphics), leaving Apache to deal with traffic that requires 
database access.


Here are my configuration lines:

acl directIP dst aaa.bbb.ccc.ddd/32
acl website dstdomain .mydomain.com

#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow directIP
http_access allow website

# And finally deny all other access to this proxy
http_access deny all


Now, when I point my browser at:

http://aaa.bbb.ccc.ddd/

I get an access denied 403 error page from Squid.

If I point my browser at:

http://www.mydomain.com/

It works just fine.  www.mydomain.com resolves to the 
aaa.bbb.ccc.ddd IP address.


Why does the domain work yet the IP doesn't?  What am I missing?



All of the actual acceleration bits :)
  http://wiki.squid-cache.org/SquidFaq/ReverseProxy


Amos


Thanks for the suggestion.  I had looked at that article before but it 
didn't address my problem, unfortunately.   Is there a way to enable 
some debug level that will log exceptions in processing the http_access 
rules?  I'm getting TCP_DENIED/403 messages in access.log, like this:


1224898553.333  2 www.xxx.yyy.zzz TCP_DENIED/403 2434 GET 
http://aaa.bbb.ccc.ddd/ - NONE/- text/html


yet I can't generate any debug info to provide more information as to 
why the TCP_DENIED was issued.


Thanks!

Peace...

Tom