[squid-users] Re: errors on Make squid 3.1.10, ubuntu 10.04.1 server

2011-01-23 Thread mbruell

Thanks - that did the trick. BTW, I replied a few days ago, but my post
didn't make it through (I used my email client instead of replying here).

Thanks again,

M 
-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/errors-on-Make-squid-3-1-10-ubuntu-10-04-1-server-tp3225450p3232602.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] ecap adapter munging cached body

2011-01-23 Thread Jonathan Wolfe
 Vary in Squid is currently treated as an exact-match text key. So when asked
 for a gzip,deflate variant Squid does not have enough smarts to serve the
 deflate variant. So it MISSes and gets a fresh one, which may or may not
 be gzipped, but is served gzipped to the client anyway.

Right on, that makes sense.  I was really trying to test the gzip module
vs. no zipping, for clients that don't support any zipping.

 When passing the second request through squid twice in a row does the reply
 change from a MISS to a HIT? or stay a MISS?

The second request stays a MISS.

 Squid caches only one of N variants so the expected behaviour is that each
 new variant is a MISS but becomes a HIT on repeated duplicate requests until
 a new variant pushes it out of cache.

Ah, well that would sort of explain it, except I don't get a
subsequent cache HIT when requesting a zipped version.

So, to test all this out, I have the webserver returning either:
a) a full HTML page (57580 bytes) when no Accept-Encoding header is present
b) some alternate content (the Accept-Encoding header echoed back 5000
times) when Accept-Encoding is present, such that the response is a
different size and dependent on the Accept-Encoding header.

Then, I issue the same request headers, just modifying the
Accept-Encoding header value (or excluding that header altogether).

I'm using the value "asdf" as a bogus Accept-Encoding value that
shouldn't trigger gzipping, and "gzip" for when I actually want to
invoke the module.  To be clear, the webserver isn't zipping at all.
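
(For concreteness, the test cases below can be driven with curl along
these lines - a sketch, with www.example.com standing in for the real
host; note curl sends no Accept-Encoding header by default:)

curl -sD- -o /dev/null http://www.example.com/squid-test
curl -sD- -o /dev/null -H 'Accept-Encoding: asdf' http://www.example.com/squid-test
curl -sD- -o /dev/null -H 'Accept-Encoding: gzip' http://www.example.com/squid-test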

The request headers before optionally adding an Accept-Encoding header are:

GET /squid-test HTTP/1.1
User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7
OpenSSL/0.9.8l zlib/1.2.3
Accept: */*
Host: www.example.com

Here are the response headers, all requests issued serially in the
order listed here:

1. Empty Accept-Encoding header - two requests in a row, expect MISS
then HIT, full content, not zipped.

HTTP/1.0 200 OK
Cache-Control: max-age=600
Expires: Sun, 23 Jan 2011 21:00:19 GMT
Vary: Accept-Encoding
Mime-Version: 1.0
Date: Sun, 23 Jan 2011 20:50:19 GMT
Server: AOLserver/4.5.1
Content-Type: text/html; charset=utf-8
Content-Length: 57580
X-Cache: MISS from www.example.com
X-Cache-Lookup: MISS from www.example.com:80
Via: 1.0 www.example.com (squid/3.1.10)
Connection: keep-alive

HTTP/1.0 200 OK
Cache-Control: max-age=600
Expires: Sun, 23 Jan 2011 21:00:19 GMT
Vary: Accept-Encoding
Mime-Version: 1.0
Date: Sun, 23 Jan 2011 20:50:19 GMT
Server: AOLserver/4.5.1
Content-Type: text/html; charset=utf-8
Content-Length: 57580
Age: 2
X-Cache: HIT from www.example.com
X-Cache-Lookup: HIT from www.example.com:80
Via: 1.0 www.example.com (squid/3.1.10)
Connection: keep-alive

2. Accept-Encoding: asdf header - two requests in a row, expect MISS
then HIT, alternate content, not zipped.

HTTP/1.0 200 OK
Cache-Control: max-age=600
Expires: Sun, 23 Jan 2011 21:00:35 GMT
Vary: Accept-Encoding
Mime-Version: 1.0
Date: Sun, 23 Jan 2011 20:50:35 GMT
Server: AOLserver/4.5.1
Content-Type: text/html; charset=utf-8
Content-Length: 20006
X-Cache: MISS from www.example.com
X-Cache-Lookup: MISS from www.example.com:80
Via: 1.0 www.example.com (squid/3.1.10)
Connection: keep-alive

HTTP/1.0 200 OK
Cache-Control: max-age=600
Expires: Sun, 23 Jan 2011 21:00:35 GMT
Vary: Accept-Encoding
Mime-Version: 1.0
Date: Sun, 23 Jan 2011 20:50:35 GMT
Server: AOLserver/4.5.1
Content-Type: text/html; charset=utf-8
Content-Length: 20006
Age: 2
X-Cache: HIT from www.example.com
X-Cache-Lookup: HIT from www.example.com:80
Via: 1.0 www.example.com (squid/3.1.10)
Connection: keep-alive

3. Try no Accept-Encoding again - get a HIT, same full content from (1).

HTTP/1.0 200 OK
Cache-Control: max-age=600
Expires: Sun, 23 Jan 2011 21:00:19 GMT
Vary: Accept-Encoding
Mime-Version: 1.0
Date: Sun, 23 Jan 2011 20:50:19 GMT
Server: AOLserver/4.5.1
Content-Type: text/html; charset=utf-8
Content-Length: 57580
Age: 22
X-Cache: HIT from www.example.com
X-Cache-Lookup: HIT from www.example.com:80
Via: 1.0 www.example.com (squid/3.1.10)
Connection: keep-alive

4. Now try Accept-Encoding: gzip.  Two requests in a row, expect MISS
then HIT, get MISSes every time.
(I included the squid access log rows here to show the small zipped
content length - 660 bytes.)

HTTP/1.0 200 OK
Cache-Control: max-age=600
Expires: Sun, 23 Jan 2011 21:00:49 GMT
Vary: Accept-Encoding
Mime-Version: 1.0
Date: Sun, 23 Jan 2011 20:50:49 GMT
Server: AOLserver/4.5.1
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
X-Cache: MISS from www.example.com
X-Cache-Lookup: MISS from www.example.com:80
Via: 1.0 www.example.com (squid/3.1.10)
Connection: close

[23/Jan/2011:15:50:49 -0500] GET http://www.example.com/squid-test
HTTP/1.1 200 660 345 ms - curl/7.19.7
(universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3
TCP_MISS:ROUNDROBIN_PARENT

HTTP/1.0 200 OK
Cache-Control: max-age=600
Expires: Sun, 23 Jan 2011 21:00:50 GMT
Vary: 

Re: [squid-users] SSL Stops responding

2011-01-23 Thread Henrik Nordström
Sat 2011-01-22 at 12:16 -0500, James P. Ashton wrote:
 Does anyone have any thoughts on this?   I am not fond of the idea that both 
 squid instances stopped responding to SSL requests at the same time.

Is your OpenSSL up to date?

Regards
Henrik



RE: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-23 Thread Henrik Nordström
Thu 2011-01-20 at 02:50 -0500, Max Feil wrote:
 Thanks. I am looking at the squid access.log and the delay is caused by
 a GET which for some reason does not result in a response from the
 server. Either there is no response or Squid is missing the response.
 After a 120 second time-out the page continues loading, but the end
 result may be malformed due to the object which did not load. 

I would take a peek at the traffic using wireshark to get some insight
into what is going on there.

Regards
Henrik



Re: [squid-users] Missing content-length header for POST and PUT

2011-01-23 Thread Henrik Nordström
Fri 2011-01-21 at 05:45 +1300, Amos Jeffries wrote:

 empty? No. If they have no content length indicated they have to be 
 assumed to be infinite-length transfers. The HTTP specs require this 411 
 reply message.

Not quite. Requests without an entity are always headers-only. The
infinite-length assumption applies only to responses.

 The client software is *supposed* to add a length and retry.

Yes.

But to be honest we do not really need to check that POST/PUT have a
request entity. This is mostly a relic from way back when request
entities were handled very specially.
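
(For illustration, the 411 round trip looks roughly like this - a
hypothetical exchange, not a capture:)

POST /upload HTTP/1.1
Host: example.com
[no Content-Length and no Transfer-Encoding]

HTTP/1.1 411 Length Required

The client is then expected to retry the POST with a Content-Length header.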

Regards
Henrik



Re: [squid-users] NONE/501 in an https:// POST request

2011-01-23 Thread Henrik Nordström
Fri 2011-01-21 at 11:31 +0100, Ralf Hildebrandt wrote:
  1294685115.286  0 10.43.120.109 NONE/501 4145 POST 
  https://enis.eurotransplant.nl/donor-webservice/dpa?WDSL - HIER_NONE/- 
  text/html
 
 So, I enabled SSL using --enable-ssl and now I'm getting:
 
 1295605546.943313 141.42.231.227 TCP_MISS/503 4251 GET 
 https://enis.eurotransplant.nl/donor-webservice/dpa?WDSL - 
 HIER_DIRECT/194.151.178.174 text/html
 and the error output consists of the ERR_SECURE_CONNECT_FAIL error message

In both cases Squid received an https:// request unencrypted over plain
HTTP.

In the first case, as your Squid did not have SSL support it could not
forward the request at all, since it cannot wrap the unencrypted request
in SSL/TLS for forwarding to the requested server.

In the second case Squid and the server did not agree on the SSL
protocol.

If you are using this http-to-https gatewaying capability, you should
configure Squid not to use SSLv2. SSLv2 is considered broken beyond
repair these days. See sslproxy_options for how to tune this in Squid.
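
(A minimal squid.conf sketch of that tuning; the option name follows
OpenSSL's SSL_OP_* naming, so check what your build accepts:)

sslproxy_options NO_SSLv2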

Regards
Henrik



Re: [squid-users] ecap adapter munging cached body

2011-01-23 Thread Henrik Nordström
Sat 2011-01-22 at 23:04 +1300, Amos Jeffries wrote:

 Squid caches only one of N variants so the expected behaviour is that 
 each new variant is a MISS but becomes a HIT on repeated duplicate 
 requests until a new variant pushes it out of cache.

No, it caches all N variants seen if the origin response has Vary:

But I'm not sure what happens with the gzip eCAP module in this regard.

Regards
Henrik



Re: [squid-users] ecap adapter munging cached body

2011-01-23 Thread Henrik Nordström
Sun 2011-01-23 at 14:14 -0800, Jonathan Wolfe wrote:

 I'm using the value "asdf" as a bogus Accept-Encoding value that
 shouldn't trigger gzipping, and "gzip" for when I actually want to
 invoke the module.  To be clear, the webserver isn't zipping at all.

Is the web server responding with Vary: Accept-Encoding?

 I can change the behavior of the webserver to not include Vary:
 Accept-Encoding for content meant to be cached by squid, but that
 results in the cached (unzipped) version being served even to clients
 who accept zipped versions, once the cache is populated by a client
 not requesting a zipped version, and that defeats the point of the
 gzip module for me because I want to gzip cached content for clients
 that support it.

Sounds like the gzip eCAP module handles things in a bad manner. It
should add Vary, and its responses should be cacheable if the original
response is. It seems it does neither.
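
(Illustration only: a well-behaved gzipped variant from the adapter would
be expected to come back with headers along these lines, preserving the
origin's freshness information:)

HTTP/1.1 200 OK
Cache-Control: max-age=600
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8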

Regards
Henrik



Re: [squid-users] ecap adapter munging cached body

2011-01-23 Thread Jonathan Wolfe
In my test, yes, the web server was responding with Vary:
Accept-Encoding.  But that's only because of the behavior below, where
once a non-gzipped version is cached (i.e. a request comes in first
with no Accept-Encoding header at all) all subsequent requests get the
unzipped version, even if presenting gzip in the Accept-Encoding
header.

The eCAP module does add Vary: Accept-Encoding, actually.  Running the
same test without the webserver setting Vary results in the same
behavior, though - zipped response via the gzip module doesn't cache
(two MISSes in a row), and then once a nonzipped version enters the
cache, that nonzipped cached version gets served up on every request
for any incoming Accept-Encoding.

The module does not seem to touch Cache-Control or Expires headers at
all - they come through in the uncached gzipped responses just fine
(if requesting gzip encoding before anything else is cached).  Are
there headers besides Vary that the module needs to add or change to
ensure that the response can be cached?

-Jon

2011/1/23 Henrik Nordström hen...@henriknordstrom.net:
 Sun 2011-01-23 at 14:14 -0800, Jonathan Wolfe wrote:

 I'm using the value "asdf" as a bogus Accept-Encoding value that
 shouldn't trigger gzipping, and "gzip" for when I actually want to
 invoke the module.  To be clear, the webserver isn't zipping at all.

 Is the web server responding with Vary: Accept-Encoding?

 I can change the behavior of the webserver to not include Vary:
 Accept-Encoding for content meant to be cached by squid, but that
 results in the cached (unzipped) version being served even to clients
 who accept zipped versions, once the cache is populated by a client
 not requesting a zipped version, and that defeats the point of the
 gzip module for me because I want to gzip cached content for clients
 that support it.

 Sounds like the gzip eCAP module handles things in a bad manner. It
 should add Vary, and its responses should be cacheable if the original
 response is. It seems it does neither.

 Regards
 Henrik




RE: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-23 Thread Max Feil
Already did use Wireshark. Here is some more info:

If you look through the traces you'll notice that at some point Squid sends a 
TCP [FIN, ACK] right in the middle of a connection for seemingly no reason 
(attempting to close the connection). The server ignores this and sends the rest 
of the data, which Squid responds to with a TCP RST (reset) since it now 
believes the connection to be closed.

From the browser side, it seems to be given no notification that the connection 
was closed (and indeed I can see no reason why it should be closed), so it 
seems to sit around doing nothing, as it may have reached the max-connections 
limit. After about 2 minutes (possibly related to a persistent-connection 
timeout limit in squid) Squid seems to terminate all the connections with 
FIN,ACKs. The browser then seems to realize its connections are gone and 
requests the remaining resources, resulting in a bunch of TCP SYNs followed by 
the rest of the resources.

Why Squid does this in the middle of connections we still have no clue; 
however, turning off server_persistent_connections seems to make the page load 
fast. That is probably a bad idea in general, though...
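
(For anyone wanting to reproduce the workaround, caveat included, the
squid.conf line is just:)

server_persistent_connections off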

Max

-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Sunday, January 23, 2011 7:16 PM
To: Max Feil
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

Thu 2011-01-20 at 02:50 -0500, Max Feil wrote:
 Thanks. I am looking at the squid access.log and the delay is caused by
 a GET which for some reason does not result in a response from the
 server. Either there is no response or Squid is missing the response.
 After a 120 second time-out the page continues loading, but the end
 result may be malformed due to the object which did not load. 

I would take a peek at the traffic using wireshark to get some insight
into what is going on there.

Regards
Henrik



Re: [squid-users] ecap adapter munging cached body

2011-01-23 Thread Amos Jeffries

On 24/01/11 13:43, Henrik Nordström wrote:

Sat 2011-01-22 at 23:04 +1300, Amos Jeffries wrote:


Squid caches only one of N variants so the expected behaviour is that
each new variant is a MISS but becomes a HIT on repeated duplicate
requests until a new variant pushes it out of cache.


No it caches all N variants seen if the origin response has Vary:

But not sure what happens with the gzip eCAP module in this regard.


AFAIK, that proper variant handling has not yet been ported to squid-3; it 
is only in squid-2 right now.
 This identical behaviour is causing some problems with recent Chrome 
using sdch encoding, which clashes with the gzip|deflate cached variant 
from other browsers.


Though yes the adapter output seems to be borked anyway.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-23 Thread Eliezer



On 24/01/2011 06:35, Max Feil wrote:
 Already did use Wireshark. Here is some more info:

 If you look through the traces you'll notice that at some point Squid 
sends a TCP [FIN, ACK] right in the middle of a connection for seemingly 
no reason (attempting to close the connection). The server ignores this 
and sends the rest of the data, which Squid responds to with a TCP RST 
(reset) since it now believes the connection to be closed.


 From the browser side it seems to be given no notification that the 
connection was closed (and indeed I can see no reason why it should be 
closed) so it seems to sit around doing nothing as it may have reached 
the max connections limit. After about 2 minutes (possibly related to a 
persistent connection timeout limit in squid) Squid seems to terminate 
all the connections with FIN,ACKs. The browser then seems to realize its 
connections are gone and it requests the remaining resources resulting 
in a bunch of TCP SYNs followed by the rest of the resources.


 Why it does this in the middle of connections we still have no clue; 
however, turning off server_persistent_connections seems to make it load 
fast. That is probably a bad idea in general, though...


 Max

 -Original Message-
 From: Henrik Nordström [mailto:hen...@henriknordstrom.net]
 Sent: Sunday, January 23, 2011 7:16 PM
 To: Max Feil
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

 Thu 2011-01-20 at 02:50 -0500, Max Feil wrote:

 Thanks. I am looking at the squid access.log and the delay is caused by
 a GET which for some reason does not result in a response from the
 server. Either there is no response or Squid is missing the response.
 After a 120 second time-out the page continues loading, but the end
 result may be malformed due to the object which did not load.

 I would take a peek at the traffic using wireshark to get some insight
 into what is going on there.

 Regards
 Henrik


Just noticed your reply.
Also, the mail daemon didn't like my log for some reason, so I will send 
it to you separately.
Try to make an acl for the sites/domains in the list below so that they 
do not use the cache at all; a sketch is below.
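
(A minimal squid.conf sketch, assuming the domain list further below;
adjust the acl to your setup:)

acl cnn_cdn dstdomain .cnn.com .turner.com .scorecardresearch.com .ireport.com
cache deny cnn_cdn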

Send a log with more detail on the requests (headers/debug mode).


the last message below:



There was another guy with a CNN problem, no?
(named Max)

Did you make basic tests, like ping and DNS checks?

CNN, like many others, uses a CDN, which makes it a little problematic 
sometimes.

Did you compile it yourself?

This is the second time, so try these:
I will give you domain names and IPs.
Also, do you use a local DNS server? Or your ISP's? Or something else?
Try to set the name server for the proxy to 8.8.8.8 (Google DNS);
ping it first..

The page has like 8-10 domains/names it is trying to get:

ireport.cnn.com
i.cdn.turner.com
i2.cdn.turner.com
audience.cnn.com
b.scorecardresearch.com
metrics.cnn.com
metrics.ireport.com



Try to ping and dig each one of them and send the results in the email;
a loop like the one below works.
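
(A quick shell sketch covering the whole list:)

for h in ireport.cnn.com i.cdn.turner.com i2.cdn.turner.com \
         audience.cnn.com b.scorecardresearch.com \
         metrics.cnn.com metrics.ireport.com; do
    ping -c 3 "$h"
    dig +short "$h"
done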


Then try to put these lines in the hosts file of the Squid OS:


157.166.255.213  ireport.cnn.com
207.123.56.126   i.cdn.turner.com
192.12.94.30     i2.cdn.turner.com
157.166.255.80   audience.cnn.com
92.123.69.155    b.scorecardresearch.com
66.235.143.121   metrics.cnn.com
192.33.14.30     metrics.ireport.com

Also try to just get to the IP:

http://192.12.94.30/


Send the results for these.

Another thing..
Send us your settings file.

If squid is running in transparent mode, specify the IPv4 address.
If it's not transparent, even so, set it up to be able to...
The next thing is to make sure that the failed-DNS cache time is set to 5 seconds.
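
(Assuming 'failed DNS cache time' refers to Squid's negative_dns_ttl
directive - my reading, not gospel - the matching line would be:)

negative_dns_ttl 5 seconds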


dns_v4_fallback on

And of course a log will be nice;
I will show you some of mine.



[squid-users] Some pages loading very slow in 3.1.10 Stable

2011-01-23 Thread Saiful Alam

Hello Everyone,
My Squid configuration is pretty much default, except that I have added
some refresh_patterns myself, collected from the internet, in order to get
more hits. The server is Squid 3.1.10 (on 3128, intercept/transparent)
running on Ubuntu 10.10 on an Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz. I
would say that everything is fine, except some pages are loading very,
very slowly. For example, www.music.com.bd / www.djmaza.com ... I will try
to give more examples as I find more, but for the time being these two
sites are very important for my clients. I don't actually understand
what's wrong. If this has anything to do with the squid.conf file, I will
attach it with the next reply. For the time being, here are my
refresh_patterns.

refresh_pattern ^http://.*\.facebook.com.*\.(jpg|gif|png|swf|wav|3gp|flv|swf|wmv|zip|rar) 10080 300% 40320 reload-into-ims
refresh_pattern ^http://.*\.fbcdn.net.*\.(jpg|gif|png|swf|wav|mp3|3gp|flv|swf|wmv|zip|rar) 10080 300% 40320 reload-into-ims
refresh_pattern ^http://.*\.yimg.com.*\.(jpg|gif|png|swf|wav|3gp|flv|swf|wmv|zip|rar) 10080 300% 40320 reload-into-ims
refresh_pattern ^http://.*\.ytimg.com.*\.(jpg|gif|png|swf|wav|mp3|3gp|flv|swf|wmv|zip|rar) 10080 300% 40320 reload-into-ims
refresh_pattern ^http://.*\.gstatic.com.*\.(jpg|gif|png) 10080 300% 40320 reload-into-ims
refresh_pattern ^http://.*\.google.com.*\.(jpg|gif|png) 10080 300% 40320 reload-into-ims
refresh_pattern dsebd\.org.*\.(jpg|gif|png) 600 50% 3000
refresh_pattern windowsupdate.com/.*\.(cab|exe) 4320 100% 43200 reload-into-ims
refresh_pattern download.microsoft.com/.*\.(cab|exe) 4320 100% 43200 reload-into-ims
refresh_pattern ^http://.*\.cnn\.com 1000 50% 4320 override-lastmod
refresh_pattern ^http://news\.bbc\.co\.uk 1000 50% 4320 override-lastmod
refresh_pattern microsoft 1080 150% 10080 override-lastmod
refresh_pattern msn\.com 4320 150% 10080 override-lastmod
refresh_pattern ^http://.*FIDO 360 1000% 480
refresh_pattern \.r[0-9][0-0]$ 10080 150% 40320
refresh_pattern ^http://.*\.asis$ 1440 50% 20160
refresh_pattern -i \.art$ 10080 150% 43200
refresh_pattern -i \.qtm$ 10080 150% 40320 reload-into-ims
refresh_pattern -i \.viv$ 10080 150% 40320 reload-into-ims
refresh_pattern -i \.pdf$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.jpg$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.png$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.gif$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.flv$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.ico$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.jpeg$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.x-flv$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.swf$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.wav$ 10080 300% 40320 reload-into-ims
refresh_pattern -i \.mp3$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.mp4$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.zip$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.rar$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.exe$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.zip$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.doc$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.ppt$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.tiff$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.ram$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.rm$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.rpm$ 10080 300% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.htm$ 1440 50% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.html$ 1440 50% 40320 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.txt$ 1440 100% 20160 reload-into-ims override-lastmod
refresh_pattern -i \.arj$ 2880 200% 40320
refresh_pattern -i \.Z$ 10080 200% 40320
refresh_pattern ^ftp: 1440 50% 10080
refresh_pattern ^gopher: 1440 10% 1440
refresh_pattern . 0 40% 4320
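
(For readers unfamiliar with the directive, the field layout is:
refresh_pattern [-i] regex min(minutes) percent max(minutes) [options]
so, as I understand it, the \.jpg$ rule above treats a JPEG with no
explicit expiry as fresh for at least 10080 minutes and never beyond 40320.)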

negative_ttl 1 minute
positive_dns_ttl 15 hour
negative_dns_ttl 20 second

Thanks & Best Regards,
Saiful

RE: [squid-users] Some pages loading very slow in 3.1.10 Stable

2011-01-23 Thread Saiful Alam

Thanks for your prompt reply. Well, I am not a very experienced admin in Linux 
or in terms of Squid, and therefore I haven't installed wireshark/tshark/tcpdump 
on the Squid box yet, but I will install it now to dig deeper.
 
My previous version of Squid was 2.7 Stable, downloaded via aptitude, which was 
installed on a different machine, not on this new machine. My current 
configuration on the current PC (Core i7) is Squid 3.1.10, custom compiled by 
me with only ZPH and the Linux Netfilter support enabled. I haven't used any 
other parameters while configuring it. 
 
The default cache_dir is split into three directories:
UFS /media/cache/storage501 51400 16 256
UFS /media/cache/storage502 51400 16 256
UFS /media/cache/storage503 51400 16 256
 
and cache_mem is 150MB
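
(Presumably corresponding to squid.conf lines like these - my reading of
the shorthand above:)

cache_dir ufs /media/cache/storage501 51400 16 256
cache_dir ufs /media/cache/storage502 51400 16 256
cache_dir ufs /media/cache/storage503 51400 16 256
cache_mem 150 MB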
 
Sometimes during peak hours, while the network load is high, we do experience 
very slow browsing (on all websites), which is usually resolved after a quick 
restart of the squid process/service.
 
Thanks & Best Regards,
Shishir


 Date: Mon, 24 Jan 2011 07:48:12 +0100
 From: rm...@free.fr
 To: saifu...@hotmail.com
 CC: rm...@free.fr
 Subject: Re: [squid-users] Some pages loading very slow in 3.1.10 Stable

 Hi Saiful,
 On the same platform, do you have an old squid version that works better? What 
 is your previous squid version, please?
 Can you start a network capture like wireshark/tshark/tcpdump on your squid, 
 please? And run firebug, for example, and compare the latest and your previous 
 squid versions?
 Can you compare squid CPU usage between the latest and previous versions?
 Regards
 Rmkml


 On Mon, 24 Jan 2011, Saiful Alam wrote:

 
  Hello Everyone,
  My Squid Configuration is pretty much default, except
  the fact, that I have added some refresh_patterns myself and collected
  from internet in order to get more hits. The server is a Squid3.1.10 (in
  3128 intercept transparent) running on Ubuntu 10.10 and comprises of
  Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz. I would say that
  everything is fine, except some pages are loading very very slow. For
  example, www.music.com.bd / www.djmaza.com ... I will try to give more
  examples as I find more, but for the time being these two sites are very
  important for my clients. Don't understand actually whats wrong. If
  this has anything to do with the squid.conf file, I will attach it with
  the next reply. For the time being here's my refresh_patterns.
 
  refresh_pattern 
  ^http://.*\.facebook.com.*\.(jpg|gif|png|swf|wav|3gp|flv|swf|wmv|zip|rar) 
  10080 300% 40320 reload-into-ims
  refresh_pattern 
  ^http://.*\.fbcdn.net.*\.(jpg|gif|png|swf|wav|mp3|3gp|flv|swf|wmv|zip|rar) 
  10080 300% 40320 reload-into-ims
  refresh_pattern 
  ^http://.*\.yimg.com.*\.(jpg|gif|png|swf|wav|3gp|flv|swf|wmv|zip|rar) 10080 
  300% 40320 reload-into-ims
  refresh_pattern 
  ^http://.*\.ytimg.com.*\.(jpg|gif|png|swf|wav|mp3|3gp|flv|swf|wmv|zip|rar) 
  10080 300% 40320 reload-into-ims
  refresh_pattern ^http://.*\.gstatic.com.*\.(jpg|gif|png) 10080 300% 40320 
  reload-into-ims
  refresh_pattern ^http://.*\.google.com.*\.(jpg|gif|png) 10080 300% 40320 
  reload-into-ims
  refresh_pattern dsebd\.org.*\.(jpg|gif|png) 600 50% 3000
  refresh_pattern windowsupdate.com/.*\.(cab|exe) 4320 100% 43200 
  reload-into-ims
  refresh_pattern download.microsoft.com/.*\.(cab|exe) 4320 100% 43200 
  reload-into-ims
  refresh_pattern ^http://.*\.cnn\.com 1000 50% 4320 override-lastmod
  refresh_pattern ^http://news\.bbc\.co\.uk 1000 50% 4320 override-lastmod
  refresh_pattern microsoft 1080 150% 10080 override-lastmod
  refresh_pattern msn\.com 4320 150% 10080 override-lastmod
  refresh_pattern ^http://.*FIDO 360 1000% 480
  refresh_pattern \.r[0-9][0-0]$ 10080 150% 40320
  refresh_pattern ^http://.*\.asis$ 1440 50% 20160
  refresh_pattern -i \.art$ 10080 150% 43200
  refresh_pattern -i \.qtm$ 10080 150% 40320 reload-into-ims
  refresh_pattern -i \.viv$ 10080 150% 40320 reload-into-ims
  refresh_pattern -i \.pdf$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.jpg$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.png$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.gif$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.flv$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.ico$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.jpeg$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.x-flv$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.swf$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.wav$ 10080 300% 40320 reload-into-ims
  refresh_pattern -i \.mp3$ 10080 300% 40320 override-expire override-lastmod 
  reload-into-ims ignore-reload
  refresh_pattern -i \.mp4$ 10080 300% 40320 override-expire override-lastmod 
  reload-into-ims ignore-reload
  refresh_pattern -i \.zip$ 10080 300% 40320 override-expire override-lastmod 
  reload-into-ims ignore-reload
  refresh_pattern -i \.rar$ 10080 300% 40320 override-expire override-lastmod 
  

RE: [squid-users] Some pages loading very slow in 3.1.10 Stable

2011-01-23 Thread Saiful Alam

Some results of TCPDUMP in -vv mode.

13:12:04.191180 IP (tos 0x0, ttl 127, id 2750, offset 0, flags [DF], proto TCP (6), length 40)
    172.16.80.2.1155 > 77.67.29.42.www: Flags [.], cksum 0x6de4 (correct), seq 1127903567, ack 4192021369, win 64700, length 0
13:12:04.192822 IP (tos 0x0, ttl 64, id 4692, offset 0, flags [DF], proto TCP (6), length 823)
    www-12-02-snc5.facebook.com.www > 10.16.63.123.3714: Flags [P.], cksum 0xc789 (correct), seq 4235393899:4235394682, ack 3661185770, win 14600, length 783
13:12:04.831628 IP (tos 0x0, ttl 127, id 29487, offset 0, flags [DF], proto TCP (6), length 40)
    172.16.80.6.rmiregistry > cds174.lax.llnw.net.www: Flags [.], cksum 0xfa45 (correct), seq 0, ack 460801, win 65535, length 0
13:12:05.355300 IP (tos 0x0, ttl 127, id 2265, offset 0, flags [DF], proto TCP (6), length 40)
    192.168.96.165.57362 > 195.122.152.64.www: Flags [.], cksum 0xfc6c (correct), seq 1875916651, ack 4162436021, win 64240, length 0
13:12:05.355322 IP (tos 0x0, ttl 64, id 18785, offset 0, flags [DF], proto TCP (6), length 1500)
    195.122.152.64.www > 192.168.96.165.57362: Flags [.], cksum 0x0f8f (correct), seq 4381:5841, ack 0, win 6432, length 1460
13:12:05.355328 IP (tos 0x0, ttl 127, id 8325, offset 0, flags [DF], proto TCP (6), length 40)
    172.16.112.124.cvspserver > 8.27.4.126.www: Flags [.], cksum 0xfe6f (correct), seq 0, ack 662461, win 65535, length 0
13:12:05.655764 IP (tos 0x0, ttl 127, id 5393, offset 0, flags [DF], proto TCP (6), length 40)
    172.16.64.10.2308 > co63.reliablehosting.com.www: Flags [.], cksum 0x1b43 (correct), seq 63164814, ack 3610294926, win 65535, length 0
13:12:05.655789 IP (tos 0x0, ttl 64, id 65532, offset 0, flags [DF], proto TCP (6), length 1500)
    co63.reliablehosting.com.www > 172.16.64.10.2308: Flags [P.], cksum 0x4e5f (correct), seq 1461:2921, ack 0, win 17184, length 1460
13:12:05.655798 IP (tos 0x0, ttl 64, id 65533, offset 0, flags [DF], proto TCP (6), length 1215)
    co63.reliablehosting.com.www > 172.16.64.10.2308: Flags [P.], cksum 0xa23e (correct), seq 2921:4096, ack 0, win 17184, length 1175
13:12:05.656066 IP (tos 0x0, ttl 126, id 42247, offset 0, flags [DF], proto TCP (6), length 73)
    172.16.30.6.1575 > 79-110-88-28.gfsrv.net.15103: Flags [P.], cksum 0x207b (correct), seq 22088:22121, ack 55870, win 64475, length 33
13:12:05.656081 IP (tos 0x0, ttl 126, id 42248, offset 0, flags [DF], proto TCP (6), length 51)
    172.16.30.6.1575 > 79-110-88-28.gfsrv.net.15103: Flags [P.], cksum 0xd9a7 (correct), seq 22121:22132, ack 55870, win 64475, length 11
13:12:05.949960 IP (tos 0x0, ttl 126, id 5909, offset 0, flags [DF], proto TCP (6), length 40)
    172.16.10.9.1408 > GGCpmo1.pal.seabone.net.www: Flags [.], cksum 0xfe1a (correct), seq 3933402590, ack 133923280, win 65535, length 0
13:12:05.949983 IP (tos 0x0, ttl 126, id 31989, offset 0, flags [DF], proto TCP (6), length 52)
    172.16.52.6.62360 > c-n180-u0282-208.webazilla.com.www: Flags [.], cksum 0x0bbb (correct), seq 0, ack 1119485, win 64240, options [nop,nop,sack 1 {1120945:1129705}], length 0
13:12:05.949997 IP (tos 0x0, ttl 127, id 1202, offset 0, flags [DF], proto TCP (6), length 40)
    172.16.200.42.1454 > 146.82.204.207.www: Flags [.], cksum 0x5a1a (correct), seq 0, ack 1965427, win 65375, length 0
13:12:05.950985 IP (tos 0x0, ttl 126, id 36156, offset 0, flags [DF], proto TCP (6), length 40)
    172.16.30.7.1795 > cds17.sin.llnw.net.www: Flags [.], cksum 0x779d (correct), seq 4079975438, ack 3825116412, win 65535, length 0
13:12:05.951132 IP (tos 0x0, ttl 126, id 5910, offset 0, flags [DF], proto TCP (6), length 996)
    172.16.10.9.1408 > GGCpmo1.pal.seabone.net.www: Flags [P.], cksum 0xf7ba (correct), seq 0:956, ack 1, win 65535, length 956
13:12:05.951152 IP (tos 0x0, ttl 64, id 20092, offset 0, flags [DF], proto TCP (6), length 40)
    GGCpmo1.pal.seabone.net.www > 172.16.10.9.1408: Flags [.], cksum 0xdc7e (correct), seq 1, ack 956, win 7648, length 0
13:12:05.955120 IP (tos 0x0, ttl 127, id 2284, offset 0, flags [DF], proto TCP (6), length 40)
    192.168.96.165.57362 > 195.122.152.64.www: Flags [.], cksum 0xda34 (correct), seq 0, ack 8761, win 64240, length 0
13:12:05.955148 IP (tos 0x0, ttl 64, id 18790, offset 0, flags [DF], proto TCP (6), length 972)
    195.122.152.64.www > 192.168.96.165.57362: Flags [P.], cksum 0x02b4 (correct), seq 8761:9693, ack 0, win 6432, length 932
13:12:05.955605 IP (tos 0x0, ttl 126, id 20667, offset 0, flags [DF], proto TCP (6), length 40)
^C    172.16.106.3.49956 > 213.254.248.146.www: Flags [.], cksum 0x696e (correct), seq 348423304, ack 3320774375, win 16425, length 0

226 packets captured
107740 packets received by filter
107433 packets dropped by kernel


Regards,
Saiful


 From: saifu...@hotmail.com
 To: rm...@free.fr
 CC: squid-users@squid-cache.org
 Date: Mon, 24 Jan 2011 18:06:02 +1100
 Subject: RE: [squid-users] Some pages loading very slow in 3.1.10 Stable


Re: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-23 Thread Eliezer

Well, I have found the problem..

It's not your proxy...

Your proxy is doing fine, since it's identifying file MIME types and stuff 
like that.

Have you ever heard of a ZIP BOMB?

Well, it's not that, but it's something like it.

The site itself is working fine and the page gets to your computer within 
about the first 5 seconds...

But... they are using such an amount of JavaScript that I don't know how 
even a PIII computer would handle it.


Well, that's not the point.

The point is that you won't see the page until about the 50th+ element in 
the page.. so if one of the elements in the site is stuck because of a bug 
in the server or whatever..

you won't get it.

To make sure of it I used Paros to interrogate it, and I noticed this stuff.

So now the thing I do is allow only the html MIME type... I want you to try stuff:


The first page is 100KB.

wget will get you the page and you can try to look at the source and 
stuff like that.

The thing is that after object 13 in the page...

you will get this object:

http://ireport.cnn.com/themes/custom/resources/username-overlay.js

and then the action begins..

so

after the 84th object it takes forever.



Ok, so the bottom line here:

First use wget to get the index.html file; a sketch follows below.

It will take about 1-2 seconds.
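
(A minimal sketch; the proxy address is a placeholder - drop the http_proxy
part to fetch directly rather than through Squid:)

http_proxy=http://127.0.0.1:3128 wget -O index.html http://ireport.cnn.com/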


Then open it using any browser you want and tell me what happens with 
squid on...

For me it took a second to show up..

The same page, fetched straight from http://ireport.cnn.com/ so that it 
loads everything...

takes a long time to *render*.


*So, for the guys who asked.. that is the case.*



What I did was to get the page (I can see at the top of the page that the 
RSS feed is there, in firefox):
I stop the page from loading,

go into the source,

copy the source,

paste it into a new html file...

and load the file in firefox: you get it without all the css... the pictures 
and everything, but not the look they wanted.




On 24/01/2011 06:35, Max Feil wrote:
 Already did use Wireshark. Here is some more info:

 If you look through the traces you'll notice that at some point Squid 
sends a TCP [FIN, ACK] right in the middle of a connection for seemingly 
no reason (attempting to close the connection). The server ignores this 
and sends the rest of the data, which Squid responds to with a TCP RST 
(reset) since it now believes the connection to be closed.


 From the browser side it seems to be given no notification that the 
connection was closed (and indeed I can see no reason why it should be 
closed) so it seems to sit around doing nothing as it may have reached 
the max connections limit. After about 2 minutes (possibly related to a 
persistent connection timeout limit in squid) Squid seems to terminate 
all the connections with FIN,ACKs. The browser then seems to realize its 
connections are gone and it requests the remaining resources resulting 
in a bunch of TCP SYNs followed by the rest of the resources.


 Why it does this in the middle of connections we still have no clue; 
however, turning off server_persistent_connections seems to make it load 
fast. That is probably a bad idea in general, though...


 Max

 -Original Message-
 From: Henrik Nordström [mailto:hen...@henriknordstrom.net]
 Sent: Sunday, January 23, 2011 7:16 PM
 To: Max Feil
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

 Thu 2011-01-20 at 02:50 -0500, Max Feil wrote:
 Thanks. I am looking at the squid access.log and the delay is caused by
 a GET which for some reason does not result in a response from the
 server. Either there is no response or Squid is missing the response.
 After a 120 second time-out the page continues loading, but the end
 result may be malformed due to the object which did not load.

 I would take a peek at the traffic using wireshark to get some insight
 into what is going on there.

 Regards
 Henrik