[squid-users] [Help] Reverse Proxy: suspend bulk requests for invalid urls

2011-05-10 Thread Le Trung Kien
Hi, we're trying Squid v3 as a reverse proxy and have found that we
receive too many requests for old, invalid URLs from clients. This
causes our Squid caches to slow down our origin servers, because Squid
keeps forwarding those requests to the origin servers to retrieve the
missing content.

This is from our squid cache.log:

WARNING: HTTP: Invalid Response: No object data received for
invalid_urls AKA invalid_urls

My question is:
Is there any way to delay or suspend Squid's responses to these
repeated requests after Squid has failed a certain number of times to
retrieve content for an invalid URL, and to return a default error
page instead?

Kien Le.


Re: [squid-users] [Help] Reverse Proxy: suspend bulk requests for invalid urls

2011-05-10 Thread Le Trung Kien
Hi, I checked for "negative_ttl" and it's definitely not in my squid.conf :)
I'm also checking whether our origin servers return 404 and 30x codes
properly.
One more question, just to be sure: will Squid remember invalid URLs
(for a while) and return the error page without revalidating those
URLs against the origin servers?

Thank you, Amos
Kien Le.

On Tue, May 10, 2011 at 7:27 PM, Amos Jeffries  wrote:
> On 10/05/11 20:58, Le Trung Kien wrote:
>>
>> Hi, we're trying Squid v3 as a reverse proxy and have found that we
>> receive too many requests for old, invalid URLs from clients. This
>> causes our Squid caches to slow down our origin servers, because Squid
>> keeps forwarding those requests to the origin servers to retrieve the
>> missing content.
>>
>> This is from our squid cache.log:
>>
>> WARNING: HTTP: Invalid Response: No object data received for
>> invalid_urls AKA invalid_urls
>>
>> My question is:
>> Is there any way to delay or suspend Squid's responses to these
>> repeated requests after Squid has failed a certain number of times to
>> retrieve content for an invalid URL, and to return a default error
>> page instead?
>
> First, make sure the "negative_ttl" directive is *absent* from your
> squid.conf.
>
> Then find out why your origin servers are producing garbage instead of a 404
> or 30x reply like they should.
>
> Squid can and will cache both 404 replies and 30x redirects if the
> origin sets the headers correctly to allow caching, i.e. an Expires:
> header a year in the future, or a Last-Modified: some time in the past
> with storage-friendly Cache-Control: values.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.7 and 3.1.12.1
>
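
For illustration, a 404 response carrying the kind of headers Amos
describes might look like the following (URL and dates are placeholders,
not taken from this thread):

  HTTP/1.1 404 Not Found
  Date: Tue, 10 May 2011 12:00:00 GMT
  Expires: Thu, 10 May 2012 12:00:00 GMT
  Last-Modified: Sun, 01 May 2011 00:00:00 GMT
  Cache-Control: public, max-age=31536000
  Content-Type: text/html
  Content-Length: 1731

With headers like these, Squid can store the 404 and answer repeated
requests for the same invalid URL from its cache instead of contacting
the origin server every time.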


Re: [squid-users] [Help] Reverse Proxy: suspend bulk requests for invalid urls

2011-05-11 Thread Le Trung Kien
I realized that the server replies with both 403 and 404.
Regarding the 404s, I don't know how to cache the 404 File Not Found
replies from the origin servers. Should I add a default error page to
the web application for invalid URLs?
I tested and saw that those URLs always miss the cache, because we
don't have a default error page at the moment:

404 TCP_MISS:FIRST_UP_PARENT

Kien Le

On Wed, May 11, 2011 at 10:12 AM, Amos Jeffries  wrote:
> On Wed, 11 May 2011 10:01:59 +0700, Le Trung Kien wrote:
>>
>> Hi, I checked for "negative_ttl" and it's definitely not in my
>> squid.conf :)
>> I'm also checking whether our origin servers return 404 and 30x codes
>> properly.
>> One more question, just to be sure: will Squid remember invalid URLs
>> (for a while) and return the error page without revalidating those
>> URLs against the origin servers?
>
> Yes, as I said:
>
>>> Squid can and will cache both 404 replies and 30x redirects if the
>>> origin sets the headers correctly to allow caching, i.e. an Expires:
>>> header a year in the future, or a Last-Modified: some time in the past
>>> with storage-friendly Cache-Control: values.
>
>
> The bigger problem appears to be that the server is sending some garbage
> that produces a 5xx from inside *Squid* ("Invalid Response"). The HTTP
> standards are being broken. Squid does not cache its own problem reports.
>
>>
>> Thank you, Amos
>> Kien Le.
>>
>> On Tue, May 10, 2011 at 7:27 PM, Amos Jeffries 
>> wrote:
>>>
>>> On 10/05/11 20:58, Le Trung Kien wrote:
>>>>
>>>> Hi, we're trying Squid v3 as a reverse proxy and have found that we
>>>> receive too many requests for old, invalid URLs from clients. This
>>>> causes our Squid caches to slow down our origin servers, because Squid
>>>> keeps forwarding those requests to the origin servers to retrieve the
>>>> missing content.
>>>>
>>>> This is from our squid cache.log:
>>>>
>>>> WARNING: HTTP: Invalid Response: No object data received for
>>>> invalid_urls AKA invalid_urls
>>>>
>>>> My question is:
>>>> Is there any way to delay or suspend Squid's responses to these
>>>> repeated requests after Squid has failed a certain number of times to
>>>> retrieve content for an invalid URL, and to return a default error
>>>> page instead?
>>>
>>> First, make sure the "negative_ttl" directive is *absent* from your
>>> squid.conf.
>>>
>>> Then find out why your origin servers are producing garbage instead of
>>> a 404 or 30x reply like they should.
>>>
>>> Squid can and will cache both 404 replies and 30x redirects if the
>>> origin sets the headers correctly to allow caching, i.e. an Expires:
>>> header a year in the future, or a Last-Modified: some time in the past
>>> with storage-friendly Cache-Control: values.
>>>
>>> Amos
>>> --
>>> Please be using
>>>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>>>  Beta testers wanted for 3.2.0.7 and 3.1.12.1
>>>
>
>


Re: [squid-users] [Help] Reverse Proxy: suspend bulk requests for invalid urls

2011-05-12 Thread Le Trung Kien
Hi,
On the origin servers I'm using IIS 6.0, and 404b.html is the page
returned when a client requests a non-existent page.
I tried adding a header like this to that page:

The page cannot be found
[... HTML <meta> tags stripped by the list archive ...]

This header block is the same on all pages generated by our web
applications, and those pages can be cached.
However, when I test, our Squid still doesn't cache that 404b.html page:

squidclient -m HEAD http://invalid_URL

HTTP/1.0 404 Not Found
Content-Length: 1731
Content-Type: text/html
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Date: Fri, 13 May 2011 03:41:10 GMT
X-Cache: MISS

squidclient -m HEAD http://existing_URL

HTTP/1.0 200 OK
Date: Fri, 13 May 2011 03:34:22 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
X-Powered-By: UrlRewriter.NET 2.0.0
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Length: 121485
Age: 424
X-Cache: HIT

I notice that the headers squidclient receives when requesting an
invalid URL are fewer, and our Squid still returns a MISS.

Kien Le.

On Thu, May 12, 2011 at 1:57 PM, Amos Jeffries  wrote:
>
> On 12/05/11 17:10, Le Trung Kien wrote:
>>
>> I realized that the server replies with both 403 and 404.
>> Regarding the 404s, I don't know how to cache the 404 File Not Found
>> replies from the origin servers. Should I add a default error page to
>> the web application for invalid URLs?
>> I tested and saw that those URLs always miss the cache, because we
>> don't have a default error page at the moment:
>>
>> 404 TCP_MISS:FIRST_UP_PARENT
>>
>> Kien Le
>>
>
> Default page or not, Squid does not mind. All it needs is an Expires: header
> at least a few seconds in the future.
>
> Personally, I use a script which detects and sets the header: if it's an
> unknown URL, 5 seconds of caching (the client could be about to create it);
> if it's one of the permanently dead ones, 1 year of caching (sometimes with
> info saying where the new one is).
>
> How you do it depends on the server capabilities and website needs.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.7 and 3.1.12.1
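
As a sketch of the two cases Amos describes (the header values are
illustrative, not taken from his script): a 404 for an unknown URL that
might be created soon could be sent with

  Cache-Control: public, max-age=5

while a permanently dead URL could be sent with

  Cache-Control: public, max-age=31536000

(or an equivalent Expires: date a year ahead), so that Squid keeps
answering the repeated requests from cache.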


Re: [squid-users] [Help] Reverse Proxy: suspend bulk requests for invalid urls

2011-05-12 Thread Le Trung Kien
I have just modified the HTTP response headers on the IIS servers

squidclient -m HEAD http://invalid_URL
HTTP/1.0 404 Not Found
Cache-Control: public
Content-Length: 1731
Content-Type: text/html
Expires: 1000
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Date: Fri, 13 May 2011 04:30:09 GMT
X-Cache: MISS

It still gets MISS :(

Kien Le.


On Fri, May 13, 2011 at 11:13 AM, Amos Jeffries  wrote:
> On 13/05/11 15:55, Le Trung Kien wrote:
>>
>> Hi,
>> On the origin servers I'm using IIS 6.0, and 404b.html is the page
>> returned when a client requests a non-existent page.
>> I tried adding a header like this to that page:
>>
>> The page cannot be found
>> [... HTML <meta> tags stripped by the list archive ...]
>>
>>
>> This header block is the same on all pages generated by our web
>> applications, and those pages can be cached.
>> However, when I test, our Squid still doesn't cache that 404b.html
>> page:
>>
>> squidclient -m HEAD http://invalid_URL
>>
>> HTTP/1.0 404 Not Found
>> Content-Length: 1731
>> Content-Type: text/html
>> Server: Microsoft-IIS/6.0
>> X-Powered-By: ASP.NET
>> Date: Fri, 13 May 2011 03:41:10 GMT
>> X-Cache: MISS
>>
>> squidclient -m HEAD http://existing_URL
>>
>> HTTP/1.0 200 OK
>> Date: Fri, 13 May 2011 03:34:22 GMT
>> Server: Microsoft-IIS/6.0
>> X-Powered-By: ASP.NET
>> X-AspNet-Version: 2.0.50727
>> X-Powered-By: UrlRewriter.NET 2.0.0
>> Cache-Control: private
>> Content-Type: text/html; charset=utf-8
>> Content-Length: 121485
>> Age: 424
>> X-Cache: HIT
>>
>> I notice that the headers squidclient receives when requesting an
>> invalid URL are fewer, and our Squid still returns a MISS.
>>
>> Kien Le.
>>
>
>
> That is HTML. The page headers only affect the web browser graphical
> display.
>
> You need to set them in the HTTP headers instead. So Expires: shows up in
> your HEAD request. IIRC there is an XML site-wide config file somewhere (in
> the site root directory?) where these are set. I'm a bit vague on the
> details though, not being an IIS admin.
>
> Also,
>  Content-Type does not matter; IIS is sending it anyway, as you can see below.
>  "Cache-Control: private" will absolutely prevent Squid from caching the
> reply. The exact opposite of what you are trying to do. If you have
> private/confidential user details on that 404 they will have to be removed.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.7 and 3.1.12.1
>


Re: [squid-users] [Help] Reverse Proxy: suspend bulk requests for invalid urls

2011-05-12 Thread Le Trung Kien
I have just added a hard-coded Expires value, but it's still a MISS

squidclient -m HEAD http://invalid_URL
HTTP/1.0 404 Not Found
Cache-Control: public
Content-Length: 1635
Content-Type: text/html
Expires: Sat, 14 May 2011 16:00:00 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Date: Fri, 13 May 2011 05:34:17 GMT
X-Cache: MISS



On Fri, May 13, 2011 at 11:57 AM, Amos Jeffries  wrote:
> On 13/05/11 16:34, Le Trung Kien wrote:
>>
>> I have just modified the HTTP response headers on the IIS servers
>>
>> squidclient -m HEAD http://invalid_URL
>> HTTP/1.0 404 Not Found
>> Cache-Control: public
>> Content-Length: 1731
>> Content-Type: text/html
>> Expires: 1000
>> Server: Microsoft-IIS/6.0
>> X-Powered-By: ASP.NET
>> Date: Fri, 13 May 2011 04:30:09 GMT
>> X-Cache: MISS
>>
>> It still gets MISS :(
>
> Expires: is a timestamp (same format as the Date: header), set to a time in
> the future relative to the value in Date. A value of -1 means broken,
> discard immediately. All other non-date values and content are ignored.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.7 and 3.1.12.1
>


Re: [squid-users] squid too many connections to cache_peers

2011-05-15 Thread Le Trung Kien
Hi there,
Could this help?

cache_peer <origin_host> parent <origin_port> 0 no-query
originserver name=<peer_name> max-conn=4000

Kien Le.

On Mon, May 16, 2011 at 1:34 PM, Or Gerson  wrote:
> Thanks for the help,
>
> Just some clarification to understand the logic of squid: " Squid uses as 
> many connections to each peer as there are
>  maximum parallel requests needing to go to that peer"
>
> How is it that Squid opens a lot more connections to each peer than the
> number of connections it receives from clients?
> I mean, it has 3 peers to divide the requests across, so I would have
> thought that the number of connections to each peer would be roughly the
> number of connections to Squid divided by 3.
>
> How can the maximum number of parallel requests needing to go to a peer be
> much higher than the number of connections to Squid?
>
>
>
> Or Gerson
> IT Manager
>
> Mobile: 972-54-555-0656  Office: 972-3-769-8513
> E-mail: o...@websplanet.com   Website: http://www.websplanet.com
>
>
>
>
> WebsPlanet, the leading provider of platforms for the mass production of 
> websites
>
>
>
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Monday, May 16, 2011 3:29 AM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] squid too many connections to cache_peers
>
>  On Sun, 15 May 2011 15:05:52 +, Or Gerson wrote:
>> Hello everyone,
>>
>> I have squid 3.0.STABLE19-1 installed.
>> And 3 apache cache_peers.
>>
>> I find that Squid is opening a lot of connections to the web servers,
>> a lot more than the connections it is receiving from clients; also,
>> Squid doesn't close them and they stay in TIME_WAIT status.
>> For example: on the Squid box I see 324 TIME_WAIT connections.
>> On each server I see around 800 TIME_WAIT connections.
>
>  TIME_WAIT is a closing connection.
>  ESTABLISHED is an open connection.
>
>  This explains:
>  http://www.developerweb.net/forum/showthread.php?t=2941
>
>
>>
>> The cache_peers are configured with round-robin, but why doesn't Squid
>> close the connections?
>>
>> Can't Squid use one connection per peer and send and receive all
>> requests through that connection (although that would probably cause
>> performance problems on the Apache side because it wouldn't use all
>> of the Apache server processes)?
>
>  One connection to each is indeed a performance bottleneck. Pipelining
>  is also not safe for HTTP/1.0 software (ie Squid 3.0 and older) without
>  great risk of breakage (just ask Debian people about apt-get corruption
>  problems). Squid uses as many connections to each peer as there are
>  maximum parallel requests needing to go to that peer. Enable
>  server_persistent_connections to servers and Squid will re-use
>  connections as much as possible given the HTTP support version.
>
>  An upgrade to 3.1 series Squid will get you HTTP/1.1 connections to
>  servers and peers. This gives a lot better re-use of the connections
>  than HTTP/1.0 connection support in 3.0 series can. Provided Apache
>  plays along and uses HTTP/1.1 features to prevent early TCP link
>  closure.
>
>  Amos
>


Re: [squid-users] [Help] Reverse Proxy: suspend bulk requests for invalid urls

2011-05-17 Thread Le Trung Kien
Hi, I use both HEAD and GET and always get a MISS for invalid_URLs, but
with valid_URLs a HIT is still returned.

Kien Le.

On Tue, May 17, 2011 at 1:31 PM, Amos Jeffries  wrote:
> On 13/05/11 17:36, Le Trung Kien wrote:
>>
>> I have just added the hard Expires value, but still MISS
>>
>> squidclient -m HEAD http://invalid_URL
>> HTTP/1.0 404 Not Found
>> Cache-Control: public
>> Content-Length: 1635
>> Content-Type: text/html
>> Expires: Sat, 14 May 2011 16:00:00 GMT
>> Server: Microsoft-IIS/6.0
>> X-Powered-By: ASP.NET
>> Date: Fri, 13 May 2011 05:34:17 GMT
>> X-Cache: MISS
>>
>
> Are you testing only with HEAD?
>  NP: HEAD contains no body for caching and thus does not cause anything to
> become a HIT. Use a GET request to test this.
>
> Also check that your refresh_pattern rules are not breaking the cache
> behaviour by ignoring/overriding the Cache-Control and Expires values.
>
> Amos
>
>> On Fri, May 13, 2011 at 11:57 AM, Amos Jeffries wrote:
>>>
>>> On 13/05/11 16:34, Le Trung Kien wrote:
>>>>
>>>> I have just modified the HTTP header respond of IIS Servers
>>>>
>>>> squidclient -m HEAD http://invalid_URL
>>>> HTTP/1.0 404 Not Found
>>>> Cache-Control: public
>>>> Content-Length: 1731
>>>> Content-Type: text/html
>>>> Expires: 1000
>>>> Server: Microsoft-IIS/6.0
>>>> X-Powered-By: ASP.NET
>>>> Date: Fri, 13 May 2011 04:30:09 GMT
>>>> X-Cache: MISS
>>>>
>>>> It still gets MISS :(
>>>
>>> Expires: is a timestamp (same format as the Date: header), set to a time
>>> in the future relative to the value in Date. A value of -1 means broken,
>>> discard immediately. All other non-date values and content are ignored.
>>>
>>> Amos
>>> --
>>> Please be using
>>>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>>>  Beta testers wanted for 3.2.0.7 and 3.1.12.1
>>>
>
>
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.7 and 3.1.12.1
>
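
A quick way to run the GET test Amos suggests (invalid_URL is the same
placeholder used earlier; the exact output will vary):

  # first GET primes the cache; expect X-Cache: MISS
  squidclient -m GET http://invalid_URL | grep X-Cache

  # a second GET should show X-Cache: HIT if the 404 was stored
  squidclient -m GET http://invalid_URL | grep X-Cache

If the second request still shows MISS, the refresh_pattern rules in
squid.conf are the next thing to check, as noted above.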


[squid-users] FATAL: Too many queued redirector requests

2011-06-09 Thread Le Trung Kien
Hi, I don't know how to manage memory for Squid. Squid keeps crashing
and restarting by itself:

In my squid.conf I set:

url_rewrite_children 120 startup=30 idle=100 concurrency=0
max_filedesc 7168
cache_swap_low 60
cache_swap_high 80

I notice that just one pending request is queued:

2011/06/09 11:00:17| WARNING: All redirector processes are busy.
2011/06/09 11:00:17| WARNING: 1 pending requests queued

and following are more details from cache.log:

2011/06/09 11:00:16| WARNING: Cannot run
'/opt/squid-3.1.10/urlrewriter_new.pl' process.
2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory
2011/06/09 11:00:16| WARNING: Cannot run
'/opt/squid-3.1.10/urlrewriter_new.pl' process.
2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory
2011/06/09 11:00:16| WARNING: Cannot run
'/opt/squid-3.1.10/urlrewriter_new.pl' process.
2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory
2011/06/09 11:00:16| WARNING: Cannot run
'/opt/squid-3.1.10/urlrewriter_new.pl' process.
2011/06/09 11:00:17| WARNING: All redirector processes are busy.
2011/06/09 11:00:17| WARNING: 1 pending requests queued
2011/06/09 11:00:17| storeDirWriteCleanLogs: Starting...
2011/06/09 11:00:17| WARNING: Closing open FD  177
2011/06/09 11:00:17|   Finished.  Wrote 16367 entries.
2011/06/09 11:00:17|   Took 0.01 seconds (1308941.14 entries/sec).
FATAL: Too many queued redirector requests
Squid Cache (Version 3.1.10): Terminated abnormally.
<--  CRASH ***
CPU Usage: 6631.678 seconds = 4839.266 user + 1792.412 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 3299
Memory usage for squid via mallinfo():
    total space in arena:  -1160028 KB
    Ordinary blocks:   -1289701 KB  53024 blks
    Small blocks:   0 KB  1 blks
    Holding blocks: 15976 KB  6 blks
    Free Small blocks:  0 KB
    Free Ordinary blocks:  129672 KB
    Total in use:  -1273725 KB 110%
    Total free:    129672 KB -10%
2011/06/09 11:00:26| Starting Squid Cache version 3.1.10 for
x86_64-unknown-linux-gnu...
2011/06/09 11:00:26| Process ID 31241
2011/06/09 11:00:26| With 7168 file descriptors available


Re: [squid-users] FATAL: Too many queued redirector requests

2011-06-12 Thread Le Trung Kien
Hi, I decreased the number of url_rewrite_children and upgraded Squid
to version 3.1.12. Now, after two days over the weekend (Sat and Sun),
memory consumption has not exceeded the configured cache_mem and it
seems to be working properly :)

Page faults with physical i/o: 0

I'm still monitoring it on working days to see what happens, but I think
the problem was caused by spawning too many redirector processes.

Thank you.
Kien Le.

On Fri, Jun 10, 2011 at 1:27 PM, Amos Jeffries  wrote:
> On 09/06/11 20:16, Le Trung Kien wrote:
>>
>> Hi, I don't know how to manage memory for Squid. Squid keeps crashing
>> and restarting by itself:
>
> http://wiki.squid-cache.org/SquidFaq/SquidMemory
>
>>
>> In my squid.conf I set:
>>
>> url_rewrite_children 120 startup=30 idle=100 concurrency=0
>
> You need 3.2 for the dynamic startup capabilities. 3.1 only uses the first
> value (120 helpers to start *immediately*).
>
> Given that is actually a reasonable number of helpers I think the memory
> consumption elsewhere is probably killing them.
>
>
>
>> max_filedesc 7168
>> cache_swap_low 60
>> cache_swap_high 80
>
> If you are forced to do that with the swap values, it means your cache
> management is badly calculated.
>
> There is no use allocating for example 100GB of disk storage when
> cache_swap_low forces 40% of it not to be used. And Squid pausing
> occasionally while 20% of it (80%-60%) is erased can also cause visible
> slowdown to clients.
>
> The memory FAQ page linked above outlines how to estimate cache sizes.
>
>>
>> I notice that just one pending request is queued:
>>
>> 2011/06/09 11:00:17| WARNING: All redirector processes are busy.
>> 2011/06/09 11:00:17| WARNING: 1 pending requests queued
>>
>> and following are more details from cache.log:
>>
>> 2011/06/09 11:00:16| WARNING: Cannot run
>> '/opt/squid-3.1.10/urlrewriter_new.pl' process.
>> 2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory
>
>
>> 2011/06/09 11:00:16| WARNING: Cannot run
>> '/opt/squid-3.1.10/urlrewriter_new.pl' process.
>> 2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory
>> 2011/06/09 11:00:16| WARNING: Cannot run
>> '/opt/squid-3.1.10/urlrewriter_new.pl' process.
>> 2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory
>> 2011/06/09 11:00:16| WARNING: Cannot run
>> '/opt/squid-3.1.10/urlrewriter_new.pl' process.
>> 2011/06/09 11:00:17| WARNING: All redirector processes are busy.
>> 2011/06/09 11:00:17| WARNING: 1 pending requests queued
>> 2011/06/09 11:00:17| storeDirWriteCleanLogs: Starting...
>> 2011/06/09 11:00:17| WARNING: Closing open FD  177
>> 2011/06/09 11:00:17|   Finished.  Wrote 16367 entries.
>> 2011/06/09 11:00:17|   Took 0.01 seconds (1308941.14 entries/sec).
>> FATAL: Too many queued redirector requests
>> Squid Cache (Version 3.1.10): Terminated abnormally.
>
> Your machine seems not to be capable of running 120 of these rewriter
> processes. You need more RAM available to them, or fewer maximum helpers.
>
>
>> <--  CRASH ***
>> CPU Usage: 6631.678 seconds = 4839.266 user + 1792.412 sys
>> Maximum Resident Size: 0 KB
>> Page faults with physical i/o: 3299
>
> "Page faults" - your system is swapping badly to get that many just on
> startup. If this continues it will kill Squid performance later even if it
> gets past the helpers.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.8 and 3.1.12.2
>
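
As a rough sketch of the adjustments discussed here (the numbers are
illustrative, not a recommendation for this particular machine):

  # Squid 3.1 only honours the first value, so size it for the RAM you have
  url_rewrite_children 30

  # Squid defaults; only lower these if aggressive cache trimming is needed
  cache_swap_low 90
  cache_swap_high 95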


[squid-users] [reverse proxy] ESTABLISHED on squid server

2011-06-22 Thread Le Trung, Kien
Hi, I'm having trouble figuring out how to reduce the number of
connections in the ESTABLISHED state. There are approximately 6300
ESTABLISHED connections when the server is at the high-load time of the
day, and the number only drops below 1000 ESTABLISHED around midnight.

      1 established)
      1 Foreign
      5 FIN_WAIT2
      6 LISTEN
     11 CLOSING
     43 SYN_RECV
     63 LAST_ACK
    237 FIN_WAIT1
   1331 TIME_WAIT
   6258 ESTABLISHED

I wonder how to free the connections after all data has been transferred
between the Squid server and the clients. (I think Squid keeps connections
open as long as possible.)
On another web server (same operating system, hardware, etc.) which
receives the same number of connections (because of DNS round-robin), the
ESTABLISHED connections range from 2000-3000.

Best Regards.


Re: [squid-users] [reverse proxy] ESTABLISHED on squid server

2011-06-22 Thread Le Trung Kien
Thank you, Amos, for your very clear answer.


On Wed, Jun 22, 2011 at 7:51 PM, Amos Jeffries  wrote:
> On 22/06/11 22:54, Le Trung, Kien wrote:
>>
>> Hi, I'm having trouble figuring out how to reduce the number of
>> connections in the ESTABLISHED state. There are approximately 6300
>> ESTABLISHED connections when the server is at the high-load time of the
>> day, and the number only drops below 1000 ESTABLISHED around midnight.
>>
>>       1 established)
>>       1 Foreign
>>       5 FIN_WAIT2
>>       6 LISTEN
>>      11 CLOSING
>>      43 SYN_RECV
>>      63 LAST_ACK
>>     237 FIN_WAIT1
>>    1331 TIME_WAIT
>>    6258 ESTABLISHED
>>
>> I wonder how to free the connections after all data has been transferred
>> between the Squid server and the clients. (I think Squid keeps connections
>> open as long as possible.)
>
> Depends on which Squid you have. 3.1 and later try to use HTTP/1.1 features
> to speed up client access times. These require persistent connections.
>
> ~6300 connections is not bad. Your box can handle far more than that easily.
>
>> On another web server (same operating system, hardware, etc.) which
>> receives the same number of connections (because of DNS round-robin),
>> the ESTABLISHED connections range from 2000-3000.
>
> This is not a valid comparison. see:
> http://wiki.squid-cache.org/Features/LoadBalance#Bias:_Connection-based
>
> This same problem affects DNS round-robin and TCP SYN load balancers. Which
> are also per-connection.
>
>
>
> Generally speaking ESTABLISHED is good. They are either currently in active
> use or waiting and will have zero TCP connection setup delay when they are
> needed.
>
> The more recent your Squid version number the more efficiently it handles
> persistent connections. Thus the lower number it uses. So if this is
> actually a problem for you a newer version is better.
>
> You can also adjust it by tweaking the idle_timeout directive. Which
> determines a maximum amount of time any one connection can be kept waiting.
>
> You can disable the persistence and all HTTP features which rely on it by
> configuring client_persistent_connections and/or
> server_persistent_connections OFF.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.9 and 3.1.12.3
>
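
For reference, the directives Amos names look like this in squid.conf
(shown only as a sketch; leaving persistence enabled is normally the
better choice for performance):

  # disable persistent connections, at the cost of HTTP/1.1 reuse
  client_persistent_connections off
  server_persistent_connections off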


[squid-users] WARNING: accept_filter not supported on your OS

2012-11-27 Thread Le Trung, Kien
Hi,

Could anyone get accept_filter working on CentOS 6.3 with squid-3.2.3 ?

I have already got squid-3.1.18 working properly on CentOS 5.5.

Starting Squid with -d9, I got this:
2012/11/28 09:19:22 kid1| WARNING: accept_filter not supported on your OS


--

Best Regards,
Trung Kiên.
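
For context, the directive in question is normally set with one of the
documented platform-specific values (these lines are from the squid.conf
documentation, not from this thread):

  # FreeBSD accf_http(9) accept filter
  accept_filter httpready

  # Linux TCP_DEFER_ACCEPT equivalent
  accept_filter data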


Re: [squid-users] WARNING: accept_filter not supported on your OS

2012-11-28 Thread Le Trung, Kien
Hi,

I followed your guide and the WARNING message no longer appears at startup.
However, the new machines still have "500" MISS entries in access.log,
while the old machine does not have any "500" MISS. The caching machines
(both old and new) serve the same domains, and all of them are reverse
proxies.

I'll report back tomorrow on whether the 500 MISS entries are gone.

Thanks and regards,
Trung Kien.

On Wed, Nov 28, 2012 at 1:36 PM, Amos Jeffries  wrote:
> On 28/11/2012 4:24 p.m., Le Trung, Kien wrote:
>>
>> Hi,
>>
>> Could anyone get accept_filter working on CentOS 6.3 with squid-3.2.3 ?
>>
>> I have already got squid-3.1.18 working properly on CentOS 5.5.
>>
>> Start Squid with -d9 I got this:
>> 2012/11/28 09:19:22 kid1| WARNING: accept_filter not supported on your OS
>>
>>
>
> It looks like a missing include has caused accept_filter to be disabled.
> Try adding this to src/comm/TcpAcceptor.cc and re-building Squid:
>
>   #include 
>
>
> Amos



-- 

Best Regards,
Trung Kiên.


Re: [squid-users] WARNING: accept_filter not supported on your OS

2012-11-28 Thread Le Trung, Kien
Dear,

Yes, I didn't mention the 500 MISS from the start, but that is the reason
I assumed accept_filter makes the difference between my old cache and the
new caches.

Right now my old cache doesn't have any 500 MISS, but all my new caches do.
All the caches serve the same domains and the same origin servers.

The 500 MISS entries come from normal accesses to normal links (we're
running a reverse proxy setup).

All the new caches were rebuilt as you suggested above.

Thanks,
Trung Kien

On Wed, Nov 28, 2012 at 6:54 PM, Amos Jeffries  wrote:
> On 28/11/2012 10:37 p.m., Le Trung, Kien wrote:
>>
>> Hi,
>>
>> I followed your guide and the WARNING message no longer appears at startup.
>> However, the new machines still have "500" MISS entries in access.log,
>> while the old machine does not have any "500" MISS. The caching machines
>> (both old and new) serve the same domains, and all of them are reverse
>> proxies.
>
>
> "still" ? this is the first time you mentioned any 500 status problem.
>
> What are the circumstances of the 500 being sent?
>  and can you repeat the issue to read the error page Squid sent out with
> that status?
>
> Amos
>
>
>> I'll report back tomorrow on whether the 500 MISS entries are gone.
>>
>> Thanks and regards,
>> Trung Kien.
>>
>> On Wed, Nov 28, 2012 at 1:36 PM, Amos Jeffries 
>> wrote:
>>>
>>> On 28/11/2012 4:24 p.m., Le Trung, Kien wrote:
>>>>
>>>> Hi,
>>>>
>>>> Could anyone get accept_filter working on CentOS 6.3 with squid-3.2.3 ?
>>>>
>>>> I have already got squid-3.1.18 working properly on CentOS 5.5.
>>>>
>>>> Start Squid with -d9 I got this:
>>>> 2012/11/28 09:19:22 kid1| WARNING: accept_filter not supported on your
>>>> OS
>>>>
>>>>
>>> It looks like a missing include has caused accept_filter to be disabled.
>>> Try adding this to src/comm/TcpAcceptor.cc and re-building Squid:
>>>
>>>#include 
>>>
>>>
>>> Amos
>>
>>
>>
>



-- 

Best Regards,
Trung Kiên.


Re: [squid-users] WARNING: accept_filter not supported on your OS

2012-12-04 Thread Le Trung, Kien
Hi,

Today I built version 3.1.22, then started Squid both with and without
the accept_filter directive in the configuration file, and in both cases
I got NO 500 MISS entries in the access log.

Moreover, access to new (not yet cached) links is faster than with
version 3.2.3.


Best Regards,
Trung Kien

On Thu, Nov 29, 2012 at 10:07 AM, Le Trung, Kien
 wrote:
> Dear,
>
> Yes, I didn't mention the 500 MISS from the start, but that is the reason
> I assumed accept_filter makes the difference between my old cache and the
> new caches.
>
> Right now my old cache doesn't have any 500 MISS, but all my new caches do.
> All the caches serve the same domains and the same origin servers.
>
> The 500 MISS entries come from normal accesses to normal links (we're
> running a reverse proxy setup).
>
> All the new caches were rebuilt as you suggested above.
>
> Thanks,
> Trung Kien
>
> On Wed, Nov 28, 2012 at 6:54 PM, Amos Jeffries  wrote:
>> On 28/11/2012 10:37 p.m., Le Trung, Kien wrote:
>>>
>>> Hi,
>>>
>>> I followed your guide and the WARNING message no longer appears at startup.
>>> However, the new machines still have "500" MISS entries in access.log,
>>> while the old machine does not have any "500" MISS. The caching machines
>>> (both old and new) serve the same domains, and all of them are reverse
>>> proxies.
>>
>>
>> "still" ? this is the first time you mentioned any 500 status problem.
>>
>> What are the circumstances of the 500 being sent?
>>  and can you repeat the issue to read the error page Squid sent out with
>> that status?
>>
>> Amos
>>
>>
>>> I'll report back tomorrow on whether the 500 MISS entries are gone.
>>>
>>> Thanks and regards,
>>> Trung Kien.
>>>
>>> On Wed, Nov 28, 2012 at 1:36 PM, Amos Jeffries 
>>> wrote:
>>>>
>>>> On 28/11/2012 4:24 p.m., Le Trung, Kien wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> Could anyone get accept_filter working on CentOS 6.3 with squid-3.2.3 ?
>>>>>
>>>>> I have already got squid-3.1.18 working properly on CentOS 5.5.
>>>>>
>>>>> Start Squid with -d9 I got this:
>>>>> 2012/11/28 09:19:22 kid1| WARNING: accept_filter not supported on your
>>>>> OS
>>>>>
>>>>>
>>>> It looks like a missing include has caused accept_filter to be disabled.
>>>> Try adding this to src/comm/TcpAcceptor.cc and re-building Squid:
>>>>
>>>>#include 
>>>>
>>>>
>>>> Amos
>>>
>>>
>>>
>>
>
>
>
> --
>
> Best Regards,
> Trung Kiên.



-- 

Best Regards,
Trung Kiên.


Re: [squid-users] WARNING: accept_filter not supported on your OS

2012-12-05 Thread Le Trung, Kien
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1966080
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for ar... ar
checking for strip... strip
checking for ranlib... (cached) ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... no
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports
shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking for shl_load... no
checking for shl_load in -ldld... no
checking for dlopen... no
checking for dlopen in -ldl... yes
checking whether a program can dlopen itself... yes
checking whether a statically linked program can dlopen itself... yes
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
checking how to run the C++ preprocessor... g++ -E
checking for ld used by g++... /usr/bin/ld -m elf_x86_64
checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports
shared libraries... yes
checking for g++ option to produce PIC... -fPIC -DPIC
checking if g++ PIC flag -fPIC -DPIC works... yes
checking if g++ static flag -static works... no
checking if g++ supports -c -o file.o... yes
checking if g++ supports -c -o file.o... (cached) yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports
shared libraries... yes
checking dynamic linker characteristics... (cached) GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking for library containing dlopen... -ldl
checking for dlerror... yes
checking for shl_load... (cached) no
checking for shl_load in -ldld... (cached) no
checking for dld_link in -ldld... no
configure: strict error checking enabled: yes
checking iostream usability... yes
checking iostream presence... yes
checking for iostream... yes
checking for an ANSI C-conforming const... yes
checking for size_t... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating include/Makefile
config.status: creating src/Makefile
config.status: creating test/Makefile
config.status: creating config.h
config.status: executing depfiles commands
config.status: executing libtool commands





On Wed, Dec 5, 2012 at 3:52 PM, Eliezer Croitoru  wrote:
> Hey Trung Kien,
>
> We will need more data to try to help you with the problem.
> If you can share the configure options of the Squid build and squid.conf, it
> will give us a good look at why it may be happening.
>
> If you can describe more about your infrastructure, it will help.
>
> Note that this is a public list so remove any identifying and confidential
> data from squid.conf.
>
> Best Regards,
> Eliezer
>
>
>
> On 12/5/2012 9:59 AM, Le Trung, Kien wrote:
>>
>> Hi,
>>
>> Today, I built version 3.1.22, then started squid with or without
>> accept_filter directive in squid's configuration file and in both case
>> I got NO 500 MISS in the access log.
>>
>> Moreover, the speed when access new links faster (not cached) than
>> version 3.2.3.
>>
>>
>> Best Regards,
>> Trung Kien
>
>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> sip:ngt...@sip2sip.info
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il



-- 

Best Regards,
Trung Kiên.


Re: [squid-users] WARNING: accept_filter not supported on your OS

2012-12-05 Thread Le Trung, Kien
And finally, my Squid configuration (squid.conf):

#
# Recommended minimum configuration:
#
acl localhost src A.B.C.D/32
acl purgehost src A.B.C.D/32
acl to_localhost dst A.B.C.D/32

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl Safe_ports port 80  # http
acl Safe_ports port 81  # http
acl Safe_ports port 82  # http
acl CONNECT method CONNECT
acl purge method PURGE

acl invalid_urls url_regex ^someregrex

acl valid_urls url_regex ^someregrex

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
#acl Redirection  http_status 302
#cache allow Redirection

acl RedirectTC url_regex ^needredirect
http_access deny RedirectTC
deny_info ERR_REDIRECT_TC RedirectTC

client_persistent_connections on
connect_timeout 5 seconds
detect_broken_pconn on
accept_filter httpready
accept_filter data
negative_ttl 120 seconds
follow_x_forwarded_for allow localhost

http_access allow manager localhost
http_access allow purge purgehost
http_access allow purge localhost
http_access deny manager
http_access deny purge

# Deny requests to certain unsafe ports
http_access deny !Safe_ports
http_access deny invalid_urls
deny_info ERR_INVALID_URLS invalid_urls
http_access allow valid_urls
# Deny CONNECT to other than secure SSL ports
#http_access deny CONNECT !SSL_ports
http_access deny all
deny_info ERR_INVALID_URLS all

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Squid normally listens to port 3128

### One domain
cache_effective_user squid
http_port A.B.C.D:82 accel vhost ignore-cc

cache_peer A1.B1.C1.D1 parent 80 0 no-query originserver name=WEB1 max-conn=25
cache_peer_domain WEB1 domain1 domain2

cache_peer A2.B2.C2.D2 parent 80 0 no-query originserver name=WEB2
max-conn=20 round-robin
cache_peer A3.B3.C3.D3 parent 80 0 no-query originserver name=WEB3
max-conn=20 round-robin

cache_peer_domain WEB2 domain3 domain4
cache_peer_domain WEB3 domain4 domain4


acl web1 dstdomain domain1 domain2
acl web2 dstdomain domain3 domain4
acl web3 dstdomain domain4 domain4

cache_peer_access WEB1 allow web1
cache_peer_access WEB2 allow web2
cache_peer_access WEB3 allow web3

cache_peer_access web1 deny all
cache_peer_access web2 deny all
cache_peer_access web3 deny all

# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all


# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?
#hierarchy_stoplist \?
acl CacheType urlpath_regex \? \.css \.gif \.gif\? \.html \.html\?
\.ico \.jpeg \.jpeg\? \.jpg \.jpg\? \.js \.js\? \.php \.php\? \.png
\.png\? \.swf \.swf\? \-
#cache allow CacheType

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /opt/squid/var/cache 9216 16 256

# Leave coredumps in the first cache dir
coredump_dir /opt/squid/var/cache

cache_mem 9216 MB
maximum_object_size_in_memory 1024 KB
cache_swap_low 30
cache_swap_high 50
strip_query_terms off
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
#access_log none
cache_store_log none
access_log stdio:/opt/squid/var/logs/access.log combined
cache_log /opt/squid/var/logs/cache.log
#cache_swap_log /var/log/squid/swap.state
#maximum_object_size 10 MB
#quick_abort_min 0 KB
#quick_abort_max 0 KB
#memory_replacement_policy lru
#cache_replacement_policy heap LFUDA
#store_dir_select_algorithm round-robin
#cache_dir null /tmp

# Add any of your own refresh_pattern entries above these.
#refresh_pattern ^ftp:  1440    20%     10080
#refresh_pattern ^gopher:   1440    0%  1440
refresh_pattern -i (^someregrex) ...
refresh_pattern -i (/cgi-bin/) 0 0%  0
refresh_pattern .   0   20% 4320


On Wed, Dec 5, 2012 at 3:52 PM, Eliezer Croitoru  wrote:
> Hey Trung Kien,
>
> We will need more data to try to help you with the problem.
> If you can share the configure options of the Squid build and squid.conf, it
> will give us a good look at why it may be happening.
>
> If you can describe more about your infrastructure, it will help.
>
> Note that this is a public list so remove any identifying and confidential
> data from squid.conf

Re: [squid-users] Netflix+squid

2013-02-16 Thread Le Trung, Kien
You would need a proxy server for video streaming.

On Fri, Feb 15, 2013 at 5:21 PM,   wrote:
>
>
> Hi Amos,
>
> I still haven't configured/deployed anything yet. My
> approach is to have a server in the U.S. But I thought maybe there is a
> better solution/approach to this deployment. Maybe a proxy server local to
> them and configure it to use my proxy server in the U.S as its upstream
> proxy.
>
> Thanks
> Monah
>
>
>> On 15/02/2013 1:24 p.m., mb...@whywire.com wrote:
>>>
>>> Hi all,
>>>
>>> A friend of mine has a company outside the U.S., and wants to provide
>>> Netflix to his customers. Since I can set up a proxy here for him and
>>> have his clients use my proxy to access Netflix, is there any other
>>> solution that can optimize it even better?
>>
>> Better than what? You have not provided any information on what
>> configuration settings you are using, so we cannot tell whether you
>> configured it for good performance or not.
>>
>>> Can you cache the videos, by the way?
>>
>> Unknown. You will want to look into the cached object size limits
>> (the default maximum_object_size directive is probably too small for
>> large videos), then look into whether the videos are actually
>> cacheable. Paste one of their URLs into redbot.org for info on that.
>>
>> Amos
>>
>



-- 

Best Regards,
Trung Kiên.
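
If caching the videos turns out to be possible, the size limit Amos
mentions is raised in squid.conf along these lines (the value is only
an example):

  # the default is 4 MB, far too small for video files
  maximum_object_size 1 GB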


[squid-users] HIT and MISS on same URL

2013-04-04 Thread Le Trung, Kien
Hi,

When checking the log file, I found that my Squid returns MISS and then
HIT again for the same URL (same file), and there are two file_sizes for
that URL. One file_size always gets a HIT and the other always a MISS.

Eg:
118.71.131.240 - - [05/Apr/2013:02:05:30 +0700] "GET
http://res.mysite.com/vnn2013/images/icon-mobile.png HTTP/1.0" 200
1795 "http://mysite.com/vn/index.html"; "Mozilla/5.0 (Windows NT 6.1;
WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43
Safari/537.31" TCP_MISS:NONE
101.99.8.244 - - [05/Apr/2013:02:05:34 +0700] "GET
http://res.mysite.com/vnn2013/images/icon-mobile.png HTTP/1.0" 200
1795 "http://res.mysite.com/vnn2013/css/stylev104.css"; "Mozilla/5.0
(Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0" TCP_MISS:NONE
75.37.35.197 - - [05/Apr/2013:02:05:34 +0700] "GET
http://res.mysite.com/vnn2013/images/icon-mobile.png HTTP/1.0" 200
1805 "http://mysite.com/vn/index.html"; "Mozilla/4.0 (compatible; MSIE
7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0;
.NET CLR 3.5.30729; .NET CLR 3.0.30618; .NET4.0C)" TCP_MEM_HIT:NONE
192.249.41.3 - - [05/Apr/2013:02:05:35 +0700] "GET
http://res.mysite.com/vnn2013/images/icon-mobile.png HTTP/1.0" 200
1805 "http://mysite.com/"; "Mozilla/4.0 (compatible; MSIE 8.0; Windows
NT 6.1; WOW64; Trident/4.0; GTB7.4; SLCC2; .NET CLR 2.0.50727; .NET
CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET CLR
1.1.4322; .NET4.0C; .NET4.0E; InfoPath.3)" TCP_MEM_HIT:NONE
113.166.8.236 - - [05/Apr/2013:02:05:35 +0700] "GET
http://res.mysite.com/vnn2013/images/icon-mobile.png HTTP/1.0" 200
1795 "http://mysite.com/"; "Mozilla/5.0 (Windows NT 5.1)
AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43
Safari/537.31" TCP_MISS:NONE
118.71.35.223 - - [05/Apr/2013:02:05:35 +0700] "GET
http://res.mysite.com/vnn2013/images/icon-mobile.png HTTP/1.0" 200
1795 "http://mysite.com/"; "Mozilla/5.0 (Windows NT 6.0; WOW64)
AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43
Safari/537.31" TCP_MISS:NONE


--

Best Regards,
Trung Kiên.


Re: [squid-users] HIT and MISS on same URL

2013-04-04 Thread Le Trung, Kien
In my case, the scheme is reverse proxy

On Fri, Apr 5, 2013 at 6:54 AM, Squidblacklist
 wrote:
> I think the reason that's occurring is because that URL is behind a load
> balancing scheme or CDN which is loading from a different web server.
> Notice the IP is different, so since it's coming from different hosts,
> Squid is re-downloading the content. I have the same irritation with
> Microsoft's CDN and other entities doing that.
>
>
>
>
>
> -
> Signed,
>
> Fix Nichols
>
> http://www.squidblacklist.org



-- 

Best Regards,
Trung Kiên.


Re: [squid-users] HIT and MISS on same URL

2013-04-04 Thread Le Trung, Kien
Yes, I altered mysite.com in the log.

Those requests come from unknown clients. I will test using different
browsers to see how the request headers affect this.

I read an article about removing the ETag header here:
http://mark.koli.ch/2010/09/understanding-the-http-vary-header-and-caching-proxies-squid-etc.html

But no luck for me.
I'm going to disable the Vary options on the origin server, too.

On Fri, Apr 5, 2013 at 10:39 AM, Amos Jeffries  wrote:
> You will have to supply the HTTP headers sent to Squid by client and server
> for each of those transactions.
>
> FWIW the URLs you supplied show up as 404 errors in the mysite.com CDN. Did
> you alter the log?
>
> Amos
>



-- 

Best Regards,
Trung Kiên.
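
One way to see whether a Vary/ETag variant split is behind the two object
sizes is to request the same URL with and without an Accept-Encoding
header and compare the responses (the URL is the altered placeholder from
the log above; the headers shown are only an example):

  squidclient -m GET -H 'Accept-Encoding: gzip\n' \
      http://res.mysite.com/vnn2013/images/icon-mobile.png | head -20

  squidclient -m GET \
      http://res.mysite.com/vnn2013/images/icon-mobile.png | head -20

If the origin sends Vary: Accept-Encoding (or differing ETags), Squid
stores one variant per distinct request header set, which would explain
one size always hitting and the other always missing.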


Re: [squid-users] HIT and MISS on same URL

2013-04-07 Thread Le Trung, Kien
I'm still experimenting with removing the variant-related response
headers from the origin server's responses, but it is still not working.

Following is one part of my log for bullet.gif; there are two file_sizes
for the same object ( 5329 - HIT, 5320 - MISS ).
(The log format is:
[%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{User-Agent}>h" "Accept:
%{Accept}>h" "Accept-Encoding: %{Accept-Encoding}>h"
"%{Accept-Language}>h" "%{Cache-Control}>h" %Ss:%Sh )

[08/Apr/2013:11:13:15 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 304 234
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.31 (KHTML, like Gecko)
Chrome/26.0.1410.43 Safari/537.31" "Accept: */*" "Accept-Encoding:
gzip,deflate,sdch"
"vi-VN,vi;q=0.8,fr-FR;q=0.6,fr;q=0.4,en-US;q=0.2,en;q=0.2" "max-age=0"
TCP_IMS_HIT:NONE
[08/Apr/2013:11:13:15 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 304 234
"Mozilla/5.0 (Windows NT 5.1; rv:19.0) Gecko/20100101 Firefox/19.0"
"Accept: image/png,image/*;q=0.8,*/*;q=0.5" "Accept-Encoding: gzip,
deflate" "en-US,en;q=0.5" "max-age=0" TCP_IMS_HIT:NONE
[08/Apr/2013:11:13:16 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5329
"Mozilla/5.0 (Windows NT 5.1; rv:19.0) Gecko/20100101 Firefox/19.0"
"Accept: image/png,image/*;q=0.8,*/*;q=0.5" "Accept-Encoding: gzip,
deflate" "en-US,en;q=0.5" "-" TCP_MEM_HIT:NONE
[08/Apr/2013:11:13:16 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5329
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.31 (KHTML, like Gecko)
Chrome/26.0.1410.43 Safari/537.31" "Accept: */*" "Accept-Encoding:
gzip,deflate,sdch"
"vi-VN,vi;q=0.8,fr-FR;q=0.6,fr;q=0.4,en-US;q=0.2,en;q=0.2" "-"
TCP_MEM_HIT:NONE
[08/Apr/2013:11:13:16 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 304 234
"Mozilla/5.0 (Windows NT 5.1; rv:19.0) Gecko/20100101 Firefox/19.0"
"Accept: image/png,image/*;q=0.8,*/*;q=0.5" "Accept-Encoding: gzip,
deflate" "en-US,en;q=0.5" "max-age=0" TCP_IMS_HIT:NONE
[08/Apr/2013:11:13:16 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 304 234
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:19.0) Gecko/20100101
Firefox/19.0" "Accept: image/png,image/*;q=0.8,*/*;q=0.5"
"Accept-Encoding: gzip, deflate" "en-US,en;q=0.5" "max-age=0"
TCP_IMS_HIT:NONE
[08/Apr/2013:11:13:17 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5329
"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)"
"Accept: */*" "Accept-Encoding: gzip, deflate" "en-us" "-"
TCP_MEM_HIT:NONE
[08/Apr/2013:11:13:17 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 304 234
"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET
CLR 2.0.50727;WUID=50289ACA0726479C8137D196BB78775D;WTB=3231)"
"Accept: */*" "Accept-Encoding: gzip, deflate" "en-us" "-"
TCP_IMS_HIT:NONE
[08/Apr/2013:11:13:18 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5329
"Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)"
"Accept: image/png, image/svg+xml, image/*;q=0.8, */*;q=0.5"
"Accept-Encoding: gzip, deflate" "en-US" "-" TCP_MEM_HIT:NONE
[08/Apr/2013:11:13:18 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5329
"Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0"
"Accept: image/png,image/*;q=0.8,*/*;q=0.5" "Accept-Encoding: gzip,
deflate" "vi-vn,vi;q=0.8,en-us;q=0.5,en;q=0.3" "-" TCP_MEM_HIT:NONE
[08/Apr/2013:11:13:18 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5320
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101
Firefox/12.0" "Accept: image/png,image/*;q=0.8,*/*;q=0.5"
"Accept-Encoding: -" "en-us,en;q=0.5" "max-age=0, max-age=0"
TCP_MISS:NONE
[08/Apr/2013:11:13:18 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5329
"Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0"
"Accept: image/png,image/*;q=0.8,*/*;q=0.5" "Accept-Encoding: gzip,
deflate" "en-US,en;q=0.5" "-" TCP_MEM_HIT:NONE
[08/Apr/2013:11:13:18 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5329
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.31 (KHTML, like Gecko)
Chrome/26.0.1410.43 Safari/537.31" "Accept: */*" "Accept-Encoding:
gzip,deflate,sdch"
"vi-VN,vi;q=0.8,fr-FR;q=0.6,fr;q=0.4,en-US;q=0.2,en;q=0.2" "-"
TCP_MEM_HIT:NONE
[08/Apr/2013:11:13:18 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 200 5329
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.31 (KHTML, like Gecko)
Chrome/26.0.1410.43 Safari/537.31" "Accept: */*" "Accept-Encoding:
gzip,deflate,sdch" "en-US,en;q=0.8" "max-age=0" TCP_MEM_HIT:NONE
[08/Apr/2013:11:13:19 +0700] "GET
http://res.mydomain.com/vnnv3/images/bullet.gif HTTP/1.0" 304 234
"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET
CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF;
MS-RTC LM 8)" "Accept: */*" "Accept-Encoding: -" "en-us" "-"
TCP_IMS_HIT:NONE
[08/Apr/2013:11:13:19 +0700] "GET
http://res.mydomain.com/vnnv3/images/b

[squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Le Trung, Kien
Hello everyone (sorry if this email is a duplicate; I sent it from
Outlook before but received no comments about this problem).

I'm using the configuration below, which works fine with Squid 3.1:
every item gets a HIT. However, it doesn't work properly with Squid 3.2
and 3.3, where I always get a MISS for all items.

http_port 127.0.0.1:82 accel ignore-cc

cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1
max-conn=15 cache_peer_domain Site1 mysite.com refresh_pattern -i
((.)*) 30 30% 60 ignore-no-cache ignore-private ignore-reload
ignore-no-store override-lastmod override-expire

Header from 3.3 version:

HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 117991
Content-Type: text/html; charset=utf-8
Expires: Thu, 21 Nov 2013 03:12:14 GMT
Server: Microsoft-IIS/7.5
Date: Thu, 21 Nov 2013 03:12:15 GMT
X-Cache: MISS from localhost.localdomain
Connection: close

Please help.


-- 

Best Regards,
Kiên Lê


Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Le Trung, Kien
Thank you, Eliezer Croitoru, for your response.

I double-checked your suggestion but all requests still MISS.
The refresh_pattern is, of course, on a separate line from the cache_peer
line; that was just my mistake when copy/pasting into the email.
I have used "." for the refresh_pattern before, but no luck.

With this configuration, squid-3.1.23 still works properly (same
origin server).



On Tue, Nov 26, 2013 at 7:49 AM, Eliezer Croitoru  wrote:
> Hey there,
>
> I am not sure, but maybe it is a typo in the cache_peer line?
> The refresh_pattern should be a separate line from the cache_peer line,
> like this:
> ##START
>
> http_port 127.0.0.1:82 accel ignore-cc
> cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1
> max-conn=15 cache_peer_domain Site1 mysite.com
> refresh_pattern -i ((.)*) 30 30% 60 ignore-no-cache ignore-private
> ignore-reload ignore-no-store override-lastmod override-expire
> ##END
>
> Also note that there is no need for the whole "((.)*)" when you can use
> "." for the wanted effect.
> It might be the reason, but we only see the response, while the request is
> also very important for the matter.
>
> Try changing the refresh_pattern and let's see what happens.
>
> Eliezer
>
>
> On 25/11/13 19:16, Le Trung, Kien wrote:
>>
>> Hello everyone (sorry if this email is a duplicate; I sent it from
>> Outlook before but received no comments about this problem).
>>
>> I'm using the configuration below, which works fine with Squid 3.1:
>> every item gets a HIT. However, it doesn't work properly with Squid 3.2
>> and 3.3, where I always get a MISS for all items.
>>
>> http_port 127.0.0.1:82 accel ignore-cc
>>
>> cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1
>> max-conn=15 cache_peer_domain Site1 mysite.com refresh_pattern -i
>> ((.)*) 30 30% 60 ignore-no-cache ignore-private ignore-reload
>> ignore-no-store override-lastmod override-expire
>
>



-- 

Best Regards,
Kiên Lê


Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Le Trung, Kien
Hi, Eliezer Croitoru,

I already sent the headers in the first email. Is this the information
you want?
= Squid 3.3.x 
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 117991
Content-Type: text/html; charset=utf-8
Expires: Thu, 21 Nov 2013 03:12:14 GMT
Server: Microsoft-IIS/7.5
Date: Thu, 21 Nov 2013 03:12:15 GMT
X-Cache: MISS from localhost.localdomain
Connection: close

And after Amos's reply I checked the headers from Squid 3.1 again:

= Squid 3.1.x 
HTTP/1.0 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Expires: Tue, 26 Nov 2013 05:00:03 GMT
Server: Microsoft-IIS/7.5
Date: Tue, 26 Nov 2013 05:00:04 GMT
Content-Length: 117904
Age: 64
Warning: 110 squid/3.1.23 "Response is stale" (confused here too !)
X-Cache: HIT from localhost.localdomain
Connection: close

In both cases I used the same ignore-private and override-expire
directives and the same origin server. Both Squids are also built on the
same server; the only difference is the HTTP service ports.

I still don't know why Squid 3.3 and 3.2 can't ignore the private and
override the Expires header.

Best Regards,
Kien Le

On Tue, Nov 26, 2013 at 11:10 AM, Amos Jeffries  wrote:
> On 26/11/2013 4:35 p.m., Eliezer Croitoru wrote:
>> Hey,
>>
>> Just to make sure you have taken a small look at the headers:
>> the headers state that at almost the same time the request was made, it
>> had already expired.
>> I have not seen the request headers and I cannot tell you why it is like
>> that, but it seems like there is a reason for it.
>
> Usually this is done on resources where the webmaster knows what they
> are doing and is completely confident that the data MUST NOT be stored.
> You know, the stuff that contains *private* user details and such.
>
> Expires: header causes HTTP/1.0 caches to remove the content immediately
> (or not store in the first place).
>
> Cache-Control:private does the same thing for HTTP/1.1 caches except for
> browsers. Which in HTTP/1.1 are allowed to store private data unless the
> "Cache-Control:no-store" or "Expires:" controls are also used.
>
>
> Amos
>



-- 

Best Regards,
Kiên Lê


Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-26 Thread Le Trung, Kien
Thank you, I see the problem now.

So now I have to deal with the "Cache-Control: private" header sent by IIS 7.5.
I don't know why IIS 7.5 always returns "private"; Google shows some bug
reports about this.

Thank you again, Mr Jeffries.



On Tue, Nov 26, 2013 at 2:14 PM, Amos Jeffries  wrote:
> On 26/11/2013 6:06 p.m., Le Trung, Kien wrote:
>> Hi, Eliezer Croitoru,
>>
>> I already sent the headers in the first email. Is this the information
>> you want?
>> = Squid 3.3.x 
>> HTTP/1.1 200 OK
>> Cache-Control: private
>> Content-Length: 117991
>> Content-Type: text/html; charset=utf-8
>> Expires: Thu, 21 Nov 2013 03:12:14 GMT
>> Server: Microsoft-IIS/7.5
>> Date: Thu, 21 Nov 2013 03:12:15 GMT
>> X-Cache: MISS from localhost.localdomain
>> Connection: close
>>
>> And after Amos's reply I checked the headers from Squid 3.1 again:
>>
>> = Squid 3.1.x 
>> HTTP/1.0 200 OK
>> Cache-Control: private
>> Content-Type: text/html; charset=utf-8
>> Expires: Tue, 26 Nov 2013 05:00:03 GMT
>> Server: Microsoft-IIS/7.5
>> Date: Tue, 26 Nov 2013 05:00:04 GMT
>> Content-Length: 117904
>> Age: 64
>> Warning: 110 squid/3.1.23 "Response is stale" (confused here too !)
>> X-Cache: HIT from localhost.localdomain
>> Connection: close
>>
>> In both cases I used the same ignore-private and override-expire
>> directives and the same origin server. Both Squids are also built on the
>> same server; the only difference is the HTTP service ports.
>>
>> I still don't know why Squid 3.3 and 3.2 can't ignore the private and
>> override the Expires header.
>
> I still think you are misunderstanding what is happening here.
>
>
> Ignoring "private" simply means that Squid will store it instead of
> discarding immediately as required by RFC 2616 (and by Law in many
> countries). For safe use of privileged information we consider this
> content to expire the instant it was received.
>  * The handling of that content once it is in cache still goes ahead in
> full accordance with HTTP/1.1 requirements, as if the private had not been
> there to prevent caching.
>
>
> "override-expires" means that when the Expires: header is present the
> value inside it is replaced (overridden with) with the values in
> refresh_pattern header.
>  * The calculation of how fresh/stale the object is still happens - just
> without the HTTP response header value for Expires.
>
>
> 3.1.20 are HTTP/1.0 proxies and do not perform HTTP/1.1 protocol
> validation perfectly. The headers still contain the Squid Warning: about
> the object coming out of cache (HIT) and being stale.
>
> 3.2+ are HTTP/1.1 proxies and are more strictly following RFC2616
> requirements about revalidating stale content before use. It just
> happened that the server presented a new copy for delivery.
>
> NOTE: private *was* ignored. Expires *was* overridden. There was new
> content to deliver regardless of the values you changed them to.
>
> ALSO NOTE: The X-Cache header does not display REFRESH states. It
> displays "MISS" usually in the event of REFRESH_MODIFIED and "HIT"
> usually in the event of REFRESH_UNMODIFIED.
>
>
> You can get a better test of the private/Expires caching by causing the
> server those objects came from to be disconnected/unavailable when
> accessed from your Squid. In which case you should see the same headers
> as present in 3.1 indicating a HIT with stale object returned.
>
> Amos
>



-- 

Best Regards,
Kiên Lê
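
A rough sketch of the test Amos suggests (the commands are illustrative;
mysite.com is the placeholder domain used earlier in the thread):

  # with the origin server stopped, or unreachable from Squid:
  squidclient -m GET http://mysite.com/ | head -20

  # if ignore-private/override-expire stored the object, the response
  # should now show "X-Cache: HIT ..." with the stale copy, as in 3.1

Note that the "Warning: 110 ... Response is stale" header in the 3.1
output above is exactly the marker of such a stale-but-served cache HIT.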