Re: [squid-users] The request or reply is too large, error

2008-10-22 Thread Tarak

Tarak Ranjan wrote:

Hi List,
I have set these ACLs for limiting the download size in the
squid.conf file:
acl limitsize2 time MTWHF 00:30-07:55
acl limitsize time MTWHF 8:00-20:00
acl limitsize1 time SA 00:10-23:59

reply_body_max_size 25600 allow limitsize
reply_body_max_size 1024 allow limitsize2
reply_body_max_size 1024 allow limitsize1

And #Default:
 request_body_max_size 0 KB

The following error was encountered:

* The request or reply is too large.

  If you are making a POST or PUT request, then
your request body (the thing you are trying to upload)
is too large. If you are making a GET request, then
the reply body (what you are trying to download) is
too large. These limits have been established by the
Internet Service Provider who operates this cache.
Please contact them directly if you feel this is an
error. 


Can anyone help me figure out this error?

/\
Tarak 

  


I'm not able to figure out what went wrong in the above ACLs.


/\
Tarak


Re: [squid-users] How can I block a https site?

2008-10-22 Thread Matus UHLAR - fantomas
On 21.10.08 16:23, Alejandro Bednarik wrote:
  You can also use url_regex -i
 
  acl bad_sites url_regex -i /etc/squid/bad_sites.txt
  http_access deny bad_sites

using regexes is very inefficient and may lead to problems if you don't
account for the following (a sketch follows the list):
- a dot matching ANY character
- a regex matching the middle of the string, not just the end of it (like
  dstdomain does)
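
A hedged sketch of the difference (example.com is a placeholder). A dstdomain
entry matches the host and all its subdomains, while a url_regex needs its
dots escaped and its end anchored to avoid matching, say, example.com.evil.net:

acl bad_sites dstdomain .example.com
acl bad_re url_regex -i \.example\.com(:[0-9]+)?(/|$)

Even the anchored regex misses the bare example.com, which is part of why
dstdomain is preferred here.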

  # cat bad_sites.txt
  .youporn.com
  .rapidshare.com
  .googlevideo.com
  .photobucket.com
  .dailymotion.com
  .logmein.com
  .megavideo.com
  .audio.uol.com.br
  .imo.im
  #
 
  But I am able to connect to https://imo.im
  I only get access denied when I access http://imo.im

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Eagles may soar, but weasels don't get sucked into jet engines. 


Re: [squid-users] How can I block a https site?

2008-10-22 Thread Matus UHLAR - fantomas
On 21.10.08 14:58, Ricardo Augusto de Souza wrote:
 How do i block HTTPS sites?

if you mean any HTTPS sites, just deny the CONNECT method.
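
A minimal sketch (these two lines ship in the default squid.conf):

acl CONNECT method CONNECT
http_access deny CONNECT

Note this denies all tunnelled traffic through the proxy, not only HTTPS.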

if you want to block HTTPS to specific sites, Lucas Brasiliano posted
something that should work and that you seem to have ignored ...

 I am using this I squid.conf:
  acl bad_sites dstdomain /etc/squid/bad_sites.txt
 http_access deny bad_sites
 
 # cat bad_sites.txt
 .youporn.com
 .rapidshare.com
 .googlevideo.com
 .photobucket.com
 .dailymotion.com
 .logmein.com
 .megavideo.com
 .audio.uol.com.br
 .imo.im
 #
 
 But I am able to connect https://imo.im

are you using the proxy for HTTPS?

Also, do you block CONNECT to IP addresses? If not, users can avoid the
block (see the sketch below)...
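
A hedged sketch of blocking CONNECT to raw IPv4 addresses (the regex is
approximate and IPv4-only; for a CONNECT request the URL that Squid matches
against is just host:port):

acl CONNECT method CONNECT
acl connect_to_ip url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+(:[0-9]+)?$
http_access deny CONNECT connect_to_ip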

 I only get access denied when I access http://imo.im


-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
The early bird may get the worm, but the second mouse gets the cheese. 


Re: [squid-users] Announcement: txforward (for php behind squid)

2008-10-22 Thread Henrik Nordstrom
Interesting, but it is missing a crucial piece. There is nothing which
establishes trust. If the same server can be reached directly without
using the reverse proxy then security is bypassed, likewise if the module
is loaded on a server not using a reverse proxy.

This needs a configuration directive indicating which addresses (hosts
and/or networks) are trusted with X-Forwarded-For.
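
For comparison, Squid's own handling of this header takes exactly that
ACL-driven form in releases that support the follow_x_forwarded_for directive
(the address below is a placeholder):

acl frontend src 192.0.2.10
follow_x_forwarded_for allow frontend
follow_x_forwarded_for deny all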

When you have this you can also unwind the chain of IP addresses
properly when the request passes via a chain of reverse proxies in a
peering relationship.


On ons, 2008-10-22 at 01:02 +0200, Francois Cartegnie wrote:
 Hello,
 
 Txforward is a PHP module providing a simple hack for deploying PHP
 applications behind squid in reverse proxy (accelerator) mode. Your
 applications no longer need to be X-Forwarded-For header aware.
 http://fcartegnie.free.fr/patchs/txforward.html
 
 PS: but you'll still need to fix your webserver logs :)
 
 Greetings,
 
 Francois




Re: [squid-users] configuration question

2008-10-22 Thread Henrik Nordstrom
On tis, 2008-10-21 at 19:57 -0500, Lou Lohman wrote:

 I have been poking around the Internet and mailing lists and anything
 else I can think of, for DAYS, trying to answer what I thought would
 be a simple question: how can I configure Squid so that my authorized
 Windows users (members of the proper security group in AD who are
 logged into the network) don't have to answer a challenge to get out
 to the Internet?

This consists of three pieces.

1. Configuring the clients to use the proxy, using a server name which
MSIE security classifies as Local LAN/Intranet. Usually a short
server name without a domain works, but Windows people can answer this
better than I can.

2. Configuring the proxy with ntlm (and perhaps negotiate)
authentication scheme support. Using Samba ntlm_auth as helper is
recommended.

3. Limiting access to the given group. This can be done in two ways: either
restrict ntlm_auth to only accept members of the given group, or look up
the group membership using wbinfo_group. A sketch follows.
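
A minimal sketch of pieces 2 and 3 (paths and the group name are placeholders;
the first variant of 3 would instead pass --require-membership-of to
ntlm_auth):

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10

# group lookup via the wbinfo_group helper
external_acl_type ad_group %LOGIN /usr/lib/squid/wbinfo_group.pl
acl inet_users external ad_group InternetUsers
http_access allow inet_users
http_access deny all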

Regards
Henrik




[squid-users] squid3 keeps many idle connections

2008-10-22 Thread Malte Schröder
Hello,
Squid3 seems to keep a LOT (over a thousand) of idle connections to its
parent proxy. To me it seems as if it doesn't properly reuse existing
connections. Is there a way to find out what's going on? From what I
can see there are no more than about two dozen requests at the
same time. I already reduced pconn_timeout to 10 seconds, which brought the
number of open connections down to around 100.

Kind regards
Malte


Re: [squid-users] How can I block a https site?

2008-10-22 Thread Amos Jeffries

Matus UHLAR - fantomas wrote:

On 21.10.08 16:23, Alejandro Bednarik wrote:

 You can also use url_regex -i

 acl bad_sites url_regex -i /etc/squid/bad_sites.txt
 http_access deny bad_sites


using regexes is very inefficient and may lead to problems if you don't
account for:
- a dot matching ANY character
- a regex matching the middle of the string, not just the end of it (like
  dstdomain does)


 - URL parts often included in regexes do not occur in CONNECT requests,
 - nor does the http(s):// part.




# cat bad_sites.txt
.youporn.com
.rapidshare.com
.googlevideo.com
.photobucket.com
.dailymotion.com
.logmein.com
.megavideo.com
.audio.uol.com.br
.imo.im
#

But I am able to connect to https://imo.im
I only get access denied when I access http://imo.im




Are you absolutely certain the HTTPS request is going through Squid,
then? Your browser may be configured to send only HTTP to Squid and the
rest elsewhere.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] squid3 keeps many idle connections

2008-10-22 Thread Itzcak Pechtalt
Hi,
If you use a transparent cache, you will have several connections open
per client IP.

Itzcak

On Wed, Oct 22, 2008 at 11:31 AM, Malte Schröder [EMAIL PROTECTED] wrote:
 Hello,
 Squid3 seems to keep a LOT (over a thousand) of idle connections to its
 parent proxy. To me it seems as if it doesn't properly reuse existing
 connections. Is there a way to find out what's going on? From what I
 can see there are no more than about two dozen requests at the
 same time. I already reduced pconn_timeout to 10 seconds, which brought the
 number of open connections down to around 100.

 Kind regards
 Malte



Re: [squid-users] Announcement: txforward (for php behind squid)

2008-10-22 Thread Francois Cartegnie
On Wednesday 22 October 2008, you wrote:
 Interesting, but it is missing a crucial piece. There is nothing which
 establishes trust. If the same server can be reached directly without
 using the reverse proxy then security is bypassed, likewise if the module
 is loaded on a server not using a reverse proxy.
That's what the README and the warning in the phpinfo output are for...

 When you have this you can also unwind the chain of IP addresses
 properly when the request passes via a chain of reverse proxies in a
 peering relationship.
This is not currently meant for chained accelerator proxies. There's too much
code to do.


[squid-users] Diagnosing RPCviaHTTP setup?

2008-10-22 Thread Jakob Curdes

.. I am trying to set up an RPCviaHTTP reverse proxy scenario as described in

http://wiki.squid-cache.org/ConfigExamples/SquidAndRPCOverHttp

Squid starts with my configuration (like the example plus some standard
ACLs), but browser connections to the SSL port on the outside take
forever and eventually time out.
If I connect with telnet to port 443 I get some sort of connection, so I
suppose it's not a firewall issue. In the cache or access log I see
nothing, even after turning up debugging.
What else can I do to troubleshoot this? What should I see when
telnetting into 443?


Jakob


Re: [squid-users] CARP setup

2008-10-22 Thread Paras Fadte
Ok.

On 10/21/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 Scrolling back to my first response in this thread:

  http://marc.info/?l=squid-users&m=122366977412432&w=2


  On tis, 2008-10-21 at 21:18 +0530, Paras Fadte wrote:
   Hi Henrik,
  
   Thanks for your reply. What would be your suggestion for a CARP setup
   which would provide an efficient caching system?
  
   Thanks in advance.
  
   -Paras
  
   On 10/16/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:
On tor, 2008-10-16 at 09:42 +0530, Paras Fadte wrote:
  Hi Henrik,
 
  In a CARP setup, if one uses the same weightage for all the parent caches,
  how will the requests be handled? Will the requests be equally
  forwarded to all the parent caches? If the weightages differ, then
  won't all the requests be forwarded only to the particular parent cache
  which has the highest weightage?
   
   
CARP is a hash algorithm. For each given URL there is one CARP parent
 that is the designated one.
   
 The weights control how large a portion of the URL space is assigned to
 each member.
   
   
  Also, if I do not use the proxy-only option in the squid which
  forwards the requests to parent caches, won't fewer requests
  be forwarded to the parent caches, since objects will already be cached
  by the squid in front of the parent caches?
   
   
Correct. And it's completely orthogonal to the use of CARP. As I said
 most setups do not want to use proxy-only. proxy-only is only useful in
 some very specific setups. These setups MAY be using CARP or some other
 peering method, the choice of peering method is unrelated to proxy-only.
   
 Regards
   
Henrik
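
As a concrete sketch of the weighting Henrik describes (hostnames are
placeholders), the second parent below is assigned roughly twice as much of
the URL hash space as the first:

cache_peer p1.example.com parent 3128 0 no-query carp weight=1
cache_peer p2.example.com parent 3128 0 no-query carp weight=2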
   
   




[squid-users] Solved / RE : Diagnosing RPCviaHTTP setup?

2008-10-22 Thread Jakob Curdes

.. forget it, I had a NAT rule in place so the request ended up somewhere else..
tcpdump was my friend.

JC


Re: [squid-users] squid3 keeps many idle connections

2008-10-22 Thread Malte Schröder
No, it's a parent cache to another squid (2.7.STABLE5). It talks to a
WebWasher content filter.


On Wed, 22 Oct 2008 14:39:17 +0200
Itzcak Pechtalt [EMAIL PROTECTED] wrote:

 Hi,
 If you use a transparent cache, you will have several connections open
 per client IP.
 
 Itzcak
 
 On Wed, Oct 22, 2008 at 11:31 AM, Malte Schröder [EMAIL PROTECTED] wrote:
  Hello,
  Squid3 seems to keep a LOT (over a thousand) of idle connections to its
  parent proxy. To me it seems as if it doesn't properly reuse existing
  connections. Is there a way to find out what's going on? From what I
  can see there are no more than about two dozen requests at the
  same time. I already reduced pconn_timeout to 10 seconds, which brought the
  number of open connections down to around 100.
 
  Kind regards
  Malte
   


Re: [squid-users] The request or reply is too large, error

2008-10-22 Thread Chris Robertson

Tarak Ranjan wrote:

Hi List,
I have set these ACLs for limiting the download size in the
squid.conf file:
acl limitsize2 time MTWHF 00:30-07:55
acl limitsize time MTWHF 8:00-20:00
acl limitsize1 time SA 00:10-23:59

reply_body_max_size 25600 allow limitsize
reply_body_max_size 1024 allow limitsize2
reply_body_max_size 1024 allow limitsize1

And #Default:
 request_body_max_size 0 KB

The following error was encountered:

* The request or reply is too large.

  If you are making a POST or PUT request, then
your request body (the thing you are trying to upload)
is too large. If you are making a GET request, then
the reply body (what you are trying to download) is
too large. These limits have been established by the
Internet Service Provider who operates this cache.
Please contact them directly if you feel this is an
error. 


Can anyone help me figure out this error?

/\
Tarak 


From http://www.squid-cache.org/Versions/v3/3.0/cfgman/reply_body_max_size.html :

When the reply headers are received, the reply_body_max_size lines are 
processed, and the first line where all (if any) listed ACLs are true is 
used as the maximum body size for this reply.

[SNIP]
Configuration Format is: reply_body_max_size SIZE UNITS [acl ...]

So with that information, try...

reply_body_max_size 25600 B limitsize


...or...

reply_body_max_size 25600 KB limitsize


...instead.
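
Applied to all three lines of the original config, that would look like this
(assuming KB is the intended unit; note the 3.0 syntax also drops the allow
keyword):

reply_body_max_size 25600 KB limitsize
reply_body_max_size 1024 KB limitsize2
reply_body_max_size 1024 KB limitsize1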

Chris


Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-22 Thread BUI18
After some further investigation, it seems that RELEASE does not mean that
Squid deletes the object from cache.  It appears that the object is released
from the cache to serve the request.

To restate the problem I am having:

After about 1 to 2 days, Squid seems to re-fetch the entire object even though
the object never changed on the server.

Here's my test scenario:

Object is initially cached.  Max age in squid.conf is set to 1 min.  Before 1
min passes, I request the object and Squid returns TCP_HIT.  After 1 min, I
request the object again.  Squid returns TCP_REFRESH_HIT, which is what I
expect.  I leave the entire system untouched.  A day or a day and a half later,
I ask for the object again and Squid returns TCP_REFRESH_MISS/200.

What could possibly cause Squid to refetch the entire object again?

Could there possibly be a problem with the interaction between IE7 and Squid 
that is forcing Squid to re-fetch the entire object?

Anyone with ideas on why this behavior occurs?

Thanks



- Original Message 
From: BUI18 [EMAIL PROTECTED]
To: Henrik Nordstrom [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Tuesday, October 21, 2008 8:25:14 AM
Subject: Re: [squid-users] Objects Release from Cache Earlier Than Expected

The web server is IIS 6.

1)  Would there be any reason why it would return the full object when in fact
the object has not been modified?
2)  If the min age guarantees freshness of the object, why would Squid actually
issue an IMS request to the web server in the first place?  As I understand
it, Squid should only issue an IMS request when objects become STALE.  As
such, I would have expected Squid to return TCP_HIT instead of TCP_REFRESH_MISS.
3)  My big concern is that the store.log shows that the object was released
(deleted) from cache well before the min age while there is still an abundant
amount of disk space available.

Also, one other question:

When Squid issues an IMS request, which date does it use?  Is it the date/time
at which it retrieved the object, or the Last-Modified date/time of the object
as ascertained by Squid on first retrieval?

Regards
-bui



- Original Message 
From: Henrik Nordstrom [EMAIL PROTECTED]
To: BUI18 [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Tuesday, October 21, 2008 3:50:20 AM
Subject: Re: [squid-users] Objects Release from Cache Earlier Than Expected

On mån, 2008-10-20 at 17:45 -0700, BUI18 wrote:
 I'm not sure what you mean by a newer copy of the same URL? Can you
 elaborate on that a bit?

The cache (i.e. Squid) performed a conditional request to the origin web
server, and the web server returned a new 200 OK object with full
content instead of a small 304 Not Modified.

Regards
Henrik





Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-22 Thread Henrik Nordstrom
On ons, 2008-10-22 at 14:35 -0700, BUI18 wrote:

 Object is initially cached.  Max age in squid.conf is set to 1 min.
 Before 1 min passes, I request the object and Squid returns TCP_HIT.
 After 1 min, I request the object again.  Squid returns
 TCP_REFRESH_HIT, which is what I expect.  I leave the entire system
 untouched.  A day or a day and a half later, I ask for the object
 again and Squid returns TCP_REFRESH_MISS/200.


TCP_HIT is a local hit on the Squid cache. Origin server was not asked.

TCP_REFRESH_HIT is a cache hit after the origin server was asked if the
object is still fresh.

TCP_REFRESH_MISS is when the origin server says the object is no longer
fresh and returns a new copy in response to the conditional query sent by the
cache (the same query as in TCP_REFRESH_HIT, but a different response from the
web server).

 What could possibly cause Squid to refetch the entire object again?

A better question is why your server responds with the entire object on
a If-Modified-Since type query if it hasn't been modified. It should
have responded with a 304 response as it did in the TCP_REFRESH_HIT
case.
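
One way to test the origin server's conditional-request handling directly,
bypassing Squid (URL and date are placeholders):

curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'If-Modified-Since: Tue, 21 Oct 2008 00:00:00 GMT' \
  http://origin.example.com/object.bin

A correctly behaving server returns 304 while the object is unchanged, and
200 with the full body otherwise.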

Regards
Henrik




Re: [squid-users] squid3 keeps many idle connections

2008-10-22 Thread Henrik Nordstrom
On ons, 2008-10-22 at 11:31 +0200, Malte Schröder wrote:
 Hello,
 Squid3 seems to keep a LOT (over a thousand) of idle connections to its
 parent proxy.

Not normal.

Squid version?

And how did you measure these? You are not counting TIME_WAIT sockets,
are you?
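
A quick, hedged way to check (assuming the parent listens on port 3128): count
the parent-facing sockets by TCP state, so TIME_WAIT entries are not mistaken
for idle persistent connections:

netstat -tan | awk '$5 ~ /:3128$/ {print $6}' | sort | uniq -c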

Regards
Henrik





Re: [squid-users] Announcement: txforward (for php behind squid)

2008-10-22 Thread Henrik Nordstrom
On ons, 2008-10-22 at 15:02 +0200, Francois Cartegnie wrote:
 On Wednesday 22 October 2008, you wrote:
  Interesting, but it is missing a crucial piece. There is nothing which
  establishes trust. If the same server can be reached directly without
  using the reverse proxy then security is bypassed, likewise if the module
  is loaded on a server not using a reverse proxy.

 That's what the README and the warning in the phpinfo output are for...

And everyone reads documentation... and remembers to uninstall modules
that are no longer used...

Adding the small trusted server acl check isn't much code, and would
make this module generic and suitable as a version 1.0.

Note: The support for chains of proxies is just an idea for future
improvement, not a criticism.

Regards
Henrik




Re: [squid-users] Diagnosing RPCviaHTTP setup?

2008-10-22 Thread Henrik Nordstrom
On ons, 2008-10-22 at 16:49 +0200, Jakob Curdes wrote:
 .. I am trying to set up an RPCviaHTTP reverse proxy scenario as described in
 
 http://wiki.squid-cache.org/ConfigExamples/SquidAndRPCOverHttp
 
 Squid starts with my configuration (like the example plus some standard
 ACLs), but browser connections to the SSL port on the outside take
 forever and eventually time out.
 If I connect with telnet to port 443 I get some sort of connection, so I
 suppose it's not a firewall issue. In the cache or access log I see
 nothing, even after turning up debugging.
 What else can I do to troubleshoot this? What should I see when
 telnetting into 443?

Is there anything in cache.log and/or access.log?

Also try connecting with openssl

openssl s_client -connect ip:443

This should show you the SSL details.

Regards
Henrik




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-22 Thread BUI18
Henrik -  Thanks for taking time out to respond to my questions.  I'm 
completely stumped on this one.

In our production environment, we set min and max to 5 and 7 days, respectively.

As I understand it, if the request is made for the object in, say, 3 or 4
days (less than 5 days), I would always expect a TCP_HIT.

But again, after 1 to 2 days, I see TCP_REFRESH_MISS and I get the whole object.

I thought that setting the min to 5 days would guarantee freshness for up to
5 days.

Do you know of a problem that might cause Squid to ignore the rules for
determining whether an object is fresh?

We used Fiddler and actually removed the If-Modified-Since part of the
request, and still we get TCP_REFRESH_MISS.

Do you have any other ideas on areas we might want to check to see what could 
possibly be causing this behavior?

Thanks





- Original Message 
From: Henrik Nordstrom [EMAIL PROTECTED]
To: BUI18 [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Wednesday, October 22, 2008 4:06:33 PM
Subject: Re: [squid-users] Objects Release from Cache Earlier Than Expected

On ons, 2008-10-22 at 14:35 -0700, BUI18 wrote:

 Object is initially cached.  Max age in squid.conf is set to 1 min.
 Before 1 min passes, I request the object and Squid returns TCP_HIT.
 After 1 min, I request the object again.  Squid returns
 TCP_REFRESH_HIT, which is what I expect.  I leave the entire system
 untouched.  A day or a day and a half later, I ask for the object
 again and Squid returns TCP_REFRESH_MISS/200.


TCP_HIT is a local hit on the Squid cache. Origin server was not asked.

TCP_REFRESH_HIT is a cache hit after the origin server was asked if the
object is still fresh.

TCP_REFRESH_MISS is when the origin server says the object is no longer
fresh and returns a new copy in response to the conditional query sent by the
cache (the same query as in TCP_REFRESH_HIT, but a different response from the
web server).

 What could possibly cause Squid to refetch the entire object again?

A better question is why your server responds with the entire object to
an If-Modified-Since type query if it hasn't been modified. It should
have responded with a 304 response as it did in the TCP_REFRESH_HIT
case.

Regards
Henrik



  


Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-22 Thread Henrik Nordstrom
I am talking about If-Modified-Since between Squid and the web server,
not between the browser and Squid.


On ons, 2008-10-22 at 17:57 -0700, BUI18 wrote:
 Henrik -  Thanks for taking time out to respond to my questions.  I'm 
 completely stumped on this one.
 
 In our production environment, we set min and max to 5 and 7 days, 
 respectively.
 
 As I understand it, if the request is made for the object in, say, 3 or 4
 days (less than 5 days), I would always expect a TCP_HIT.
 
 But again, after 1 to 2 days, I see TCP_REFRESH_MISS and I get the whole
 object.
 
 I thought that setting the min to 5 days would guarantee freshness for up to
 5 days.
 
 Do you know of a problem that might cause Squid to ignore the rules for
 determining whether an object is fresh?
 
 We used Fiddler and actually removed the If-Modified-Since part of the
 request, and still we get TCP_REFRESH_MISS.
 
 Do you have any other ideas on areas we might want to check to see what could 
 possibly be causing this behavior?
 
 Thanks
 
 
 
 
 
 - Original Message 
 From: Henrik Nordstrom [EMAIL PROTECTED]
 To: BUI18 [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Sent: Wednesday, October 22, 2008 4:06:33 PM
 Subject: Re: [squid-users] Objects Release from Cache Earlier Than Expected
 
 On ons, 2008-10-22 at 14:35 -0700, BUI18 wrote:
 
  Object is initially cached.  Max age in squid.conf is set to 1 min.
  Before 1 min passes, I request the object and Squid returns TCP_HIT.
  After 1 min, I request the object again.  Squid returns
  TCP_REFRESH_HIT, which is what I expect.  I leave the entire system
  untouched.  A day or a day and a half later, I ask for the object
  again and Squid returns TCP_REFRESH_MISS/200.
 
 
 TCP_HIT is a local hit on the Squid cache. Origin server was not asked.
 
 TCP_REFRESH_HIT is a cache hit after the origin server was asked if the
 object is still fresh.
 
 TCP_REFRESH_MISS is when the origin server says the object is no longer
 fresh and returns a new copy in response to the conditional query sent by
 the cache (the same query as in TCP_REFRESH_HIT, but a different response
 from the web server).
 
  What could possibly cause Squid to refetch the entire object again?
 
 A better question is why your server responds with the entire object to
 an If-Modified-Since type query if it hasn't been modified. It should
 have responded with a 304 response as it did in the TCP_REFRESH_HIT
 case.
 
 Regards
 Henrik
 
 
 
   




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-22 Thread BUI18
But why would Squid even issue an If-Modified-Since to the origin server if
the min value is set to 5 days? Wouldn't this object be seen as fresh and
just be served up by Squid as a TCP_HIT?
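
For reference, a refresh_pattern along these lines is presumably what is in
play (a sketch, not the poster's actual config; min and max are given in
minutes, so 5 days = 7200 and 7 days = 10080, and the 20 percent value is an
arbitrary placeholder):

refresh_pattern . 7200 20% 10080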







- Original Message 
From: Henrik Nordstrom [EMAIL PROTECTED]
To: BUI18 [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Wednesday, October 22, 2008 7:19:51 PM
Subject: Re: [squid-users] Objects Release from Cache Earlier Than Expected

I am talking about If-Modified-Since between Squid and the web server,
not between the browser and Squid.


On ons, 2008-10-22 at 17:57 -0700, BUI18 wrote:
 Henrik -  Thanks for taking time out to respond to my questions.  I'm 
 completely stumped on this one.
 
 In our production environment, we set min and max to 5 and 7 days, 
 respectively.
 
 As I understand it, if the request is made for the object in, say, 3 or 4
 days (less than 5 days), I would always expect a TCP_HIT.
 
 But again, after 1 to 2 days, I see TCP_REFRESH_MISS and I get the whole
 object.
 
 I thought that setting the min to 5 days would guarantee freshness for up to
 5 days.
 
 Do you know of a problem that might cause Squid to ignore the rules for
 determining whether an object is fresh?
 
 We used Fiddler and actually removed the If-Modified-Since part of the
 request, and still we get TCP_REFRESH_MISS.
 
 Do you have any other ideas on areas we might want to check to see what could 
 possibly be causing this behavior?
 
 Thanks
 
 
 
 
 
 - Original Message 
 From: Henrik Nordstrom [EMAIL PROTECTED]
 To: BUI18 [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Sent: Wednesday, October 22, 2008 4:06:33 PM
 Subject: Re: [squid-users] Objects Release from Cache Earlier Than Expected
 
 On ons, 2008-10-22 at 14:35 -0700, BUI18 wrote:
 
  Object is initially cached.  Max age in squid.conf is set to 1 min.
  Before 1 min passes, I request the object and Squid returns TCP_HIT.
  After 1 min, I request the object again.  Squid returns
  TCP_REFRESH_HIT, which is what I expect.  I leave the entire system
  untouched.  A day or a day and a half later, I ask for the object
  again and Squid returns TCP_REFRESH_MISS/200.
 
 
 TCP_HIT is a local hit on the Squid cache. Origin server was not asked.
 
 TCP_REFRESH_HIT is a cache hit after the origin server was asked if the
 object is still fresh.
 
 TCP_REFRESH_MISS is when the origin server says the object is no longer
 fresh and returns a new copy in response to the conditional query sent by
 the cache (the same query as in TCP_REFRESH_HIT, but a different response
 from the web server).
 
  What could possibly cause Squid to refetch the entire object again?
 
 A better question is why your server responds with the entire object to
 an If-Modified-Since type query if it hasn't been modified. It should
 have responded with a 304 response as it did in the TCP_REFRESH_HIT
 case.
 
 Regards
 Henrik