[squid-users] read_timeout

2014-06-27 Thread Jeremy Hustache

Hello,

Is it possible to set the read_timeout value to a negative value in order to
have an infinite timeout on this event?


I use Squid Cache: Version 2.7.STABLE9. When I try to set read_timeout to
-1, an assert in commSetTimeout() crashes the Squid daemon.


Thanks


Re: [squid-users] read_timeout

2014-06-27 Thread Jeremy Hustache
OK, if I understand correctly, a negative read_timeout value resets the global
timeout structure.


So, does a value of 0 for the read_timeout token in the squid.conf file mean no
timeout?


Thanks for your answer


On 06/27/14 14:43, Jeremy Hustache wrote:

Hello,

Is it possible to set the read_timeout value to a negative value in order
to have an infinite timeout on this event?


I use Squid Cache: Version 2.7.STABLE9. When I try to set read_timeout to
-1, an assert in commSetTimeout() crashes the Squid daemon.


Thanks






Re: [squid-users] read_timeout

2014-06-27 Thread Alex Rousskov
On 06/27/2014 07:56 AM, Jeremy Hustache wrote:

 OK, if I understand correctly, a negative read_timeout value resets the global
 timeout structure.
 
 So, does a value of 0 for the read_timeout token in the squid.conf file mean no
 timeout?


I did not check Squid2 sources, but AFAICT, Squid3 does not treat a zero
read_timeout value specially, and I doubt it should. Squid should check
for overflows instead, but does not (yet?).

If you want a large read_timeout, use a large value. For example, two
years should be large enough for virtually all practical purposes and
small enough to prevent (current time + timeout) overflows in the
foreseeable future.
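
For illustration, a minimal squid.conf sketch along those lines (the value is
arbitrary; Squid accepts the usual time units such as minutes, hours, days,
and years):

# effectively "infinite": large enough for practical purposes, small
# enough to avoid a (current time + timeout) overflow
read_timeout 2 years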

Please note that large timeouts create stuck connections in most
deployment environments, and those stuck connections not only consume
file descriptors but may also eat tens of MB of RAM in environments where
Squid opens SSL connections to servers.


HTH,

Alex.


 On 06/27/14 14:43, Jeremy Hustache wrote:
 Hello,

 Is it possible to set the read_timeout value to a negative value in order
 to have an infinite timeout on this event?

 I use Squid Cache: Version 2.7.STABLE9. When I try to set read_timeout to
 -1, an assert in commSetTimeout() crashes the Squid daemon.

 Thanks





Re: [squid-users] read_timeout and fwdServerClosed: re-forwarding

2007-12-06 Thread Chris Hostetter


:  So it kind of seems like I'm out of luck, right?  My only option being to
:  try 2.HEAD, which *may* have the behavior I'm describing.
:
: It's part and parcel of free software. We can be paid to test it in a lab
: and give you a certain answer if you'd like. :)

Oh, believe me -- I know the score ... It seems like I chant the mantra of
"patches welcome!" :) at least once a week over on the Apache lists ... I
just wanted to make sure I wasn't missing anything obvious about how to
achieve this sort of thing with the current STABLE releases.

I'll try to find some time to test out the HEAD and report back my
findings (although being out sick for a week and a half has put me pretty
far behind on some other work, so I'm not sure if I'll ever get to that).

But I'll also open a bug to track the fact that 10 retries is hardcoded in the
event of read_timeout.  (Even if it never gets changed, at least it will
be out there for other people to find.)

Thanks for all your help everybody.



-Hoss


Re: [squid-users] read_timeout and fwdServerClosed: re-forwarding

2007-12-05 Thread Chris Hostetter

Sorry for the late reply, I was seriously sick last week and basically
dead to the world...

:  The problem I'm running into is figuring out a way to get the analogous 
:  behavior when the origin server is up but taking too long to respond 
:  to the validation requests.   Ideally (in my mind) squid would have a 

: Hmm... it might be a good idea to try Squid-2.HEAD. This kind of thing
: behaves a little differently there than in 2.6.

Alas ... I don't think I could convince my boss to get on board with the idea
of using a devel release.  Then again, I'm not too clear on how
branch/release management is done in Squid ... do merges happen from
2.HEAD to 2.6 (in which case does 2.6.STABLE17 have the behavior you are
referring to?), or will 2.HEAD ultimately become 2.7 once it's more stable?


:  read_timeout was the only option I could find that seemed to relate to 
:  how long squid would wait for an origin server once connected -- but it 
:  has the retry problems previously discussed.  Even if it didn't retry, and 
:  returned the stale content as soon as the read_timeout was exceeded, 
:  I'm guessing it wouldn't wait for the fresh response from the origin 
:  server to cache it for future requests.
: 
: read_timeout in combination with forward_timeout should take care of the
: timeout part...

What do you mean by "in combination with forward_timeout"?
forward_timeout is just the 'connect' timeout for origin server requests,
right?  So I guess you mean that if I have a magic value of XX seconds
that I'm willing to wait for data to come back, I need to set
forward_timeout and read_timeout such that they add up to XX, right?  But as
you say, that just solves the timeout problem; it doesn't get me stale
content.
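
Something like this, I guess (purely illustrative values; I would have to
double-check the squid.conf documentation for the exact semantics of each
directive):

# cap the total time Squid spends trying to forward the request
forward_timeout 2 seconds
# cap how long Squid waits for data on an established server connection
read_timeout 3 seconds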

In my case, I'm not worried about the connect time for the origin server
-- if it doesn't connect right away, give up; no problem there.  It's
getting stale content returned if the total request time exceeds XX
seconds that I'm worried about (without getting a bunch of
automatic retries).


So it kind of seems like I'm out of luck, right?  My only option being to
try 2.HEAD, which *may* have the behavior I'm describing.


:  for a fresh response) -- but it doesn't seem to work as advertised (see 
:  bug#2126).
: 
: Haven't looked at that report yet.. but a guess is that the refresh
: failed due to read_timeout?

(Actually that was totally orthogonal to the read_timeout issues ... with
refresh_stale_hit set to Y seconds, all requests are still considered cache
hits up to Y seconds after they expire -- with no attempt to validate.)


-Hoss


Re: [squid-users] read_timeout and fwdServerClosed: re-forwarding

2007-12-05 Thread Adrian Chadd
On Wed, Dec 05, 2007, Chris Hostetter wrote:

 : Hmm... it might be a good idea to try Squid-2.HEAD. This kind of thing
 : behaves a little differently there than in 2.6.
 
 Alas ... I don't think I could convince my boss to get on board with the idea
 of using a devel release.  Then again, I'm not too clear on how
 branch/release management is done in Squid ... do merges happen from
 2.HEAD to 2.6 (in which case does 2.6.STABLE17 have the behavior you are
 referring to?), or will 2.HEAD ultimately become 2.7 once it's more stable?

Squid-2.HEAD should eventually become Squid-2.7.

 So it kind of seems like I'm out of luck, right?  My only option being to
 try 2.HEAD, which *may* have the behavior I'm describing.

It's part and parcel of free software. We can be paid to test it in a lab
and give you a certain answer if you'd like. :)


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


Re: [squid-users] read_timeout and fwdServerClosed: re-forwarding

2007-11-27 Thread Henrik Nordstrom
On Mon, 2007-11-26 at 14:48 -0800, Chris Hostetter wrote:

 But in situations like that, wouldn't the normal behavior of a long 
 read_timeout (I believe the default is 15 minutes) be sufficient?

Yes, and it is... the retries are only done if within the forward_timeout.

 : Hm, what about retry_on_error ? Does that do anything in an accelerator
 : setup?
 
 It might do something, but I'm not sure what :) ... even when I set it
 explicitly to off, Squid still retries when the read_timeout is exceeded.

It only applies when Squid has received an error from the contacted
server...

 In the event that a request is not in the cache at all, and an origin 
 server takes too long to send a response, using the quick_abort 0 option 
 in squid does exactly what I hoped it would: squid continues to wait 
 around for the response so that it is available in the cache for future 
 requests.

good.

 In the event that stale content is already in the cache, and the origin 
 server is down and won't accept any connections, squid does what I'd 
 hoped it would: returns the stale content even though it can't be 
 validated (albeit, without a proper warning, see bug#2119)

good.

 The problem I'm running into is figuring out a way to get the analogous 
 behavior when the origin server is up but taking too long to respond 
 to the validation requests.   Ideally (in my mind) squid would have a 
 force_stale_response_after XX milliseconds option, such that if squid 
 has a stale response available in the cache, it will return immediately 
 once XX milliseconds have elapsed since the client connected.  Any in 
 progress validation requests would still be completed/cached if they met 
 the conditions of the quick_abort option just as if the client had 
 aborted the connection without receiving any response.

Hmm... it might be a good idea to try Squid-2.HEAD. This kind of thing
behaves a little differently there than in 2.6.

 read_timeout was the only option I could find that seemed to relate to 
 how long squid would wait for an origin server once connected -- but it 
 has the retry problems previously discussed.  Even if it didn't retry, and 
 returned the stale content as soon as the read_timeout was exceeded, 
 I'm guessing it wouldn't wait for the fresh response from the origin 
 server to cache it for future requests.

read_timeout in combination with forward_timeout should take care of the
timeout part...

 FWIW: The refresh_stale_hit option seemed like a promising mechanism for
 ensuring that when concurrent requests come in, all but one would get 
 a stale response while waiting for a fresh response to be cached (which 
 could help minimize the number of clients that give up while waiting 
 for a fresh response) -- but it doesn't seem to work as advertised (see 
 bug#2126).

I haven't looked at that report yet... but my guess is that the refresh
failed due to read_timeout?

Regards
Henrik




Re: [squid-users] read_timeout and fwdServerClosed: re-forwarding

2007-11-26 Thread Chris Hostetter

: Tip: fwdReforwardableStatus() I think is the function which implements
: the behaviour you're seeing. That and fwdCheckRetry.

My C Fu isn't strong enough for me to feel confident that I would even 
know what to look for if I started digging into the code ... I mainly just 
wanted to clarify that:
  a) this is expected behavior
  b) there isn't a(n existing) config option available to change this behavior

: You could set the HTTP Gateway timeout to return 0 so the request
: isn't forwarded and see if that works, or the n_tries check in
: fwdCheckRetry().

I'm not sure I understand ...  are you saying there is a Squid option
to set an explicit gateway timeout value? (such that origin requests which
take longer than X cause Squid to return a 504 to the client) ... This
would be ideal -- the only reason I was even experimenting with read_timeout
was that I hadn't found any documentation of anything like this. (But
since the servers I'm dealing with don't write anything until the entire
response is ready, I figured I could make do with the read_timeout.)

: I could easily make the 10 retry count a configurable parameter.

That might be prudent.  It seems like strange behavior to have hardcoded 
in squid.

: The feature, IIRC, was to work around transient network issues which
: would bring up error pages in a traditional forward-proxying setup.

But in situations like that, wouldn't the normal behavior of a long 
read_timeout (I believe the default is 15 minutes) be sufficient?

: Hm, what about retry_on_error ? Does that do anything in an accelerator
: setup?

It might do something, but I'm not sure what :) ... even when I set it
explicitly to off, Squid still retries when the read_timeout is exceeded.


Perhaps I'm approaching things the wrong way -- I set out with some
specific goals in mind, did some experimenting with various options to try
to reach that goal, and then asked questions when I encountered behavior I
couldn't explain.  Let me back up and describe my goals, and perhaps
someone can offer some insight into the appropriate way to achieve
them...

I'm the middle man between origin servers which respond to every request
by dynamically generating (relatively small) responses, and clients that
make GET requests to these servers but are only willing to wait around
for a short amount of time (on the order of hundreds of milliseconds) to get
the responses before they abort the connection.  The clients would rather
get no response (or an error) than wait around for a long time -- the
servers, meanwhile, would rather the clients got stale responses than no
responses (or error responses).  My goal, using Squid as an accelerator,
is to maximize the satisfaction of both the clients and the servers.

In the event that a request is not in the cache at all, and an origin 
server takes too long to send a response, using the quick_abort 0 option 
in squid does exactly what I hoped it would: squid continues to wait 
around for the response so that it is available in the cache for future 
requests.
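
The relevant knobs here are the quick_abort_min/max/pct family in squid.conf;
per their documentation, the "keep fetching even though the client went away"
behavior corresponds to something like the sketch below (an illustration, not
a recommendation):

# never abort a cachable retrieval once it has started, even if the
# client disconnects; quick_abort_max/pct then don't come into play
quick_abort_min -1 KB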

In the event that stale content is already in the cache, and the origin
server is down and won't accept any connections, Squid does what I'd
hoped it would: it returns the stale content even though it can't be
validated (albeit without a proper warning, see bug#2119).

The problem I'm running into is figuring out a way to get the analogous
behavior when the origin server is up but taking too long to respond
to the validation requests.  Ideally (in my mind) Squid would have a
force_stale_response_after XX milliseconds option, such that if Squid
has a stale response available in the cache, it will return it immediately
once XX milliseconds have elapsed since the client connected.  Any in-progress
validation requests would still be completed/cached if they met
the conditions of the quick_abort option, just as if the client had
aborted the connection without receiving any response.

Is there a way to get behavior like this (or close to it) from squid?


read_timeout was the only option I could find that seemed to relate to 
how long squid would wait for an origin server once connected -- but it 
has the retry problems previously discussed.  Even if it didn't retry, and 
returned the stale content as soon as the read_timeout was exceeded, 
I'm guessing it wouldn't wait for the fresh response from the origin 
server to cache it for future requests.

FWIW: The refresh_stale_hit option seemed like a promising mechanism for
ensuring that when concurrent requests come in, all but one would get 
a stale response while waiting for a fresh response to be cached (which 
could help minimize the number of clients that give up while waiting 
for a fresh response) -- but it doesn't seem to work as advertised (see 
bug#2126).



-Hoss


[squid-users] read_timeout and fwdServerClosed: re-forwarding

2007-11-21 Thread Chris Hostetter


Greetings,

I'm trying to make sense of some behavior I'm observing related to the
read_timeout.

I'm dealing with an accelerator setup, where I'd rather return stale
content (or an error) than wait for the origin server to return fresh
content if it is taking too long to respond.

I was hoping that by setting the read_timeout to something very low
(i.e. a few seconds) I could get that behavior -- granted, if the origin
server sent back a few bytes every second, Squid would keep waiting,
but as I said: accelerator setup; I know how the origin server
behaves.  For every request it does a bunch of data crunching (which
occasionally takes a while) before it ever writes a single byte back
to the client.
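
In squid.conf terms, the experiment boils down to something like this (the
exact value is illustrative):

# give up on an origin connection if no data arrives for a few seconds
read_timeout 3 seconds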

What I've observed from testing with a simple JSP that does a sleep
before writing back the response is that any time the read_timeout is
exceeded, Squid will retry the request, and if that retry also exceeds
the read_timeout, it will retry again, up to a total of 10 times (10
retries, 11 total requests to the origin server) before responding
back to the client.  It will do these retries even if there is a stale
entry in the cache for this request (returning the stale content
eventually -- but without a 'Warning' header).


Debugging logs for these retries look like this...


2007/11/20 14:04:10| checkTimeouts: FD 13 Expired
2007/11/20 14:04:10| checkTimeouts: FD 13: Call timeout handler
2007/11/20 14:04:10| httpTimeout: FD 13: 
'http://localhost/test-read-timeout.jsp?123'
2007/11/20 14:04:10| fwdFail: ERR_READ_TIMEOUT Gateway Time-out
http://localhost/test-read-timeout.jsp?123
   ...
2007/11/20 14:04:10| fwdServerClosed: FD 13 
http://localhost/test-read-timeout.jsp?123
2007/11/20 14:04:10| fwdServerClosed: re-forwarding (2 tries, 12 secs)
   ...
2007/11/20 14:04:16| checkTimeouts: FD 13 Expired
2007/11/20 14:04:16| checkTimeouts: FD 13: Call timeout handler
2007/11/20 14:04:16| httpTimeout: FD 13: 
'http://localhost/test-read-timeout.jsp?123'
2007/11/20 14:04:16| fwdFail: ERR_READ_TIMEOUT Gateway Time-out
http://localhost/test-read-timeout.jsp?123
   ...
2007/11/20 14:04:16| fwdServerClosed: FD 13 
http://localhost/test-read-timeout.jsp?123
2007/11/20 14:04:16| fwdServerClosed: re-forwarding (3 tries, 18 secs)


This seems very counterintuitive to me -- if the origin server accepts a
connection but takes a really long time to respond, in my experience that
typically means it's overloaded, and slamming it with 11 times the number
of requests isn't going to help anything.


The only config option I could find that seemed to relate to retries
was maximum_single_addr_tries, but setting it to 1 had no effect.  I
did, however, notice this comment in its docs...

#   Note: This is in addition to the request re-forwarding which
#   takes place if Squid fails to get a satisfying response.

...this sounds like what I'm seeing -- is there an option to control
the number of re-forwarding attempts (to be something smaller than
10), or any further documentation on the definition of a "satisfying
response"?




-Hoss


Re: [squid-users] read_timeout and fwdServerClosed: re-forwarding

2007-11-21 Thread Adrian Chadd
On Wed, Nov 21, 2007, Chris Hostetter wrote:
 
 Greetings,
 
 I'm trying to make sense of some behavior I'm observing related to the
 read_timeout.

Tip: fwdReforwardableStatus(), I think, is the function which implements
the behaviour you're seeing.  That, and fwdCheckRetry().

You could set the HTTP Gateway timeout to return 0 so the request
isn't forwarded and see if that works, or the n_tries check in fwdCheckRetry().

I could easily make the 10 retry count a configurable parameter.

The feature, IIRC, was to work around transient network issues which
would bring up error pages in a traditional forward-proxying setup.

 I'm dealing with an accelerator setup, where I'd rather return stale
 content (or an error) than wait for the origin server to return fresh
 content if it is taking too long to respond.

Hm, what about retry_on_error ? Does that do anything in an accelerator
setup?




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -