On Wed, 13 Jan 2010, Dan Poirier wrote:
> > > We might simplify the model by not exposing the internal extending
> > > of the timeout. Just let the admin specify an overall max time, a
> > > minimum rate, or both:
> > >
> > > HeaderTimeout: Maximum seconds to read the entire header.
> > >
> > > HeaderMinRate: Minimum rate (bytes/second) allowed when reading the
> > > request header. If the rate drops below this, the request is timed
> > > out.
> > >
> > > We'd enforce both if specified. In that case, HeaderTimeout would
> > > act like headermax. Internally we'd probably implement HeaderMinRate
> > > by gradually extending a timeout, but we wouldn't be tied to that.
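
For illustration, a configuration under the model quoted above might
look like this (HeaderTimeout and HeaderMinRate are the proposed names
from this thread, not directives httpd actually accepts):

    # Abort if reading the whole header takes longer than 30 seconds...
    HeaderTimeout 30
    # ...or if the client ever drops below 500 bytes/second.
    HeaderMinRate 500

Both limits would be enforced independently; whichever trips first
ends the request.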
> > But that would result in different behaviour, wouldn't it?
> >
> > E.g. with the initial timeout set to 10, the maximum timeout set to
> > 30, and the minimum rate set to 500, the client can wait for 10
> > seconds before it has to start sending data at a rate of 500
> > bytes/sec.
> >
> > If I understand your model correctly, we would cancel the request at
> > any time the client falls below 500 bytes/sec. So if it starts out at
> > only 200 bytes/sec, we would cancel it immediately.
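
To make the difference concrete, here is the timeline under the
initial/maximum timeout model with those numbers (a sketch of the
semantics, not actual configuration syntax):

    # initial timeout 10s, maximum timeout 30s, minimum rate 500 B/s
    #   t = 0..10s : initial timeout; no data has to arrive yet
    #   after that : each 500 bytes received extend the deadline by
    #                one second (1/500 s per byte)
    #   t = 30s    : absolute maximum; the request is aborted
    #
    # A client trickling at 200 bytes/sec gains only 0.4 s of deadline
    # per elapsed second, so it falls behind and eventually times out,
    # rather than being cancelled on the first slow read as in the
    # strict minimum-rate model.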
> Yes, my proposal probably simplifies things too much. We could allow
> some time for things to get going: maybe not start enforcing the
> minimum rate until after some number of seconds, with a reasonable
> default that could also be configured:
>
> HeaderStartupTime: time in seconds before the specified HeaderMinRate
> starts being enforced. Default = 10 seconds.
>
> I'm not thrilled with that, though; it's inelegant.
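
Spelled out, that variant needs three directives, e.g. (again the
proposed names from this thread, not accepted httpd syntax):

    HeaderStartupTime 10  # grace period before rate enforcement starts
    HeaderMinRate 500     # bytes/second required after the grace period
    HeaderTimeout 30      # absolute cap, enforced from the start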
In any case, we need at least three values to completely define the
behaviour. IIRC I chose the initial timeout/maximum timeout over the
startup time/maximum timeout approach because it was easier to implement.
I still think it's OK, given that for normal configurations there is
not much difference. But the "headerinit" keyword is just a bit too
cryptic for my taste.

Do you agree that using RequestReadTimeout instead of RequestTimeout and
using a single keyword with a timeout range is more descriptive than the
current syntax?
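
For comparison, a single keyword with a timeout range might be written
like this (a sketch of the syntax under discussion; the exact spelling
was not yet settled):

    # Wait up to 10 s for the first header byte; each 500 bytes
    # received extend the timeout, up to an absolute cap of 30 s.
    RequestReadTimeout header=10-30,MinRate=500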