I'm cool w/ that… treat non-ascending ranges as potentially hinky,
count those, and only allow a certain number of them…
Still not sure if we should count overlaps as bad or not…
that RFC 2616 example troubles me:
14.35.1 Byte Ranges
- Several legal but not canonical specifications of the second 500
bytes (byte offsets 500-999, inclusive):
bytes=500-600,601-999
bytes=500-700,601-999
The 2nd seems to imply that one *MUST* merge adjacent overlaps to get the
correct response (500 bytes, not 201+399=600 bytes).
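To make the arithmetic concrete, a quick sketch (Python, with the offsets taken straight from the RFC example) of why serving the second spec literally over-counts:

```python
# Ranges from the RFC 2616 example: bytes=500-700,601-999 (inclusive offsets).
ranges = [(500, 700), (601, 999)]

# Served literally, part by part, the payload totals 201 + 399 = 600 bytes:
literal_total = sum(last - first + 1 for first, last in ranges)

# Merged into the single range 500-999 it covers, it's the intended 500 bytes:
merged_total = 999 - 500 + 1

print(literal_total, merged_total)  # 600 500
```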
With all that in mind, I am still of the opinion that any
adjacent overlaps should be merged…
So how about we parse Range and merge all adjacent overlaps
(or abutments: 200-249,250-999 would merge into 200-999)?
We then count how many non-ascends are in that revised set of
ranges and 200 out if it exceeds some config limit. We can also
provide some overall limit on the number of ranges, or at least
the ability to add one (a default of 0 means unlimited)…
Sound OK?
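For concreteness, here's a rough sketch of the parse/merge/count idea (Python, not httpd code — the function names and default limits are hypothetical stand-ins for the proposed config knobs):

```python
def merge_adjacent(ranges):
    """Merge each range into the previous one when it overlaps or abuts it
    (e.g. 500-700,601-999 -> 500-999; 200-249,250-999 -> 200-999).
    ranges: (first, last) inclusive byte offsets, in header order."""
    merged = []
    for first, last in ranges:
        if merged and merged[-1][0] <= first <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], last))
        else:
            merged.append((first, last))
    return merged

def too_hinky(ranges, max_non_ascending=5, max_ranges=0):
    """Return True if the request should fall back to a plain 200.
    max_non_ascending and max_ranges stand in for the proposed config
    limits; max_ranges of 0 means unlimited, as suggested above."""
    merged = merge_adjacent(ranges)
    if max_ranges and len(merged) > max_ranges:
        return True
    # After merging, any range that doesn't start past the previous
    # range's end is a non-ascend (a backward jump or stray overlap).
    non_ascends = sum(1 for i in range(1, len(merged))
                      if merged[i][0] <= merged[i - 1][1])
    return non_ascends > max_non_ascending
```

Under this sketch, the JPEG2000 case (many ascending, non-overlapping ranges) sails through untouched, while a header full of backward jumps trips the limit.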
On Aug 24, 2011, at 4:39 PM, Greg Ames wrote:
> On Wed, Aug 24, 2011 at 3:19 PM, Jim Jagielski <[email protected]> wrote:
>
> >
> > If we only merge adjacent ascending ranges, then it seems like an attacker
> > could just craft a header where the ranges jump around and dodge our fix.
> >
>
> I think no matter what, we should still have some sort of
> upper limit on the number of range-sets we accept… after all,
> merge doesn't prevent jumping around ;)
>
>
> The problem I have with the upper limit on the number of range sets is the
> use case someone posted for JPEG2000 streaming. That has a lot of range sets
> but is completely legit. However, the ranges are in ascending order and
> don't overlap. Maybe we could count overlaps and/or non-ascending order
> ranges and fall back to 200 + the whole object if it exceeds a limit.
>
> Greg