> -----Original Message-----
> From: Jim Jagielski [mailto:[email protected]]
> Sent: Friday, 26. August 2011 13:49
> To: [email protected]
> Subject: Re: svn commit: r1161661 -
> /httpd/httpd/trunk/modules/http/byterange_filter.c
>
>
> On Aug 26, 2011, at 2:50 AM, Ruediger Pluem wrote:
> >
> > I followed through the above algorithm with a range of "0-1,1000-1001"
> > and the result was a range of "0-1001" which feels wrong. Culprit seems
> > to be the ((end-1) >= oend) check. Shouldn't that be oend >= (start-1)?
>
> Yeah, that is wrong. Investigating. That part was to catch
> things like 10-20,21-100
>
> >
> >> +            } else {
> >> +                char *nr = apr_psprintf(r->pool, "%" APR_OFF_T_FMT "-%" APR_OFF_T_FMT,
> >> +                                        ostart, oend);
> >> +                merged = apr_pstrcat(r->pool, merged, (num_ranges++ ? "," : ""), nr, NULL);
> >
> > Doing this in a loop feels bad as it creates new buffers over and over.
> > Shouldn't we create a buffer of the size of strlen(range) and add to this
> > subsequently (we know that the length of the merged range <= length of range)?
>
> My plan is to put each range into an array and at the end flatten
> it via apr_array_pstrcat()
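
Just to make the above concrete, here is roughly how the merge could look
with the adjacency test turned around (oend >= start - 1) and the ranges
collected in an array that gets flattened via apr_array_pstrcat() at the
end. This is only a sketch to illustrate the idea, not the code in
byterange_filter.c; range_t, merge_ranges and the assumption that the
ranges are already sorted by start offset are mine:

#include "apr_pools.h"
#include "apr_strings.h"
#include "apr_tables.h"

typedef struct {
    apr_off_t start;
    apr_off_t end;
} range_t;

/* Sketch only: merge ranges that overlap or are adjacent and return
 * them as a comma separated string.  Assumes n > 0 and that the
 * ranges are sorted by start offset. */
static char *merge_ranges(apr_pool_t *pool, const range_t *ranges, int n)
{
    apr_array_header_t *out = apr_array_make(pool, n, sizeof(char *));
    apr_off_t ostart = ranges[0].start;
    apr_off_t oend   = ranges[0].end;
    int i;

    for (i = 1; i < n; i++) {
        if (oend >= ranges[i].start - 1) {
            /* overlapping or adjacent (e.g. 10-20,21-100): extend */
            if (ranges[i].end > oend) {
                oend = ranges[i].end;
            }
        }
        else {
            /* disjoint (e.g. 0-1,1000-1001): emit and start a new range */
            *(char **)apr_array_push(out) =
                apr_psprintf(pool, "%" APR_OFF_T_FMT "-%" APR_OFF_T_FMT,
                             ostart, oend);
            ostart = ranges[i].start;
            oend   = ranges[i].end;
        }
    }
    *(char **)apr_array_push(out) =
        apr_psprintf(pool, "%" APR_OFF_T_FMT "-%" APR_OFF_T_FMT,
                     ostart, oend);

    return apr_array_pstrcat(pool, out, ',');
}

With that check "0-1,1000-1001" stays two separate ranges, while
"10-20,21-100" becomes "10-100".
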
I thought about that as well, but I think a combination of a preallocated
buffer and apr_snprintf using a moving pointer in this buffer could save
even more memory in the typical use case. Of course this changes if you
remove a lot of ranges by merging.

Regards

Rüdiger
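
P.S. Roughly what I have in mind, again only as a sketch (flatten_ranges
is a made-up name, range_t is the same illustrative struct as above, and
the merged ranges are assumed to be computed already): since the merged
string can never be longer than the original Range value, one apr_palloc()
of strlen(range) + 1 is enough, and apr_snprintf()'s return value can
advance the write pointer.

#include <string.h>

#include "apr_pools.h"
#include "apr_strings.h"

typedef struct {
    apr_off_t start;
    apr_off_t end;
} range_t;   /* same illustrative struct as in the sketch above */

/* Sketch only: write the merged ranges into a single preallocated
 * buffer sized from the original Range header value. */
static char *flatten_ranges(apr_pool_t *pool, const char *range,
                            const range_t *merged, int n)
{
    apr_size_t len = strlen(range) + 1;  /* upper bound for the result */
    char *buf = apr_palloc(pool, len);
    char *p = buf;
    int i;

    buf[0] = '\0';
    for (i = 0; i < n; i++) {
        p += apr_snprintf(p, len - (apr_size_t)(p - buf),
                          "%s%" APR_OFF_T_FMT "-%" APR_OFF_T_FMT,
                          i ? "," : "", merged[i].start, merged[i].end);
    }
    return buf;
}

That would be a single allocation in the typical case, whereas the array
approach allocates once per range plus once more for the flattened copy.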
