DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://nagoya.apache.org/bugzilla/show_bug.cgi?id=25468>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=25468

Unchecked response header length can cause HttpClient to loop endlessly





------- Additional Comments From [EMAIL PROTECTED]  2003-12-12 15:47 -------
I seem to recall that the chief complaint about the previous attempt to recover
from malicious servers was that it did not deal with the particular failure
points, but rather took the approach of simply discarding the connection.  I
think the submitter also wanted the changes on the 2.0 branch.

Perhaps I can convince Oleg to be a little less opposed to these suggested
patches?  If nothing else, I don't think we should discard the idea that some
users of HttpClient DO have to deal with malicious servers.  There are
characteristics of this suggestion that work much better for me (and I didn't
like the previous attempt to deal with this issue either):
* Test cases provided!
* The behavior is configurable, and the default settings (at least in the
second round of patches) leave what we have today unchanged.
* The checks for malicious data are very focused on precisely the places where
HttpClient has demonstrable failure points.
* I don't see a problem with introducing an interface to capture a processing
concept prior to the 3.0 release.  In fact, I think it is better to drive the
development of specific interfaces from specific needs, i.e. issues for which
there happen to be no easy work-arounds, rather than from as-yet-undetermined
abstract concepts meant to solve ongoing issues that do have work-arounds
(such as redirect on POST needing a split of request/response).

Having said all that, I'm not sure I like the patches as-is.  I cannot decide
whether I prefer the original approach, where the option was configurable and
everyone ran through the same code, or the second approach, where the header
parser itself could be substituted.
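
To make the contrast concrete, here is a rough sketch of what the second,
substitutable-parser style might look like.  All of the names here
(HeaderParser, BoundedHeaderParser, maxLineLength) are invented for
illustration; they are not the names used in the submitted patches.

import java.io.IOException;
import java.io.InputStream;

// Hypothetical illustration only: a pluggable parser interface, plus a
// default implementation that refuses to buffer an unbounded header line.
interface HeaderParser {
    /** Returns the next header line, or null at the end of the headers. */
    String readHeaderLine(InputStream in) throws IOException;
}

class BoundedHeaderParser implements HeaderParser {
    private final int maxLineLength;

    BoundedHeaderParser(int maxLineLength) {
        this.maxLineLength = maxLineLength;
    }

    public String readHeaderLine(InputStream in) throws IOException {
        StringBuffer line = new StringBuffer();
        int ch;
        while ((ch = in.read()) >= 0 && ch != '\n') {
            if (ch != '\r') {
                line.append((char) ch);
            }
            if (line.length() > maxLineLength) {
                // Fail instead of buffering a malicious header forever.
                throw new IOException("Header line longer than "
                        + maxLineLength + " bytes");
            }
        }
        // A blank line (or end of stream) terminates the header block.
        return line.length() == 0 ? null : line.toString();
    }
}

The first approach would presumably keep a single parser and simply expose
maxLineLength (and friends) as ordinary configuration parameters instead.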

I'm thinking the general use case is to set the default limits to relatively
high values (not just 4K for a header, but maybe 256K), and perhaps to limit
the total size of all headers rather than just the number of headers.  The
defaults given (4K per header value, 1000 headers max) could easily trigger an
out-of-memory failure: at those limits a single response can tie up about 4M
of header storage, so five threads all hitting the same malicious server could
consume roughly 20M.  I think what really wants to be limited is the maximum
storage for all of the headers on a particular message, for example 256K for
the entire header block of one response.
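
As a sketch of the kind of per-message cap I have in mind (again with invented
names, and reusing the hypothetical HeaderParser above):

import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: bound the total bytes buffered for the whole header
// block of one response, instead of (or on top of) per-header limits.
class BoundedHeaderReader {
    static List readHeaders(HeaderParser parser, InputStream in,
                            int maxTotalBytes) throws IOException {
        List headers = new ArrayList();
        int total = 0;
        String line;
        while ((line = parser.readHeaderLine(in)) != null) {
            total += line.length();
            if (total > maxTotalBytes) {
                // e.g. 256K for everything one response sends as headers
                throw new IOException("Response headers exceed "
                        + maxTotalBytes + " bytes");
            }
            headers.add(line);
        }
        return headers;
    }
}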

Mind you, I'm not a committer, so my opinion on this issue can at best influence
others....

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
