Hi,
This patch prevents Serf from accessing an already destroyed allocator
when freeing HTTP/2 HPACK buckets in error scenarios. One possible
reproduction script for the issue is:
(The reproduction is not 100% reliable, but close to it. DEBUG_DOUBLE_FREE
should be defined.)
> serf_get -c4 -n4 --http2 https://www.cloudflare.com
Error running context: (120153) HTTP2 flow control limits exceeded
Aborted (core dumped)
See serf_http2__stream_setup_next_request() around http2_stream.c:378.
The HPACK bucket is created using the request->allocator, and is then
added into the pump's output queue. Certain error conditions, such as
the one above, can cause the request to be destroyed _before_ the pump,
while the bucket is still alive. Attempting to destroy this bucket will
then result in accessing the already destroyed request->allocator.
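
To make the lifetime problem concrete, here is a self-contained toy
model of the bug's shape in plain C. (This is only a sketch; the pool
and queue below are stand-ins, not serf's actual API.)

  #include <stdlib.h>
  #include <string.h>

  /* Toy stand-in for request->allocator: a block of memory that
     backs a "bucket". */
  typedef struct toy_pool { char *mem; } toy_pool;

  int main(void)
  {
      toy_pool request_alloc;
      request_alloc.mem = malloc(64);            /* backs the "HPACK bucket" */

      char *hpack_bucket = request_alloc.mem;    /* bucket tied to the pool */
      char *output_queue[1] = { hpack_bucket };  /* the pump's output queue */

      /* Error path: the request (and with it request->allocator) is
         destroyed first, while the bucket still sits in the queue. */
      free(request_alloc.mem);

      /* The pump later tears down its queue and touches the bucket:
         use-after-free. */
      memset(output_queue[0], 0, 64);
      return 0;
  }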
While the HTTP/2 implementation can handle error situations with enqueued
output that contains buckets from a request, doing so requires marking the
request with SERF_WRITING_STARTED so that the special cleanup logic in
outgoing_request.c:47 can kick in. This is how things currently work
for HTTP/2 requests with bodies, where the body bucket originates from
a serf_request_t.
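
In rough pseudocode, the contract is something like the following.
(Only SERF_WRITING_STARTED and the file reference come from serf; the
helper names are invented for illustration.)

  /* Sketch of the cleanup contract, not the literal code at
     outgoing_request.c:47; the helper names are hypothetical. */
  if (request_writing_started(request)) {
      /* The pump may still hold buckets allocated from this request,
         so tear it down only once the pump has released them. */
      defer_request_cleanup(request);
  } else {
      destroy_request_now(request);   /* nothing queued refers to it */
  }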
Luckily, in this particular case with HPACK buckets and header-only
requests, it's possible to avoid using the request->allocator and instead
allocate the new bucket using the HTTP/2 stream's allocator. Thus,
the lifetime of the new bucket will no longer depend on the underlying
serf_request_t.
Please see the attached patch with the fix described above. The log
message is included at the beginning of the patch file.
Regards,
Evgeny Kotkov
HTTP/2: Avoid accessing an already destroyed allocator when freeing
HPACK buckets in certain error scenarios.
One possible reproduction script for the issue is:
> serf_get -c4 -n4 --http2 https://www.cloudflare.com
Error running context: (120153) HTTP2 flow control limits exceeded
Aborted (core dumped)
See serf_http2__stream_setup_next_request() around http2_stream.c:378.
The HPACK bucket is created using the request->allocator, and is then
added into the pump's output queue. Certain error conditions, such as
the one above, can cause the request to be destroyed _before_ the pump,
while the bucket is still alive. Attempting to destroy this bucket will
then result in accessing the already destroyed request->allocator.
While the HTTP/2 implementation can handle error situations with enqueued
output that contains buckets from a request, doing so requires marking the
request with SERF_WRITING_STARTED so that the special cleanup logic in
outgoing_request.c:47 can kick in. This is how things currently work
for HTTP/2 requests with bodies, where the body bucket originates from
a serf_request_t.
Luckily, in this particular case with HPACK buckets and header-only
requests, it's possible to avoid using the request->allocator and instead
allocate the new bucket using the HTTP/2 stream's allocator. Thus,
the lifetime of the new bucket will no longer depend on the underlying
serf_request_t.
* protocols/http2_stream.c
  (serf_http2__stream_setup_next_request): Allocate the new HPACK
  bucket using the HTTP/2 stream's allocator. Comment on why we don't
  set SERF_WRITING_STARTED for header-only requests.
Patch by: Evgeny Kotkov <evgeny.kotkov{_AT_}visualsvn.com>
Index: protocols/http2_stream.c
===================================================================
--- protocols/http2_stream.c (revision 1787257)
+++ protocols/http2_stream.c (working copy)
@@ -375,7 +375,7 @@ serf_http2__stream_setup_next_request(serf_http2_s
                                                &hpack, hpack_tbl,
                                                request->req_bkt,
                                                request->conn->host_info.scheme,
-                                               request->allocator);
+                                               stream->alloc);
   if (status)
     return status;
 
@@ -387,7 +387,7 @@ serf_http2__stream_setup_next_request(serf_http2_s
       serf_bucket_t *agg;
       unsigned char priority_data[5];
 
-      agg = serf_bucket_aggregate_create(request->allocator);
+      agg = serf_bucket_aggregate_create(stream->alloc);
 
       priority_data[0] = (ds->streamid >> 24) & 0x7F;
       /* bit 7 of [0] is the exclusive flag */
@@ -399,7 +399,7 @@ serf_http2__stream_setup_next_request(serf_http2_s
       serf_bucket_aggregate_append(
           agg,
           serf_bucket_simple_copy_create((void *)priority_data,
-                                         5, request->allocator));
+                                         5, stream->alloc));
 
       serf_bucket_aggregate_append(agg, hpack);
       hpack = agg;
@@ -423,6 +423,9 @@ serf_http2__stream_setup_next_request(serf_http2_s
 
   if (end_stream) {
       stream->status = H2S_HALFCLOSED_LOCAL; /* Headers sent; no body */
 
+      /* Don't set SERF_WRITING_STARTED for the request. At this point,
+         the outgoing buckets no longer depend on it, because there's no
+         body and we copied the headers to our HPACK bucket. */
       return APR_SUCCESS;
   }