On Tue, Sep 25, 2012 at 12:48 AM, Jimb Esser <[email protected]> wrote:
> Patch for logging was to replace the two occurrences of
> SET_ERRNO(HPE_INVALID_CHUNK_SIZE) with:
>           {
>             const char *p2;
>             for (p2=data; p2 != data + len; p2++) {
>               fprintf(stderr, "0x%02x, ", (unsigned char)*p2); /* cast so high bytes don't sign-extend */
>               if (p == p2) {
>                 fprintf(stderr, "/******/ ");
>               }
>             }
>           }
>           fprintf(stderr, "HPE_INVALID_CHUNK_SIZE case 1\n");
>           SET_ERRNO(HPE_INVALID_CHUNK_SIZE);
>
> I then loaded the stderr output into a trivial JS program to write it out as
> a binary file.
>
> When you say you don't see anything obviously wrong in the dump itself, do
> you mean it looks like a valid HTTP stream?  It looks to me like, near the
> end, it says an 8k data chunk is coming (2000\r\n), and then provides less
> than 8k of data and starts another chunk (with the original 8k terminating in
> the middle of the second chunk).  I am basing this entirely on what I've
> gleaned about the HTTP protocol from reading http_parser.c, though, so I
> could be quite mistaken =).

The HTTP parser works across TCP packet boundaries. A request or
response doesn't necessarily fit in a single packet.

Unless you're benchmarking with a concurrency of 1 (a single client
that issues sequential requests), you'll see HTTP requests and
responses getting interleaved.

You received this message because you are subscribed to the Google
Groups "nodejs" group.