On Mon, Dec 26, 2016 at 9:21 AM, Daniel Ruggeri <[email protected]> wrote:
> Hi, all;
>
> I'm hoping just for a quick sanity check here... when consuming
> buckets from the brigade inside a connection filter, I've seen that the
> bucket length doesn't *appear* to accurately represent what data has
> been made available when SSL is used.
>
> const char *tmp_buf;
> apr_size_t nbytes;
> apr_bucket_read(b, &tmp_buf, &nbytes, APR_BLOCK_READ);
>
> The particular scenario is that apr_bucket_read tells me that 11
> bytes were read according to nbytes... but a print of the string stashed
> in the buffer shows many more. In fact, all data that should be
> available at this time appears to be there (about as I would expect).
> This seems to only happen when using SSL, FWIW. I consistently see 11
> bytes as being read by apr_bucket_read... but all 48 expected bytes are
> there (strlen as well as any other method of examining the input concurs).
>
> I'm sure I've made a goofy assumption or am thinking about this wrong
> somewhere... but the mistake in my understanding escapes me. Any thoughts?
No good hint here, but I have found the gdb macros in .gdbinit really helpful for showing these details (dump_brigade and dump_bucket, IIRC).
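
For what it's worth, here is a minimal sketch of how one might dump what each bucket actually returned from inside a connection filter. The helper name log_brigade is just illustrative, not an httpd API. The one thing it relies on that is definitely true of APR: the buffer handed back by apr_bucket_read is not NUL-terminated, so only the first nbytes bytes belong to that bucket and the length has to be passed explicitly (here via %.*s) rather than inferred with strlen.

    #include "httpd.h"
    #include "http_log.h"
    #include "apr_buckets.h"

    /* Illustrative helper: walk a brigade and log exactly what each
     * data bucket handed back.  tmp_buf is NOT NUL-terminated; only
     * the first nbytes bytes are valid for this bucket. */
    static apr_status_t log_brigade(conn_rec *c, apr_bucket_brigade *bb)
    {
        apr_bucket *b;

        for (b = APR_BRIGADE_FIRST(bb);
             b != APR_BRIGADE_SENTINEL(bb);
             b = APR_BUCKET_NEXT(b)) {

            const char *tmp_buf;
            apr_size_t nbytes;
            apr_status_t rv;

            if (APR_BUCKET_IS_METADATA(b)) {
                continue;   /* EOS/FLUSH etc. carry no data */
            }

            rv = apr_bucket_read(b, &tmp_buf, &nbytes, APR_BLOCK_READ);
            if (rv != APR_SUCCESS) {
                return rv;
            }

            /* Print exactly nbytes bytes; strlen would run past the
             * bucket into whatever happens to follow in memory. */
            ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, c,
                          "bucket holds %" APR_SIZE_T_FMT " bytes: %.*s",
                          nbytes, (int)nbytes, tmp_buf);
        }
        return APR_SUCCESS;
    }

Comparing that per-bucket output with dump_brigade in gdb should make it clear whether the 11 bytes reported are the real bucket length and the extra bytes just happen to sit in adjacent memory.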
