Re: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h

2015-11-14 Thread Greg Stein
On Sat, Nov 14, 2015 at 5:01 AM, Greg Stein  wrote:

> On Sat, Nov 14, 2015 at 3:36 AM,  wrote:
>
>> Author: rhuijben
>> Date: Sat Nov 14 09:36:08 2015
>> New Revision: 1714297
>>
>> URL: http://svn.apache.org/viewvc?rev=1714297&view=rev
>> Log:
>> Redefine the event bucket as a wrapping bucket, to remove dependencies on
>> implementation details on the 'aggregation' stream around the event
>> bucket.
>>
>> Calculate total number of bytes read to increase event value.
>>
>> * buckets/event_buckets.c
>>   (event_context_t): Extend state a bit.
>>   (serf__bucket_event_create): Add start event and stream arguments.
>>
>
> What is the start callback for? You added that, but don't use it. And why
> is there bytes_read when it is (by definition) always zero. ?
>
> For that matter ... why even attempt to record bytes_read? That doesn't
> appear to be used. And it is missing bytes read via readline().
>
> I'm wary of the callbacks to start with -- that somehow normal bucket
> reading is not enough, and so "magic" needs to happen under the covers.
> (kinda like my query about hold_open)
>
> Cheers,
> -g
>

(oh, never mind, re: readline ... those bytes are counted via read())


Re: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h

2015-11-14 Thread Greg Stein
On Sat, Nov 14, 2015 at 3:36 AM,  wrote:

> Author: rhuijben
> Date: Sat Nov 14 09:36:08 2015
> New Revision: 1714297
>
> URL: http://svn.apache.org/viewvc?rev=1714297&view=rev
> Log:
> Redefine the event bucket as a wrapping bucket, to remove dependencies on
> implementation details on the 'aggregation' stream around the event bucket.
>
> Calculate total number of bytes read to increase event value.
>
> * buckets/event_buckets.c
>   (event_context_t): Extend state a bit.
>   (serf__bucket_event_create): Add start event and stream arguments.
>

What is the start callback for? You added that, but don't use it. And why
is there bytes_read when it is (by definition) always zero?

For that matter ... why even attempt to record bytes_read? That doesn't
appear to be used. And it is missing bytes read via readline().

I'm wary of the callbacks to start with -- that somehow normal bucket
reading is not enough, and so "magic" needs to happen under the covers.
(kinda like my query about hold_open)

Cheers,
-g


RE: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h

2015-11-14 Thread Bert Huijben


> -Original Message-
> From: Greg Stein [mailto:gst...@gmail.com]
> Sent: Saturday, 14 November 2015 12:01
> To: dev@serf.apache.org
> Subject: Re: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c
> outgoing.c serf_private.h
> 
> On Sat, Nov 14, 2015 at 3:36 AM, <rhuij...@apache.org> wrote:
> 
> > Author: rhuijben
> > Date: Sat Nov 14 09:36:08 2015
> > New Revision: 1714297
> >
> > URL: http://svn.apache.org/viewvc?rev=1714297&view=rev
> > Log:
> > Redefine the event bucket as a wrapping bucket, to remove dependencies
> on
> > implementation details on the 'aggregation' stream around the event
> bucket.
> >
> > Calculate total number of bytes read to increase event value.
> >
> > * buckets/event_buckets.c
> >   (event_context_t): Extend state a bit.
> >   (serf__bucket_event_create): Add start event and stream arguments.
> >
> 
> What is the start callback for? You added that, but don't use it. And why
> is there bytes_read when it is (by definition) always zero. ?

On the start event it is always zero, but it allows using the same callback for 
multiple events.
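
A hypothetical sketch of such a shared callback (the real callback type of 
serf__bucket_event_create() is private, so the enum, names and signature below 
are invented purely for illustration):

#include <apr.h>        /* apr_uint64_t */
#include <apr_errno.h>  /* apr_status_t, APR_SUCCESS */

/* Hypothetical event kinds and callback shape -- not serf's private API. */
typedef enum { MY_EVENT_START, MY_EVENT_DONE } my_event_kind_t;

static apr_status_t my_event_cb(void *baton, my_event_kind_t kind,
                                apr_uint64_t bytes_read)
{
    switch (kind) {
    case MY_EVENT_START:
        /* bytes_read is 0 here by definition: nothing was consumed yet. */
        break;
    case MY_EVENT_DONE:
        /* bytes_read holds the total consumed from the wrapped stream. */
        break;
    }
    return APR_SUCCESS;
}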

> 
> For that matter ... why even attempt to record bytes_read? That doesn't
> appear to be used. And it is missing bytes read via readline().

There is no specific readline implementation, so that will fall back to using 
read and peek... which do handle bytes_read.
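
A minimal sketch of why counting in read() is enough (this is not the actual 
buckets/event_buckets.c code, and the context layout is an assumption): the 
generic readline fallback is itself built on read() and peek(), and peek() 
consumes nothing, so every consumed byte passes through this one function:

#include <apr.h>
#include "serf.h"

/* Sketch only -- field names and layout are assumptions. */
typedef struct event_context_t {
    serf_bucket_t *stream;     /* the wrapped bucket */
    apr_uint64_t bytes_read;   /* total bytes consumed so far */
} event_context_t;

static apr_status_t serf_event_read(serf_bucket_t *bucket,
                                    apr_size_t requested,
                                    const char **data, apr_size_t *len)
{
    event_context_t *ctx = bucket->data;
    apr_status_t status = serf_bucket_read(ctx->stream, requested, data, len);

    if (!SERF_BUCKET_READ_ERROR(status))
        ctx->bytes_read += *len;  /* also covers readline(), via the fallback */

    return status;
}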

> 
> I'm wary of the callbacks to start with -- that somehow normal bucket
> reading is not enough, and so "magic" needs to happen under the covers.
> (kinda like my query about hold_open)

As I already noted in my mail to Ivan, we need accounting for http/2.

Or we can add yet another bucket type which does what this bucket does, and 
adds bytes_read support.


In http/2 we need to strictly account for the bytes sent over streams. If we wrap 
this bucket around the split buckets (limit on steroids) added yesterday, that 
will handle all these cases in a generic way.


In http/2 you are only allowed to send DATA frames up to the window limit that 
is managed by the receiver. If you go one byte over the limit the stream will 
be closed with a RESET, and you can't keep 'a safe margin', as a receiver 
might use this feature to receive data very slowly.


HTTP request and response bodies are transferred over this system.
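
An illustrative sketch of that constraint, independent of serf's actual http/2 
code and with invented type and function names: cap every DATA payload at both 
the frame limit and the receiver's current window, exactly:

#include <apr.h>

/* Hypothetical per-stream state; names are for illustration only. */
typedef struct h2_stream_state_t {
    apr_uint64_t send_window;  /* grows on WINDOW_UPDATE, shrinks per DATA byte */
} h2_stream_state_t;

/* Cap a candidate DATA payload at the frame limit and the receiver's window. */
static apr_size_t allowed_data_payload(const h2_stream_state_t *stream,
                                       apr_size_t available,
                                       apr_size_t max_frame_size)
{
    apr_size_t allowed = available;

    if (allowed > max_frame_size)
        allowed = max_frame_size;
    if ((apr_uint64_t)allowed > stream->send_window)
        allowed = (apr_size_t)stream->send_window;

    /* One byte beyond send_window risks a RESET from the peer,
       so this cap must be exact rather than "a safe margin". */
    return allowed;
}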

Bert



RE: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h

2015-11-14 Thread Bert Huijben


> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: Saturday, 14 November 2015 15:15
> To: 'Ivan Zhakov' <i...@visualsvn.com>; rhuij...@apache.org
> Cc: dev@serf.apache.org
> Subject: RE: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c
> outgoing.c serf_private.h


> Writing that function as a wrapper would require coding that function,
> reviewing it and testing it, while I don't see any usecase where somebody
> uses this function.
> And in all current implementations/usages just using the default readline
> would 100% do the same thing without measurable performance
> degradation.
> 
> Same reason as why we never implement read_for_sendfile() There is not
> a single user in serf that would use it.

Note that 1 week ago (before r1713788) the typical implementation of 
serf_bucket_readline() on most advanced buckets was a NULL pointer with a ### 
TODO comment.

Bert




Re: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h

2015-11-14 Thread Ivan Zhakov
On 14 November 2015 at 17:19, Bert Huijben <b...@qqmail.nl> wrote:
>> -Original Message-
>> From: Bert Huijben [mailto:b...@qqmail.nl]
>> Sent: Saturday, 14 November 2015 15:15
>> To: 'Ivan Zhakov' <i...@visualsvn.com>; rhuij...@apache.org
>> Cc: dev@serf.apache.org
>> Subject: RE: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c
>> outgoing.c serf_private.h
>
>
>> Writing that function as a wrapper would require coding that function,
>> reviewing it and testing it, while I don't see any usecase where somebody
>> uses this function.
>> And in all current implementations/usages just using the default readline
>> would 100% do the same thing without measurable performance
>> degradation.
>>
>> Same reason as why we never implement read_for_sendfile() There is not
>> a single user in serf that would use it.
>
> Note that 1 week ago (before r1713788) the typical implementation of
> serf_bucket_readline() on most advanced buckets was a NULL pointer with a ### 
> TODO comment.
>
Understood. I was just checking whether this was intentional or an oversight.
As far as I understand, serf_bucket_readline() is not as efficient as a
native implementation could be. I also think that the event bucket should
be transparent and should not change the behavior of the wrapped bucket if
possible. I'll implement a native readline for the event bucket if you
don't mind.
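
A sketch of what such a native readline could look like, reusing the 
hypothetical event_context_t layout from the read() sketch earlier in this 
thread: forward the call to the wrapped stream and still add the returned 
length to the running total:

static apr_status_t serf_event_readline(serf_bucket_t *bucket, int acceptable,
                                        int *found, const char **data,
                                        apr_size_t *len)
{
    event_context_t *ctx = bucket->data;
    apr_status_t status = serf_bucket_readline(ctx->stream, acceptable,
                                               found, data, len);

    if (!SERF_BUCKET_READ_ERROR(status))
        ctx->bytes_read += *len;  /* keep the accounting exact */

    return status;
}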


-- 
Ivan Zhakov


RE: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h

2015-11-14 Thread Bert Huijben


> -Original Message-
> From: Ivan Zhakov [mailto:i...@visualsvn.com]
> Sent: Saturday, 14 November 2015 14:25
> To: rhuij...@apache.org
> Cc: dev@serf.apache.org
> Subject: Re: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c
> outgoing.c serf_private.h
> 
> On 14 November 2015 at 12:36,  <rhuij...@apache.org> wrote:
> > Author: rhuijben
> > Date: Sat Nov 14 09:36:08 2015
> > New Revision: 1714297
> >
> > URL: http://svn.apache.org/viewvc?rev=1714297&view=rev
> > Log:
> > Redefine the event bucket as a wrapping bucket, to remove dependencies
> on
> > implementation details on the 'aggregation' stream around the event
> bucket.
> >
> > Calculate total number of bytes read to increase event value.
> >
> > @@ -128,7 +215,7 @@ static void serf_event_destroy(serf_buck
> >  const serf_bucket_type_t serf_bucket_type__event = {
> >  "EVENT",
> >  serf_event_read,
> > -serf_event_readline,
> > +serf_default_readline,
> Is it intentional? Why event bucket doesn't forward readline calls
> directly to wrapped bucket?

I don't see why you would use the event bucket when you are reading yourself, 
instead of putting it in a write stream. And write streams typically don't use 
readline (they would typically use read or read_iovec, depending on whether ssl 
or compression/decompression is used).

Writing that function as a wrapper would require coding that function, 
reviewing it and testing it, while I don't see any use case where somebody would 
use this function.
And in all current implementations/usages, just using the default readline would 
do 100% the same thing, without measurable performance degradation.

Same reason as why we never implemented read_for_sendfile(): there is not a 
single user in serf that would use it.

Bert



Re: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h

2015-11-14 Thread Ivan Zhakov
On 14 November 2015 at 12:36,   wrote:
> Author: rhuijben
> Date: Sat Nov 14 09:36:08 2015
> New Revision: 1714297
>
> URL: http://svn.apache.org/viewvc?rev=1714297&view=rev
> Log:
> Redefine the event bucket as a wrapping bucket, to remove dependencies on
> implementation details on the 'aggregation' stream around the event bucket.
>
> Calculate total number of bytes read to increase event value.
>
> @@ -128,7 +215,7 @@ static void serf_event_destroy(serf_buck
>  const serf_bucket_type_t serf_bucket_type__event = {
>  "EVENT",
>  serf_event_read,
> -serf_event_readline,
> +serf_default_readline,
Is it intentional? Why event bucket doesn't forward readline calls
directly to wrapped bucket?

-- 
Ivan Zhakov


RE: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h

2015-11-14 Thread Bert Huijben
> -Original Message-
> From: Ivan Zhakov [mailto:i...@visualsvn.com]
> Sent: Saturday, 14 November 2015 14:25
> To: rhuij...@apache.org
> Cc: dev@serf.apache.org
> Subject: Re: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c
> outgoing.c serf_private.h
> 
> On 14 November 2015 at 12:36,  <rhuij...@apache.org> wrote:
> > Author: rhuijben
> > Date: Sat Nov 14 09:36:08 2015
> > New Revision: 1714297
> >
> > URL: http://svn.apache.org/viewvc?rev=1714297&view=rev
> > Log:
> > Redefine the event bucket as a wrapping bucket, to remove dependencies
> on
> > implementation details on the 'aggregation' stream around the event
> bucket.
> >
> > Calculate total number of bytes read to increase event value.
> >
> > @@ -128,7 +215,7 @@ static void serf_event_destroy(serf_buck
> >  const serf_bucket_type_t serf_bucket_type__event = {
> >  "EVENT",
> >  serf_event_read,
> > -serf_event_readline,
> > +serf_default_readline,
> Is it intentional? Why event bucket doesn't forward readline calls
> directly to wrapped bucket?

3rd reply...

I added a bit of accounting to the event bucket. The accounting part makes it 
harder to forward the calls directly to the wrapped bucket, and makes it easier 
to fall back to the default implementations.


If you would like to improve on this, we might want to split the implementation 
into an event bucket (start reading, stop reading, destruct) and an accounting 
bucket (done reading; amount read).

Combining the two buckets into one allows a callback implementation to call 
get_remaining() in the start-reading call, to see if the size is already known 
ahead of time. In the case of a separate accounting bucket, that behavior might 
belong in the bucket itself.
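
A hypothetical sketch of that get_remaining() idea; the callback shape and 
baton are invented, and it assumes trunk's serf_bucket_get_remaining() / 
SERF_LENGTH_UNKNOWN API:

#include "serf.h"

/* Hypothetical start-of-reading callback; only get_remaining() is real API. */
static apr_status_t on_start_reading(void *baton, serf_bucket_t *stream)
{
    apr_uint64_t remaining = serf_bucket_get_remaining(stream);

    if (remaining != SERF_LENGTH_UNKNOWN) {
        /* Total size known up front: a scheduler could plan DATA frame
           sizes now instead of discovering the length while reading. */
    }

    return APR_SUCCESS;
}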


All of this is to allow the decision on how big the chunks will be to be delayed 
as long as possible. Writing exactly what we have in RAM is just more efficient 
than making all chunks exactly the maximum size. Our buckets are extremely 
useful in allowing this.
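
A small sketch of that late decision, with an invented helper name: ask read() 
for at most the window allowance and send whatever it returns, since read() 
never hands back more than is immediately available:

#include "serf.h"

/* Emit a DATA payload sized by what the body bucket can give us right now,
   never exceeding the window; no buffering up to a maximum-size frame. */
static apr_status_t next_data_payload(serf_bucket_t *body,
                                      apr_size_t window_allowance,
                                      const char **data, apr_size_t *len)
{
    /* read() caps the result at 'requested' and at what is already in
       memory, so *len is exactly "what we have in RAM", up to the window. */
    return serf_bucket_read(body, window_allowance, data, len);
}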

I implemented the simple case for headers in r1714307 and a similar test case in 
r1714328 to show what the breaking part is all about.


Next up is the body transfer scheduling, to make the http/2 code fully 
functional.

Bert