On Wed, Mar 6, 2013 at 11:37 AM, Rajith Attapattu <rajit...@gmail.com> wrote:
> On Wed, Mar 6, 2013 at 10:09 AM, Rafael Schloming <r...@alum.mit.edu> wrote:
>> On Wed, Mar 6, 2013 at 6:52 AM, Ted Ross <tr...@redhat.com> wrote:
>>
>>>
>>> On 03/06/2013 08:30 AM, Rafael Schloming wrote:
>>>
>>>> On Wed, Mar 6, 2013 at 5:15 AM, Ted Ross <tr...@redhat.com> wrote:
>>>>
>>>>  This is exactly right.  The API behaves in a surprising way and causes
>>>>> reasonable programmers to write programs that don't work. For the sake of
>>>>> adoption, we should fix this, not merely document it.
>>>>>
>>>>
>>>> This seems like a bit of a leap to me. Have we actually seen anyone
>>>> misusing or abusing the API due to this? Mick didn't come across it until
>>>> I pointed it out, and even then he had to construct an experiment in which
>>>> he was essentially observing the over-the-wire behaviour in order to detect it.
>>>>
>>>> --Rafael
>>>>
>>>>
>>> The following code doesn't work:
>>>
>>> while (true) {
>>>   wait_for_and_get_next_event(&event);
>>>   pn_messenger_put(messenger, event);
>>> }
>>>
>>> If I add a send after every put, I'm going to limit my maximum message
>>> rate.  If I amortize my sends over every N puts, I may have
>>> arbitrarily/infinitely high latency on messages if the source of events
>>> goes quiet.
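
To make Ted's two options concrete, this is roughly what each looks like in
C against the Messenger API (a sketch only: app_event_t, wait_for_next_event()
and fill_message() are hypothetical application helpers, and
pn_messenger_send()'s exact signature has varied between proton releases):

#include <stdbool.h>
#include <proton/message.h>
#include <proton/messenger.h>

/* Hypothetical application-side helpers, not part of proton. */
typedef struct app_event_t app_event_t;
app_event_t *wait_for_next_event(void);
void fill_message(pn_message_t *msg, app_event_t *e);

/* Option 1: flush after every put.  Latency stays low, but every
 * message pays a full round of socket I/O, which caps the rate. */
void send_per_put(pn_messenger_t *m)
{
  pn_message_t *msg = pn_message();
  while (true) {
    fill_message(msg, wait_for_next_event());
    pn_messenger_put(m, msg);
    pn_messenger_send(m);  /* newer releases take a count, e.g. (m, -1) */
  }
}

/* Option 2: flush every n puts.  Throughput improves, but if the
 * event source goes quiet, up to n-1 messages sit in the outgoing
 * queue with no bound on how long they wait. */
void send_per_n_puts(pn_messenger_t *m, int n)
{
  pn_message_t *msg = pn_message();
  int pending = 0;
  while (true) {
    fill_message(msg, wait_for_next_event());
    pn_messenger_put(m, msg);
    if (++pending >= n) {
      pn_messenger_send(m);
      pending = 0;
    }
  }
}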

Having a background thread in the Messenger will only push this
problem from your application into the Messenger implementation.
Furthermore, you will be at the mercy of the particulars of the client
library implementation as to how that background thread takes care
of the "outstanding work".
We could provide all kinds of knobs to tweak and tune this behaviour,
but I'd be far more comfortable if I, as the application developer,
were in control of when the "flush" happens.

Either way you can end up with arbitrarily high latency due to
complications at the TCP stack or OS level. But you can at least help
your case by having the application issue the flush rather than
letting the Messenger do it, because the application is in a better
position to determine the optimal conditions for flushing, and those
conditions may be something other than elapsed time, message count, or
byte count.
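
For instance (again a sketch, with a hypothetical non-blocking poll_event()
alongside the helpers above), an application might flush at the end of each
natural burst of events, which is a condition no fixed timer or count
threshold expresses directly:

#include <stdbool.h>
#include <proton/message.h>
#include <proton/messenger.h>

/* Hypothetical application-side helpers, not part of proton. */
typedef struct app_event_t app_event_t;
app_event_t *wait_for_next_event(void);   /* blocks until an event arrives */
app_event_t *poll_event(void);            /* non-blocking; NULL when queue is empty */
void fill_message(pn_message_t *msg, app_event_t *e);

/* Flush once per burst: block for the first event, drain whatever
 * else is already queued, then send the whole batch in one go. */
void send_per_burst(pn_messenger_t *m)
{
  pn_message_t *msg = pn_message();
  while (true) {
    app_event_t *e = wait_for_next_event();   /* start of a burst */
    do {
      fill_message(msg, e);
      pn_messenger_put(m, msg);               /* queues locally only */
    } while ((e = poll_event()) != NULL);     /* burst still going */
    pn_messenger_send(m);  /* newer releases take a count, e.g. (m, -1) */
  }
}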

> You can employ a timer along with your event count (or a byte count)
> to get around that problem.
> The timer will ensure you flush events when there isn't enough activity.
> Isn't that acceptable?
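
For reference, that timer-plus-count scheme, driven from the application
itself rather than a background thread, might look roughly like this (sketch
only; the event helpers are hypothetical and pn_messenger_send()'s exact
signature has varied between proton releases):

#include <stdbool.h>
#include <time.h>
#include <proton/message.h>
#include <proton/messenger.h>

/* Hypothetical application-side helpers, not part of proton. */
typedef struct app_event_t app_event_t;
app_event_t *wait_for_event(int timeout_ms);   /* returns NULL on timeout */
void fill_message(pn_message_t *msg, app_event_t *e);

/* Flush when either BATCH messages are pending or FLUSH_SECS have
 * elapsed since the last flush, so a quiet event source cannot leave
 * messages stranded in the outgoing queue indefinitely. */
void pump(pn_messenger_t *m)
{
  const int BATCH = 100;
  const time_t FLUSH_SECS = 1;

  pn_message_t *msg = pn_message();
  int pending = 0;
  time_t last_flush = time(NULL);

  while (true) {
    app_event_t *e = wait_for_event(250 /* ms */);
    if (e) {
      fill_message(msg, e);
      pn_messenger_put(m, msg);   /* queues locally, may not hit the wire yet */
      pending++;
    }
    if (pending >= BATCH ||
        (pending > 0 && time(NULL) - last_flush >= FLUSH_SECS)) {
      pn_messenger_send(m);  /* newer releases take a count, e.g. (m, -1) */
      pending = 0;
      last_flush = time(NULL);
    }
  }
}
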
>
>>> I guess I'm questioning the mission of the Messenger API.  Which is the
>>> more important design goal:  general-purpose ease of use, or strict
>>> single-threaded asynchrony?
>>
>>
>> I wouldn't say avoiding background threads is a goal so much as a really
>> nice thing to do if we can, and quite possibly a necessary mode of
>> operation in certain environments.
>>
>> I don't think your example code will work though even if there is a
>> background thread.
>
> This is a key point I missed when I thought about the problem along
> the same lines as Ted.
> Having a background thread cannot guarantee that your messages will be
> written onto the wire, since that thread can be blocked by full TCP
> buffers or pre-empted in favour of a higher-priority thread (for
> longer than you'd like), increasing your latency beyond acceptable
> limits.
> You will invariably have outliers in your latency graph.
>
> On the other hand, the library code will be much simpler without
> the background thread.
>
>>  What do you want to happen when things start backing up?
>> Do you want messages to be dropped? Do you want put to start blocking? Do
>> you just want memory to grow indefinitely?
>>
>> --Rafael
