On 3/5/13 8:52 PM, Rafael Schloming wrote:
On Tue, Mar 5, 2013 at 11:10 AM, Ted Ross<tr...@redhat.com>  wrote:

On 03/05/2013 02:01 PM, Rafael Schloming wrote:

On Tue, Mar 5, 2013 at 10:42 AM, Michael Goulish <mgoul...@redhat.com> wrote:

So, am I understanding correctly? -- I should be able to get messages
from my sender to my receiver just by calling put() -- if the receiver
is ready to receive?

Not necessarily. The receiver being ready just means you are unblocked at the
level of AMQP flow control. You could also potentially block on the socket
write (i.e. TCP-level flow control). You need to be unblocked on both for
put to succeed.

Certainly there is no TCP flow control happening in Mick's scenario.


What I said was that put is *allowed* to send optimistically, not that it is
required to. It actually did send optimistically in a previous version of
the code; however, I commented that line out.

I would say the documented semantics of put and send should allow the
implementation the flexibility to do any of the following:

    1) optimistically transmit whatever it can, every time, so long as it
       doesn't block
    2) never bother transmitting anything until you force it to by calling
       send
    3) anything in between the first two, e.g. magically transmit once
       you've put enough messages to reach the optimal batch size

The reason for the behaviour you are observing is that we currently do
option 2 in the C impl, however we've done option 1 in the past (and I
think we do option 1 still in the Java impl), and we will probably do
option 3 in the future.
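To make those three options concrete, here is a toy model of the put/send contract (this is NOT the Proton messenger API, just an illustrative sketch): put() queues a message and *may* transmit; send() forces everything out. A batch size of 1 behaves like option 1, no batch size like option 2, and anything else like option 3.

```python
# Toy model of the documented put()/send() semantics -- not the real
# Proton API. The "wire" is just a list standing in for the socket.

class ToyMessenger:
    def __init__(self, wire, batch_size=None):
        self.wire = wire              # what has actually been transmitted
        self.batch = []               # messages put() but not yet sent
        self.batch_size = batch_size  # None = only send() transmits

    def put(self, msg):
        self.batch.append(msg)
        # The implementation is free to transmit early (options 1 and 3):
        if self.batch_size is not None and len(self.batch) >= self.batch_size:
            self._flush()

    def send(self):
        self._flush()                 # caller forces transmission

    def _flush(self):
        self.wire.extend(self.batch)
        self.batch.clear()

wire = []
m = ToyMessenger(wire, batch_size=2)  # "option 3": flush every 2 puts
m.put("a")                            # buffered, nothing on the wire yet
m.put("b")                            # batch full -> "a", "b" transmitted
m.put("c")                            # buffered again
m.send()                              # force the remainder out
```

The point of the sketch is that code written against the documented semantics must be correct under all three behaviours, which is exactly why an application cannot rely on put() alone having transmitted anything.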

If this is the case, then Mick's original view is correct. The
application must assume that messages will never be sent unless "send"
is called. There is no flowing, pipelined, non-blocking producer.

It's not correct as documentation of the API semantics. It's also not
correct to say that messages will never be sent unless "send" is called,
e.g. the following code will work fine:

Client:
    m.put(request)
    m.recv()        # wait for the reply
    m.get(reply)

Server:
    while True:
        m.recv()    # wait for a request
        m.get(request)
        m.put(reply)
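A self-contained sketch of why such an exchange can work without an explicit send() — a toy model, not the real Proton API — under the assumption (confirmed above for the C impl) that recv() pumps the transport in both directions, so anything previously put() goes out as a side effect of waiting for input:

```python
# Toy model -- not the real Proton API. recv() flushes pending outgoing
# messages before waiting, so put() followed by recv() still transmits.

class Peer:
    def __init__(self):
        self.outgoing = []    # put() but not yet transmitted
        self.inbox = []       # received, waiting for get()
        self.link = None      # the other peer (stands in for the wire)

    def put(self, msg):
        self.outgoing.append(msg)               # buffered only

    def recv(self):
        # recv() drives I/O in both directions before blocking:
        self.link.inbox.extend(self.outgoing)   # flush our pending puts
        self.outgoing.clear()

    def get(self):
        return self.inbox.pop(0)

client, server = Peer(), Peer()
client.link, server.link = server, client

client.put("request")   # buffered, nothing on the wire yet
client.recv()           # waiting for the reply flushes the request
req = server.get()      # the server sees "request"
server.put("reply")
server.recv()           # waiting for the next request flushes the reply
reply = client.get()    # the client sees "reply"
```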


Wow: by never calling pn_messenger_send(), but only pn_messenger_recv(),
things are, unexpectedly, better! I'll explain below what 'better' means.

But first, this raises the question of what the purpose of pn_messenger_send()
is, and where (and why) it is appropriate or required to call it.


The results below are for a slightly modified 0.3 release.
Most notably, I have a local change that exposes pn_driver_wakeup()
through the messenger API.

Our API is threaded internally, but all proton communication is done
via a dedicated thread that runs the following (stub) event loop.
(The same event loop is used by both the client library and the 'broker'.)

  while (1) {
    ret = pn_messenger_recv(m, 100);  // the 100 is hard to explain...
    if (ret != PN_WAKED_UP) {         // new error code for the wakeup case
      /*
       * apparently there's no need to call send...
       * pn_messenger_send(m);
       */
    }
    Command cmd = cmd_queue.get();    // cmd_queue.put() from another thread
                                      // will call pn_driver_wakeup() and will
                                      // break out of pn_messenger_recv()
    if (cmd)
      handle(cmd);                    // ends up calling pn_messenger_put()
    if (pn_messenger_incoming(m)) {
      msg = pn_messenger_get(m);      // handle just one message, so that
                                      // pn_messenger_recv() will not block
                                      // until we're done
      handle(msg);                    // can end up calling pn_messenger_put()
    }
  }


So, before the change, a test client that produced messages needed to
throttle a bit: about 8 ms between each 'command' that resulted in
one pn_messenger_put().

If a lower delay (or no delay) was used, the client's messenger got confused
after some fairly small number of messages sent (on the order of 10)
and ended up sitting in pn_driver_wait() while it still had unsent messages.

With the one-line change of commenting out the send(), it can go at full speed!


I know it's hard to comment on out-of-tree, modified pseudo-code, but
is such an event loop within the design goals of the messenger?

Longer term we'll most likely be switching from messenger to engine + driver
so we can go multithreaded with the event loop.



I also did the reverse experiment, just for fun: commented out the recv()
and left just the send(). The effect is absolutely not the same. The 'broker',
which expects to receive a message first, goes into a wild spin inside the
above loop, consuming 100% CPU, and never receives even one message (kind of
expected).

The clients spin a bit too, until they receive the first command (and do a
put()); after that they settle into pn_driver_wait() but never make any
progress, since the broker is so busy not listening to them :)

Bozzo

As for there being "no flowing, pipelined, non-blocking producer," that
seems like an orthogonal issue, and depending on what you mean I wouldn't
say that's necessarily true either. You can certainly set the messenger's
timeout to zero and then call put followed by send to get the exact same
semantics you would get if put were to optimistically send every time.
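A sketch of that equivalence, once more as a toy model rather than the real API: with the timeout set to zero, send() writes whatever the socket will accept and returns immediately instead of blocking, so "put(); send()" degenerates into an optimistic, non-blocking put.

```python
# Toy model -- not the real Proton API. A zero timeout makes send()
# transmit what fits in the socket buffer and return at once, reporting
# how many messages are still pending, rather than blocking.

class NonBlockingMessenger:
    def __init__(self, socket_capacity):
        self.timeout = 0              # zero timeout: never block
        self.capacity = socket_capacity
        self.wire = []                # what actually made it onto the wire
        self.pending = []             # queued but not yet writable

    def put(self, msg):
        self.pending.append(msg)      # put only queues

    def send(self):
        # with timeout == 0, transmit what fits and return immediately
        while self.pending and len(self.wire) < self.capacity:
            self.wire.append(self.pending.pop(0))
        return len(self.pending)      # messages still waiting

m = NonBlockingMessenger(socket_capacity=2)
for msg in ("a", "b", "c"):
    m.put(msg)
    m.send()                          # never blocks; "c" stays pending
```

The caller then decides how to handle the leftover pending messages (retry later, drop, etc.) — the same decision an optimistically-sending put() would force on it.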

--Rafael

