----- Original Message -----
>I think as soon as you start talking about credit you're already at a much
>lower level than is appropriate for the messenger API. The messenger
>interface has API docs in both C and Python, and they both describe the
>semantics of recv without making any mention of the term credit. In fact
>that term doesn't appear anywhere in the messenger API or docs, only in the
>implementation.
>
>On Tue, Feb 26, 2013 at 9:34 AM, Michael Goulish <mgoul...@redhat.com> wrote:
>
>>
>> One hole that I feel like I'm seeing in the messenger
>> interface concerns credit.
>>
>> I have a way of using credit to set a max number of
>> messages that a recv call should return in one gulp,
>> or a way of doing ... something that I'm still figuring
>> out ... by setting credit=-1.
>>
>> What I don't have is any way of getting guidance about
>> what effect my credit allocation is having.
>>
>> A messenger app might have some flexibility in how
>> much time it spends servicing incoming messages vs.
>> time it spends doing its own processing, and it might
>> be able to allocate more time to servicing incoming
>> messages if it knows more about what's happening.
>>
>
>What's the distinction between "servicing incoming messages" and "doing its
>own processing"? 




Apps that use messaging have work to do other than handling messages.
The work that they do is what they get paid for -- the messaging is
how they communicate with their peers to get new work to do, or subcontract
out tasks, or whatever.

An app has a fixed amount of compute power that it has to allocate
between doing its payload work and looking at incoming messages.
( Or maybe nodes can request more compute power, but they have to know
that they need it. )

In some applications you can make tradeoffs.  Do your own work faster,
and thus be able to handle more incoming requests.  Or do it more
thoroughly and handle fewer tasks.  ( example: a machine-vision system
where I want to use messaging can choose more expensive operations in
hopes of getting a slightly more accurate result.  But if it is building
up a large backlog of incoming requests, it can choose to go
cheap-and-fast. )

It seems to me that one of the fundamental questions a communicating
node has to ask, to be a good citizen of the network, is "How many
people are *trying* to talk to me?" -- i.e. a count of the messages
that *could* have been given to you if you had made the N in recv()
larger.

Yes -- as you say below, "a count of remotely blocked messages".
If that could be exposed without a philosophy violation, then I vote for it.

Otherwise, a node has a way of saying how many messages it will take,
but no way of learning what effect that decision is having on its
customers.
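
Just to make that concrete -- this is purely hypothetical Python, and
the "blocked" attribute does NOT exist in the current API -- I am
imagining something like this:

    # HYPOTHETICAL sketch -- "blocked" stands for the proposed aggregate
    # count of remotely blocked messages; no such attribute exists today.
    from proton import Messenger

    mng = Messenger()
    mng.start()
    mng.subscribe("amqp://~0.0.0.0/")

    mng.recv(10)               # we will take at most 10 messages
    if mng.blocked > 100:      # hypothetical: senders still waiting on us
        react_somehow()        # made-up placeholder for the app's reaction

That aggregate count is exactly the knowledge a node can't get today.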


*** Now maybe *** concerns about congestion in the network should be handled
at a higher level -- i.e. the manager sees that node A is building up a
backlog of messages trying to get to it, so it fires off another copy of
node A and reroutes some of the load.

But I thought it would be nice to allow the network designer the option of 
putting some of that intelligence right at node A.  So it could say something 
like "Oh crap, look at all the requests waiting for me!  I've gotta speed up!"





>I'm struggling to imagine a scenario where such a division would be
>discretely controlled as opposed to, say, things happening in
>different threads. That aside, I'm also not sure how that would play into
>credit. Credit is really a measure of how many messages the receiver can
>handle at any given point, and doesn't necessarily relate at all to how
>many incoming messages are available, e.g. just because you allocate 1000
>units of credit doesn't mean there are any messages to handle. The number
>of actual messages currently being buffered would seem to be much more
>relevant to any sort of manual allocation of processing, and this is
>directly available through the API already via the incoming field.
>
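
To spell out that distinction in Python -- assuming the binding's
"blocking" flag, which lets recv() return without waiting:

    # Sketch: credit vs. buffered messages.  recv(1000) grants credit
    # for up to 1000 messages; "incoming" reports what actually arrived.
    from proton import Messenger

    mng = Messenger()
    mng.blocking = False       # assumed flag: recv() returns immediately
    mng.start()
    mng.subscribe("amqp://~0.0.0.0/")

    mng.recv(1000)             # allocate 1000 units of credit
    print(mng.incoming)        # may well print 0 -- credit != messages
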
>> Alternatively, it might want to set the credit allocated
>> per recv call based on the number of current incoming
>> links.  ( And assume that the credit will be distributed
>> round-robin across all incoming links. )
>>
>> Would it be practical / desirable / catastrophic
>> to expose current backlog or number of incoming links,
>> or both, at the messenger level?
>>
> 
>What do you mean by backlog? If you're talking about messages buffered by
>the messenger, this is already available through the incoming field. If
>you're talking about remotely blocked messages, then we could certainly
>expose an aggregate count without violating any philosophy. However, I
>think it would be a major issue if anything but the most advanced/obscure
>scenarios actually required using such a thing.
>
>
>> Or would that violate part of the messenger API philosophy?
>> ( And if so, what is that philosophy?  I want to be able
>> to explain it. )
>>
>
>I would say the philosophy is the same as in brokered messaging. The user
>just wants to focus on processing messages and shouldn't have to care about
>where they are coming from or how they arrive.
>
>Imagine a messenger app that is configured with one or more subscriptions
>and processes whatever messages arrive. You could pass that app an address
>of "amqp://~0.0.0.0/" or "amqp://foobroker/queue". In the former case
>incoming messages will be arriving to the messenger on potentially
>thousands of connections and links. In the latter case messages will be
>arriving to the messenger on a single link over one connection. If you have
>to alter your application code for the different scenarios, then I would say
>we've failed to provide a sufficient messenger implementation.
>
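
And the same receive loop can indeed serve both cases -- e.g. this
Python sketch, where the address string is the only thing that changes
and process() is a made-up placeholder for the message handling:

    # Sketch: one loop for both scenarios.  Pass "amqp://~0.0.0.0/" to
    # listen directly, or "amqp://foobroker/queue" to drain a broker queue.
    import sys
    from proton import Messenger, Message

    mng = Messenger()
    mng.start()
    mng.subscribe(sys.argv[1])     # the address is the only variable

    msg = Message()
    while True:
        mng.recv(-1)               # let the messenger decide the credit
        while mng.incoming > 0:
            mng.get(msg)
            process(msg)           # placeholder for the app's real work
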
>I would also point out that in the latter scenario you are still funneling
>all the messages into a single queue, and you still need to solve the very
>same credit allocation issues. That queue just happens to reside in a
>remote broker rather than being colocated with the messenger.
>
>--Rafael
>
