I think as soon as you start talking about credit you're already at a much
lower level than is appropriate for the messenger API. The messenger
interface has API docs in both C and Python, and both describe the
semantics of recv without making any mention of the term credit. In fact
that term doesn't appear anywhere in the messenger API or docs, only in the
implementation.

On Tue, Feb 26, 2013 at 9:34 AM, Michael Goulish <mgoul...@redhat.com> wrote:

>
> One hole that I feel like I'm seeing in the messenger
> interface concerns credit.
>
> I have a way of using credit to set a max number of
> messages that a recv call should return in one gulp,
> or a way of doing ... something that I'm still figuring
> out ... by setting credit=-1.
>
> What I don't have is any way of getting guidance about
> what effect my credit allocation is having.
>
> A messenger app might have some flexibility in how
> much time it spends servicing incoming messages vs.
> time it spends doing its own processing, and it might
> be able to allocate more time to servicing incoming
> messages if it knows more about what's happening.
>

What's the distinction between "servicing incoming messages" and "doing its
own processing"? I'm struggling to imagine a scenario where such a division
would be discretely controlled as opposed to, say, things happening in
different threads. That aside, I'm also not sure how that would play into
credit. Credit is really a measure of how many messages the receiver can
handle at any given point, and doesn't necessarily relate at all to how
many incoming messages are available; e.g., just because you allocate 1000
units of credit doesn't mean there are any messages to handle. The number
of messages actually being buffered would seem to be much more relevant to
any sort of manual allocation of processing, and that is already directly
available through the API via the incoming field.

> Alternatively, it might want to set the credit allocated
> per recv call based on the number of current incoming
> links.  ( And assume that the credit will be distributed
> round-robin across all incoming links. )
>
> Would it be practical / desirable / catastrophic
> to expose current backlog or number of incoming links,
> or both, at the messenger level ?
>

What do you mean by backlog? If you're talking about messages buffered by
the messenger, that is already available through the incoming field. If
you're talking about remotely blocked messages, then we could certainly
expose an aggregate count without violating any philosophy; however, I
think it would be a major issue if anything but the most advanced/obscure
scenarios actually required using such a thing.


> Or would that violate part of the messenger API philosophy?
> ( And if so, what is that philosophy?  I want to be able
> to explain it. )
>

I would say the philosophy is the same as in brokered messaging. The user
just wants to focus on processing messages and shouldn't have to care about
where they are coming from or how they arrive.

Imagine a messenger app that is configured with one or more subscriptions
and processes whatever messages arrive. You could pass that app an address
of "amqp://~0.0.0.0/" or "amqp://foobroker/queue". In the former case,
incoming messages will be arriving at the messenger on potentially
thousands of connections and links. In the latter case, messages will be
arriving at the messenger on a single link over one connection. If you have
to alter your application code for the different scenarios, then I would
say we've failed to provide a sufficient messenger implementation.
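To make that concrete, here's a rough sketch (Python binding again; run and
handle are hypothetical names) where the only thing that differs between
the two scenarios is the address string passed in:

    import sys
    from proton import Messenger, Message

    def run(address):
        # Identical logic whether address is "amqp://~0.0.0.0/"
        # (potentially thousands of inbound connections and links) or
        # "amqp://foobroker/queue" (one link over one connection).
        mng = Messenger()
        mng.start()
        mng.subscribe(address)
        msg = Message()
        while True:
            mng.recv(-1)          # let the messenger manage credit itself
            while mng.incoming:
                mng.get(msg)
                handle(msg)       # hypothetical application callback

    run(sys.argv[1])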

I would also point out that in the latter scenario you are still funneling
all the messages into a single queue, and you still need to solve the very
same credit allocation issues. That queue just happens to reside in a
remote broker rather than being colocated locally with the messenger.

--Rafael
