On Thu, 2014-11-20 at 14:10 -0500, Michael Goulish wrote:
I am using two processors, both Intel Xeon E5420 @ 2.50GHz with 6144
KB cache. (Letting the OS decide which processors to use for my two
processes.)

On that system, with the above credit scheme, the test is sustaining
throughput of 408,500 messages per second. That's over a single link,
between two singly-threaded processes.

This is significantly faster than my previous, non-event-based code,
and I find the code *much* easier to understand.


Out of curiosity, on the same hardware, how does that test perform relative to the Messenger-based soak tests msgr-send/msgr-recv? And what if you tweaked msgr-send/msgr-recv to use non-blocking and passive mode? I'm curious about where the Messenger bottlenecks might be.


FWIW, I definitely think there's mileage in event-based operation. I'm also pretty interested in the best way to have things scale across lots of cores; that's one worry I have with qpidd and the traditional clients. Do we know when lock contention starts to limit throughput? Given the initiatives in ActiveMQ Apollo for more async, lock-free operation (I think it uses HawtDispatch, but I'm no expert), I suspect that now is a good time to think about how Qpid-based systems might scale across lots of cores.

That said, with talk of new APIs I think we should have a reasonably clear "roadmap". We've already got qpid::messaging and Messenger, plus two separate AMQP 1.0 JMS clients, not to mention the potential confusion between the native Python API and the Python qpid::messaging binding (and don't get me started on QMF: three separate APIs depending on the language :'( ).

I don't think we've done a great job of clearing up the confusion around the differing APIs that we have.

I could have predicted a change brewing, 'cause I've finally (just about) got my head around Messenger :-D

Frase
