missed the 'k' on the 80.. both 80k :-)
Carl Trieloff wrote:
ack, there are good cases to batch - but you just need to comp against
the same thing, i.e.
Rabbit 80k/sec on 16 CPU at 4k
Qpid 80/sec on 2 CPU at 4k
so the direct comparison is: Qpid uses 8x less CPU for the same 1.2M
batched figure.
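(Taking the correction at the top - both figures are 80k/sec - the
arithmetic behind that 8x is just the core counts: the same 80k AMQP
msgs/sec on 16 CPUs for Rabbit vs 2 CPUs for Qpid, and 16/2 = 8x less
CPU for the same rate.)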
Carl.
Rupert Smith wrote:
Thanks for that Carl. I suspected the same thing.
Batching multiple application-level messages into one transport-level
message seems like a valid strategy for small messages which are all
going to the same destination, where higher than normal throughput is
required. Those unqualified numbers kind of give the impression that
you could set up a selector, or topic matching, or a series of
different queues on a 1.3M message stream to filter out just the
subset of events that you are interested in (which sounds like the
sort of thing someone would actually want to do with that event
stream). With batching you can no longer route each application-level
event individually.
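To make that trade-off concrete, here is a rough, hypothetical sketch
(plain Java, no particular AMQP client API - the class and method names
are made up for illustration) of packing many small application events
into one transport message, and why the broker can then only route or
filter the batch as a whole:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only: packs small application-level events
// into one transport-level payload using a simple [2-byte length][bytes]
// framing. The real OPRA encoding and the AMQP client calls are not
// shown here.
public final class EventBatcher {

    // Pack a batch of small events into a single payload that will be
    // published as ONE AMQP message.
    public static byte[] pack(List<byte[]> events) {
        int size = 0;
        for (byte[] e : events) {
            size += 2 + e.length;
        }
        ByteBuffer buf = ByteBuffer.allocate(size);
        for (byte[] e : events) {
            buf.putShort((short) e.length); // assumes each event < 32 KB
            buf.put(e);
        }
        // The broker sees this as one opaque message: selectors, topic
        // matching and per-queue routing apply to the batch, not to the
        // individual events inside it.
        return buf.array();
    }

    // The consumer has to unpack before it can look at individual
    // events, i.e. any per-event filtering has moved out of the broker
    // and into the application.
    public static List<byte[]> unpack(byte[] payload) {
        ByteBuffer buf = ByteBuffer.wrap(payload);
        List<byte[]> events = new ArrayList<>();
        while (buf.hasRemaining()) {
            byte[] e = new byte[buf.getShort()];
            buf.get(e);
            events.add(e);
        }
        return events;
    }
}

Any selector or topic match runs against the single packed message; to
filter per event you have to unpack on the consumer side, which is
exactly the routing you give up.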
Also, they don't claim that Rabbit does 1.3M messages per second, but
they try to lead the reader to falsely infer it from the claim that
they do make.
Rupert
On 10/01/2008, Carl Trieloff <[EMAIL PROTECTED]> wrote:
> I especially wanted to run perf tests against Rabbit after reading
> their 1.3M msgs/sec claims.
>
Yea, that is why I played with it... the numbers they put out are
bogus.
From their list: "each datagram contained 16 OPRA messages, and was
sent as one 256 byte AMQP message.... Ingress of 80,000 AMQP messages
per second"
That is just not very impressive given they used a 16 core box to do
it and 4k messages. I did some comp runs when the numbers first came
out, and Rabbit is a hog compared to qpid, and a lot slower for the
same CPU - from my tests, 16x slower at equivalent CPU. If you use
batched numbers, the larger the message the easier it is to fill the
pipe. I would ignore the 1.2M number... it is more like: the best rate
for Rabbit on a 16-way state of the art box is 80k msg/sec with no ack
on publish at 4k.
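Just to spell out where that headline number comes from, using only
their own figures:

    80,000 AMQP msgs/sec x 16 OPRA messages per AMQP message
        = 1,280,000 OPRA msgs/sec (the ~1.3M claim)

so the broker itself is only ever handling 80k messages per second;
the rest of the headline is application-level batching.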
For comp, I have seen trunk break 250k msg/sec with full ack on
publish using <4 cores.
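A rough per-core normalization using only the numbers quoted in this
thread (the 16x figure above comes from my own comp runs at equivalent
CPU, so it won't line up exactly with this back-of-envelope):

    Rabbit:  80,000 msgs/sec / 16 cores  =  5,000 msgs/sec per core (no ack)
    Qpid:   250,000 msgs/sec / <4 cores  > 62,500 msgs/sec per core (full ack)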
Once I got that far I stopped investing time into issues that
came up.
Carl.