> So if you batch disk writes, you either batch acks (which has a negative
> performance impact; I'm not sure if the loss will be greater or less than
> the gain from batching writes, but at a minimum it'll decrease the expected
> improvement from batching writes)



Yes.  This is the point I’m trying to make.

If you’re on an HDD without any form of write caching, the MAX you can
commit is around 100 syncs per second.
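
(Back-of-the-envelope, assuming each synchronous write costs roughly one
disk rotation plus a bit of seek on a 7200 RPM drive:

    7200 RPM / 60 = ~120 rotations/s  =>  ~8-10 ms per fsync  =>  ~100 fsyncs/s)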

So the sync throughput for ActiveMQ will be a bit lower than this, since
ActiveMQ has to do additional work (memory copying, etc.), but in theory
it should still be about 95% of that number.

However, if you batch commits, you have ~10 ms between each sync in which
you can accumulate messages. ALL the messages you accumulate in that
window can then be synced in the next round.
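
A minimal sketch of that loop (the Journal interface here is a made-up
stand-in for the broker’s append-only log, not ActiveMQ’s actual API, and
all names are illustrative):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class GroupCommit {

        /** Hypothetical append-only journal standing in for the broker's log. */
        interface Journal {
            void append(byte[] payload);   // buffered sequential write
            void sync();                   // fsync: ~10 ms on a raw HDD
        }

        /** A message plus the callback that acks it back to the producer. */
        static class Pending {
            final byte[] payload;
            final Runnable ack;
            Pending(byte[] payload, Runnable ack) {
                this.payload = payload;
                this.ack = ack;
            }
        }

        private final BlockingQueue<Pending> incoming = new LinkedBlockingQueue<>();
        private final Journal journal;

        GroupCommit(Journal journal) {
            this.journal = journal;
        }

        /** Producers call this; the ack fires only after the batch is on disk. */
        void submit(byte[] payload, Runnable ack) {
            incoming.add(new Pending(payload, ack));
        }

        /** Single writer thread. */
        void commitLoop() throws InterruptedException {
            List<Pending> batch = new ArrayList<>();
            while (true) {
                batch.add(incoming.take());  // block for the first message
                incoming.drainTo(batch);     // plus everything queued during the last sync
                for (Pending p : batch) {
                    journal.append(p.payload);  // sequential appends, no sync yet
                }
                journal.sync();                 // ONE fsync covers the whole batch
                for (Pending p : batch) {
                    p.ack.run();                // batched acks, but no message loss
                }
                batch.clear();
            }
        }
    }

Note the ordering: append everything, sync once, then ack everything.
That’s the batched-ACK tradeoff I get back to below.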

(I’m sure you guys know this stuff, just codifying it)

The next time around you can write ALL the messages at once, and with
LevelDB the performance should be near sequential disk IO (in theory,
since it’s appending to the log), which is pretty darn fast.
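
A minimal sketch of that single-sync batch, using the pure-Java iq80
LevelDB port (the JNI binding exposes the same WriteBatch/WriteOptions
shape; the keys and values here are made up for illustration):

    import java.io.File;

    import org.iq80.leveldb.DB;
    import org.iq80.leveldb.Options;
    import org.iq80.leveldb.WriteBatch;
    import org.iq80.leveldb.WriteOptions;
    import org.iq80.leveldb.impl.Iq80DBFactory;

    public class BatchedJournalWrite {
        public static void main(String[] args) throws Exception {
            Options options = new Options().createIfMissing(true);
            DB db = Iq80DBFactory.factory.open(new File("journal-db"), options);
            try {
                // Everything accumulated during the 10 ms window goes in one batch.
                try (WriteBatch batch = db.createWriteBatch()) {
                    for (int i = 0; i < 1000; i++) {
                        batch.put(Iq80DBFactory.bytes("msg:" + i),
                                  Iq80DBFactory.bytes("payload-" + i));
                    }
                    // One synced write: LevelDB appends the whole batch to its
                    // log sequentially, then fsyncs once.
                    db.write(batch, new WriteOptions().sync(true));
                }
            } finally {
                db.close();
            }
        }
    }

Assuming ~100 MB/s of sequential throughput, a 10 ms window buys you
roughly 1 MB of log, i.e. on the order of a thousand 1 KB messages per
sync instead of one.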

So in practice you DO have to batch the ACKs back, but that’s more than
made up for by the performance bump.

Disk is just so amazingly slow.

I’d be willing to do a proof-of-concept branch with benchmarks if someone
can point me to the right spot in the code.  Should be a small chunk of
code.

> or you optimistically ack before the bits
> hit the disk (which violates the JMS contract because it allows message
> loss if the server crashes before the batch is written).

-- 

Founder/CEO Spinn3r.com
Location: San Francisco, CA
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
<https://plus.google.com/102718274791889610666/posts>
<http://spinn3r.com>
