Keep in mind that as soon as you have a network of brokers, some portion of
your messages will have to cross two brokers to get from producer to
consumer (and more than that if your topology isn't a full mesh) instead of
just one.  So my gut instinct (with nothing to back it up) is that if
you're going to subdivide, you should break up into as many instances as
you think your server can handle; definitely don't make a cluster of just
2 brokers, or you might end up slower than you were with one.  Then you
need to do a good job of balancing your producers and your consumers
across the cluster (which might be easy or might be hard, depending on how
regular and predictable their workloads are).
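One simple way to spread clients across the cluster is ActiveMQ's failover
transport with randomized broker selection, so each client picks a broker at
random on connect.  A sketch, assuming four brokers on hostnames
broker1..broker4 (the names and port are placeholders):

```
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616,tcp://broker4:61616)?randomize=true
```

This only balances connection counts, not message volume, so a few hot
producers can still skew the load toward one broker.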

But I'm curious about what your bottleneck is if it's not disk I/O.  Are
you pegging your CPUs?  Saturating your NICs?  Swapping madly?  If it's
none of those, maybe lock contention inside the KahaDB/LevelDB (which is
it?) persistence code is to blame...
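For quick triage of those three suspects on a Linux box, you can read the
kernel counters directly from /proc (a sketch; tool names like sar/iostat
vary by distro, so this sticks to /proc, and you'd sample twice and diff
the counters to get rates):

```shell
#!/bin/sh
# CPU: aggregate jiffies per state (user, nice, system, idle, iowait, ...)
# on the first line of /proc/stat.
head -n1 /proc/stat

# NIC throughput: cumulative bytes/packets in and out per interface.
cat /proc/net/dev

# Swap activity: pages swapped in/out since boot; if these climb between
# samples, you're actively swapping.
grep -E '^pswp(in|out)' /proc/vmstat
```

If CPU, network, and swap all look quiet, that points back at contention
inside the broker itself rather than the hardware.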

Tim

On Thu, Mar 12, 2015 at 9:00 PM, Kevin Burton <bur...@spinn3r.com> wrote:

> I’m trying to improve our ActiveMQ message throughput and I’m not really
> seeing the performance I would expect.
>
> We moved from non-persistent to persistent messaging (which is what we
> ultimately want) and the performance is about 4-5x slower than before.
>
> I think this would be somewhat reasonable but our disks aren’t really at
> 100% utilization and they’re on SSD.  They’re only at about 5% utilization.
>
> So perhaps instead of ONE ActiveMQ node on a 16 core box it might be more
> beneficial to use say 4 or 8 ? (maybe putting them in containers).
>
> Granted this would require a TON of work right now, but that might be the
> best way to go (in theory).
>
> Kevin
>
> --
>
> Founder/CEO Spinn3r.com
> Location: *San Francisco, CA*
> blog: http://burtonator.wordpress.com
> … or check out my Google+ profile
> <https://plus.google.com/102718274791889610666/posts>
> <http://spinn3r.com>
>
