and +1 to what Tim says ;-)

On 13 March 2015 at 12:17, Gary Tully <gary.tu...@gmail.com> wrote:
> Kevin - peek at https://issues.apache.org/jira/browse/AMQ-5578; it
> gives a big win in terms of OS system work per journal write. It is,
> however, KahaDB-focused at the moment.
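>
> For reference, the KahaDB persistence adapter can also be configured in
> code, which makes it easy to experiment with the journal settings - a
> rough, untested sketch; the directory, file size, and port below are
> just placeholders:
>
>     import java.io.File;
>     import org.apache.activemq.broker.BrokerService;
>     import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
>
>     public class KahaDbBrokerSketch {
>         public static void main(String[] args) throws Exception {
>             BrokerService broker = new BrokerService();
>             broker.setBrokerName("perf-test");
>
>             KahaDBPersistenceAdapter kaha = new KahaDBPersistenceAdapter();
>             kaha.setDirectory(new File("/var/activemq/kahadb")); // placeholder path
>             kaha.setJournalMaxFileLength(32 * 1024 * 1024);      // 32MB journal files
>             // Disabling journal disk syncs trades durability for throughput;
>             // leave syncs on unless the per-write fsync is shown to be the bottleneck.
>             kaha.setEnableJournalDiskSyncs(true);
>
>             broker.setPersistenceAdapter(kaha);
>             broker.addConnector("tcp://0.0.0.0:61616");
>             broker.start();
>             broker.waitUntilStopped();
>         }
>     }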
>
> To drive the disk, the more parallel work the brokers can introduce,
> the better. In other words, to get max throughput, you can increase
> the number of parallel JMS connections per destination, then the
> number of destinations per broker, then the number of brokers. At
> each stage, track peak throughput until it starts to drop off before
> introducing the next level of parallelism.
>
> A broker will try to batch writes, but it needs parallel connections
> and/or destinations to do that.
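>
> Something along these lines (rough, untested sketch; the broker URL,
> queue name, and message counts are arbitrary) is enough to generate
> that kind of concurrent persistent load against a single queue:
>
>     import java.util.concurrent.ExecutorService;
>     import java.util.concurrent.Executors;
>     import javax.jms.Connection;
>     import javax.jms.DeliveryMode;
>     import javax.jms.MessageProducer;
>     import javax.jms.Session;
>     import org.apache.activemq.ActiveMQConnectionFactory;
>
>     public class ParallelProducerSketch {
>         public static void main(String[] args) throws Exception {
>             int producers = 8; // scale this up while tracking throughput
>             ActiveMQConnectionFactory factory =
>                     new ActiveMQConnectionFactory("tcp://localhost:61616");
>             ExecutorService pool = Executors.newFixedThreadPool(producers);
>             for (int i = 0; i < producers; i++) {
>                 pool.submit(() -> {
>                     try {
>                         // one connection/session per thread so writes arrive in parallel
>                         Connection connection = factory.createConnection();
>                         connection.start();
>                         Session session =
>                                 connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>                         MessageProducer producer =
>                                 session.createProducer(session.createQueue("perf.test"));
>                         producer.setDeliveryMode(DeliveryMode.PERSISTENT);
>                         for (int n = 0; n < 100_000; n++) {
>                             producer.send(session.createTextMessage("payload-" + n));
>                         }
>                         connection.close();
>                     } catch (Exception e) {
>                         e.printStackTrace();
>                     }
>                 });
>             }
>             pool.shutdown();
>         }
>     }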
>
> With SSD, write performance may not be the bottleneck, but working
> through a process like the above will identify what is.
>
> On 13 March 2015 at 03:00, Kevin Burton <bur...@spinn3r.com> wrote:
>> I’m trying to improve our ActiveMQ message throughput and I’m not really
>> seeing the performance I would expect.
>>
>> We moved from non-persistent to persistent messages (which is what we
>> ultimately want), and throughput is about 4-5x lower than before.
>>
>> Some slowdown would be reasonable, but our disks are SSDs and they’re
>> nowhere near 100% utilization - only about 5%.
>>
>> So perhaps instead of ONE ActiveMQ node on a 16-core box it might be
>> more beneficial to run, say, 4 or 8 (maybe putting them in containers)?
>>
>> Granted this would require a TON of work right now, but that might be the
>> best way to go (in theory).
>>
>> Kevin
>>
>> --
>>
>> Founder/CEO Spinn3r.com
>> Location: *San Francisco, CA*
>> blog: http://burtonator.wordpress.com
>> … or check out my Google+ profile
>> <https://plus.google.com/102718274791889610666/posts>
>> <http://spinn3r.com>
