Hi,
I wondered whether you ever managed to significantly improve your
Riak CS performance?

I only ask because we've been getting not-dissimilar performance out
of Riak CS too (4-5 MB/s max per client, on bare-metal hardware), and
have been for quite a long time. (I swear it was faster originally,
when there was a lot less data in the whole system.)
This is after applying all the tweaks available -- networking stack,
filesystem mount options, assorted Erlang vm.args, and increased put
concurrency/buffer options.
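
For reference, and in case anyone spots something obviously wrong,
this is the sort of thing we mean by the put concurrency/buffer
tweaks, as we understand those settings -- the values below are
illustrative rather than our exact ones:

  %% in the riak_cs section of app.config (illustrative values)
  %% blocks buffered in memory before a PUT back-pressures the client
  {put_buffer_factor, 4},
  %% workers used to write blocks out to Riak per PUT
  {put_concurrency, 4},

  %% in vm.args: raise the Erlang distribution buffer busy limit (KB)
  +zdbbl 96000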

We put up with it because it's been just about sufficient for our
needs, and Riak CS has been reliable and easy to administer -- but
it's becoming more of an issue, so I'm curious to know whether other
people *do* manage to achieve *good* per-client speeds out of Riak CS,
or whether this is just how things always are.
And if we're way off the mark, maybe we can find out why.

Details of our setup:
6-node cluster. Ring size of 64.
Riak 1.4.10
Riak CS 1.5.2
(installed from official Basho repos)

Tests were conducted using both multi-part and non-multi-part upload
modes; performance is similar with both. Testing was done against the
cluster while it was very lightly loaded.
For the tests we use a 100 MB file containing random (hard to
compress) data.
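
In case it's useful to compare like with like, this is roughly how we
run the test (the bucket name and chunk size here are just
placeholders, not our actual values):

  # 100 MB of random, hard-to-compress data
  dd if=/dev/urandom of=test100m.bin bs=1M count=100

  # single PUT (no multipart)
  time s3cmd put --disable-multipart test100m.bin s3://test-bucket/

  # multipart upload, 15 MB parts
  time s3cmd put --multipart-chunk-size-mb=15 test100m.bin s3://test-bucket/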

Cheers,
Toby

On 8 November 2014 at 01:41, David Meekin <david.mee...@autotrader.co.uk> wrote:
> Hi,
> I’ve set up a test 4-node RiakCS cluster on HP BL460c hardware and I can’t
> seem to get S3 upload speeds above 2MB/s.
> I’m connecting direct to RiakCS on one of the nodes so there is no load
> balancing software in place.
> I have also installed s3cmd locally onto one of the nodes and the speeds 
> locally are the same.
> These 4 nodes also run a test CEPH cluster with RadosGW and s3 uploads to 
> CEPH achieve 125MB/s
> Any help would be appreciated as I’m currently evaluating both CEPH and 
> RiakCS.
