On 05/14/2014 06:36 PM, Tyler Wilson wrote:
Hey All,

Hi!


I am setting up a new storage cluster that absolutely must have the best possible sequential read/write speed at 128k and the highest possible IOPS at 4k read/write.

I assume random?
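(For reference, when you get to benchmarking: the fio runs below are roughly what I'd use to measure 4k random IOPS and 128k sequential throughput against a raw device. The device path, pool name, queue depths and runtimes are just placeholders, so adjust to taste.)

  # 4k random read IOPS directly against one SSD
  fio --name=4k-randread --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --iodepth=32 --numjobs=4 --group_reporting \
      --time_based --runtime=60

  # 128k sequential read throughput, same idea
  fio --name=128k-seqread --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=read --bs=128k --iodepth=16 --numjobs=1 --group_reporting \
      --time_based --runtime=60

  # once the cluster is up, something cluster-level like:
  rados -p testpool bench 60 write -b 4096 -t 32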


My current specs for each storage node are:
CPU: 2x E5-2670V2
Motherboard: SM X9DRD-EF
OSD Disks: 20-30 Samsung 840 1TB
OSD Journal(s): 1-2 Micron RealSSD P320h
Network: 4x 10Gb, bridged
Memory: 32-96GB depending on need

Does anyone see any potential bottlenecks in the above specs? What kind
of improvements or configuration changes can we make on the OSD side? We
are looking to run this with 2x replication.

Likely you'll run into latency due to context switching and lock contention in the OSDs, and maybe even some kernel slowness. You could also end up CPU limited, even with the E5-2670v2s, given how fast all of those SSDs are. I'd suggest considering a chassis without an expander backplane and using multiple controllers with the drives directly attached.
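(If you want to see how hard that's biting you during a benchmark run, a quick sanity check is something like the below; just a sketch, and it assumes sysstat and perf are installed and the OSD daemons are named ceph-osd.)

  # system-wide context switches per second (the "cs" column)
  vmstat 1

  # per-thread voluntary/involuntary context switches for the OSD daemons
  pidstat -w -t -p $(pgrep -d, ceph-osd) 1

  # rough look at where CPU time is going inside the OSDs
  perf top -p $(pgrep -d, ceph-osd)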

There's work going into improving things on the Ceph side, but I don't know how much of it has even hit our wip branches on GitHub yet. So for now YMMV, but there's a lot of work going on in this area, since it's something a lot of folks are interested in.

I'd also suggest testing whether putting all of the journals on the RealSSD cards actually helps you that much over just putting the journals on the other SSDs. The advantage of putting journals on the 2.5" SSDs is that you don't lose a pile of OSDs if one of those PCIe cards fails.
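(If you do put journals on the 2.5" SSDs, the usual ceph-disk way is just to hand it a separate journal device or partition; the device names below are made up and the journal size is only an example.)

  # ceph.conf: cap the journal size (value is in MB)
  #   [osd]
  #   osd journal size = 10240

  # data on one SSD, journal on (a partition of) another SSD
  ceph-disk prepare /dev/sdb /dev/sdc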

The only other thing I would be careful about is making sure that your SSDs handle power failure during writes gracefully. Not all SSDs behave the way you would expect.
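(You can't really verify power-loss behavior from software -- check the data sheet for capacitor-backed power-loss protection, or literally pull the plug mid-write -- but a quick sync-write test will at least tell you whether a drive copes with journal-style writes. Consumer drives without power-loss protection often fall off a cliff here. The device path is a placeholder, and note this overwrites the device.)

  # 4k O_SYNC writes at queue depth 1, roughly what a journal does
  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --iodepth=1 --numjobs=1 --time_based --runtime=60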


Thanks for your assistance with this, guys.

np, good luck!



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

