Hello,

see the current "Blocked requests/ops?" thread in this ML, especially the
later parts.
And a number of similar threads.

In short, the CPU requirements for SSD-based pools are significantly
higher than for HDD or HDD/SSD-journal pools.

So having dedicated SSD nodes with fewer OSDs, faster CPUs and
potentially a faster network makes a lot of sense.
It also helps a bit to keep you and your CRUSH rules sane.
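
For example, with dedicated SSD nodes you can hang them off their own
CRUSH root and target them with a very simple rule. A rough sketch
(Hammer-era syntax, host and pool names are just placeholders for your
setup):

  ceph osd crush add-bucket ssd root
  ceph osd crush move ssd-node-1 root=ssd
  ceph osd crush move ssd-node-2 root=ssd

and in the (decompiled) CRUSH map a rule like:

  rule ssd {
          ruleset 1
          type replicated
          min_size 1
          max_size 10
          step take ssd
          step chooseleaf firstn 0 type host
          step emit
  }

which the cache pool then gets assigned to with
"ceph osd pool set cachepool crush_ruleset 1".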

In your example you'd have 12 HDD-based OSDs with SSD journals, at
1.5-2GHz of CPU per OSD (things will get CPU-bound with small write
IOPS).
An SSD-based OSD (I'm assuming something like a DC S3700) will eat all
the CPU you can throw at it; 6-8GHz would be a pretty conservative
number.
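
To put rough numbers on that (back-of-the-envelope, using the figures
above): 12 HDD OSDs x ~2GHz is about 24GHz, plus 2 cache-tier SSD OSDs
x ~7GHz is another ~14GHz, so close to 40GHz worth of CPU per mixed
node, versus roughly 24GHz if a node only holds the HDD OSDs. That gap
is the main argument for putting the SSD OSDs on their own, CPU-heavy
boxes.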

Search the archives for the latest tests/benchmarks by others; don't
take my (slightly dated) word for it.

Lastly, you may find, like others, that cache tiers currently aren't
all that great performance-wise.
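
If you do go that route anyway, the basic wiring is only a handful of
commands (pool names here are placeholders, and you will want to tune
the hit_set and target_max_* settings for your workload):

  ceph osd tier add rbd cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay rbd cachepool
  ceph osd pool set cachepool hit_set_type bloom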

Christian.

On Sat, 30 May 2015 10:36:39 +0200 Martin Palma wrote:

> Hello,
> 
> We are planning to deploy our first Ceph cluster with 14 storage nodes
> and 3 monitor nodes. Each storage node has 12 SATA disks and 4 SSDs. We
> plan to use 2 of the SSDs as journal disks and 2 for cache tiering.
> 
> Now the question was raised in our team whether it would be better to
> put all the SSDs in, let's say, 2 storage nodes and consider them as
> fast nodes, or to distribute the SSDs for the cache tiering over all 14
> nodes (2 per node).
> 
> In my opinion, if I understood the concept of Ceph right (I'm still in
> the learning process ;-) distributing the SSDs across all storage nodes
> would be better, since this would also distribute the network traffic
> (client access) across all 14 nodes and not limit it to only 2 nodes.
> Right?
> 
> Any suggestion on that?
> 
> Best,
> Martin


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
