Hello,

On Tue, 12 Jul 2016 19:14:14 +0200 (CEST) Wido den Hollander wrote:

> 
> > On 12 July 2016 at 15:31, Ashley Merrick <ash...@amerrick.co.uk> wrote:
> > 
> > 
> > Hello,
> > 
> > Looking at final stages of planning / setup for a CEPH Cluster.
> > 
> > Per storage node, looking at:
> > 
> > 2 x SSD for OS / journals
> > 10 x SATA disks
> > 
> > We will have a small RAID-1 partition for the OS, but I'm not sure
> > whether it's best to do:
> > 
> > 5 x journals per SSD
> 
> Best solution. Will give you the most performance for the OSDs. RAID-1 will 
> just burn through cycles on the SSDs.
> 
> SSDs don't fail that often.
>
What Wido wrote, but let us know what SSDs you're planning to use.

Because the detailed version of that sentence should read:
"Well-known and tested DC-level SSDs whose size/endurance levels are
matched to the workload rarely fail, especially not unexpectedly."
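To put a number on "matched to the workload", here's a rough endurance sketch. The 20 MB/s sustained write rate per OSD and the 5-year lifetime are illustrative assumptions, not figures from this cluster:

```python
# Rough journal-SSD endurance estimate. All input figures are assumed
# for illustration; plug in your own measured write rates.

def journal_tbw(osds_per_ssd, mb_per_sec_per_osd, years):
    """Total data written to one journal SSD over its lifetime, in TB.

    Every byte an OSD writes also passes through its journal, so the
    SSD absorbs the combined write stream of all journals it hosts.
    """
    seconds = years * 365 * 24 * 3600
    bytes_written = osds_per_ssd * mb_per_sec_per_osd * 1e6 * seconds
    return bytes_written / 1e12  # TB written

# Assumed: 5 journals per SSD, 20 MB/s sustained writes per OSD, 5 years.
print(f"{journal_tbw(5, 20, 5):.0f} TBW")  # ~15768 TB
```

Compare that figure against the rated TBW on the SSD's data sheet; consumer drives fall far short of it, which is the point about DC-level parts above.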
 
> Wido
> 
> > 10 x journals on a RAID-1 of the two SSDs
> > 
> > Is the performance increase from splitting 5 journals onto each SSD worth 
> > the trouble caused when one SSD goes down?
> > 
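For a rough sense of the performance side of that trade-off: a RAID-1 pair absorbs every journal write twice, so its usable aggregate write bandwidth is that of a single SSD. A back-of-the-envelope model (the 400 MB/s SSD write speed is an assumed figure, not a spec for any particular drive):

```python
# Per-journal write bandwidth: journals split across SSDs vs. all
# journals on a RAID-1 mirror. SSD speed below is an assumption.

SSD_WRITE_MBPS = 400  # assumed sequential write speed of one SSD

def per_journal_mbps(n_ssds, journals_total, mirrored):
    # A mirror writes everything to both drives, so the pair's usable
    # write bandwidth equals a single SSD's; split SSDs add up.
    usable = SSD_WRITE_MBPS * (1 if mirrored else n_ssds)
    return usable / journals_total

split = per_journal_mbps(2, 10, mirrored=False)  # 5 journals per SSD
raid1 = per_journal_mbps(2, 10, mirrored=True)   # 10 journals on the mirror
print(split, raid1)  # 80.0 vs 40.0 MB/s per journal
```

So the mirror halves per-journal bandwidth while doubling wear, which is why it "burns through cycles" for little gain.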
As always, assume that at least a full node is the failure domain you need
to be able to handle.
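The blast radius of each failure mode can be put in numbers the same way; the disk size and fill level below are assumptions for illustration:

```python
# Re-replication load after a failure, for the proposed layout of
# 10 OSDs per node with 5 journals per SSD. Disk size and fill ratio
# are illustrative assumptions.

def rebalance_load_tb(osds_lost, tb_per_osd, fill_ratio):
    """Data the cluster must re-replicate after losing these OSDs."""
    return osds_lost * tb_per_osd * fill_ratio

# One journal SSD dying takes out its 5 OSDs; a node takes out all 10.
ssd_failure = rebalance_load_tb(5, 4.0, 0.5)    # assumed 4 TB disks, 50% full
node_failure = rebalance_load_tb(10, 4.0, 0.5)
print(ssd_failure, node_failure)  # 10.0 TB vs 20.0 TB
```

If the cluster can absorb a whole node's worth of rebalancing, an SSD failure (half that) is already covered, so the split-journal layout costs nothing extra in resilience planning.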

Christian

> > Thanks,
> > Ashley
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
