This link points to a presentation we gave a few weeks back where we used NVMe 
devices for both the data and the journal.  We partitioned the devices multiple 
times to co-locate multiple OSDs per device.  The cluster's configuration 
details are in the backup slides.

http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds
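The multi-OSD-per-device layout described above can be sketched roughly as follows. This is only an illustration, not the exact setup from the presentation: the device name /dev/nvme0n1, the four-way split, and the use of the (era-appropriate, FileStore-based) ceph-disk tool are all assumptions.

```shell
# DESTRUCTIVE: repartitions the named device. /dev/nvme0n1 and the
# four-way equal split are illustrative assumptions, not the actual
# layout from the presentation.
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart osd1 0% 25%
parted -s /dev/nvme0n1 mkpart osd2 25% 50%
parted -s /dev/nvme0n1 mkpart osd3 50% 75%
parted -s /dev/nvme0n1 mkpart osd4 75% 100%

# One FileStore OSD per partition; when ceph-disk prepare is given only
# a data partition, the journal ends up co-located with the data.
for p in 1 2 3 4; do
    ceph-disk prepare /dev/nvme0n1p${p}
done
```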

Thanks,

Stephen

-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of 
piotr.da...@ts.fujitsu.com
Sent: Friday, November 20, 2015 3:50 AM
To: Mike Almateia; Ceph Development
Subject: RE: All-Flash Ceph cluster and journal

> -----Original Message-----
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- 
> ow...@vger.kernel.org] On Behalf Of Mike Almateia
> Sent: Friday, November 20, 2015 11:43 AM

> By now, is it reasonable to use NVMe Flash for OSDs in Ceph? Overpowered? 
> Is it possible to achieve the full speed of an NVMe Flash drive under Ceph?

Yes and no. Ceph on any flash drive will perform far better than on regular 
spinning disks, though it certainly will not utilize the drive's full potential. 
There is an ongoing effort by multiple developers from multiple companies to fix 
that, and things are getting better with each release.


With best regards / Pozdrawiam
Piotr Dałek

