It's an example of what Hammer can do, and we're already seeing some improvements 
with Infernalis.  I agree regarding tuning and optimization; a lot of work is 
currently underway toward that goal, as Piotr pointed out.

For completeness, we did do a bit of OSD tweaking a week or so ago; the results 
are in this mailing list thread: http://www.spinics.net/lists/ceph-devel/msg27256.html

Thanks,

Stephen

-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Daniel Swarbrick
Sent: Monday, November 23, 2015 11:39 AM
To: ceph-devel@vger.kernel.org
Subject: Re: All-Flash Ceph cluster and journal

A simple but clever way of ensuring that the NVMe devices' deep queues aren't 
starved of work.  But doesn't this suggest that Ceph needs optimizing or tuning 
for NVMe?  Could you not have tweaked OSD parameters to allow more threads / IO 
operations in parallel and achieved the same effect?

On 23/11/15 18:37, Blinick, Stephen L wrote:
> This link points to a presentation we did a few weeks back where we used NVMe 
> devices for both data and journals.  We created multiple partitions on each 
> device to co-locate multiple OSDs per device.  The cluster configuration 
> details are in the backup slides.
> 
> http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds
> 
> Thanks,
> 
> Stephen
> 
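
For anyone curious what that co-location looks like in practice, a rough sketch 
is below.  The tool, device name, partition counts, and sizes here are 
illustrative only -- the actual layout from the deck is in the backup slides, 
not reproduced here:

    # Illustrative sketch: carve one NVMe device into 4 journal + 4 data
    # partitions so it can host 4 co-located OSDs.
    DEV=/dev/nvme0n1
    for i in 1 2 3 4; do
        sgdisk --new=$i:0:+20G --change-name=$i:"osd-journal-$i" $DEV
    done
    for i in 5 6 7 8; do
        sgdisk --new=$i:0:+350G --change-name=$i:"osd-data-$((i-4))" $DEV
    done
    # Each data/journal partition pair is then prepared as its own OSD,
    # e.g. ceph-disk prepare /dev/nvme0n1p5 /dev/nvme0n1p1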

