May I ask why you are using the elrepo kernel with CentOS?
AFAIK, Red Hat backports all the Ceph features to its 3.10 kernels. Am I wrong?

On Fri, Feb 2, 2018 at 2:44 PM, Richard Hesketh
<richard.hesk...@rd.bbc.co.uk> wrote:
> On 02/02/18 08:33, Kevin Olbrich wrote:
>> Hi!
>>
>> I am planning a new Flash-based cluster. In the past we used SAMSUNG PM863a 
>> 480G as journal drives in our HDD cluster.
>> After a lot of tests with luminous and bluestore on HDD clusters, we plan to 
>> re-deploy our whole RBD pool (OpenNebula cloud) using these disks.
>>
>> As far as I understand, it would be best to skip a separate journal / WAL 
>> device and just deploy every OSD standalone, one per disk (example below). 
>> This would have the following pros (correct me if I am wrong):
>> - maximum performance, as the journal / WAL load is spread across all devices
>> - a lost drive does not affect any other drive
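>>
>> Concretely, I was thinking of something like the following per SSD (the 
>> device name is just a placeholder), letting bluestore keep data, DB and 
>> WAL together on the one device:
>>
>>   ceph-volume lvm create --bluestore --data /dev/sdb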
>>
>> Currently we are on CentOS 7 with an elrepo 4.4.x kernel. We plan to migrate to 
>> Ubuntu 16.04.3 with HWE (kernel 4.10).
>> Clients will be Fedora 27 + OpenNebula.
>>
>> Any comments?
>>
>> Thank you.
>>
>> Kind regards,
>> Kevin
>
> There is only a real advantage to separating the DB/WAL from the main data if 
> they're going to be hosted on a device which is appreciably faster than the 
> main storage. Since you're going all SSD, it makes sense to deploy each OSD 
> all-in-one; as you say, you don't bottleneck on any one disk, and it also 
> offers you more maintenance flexibility as you will be able to easily move 
> OSDs between hosts if required. If you wanted to start pushing performance 
> more, you'd be looking at putting NVMe disks in your hosts for DB/WAL.
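>
> If you did later want to try that, the shape of it with ceph-volume would be 
> something like the below (device names are just placeholders; you'd normally 
> split the NVMe into one partition or LV per OSD, and the WAL sits inside the 
> DB unless you point it elsewhere):
>
>   ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1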
>
> FYI, the 16.04 HWE kernel has now rolled over to 4.13.
>
> Rich
>
