> On Dec 1, 2016, at 6:26 PM, Christian Balzer <ch...@gol.com> wrote:
> 
> On Thu, 1 Dec 2016 18:06:38 -0600 Reed Dier wrote:
> 
>> Apologies if this has been asked dozens of times before, but most answers 
>> are from pre-Jewel days, and I want to double check that the methodology 
>> still holds.
>> 
> It does.
> 
>> Currently have 16 OSD’s across 8 machines with on-disk journals, created 
>> using ceph-deploy.
>> 
>> These machines have NVMe storage (Intel P3600 series) for the system volume, 
>> and I’m thinking about carving out a partition for SSD journals for the 
>> OSD’s. The system volume doesn’t make much use of the NVMe, so there should 
>> be plenty of I/O headroom to support the OSD journaling, and the P3600 should 
>> have the endurance to handle the added write wear.
>> 
> Slight disconnect there, money for a NVMe (which size?) and on disk
> journals? ^_-

NVMe was already in place before the ceph project began. 400GB P3600, with 
~275GB available space after swap partition.
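
For reference, here is roughly how I’d carve the journal partitions out of the 
free space (going with 10GB per your note below). This is just a sketch; the 
device path, partition numbers, and the ceph journal type GUID are what I’d 
expect on my boxes, not verified:
---
# three journal partitions on the P3600 (GPT, assuming /dev/nvme0n1)
sgdisk --new=5:0:+10G --change-name=5:"ceph journal" \
  --typecode=5:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
sgdisk --new=6:0:+10G --change-name=6:"ceph journal" \
  --typecode=6:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
sgdisk --new=7:0:+10G --change-name=7:"ceph journal" \
  --typecode=7:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
partprobe /dev/nvme0n1
---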

>> From what I’ve read, you need a partition per OSD journal, so with a third 
>> (and final) OSD likely being added to each node, I should create 3 partitions, 
>> each ~8GB in size. Is this a good value? These are 8TB OSD’s; is the journal 
>> size based on the amount of data, the number of objects, or something else?
>> 
> Journal size is unrelated to the OSD per se. With default parameters and
> HDDs for OSDs, a size of 10GB would be more than adequate; the default of
> 5GB would do as well.

I was under the impression that it was agnostic to either metric, but figured I 
should ask while I had the chance.
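
Noted on the size. If I follow, bumping it is just a ceph.conf setting on the 
OSD nodes before the journals are recreated (value in MB, if I’m reading the 
docs right):
---
[osd]
osd journal size = 10240   # 10GB instead of the 5GB default
---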

>> So:
>> {create partitions}
>> ceph osd set noout
>> service ceph stop osd.$i
>> ceph-osd -i $i --flush-journal
>> rm -f rm -f /var/lib/ceph/osd/<osd-id>/journal
> Typo and there should be no need for -f. ^_^
> 
>> ln -s /dev/<ssd-partition-for-journal> /var/lib/ceph/osd/<osd-id>/journal
> Even though in your case with a single(?) NVMe there is little chance for
> confusion, ALWAYS reference devices by their UUID or similar; I prefer
> the ID:
> ---
> lrwxrwxrwx   1 root root    44 May 21  2015 journal -> /dev/disk/by-id/wwn-0x55cd2e404b73d570-part4
> ---

Correct, would reference by UUID.
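
Something along these lines to pin down the stable name for each partition 
(device names here are just what I’d expect, not verified):
---
ls -l /dev/disk/by-id/ | grep nvme
udevadm info --query=symlink --name=/dev/nvme0n1p5
---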

Thanks again for the sanity check.
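
For my own notes, the per-OSD sequence with your corrections folded in (a 
sketch; assumes the ceph-$i directory naming, sysvinit-style service control, 
and that the journal partitions already exist):
---
ceph osd set noout
service ceph stop osd.$i
ceph-osd -i $i --flush-journal
rm /var/lib/ceph/osd/ceph-$i/journal
ln -s /dev/disk/by-id/<nvme-id>-part<n> /var/lib/ceph/osd/ceph-$i/journal
ceph-osd -i $i --mkjournal
service ceph start osd.$i
ceph osd unset noout
---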

Reed

> 
>> ceph-osd -i $i --mkjournal
>> service ceph start osd.$i
>> ceph osd unset noout
>> 
>> Does this logic appear to hold up?
>> 
> Yup.
> 
> Christian
> 
>> Appreciate the help.
>> 
>> Thanks,
>> 
>> Reed
> 
> -- 
> Christian Balzer        Network/Systems Engineer                
> ch...@gol.com         Global OnLine Japan/Rakuten Communications
> http://www.gol.com/

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
