Hello Ceph Users!

 This is my first message to this list; thanks in advance for any help!

 I've been playing with Ceph Ansible for a couple of months, including
deploying it from within OpenStack Ansible (still using Ceph Ansible
underneath), and it works great! But I have a feeling that I don't know
exactly what I'm doing...

 To try to better understand Ceph, I got a new PC with the following
storage devices:

 2 x NVMe of 1T - /dev/md0 for the root file system (XFS)
 2 x SATA SSD of 250G
 2 x SATA SSHD of 1T
 2 x SATA HDD of 4T

 The two NVMe devices are where I'm running Ubuntu 18.04, on top of an
old-school Linux RAID1 formatted with XFS; it's also where I run a lot of
QEMU virtual machines (including the OpenStack and Ceph controllers) and
LXD containers.

 My idea is to run the Ceph infrastructure pieces (mgrs, MDS, radosgw,
etc.) inside the VMs on top of the NVMe devices, and to run the Ceph OSDs
on the bare-metal Ubuntu so they control all the other storage devices
directly.
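
 Just to illustrate the split (the hostnames below are placeholders, not my
real inventory, and I'm assuming ceph-ansible's default group names), I'm
picturing an inventory roughly like this:

all:
  children:
    mons:
      hosts:
        ceph-mon-vm-1:        # QEMU VM on the NVMe RAID1
    mgrs:
      hosts:
        ceph-mon-vm-1:
    mdss:
      hosts:
        ceph-mds-vm-1:
    rgws:
      hosts:
        ceph-rgw-vm-1:
    osds:
      hosts:
        desktop:              # the bare-metal Ubuntu box with the SATA disks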

 The thing is, I only have a vague idea of what to do with those devices
in Ceph!

 I've already decided to use the new Ceph BlueStore, where LVM is also
involved. So, for each block device, there is one LVM physical volume (and
volume group) and just one LVM logical volume using the whole device.

 What I have in mind is something like this:

 - Use the SATA SSDs to speed up writes.
 - Use the *HDDs for long-term storage.
 - Be able to lose half of the devices and still keep my data safe.
 - Be able to lose all the SATA SSDs while keeping my data safe (so I
can't use them as a BlueStore DB for an OSD, only as a "WAL", is that
right?).

 So far, my osds.yml looks like:

---
lvm_volumes:
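  # "data" is the LV name and "data_vg" the VG name; I just named both
  # after each disk's /dev/disk/by-id alias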
  - data: ata-Samsung_SSD_860_EVO_250GB_S3YHNX0KB77803H
    data_vg: ata-Samsung_SSD_860_EVO_250GB_S3YHNX0KB77803H

  - data: ata-Samsung_SSD_860_EVO_250GB_S3YHNX0M117675W
    data_vg: ata-Samsung_SSD_860_EVO_250GB_S3YHNX0M117675W

  - data: ata-ST1000DX001-1NS162_Z4YCWM2F
    data_vg: ata-ST1000DX001-1NS162_Z4YCWM2F

  - data: ata-ST1000DX001-1NS162_Z4YE0APL
    data_vg: ata-ST1000DX001-1NS162_Z4YE0APL

  - data: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K0DFV6HE
    data_vg: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K0DFV6HE

  - data: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K7FUATRK
    data_vg: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K7FUATRK
---

 As you can see, I'm not using "wal" or "db", just "data", but this isn't
going to do what I have in mind... Right?!
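
 Just to check I understand the syntax: if I did want the WAL on the SSDs,
I guess each spinning-disk entry would gain "wal"/"wal_vg" keys, something
like this (the VG/LV names below are made up, not what's actually on my
disks):

lvm_volumes:
  - data: lv-hdd-0            # LV taking the whole HDD
    data_vg: vg-hdd-0
    wal: lv-wal-hdd-0         # small LV carved out of one of the SSDs
    wal_vg: vg-ssd-0

 Is that the right shape for it?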

 NOTE: I'm okay with moving the root file system to the 2 SATA SSDs (off
the NVMe) if you think it's better to use the NVMe devices for the Ceph
OSDs... But I'd really prefer to keep the NVMe for my daily Ubuntu usage
and all of its VMs.

 What about Linux bcache? I think it would be awesome to use bcache to
create 2 logical devices (2 x (SSD + SSHD + HDD)), where only the SSD will
be used by bcache (I'll ignore the SSHD), and then present the bcache block
devices to Ceph instead of the devices directly. I'm asking this because,
from the little I know about Ceph, with BlueStore there is no way to use
the SSDs as a write cache for the rest of the slower disks, is that right?
And how would Ceph detect bad blocks and failures if it sits on top of
bcache? I'm also thinking about putting VDO in between bcache and Ceph...
Lots of ideas!  lol
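
 If the bcache route made any sense, I imagine my osds.yml would then just
point at VGs/LVs created on top of the bcache devices instead of the raw
disks, something like this (again, all the names below are made up):

lvm_volumes:
  - data: lv-bcache-0
    data_vg: vg-bcache-0      # VG created on /dev/bcache0 (SSD caching the HDD)
  - data: lv-bcache-1
    data_vg: vg-bcache-1      # VG created on /dev/bcache1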

Cheers!
Thiago
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
