Hi Christian,
On 27/2/20 at 20:08, Christian Wahl wrote:
Hi everyone,
we currently have 6 OSDs with 8TB HDDs split across 3 hosts.
The main usage is KVM images.
To improve speed we planned on putting the block.db and WAL onto NVMe-SSDs.
The plan was to put 2x1TB in each host.
One option
Christian,
What is your failure domain? If your failure domain is set to osd (i.e. per
drive), two OSDs share a DB/WAL device, and that device dies, then portions of
your data could drop to read-only (or be lost outright).
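A quick way to see why the failure domain matters, sketched for the 6-OSD /
3-host layout described above (the assumption here is that each host's two
OSDs share one NVMe, per the "2x1TB per host" plan; names and layout are
illustrative, not taken from a real cluster):

```python
from itertools import combinations

# Hypothetical layout from the thread: 3 hosts, 2 OSDs each, with each
# host's two OSDs sharing a single NVMe DB/WAL device (an assumption).
osds = {0: "hostA", 1: "hostA", 2: "hostB", 3: "hostB", 4: "hostC", 5: "hostC"}

def shares_nvme(a, b):
    # In this sketch, same host implies same shared NVMe device.
    return osds[a] == osds[b]

def risky(placements):
    """Replica placements where one NVMe failure kills >= 2 replicas."""
    return [p for p in placements
            if any(shares_nvme(a, b) for a, b in combinations(p, 2))]

# Failure domain = osd: any 3 distinct OSDs may hold a PG's replicas.
all_osd = list(combinations(osds, 3))
# Failure domain = host: the 3 replicas must land on 3 different hosts.
per_host = [p for p in all_osd if len({osds[o] for o in p}) == 3]

print(len(risky(all_osd)), len(risky(per_host)))  # → 12 0
```

With failure domain osd, 12 of the 20 possible 3-replica placements would lose
two or more replicas to a single NVMe failure; with failure domain host, none
would, since replicas never share a host (and thus never share an NVMe here).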
Ceph is really set up to own the storage hardware directly. It doesn't