> Thomas Goirand <z...@debian.org>:
> 
> On 4/2/19 11:23 PM, Alfredo Deza wrote:
>> 
>> 
>> I would argue that an unusable ceph-volume would make Ceph unusable
>> for most users.

Agree. Virtually no-one will be able to use it.

> If you want to set things up by hand, you may reuse this script too (and
> adapt it to your needs, of course, as this tightly integrates with OCI):
> 
> https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/blob/debian/stein/bin/oci-make-osd
> 
> Comments on it, or even better, improvements, would be welcome.

I'm sorry, but this script is horrible. My suggested improvement would be 
rewriting it on top of ceph-volume; there is just too much wrong with it 
(see the example commands after the list below).
Just from skimming it:

* no support for NVMe disks
* what about servers with more than 26 disks?
* that's not how you should allocate IDs for OSDs; just let Ceph handle that
* adding a separate wal/db partition *on the same device* makes no sense
* no support for wal/db on separate disks
* no support for encrypted OSDs
* what happens if you have to move disks between servers?
* the part where you write a ceph.conf entry for each OSD and manually set the 
port is particularly horrible
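
For comparison, ceph-volume already covers most of this out of the box. A 
minimal sketch (device paths are just examples, adjust to your hardware):

  # Plain bluestore OSD on a whole device: ceph-volume creates the LVs,
  # asks the monitors for a fresh OSD id, and enables the systemd units.
  ceph-volume lvm create --bluestore --data /dev/sdb

  # wal/db on a separate fast device (e.g. an NVMe partition),
  # with the OSD encrypted via dmcrypt:
  ceph-volume lvm create --bluestore --data /dev/sdc \
      --block.db /dev/nvme0n1p1 --dmcrypt

  # After moving disks to another host, OSDs are rediscovered from the
  # LVM metadata and brought back up, no per-OSD ceph.conf entries needed:
  ceph-volume lvm activate --all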

You basically implemented a limited and broken version of ceph-volume yourself, 
for no apparent good reason.



Paul
