I'm deploying a test cluster and have successfully deployed my first
monitor (hurray!).

Now I'm trying to add the first OSD host, following the instructions at:
https://docs.ceph.com/en/latest/install/manual-deployment/#bluestore

ceph-volume lvm zap --destroy /dev/sdb
ceph-volume lvm create --data /dev/sdb --dmcrypt

systemctl enable ceph-osd@0
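
Since the cluster has a non-default name (euch01), I also passed --cluster to the create command, as shown in the run below. As an alternative I may try passing the name through the environment instead; this is only a sketch based on my assumption that ceph-volume picks up CEPH_ARGS and then resolves /etc/ceph/euch01.conf, neither of which I'm sure about:

export CEPH_ARGS="--cluster euch01"   # assumption on my part; ceph-volume may ignore this
ceph-volume lvm create --data /dev/sdb --dmcrypt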

# ceph-volume lvm create --data /dev/sdb --dmcrypt --cluster euch01
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph-authtool --gen-print-key
-->  RuntimeError: No valid ceph configuration file was loaded.
[root@osd1 ~]# ceph-authtool --gen-print-key
AQCADBxhqFIDNhAAQlwoW1l983923Ms/EJuSiA==
[root@osd1 ~]# ceph-authtool --gen-print-key --cluster euch01
AQCcDBxh1zzbGBAAW8tVp0aX668zpGUobhQWBg==


This is to say that the zap --destroy worked.

The lvm create, however, raised the error shown above. Running ceph-authtool on its own worked, so I do have a valid configuration file on my OSD node; ceph-authtool worked both with and without the cluster name specified.
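
In case it helps, this is the check I plan to run next. I'm assuming here that the tools resolve the configuration as /etc/ceph/<cluster>.conf, so --cluster euch01 would make ceph-volume look for /etc/ceph/euch01.conf while the default would be /etc/ceph/ceph.conf; I'm not certain that assumption actually holds for ceph-volume:

ls -l /etc/ceph/                        # expecting euch01.conf (and euch01 keyrings) if my assumption is right
ceph-conf --cluster euch01 --lookup fsid   # should also fail if the conf file name is the problem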

F.
