uay.io/ceph/ceph:v17 -e NODE_NAME=node1-ceph -e
CEPH_USE_RANDOM_NONCE=1 -v
/var/log/ceph/4ce3a92a-8ddd-11ee-9b23-6341187f70c1:/var/log/ceph:z -v
/tmp/ceph-tmp6yz3vt5s:/etc/ceph/ceph.client.admin.keyring:z -v
/tmp/ceph-tmpfhd01qwu:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch
apply ceph-expo
> If you run `cephadm ls` and see things listed, you can grab the fsid from
> the output of that command and run `cephadm rm-cluster --force --fsid <fsid>`
> to clean up the env before bootstrapping again.
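
In practice that cleanup is just two commands; a minimal sketch (the <fsid>
placeholder is whatever id `cephadm ls` reports for the leftover cluster):

    # list any daemons left behind by the previous bootstrap attempt;
    # each entry includes the cluster fsid
    cephadm ls

    # wipe that cluster so the node is clean before bootstrapping again
    cephadm rm-cluster --force --fsid <fsid>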
>
> On Wed, Nov 29, 2023 at 11:32 AM Francisco Arencibia Quesada <
> arencibia.franci..
> I'd like to recommend setting up OSDs through
> drive group specs (
> https://docs.ceph.com/en/latest/cephadm/services/osd/#advanced-osd-service-specifications)
> over using `ceph orch daemon add osd...` although that's a tangent to what
> you're trying to do now.
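
A minimal spec along the lines of that doc page, assuming you simply want
every available device on every host consumed as an OSD (the service_id,
filename, and host_pattern below are placeholders, adjust them for your
hosts):

    # write a drive group spec and hand it to the orchestrator
    cat > osd_spec.yml <<'EOF'
    service_type: osd
    service_id: all_available_devices
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
    EOF

    ceph orch apply -i osd_spec.yml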
>
> On Wed, N
[ASCII diagram from the original message showing a Monitor Daemon, a Manager Daemon, and a standby Manager Daemon]
--
Regards
*Francisco Arencibia Quesada.*
*DevOps Engineer*