That message in the `ceph orch device ls` output is just the reason the device
is unavailable for an OSD. In this case it reports insufficient space because
you've already put an OSD on the device, so it's really just telling you that
you can't place another one. You can expect to see something like that for any
device that is already in use.
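As an illustration (the exact output varies per cluster), the device list and its rejection reasons can be inspected with:

```shell
# List the devices cephadm knows about on each host; rejected devices
# show the reason they cannot take a new OSD (filesystem present,
# LVM detected, insufficient space, etc.)
cephadm shell -- ceph orch device ls
```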
Thanks again guys,
The cluster is healthy now, is this normal? All looks good except for this
output:
Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
root@node1-ceph:~# cephadm shell -- ceph status
Inferring fsid 209a7bf0-8f6d-11ee-8828-23977d76b74f
Inferring config
To run a `ceph orch ...` command (or really any command against the cluster)
you should first open a shell with `cephadm shell`. That will put you in a
bash shell inside a container that has the ceph packages matching the ceph
version in your cluster. If you just want to run a single command rather than
an interactive shell, you can use `cephadm shell -- <command>`.
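Concretely, the two usages described above look like this (`ceph status` is just an example command):

```shell
# Interactive: opens a bash shell inside a container whose ceph
# packages match the cluster's ceph version
cephadm shell

# One-shot: run a single cluster command without staying in the shell
cephadm shell -- ceph status
```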
Thanks so much Adam, that worked great. However, I cannot add any storage
with:
sudo cephadm ceph orch daemon add osd node2-ceph:/dev/nvme1n1
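Following the earlier advice in this thread, the likely fix is to run the orchestrator command through `cephadm shell` rather than as a bare `cephadm ceph orch` invocation (which is not a valid cephadm subcommand); a sketch:

```shell
# Run the orch command inside the cephadm shell container; the
# host:device argument is taken from the question above
sudo cephadm shell -- ceph orch daemon add osd node2-ceph:/dev/nvme1n1
```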
root@node1-ceph:~# ceph status
  cluster:
    id:     9d8f1112-8ef9-11ee-838e-a74e679f7866
    health: HEALTH_WARN
            Failed to apply 1
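To see the full text behind a truncated HEALTH_WARN summary like this, the standard follow-up commands are:

```shell
# Expand the health summary into its full warning messages
cephadm shell -- ceph health detail

# List orchestrator services to see which one failed to apply
cephadm shell -- ceph orch ls
```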
On 29/11/2023 at 11:44:57 -0500, Adam King wrote:
Hi,
I think I remember a bug that happened when there was a small mismatch
between the cephadm version being used for bootstrapping and the container.
In this case, the cephadm binary used for bootstrap knows about the
ceph-exporter service and the container image being used does not.
On 08.03.23 13:22, wodel youchi wrote:
I am trying to deploy Ceph Quincy using ceph-ansible on Rocky 9. I am having
some problems and I don't know where to look for the cause.
The README.rst of the ceph-ansible project on
https://github.com/ceph/ceph-ansible encourages you to move to cephadm
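For reference, the cephadm route that the README recommends starts with a single bootstrap command on the first host; a minimal sketch, assuming cephadm is already installed and with `<mon-ip>` as a placeholder for that host's IP:

```shell
# Bootstrap a new cluster: creates the first monitor and manager
# on this host and prints dashboard credentials
sudo cephadm bootstrap --mon-ip <mon-ip>
```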