This was a bug in some versions of Ceph, which has since been fixed:

https://tracker.ceph.com/issues/49014
https://github.com/ceph/ceph/pull/39083

You'll want to upgrade Ceph to resolve this behavior; if that isn't possible,
you can filter on size (or another device property) instead of rotational.
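
If you switch to a size-based filter, the spec could look something like this
(the size bounds below are placeholders rather than values from your setup;
adjust them to your actual HDD and NVMe capacities, or use a model/vendor
filter instead):

---
service_type: osd
service_id: default_drive_group
placement:
  label: "osd"
data_devices:
  size: '1TB:'   # placeholder: devices 1 TB and larger (the HDDs)
db_devices:
  size: ':1TB'   # placeholder: devices smaller than 1 TB (the NVMe)

If your cephadm version supports it, running "ceph orch apply osd -i
deploy-osd.yaml --dry-run" first should preview what the orchestrator would
do before you apply the change.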

David

On Thu, Aug 19, 2021 at 9:12 AM Eric Fahnle <efah...@nubi2go.com> wrote:
>
> Hi everyone!
> I have a question. I searched for it in this list but didn't find an answer.
>
> I've got 4 OSD servers. Each server has 4 HDDs and 1 NVMe SSD. The deployment 
> was done with "ceph orch apply deploy-osd.yaml", where the file 
> "deploy-osd.yaml" contained the following:
> ---
> service_type: osd
> service_id: default_drive_group
> placement:
>   label: "osd"
> data_devices:
>   rotational: 1
> db_devices:
>   rotational: 0
>
> After the deployment, each HDD had an OSD, and the NVMe was shared by the 4 
> OSDs as their DB device.
>
> A few days ago, an HDD broke and was replaced. Ceph detected the new disk and 
> created a new OSD on it, but it didn't use the NVMe. The NVMe in that server 
> now holds the DBs of only 3 OSDs; the new OSD wasn't added to it. I couldn't 
> find out how to re-create the OSD with the exact configuration it had before. 
> The only "way" I found was to delete all 4 OSDs and create everything from 
> scratch (I didn't actually do it, as I hope there is a better way).
>
> Has anyone had this issue before? I'd be glad if someone pointed me in the 
> right direction.
>
> Currently running: Ceph 15.2.8 octopus (stable)
>
> Thank you in advance and best regards,
> Eric
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
