Thank you very much Anthony and Eugen. I followed your instructions and it
works fine now: the device classes are hdd and ssd, and we went from 48 to
60 OSDs.
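
A quick way to double-check the result (the commands below are generic, not
specific to our cluster):

  # list the device classes now present in the CRUSH map
  ceph osd crush class ls
  # confirm the new OSD count
  ceph osd stat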

Thanks again

Michel

On Mon, 9 Jan 2023, 17:00 Anthony D'Atri, <anthony.da...@gmail.com> wrote:

> For anyone finding this thread down the road: I wrote to the poster
> yesterday with the same observation.  Browsing the ceph-ansible docs and
> code, it looks like the way to get the OSDs deployed as desired is to
> pre-create LVs and enumerate them as explicit data devices. Their
> configuration also enables primary affinity, so I suspect they’re trying
> the [brittle] trick of mixing HDD and SSD OSDs in the same pool, with the
> SSDs forced primary for reads.
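>
> A rough sketch of what that could look like, using the lvm_volumes
> scenario from the ceph-ansible docs (the device path, VG/LV names, and
> OSD ids below are illustrative only, not taken from the poster's setup):
>
>   # pre-create one LV per SSD so ceph-ansible treats it as a data device
>   vgcreate ceph-ssd-0 /dev/sde
>   lvcreate -l 100%FREE -n osd-data-0 ceph-ssd-0
>
>   # host_vars/<host>.yml -- enumerate the LVs explicitly instead of
>   # letting the batch logic decide
>   lvm_volumes:
>     - data: osd-data-0
>       data_vg: ceph-ssd-0
>
>   # and, for the SSD-primary trick, drop primary affinity on the HDD OSDs
>   ceph osd primary-affinity osd.3 0
>   ceph osd primary-affinity osd.12 1.0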
>
> >
> > Hi,
> >
> > it appears that the SSDs were used as db devices (/dev/sd[efgh]).
> According to [1] (I don't use ansible) the simple case is that:
> >
> >> [...] most of the decisions on how devices are configured to provision
> an OSD are made by the Ceph tooling (ceph-volume lvm batch in this case).
> >
> > And I assume that this is exactly what happened: ceph-volume batch
> > deployed the SSDs as RocksDB db devices. I'm not sure how to prevent
> > ansible from doing that, but there are probably several threads out
> > there that explain it.
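> >
> > For reference, ceph-volume can report what the batch logic would do
> > before anything is created (device names below are placeholders):
> >
> >   # dry-run: show how these devices would be carved up
> >   ceph-volume lvm batch --report /dev/sda /dev/sdb /dev/sde
> >
> >   # or deploy an SSD as a standalone bluestore OSD rather than a db device
> >   ceph-volume lvm create --bluestore --data /dev/sde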
> >
> > Regards,
> > Eugen
> >
> > [1]
> https://docs.ceph.com/projects/ceph-ansible/en/latest/osds/scenarios.html
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>