I think that the backported fix for this issue made it into ceph v16.2.11.
https://ceph.io/en/news/blog/2023/v16-2-11-pacific-released/
"ceph-volume: Pacific backports (pr#47413, Guillaume Abrioux, Zack Cerza,
Arthur Outhenin-Chalandre)"
To: "Calhoun, Patrick"
Cc: "Arthur Outhenin-Chalandre", "ceph-users"
Sent: Thursday, 11 August 2022 10:14:17
Subject: [ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD
Hi Patrick,

I am also facing this bug; I first hit it when deploying a new cluster around the time of the 16.2.7 release. The bug is related to the way ceph calculates the DB [...]
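As a back-of-the-envelope check, based on the hardware described later in the thread (24 HDDs and 2 × 1.44 TB SSDs per node), this sketch shows the per-OSD DB size one would naively expect when the SSD space is split evenly across the HDD-backed OSDs (the even-split assumption is mine, not from the thread):

```python
# Naive even-split DB sizing: each SSD hosts the DB volumes
# for half of the node's 24 HDD-backed OSDs.
hdd_count = 24
ssd_count = 2
ssd_size_bytes = 1.44 * 1000**4          # 1.44 TB per SSD (decimal TB)

db_slots_per_ssd = hdd_count // ssd_count            # 12 DB volumes per SSD
db_size_gib = ssd_size_bytes / 1024**3 / db_slots_per_ssd

print(f"{db_slots_per_ssd} DB slots per SSD, ~{db_size_gib:.0f} GiB each")
# → 12 DB slots per SSD, ~112 GiB each
```

The bug discussed in the thread meant ceph-volume allocated noticeably less than this even split would suggest.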
[...] for sizing WAL/DB volumes, considering rocksdb levels and compaction.
-Patrick
From: Arthur Outhenin-Chalandre
Sent: Friday, July 29, 2022 2:11 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD
Hi Patrick,
On 7/28/22 16:22, Calhoun, Patrick wrote:
> In a new OSD node with 24 hdd (16 TB each) and 2 ssd (1.44 TB each), I'd like
> to have "ceph orch" allocate WAL and DB on the ssd devices.
>
> I use the following service spec:
>
> spec:
>   data_devices:
>     rotational: 1
>     size:
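For reference, a complete OSD service spec along the lines Patrick describes might look like the sketch below. The field names (`service_type`, `data_devices`, `db_devices`, `rotational`) are from the cephadm drive-group format; the `service_id` and `placement` values are illustrative, not taken from the thread:

```yaml
service_type: osd
service_id: osd_hdd_with_ssd_db   # illustrative name
placement:
  host_pattern: '*'               # illustrative placement
spec:
  data_devices:
    rotational: 1                 # HDDs hold the data
  db_devices:
    rotational: 0                 # SSDs hold the DB volumes
```

When no separate `wal_devices` are specified, the WAL is placed together with the DB on the `db_devices`, which is the setup being sized in this thread.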