[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-27 Thread Robert Sander
On 27.03.23 16:04, Pat Vaughan wrote: "we looked at the number of PGs for that pool, and found that there was only 1 for the rgw.data and rgw.log pools, and 'osd pool autoscale-status' doesn't return anything, so it looks like that hasn't been working." If you are in this situation, have a look
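
For anyone hitting the same symptom, the autoscaler state and per-pool PG counts can be inspected directly. A minimal sketch, assuming a reasonably recent Ceph release; the pool name used below is the data pool named later in this thread:

    # Per-pool PG counts and the autoscaler's targets; no output at all
    # usually means the pg_autoscaler module is not reporting.
    ceph osd pool autoscale-status

    # Check whether the pg_autoscaler mgr module is enabled.
    ceph mgr module ls | grep pg_autoscaler

    # Current pg_num of the affected pool.
    ceph osd pool get charlotte.rgw.buckets.data pg_num

    # Re-enable autoscaling for the pool (or raise pg_num manually instead).
    ceph osd pool set charlotte.rgw.buckets.data pg_autoscale_mode on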

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-27 Thread Robert Sander
On 27.03.23 16:34, Pat Vaughan wrote: "Yes, all the OSDs are using the SSD device class." Do you have multiple CRUSH rules by chance? Are all pools using the same CRUSH rule? Regards -- Robert Sander, Heinlein Consulting GmbH, Schwedter Str. 8/9b, 10119 Berlin, https://www.heinlein-support.de
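
The rule-to-pool mapping that question is after can be listed from the CLI. A short sketch, assuming current ceph command syntax; the pool name is the data pool from this thread:

    # List all CRUSH rules defined in the cluster.
    ceph osd crush rule ls

    # Show every pool together with its crush_rule field.
    ceph osd pool ls detail

    # Or query a single pool.
    ceph osd pool get charlotte.rgw.buckets.data crush_rule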

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-27 Thread Pat Vaughan
Looking at the pools, there are 2 CRUSH rules. Only one pool has a meaningful amount of data, the charlotte.rgw.buckets.data pool. This is the CRUSH rule for that pool:

    {
        "rule_id": 1,
        "rule_name": "charlotte.rgw.buckets.data",
        "type": 3,
        "steps": [
            {
                "op": "se
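
For context, output like that typically comes from dumping the named rule; a sketch, assuming the rule name matches the pool as shown above (in such a dump, "type": 3 generally marks an erasure-coded rule):

    # Dump the full CRUSH rule as JSON.
    ceph osd crush rule dump charlotte.rgw.buckets.data

    # Show the shadow hierarchy per device class, i.e. which OSDs a
    # class-restricted rule can actually select.
    ceph osd crush tree --show-shadow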

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-28 Thread Robert Sander
On 27.03.23 23:13, Pat Vaughan wrote: "Looking at the pools, there are 2 CRUSH rules. Only one pool has a meaningful amount of data, the charlotte.rgw.buckets.data pool. This is the CRUSH rule for that pool." So that pool uses the device class ssd explicitly where the other pools do not care
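
If the intent is to make the remaining pools target the ssd device class as well, one common approach is a dedicated replicated rule restricted to that class. A sketch only; the rule name replicated_ssd is made up here and <pool> is a placeholder:

    # Create a replicated rule that only selects OSDs with device class ssd,
    # using the default root and host as the failure domain.
    ceph osd crush rule create-replicated replicated_ssd default host ssd

    # Assign the rule to an existing pool (repeat for each pool).
    ceph osd pool set <pool> crush_rule replicated_ssd

Switching a pool's CRUSH rule triggers data movement, so it is usually done one pool at a time.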

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-28 Thread Pat Vaughan
Yes, this is an EC pool, and it was created automatically via the dashboard. Will this help to correct my current situation? Currently, 3 of the 12 OSDs are about 90% full. One of them just crashed and will not come back up, failing with "bluefs _allocate unable to allocate 0x8 on bdev
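
With three OSDs near 90% and one failing on a bluefs allocation error, the usual first steps are to check per-OSD utilization and give recovery some headroom. A sketch, assuming default thresholds; the ratio values are illustrative and should be reverted after rebalancing:

    # Per-OSD utilization, weights and PG counts.
    ceph osd df tree

    # Temporarily raise the thresholds so backfill can drain the full OSDs.
    ceph osd set-nearfull-ratio 0.90
    ceph osd set-backfillfull-ratio 0.92
    ceph osd set-full-ratio 0.97

    # Let the balancer even out the distribution once PGs are healthy.
    ceph balancer mode upmap
    ceph balancer on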