> - IIUC, if a root SSD fails, there is pretty much no way to rebuild a
> new node with the same OSDs and avoid data shuffling - is this correct?

You can still rebuild the node and add the old OSDs back without any data
shuffling. You might need to enable the NOOUT flag while you work on the
configuration of the new node.
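Something like the following should cover it (host rebuild steps elided;
this assumes the OSDs were deployed with ceph-volume):

    # keep CRUSH from marking the down OSDs out while the host is rebuilt
    ceph osd set noout

    # ... reinstall the OS on the new root SSD, restore ceph.conf and keys ...

    # bring the old OSD data devices back up on the rebuilt host
    ceph-volume lvm activate --all

    # once all OSDs are back up and in, clear the flag
    ceph osd unset noout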
Yes. You just need to create a separate zone with radosgw-admin and the
corresponding pool names for that rgw zone. Then, on the radosgw host, you
need to put the rgw zone it should serve into ceph.conf.
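A minimal sketch, assuming a zone called "secondary" in the default
zonegroup and a gateway instance named client.rgw.gw1 (both names
illustrative):

    # create the zone; its pools take the zone name as a prefix
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=secondary
    radosgw-admin period update --commit

    # then in ceph.conf on the radosgw host:
    [client.rgw.gw1]
    rgw_zone = secondary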
Original message From: J-P Methot
Not sure how much it would help performance with OSDs backed by SSD db and
wal devices. Even if you go this route with one SSD per 10 HDDs, you might
want to set the failure domain per host in the CRUSH rules, in case the SSD
goes out of service. But in practice the SSD will not help too much to [...]
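For reference, a host-level failure domain is set when the replicated CRUSH
rule is created; a minimal sketch (rule and pool names illustrative):

    # replicate across hosts, so one dead SSD (and its 10 OSDs)
    # can never hold more than one copy of a PG
    ceph osd crush rule create-replicated replicated_by_host default host
    ceph osd pool set mypool crush_rule replicated_by_host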
[...] light as to why the allocator did not work and you had to compact.
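(For context, a manual compaction of a running OSD's RocksDB can be
triggered like this; the OSD id is illustrative:)

    ceph tell osd.12 compact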
Original message From: mhnx
Date: 09.11.21 03:05 (GMT+02:00) To: prosergey07
Cc: Ceph Users Subject: Re: [ceph-users]
allocate_bluefs_freespace failed to allocate I
Are those problematic OSDs getting almost full? I do not have an Ubuntu
account to check their pastebin.
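Fullness per OSD is easy to check from the cluster side, e.g.:

    # watch the %USE column against the nearfull/full ratios
    ceph osd df tree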
Original message From: mhnx
Date: 08.11.21 15:31 (GMT+02:00) To: Ceph Users Subject:
[ceph-users] allocate_bluefs_freespace failed to
When resharding is performed, I believe it is considered a bucket operation
and goes through updating the bucket stats: a new bucket shard is created,
and it may increase the number of objects reported in the bucket stats. If
it got broken during resharding, you could check the current bucket [...]
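A minimal sketch of that check (bucket name illustrative):

    # compare num_shards and the num_objects usage counters
    # against what you expect for the bucket
    radosgw-admin bucket stats --bucket=mybucket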
Theoretically you should be able to reshard buckets which are not in sync.
That would produce new .dir.new_bucket_index objects inside your bucket
index pool, which would put the omap key/values into the new shards
(.dir.new_bucket_index). The objects themselves would be left intact, since
the marker id does not change.
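A minimal sketch of a manual reshard and a look at the resulting index
objects, assuming the default index pool name default.rgw.buckets.index
(bucket name and shard count illustrative):

    # reshard the bucket index into 64 shards
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=64

    # the new .dir.<marker>.<shard> objects appear in the index pool
    rados -p default.rgw.buckets.index ls | grep '^\.dir\.'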