[ceph-users] Re: Handling node failures.

2021-11-12 Thread prosergey07
> - IIUC, if a root SSD fails, there is pretty much no way to rebuild a new node with the same OSDs and avoid data shuffling - is this correct?

You can still rebuild the node and add the old OSDs back, avoiding the shuffling. You might need to set the noout flag while you work on the configuration of the new node.
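For illustration, a minimal sketch of the flag dance (assuming the OSD data devices survived the root disk failure; the exact activation step depends on how the OSDs were deployed):

    # keep CRUSH from marking the node's OSDs out while it is down
    ceph osd set noout

    # rebuild the root disk, reinstall Ceph, then re-activate the old OSDs,
    # e.g. by letting ceph-volume scan the intact data devices:
    ceph-volume lvm activate --all

    # once the OSDs are back up and in, resume normal out-marking
    ceph osd unset noout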

[ceph-users] Re: 2 zones for a single RGW cluster

2021-11-10 Thread prosergey07
Yes. You just need to create a separate zone with radosgw-admin and the corresponding pool names for that RGW zone. Then, in ceph.conf on the radosgw host, you need to set the rgw zone it should operate in.
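Roughly along these lines (the zone, endpoint, and instance names here are made up, not from the thread):

    # create the second zone in the existing zonegroup
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=zone2 \
        --endpoints=http://rgw2.example.com:8080
    radosgw-admin period update --commit

    # ceph.conf on the radosgw host that should serve zone2
    [client.rgw.rgw2]
        rgw_zone = zone2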

[ceph-users] Re: Question if WAL/block.db partition will benefit us

2021-11-09 Thread prosergey07
Not sure how much it would help performance to back the OSDs with SSD DB and WAL devices. Even if you go this route with one SSD per 10 HDDs, you might want to set the failure domain to host in the CRUSH rules, in case the SSD goes out of service. But in practice the SSD will not help too much to
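For reference, a minimal sketch of a host-level failure domain (the rule and pool names are illustrative):

    # replicated rule that spreads copies across hosts, not OSDs
    ceph osd crush rule create-replicated by-host default host

    # apply it to an existing pool
    ceph osd pool set mypool crush_rule by-host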

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-09 Thread prosergey07
light as to why the allocator did not work and you had to compact.
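For context, RocksDB compaction on an OSD can be triggered online or offline; a sketch, with the OSD id (12) purely hypothetical:

    # online, via the tell interface
    ceph tell osd.12 compact

    # offline, with the OSD daemon stopped
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact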

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-08 Thread prosergey07
Are those problematic OSDs getting almost full? I do not have an Ubuntu account to check their pastebin.
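A quick way to check, for what it's worth:

    # per-OSD utilization, including omap and metadata usage
    ceph osd df tree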

[ceph-users] Re: large bucket index in multisite environment (how to deal with large omap objects warning)?

2021-11-08 Thread prosergey07
When resharding is performed, I believe it is considered a bucket operation and goes through an update of the bucket stats: a new bucket shard is created, and that may increase the number of objects reported in the bucket stats. If it broke during resharding, you could check the current bucket
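E.g. something along these lines (the bucket name is a placeholder):

    # current shard layout and object counts as the bucket stats see them
    radosgw-admin bucket stats --bucket=mybucket

    # reshard status, useful if a reshard was interrupted
    radosgw-admin reshard status --bucket=mybucket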

[ceph-users] Re: large bucket index in multisite environment (how to deal with large omap objects warning)?

2021-11-08 Thread prosergey07
Theoretically you should be able to reshard buckets which are not in sync. That would produce new .dir.new_bucket_index objects inside your bucket index pool, which would put the omap key/values into the new shards (.dir.new_bucket_index). The objects themselves would be left intact, as the marker id is not changed.
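A sketch of what that looks like in practice (the pool and bucket names are placeholders; the index pool name follows the default-zone naming):

    # manually reshard to, say, 101 shards
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=101

    # the new index objects show up in the index pool, named after the
    # new bucket instance id
    rados -p default.rgw.buckets.index ls | grep '^\.dir\.'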