[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-30 Thread Konstantin Shalygin
> On 30 Nov 2021, at 13:40, mhnx wrote: > Is there any other solution to work around the issue for a safe upgrade? Is there any problem with switching the hybrid allocator to the bitmap allocator just for the upgrade? Do I need to re-create the OSDs, or just stop and start them with the bitmap allocator and …
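For readers following along, a minimal sketch of what such an allocator switch typically involves (bluestore_allocator / bluefs_allocator are standard BlueStore options; the exact procedure is not spelled out in this thread):

    # sketch only: point both allocators at bitmap in the central config db
    ceph config set osd bluestore_allocator bitmap
    ceph config set osd bluefs_allocator bitmap
    # then restart OSDs one at a time and verify they come back healthy
    systemctl restart ceph-osd@<id>
    ceph -s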

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-30 Thread mhnx
Hello again. It's hard to upgrade while having this problem because I have high I/O usage and 1 of my 30 OSDs is flapping almost every day. I'm afraid of an OSD failing during the upgrade. I need a temporary solution because I'm sure that while upgrading the system at least one of the OSDs will …
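A common precaution when restarting or upgrading OSDs under load (general Ceph practice, not something discussed in the truncated message) is to set cluster flags so a briefly-down OSD does not trigger rebalancing:

    ceph osd set noout
    ceph osd set norebalance
    # ... perform the upgrade / restart the OSDs ...
    ceph osd unset norebalance
    ceph osd unset noout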

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-11 Thread Konstantin Shalygin
Hi, just try to upgrade to the latest Nautilus. Many things with the allocator and collections were fixed in the last Nautilus releases. k > On 11 Nov 2021, at 13:15, mhnx wrote: > I have 10 nodes and I use CephFS, RBD and RGW clients; all of my clients are 14.2.16 Nautilus. My clients, MONs, OSDs are on …

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-11 Thread mhnx
I have 10 nodes and I use CephFS, RBD and RGW clients; all of my clients are 14.2.16 Nautilus. My clients, MONs and OSDs are on the same servers. I have constant usage: 50-300 MiB/s rd, 15-30k op/s rd; 100-300 MiB/s wr, 1-4 op/s wr. With the allocator issue it's highly possible to get slow ops …
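Slow ops like the ones mentioned are usually inspected with standard commands such as these (a generic sketch; osd.N is a placeholder):

    ceph health detail                      # shows which OSDs are reporting slow ops
    ceph daemon osd.N dump_historic_ops     # recent ops on that OSD, via the admin socket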

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-10 Thread mhnx
Hello Igor. Thanks for the answer. There are many changes for me to read and test, but I will plan an upgrade to Octopus when I'm available. Is there any problem upgrading from 14.2.16 to 15.2.15? Igor Fedotov wrote on Wed, 10 Nov 2021 at 17:50: > I would encourage you to …
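Before planning a Nautilus-to-Octopus jump it is common to first check what is actually running; a quick sketch using standard commands:

    ceph versions       # version breakdown of all daemons in the cluster
    ceph osd versions   # same view restricted to OSDs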

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-10 Thread Igor Fedotov
I would encourage you to upgrade to at least the latest Nautilus (and preferably to Octopus). A bunch of allocator bugs have been fixed since 14.2.16; I'm not even sure all of them landed in Nautilus since it's EOL. A couple of examples (both are present in the latest Nautilus): https://github.co …

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-10 Thread mhnx
Yes, I don't have a separate DB/WAL. These SSDs are only used by the RGW index. The command "--command bluefs-bdev-sizes" does not work while the OSD is up and running, so I need a new OSD failure to get useful output; I will check when I get one. I picked an OSD from my test environment to check the command …

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-09 Thread prosergey07
From my understanding you do not have a separate DB/WAL device per OSD. Since RocksDB uses BlueFS for OMAP storage, we can check the usage and free size for BlueFS on the problematic OSDs: ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-OSD_ID --command bluefs-bdev-sizes. Probably it can shed some …
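Since a later message in the thread notes the tool does not work while the OSD is up, an offline run might look roughly like this (osd.42 and its path are placeholders):

    systemctl stop ceph-osd@42
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-42 --command bluefs-bdev-sizes
    systemctl start ceph-osd@42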

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-08 Thread mhnx
I was trying to keep things clear and I was aware of the login issue. Sorry, you're right: the OSDs are not full. They need balancing, but I can't activate the balancer because of the issue. ceph osd df tree | grep 'CLASS\|ssd' ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE …
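The balancer referred to here is normally driven with the standard commands below (a generic sketch, not taken from the truncated output):

    ceph balancer status
    ceph balancer mode upmap
    ceph balancer on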

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-08 Thread prosergey07
Are those problematic OSDs getting almost full? I do not have an Ubuntu account to check their pastebin. Sent from a Galaxy device. Original message From: mhnx Date: 08.11.21 15:31 (GMT+02:00) To: Ceph Users Subject: [ceph-users] allocate_bluefs_freespace failed to allocate …

[ceph-users] Re: allocate_bluefs_freespace failed to allocate / ceph_abort_msg("bluefs enospc")

2020-12-17 Thread Stephan Austermühle
Hi Igor, thanks for your reply. "To work around it you might want to switch both bluestore and bluefs allocators back to bitmap for now." Indeed, setting both allocators to bitmap brought the OSD back online and the cluster recovered. You rescued my cluster. ;-) Cheers Stephan

[ceph-users] Re: allocate_bluefs_freespace failed to allocate / ceph_abort_msg("bluefs enospc")

2020-12-16 Thread Igor Fedotov
Hi Stephan, it looks like you've faced the following bug: https://tracker.ceph.com/issues/47883 To work around it you might want to switch both the bluestore and bluefs allocators back to bitmap for now. The fixes for Octopus/Nautilus are on their way: https://github.com/ceph/ceph/pull/38474
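For completeness, the ceph.conf form of that workaround would look roughly like this (a sketch assuming the standard option names; restart the affected OSDs afterwards):

    [osd]
        bluestore_allocator = bitmap
        bluefs_allocator = bitmap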