> On 30 Nov 2021, at 13:40, mhnx wrote:
>
> Is there any other solution to cover the issue for safe upgrade?
> Is there any problem with switching the hybrid allocator to the bitmap
> allocator just for the upgrade?
> Do I need to re-create the OSDs? Or just stop and start with the bitmap
> allocator and afte
Hello again.
It's hard to upgrade while having this problem because I have high I/O
usage and 1/30 OSDs are flapping almost every day. I'm afraid of
having an OSD fail during the upgrade.
I need a temporary solution because I'm sure that while upgrading the
system at least one of the OSDs will f
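Not advice taken from the thread, but one way to keep an eye on the flapping OSDs before attempting the upgrade is to watch the cluster's OSD state and the crash reports collected by the crash module (available from Nautilus onwards); a minimal sketch:

```shell
# Show the up/in OSD counts at a glance; a flapping OSD shows up
# as a changing "up" count over repeated runs.
ceph osd stat

# List recent daemon crashes recorded by the crash module
# (enabled by default on Nautilus and later).
ceph crash ls
```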
Hi,
Just try to upgrade to the latest Nautilus.
Many allocator and collection issues were fixed in the latest Nautilus releases.
k
> On 11 Nov 2021, at 13:15, mhnx wrote:
>
> I have 10 nodes and I use; CephFS, RBD and RGW clients and all of my
> clients are 14.2.16 Nautilus.
> My clients, MONs, OSDs are on
I have 10 nodes and I use CephFS, RBD and RGW clients, and all of my
clients are 14.2.16 Nautilus.
My clients, MONs and OSDs are on the same servers.
I have constant usage: 50-300MiB/s rd, 15-30k op/s rd --- 100-300MiB/s wr,
1-4 op/s wr.
With the allocator issue it's highly possible to get slow ops an
Hello Igor. Thanks for the answer.
There are many changes for me to read and test, but I will plan an
upgrade to Octopus when I'm available.
Is there any problem upgrading from 14.2.16 ---> 15.2.15?
Igor Fedotov wrote on Wed, 10 Nov 2021 at 17:50:
> I would encourage you to upgr
I would encourage you to upgrade to at least the latest Nautilus (and
preferably to Octopus).
A bunch of allocator bugs have been fixed since 14.2.16. I'm not even
sure all of them landed in Nautilus since it's EOL.
A couple examples are (both are present in the latest Nautilus):
https://github.co
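When restarting OSDs during an upgrade, the usual way to avoid needless data movement while daemons are briefly down is to set the noout flag first. This is standard Ceph operational practice, sketched here rather than quoted from the thread:

```shell
# Prevent OSDs from being marked out (and recovery/rebalancing from
# starting) while daemons are restarted during the upgrade.
ceph osd set noout
ceph osd set norebalance

# ... upgrade packages and restart daemons, one node at a time ...

# Clear the flags once all OSDs are back up and in.
ceph osd unset norebalance
ceph osd unset noout
```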
Yes, I don't have a separate DB/WAL. These SSDs are only used by the RGW index.
The command "--command bluefs-bdev-sizes" does not work while the OSD is up
and running.
I need a new OSD failure to get useful output. I will check when I get one.
I picked an OSD from my test environment to check the command
From my understanding you do not have a separate DB/WAL device per OSD. Since
RocksDB uses bluefs for OMAP storage, we can check the usage and free size for
bluefs on the problematic OSDs:

ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-OSD_ID --command bluefs-bdev-sizes

Probably it can shed some
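As noted elsewhere in the thread, bluefs-bdev-sizes fails while the OSD is running, so the daemon has to be stopped first. A minimal sketch, assuming a systemd deployment and using osd.2 as a placeholder ID:

```shell
# Stop the OSD daemon first: ceph-bluestore-tool needs exclusive
# access to the BlueStore device (placeholder OSD id 2).
systemctl stop ceph-osd@2

# Report the sizes and free space of the devices backing bluefs.
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-2 \
    --command bluefs-bdev-sizes

# Bring the OSD back once done.
systemctl start ceph-osd@2
```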
I was trying to keep things clear and I was aware of the login issue.
Sorry. You're right.
The OSDs are not full. They need balancing, but I can't activate the
balancer because of the issue.
ceph osd df tree | grep 'CLASS\|ssd'
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META
AVAIL %USE
Are those problematic OSDs getting almost full? I do not have an Ubuntu
account to check their pastebin.
Sent from a Galaxy device
-------- Original message --------
From: mhnx
Date: 08.11.21 15:31 (GMT+02:00)
To: Ceph Users
Subject: [ceph-users] allocate_bluefs_freespace failed to alloc
Hi Igor,
thanks for your reply.
To work around it you might want to switch both bluestore and bluefs allocators
back to bitmap for now.
Indeed, setting both allocators to bitmap brought the OSD back online and the
cluster recovered.
You rescued my cluster. ;-)
Cheers
Stephan
Hi Stephan,
it looks like you've faced the following bug:
https://tracker.ceph.com/issues/47883
To work around it you might want to switch both bluestore and bluefs
allocators back to bitmap for now.
The fixes for Octopus/Nautilus are on their way:
https://github.com/ceph/ceph/pull/38474
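The allocator switch described above can be applied cluster-wide through the config database (a sketch of the usual approach, not commands quoted from the thread); OSDs must be restarted for the change to take effect:

```shell
# Switch both the main BlueStore allocator and the bluefs allocator
# back to bitmap.
ceph config set osd bluestore_allocator bitmap
ceph config set osd bluefs_allocator bitmap

# Restart OSDs one at a time so the new allocator is picked up,
# e.g. for osd.2 (placeholder id) on a systemd deployment:
systemctl restart ceph-osd@2
```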