[ceph-users] Re: 3 node CEPH PVE hyper-converged cluster serious fragmentation and performance loss in matter of days.

2022-03-18 Thread Igor Fedotov
On 3/10/2022 6:10 PM, Sasa Glumac wrote:
> In this respect could you please try to switch bluestore and bluefs
> allocators to bitmap and run some smoke benchmarking again.
Can I change this on a live server (is there a possibility of losing data, etc.)? Can you please share the correct procedure.
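(For reference, not part of the thread: a minimal sketch of switching both allocators to bitmap through the config database. The bluestore_allocator and bluefs_allocator options are read at OSD startup, so each OSD has to be restarted afterwards; "osd.0" below is a placeholder, and the systemctl unit name assumes a non-containerized deployment such as PVE.)

    ceph config set osd bluestore_allocator bitmap
    ceph config set osd bluefs_allocator bitmap
    # restart OSDs one at a time, waiting for HEALTH_OK in between
    systemctl restart ceph-osd@0
    ceph -s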

[ceph-users] Re: 3 node CEPH PVE hyper-converged cluster serious fragmentation and performance loss in matter of days.

2022-03-10 Thread Sasa Glumac
> First of all I'd like to clarify what exact command you are using to
> assess the fragmentation. There are two options: "bluestore allocator
> score" and "bluestore allocator fragmentation"
I am using this one: "ceph daemon osd.$i bluestore allocator score block"
> Both are not very accurate
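(For reference: the command above is typically run in a loop over the OSD ids hosted on the local node, since the admin socket only answers for daemons running on the same host. A sketch, assuming osd.0 through osd.2 are local:)

    for i in 0 1 2; do
        echo "osd.$i:"
        ceph daemon osd.$i bluestore allocator score block
    done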

[ceph-users] Re: 3 node CEPH PVE hyper-converged cluster serious fragmentation and performance loss in matter of days.

2022-03-10 Thread Igor Fedotov
Hi Sasa, just a few thoughts/questions on your issue in an attempt to understand what's happening. First of all I'd like to clarify what exact command you are using to assess the fragmentation. There are two options: "bluestore allocator score" and "bluestore allocator fragmentation" Both
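(For reference: the two admin-socket commands being compared, run here against a placeholder osd.0 and the "block" allocator; "score" reports a single 0..1 fragmentation rating, while "fragmentation" reports an alternative metric over the same allocator state.)

    ceph daemon osd.0 bluestore allocator score block
    ceph daemon osd.0 bluestore allocator fragmentation block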

[ceph-users] Re: 3 node CEPH PVE hyper-converged cluster serious fragmentation and performance loss in matter of days.

2022-03-08 Thread Sasa Glumac
Rados bench before deleting OSDs and recreating them + syncing, with fragmentation 0.89
> T1 - wr,4M
> Total time run       60.0405
> Total writes made    9997
> Write size           4194304
> Object size          4194304
> Bandwidth (MB/sec)   666.017
> Stddev Bandwidth     24.1108
>
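(For reference: a rados bench invocation matching the 60-second, 4M-object write test above would look roughly like this; "testpool" is a placeholder pool name, and --no-cleanup keeps the written objects so a read test can follow.)

    rados bench -p testpool 60 write -b 4M --no-cleanup
    rados -p testpool cleanup   # remove the benchmark objects when done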