[ceph-users] Re: *****SPAM***** 3 node CEPH PVE hyper-converged cluster serious fragmentation and performance loss in matter of days.

2022-03-08 Thread Sasa Glumac
> Where is the rados bench before and after your problem?
Rados bench before deleting the OSDs and recreating them + syncing, with fragmentation at 0.89:
T1 = wr,4M; T2 = ro,seq,4M; T3 = ro,rand,4M
Total time run: 60.0405 / 250.486 / 600.463
Total writes made
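The three runs quoted above (4M write, sequential read, random read) match the standard rados bench invocations. A minimal sketch, assuming a pool named `testpool` (the actual pool name is not given in the thread) and a 60-second run length:

```shell
# 4M-block write benchmark; --no-cleanup keeps the objects
# so the read benchmarks below have something to read
rados bench -p testpool 60 write -b 4M --no-cleanup

# sequential read benchmark against the objects written above
rados bench -p testpool 60 seq

# random read benchmark against the same objects
rados bench -p testpool 60 rand

# remove the benchmark objects when done
rados -p testpool cleanup
```

Comparing the write run before and after a few days of VM activity is what exposes the performance drop being discussed.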


[ceph-users] Re: *****SPAM***** 3 node CEPH PVE hyper-converged cluster serious fragmentation and performance loss in matter of days.

2022-03-08 Thread Marc
> > The VMs don't do many writes, and I migrated the main testing VMs to the 2TB pool, which in turn fragments faster.
> >
> > Did a lot of tests and recreated the pools and OSDs in many ways, but within a matter of days every OSD gets severely fragmented and loses up to 80% of write performance (tes
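The fragmentation figure quoted earlier in the thread (0.89) can be read from a running OSD through the BlueStore allocator. A sketch, assuming OSD id 0 and admin-socket access on the host where that OSD runs:

```shell
# fragmentation score of the block-device allocator,
# from 0.0 (unfragmented) to 1.0 (fully fragmented)
ceph daemon osd.0 bluestore allocator score block
```

Tracking this score daily on each OSD would show how quickly fragmentation climbs relative to the write-performance loss described above.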