[ceph-users] After a huge amount of snaphot delete many snaptrim+snaptrim_wait pgs

2021-05-15 Thread Szabo, Istvan (Agoda)
Hi, The user deleted 20-30 snapshots and clones from the cluster and it seems to slow down the whole system. I’ve set the snaptrim parameters as low as possible and set buffered_io to true so the user at least has some speed, but I can see that object removal from the cluster is
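[For reference, a minimal sketch of the kind of snaptrim throttling being described. The poster's exact option names aren't shown; the settings below are the standard OSD snaptrim knobs in recent Ceph releases, and "buffered_io" is assumed to refer to bluefs_buffered_io. The values are illustrative:]

  # Throttle snapshot trimming so client I/O keeps some headroom
  ceph config set osd osd_snap_trim_sleep 2.0
  ceph config set osd osd_pg_max_concurrent_snap_trims 1
  ceph config set osd osd_snap_trim_priority 1
  # Use buffered I/O for BlueFS/RocksDB reads
  ceph config set osd bluefs_buffered_io true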

[ceph-users] Re: CRUSH rule for EC 6+2 on 6-node cluster

2021-05-15 Thread Bryan Stillwell
Actually, neither of our solutions works very well. Frequently the same OSD was chosen for multiple chunks, as in this pg dump line where OSD 13 appears twice in the acting set: 8.72 9751 0 00 408955125760 0 1302 active+clean 2h 224790'12801 225410:49810 [13,1,14,11,18,2,19,13]p13
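[For context, the usual way to stop CRUSH from reusing an OSD when the EC width (here k=6, m=2, 8 chunks) exceeds the host count is a two-step rule: pick hosts first, then distinct OSDs within each host. A sketch of that shape, with at most two chunks per host so a single host failure costs no more than m=2 chunks; the rule name, id, and exact step counts are illustrative, not necessarily what the thread settled on:]

  rule ec62_example {
          id 2
          type erasure
          step set_chooseleaf_tries 5
          step set_choose_tries 100
          step take default
          step choose indep 4 type host
          step chooseleaf indep 2 type osd
          step emit
  }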

[ceph-users] Re: after upgrade to 16.2.3 16.2.4 and after adding few hdd's OSD's started to fail 1 by 1.

2021-05-15 Thread Bartosz Lis
Hi, Today I had a very similar case: 2 NVMe OSDs went down and out. I had a freshly installed 16.2.1 version. Before the failure the disks were under some load, ~1.5k read IOPS + ~600 write IOPS. When they failed, nothing helped. After every attempt to restart them I found log messages

[ceph-users] Re: Upgrade tips from Luminous to Nautilus?

2021-05-15 Thread Mark Schouten
On Fri, May 14, 2021 at 09:12:07PM +0200, Mark Schouten wrote: > It seems (documentation was no longer available, so it took some > searching) that I needed to run ceph mds deactivate $fs:$rank for every > MDS I wanted to deactivate. Ok, so that helped for one of the MDSes. Trying to deactivate
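[For context, the pre-upgrade MDS reduction being described is roughly the following; the filesystem name is illustrative, and ceph mds deactivate is the Luminous-era command that was later removed (in newer releases, lowering max_mds stops the extra ranks on its own):]

  ceph status                          # note how many MDS ranks are active
  ceph fs set cephfs max_mds 1         # allow only rank 0 to remain active
  ceph mds deactivate cephfs:1         # Luminous syntax; repeat for each rank > 0
  ceph status                          # wait for a single active MDS before upgrading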

[ceph-users] Re: radosgw lost config during upgrade 14.2.16 -> 21

2021-05-15 Thread Arnaud Lefebvre
Hello, I believe you are hitting https://tracker.ceph.com/issues/50249. I've also ended up configuring my rgw instances directly using /etc/ceph/ceph.conf for the time being. Hope this helps. Arnaud On Fri, 14 May 2021 at 22:04, Jan Kasprzak wrote: > > Hello, > > I have just upgraded my
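[For anyone hitting the same issue, the ceph.conf workaround would look something like the section below; the client name and option values are illustrative and depend on how the gateway was originally configured:]

  [client.rgw.myhost]
          rgw_frontends = beast port=7480
          rgw_zone = default
          log_file = /var/log/ceph/radosgw.log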