[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2022-05-11 Thread Alex Closs
Hey y'all - As a datapoint, I *don't* see this issue on 5.17.4-200.fc35.x86_64. Hosts are Fedora 35 Server, with Ceph 17.2.0. Happy to test or provide more data from this cluster if it would be helpful.

-Alex

On May 11, 2022, 2:02 PM -0400, David Rivera wrote:
> Hi,
>
> My experience is similar,

[ceph-users] Re: Laggy OSDs

2022-03-29 Thread Alex Closs
> What is your pool configuration (EC k+m or replicated X setup), and do you
> use the same pool for indexes and data? I'm assuming this is RGW usage via
> the S3 API, let us know if this is not correct.
>
> On Tue, M
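The pool-configuration question quoted above can be answered directly from the Ceph CLI. A minimal sketch (the erasure-code profile name is a placeholder; commands require a running cluster with admin access):

```shell
# Show every pool's settings: replicated size / min_size, crush rule,
# erasure-code profile (if EC), application tags, and PG counts.
ceph osd pool ls detail

# For an EC pool, inspect the k and m values of its profile.
# Replace <profile-name> with the profile shown by the command above.
ceph osd erasure-code-profile get <profile-name>

# Confirm which pools RGW is using for index vs. data.
ceph osd pool ls | grep rgw
```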

[ceph-users] Laggy OSDs

2022-03-29 Thread Alex Closs
Hey folks,

We have a 16.2.7 cephadm cluster that's had slow ops and several (constantly changing) laggy PGs. The set of OSDs with slow ops seems to change at random among all 6 OSD hosts in the cluster. All drives are enterprise SATA SSDs, from either Intel or Micron. We're still not ruling out
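For narrowing down roving slow ops like these, a few standard diagnostic commands are useful; a sketch assuming a cephadm-deployed Pacific (16.2.x) cluster, with `osd.12` as a hypothetical example OSD:

```shell
# Which OSDs currently report slow ops, and why the cluster is unhealthy.
ceph health detail

# Per-OSD commit/apply latency; a consistent outlier suggests a failing
# drive rather than a cluster-wide issue.
ceph osd perf

# Dump the most recent slow ops recorded by one OSD daemon.
# Run on the host carrying that OSD; osd.12 is a placeholder.
cephadm shell -- ceph daemon osd.12 dump_historic_slow_ops
```

Comparing `ceph osd perf` snapshots over time helps distinguish one misbehaving drive from load that genuinely wanders across the cluster.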