[ceph-users] Re: OSDs growing beyond full ratio

2022-08-30 Thread Wyll Ingersoll
I would think so, but it isn't happening nearly fast enough. It's literally been over 10 days with 40 new drives across 2 new servers and they barely have any PGs yet. A few, but not nearly enough to help with the imbalance.

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-30 Thread Dave Schulz
Hi Weiwen, Thanks for the reference link. That does indeed indicate the opposite. I'm not sure why our issues eased so much once the big files were deleted; I suppose it's simply that deleting them freed up more space. -Dave

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-30 Thread 胡 玮文
On 2022-08-30, at 23:20, Dave Schulz wrote: > Is a file in ceph assigned to a specific PG? In my case it seems like a file that's close to the size of a single OSD gets moved from one OSD to the next, filling it up and domino-ing around the cluster filling up OSDs. I believe no. Each large file is striped across many RADOS objects, and those objects map to many different PGs.
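For context, a minimal way to see this striping in practice; the pool and object names below are hypothetical, not taken from the thread. CephFS stores a file as objects named <inode-hex>.<stripe-index-hex>, and each object hashes to its own PG:

  # Hypothetical example: map two stripes of the same CephFS file to their PGs/OSDs.
  ceph osd map cephfs_data 10000000abc.00000000
  ceph osd map cephfs_data 10000000abc.00000001
  # Each command prints something like:
  #   osdmap eN pool 'cephfs_data' (2) object '...' -> pg 2.xxxx -> up ([4,12,7], p4) acting ([4,12,7], p4)
  # so consecutive stripes of one large file normally land on different PGs and OSDs.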

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-30 Thread Wyll Ingersoll
Thanks, we may resort to that if we can't make progress in rebalancing things.

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-30 Thread Dave Schulz
Hey Wyll, I haven't been following this thread very closely, so my apologies if this has already been covered: Are the OSDs on HDDs or SSDs (or hybrid)?

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-30 Thread Wyll Ingersoll
Hey Wyll, I haven't been following this thread very closely, so my apologies if this has already been covered: Are the OSDs on HDDs or SSDs (or hybrid)? If HDDs, you may want to look at decreasing …
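The snippet cuts off before naming what to decrease; a plausible reading (my assumption, not confirmed by the thread) is the per-op recovery/backfill sleep that HDD-backed OSDs apply, which throttles how fast data moves. A rough sketch with illustrative values:

  # Show the throttles currently in effect on one OSD (osd.0 is just an example).
  ceph config show osd.0 | grep -E 'osd_recovery_sleep|osd_max_backfills|osd_recovery_max_active'
  # Lowering the HDD recovery sleep lets backfill move data faster (illustrative value):
  ceph config set osd osd_recovery_sleep_hdd 0.05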

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-30 Thread Wyll Ingersoll
> "… the OSD when it was above the full ratio." > thanks...

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-29 Thread Dave Schulz
ll ratio." thanks... From: Wyll Ingersoll Sent: Monday, August 29, 2022 9:24 AM To: Jarett ; ceph-users@ceph.io Subject: [ceph-users] Re: OSDs growing beyond full ratio I would think so, but it isn't happening nearly fast enough. It's literally been over 10 days with 40 new driv

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-29 Thread Wyll Ingersoll
I would think so, but it isn't happening nearly fast enough. It's literally been over 10 days with 40 new drives across 2 new servers and they barely have any PGs yet. A few, but not nearly enough to help with the imbalance.

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-29 Thread Wyll Ingersoll
Thank You! I will see about trying these out, probably using your suggestion of several iterations with #1 and then #3.

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-29 Thread Wyll Ingersoll
I would think so, but it isn't happening nearly fast enough. It's literally been over 10 days with 40 new drives across 2 new servers and they barely have any PGs yet. A few, but not nearly enough to help with the imbalance.
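A quick way to confirm whether backfill is actually landing PGs on the new drives (a generic sketch, not a command from the thread):

  # The PGS column shows how many placement groups each OSD currently holds;
  # freshly added drives should see this number climb as backfill proceeds.
  ceph osd df tree
  # Overall recovery/backfill progress:
  ceph -s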

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-28 Thread Stefan Kooman
On 8/28/22 17:30, Wyll Ingersoll wrote: > We have a pacific cluster that is overly filled and is having major trouble recovering. We are desperate for help in improving recovery speed. We have modified all of the various recovery throttling parameters. The full_ratio is 0.95 but we have …
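For reference, the thresholds and throttles being referred to look roughly like this (a sketch with illustrative values; the thread does not show the exact settings that were used):

  # Cluster-wide fullness thresholds (raising full_ratio is only a temporary stopgap):
  ceph osd set-nearfull-ratio 0.85
  ceph osd set-backfillfull-ratio 0.90
  ceph osd set-full-ratio 0.95
  # Typical recovery/backfill throttles (illustrative values):
  ceph config set osd osd_max_backfills 2
  ceph config set osd osd_recovery_max_active 4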

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-28 Thread Jarett
Isn’t rebalancing onto the empty OSDs default behavior? > We have a pacific cluster that is overly filled and is having major trouble recovering. We are desperate for …
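It is the default once the new OSDs are in the CRUSH map with non-zero weight, but the data movement is governed by the same backfill throttles and by the balancer. A quick way to check that the balancer is engaged (a sketch; upmap mode requires all clients to be Luminous or newer):

  ceph balancer status                               # should report "active": true and the current mode
  ceph osd set-require-min-compat-client luminous    # prerequisite for upmap mode
  ceph balancer mode upmap                           # upmap usually gives the most even PG distribution
  ceph balancer on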