On 5/31/23 16:15, Igor Fedotov wrote:
On 31/05/2023 15:26, Stefan Kooman wrote:
On 5/29/23 15:52, Igor Fedotov wrote:
bluestore(/var/lib/ceph/osd/ceph-183) probe -20: 0, 0, 0
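Probe lines like the one above can be picked out of an OSD log mechanically. A rough sketch follows; note the field interpretation (count, fragments, total size) is an assumption for illustration, not confirmed by this thread, so check it against your OSD's actual log format:

```python
import re

# Hedged sketch: parse BlueStore allocation-probe log lines like the sample
# above and report fragments per allocation. The meaning of the three
# numbers (count, fragments, size) is an ASSUMPTION for illustration;
# verify against your release's log output before relying on it.
PROBE_RE = re.compile(
    r"bluestore\((?P<path>[^)]+)\) probe (?P<age>-?\d+): "
    r"(?P<cnt>\d+), (?P<frags>\d+), (?P<size>\d+)"
)

def frags_per_alloc(lines):
    """Return {probe_age: frags/cnt} for each matching probe line."""
    out = {}
    for line in lines:
        m = PROBE_RE.search(line)
        if not m:
            continue
        cnt, frags = int(m["cnt"]), int(m["frags"])
        out[int(m["age"])] = frags / cnt if cnt else 0.0
    return out
```

A ratio well above 1 would mean allocations are routinely being split into multiple physical extents.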
Thanks,
Kevin
From: Fox, Kevin M
Sent: Thursday, May 25, 2023 9:36 AM
To: Igor Fedotov; Hector Martin; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: BlueStore fragmentation woes
Ok, I'm
away right now.
>
> Thanks,
> Kevin
>
>
> From: Igor Fedotov
> Sent: Thursday, May 25, 2023 9:17 AM
> To: Fox, Kevin M; Hector Martin; ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: BlueStore fragmentation woes
>
> Perhaps...
>
> I don't like
On 29/05/2023 22.26, Igor Fedotov wrote:
Hi Stefan,
given that allocation probes include every allocation (including short
4K ones) your stats look pretty high indeed.
Although you omitted historic probes so it's hard to tell if there is
negative trend in it..
As I mentioned in my reply to Hector one might want to make further
So fragmentation score calculation was improved recently indeed, see
https://github.com/ceph/ceph/pull/49885
And yeah one can see some fragmentation in allocations for the first two
OSDs. Doesn't look that dramatic as fragmentation scores tell though.
Additionally you might want to collect
So chiming in, I think something is definitely wrong with at *least* the
frag score.
Here's what happened so far:
1. I had 8 OSDs (all 8T HDDs)
2. I added 2 more (osd.0, osd.1), with Quincy defaults
3. I marked 2 old ones out (the ones that seemed to be struggling the
most with IOPS)
4. I added 2
On 5/25/23 22:12, Igor Fedotov wrote:
On 25/05/2023 20:36, Stefan Kooman wrote:
On 5/25/23 18:17, Igor Fedotov wrote:
Perhaps...
I don't like the idea to use fragmentation score as a real index. IMO
it's mostly like a very imprecise first turn marker to alert that
something might be wrong.
yeah, definitely this makes sense
On 26/05/2023 09:39, Konstantin Shalygin wrote:
Hi Igor,
Should we backport this to the P, Q and Reef releases?
Thanks,
k
Sent from my iPhone
> On 25 May 2023, at 23:13, Igor Fedotov wrote:
>
> You might be facing the issue fixed by https://github.com/ceph/ceph/pull/49885
On 5/25/23 18:17, Igor Fedotov wrote:
Perhaps...
I don't like the idea to use fragmentation score as a real index. IMO
it's mostly like a very imprecise first turn marker to alert that
something might be wrong. But not a real quantitative high-quality
estimate.
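To make the "imprecise marker" point concrete, here is a toy metric in the same spirit. This is NOT BlueStore's actual formula (which the PR above revises); it only shows why a single 0..1 number is a coarse alert signal, since very different free-space layouts can land on similar scores:

```python
# Toy illustration (NOT BlueStore's real fragmentation score): collapse a
# free-extent list into one 0..1 number. One large contiguous extent gives
# 0.0; free space scattered across many small extents approaches 1.0.
# Two layouts with equal scores can still behave very differently under
# real allocation load, which is the limitation discussed above.
def toy_frag_score(free_extent_lengths):
    total = sum(free_extent_lengths)
    if total == 0:
        return 0.0
    largest = max(free_extent_lengths)
    return 1.0 - largest / total
```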
Chiming in on the high
the metrics, restart and gather
some more after and let you know.
Thanks,
Kevin
From: Igor Fedotov
Sent: Thursday, May 25, 2023 9:29 AM
To: Fox, Kevin M; Hector Martin; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: BlueStore fragmentation woes
Just run th
https://tracker.ceph.com/issues/58022 ?
We still see runaway OSDs at times, somewhat randomly, which cause runaway
fragmentation issues.
Thanks,
Kevin
From: Igor Fedotov
Sent: Thursday, May 25, 2023 8:29 AM
To: Hector Martin; ceph-users@ceph.io
Subject: [ceph-users] Re
Hi Hector,
I can advise two tools for further fragmentation analysis:
1) One might want to use ceph-bluestore-tool's free-dump command to get
a list of free chunks for an OSD and try to analyze whether it's really
highly fragmented and lacks long enough extents. free-dump just returns
a list
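The free-dump output can then be post-processed offline. A sketch of one way to do that follows; it assumes (verify against your release) that `ceph-bluestore-tool free-dump --path /var/lib/ceph/osd/ceph-N` emits JSON containing a list of free extents with `length` fields, possibly as hex strings:

```python
from collections import Counter

# Sketch of post-processing for `ceph-bluestore-tool free-dump` output.
# ASSUMPTION (check your release): the dump is JSON holding a list of free
# extents, each with a "length" field, sometimes encoded as a "0x..." hex
# string. Extract that list from the parsed JSON before calling this.
def length_histogram(extents):
    """Bucket free-extent lengths by power of two (4K, 8K, 16K, ...)."""
    hist = Counter()
    for ext in extents:
        length = int(str(ext["length"]), 0)  # base 0 also accepts "0x..." hex
        if length <= 0:
            continue
        hist[1 << (length.bit_length() - 1)] += 1  # round down to power of 2
    return dict(hist)
```

A histogram dominated by 4K/8K buckets with no large buckets left is the "lacks long enough extents" situation described above.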
On 5/24/23 09:18, Hector Martin wrote:
On 25/05/2023 01.40, 胡 玮文 wrote:
Hi Hector,
Not related to fragmentation. But I see you mentioned CephFS, and your OSDs are
at high utilization. Is your pool NEAR FULL? CephFS write performance is
severely degraded if the pool is NEAR FULL. Buffered write will be disabled,
and every single write() system call needs to wait
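The nearfull condition itself is just a utilization threshold. A minimal sketch, assuming the usual 0.85 default for `mon_osd_nearfull_ratio`; read the live value from your cluster (`ceph osd dump` shows `nearfull_ratio`) rather than trusting the constant:

```python
# Minimal sketch of the nearfull check described above. 0.85 is the common
# default for mon_osd_nearfull_ratio; your cluster may be configured
# differently, so treat the default here as an assumption.
def is_nearfull(used_bytes, total_bytes, nearfull_ratio=0.85):
    """True when pool/OSD utilization has crossed the nearfull threshold."""
    return total_bytes > 0 and used_bytes / total_bytes >= nearfull_ratio
```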
On 24/05/2023 22.07, Mark Nelson wrote:
Yep, bluestore fragmentation is an issue. It's sort of a natural result
of using copy-on-write and never implementing any kind of
defragmentation scheme. Adam and I have been talking about doing it
now, probably piggybacking on scrub or other operations that already
are reading all of the