Dave
-----Original Message-----
From: Igor Fedotov
Sent: 30 September 2021 17:03
To: Dave Piper ; ceph-users@ceph.io
Subject: Re: [EXTERNAL] RE: [ceph-users] OSDs flapping with "_open_alloc loaded
132 GiB in 2930776 extents available 113 GiB"
On 9/30/2021 6:28 PM, Dave Piper wrote:
>
Hi,
On 9/30/21 18:02, Igor Fedotov wrote:
Using non-default min_alloc_size is generally not recommended. Primarily
due to performance penalties. Some side effects (like yours) can be
observed as well. That's simple - non-default parameters generally mean
much worse QA coverage devs and
to
use the add-mon.yml playbooks to do this; I'll look into that.
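(On the min_alloc_size point, a rough sketch of checking what is currently configured - command names taken from the octopus docs, so treat it as a starting point. Note that an existing OSD keeps whatever min_alloc_size it was created with; changing these options only affects OSDs deployed afterwards.)

    # 0 means "use the per-device-type default"
    ceph config get osd bluestore_min_alloc_size
    ceph config get osd bluestore_min_alloc_size_hdd
    ceph config get osd bluestore_min_alloc_size_ssd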
Cheers,
Dave
-----Original Message-----
From: Igor Fedotov
Sent: 29 September 2021 13:27
To: Dave Piper ; ceph-users@ceph.io
Subject: Re: [EXTERNAL] RE: [ceph-users] OSDs flapping with "_open_alloc loaded
132 GiB in 2930776 extents available 113 GiB"
Cheers,
Dave
-----Original Message-----
From: Igor Fedotov
Sent: 29 September 2021 13:27
To: Dave Piper ; ceph-users@ceph.io
Subject: Re: [EXTERNAL] RE: [ceph-users] OSDs flapping with "_open_alloc loaded 132
GiB in 2930776 extents available 113 GiB"
Hi Dave,
I think it's your disk
Some interesting updates on our end.
This cluster (condor) is in a multisite RGW zonegroup with another cluster
(albans). Albans is still on nautilus and was healthy back when we started this
thread. As a last resort, we decided to destroy condor and recreate it, putting
it back in the
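(For reference, a rough sketch of how a rebuilt cluster can be pulled back into an existing multisite setup - the zone names match the clusters above, but the endpoints, keys and zonegroup name are placeholders, and the real steps depend on how the RGWs are deployed:)

    # on the rebuilt cluster (condor), pull the realm and current period from albans
    radosgw-admin realm pull --url=http://<albans-rgw>:8080 --access-key=<key> --secret=<secret>
    radosgw-admin period pull --url=http://<albans-rgw>:8080 --access-key=<key> --secret=<secret>
    # recreate the local zone inside the existing zonegroup, then commit the period
    radosgw-admin zone create --rgw-zonegroup=<zonegroup> --rgw-zone=condor \
        --endpoints=http://<condor-rgw>:8080 --access-key=<key> --secret=<secret>
    radosgw-admin period update --commit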
On 9/21/2021 10:44 AM, Dave Piper wrote:
I still can't find a way to get ceph-bluestore-tool working in my containerized
deployment. As soon as the OSD daemon stops, the contents of
/var/lib/ceph/osd/ceph- are unreachable.
Some speculations on the above. /var/lib/ceph/osd/ceph- is just a
I still can't find a way to get ceph-bluestore-tool working in my containerized
deployment. As soon as the OSD daemon stops, the contents of
/var/lib/ceph/osd/ceph- are unreachable.
I've found this blog post that suggests changes to the container's entrypoint
are required, but the proposed
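(One approach that might work, untested here - run a throwaway container from the same OSD image while the daemon is stopped; the image name, mounts and ids below are placeholders:)

    docker run --rm -it --privileged --entrypoint /bin/bash \
        -v /dev:/dev -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
        <ceph-osd-image>
    # inside the container: rebuild the tmpfs OSD dir without starting the daemon
    ceph-volume lvm activate --no-systemd <osd-id> <osd-fsid>
    # then the usual tooling works against the mounted path
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-<osd-id>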
On Mon, 20 Sep 2021 at 18:02, Dave Piper wrote:
Okay - I've finally got full debug logs from the flapping OSDs. The raw logs
are both 100M each - I can email them directly if necessary. (Igor I've already
sent these your way.)
Both flapping OSDs are reporting the same "bluefs _allocate failed to allocate"
errors as before. I've also
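(Roughly the sort of settings used to capture verbose logs like these - the exact subsystems and levels are an assumption:)

    ceph config set osd.<id> debug_bluestore 20/20
    ceph config set osd.<id> debug_bluefs 20/20
    ceph config set osd.<id> debug_bdev 20/20
    # revert once the logs are captured
    ceph config rm osd.<id> debug_bluestore
    ceph config rm osd.<id> debug_bluefs
    ceph config rm osd.<id> debug_bdev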
We've started hitting this issue again, despite having bitmap allocator
configured. The logs just before the crash look similar to before (pasted
below).
So perhaps this isn't a hybrid allocator issue after all?
I'm still struggling to collect the full set of diags / run ceph-bluestore-tool
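(For completeness, the bitmap allocator had been configured along these lines - option names per the octopus docs; the OSDs need a restart for it to take effect, and a running OSD can be checked over its admin socket:)

    ceph config set osd bluestore_allocator bitmap
    ceph config set osd bluefs_allocator bitmap
    # after restarting, confirm what an OSD actually picked up
    ceph daemon osd.<id> config get bluestore_allocator
    ceph daemon osd.<id> config get bluefs_allocator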
-----Original Message-----
From: Igor Fedotov
Sent: 23 August 2021 14:22
To: Dave Piper ; ceph-users@ceph.io
Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132
GiB in 2930776 extents available 113 GiB"
Hi Dave,
so maybe another bug in Hybrid Allocator...
Could you p
docker[15282]: 2021-07-26T08:55:35.042+
>>> 7f0e15b3df40 -1 bluestore(/var/lib/ceph/osd/ceph-1)
>>> allocate_bluefs_freespace failed to allocate on 0x4000 min_size
>>> 0x11 > allocated total 0x0 bluefs_shared_alloc_size 0x1
>>> allocated 0x0 available
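(If anyone else hits the same "failed to allocate" line: once ceph-bluestore-tool is usable against the OSD path, the bluefs sizes and the allocator's free-space fragmentation can be dumped - commands per the octopus docs, not yet run here:)

    ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-<id>
    ceph-bluestore-tool free-score --path /var/lib/ceph/osd/ceph-<id> --allocator block
    ceph-bluestore-tool free-dump --path /var/lib/ceph/osd/ceph-<id> --allocator block
    # the allocation unit bluefs is asking for is governed by:
    ceph config get osd bluefs_shared_alloc_size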
Cheers again for all your help,
Dave
-----Original Message-----
From: Igor Fedotov
Sent: 26 July 2021 13:30
To: Dave Piper ; ceph-users@ceph.io
Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132
GiB in 2930776 extents available 113 GiB"
Dave,
please see inline
the container, but I've not figured it out yet.
On 7/26/2021 1:57 PM, Dave Piper wrote:
Hi Igor,
So to get more verbose bu
Cheers,
Dave
-----Original Message-----
From: Igor Fedotov
Sent: 23 July 2021 20:45
To: Dave Piper ; ceph-users@ceph.io
Subject: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132
GiB in 2930776 extents available 113 GiB"
Hi Dave,
The following log line indicates that
again,
Dave
-----Original Message-----
From: Igor Fedotov
Sent: 26 July 2021 11:14
To: Dave Piper ; ceph-users@ceph.io
Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132
GiB in 2930776 extents available 113 GiB"
Hi Dave,
Some notes first:
1) The foll
ceph version 15.2.11 (e3523634d9c2227df9af89a4eac33d16738c49cb) octopus (stable)
Cheers,
Dave
-----Original Message-----
From: Igor Fedotov
Sent: 23 July 2021 20:45
To: Dave Piper ; ceph-users@ceph.io
Subject: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB
in 2930776 extents available 113 GiB"