[ceph-users] Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-10-18 Thread Dave Piper
Dave -Original Message- From: Igor Fedotov Sent: 30 September 2021 17:03 To: Dave Piper ; ceph-users@ceph.io Subject: Re: [EXTERNAL] RE: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB" On 9/30/2021 6:28 PM, Dave Piper wrote: >
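The log line in the subject already hints at the diagnosis discussed throughout this thread: the allocator sees plenty of free space, but split into so many extents that the average free chunk is smaller than what bluefs asks for. A back-of-the-envelope check (my arithmetic, assuming the default 64 KiB `bluefs_shared_alloc_size`, consistent with the 0x10000-style value quoted later in the thread):

```python
# Back-of-the-envelope check of the free-space fragmentation implied by the
# log line "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB".
GIB = 1024 ** 3
loaded_bytes = 132 * GIB          # total free space tracked by the allocator
extent_count = 2930776            # number of free extents it was split into

avg_extent_kib = loaded_bytes / extent_count / 1024
print(f"average free extent: {avg_extent_kib:.1f} KiB")  # ~47 KiB

# bluefs_shared_alloc_size defaults to 64 KiB, so the average free chunk is
# smaller than the unit bluefs tries to allocate from the shared device.
BLUEFS_SHARED_ALLOC_KIB = 64
print(avg_extent_kib < BLUEFS_SHARED_ALLOC_KIB)  # True
```

With free space that fragmented, bluefs allocation can fail even with 113 GiB nominally available, which matches the "failed to allocate" errors reported below.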

[ceph-users] Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-30 Thread Stefan Kooman
Hi, On 9/30/21 18:02, Igor Fedotov wrote: Using non-default min_alloc_size is generally not recommended, primarily due to performance penalties. Some side effects (like yours) can be observed as well. That's simple - non-default parameters generally mean much worse QA coverage devs and
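The advice above (stick with the default min_alloc_size) can be checked against a running cluster. A minimal sketch, with osd.0 as a placeholder id; note the value actually in use by an OSD is fixed at mkfs time, so these commands show the configured settings rather than what is on disk:

```shell
# osd.0 is a placeholder id; substitute a real OSD.
# Config-side defaults that new OSDs would be created with:
ceph config get osd.0 bluestore_min_alloc_size_hdd
ceph config get osd.0 bluestore_min_alloc_size_ssd

# On the node hosting the OSD, via its admin socket:
ceph daemon osd.0 config get bluestore_min_alloc_size
```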

[ceph-users] Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-30 Thread Dave Piper
to use the add-mon.yml playbooks to do this; I'll look into that. Cheers, Dave -Original Message- From: Igor Fedotov Sent: 29 September 2021 13:27 To: Dave Piper ; ceph-users@ceph.io Subject: Re: [EXTERNAL] RE: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extent

[ceph-users] Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-30 Thread Igor Fedotov
. Cheers, Dave -Original Message- From: Igor Fedotov Sent: 29 September 2021 13:27 To: Dave Piper ; ceph-users@ceph.io Subject: Re: [EXTERNAL] RE: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB" Hi Dave, I think it's your disk

[ceph-users] Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-29 Thread Dave Piper
Some interesting updates on our end. This cluster (condor) is in a multisite RGW zonegroup with another cluster (albans). Albans is still on nautilus and was healthy back when we started this thread. As a last resort, we decided to destroy condor and recreate it, putting it back in the

[ceph-users] Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-29 Thread Igor Fedotov
On 9/21/2021 10:44 AM, Dave Piper wrote: I still can't find a way to get ceph-bluestore-tool working in my containerized deployment. As soon as the OSD daemon stops, the contents of /var/lib/ceph/osd/ceph- are unreachable. Some speculations on the above. /var/lib/ceph/osd/ceph- is just a

[ceph-users] Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-21 Thread Dave Piper
I still can't find a way to get ceph-bluestore-tool working in my containerized deployment. As soon as the OSD daemon stops, the contents of /var/lib/ceph/osd/ceph- are unreachable. I've found this blog post that suggests changes to the container's entrypoint are required, but the proposed
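The problem described above is that /var/lib/ceph/osd/ceph-N is a tmpfs populated by ceph-volume, so it vanishes when the OSD container stops. A common workaround, sketched here with placeholder names (image, OSD id 1 matching the log paths in this thread, and an FSID you must look up yourself), is to start a throwaway container with the entrypoint overridden, re-activate the OSD's tmpfs without starting the daemon, and then run the tool:

```shell
# Placeholder image and OSD id/FSID; adjust to your deployment.
docker run -it --rm --privileged \
  -v /dev:/dev -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
  --entrypoint /bin/bash <your-ceph-osd-image>

# Inside the container: recreate /var/lib/ceph/osd/ceph-1 (tmpfs + symlinks)
# without launching the OSD daemon, then run the tool against it.
ceph-volume lvm activate --no-systemd 1 <osd-fsid>
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-1
```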

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-21 Thread Janne Johansson
Den mån 20 sep. 2021 kl 18:02 skrev Dave Piper : > Okay - I've finally got full debug logs from the flapping OSDs. The raw logs > are both 100M each - I can email them directly if necessary. (Igor I've > already sent these your way.) > Both flapping OSDs are reporting the same "bluefs _allocate

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-20 Thread Dave Piper
Okay - I've finally got full debug logs from the flapping OSDs. The raw logs are both 100M each - I can email them directly if necessary. (Igor I've already sent these your way.) Both flapping OSDs are reporting the same "bluefs _allocate failed to allocate" errors as before. I've also
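Collecting the kind of verbose bluefs/bluestore logs mentioned above can be done through the central config store, which also reaches OSDs that crash shortly after startup. A sketch, with osd.1 as a placeholder id:

```shell
# Raise bluefs/bluestore logging on one OSD (placeholder id osd.1).
# These levels are extremely chatty; revert once the failure is captured.
ceph config set osd.1 debug_bluefs 20
ceph config set osd.1 debug_bluestore 20

# ...reproduce the crash, collect the OSD log, then restore defaults:
ceph config rm osd.1 debug_bluefs
ceph config rm osd.1 debug_bluestore
```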

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-09-08 Thread Dave Piper
We've started hitting this issue again, despite having bitmap allocator configured. The logs just before the crash look similar to before (pasted below). So perhaps this isn't a hybrid allocator issue after all? I'm still struggling to collect the full set of diags / run ceph-bluestore-tool
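The "bitmap allocator configured" mentioned above refers to overriding BlueStore's default hybrid allocator. A sketch of the setting as a ceph.conf fragment (the same can be applied via `ceph config set osd bluestore_allocator bitmap`); OSDs must be restarted for it to take effect:

```ini
# ceph.conf fragment: use the bitmap allocator instead of the
# default hybrid allocator.
[osd]
bluestore_allocator = bitmap
```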

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-08-27 Thread Igor Fedotov
Message- From: Igor Fedotov Sent: 23 August 2021 14:22 To: Dave Piper ; ceph-users@ceph.io Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB" Hi Dave, so maybe another bug in Hybrid Allocator... Could you p

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-08-26 Thread Dave Piper
docker[15282]: 2021-07-26T08:55:35.042+ 7f0e15b3df40 -1 bluestore(/var/lib/ceph/osd/ceph-1) allocate_bluefs_freespace failed to allocate on 0x4000 min_size 0x11 allocated total 0x0 bluefs_shared_alloc_size 0x1 allocated 0x0 availab

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-08-23 Thread Igor Fedotov
again for all your help, Dave -Original Message- From: Igor Fedotov Sent: 26 July 2021 13:30 To: Dave Piper ; ceph-users@ceph.io Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB" Dave, please see inline

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-08-20 Thread Dave Piper
the container, but I've not figured it out yet. > > Cheers again for all your help, > > Dave > > -Original Message- > From: Igor Fedotov > Sent: 26 July 2021 13:30 > To: Dave Piper ; ceph-users@ceph.io > Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapp

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-08-12 Thread Dave Piper
again for all your help, Dave -Original Message- From: Igor Fedotov Sent: 26 July 2021 13:30 To: Dave Piper ; ceph-users@ceph.io Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB" Dave, please see inlin

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-08-12 Thread Igor Fedotov
: 26 July 2021 13:30 To: Dave Piper ; ceph-users@ceph.io Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB" Dave, please see inline On 7/26/2021 1:57 PM, Dave Piper wrote: Hi Igor, So to get more verbose bu

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-07-26 Thread Dave Piper
Cheers, Dave -Original Message- From: Igor Fedotov Sent: 23 July 2021 20:45 To: Dave Piper ; ceph-users@ceph.io Subject: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB" Hi Dave, The following log line indicates tha

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-07-26 Thread Dave Piper
> > > > -----Original Message----- > From: Igor Fedotov > Sent: 23 July 2021 20:45 > To: Dave Piper ; ceph-users@ceph.io > Subject: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded > 132 GiB in 2930776 extents available 113 GiB" > >

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-07-26 Thread Igor Fedotov
again, Dave -Original Message- From: Igor Fedotov Sent: 26 July 2021 11:14 To: Dave Piper ; ceph-users@ceph.io Subject: Re: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB" Hi Dave, Some notes first: 1) The foll

[ceph-users] Re: [EXTERNAL] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-07-26 Thread Igor Fedotov
version 15.2.11 (e3523634d9c2227df9af89a4eac33d16738c49cb) octopus (stable) Cheers, Dave -Original Message- From: Igor Fedotov Sent: 23 July 2021 20:45 To: Dave Piper ; ceph-users@ceph.io Subject: [EXTERNAL] Re: [ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available