With the nodes that have some free space on that namespace we don't have an issue,
only with this one, which is weird.
From: Anthony D'Atri
Sent: Friday, December 1, 2023 10:53 PM
To: David C.
Cc: Szabo, Istvan (Agoda) ; Ceph Users
Subject: Re: [ceph-users] How to
On 12/1/23 21:08, Frank Schilder wrote:
Hi Xiubo,
I uploaded a test script with session output showing the issue. When I look at
your scripts, I can't see the stat-check on the second host anywhere. Hence, I
don't really know what you are trying to compare.
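For reference, a manual stat comparison between two clients could look roughly like this (the mount point, file name, and host name below are only placeholders, not the ones from the test script):

  # on the first client
  stat -c '%n size=%s mtime=%Y' /mnt/cephfs/testfile
  # on the second client, run the same check over ssh and compare the two outputs
  ssh client2 stat -c '%n size=%s mtime=%Y' /mnt/cephfs/testfile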
Frank,
I did that manually and
It seems you are running into this:
https://github.com/rook/rook/issues/11474#issuecomment-1365523469
You can check the output of the command below and see whether the disks are
detected by ceph-volume:
ceph-volume inventory --format json-pretty
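If the JSON output is long, something like this can narrow it down (assuming jq is installed; the field names path, available and rejected_reasons are what the inventory JSON provides, as far as I recall):

  # for every detected device, show its path, whether ceph-volume considers it
  # available, and the rejection reasons if it does not
  ceph-volume inventory --format json-pretty | jq '.[] | {path, available, rejected_reasons}'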
Also try to add a specific device path to the
Hey Cephers,
Hope you're all doing well! I'm in a bit of a pickle and could really use
some of your help.
Here's the scoop:
I have a setup with around 10 HDDs and 2 NVMes (plus uninteresting boot
disks).
My initial goal was to configure part of the HDDs (6 out of 7 TB) into an
md0 or similar
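As a rough sketch of what I have in mind (the device names, RAID level, and the 6 TB cut-off are placeholders, not my actual layout):

  # carve a 6 TB partition out of each HDD
  parted -s /dev/sdb mklabel gpt mkpart data 0% 6TB
  parted -s /dev/sdc mklabel gpt mkpart data 0% 6TB
  # assemble those partitions into a single md device
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1

The remainder of each disk would stay free for other use.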
Thanks! The bug I referenced is the reason for the first OSD crash, but not
for the subsequent crashes. The reason for those is described where you
. I'm asking for help with that one.
/Z
On Sun, 3 Dec 2023 at 15:31, Kai Stian Olstad wrote:
On Sun, Dec 03, 2023 at 06:53:08AM +0200, Zakhar Kirpichenko wrote:
One of our 16.2.14 cluster OSDs crashed again because of the dreaded
https://tracker.ceph.com/issues/53906 bug.
It would be good to understand what has triggered this condition and how it
can be resolved without rebooting
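For what it's worth, one way to look at the trigger without rebooting the node is to pull the crash metadata and then restart only the affected OSD daemon (the crash id and OSD id below are placeholders; the orch command assumes a cephadm-managed cluster, otherwise systemctl on the host does the same job):

  # list crashes that have not been archived yet, then dump the backtrace of the relevant one
  ceph crash ls-new
  ceph crash info <crash-id>
  # restart just the crashed OSD instead of rebooting the whole host
  ceph orch daemon restart osd.12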