[ceph-users] Re: How to identify the index pool real usage?

2023-12-03 Thread Szabo, Istvan (Agoda)
With the nodes that have some free space in that namespace we don't have an issue, only with this one, which is weird.
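
A minimal, illustrative way to compare what the index pool reports against how full the OSDs behind it actually are; the pool name default.rgw.buckets.index is an assumption and should be adjusted to the cluster's naming:

  # stored/used bytes per pool, including OMAP, which is where bucket indexes live
  ceph df detail
  # per-OSD utilisation (including the OMAP column) for the OSDs backing the index pool
  ceph osd df tree
  # object counts and usage for the index pool itself
  rados df | grep default.rgw.buckets.index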

[ceph-users] Re: ceph fs (meta) data inconsistent

2023-12-03 Thread Xiubo Li
On 12/1/23 21:08, Frank Schilder wrote: Hi Xiubo, I uploaded a test script with session output showing the issue. When I look at your scripts, I can't see the stat-check on the second host anywhere. Hence, I don't really know what you are trying to compare. Frank, I did that manually and
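
A minimal sketch of the kind of cross-host stat check being discussed, assuming a CephFS mount at /mnt/cephfs and a hypothetical test file; this is only illustrative, not the script from the thread:

  # on the first client: modify the file and record its size/mtime
  echo test >> /mnt/cephfs/testfile
  stat -c 'size=%s mtime=%y' /mnt/cephfs/testfile
  # on the second client: stat the same file through its own mount and compare
  stat -c 'size=%s mtime=%y' /mnt/cephfs/testfile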

[ceph-users] Re: Setting Up Multiple HDDs with replicated DB Device

2023-12-03 Thread 陶冬冬
It seems you are running into this: https://github.com/rook/rook/issues/11474#issuecomment-1365523469 You can check the output of the command below and see whether the disks are detected by ceph-volume: ceph-volume inventory --format json-pretty Also try to add a specific device path to the
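
For reference, the inventory check mentioned above can also be pointed at a single device; the device path /dev/sdb below is just an example:

  # list every device ceph-volume can see and whether it is considered available
  ceph-volume inventory --format json-pretty
  # inspect one specific device in detail
  ceph-volume inventory /dev/sdb --format json-pretty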

[ceph-users] Setting Up Multiple HDDs with replicated DB Device

2023-12-03 Thread P Wagner-Beccard
Hey Cephers, Hope you're all doing well! I'm in a bit of a pickle and could really use some of your power. Here's the scoop: I have a setup with around 10 HDDs and 2 NVMe drives (plus uninteresting boot disks). My initial goal was to configure part of the HDDs (6 out of 7 TB) into an md0 or similar
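
One common alternative to an md device, sketched here under assumed device paths, is to let ceph-volume carve the NVMe drives into block.db slots for the HDD OSDs instead of mirroring the DB; --report makes it a dry run:

  # create HDD OSDs with their RocksDB/WAL on the NVMe devices (dry run first)
  ceph-volume lvm batch --bluestore \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
      /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj \
      --db-devices /dev/nvme0n1 /dev/nvme1n1 --report

Note that this spreads the DBs across the two NVMe drives rather than replicating them, so losing one NVMe takes down the OSDs whose DBs it holds.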

[ceph-users] Re: Ceph 16.2.14: osd crash, bdev() _aio_thread got r=-1 ((1) Operation not permitted)

2023-12-03 Thread Zakhar Kirpichenko
Thanks! The bug I referenced is the reason for the 1st OSD crash, but not for the subsequent crashes. The reason for those is described where you . I'm asking for help with that one. /Z On Sun, 3 Dec 2023 at 15:31, Kai Stian Olstad wrote: > On Sun, Dec 03, 2023 at 06:53:08AM +0200, Zakhar
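To gather the details of those subsequent crashes for the list, the crash module output is usually the most useful starting point; the crash ID below is a placeholder:

  # list crashes recorded by the cluster, with timestamps
  ceph crash ls
  # full metadata and backtrace for one crash
  ceph crash info <crash-id>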

[ceph-users] Re: Ceph 16.2.14: osd crash, bdev() _aio_thread got r=-1 ((1) Operation not permitted)

2023-12-03 Thread Kai Stian Olstad
On Sun, Dec 03, 2023 at 06:53:08AM +0200, Zakhar Kirpichenko wrote: One of our 16.2.14 cluster OSDs crashed again because of the dreaded https://tracker.ceph.com/issues/53906 bug. It would be good to understand what has triggered this condition and how it can be resolved without rebooting
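
If only the affected OSD is wedged, a daemon-level restart is the usual first attempt before rebooting the host; whether it actually clears this particular aio error is exactly what the thread is trying to establish. The OSD id and deployment style below are assumptions:

  # cephadm-managed cluster
  ceph orch daemon restart osd.12
  # classic (non-cephadm) deployment, on the OSD host itself
  systemctl restart ceph-osd@12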