[ceph-users] Re: 16.2.8 pacific QE validation status, RC2 available for testing

2022-05-10 Thread Neha Ojha
Hi Yuri, rados and upgrade/pacific-p2p look good to go. On Tue, May 10, 2022 at 5:46 AM Benoît Knecht wrote: > > On Mon, May 09, 2022 at 07:32:59PM +1000, Brad Hubbard wrote: > > It's the current HEAD of the pacific branch or, alternatively, > >

[ceph-users] repairing damaged cephfs_metadata pool

2022-05-10 Thread Horvath, Dustin Marshall
Hi there, newcomer here. I've been trying to figure out if it's possible to repair or recover cephfs after some unfortunate issues a couple of months ago; these couple of nodes have been offline most of the time since the incident. I'm sure the problem is that I lack the ceph expertise to
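
The upstream CephFS disaster-recovery procedure is the usual starting point for a damaged cephfs_metadata pool. A minimal sketch of the first, non-destructive steps, assuming a filesystem named cephfs (substitute the real name) and that all MDS daemons are stopped:

  # Export a backup of the MDS journal before attempting any repair
  cephfs-journal-tool --rank=cephfs:0 journal export backup.bin

  # Inspect the journal for damage without modifying anything
  cephfs-journal-tool --rank=cephfs:0 journal inspect

Destructive steps such as 'journal reset' or cephfs-data-scan should only follow once a backup exists.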

[ceph-users] Re: Erasure-coded PG stuck in the failed_repair state

2022-05-10 Thread Wesley Dillingham
In my experience: "No scrub information available for pg 11.2b5 error 2: (2) No such file or directory" is the output you get from the command when the up or acting osd set has changed since the last deep-scrub. Have you tried to run a deep scrub (ceph pg deep-scrub 11.2b5) on the pg and then
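
A minimal sketch of that sequence, assuming the PG ID 11.2b5 from the thread and that the cluster is otherwise healthy:

  # Trigger a fresh deep scrub so up-to-date scrub information exists for the PG
  ceph pg deep-scrub 11.2b5

  # After the deep scrub completes, list the inconsistencies it recorded
  rados list-inconsistent-obj 11.2b5 --format=json-pretty

  # Only once the nature of the damage is understood, ask the primary OSD to repair the PG
  ceph pg repair 11.2b5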

[ceph-users] Erasure-coded PG stuck in the failed_repair state

2022-05-10 Thread Robert Appleyard - STFC UKRI
Hi, We've got an outstanding issue with one of our Ceph clusters here at RAL. The cluster is 'Echo', our 40PB cluster. We found an object from an 8+3EC RGW pool in the failed_repair state. We aren't sure how the object got into this state, but it doesn't appear to be a case of correlated drive

[ceph-users] Re: not so empty bucket

2022-05-10 Thread Ulrich Klein
Yo, I’m having the same problem and can easily reproduce it. See https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/XOQXZYOWYMMQBWFXMHYDQUJ7LZZPFLSU/ And similar ones. The problem still exists in Quincy 17.2.0. But looks like it’s too low priority to be fixed. Ciao, Uli > On 09.

[ceph-users] Re: ceph-crash user requirements

2022-05-10 Thread Eugen Block
Hi, there's a profile "crash" for that. In a lab setup with Nautilus there's one crash client with these caps: admin:~ # ceph auth get client.crash [client.crash] key = caps mgr = "allow profile crash" caps mon = "allow profile crash" On an Octopus cluster
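
A minimal sketch of creating such a dedicated client with the built-in crash profile (the key itself is omitted in the quoted output):

  # Create (or fetch, if it already exists) a client restricted to the crash profile
  ceph auth get-or-create client.crash mon 'allow profile crash' mgr 'allow profile crash'

  # Verify the resulting caps
  ceph auth get client.crash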

[ceph-users] ceph-crash user requirements

2022-05-10 Thread Burkhard Linke
Hi, I just stumbled over some log messages regarding ceph-crash: May 10 09:32:19 bigfoot60775 ceph-crash[2756]: WARNING:ceph-crash:post /var/lib/ceph/crash/2022-05-10T07:10:55.837665Z_7f3b726e-0368-4149-8834-6cafd92fb13f as client.admin failed: b'2022-05-10T09:32:19.099+0200 7f911ad92700 -1
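
The warning above shows ceph-crash falling back to client.admin. One common remedy, as a sketch and assuming a package-based (non-cephadm) installation with the default /etc/ceph paths, is to give the daemon the dedicated client.crash keyring described in the reply above:

  # Export the client.crash keyring to the conventional location (adjust the path for your setup)
  ceph auth get client.crash -o /etc/ceph/ceph.client.crash.keyring

  # Restart the crash-reporting daemon so it picks up the new identity
  systemctl restart ceph-crash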

[ceph-users] Re: Is osd_scrub_auto_repair dangerous?

2022-05-10 Thread Janne Johansson
On Mon, May 9, 2022 at 21:04 Vladimir Brik wrote: > Hello > Does osd_scrub_auto_repair default to false because it's > dangerous? I assume `ceph pg repair` is also dangerous then? > > In what kinds of situations do they cause problems? With filestore, there were fewer (or no?) checksums, so the
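
For reference, a sketch of inspecting and toggling the option at runtime via the centralized config (option names as in recent releases):

  # Show the current value; it defaults to false
  ceph config get osd osd_scrub_auto_repair

  # Enable automatic repair of errors found during deep scrubs
  ceph config set osd osd_scrub_auto_repair true

  # Auto-repair backs off if a scrub finds more than this many errors
  ceph config get osd osd_scrub_auto_repair_num_errors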