From: Lincoln Bryant
Sent: Wednesday, November 17, 2021 9:18 AM
To: Eugen Block ; ceph-users@ceph.io
Subject: [ceph-users] Re: cephadm / ceph orch : indefinite hang adding hosts to new cluster
Hi,
Yes, the hosts have internet access and other Ceph commands work successfully.
Every host we [...] and that did not help either.

In any case, we are not sure how to proceed from here. Is there anything we can
do to turn up logging verbosity, or other things to check? I've tried to find
the ceph orch source code to try to understand what may be happening, but I'm
not sure where to look.
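On the verbosity question, one option is a sketch based on cephadm's standard debug knobs: raise the cephadm mgr module's cluster-log level and stream it. The commands below are guarded so they are a no-op on a host without the ceph CLI.

```shell
# Sketch: turn up cephadm's logging and watch its debug output live.
enable_cephadm_debug() {
  # Make the cephadm mgr module emit debug-level cluster log messages...
  ceph config set mgr mgr/cephadm/log_to_cluster_level debug
  # ...and stream them as they arrive (Ctrl-C to stop watching).
  ceph -W cephadm --watch-debug
}

# Only attempt this where the ceph CLI and cluster access exist.
if command -v ceph >/dev/null 2>&1; then
  enable_cephadm_debug
fi
```

`ceph log last cephadm` is another way to review recent module messages after the fact.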
Thanks,
Hi Ricardo,
I just had a similar issue recently.
I did a dump of the monitor store (i.e., something like "ceph-monstore-tool
/var/lib/ceph/mon/mon-a/ dump-keys") and most messages were of type 'logm'. For
me I think it was a lot of log messages coming from an oddly behaving OSD.
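The key-type survey described above can be sketched as a small pipeline. Assumptions: the monitor is stopped, it is named "a", and the store lives at the default path; adjust both for your deployment.

```shell
# Sketch: dump all keys from a (stopped) monitor's store and count them
# by prefix, so the dominant key type (e.g. 'logm') stands out.
count_mon_keys() {
  ceph-monstore-tool "$1" dump-keys | awk '{print $1}' | sort | uniq -c | sort -rn
}

if command -v ceph-monstore-tool >/dev/null 2>&1; then
  count_mon_keys /var/lib/ceph/mon/mon-a/
fi
```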
I've seen
From: Wido den Hollander
Sent: Wednesday, March 3, 2021 2:03 AM
To: Lincoln Bryant ; ceph-users
Subject: Re: [ceph-users] Monitor leveldb growing without bound v14.2.16
On 03/03/2021 00:55, Lincoln Bryant wrote:
Hi list,
We recently had a cluster outage over the weekend where several OSDs were
inaccessible over night for several hours. When I found the cluster in the
morning, the monitors' root disks (which contained both the monitor's leveldb
and the Ceph logs) had completely filled.
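For what it's worth, once a cluster in this state is healthy again, the monitor store can usually be compacted in place; a sketch follows (the mon ID "a" is illustrative, and setting `mon_compact_on_start = true` in ceph.conf is the restart-time equivalent).

```shell
# Sketch: ask a monitor to compact its backing store. Note the store only
# shrinks durably once PGs are active+clean and old maps can be trimmed.
request_mon_compact() {
  ceph tell "mon.$1" compact
}

if command -v ceph >/dev/null 2>&1; then
  request_mon_compact a
fi
```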
After
https://tracker.ceph.com/issues/44286 . This seems totally in line with what we
are seeing as well.
--Lincoln
From: Lincoln Bryant
Sent: Thursday, February 27, 2020 12:05 PM
To: Sage Weil ; Paul Emmerich
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: Cache tier OSDs crashing due to unfound hitset object 14.2.7
What's the best way to proceed here? It would be very very preferable not to
lose the data.
Thanks,
Lincoln
____
From: Lincoln Bryant
Sent: Thursday, February 27, 2020 9:26 AM
To: Sage Weil ; Paul Emmerich
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: Cache tier OSDs crashing due to unfound hitset object 14.2.7
From: Sage Weil
Sent: Thursday, February 27, 2020 9:01 AM
To: Paul Emmerich
Cc: Lincoln Bryant ; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Cache tier OSDs crashing due to unfound hitset
object 14.2.7
If the pg in question can recover without that OSD, I would use
ceph-objectstore-tool
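A typical first step with ceph-objectstore-tool here is to export the PG from the stopped OSD so nothing is lost before any destructive action. This is a sketch: the OSD id, PG id, and output path below are illustrative, not taken from the thread.

```shell
# Sketch: export a PG from a stopped OSD's store as a safety copy.
export_pg() {
  osd_path="/var/lib/ceph/osd/ceph-$1"
  ceph-objectstore-tool --data-path "$osd_path" \
      --pgid "$2" --op export --file "$3"
}

if command -v ceph-objectstore-tool >/dev/null 2>&1; then
  export_pg 12 36.321b /root/pg.36.321b.export
fi
```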
From: Paul Emmerich
To: Lincoln Bryant
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Cache tier OSDs crashing due to unfound hitset object
14.2.7
I've also encountered this issue, but luckily without the crashing
OSDs, so marking as lost resolved it for us.
See https://tracker.ceph.com/issues/44286
Paul
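The "marking as lost" step Paul describes corresponds to Ceph's mark_unfound_lost command. A sketch (the PG id 36.321b is taken from the log fragment in this thread and is illustrative):

```shell
# Sketch: give up on unfound objects in a PG. 'revert' rolls each object
# back to a previous version where one exists; 'delete' forgets it entirely.
mark_unfound() {
  ceph pg "$1" mark_unfound_lost revert
}

if command -v ceph >/dev/null 2>&1; then
  mark_unfound 36.321b
fi
```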
this cluster is inoperable due to 3 down PGs.
Thanks,
Lincoln Bryant
[1]
-4> 2020-02-26 22:26:29.455 7ff52edaa700 0 0x559587fa91e0 36.321b unexpected need for
36:d84c:.ceph-internal::hit_set_36.321b_archive_2020-02-24 21%3a15%3a16.792846_2020-02-24 21%3a15%3a32.457855:head h