I have been trying to recover our ceph cluster from a power outage. I
was able to recover most of the cluster using the data from the OSDs.
But the MDS maps were gone, and now I'm trying to recover that. I was
looking around and found a section in the Quincy manual titled
RECOVERING THE FILE
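If that is the section on recovering the file system after a catastrophic
Monitor store loss, my understanding is that the key step is recreating the
file system entry against the existing pools, roughly like this (the fs and
pool names below are placeholders, so take this with a grain of salt):

    ceph fs new cephfs cephfs_metadata cephfs_data --force --recover
    ceph fs set cephfs joinable true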
Hi Josh,
Thanks for your reply.
But I already tried that, with no luck.
The primary OSD goes down and hangs forever upon the "mark_unfound_lost delete"
command.
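For reference, the exact command I am running (with our PG id in place of the
placeholder) is:

    ceph pg <pgid> mark_unfound_lost delete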
I guess it is too damaged to salvage, unless one really starts deleting
individual corrupt objects?
Anyway, as I said, files in
Hi,
rbd namespaces are supported. It's just that "rbd device (un)map" seems
to ignore the "--namespace" argument (unmap has the same issue on Linux,
we'll have to look into it).
The good news is that the namespace can be specified as part of the
image identifier, e.g.:
rbd device map
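To illustrate with made-up pool/namespace/image names, the image spec takes
the form pool/[namespace/]image[@snap], so something like:

    rbd device map mypool/mynamespace/myimage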
Hi Jesper,
Given that the PG is marked recovery_unfound, I think you need to
follow
https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#unfound-objects.
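In short, that boils down to locating the unfound objects and then marking
them lost (the PG id below is just a placeholder):

    ceph health detail
    ceph pg <pgid> list_unfound
    ceph pg <pgid> mark_unfound_lost revert|delete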
Josh
On Tue, Sep 20, 2022 at 12:56 AM Jesper Lykkegaard Karlsen
wrote:
>
> Dear all,
>
> System: latest Octopus, 8+3
Dear Sebastian,
Thanks, it might be best to do so... that's also what I thought would be one
of the better solutions. :-)
I'll give feedback after a successful transition to Quincy.
Christoph
On Tue, Sep 20, 2022 at 14:08 Sebastian Knust <
skn...@physik.uni-bielefeld.de> wrote:
> Hi
On 9/20/22 11:23, Lucian Petrut wrote:
Hi,
We've just published new Pacific and Quincy MSIs, including the IPv6 fix
as well as a few others that haven't landed upstream yet. Please let us
know how it works.
It works \o/. I just successfully created, mapped, and wrote to an rbd
image in
Hi Christoph,
I am able to reproducibly trigger kernel panics on CentOS 7 clients with the
native kernel (3.10.0-1160.76.1.el7) when accessing CephFS snapshots via SMB
with vfs_shadow_copy2. This occurs on a Pacific cluster. IIRC accessing
the snapshots on the server also led to a kernel panic, but I'm not
Hello all,
I would like to upgrade our well-running Rocky 8.6 based bare-metal cluster
from Octopus to Quincy in the next few days. But there are some CentOS 7
kernel-based clients mapping RBDs or mounting CephFS in our environment.
Is there someone here who can confirm that CentOS 7 clients
Hi,
We did find a potentially related issue: the error codes were
inconsistent on Windows [1], which can cause all kinds of unexpected
problems.
We've published new Ceph MSIs [2] that include this fix along with a few
other patches that haven't landed upstream yet.
Regards,
Lucian
[1]
Whoops, wrong paste. Here's the actual Ceph MSI link:
https://cloudbase.it/ceph-for-windows/
Regards,
Lucian
On 20.09.2022 12:11, Lucian Petrut wrote:
Hi,
From what I recall, we did try a Pacific Windows client with a
Nautilus cluster and it seemed to work fine. New Ceph versions might
Hi,
From what I recall, we did try a Pacific Windows client with a Nautilus
cluster and it seemed to work fine. New Ceph versions might introduce
non-backwards compatible changes, so I think the consensus is that it
*might* work but it's not something that's being tested or supported.
Think I resolved this by following steps here:
https://ceph-users.ceph.narkive.com/gYNAbBJP/cannot-commit-period-period-does-not-have-a-master-zone-of-a-master-zonegroup
Specifically, passing in the multisite zonegroup/zone config as JSON with the
relevant set commands.
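From memory, the sequence was something along these lines (the JSON file
names are placeholders for dumps of the existing config):

    radosgw-admin zonegroup set --rgw-zonegroup=<zonegroup> --infile zonegroup.json
    radosgw-admin zone set --rgw-zone=<zone> --infile zone.json
    radosgw-admin period update --commit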
I didn't manage to "forget
Hi,
I can reproduce it with 16.2.7 (without wal_device). Although you
could simply use a yaml file and run 'ceph orch apply -i osd.specs',
this seems to be a bug since the behaviour is documented [1]. I'd recommend
creating a tracker issue for this.
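For reference, a minimal spec file of the kind I mean (the device selectors
below are just an example) could look like:

    service_type: osd
    service_id: default_osds
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0

applied with 'ceph orch apply -i osd.specs'.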
Thanks,
Eugen
[1]
We are running Kernel:
Linux kairo 5.4.0-110-generic #124-Ubuntu SMP Thu Apr 14 19:46:19 UTC 2022
x86_64 x86_64 x86_64 GNU/Linux
On OS:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
And running in a Docker container:
CONTAINER ID IMAGE
Dear all,
System: latest Octopus, 8+3 erasure CephFS
I have a PG that has been driving me crazy.
It got into a bad state after heavy backfilling, combined with OSDs going
down in turn.
State is:
active+recovery_unfound+undersized+degraded+remapped
I have tried repairing it with
On 19/09/2022 23:32, j.rasakunasin...@autowork.com wrote:
Hi,
we have a Ceph cluster with 3 controller and 6 storage nodes running. We use
iscsi/tcmu-runner (16.2.9) to connect VMware to Ceph.
We are facing an issue: we lost the connection to the iSCSI gateways, so the
ESXi hosts connected to them no longer work properly.