On Tue, May 31, 2022 at 3:42 AM Magnus HAGDORN wrote:
>
> Hi all,
> it seems to be the season of stuck MDSs. Our ceph filesystem is also
> degraded: the MDS has been stuck in replay for about 20 hours now.
>
> We run a nautilus ceph cluster with about 300TB of data and many
> millions of files. We
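For an MDS stuck in replay, a few read-only commands can show whether replay is actually making progress (a sketch; `mds.<name>` is a placeholder for the daemon, run on the host where it lives):

```shell
# Overall filesystem and MDS state -- which rank is in up:replay
ceph fs status
ceph health detail

# Ask the stuck daemon for its journal counters via the admin socket;
# run this twice a few minutes apart and compare the read positions
# to see whether replay is moving at all or genuinely hung
ceph daemon mds.<name> perf dump mds_log
```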
On Wed, 1 Jun 2022 at 23:52, David Galloway wrote:
>
> The master branch has been deleted from all recently active repos except
> ceph.git. I'm slowly retargeting existing PRs from master to main.
>
> The tool I used to rename the branches didn't take care of that for me
> unfortunately so it has to be done manually.
>
> As far as I know, this should
I have a simple multisite radosgw configuration setup for testing. There is 1
realm, 1 zonegroup, and 2 separate clusters each with its own zone. There is 1
bucket with 1 object in it and no updates currently happening. There is no
group sync policy currently defined.
The problem I see is
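For a setup like that, each side's view of replication can be inspected with the read-only `radosgw-admin` sync commands (a sketch; run on both clusters, `<bucket>` is a placeholder):

```shell
# Metadata and data sync state relative to the other zone
radosgw-admin sync status

# Per-bucket sync view for the single test bucket
radosgw-admin bucket sync status --bucket=<bucket>

# Confirm both sides agree on the current period/realm configuration
radosgw-admin period get
```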
Hey guys and girls, newbie question here (still in planning phase).
I'm thinking about starting out with a mini cluster with 4 nodes and
perhaps 3x replication, because of budgetary reasons. In a few months or
next year, I'll get extra budget and can extend to 7-8 nodes. I will
then want to
I have an error in the ceph dashboard:
--
CephMgrPrometheusModuleInactive
description
The mgr/prometheus module at opcpmfpskup0101.p.fnst.10.in-addr.arpa:9283 is
unreachable. This could mean that the module has been disabled or the mgr
itself is down. Without the mgr/prometheus module metrics and
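For that alert, the usual first step is to check whether the module is enabled and whether the active mgr answers on the exporter port (a sketch of standard `ceph` commands; the hostname is a placeholder):

```shell
# Is the prometheus module in the enabled list on the active mgr?
ceph mgr module ls

# Enable it if it is off
ceph mgr module enable prometheus

# Which mgr is active, and is it up at all?
ceph mgr stat

# The exporter should answer on port 9283 of the active mgr
curl -s http://<active-mgr-host>:9283/metrics | head
```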
Hi,
how did you end up with that many PGs per OSD? According to your
output the pg_autoscaler is enabled; if the autoscaler did that, I
would create a tracker issue for it. Then I would either disable it
or set its mode to "warn", and then reduce the pg_num for some of
the pools.
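The steps above map onto these commands (a sketch; `<pool>` and the target pg_num are placeholders, and since Nautilus pg_num can be decreased online with PGs merged in the background):

```shell
# Current autoscaler state and its suggested pg_num per pool
ceph osd pool autoscale-status

# Per pool: make the autoscaler warn instead of acting...
ceph osd pool set <pool> pg_autoscale_mode warn
# ...or turn it off for that pool entirely
ceph osd pool set <pool> pg_autoscale_mode off

# Then lower pg_num; the cluster merges PGs gradually on its own
ceph osd pool set <pool> pg_num 256
```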