[ceph-users] Re: MDS stuck in replay

2022-06-01 Thread Ramana Venkatesh Raja
On Tue, May 31, 2022 at 3:42 AM Magnus HAGDORN wrote:
>
> Hi all,
> it seems to be the time of stuck MDSs. We also have our ceph filesystem
> degraded. The MDS is stuck in replay for about 20 hours now.
>
> We run a nautilus ceph cluster with about 300TB of data and many
> millions of files. We
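For reference, a few generic commands that are commonly used to check the state of an MDS and get a rough sense of replay progress (standard Ceph CLI, not taken from this thread; mds.<name> is a placeholder, and the daemon commands are run on the host of the affected MDS):

  # overall cluster and filesystem state
  ceph status
  ceph fs status

  # on the MDS host: daemon state and journal counters; comparing
  # rdpos against wrpos in mds_log gives a rough idea of how far
  # replay has progressed
  ceph daemon mds.<name> status
  ceph daemon mds.<name> perf dump mds_log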

[ceph-users] Re: Ceph Repo Branch Rename - May 24

2022-06-01 Thread Rishabh Dave
On Wed, 1 Jun 2022 at 23:52, David Galloway wrote:
>
> The master branch has been deleted from all recently active repos except
> ceph.git. I'm slowly retargeting existing PRs from master to main.
>
> The tool I used to rename the branches didn't take care of that for me
> unfortunately so it

[ceph-users] Re: Ceph Repo Branch Rename - May 24

2022-06-01 Thread David Galloway
The master branch has been deleted from all recently active repos except ceph.git. I'm slowly retargeting existing PRs from master to main. The tool I used to rename the branches unfortunately didn't take care of that for me, so it has to be done manually. As far as I know, this should
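For anyone updating a local clone after the rename, the usual sequence is roughly the following (plain git, not specific to ceph.git):

  # rename the local branch and point it at the new upstream
  git branch -m master main
  git fetch origin
  git branch -u origin/main main
  # refresh the cached default branch (origin/HEAD)
  git remote set-head origin -a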

[ceph-users] radosgw multisite sync /admin/log requests overloading system.

2022-06-01 Thread Wyll Ingersoll
I have a simple multisite radosgw configuration set up for testing. There is 1 realm, 1 zonegroup, and 2 separate clusters, each with its own zone. There is 1 bucket with 1 object in it and no updates currently happening. There is no group sync policy currently defined. The problem I see is
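A few standard radosgw-admin commands that are typically used to see what the sync threads behind those /admin/log requests are doing (generic multisite tooling, not taken from this message):

  # run against each zone
  radosgw-admin sync status
  radosgw-admin sync error list

  # inspect the metadata and data logs the peer zone is polling
  radosgw-admin mdlog list
  radosgw-admin datalog list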

[ceph-users] Moving rbd-images across pools?

2022-06-01 Thread Angelo Hongens
Hey guys and girls, newbie question here (still in the planning phase). I'm thinking about starting out with a mini cluster of 4 nodes and perhaps 3x replication, for budgetary reasons. In a few months or next year, I'll get extra budget and can extend to 7-8 nodes. I will then want to
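Since this is a planning question, a sketch of how an rbd image is typically moved between pools on recent releases (Nautilus and later), with placeholder pool and image names:

  # live migration: clients re-open the image against the new pool
  rbd migration prepare <src-pool>/<image> <dst-pool>/<image>
  rbd migration execute <dst-pool>/<image>
  rbd migration commit <dst-pool>/<image>

  # offline alternative for a small number of images
  rbd cp <src-pool>/<image> <dst-pool>/<image>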

[ceph-users] Error CephMgrPrometheusModuleInactive

2022-06-01 Thread farhad kh
I have an error in the Ceph dashboard -- CephMgrPrometheusModuleInactive. Description: The mgr/prometheus module at opcpmfpskup0101.p.fnst.10.in-addr.arpa:9283 is unreachable. This could mean that the module has been disabled or the mgr itself is down. Without the mgr/prometheus module, metrics and
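The usual first checks for this alert look something like the following (standard Ceph CLI; the host and port are the ones reported in the alert):

  # is the prometheus module enabled, and where is it listening?
  ceph mgr module ls
  ceph mgr services

  # enable it if it is off
  ceph mgr module enable prometheus

  # confirm the endpoint actually answers
  curl http://opcpmfpskup0101.p.fnst.10.in-addr.arpa:9283/metrics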

[ceph-users] Re: Degraded data redundancy and too many PGs per OSD

2022-06-01 Thread Eugen Block
Hi, how did you end up with that many PGs per OSD? According to your output the pg_autoscaler is enabled; if the autoscaler caused this, I would create a tracker issue for it. Then I would either disable it or set the mode to "warn", and then reduce the pg_num for some of the pools.
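The suggestions above map to commands roughly like these (generic examples; pool names and the target pg_num are placeholders):

  # show the autoscaler's view of each pool
  ceph osd pool autoscale-status

  # switch the autoscaler to warn-only (or off) for a pool
  ceph osd pool set <pool> pg_autoscale_mode warn

  # reduce pg_num for an oversized pool (PG merging, Nautilus and later)
  ceph osd pool set <pool> pg_num <target>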