It seems like it maybe didn't actually do the redeploy, as it should log
something saying it's actually doing it on top of the line saying it
scheduled it. To confirm, is the upgrade paused ("ceph orch upgrade status"
reports is_paused as true)? If so, maybe try doing a mgr failover ("ceph mgr
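The check and failover suggested above can be sketched roughly as follows (a minimal sketch; "ceph mgr fail" is the standard failover command, but verify against your release's docs):

```shell
# Confirm the upgrade is actually paused; the JSON output
# should show "is_paused": true
ceph orch upgrade status

# If it is paused but the redeploy never runs, bounce the
# active mgr so a standby takes over and reschedules work
ceph mgr fail
```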
On Sun, Apr 9, 2023 at 11:21 PM Ulrich Pralle wrote:
>
> Hi,
>
> we are using ceph version 17.2.5 on Ubuntu 22.04.1 LTS.
>
> We deployed multi-mds (max_mds=4, plus standby-replay mds).
> Currently we have statically directory-pinned our user home directories (~50k).
> The cephfs' root directory is
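For context, static directory pinning of the kind described is done with the ceph.dir.pin extended attribute on the directory; a minimal sketch (the mount point, path, and rank below are assumptions):

```shell
# Pin one home directory to MDS rank 1; -1 would remove the pin
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/home/alice

# Read the pin back to verify
getfattr -n ceph.dir.pin /mnt/cephfs/home/alice
```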
Hello.
I have a 10-node cluster. I want to create a non-replicated pool
(replication size 1) and I have some questions about it:
Let me tell you my use case:
- I don't care about losing data,
- All of my data is JUNK, and these junk files are usually between 1 KB and 32 MB.
- These files will be
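A size-1 pool as described can be created roughly like this (the pool name and PG count are assumptions; recent Ceph releases deliberately guard single-replica pools behind an extra config flag and a confirmation flag):

```shell
# Allow pools with a single replica (refused by default)
ceph config set global mon_allow_pool_size_one true

# Create a replicated pool, then drop it to one copy
ceph osd pool create junk 128 128 replicated
ceph osd pool set junk size 1 --yes-i-really-mean-it
```

Note that with size 1, any single OSD loss permanently loses the data on it, which matches the stated use case.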
I did what you told me.
I also see in the log that the command went through:
2023-04-10T19:58:46.522477+ mgr.ceph04.qaexpv [INF] Schedule redeploy daemon mds.mds01.ceph06.rrxmks
2023-04-10T20:01:03.360559+ mgr.ceph04.qaexpv [INF] Schedule redeploy daemon mds.mds01.ceph05.pqxmvt
Will also note that the normal upgrade process scales down the mds service
to have only 1 mds per fs before upgrading it, so that might be something
you'd want to do as well if the upgrade didn't do it already. It does so by
setting max_mds to 1 for the fs.
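The scale-down step described above corresponds to something like the following (the filesystem name is a placeholder; max_mds=4 matches the value mentioned earlier in the thread):

```shell
# Reduce the fs to one active MDS before upgrading, as the
# orchestrated upgrade does; extra ranks stop and become standbys
ceph fs set <fs_name> max_mds 1

# After the upgrade completes, restore the previous rank count
ceph fs set <fs_name> max_mds 4
```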
On Mon, Apr 10, 2023 at 3:51 PM Adam King wrote:
You could try pausing the upgrade and manually "upgrading" the mds daemons
by redeploying them on the new image. Something like "ceph orch daemon
redeploy <daemon name> --image <17.2.6 image>" (daemon names should
match those in "ceph orch ps" output). If you do that for all of them and
then get them into an
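Applying that redeploy to every MDS daemon could look roughly like this (a sketch only: the image tag is a placeholder, and the jq field name is an assumption to verify against your "ceph orch ps --format json" output):

```shell
# Stop the orchestrated upgrade from fighting the manual redeploys
ceph orch upgrade pause

# Redeploy each mds daemon on the target image
for d in $(ceph orch ps --daemon-type mds --format json | jq -r '.[].daemon_name'); do
  ceph orch daemon redeploy "$d" --image quay.io/ceph/ceph:v17.2.6
done
```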
Hi,
If you remember, I hit bug https://tracker.ceph.com/issues/58489 so I
was very relieved when 17.2.6 was released and started to update
immediately.
But now I'm stuck again with my broken MDS. The MDSs won't get into up:active
without the update, but the update waits for them to get into
On Sat, Apr 8, 2023 at 2:26 PM Michal Strnad wrote:
>cluster:
> id: a12aa2d2-fae7-df35-ea2f-3de23100e345
> health: HEALTH_WARN
...
> pgs: 1656117639/32580808518 objects misplaced (5.083%)
That's why the space is eaten. The stuff that eats the disk space on
MONs is
We're happy to announce the 6th backport release in the Quincy series.
https://ceph.io/en/news/blog/2023/v17-2-6-quincy-released/
Notable Changes
---------------
* `ceph mgr dump` command now outputs `last_failure_osd_epoch` and
`active_clients` fields at the top level. Previously, these
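Since both fields are now top-level, they can be read directly from the dump, e.g. (jq usage is an assumption):

```shell
# Extract the newly top-level fields from the mgr dump
ceph mgr dump | jq '{last_failure_osd_epoch, active_clients}'
```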