I'm not using Ceph Ganesha but GPFS Ganesha, so YMMV
> ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt --fsname vol1
> --> nfs mount
> mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
>
> - Although I can mount the export, I can't write to it
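Not sure how much of this carries over from the GPFS side, but for an export created through the mgr NFS module as above, a quick sanity check is whether the export spec actually allows writes (cluster-id and pseudo-path taken from the create command; on older releases "info" may be called "get"):
# dump the export spec and look for "access_type": "RW" and the "squash" setting
ceph nfs export info nfs-cephfs /mnt
# or list all exports of that NFS cluster with their full specs
ceph nfs export ls nfs-cephfs --detailed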
Colleagues, I have an update.
Since yesterday the ceph health situation has been much worse than it was
previously.
We found that:
- ceph -s reports that some PGs are in a stale state
- almost all diagnostic ceph subcommands hang! For example, "ceph osd ls",
"ceph osd dump", "ceph
First of all, thanks to all!
As Robert Sander supposed, I get "permission denied", but even when writing
with root privileges I get the same error.
As soon as I can I'll test your suggestions and update the thread.
Thanks again
On 4/24/24 16:05, Adam King wrote:
- Although I can mount
No, we didn't change much, just increased the max PGs per OSD to avoid
warnings and inactive PGs in case a node failed during this process. And
the max backfills, of course.
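For reference (example values only, not necessarily the ones used here), those two knobs are plain config options:
# allow more PGs per OSD before warnings / inactive PGs kick in
ceph config set global mon_max_pg_per_osd 500
# allow more concurrent backfills per OSD while the split runs
ceph config set osd osd_max_backfills 4
# note: with the mclock scheduler (Quincy and later) the backfill value is only
# honored if osd_mclock_override_recovery_settings is set to true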
Quoting Frédéric Nass:
Hello Eugen,
Thanks for sharing the good news. Did you have to raise mon_osd_nearfull_ratio
temporarily?
Frédéric.
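For the archives: if it ever does become necessary, the nearfull threshold can be raised temporarily and reset afterwards, for example:
# default is 0.85; raise it for the duration of the data movement
ceph osd set-nearfull-ratio 0.90
# and set it back once backfill has settled
ceph osd set-nearfull-ratio 0.85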
- On 25 Apr 24, at 12:35, Eugen Block ebl...@nde.ag wrote:
For those interested, just a short update: the split process is
approaching its end; two days ago there were around 230 PGs left
(the target is 4096 PGs). So far there have been no complaints and no
cluster impact was reported (the cluster load is quite moderate, but still
sensitive). Every now and
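In case someone wants to follow such a split on their own cluster, the remaining work can be read from the pool details (the pool name below is a placeholder):
# pg_num climbs step by step toward pg_num_target while the mgr splits PGs
ceph osd pool ls detail | grep <pool>
ceph osd pool get <pool> pg_num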
Hi,
On 4/24/24 09:39, Roberto Maggi @ Debian wrote:
ceph orch host add cephstage01 10.20.20.81 --labels _admin,mon,mgr,prometheus,grafana
ceph orch host add cephstage02 10.20.20.82 --labels _admin,mon,mgr,prometheus,grafana
ceph orch host add cephstage03 10.20.20.83 --labels
Hi,
We're testing rbd-mirror (snapshot mode) and trying to get status
updates about snapshots as fast as possible. We want to use rbd-mirror as
a migration tool between two clusters and keep downtime during migration
as short as possible. Therefore we have tuned the following parameters
and
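The actual parameter list is cut off above; purely as an illustration of the snapshot-mode knobs involved, a tighter snapshot schedule plus status polling would look roughly like this (pool/image names are placeholders):
# take mirror snapshots every five minutes instead of relying on a coarser schedule
rbd mirror snapshot schedule add --pool <pool> --image <image> 5m
# check replication state and the last synced snapshot
rbd mirror image status <pool>/<image>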
Hi,
I saw something like this a couple of weeks ago on a customer cluster.
I'm not entirely sure, but this was either due to (yet) missing or
wrong cephadm ssh config or a label/client-keyring management issue.
If this is still an issue I would recommend checking the configured
keys to be
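A sketch of the pieces worth comparing in that situation, assuming a cephadm-managed cluster:
# the SSH config and public key cephadm actually uses to reach the hosts
ceph cephadm get-ssh-config
ceph cephadm get-pub-key
# which client keyrings the orchestrator manages and which label/host they target
ceph orch client-keyring ls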