I'm not using Ceph Ganesha but GPFS Ganesha, so YMMV
> ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt \
>     --fsname vol1
>
> then the NFS mount:
> mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
>
> - Although I can mount the export, I can't write to it
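In case it helps (I'm on GPFS Ganesha, so treat this as a sketch): when an
export mounts fine but rejects writes, the export's access type or squash
setting is the usual suspect. With the Ceph NFS commands that would look
roughly like this (cluster-id and pseudo-path taken from your commands
above; the JSON field names follow the standard export schema):

    # inspect the export as Ganesha sees it
    ceph nfs export info nfs-cephfs /mnt

    # dump it, set "access_type": "RW" (and e.g. "squash": "none"),
    # then re-apply:
    ceph nfs export info nfs-cephfs /mnt > export.json
    ceph nfs export apply nfs-cephfs -i export.json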
With cephadm you can set these values cluster-wide.
See the host-management section of the docs.
https://docs.ceph.com/en/reef/cephadm/host-management/#os-tuning-profiles
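For reference, a tuned-profile spec applied via cephadm looks roughly like
this (a sketch following the docs linked above; the profile name, placement,
and sysctl values are placeholders):

    # tuned-profile.yaml
    profile_name: example-tuning
    placement:
      host_pattern: '*'
    settings:
      fs.file-max: "1000000"
      vm.swappiness: "10"

Apply it with:

    ceph orch tuned-profile apply -i tuned-profile.yaml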
On Fri, 19 Apr 2024 at 12:40, Konstantin Shalygin wrote:
> Hi,
>
> > On 19 Apr 2024, at 10:39, Pardhiv Karri wrote:
Hey Cephers,
Hope you're all doing well! I'm in a bit of a pickle and could really use
some of your expertise.
Here's the scoop:
I have a setup with around 10 HDDs and 2 NVMe drives (plus uninteresting
boot disks).
My initial goal was to configure part of each HDD (6 out of 7 TB) into an
md0 or similar.
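The usual Ceph-native alternative to carving the disks into md devices is
an OSD service spec that puts data on the HDDs and DB/WAL on the NVMes,
roughly like this (a sketch; the service id and placement are placeholders):

    # osd-spec.yaml
    service_type: osd
    service_id: hdd-data-nvme-db
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0

Apply with:

    ceph orch apply -i osd-spec.yaml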
I need to use v17.2.6 until the fix comes out for Quincy in v17.2.8.
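If this is the Rook deployment discussed below, one way to hold that version
is pinning the Ceph image in the CephCluster spec (a sketch; the name and
namespace are the Rook defaults, adjust to your cluster):

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        # pin until the Quincy fix ships in v17.2.8
        image: quay.io/ceph/ceph:v17.2.6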
>
> Travis
>
> On Thu, Nov 23, 2023 at 4:06 PM P Wagner-Beccard <
> wagner-kerschbau...@schaffroth.eu> wrote:
>
>> Hi Mailing-Listers,
>>
>> I am reaching out for assistance regarding a deployment issue [...]
Hi Mailing-Listers,
I am reaching out for assistance regarding a deployment issue I am facing
with Ceph on a 4-node RKE2 cluster. We are attempting to deploy Ceph via
the Rook helm chart, but we are encountering an issue that appears to be
related to a known bug.
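For reference, the standard Rook operator install via helm looks roughly
like this (a sketch per the Rook chart docs; release name and namespace are
the chart defaults):

    helm repo add rook-release https://charts.rook.io/release
    helm install --create-namespace --namespace rook-ceph \
        rook-ceph rook-release/rook-ceph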