[ceph-users] ceph-csi-cephfs - InvalidArgument desc = provided secret is empty

2023-08-03 Thread Shawn Weeks
I’m attempting to set up the CephFS CSI on K3s managed by Rancher against an external CephFS using the Helm chart. I’m using all default values on the Helm chart except for cephConf and secret. I’ve verified that the configmap ceph-config gets created with the values from Helm and I’ve verified
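[Editor's note: the "provided secret is empty" error usually means the Kubernetes Secret referenced by the StorageClass is missing the key names ceph-csi expects. A minimal sketch of one common setup, not necessarily what happened here — namespace, secret name, and credentials below are placeholders:

    # Hypothetical names; ceph-csi-cephfs generally expects adminID/adminKey
    # (or userID/userKey) entries in the referenced Secret.
    kubectl create secret generic csi-cephfs-secret -n ceph-csi-cephfs \
      --from-literal=adminID=admin \
      --from-literal=adminKey='AQB...replace-with-real-key...'

    # The StorageClass then references it via the standard CSI parameters:
    #   csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
    #   csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
    #   csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
    #   csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
]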

[ceph-users] Re: CephFS Kernel Mount Options Without Mount Helper

2023-02-28 Thread Shawn Weeks
Even the documentation at https://www.kernel.org/doc/html/v5.14/filesystems/ceph.html#mount-options is incomplete and doesn’t list options like “secret” and “mds_namespace”. Thanks Shawn > On Feb 28, 2023, at 11:03 AM, Shawn Weeks wrote: > > I’m trying to find documentation for wh
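[Editor's note: both options are parsed by the kernel client itself, so no mount.ceph helper is needed. A hedged example — the monitor address, user name, key, and filesystem name are placeholders:

    # name=, secret= and mds_namespace= go straight to the kernel client.
    # Newer kernels also accept fsname= as the preferred spelling of
    # mds_namespace=.
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=fsuser,secret=AQD...base64key...,mds_namespace=myfs
]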

[ceph-users] CephFS Kernel Mount Options Without Mount Helper

2023-02-28 Thread Shawn Weeks
I’m trying to find documentation for which mount options are supported directly by the kernel module. For example, in the kernel module included in Rocky Linux 8 and 9, the secretfile option isn’t supported even though the documentation seems to imply it is. It seems like the documentation
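[Editor's note: secretfile= is implemented by the mount.ceph helper rather than the kernel module, so when only the raw module is available the usual workaround is to extract the key yourself and pass it via secret=. A sketch with hypothetical paths and names:

    # Pull the CephX key out of the keyring and hand it to the kernel client.
    KEY=$(awk '/key =/ {print $3}' /etc/ceph/ceph.client.fsuser.keyring)
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=fsuser,secret=${KEY}

The key is briefly visible on the mount command line, so a keyring-based setup with mount.ceph is still preferable where the helper exists.]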

[ceph-users] Re: RadosGW - Performance Expectations

2023-02-10 Thread Shawn Weeks
With these options I still see around 38-40 MB/s for my 16 GB test file. So far my testing is mostly synthetic; I’m going to be using some programs like GitLab and Sonatype Nexus that store their data in object storage. At work I deal with real S3 and regularly see upload speeds in the 100s of MB/s

[ceph-users] Re: RadosGW - Performance Expectations

2023-02-10 Thread Shawn Weeks
To get another S3 upload baseline, I'd recommend doing some upload testing with s5cmd [1]. 1. https://github.com/peak/s5cmd Matt On Fri, Feb 10, 2023 at 9:38 AM Shawn Weeks wrote: Good morning everyone, been running a small Ceph cluster with P
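[Editor's note: for anyone following the suggestion, a minimal s5cmd run against an RGW endpoint looks roughly like this — endpoint, bucket, and file names are placeholders:

    # Credentials come from the usual AWS environment variables or config.
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...

    # s5cmd uploads large objects multipart with several workers, which
    # usually gives a better single-file baseline than a single-stream client.
    s5cmd --endpoint-url https://rgw.example.com cp ./16gb-test-file s3://test-bucket/
]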

[ceph-users] RadosGW - Performance Expectations

2023-02-10 Thread Shawn Weeks
Good morning everyone, been running a small Ceph cluster with Proxmox for a while now and I’ve finally run across an issue I can’t find any information on. I have a 3-node cluster with 9 Samsung PM983 960GB NVMe drives running on a dedicated 10Gb network. RBD and CephFS performance have been
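[Editor's note: a quick way to separate gateway overhead from raw cluster throughput on a setup like this is a short RADOS bench against the bucket data pool. Only a rough baseline, not a substitute for real S3 testing; the pool name below is the stock default and may differ:

    # 30-second 4 MB sequential write test with 16 concurrent ops, then
    # remove the benchmark objects.
    rados bench -p default.rgw.buckets.data 30 write -b 4M -t 16 --no-cleanup
    rados -p default.rgw.buckets.data cleanup
]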