[ceph-users] ceph-csi-cephfs - InvalidArgument desc = provided secret is empty

2023-08-03 Thread Shawn Weeks
I’m attempting to set up the CephFS CSI driver on K3s managed by Rancher against
an external CephFS using the Helm chart. I’m using all default values on the
Helm chart except for cephConf and secret. I’ve verified that the configmap
ceph-config gets created with the values from Helm, and that the secret
csi-cephfs-secret also gets created with the same values, as seen below. Any
attempt to create a PVC results in the following error. The only posts I’ve
found are about expansion, and I’m not trying to expand a CephFS volume, just
create one.

I0803 19:23:39.715036   1 event.go:298] 
Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"coder", 
Name:"test", UID:"9c7e51b6-0321-48e1-9950-444f786c14fb", APIVersion:"v1", 
ResourceVersion:"4523108", FieldPath:""}): type: 'Warning' reason: 
'ProvisioningFailed' failed to provision volume with StorageClass "cephfs": rpc 
error: code = InvalidArgument desc = provided secret is empty

cephConfConfigMapName: ceph-config
cephconf: |
  [global]
    fsid = 9b98ccd8-450e-4172-af70-512e4e77bc36
    mon_host = [v2:10.0.5.11:3300/0,v1:10.0.5.11:6789/0] [v2:10.0.5.12:3300/0,v1:10.0.5.12:6789/0] [v2:10.0.5.13:3300/0,v1:10.0.5.13:6789/0]
commonLabels: {}
configMapName: ceph-csi-config
csiConfig: null
driverName: cephfs.csi.ceph.com
externallyManagedConfigmap: false
kubeletDir: /var/lib/kubelet
logLevel: 5
nodeplugin:
  affinity: {}
  fusemountoptions: ''
  httpMetrics:
    containerPort: 8081
    enabled: true
    service:
      annotations: {}
      clusterIP: ''
      enabled: true
      externalIPs: null
      loadBalancerIP: ''
      loadBalancerSourceRanges: null
      servicePort: 8080
      type: ClusterIP
  imagePullSecrets: null
  kernelmountoptions: ''
  name: nodeplugin
  nodeSelector: {}
  plugin:
    image:
      pullPolicy: IfNotPresent
      repository: quay.io/cephcsi/cephcsi
      tag: v3.9.0
    resources: {}
  priorityClassName: system-node-critical
  profiling:
    enabled: false
  registrar:
    image:
      pullPolicy: IfNotPresent
      repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
      tag: v2.8.0
    resources: {}
  tolerations: null
  updateStrategy: RollingUpdate
pluginSocketFile: csi.sock
provisioner:
  affinity: {}
  enableHostNetwork: false
  httpMetrics:
    containerPort: 8081
    enabled: true
    service:
      annotations: {}
      clusterIP: ''
      enabled: true
      externalIPs: null
      loadBalancerIP: ''
      loadBalancerSourceRanges: null
      servicePort: 8080
      type: ClusterIP
  imagePullSecrets: null
  name: provisioner
  nodeSelector: {}
  priorityClassName: system-cluster-critical
  profiling:
    enabled: false
  provisioner:
    extraArgs: null
    image:
      pullPolicy: IfNotPresent
      repository: registry.k8s.io/sig-storage/csi-provisioner
      tag: v3.5.0
    resources: {}
  replicaCount: 3
  resizer:
    enabled: true
    extraArgs: null
    image:
      pullPolicy: IfNotPresent
      repository: registry.k8s.io/sig-storage/csi-resizer
      tag: v1.8.0
    name: resizer
    resources: {}
  setmetadata: true
  snapshotter:
    extraArgs: null
    image:
      pullPolicy: IfNotPresent
      repository: registry.k8s.io/sig-storage/csi-snapshotter
      tag: v6.2.2
    resources: {}
  strategy:
    rollingUpdate:
      maxUnavailable: 50%
    type: RollingUpdate
  timeout: 60s
  tolerations: null
provisionerSocketFile: csi-provisioner.sock
rbac:
  create: true
secret:
  adminID:
  adminKey:
  create: true
  name: csi-cephfs-secret
selinuxMount: true
serviceAccounts:
  nodeplugin:
    create: true
    name: null
  provisioner:
    create: true
    name: null
sidecarLogLevel: 1
storageClass:
  allowVolumeExpansion: true
  annotations: {}
  clusterID:
  controllerExpandSecret: csi-cephfs-secret
  controllerExpandSecretNamespace: ''
  create: false
  fsName: myfs
  fuseMountOptions: ''
  kernelMountOptions: ''
  mountOptions: null
  mounter: ''
  name: csi-cephfs-sc
  nodeStageSecret: csi-cephfs-secret
  nodeStageSecretNamespace: ''
  pool: ''
  provisionerSecret: csi-cephfs-secret
  provisionerSecretNamespace: ''
  reclaimPolicy: Delete
  volumeNamePrefix: ''
global:
  cattle:
    clusterId: c-m-xschvkd5
    clusterName: dev-cluster
    rkePathPrefix: ''
    rkeWindowsPathPrefix: ''
    systemProjectId: p-g6rqs
    url: https://rancher.example.com
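
As far as I can tell, ceph-csi returns “provided secret is empty” when the
CreateVolume call reaches the driver without usable credentials, which usually
means either the StorageClass (managed outside the chart here, since
storageClass.create is false) is missing the csi.storage.k8s.io/*-secret-name
and -namespace parameters, or the referenced secret has empty adminID/adminKey.
A StorageClass of the shape the external provisioner expects is sketched below;
the clusterID value and the ceph-csi-cephfs namespace are placeholders, and the
kubectl lines afterwards just confirm what actually got rendered.

# Hypothetical StorageClass; clusterID is a placeholder and must match an entry
# in the ceph-csi-config configmap (often the cluster fsid), and the secret
# namespace must be wherever the chart created csi-cephfs-secret.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: 9b98ccd8-450e-4172-af70-512e4e77bc36
  fsName: myfs
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF

# Check that the existing StorageClass carries the secret parameters and that
# the rendered secret has non-empty adminID/adminKey (namespace is a placeholder).
kubectl get sc cephfs -o yaml | grep secret
kubectl -n ceph-csi-cephfs get secret csi-cephfs-secret -o jsonpath='{.data.adminID}' | base64 -d; echo
kubectl -n ceph-csi-cephfs get secret csi-cephfs-secret -o jsonpath='{.data.adminKey}' | base64 -d; echo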
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CephFS Kernel Mount Options Without Mount Helper

2023-02-28 Thread Shawn Weeks
Even the documentation at
https://www.kernel.org/doc/html/v5.14/filesystems/ceph.html#mount-options is
incomplete and doesn’t list options like “secret” and “mds_namespace”.
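
For anyone else digging through this, a mount that talks straight to the kernel
module, with no mount.ceph helper involved, looks something like the line below.
Monitor addresses, user name, and key are placeholders; “secretfile” is handled
by the helper rather than the module, so the key has to go inline via “secret”,
and on newer kernels “fs=” can be used in place of “mds_namespace=”.

# Raw kernel mount without mount.ceph; options are passed straight to the module.
mount -t ceph 10.0.5.11:6789,10.0.5.12:6789,10.0.5.13:6789:/ /mnt/cephfs \
  -o name=myuser,secret=AQDplaceholderplaceholderplaceholder==,mds_namespace=myfs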

Thanks
Shawn

> On Feb 28, 2023, at 11:03 AM, Shawn Weeks  wrote:
> 
> I’m trying to find documentation for which mount options are supported
> directly by the kernel module. For example, in the kernel module included in
> Rocky Linux 8 and 9 the secretfile option isn’t supported, even though the
> documentation seems to imply it is. It seems like the documentation assumes
> you’ll always be using the mount.ceph helper, and I’m trying to find out what
> options are supported if you don’t have the mount.ceph helper.
> 
> Thanks
> Shawn

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] CephFS Kernel Mount Options Without Mount Helper

2023-02-28 Thread Shawn Weeks
I’m trying to find documentation for which mount options are supported directly
by the kernel module. For example, in the kernel module included in Rocky Linux
8 and 9 the secretfile option isn’t supported, even though the documentation
seems to imply it is. It seems like the documentation assumes you’ll always be
using the mount.ceph helper, and I’m trying to find out what options are
supported if you don’t have the mount.ceph helper.

Thanks
Shawn
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RadosGW - Performance Expectations

2023-02-10 Thread Shawn Weeks
With these options I still see around 38-40 MB/s for my 16 GB test file. So far
my testing is mostly synthetic; I’m going to be using programs like GitLab and
Sonatype Nexus that store their data in object storage. At work I deal with real
S3 and regularly see upload speeds in the hundreds of MB/s, so I was kind of
surprised that the AWS CLI was only doing 25 or so.
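
In case part of it is the client rather than the gateway, the AWS CLI has its
own multipart knobs, which default to 10 concurrent requests and 8 MB chunks;
something like the following is what I’d compare against the s3cmd numbers
(endpoint, bucket, file name, and sizes are just examples).

# Raise the AWS CLI transfer settings before re-testing.
aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.multipart_chunksize 64MB
aws configure set default.s3.multipart_threshold 64MB

# Upload straight at the RadosGW endpoint (host and port are placeholders).
aws --endpoint-url http://rgw-host:7480 s3 cp ./testfile-16g s3://testbucket/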

Thanks
Shawn

> On Feb 10, 2023, at 8:46 AM, Janne Johansson  wrote:
> 
>> The problem I’m seeing is after setting up RadosGW I can only upload to “S3”
>> at around 25 MB/s with the official AWS CLI. Using s3cmd is slightly better
>> at around 45 MB/s. I’m going directly to the RadosGW instance with no load
>> balancers in between and no SSL enabled. Just trying to figure out if this
>> is normal. I’m not expecting it to be as fast as writing directly to an RBD,
>> but I was kinda hoping for more than this.
>> 
>> So what should I expect in performance from the RadosGW?
> 
> For s3cmd, I have some perf options I use,
> 
> multipart_chunk_size_mb = 256
> send_chunk = 262144
> recv_chunk = 262144
> and frequently see 100-150 MB/s for well-connected client runs, especially if
> you repeat uploads and use s3cmd's --cache-file=FILE option so that you don't
> benchmark your local computer's ability to checksum the object(s).
> 
> But I would also consider using rclone and/or something that actually makes
> sure to split up large files/objects and upload them in parallel. We have
> hdd+nvme clusters on 25GbE networks that ingest some 1.5-2 GB/s using lots of
> threads and many clients, but the totals are in that vicinity. Several load
> balancers and some 6-9 rgws to share the load help there.
> 
> -- 
> May the most significant bit of your life be positive.

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RadosGW - Performance Expectations

2023-02-10 Thread Shawn Weeks
With s5cmd and its defaults I got around 127 MB/s for a single 16 GB test file.
Is there any way to make s5cmd give feedback while it’s running? At first I
didn’t think it was working because it just sat there for a while.
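
For anyone else trying it, this is roughly the shape of the invocation; endpoint
and bucket are placeholders, and credentials come from the usual AWS_* variables
or ~/.aws. The global --stat flag at least prints a summary when the run
finishes (newer releases also have a --show-progress flag, I believe), and the
per-copy concurrency and part size are the knobs that tend to move the needle.

# Copy a single large file to RadosGW with more parallelism than the defaults.
s5cmd --stat --endpoint-url http://rgw-host:7480 \
  cp --concurrency 16 --part-size 64 ./testfile-16g s3://testbucket/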

Thanks
Shawn

On Feb 10, 2023, at 8:45 AM, Matt Benjamin  wrote:

Hi Shawn,

To get another S3 upload baseline, I'd recommend doing some upload testing with 
s5cmd [1].

1. https://github.com/peak/s5cmd

Matt


On Fri, Feb 10, 2023 at 9:38 AM Shawn Weeks <swe...@weeksconsulting.us> wrote:
Good morning everyone, I’ve been running a small Ceph cluster with Proxmox for a
while now and I’ve finally run across an issue I can’t find any information on.
I have a 3-node cluster with 9 Samsung PM983 960GB NVMe drives running on a
dedicated 10Gb network. RBD and CephFS performance have been great; most of the
time I see over 500 MB/s writes, and a rados benchmark shows 951 MB/s write and
1140 MB/s read bandwidth.

The problem I’m seeing is that after setting up RadosGW I can only upload to
“S3” at around 25 MB/s with the official AWS CLI. Using s3cmd is slightly better
at around 45 MB/s. I’m going directly to the RadosGW instance with no load
balancers in between and no SSL enabled. Just trying to figure out if this is
normal. I’m not expecting it to be as fast as writing directly to an RBD, but I
was kinda hoping for more than this.

So what should I expect in performance from the RadosGW?

Here are some rados bench results and my ceph report

https://gist.github.com/shawnweeks/f6ef028284b5cdb10d80b8dc0654eec5

https://gist.github.com/shawnweeks/7cfe94c08adbc24f2a3d8077688df438

Thanks
Shawn


--

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] RadosGW - Performance Expectations

2023-02-10 Thread Shawn Weeks
Good morning everyone, I’ve been running a small Ceph cluster with Proxmox for a
while now and I’ve finally run across an issue I can’t find any information on.
I have a 3-node cluster with 9 Samsung PM983 960GB NVMe drives running on a
dedicated 10Gb network. RBD and CephFS performance have been great; most of the
time I see over 500 MB/s writes, and a rados benchmark shows 951 MB/s write and
1140 MB/s read bandwidth.
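
The rados numbers come from the usual bench runs against a scratch pool, roughly
of the shape below (pool name and duration are placeholders; the full output is
in the gists further down).

# Sequential write, then read, against a scratch pool; --no-cleanup keeps the
# objects around so the read pass has something to fetch.
rados -p testbench bench 60 write --no-cleanup
rados -p testbench bench 60 seq
rados -p testbench cleanup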

The problem I’m seeing is that after setting up RadosGW I can only upload to
“S3” at around 25 MB/s with the official AWS CLI. Using s3cmd is slightly better
at around 45 MB/s. I’m going directly to the RadosGW instance with no load
balancers in between and no SSL enabled. Just trying to figure out if this is
normal. I’m not expecting it to be as fast as writing directly to an RBD, but I
was kinda hoping for more than this.

So what should I expect in performance from the RadosGW?

Here are some rados bench results and my ceph report

https://gist.github.com/shawnweeks/f6ef028284b5cdb10d80b8dc0654eec5

https://gist.github.com/shawnweeks/7cfe94c08adbc24f2a3d8077688df438

Thanks
Shawn
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io