I’m attempting to set up the CephFS CSI driver on a K3s cluster managed by Rancher, against an external CephFS, using the Helm chart. I’m using all default values on the Helm chart except for cephconf and the secret. I’ve verified that the ConfigMap ceph-config gets created with the values from Helm, and that the secret csi-cephfs-secret also gets created with the same values, as seen below. Any attempt to create a PVC results in the following error. The only posts I’ve found on this are about volume expansion, and I am not trying to expand a CephFS volume, just create one.

I0803 19:23:39.715036       1 event.go:298] 
Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"coder", 
Name:"test", UID:"9c7e51b6-0321-48e1-9950-444f786c14fb", APIVersion:"v1", 
ResourceVersion:"4523108", FieldPath:""}): type: 'Warning' reason: 
'ProvisioningFailed' failed to provision volume with StorageClass "cephfs": rpc 
error: code = InvalidArgument desc = provided secret is empty
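The PVC itself is minimal, roughly the following (name, namespace, and storage class are taken from the event above; the access mode and size are just illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
  namespace: coder
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs

My Helm values follow.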

cephConfConfigMapName: ceph-config
cephconf: |
  [global]
    fsid = 9b98ccd8-450e-4172-af70-512e4e77bc36
    mon_host = [v2:10.0.5.11:3300/0,v1:10.0.5.11:6789/0] [v2:10.0.5.12:3300/0,v1:10.0.5.12:6789/0] [v2:10.0.5.13:3300/0,v1:10.0.5.13:6789/0]
commonLabels: {}
configMapName: ceph-csi-config
csiConfig: null
driverName: cephfs.csi.ceph.com
externallyManagedConfigmap: false
kubeletDir: /var/lib/kubelet
logLevel: 5
nodeplugin:
  affinity: {}
  fusemountoptions: ''
  httpMetrics:
    containerPort: 8081
    enabled: true
    service:
      annotations: {}
      clusterIP: ''
      enabled: true
      externalIPs: null
      loadBalancerIP: ''
      loadBalancerSourceRanges: null
      servicePort: 8080
      type: ClusterIP
  imagePullSecrets: null
  kernelmountoptions: ''
  name: nodeplugin
  nodeSelector: {}
  plugin:
    image:
      pullPolicy: IfNotPresent
      repository: quay.io/cephcsi/cephcsi
      tag: v3.9.0
    resources: {}
  priorityClassName: system-node-critical
  profiling:
    enabled: false
  registrar:
    image:
      pullPolicy: IfNotPresent
      repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
      tag: v2.8.0
    resources: {}
  tolerations: null
  updateStrategy: RollingUpdate
pluginSocketFile: csi.sock
provisioner:
  affinity: {}
  enableHostNetwork: false
  httpMetrics:
    containerPort: 8081
    enabled: true
    service:
      annotations: {}
      clusterIP: ''
      enabled: true
      externalIPs: null
      loadBalancerIP: ''
      loadBalancerSourceRanges: null
      servicePort: 8080
      type: ClusterIP
  imagePullSecrets: null
  name: provisioner
  nodeSelector: {}
  priorityClassName: system-cluster-critical
  profiling:
    enabled: false
  provisioner:
    extraArgs: null
    image:
      pullPolicy: IfNotPresent
      repository: registry.k8s.io/sig-storage/csi-provisioner
      tag: v3.5.0
    resources: {}
  replicaCount: 3
  resizer:
    enabled: true
    extraArgs: null
    image:
      pullPolicy: IfNotPresent
      repository: registry.k8s.io/sig-storage/csi-resizer
      tag: v1.8.0
    name: resizer
    resources: {}
  setmetadata: true
  snapshotter:
    extraArgs: null
    image:
      pullPolicy: IfNotPresent
      repository: registry.k8s.io/sig-storage/csi-snapshotter
      tag: v6.2.2
    resources: {}
  strategy:
    rollingUpdate:
      maxUnavailable: 50%
    type: RollingUpdate
  timeout: 60s
  tolerations: null
provisionerSocketFile: csi-provisioner.sock
rbac:
  create: true
secret:
  adminID: <my keyring is for client.home so I put home here>
  adminKey: <exact value from my keyring here>
  create: true
  name: csi-cephfs-secret
selinuxMount: true
serviceAccounts:
  nodeplugin:
    create: true
    name: null
  provisioner:
    create: true
    name: null
sidecarLogLevel: 1
storageClass:
  allowVolumeExpansion: true
  annotations: {}
  clusterID: <cluster-ID>
  controllerExpandSecret: csi-cephfs-secret
  controllerExpandSecretNamespace: ''
  create: false
  fsName: myfs
  fuseMountOptions: ''
  kernelMountOptions: ''
  mountOptions: null
  mounter: ''
  name: csi-cephfs-sc
  nodeStageSecret: csi-cephfs-secret
  nodeStageSecretNamespace: ''
  pool: ''
  provisionerSecret: csi-cephfs-secret
  provisionerSecretNamespace: ''
  reclaimPolicy: Delete
  volumeNamePrefix: ''
global:
  cattle:
    clusterId: c-m-xschvkd5
    clusterName: dev-cluster
    rkePathPrefix: ''
    rkeWindowsPathPrefix: ''
    systemProjectId: p-g6rqs
    url: https://rancher.example.com
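
Since storageClass.create is false in the values above, the StorageClass "cephfs" that the PVC references is created separately. It is roughly the following sketch, modelled on the example in the ceph-csi repo (the secret name matches the chart's csi-cephfs-secret; <cluster-ID> and <namespace> are placeholders for my cluster ID and for the namespace the chart is installed in):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-ID>
  fsName: myfs
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: <namespace>
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: <namespace>
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: <namespace>
reclaimPolicy: Delete
allowVolumeExpansion: true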