[ceph-users] Re: CephFS keyrings for K8s

2022-01-25 Thread Frédéric Nass


On 25/01/2022 at 12:09, Frédéric Nass wrote:


Hello Michal,

With cephfs and a single filesystem shared across multiple k8s 
clusters, you should use subvolume groups to limit data exposure. 
You'll find an example of how to use subvolume groups in the 
ceph-csi-cephfs helm chart [1]. Essentially you just have to set the 
subvolumeGroup to whatever you like and then create the associated 
cephfs keyring with the following caps:


ceph auth get-or-create client.cephfs.k8s-cluster-1.admin mon "allow 
r" osd "allow rw tag cephfs *=*" mds "allow rw 
path=/volumes/csi-k8s-cluster-1" mgr "allow rw" -o 
/etc/ceph/client.cephfs.k8s-cluster-1.admin.keyring


    caps: [mds] allow rw path=/volumes/csi-k8s-cluster-1
    caps: [mgr] allow rw
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs *=*

The subvolume group will be created by ceph-csi-cephfs if I remember 
correctly but you can also take care of this on the ceph side with 
'ceph fs subvolumegroup create cephfs csi-k8s-cluster-1'.
PVs will then be created as subvolumes in this subvolumegroup. To list 
them, use 'ceph fs subvolume ls cephfs --group_name=csi-k8s-cluster-1'.
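
For reference, a minimal sketch of what the corresponding csiConfig 
entry in the ceph-csi-cephfs helm values could look like, assuming the 
cephFS.subvolumeGroup field shown in [1]; the cluster ID and monitor 
addresses are placeholders:

csiConfig:
  - clusterID: "<cluster-id>"
    monitors:
      - "<mon-1-addr>:6789"
      - "<mon-2-addr>:6789"
    cephFS:
      # PVs will land under /volumes/csi-k8s-cluster-1
      subvolumeGroup: "csi-k8s-cluster-1"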


To achieve the same goal with RBD images, you should use rados 
namespaces. The current helm chart [2] seems to lack information about 
the radosNamespace setting, but it does work, provided you set it as 
below:


csiConfig:
  - clusterID: "<cluster-id>"
    monitors:
      - "<mon-1-addr>"
      - "<mon-2-addr>"
    radosNamespace: "k8s-cluster-1"
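
As a side note, the rados namespace itself can be created and listed 
from the ceph side with the rbd CLI (Nautilus or later). A quick 
sketch, with <pool> as a placeholder:

rbd namespace create --pool <pool> --namespace k8s-cluster-1
rbd namespace ls --pool <pool>
rbd ls --pool <pool> --namespace k8s-cluster-1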

ceph auth get-or-create client.rbd.name.admin mon "profile rbd" osd 
"allow rwx pool <pool> object_prefix rbd_info, allow rwx pool 
<pool> namespace k8s-cluster-1" mgr "profile rbd 
pool=<pool> namespace=k8s-cluster-1" -o 
/etc/ceph/client.rbd.name.admin.keyring


    caps: [mon] profile rbd
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx 
pool=<pool> namespace=k8s-cluster-1


Sorry, the admin caps should read:

    caps: [mgr] profile rbd pool=<pool> namespace=k8s-cluster-1
    caps: [mon] profile rbd
    caps: [osd] allow rwx pool <pool> object_prefix rbd_info, 
allow rwx pool <pool> namespace k8s-cluster-1


Regards,

Frédéric.



ceph auth get-or-create client.rbd.name.user mon "profile rbd" osd 
"allow class-read object_prefix rbd_children, allow rwx 
pool=<pool> namespace=k8s-cluster-1" -o 
/etc/ceph/client.rbd.name.user.keyring


    caps: [mon] profile rbd
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx 
pool=<pool> namespace=k8s-cluster-1


Capabilities required for ceph-csi-cephfs and ceph-csi-rbd are 
described here [3].
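
On the Kubernetes side, these IDs and keys are typically handed to 
ceph-csi through a secret. A rough sketch for the RBD user key created 
above, assuming the usual secret format from the ceph-csi examples 
(name, namespace and key value are placeholders; the cephfs secret is 
analogous, using adminID/adminKey and userID/userKey):

apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  # client ID without the "client." prefix
  userID: rbd.name.user
  userKey: "<key of client.rbd.name.user>"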


This should get you started. Let me know if you see any clever/safer 
caps to use.


Regards,

Frédéric.

[1] 
https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-cephfs/values.yaml#L20
[2] 
https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-rbd/values.yaml#L20

[3] https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CephFS keyrings for K8s

2022-01-25 Thread Frédéric Nass

Hello Michal,

With cephfs and a single filesystem shared across multiple k8s clusters, 
you should use subvolume groups to limit data exposure. You'll find an 
example of how to use subvolume groups in the ceph-csi-cephfs helm chart 
[1]. Essentially you just have to set the subvolumeGroup to whatever you 
like and then create the associated cephfs keyring with the following caps:


ceph auth get-or-create client.cephfs.k8s-cluster-1.admin mon "allow r" 
osd "allow rw tag cephfs *=*" mds "allow rw 
path=/volumes/csi-k8s-cluster-1" mgr "allow rw" -o 
/etc/ceph/client.cephfs.k8s-cluster-1.admin.keyring


    caps: [mds] allow rw path=/volumes/csi-k8s-cluster-1
    caps: [mgr] allow rw
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs *=*

The subvolume group will be created by ceph-csi-cephfs if I remember 
correctly but you can also take care of this on the ceph side with 'ceph 
fs subvolumegroup create cephfs csi-k8s-cluster-1'.
PVs will then be created as subvolumes in this subvolumegroup. To list 
them, use 'ceph fs subvolume ls cephfs --group_name=csi-k8s-cluster-1'.


To achieve the same goal with RBD images, you should use rados 
namespaces. The current helm chart [2] seems to lack information about 
the radosNamespace setting, but it does work, provided you set it as 
below:


csiConfig:
  - clusterID: "<cluster-id>"
    monitors:
      - "<mon-1-addr>"
      - "<mon-2-addr>"
    radosNamespace: "k8s-cluster-1"

ceph auth get-or-create client.rbd.name.admin mon "profile rbd" osd 
"allow rwx pool <pool> object_prefix rbd_info, allow rwx pool 
<pool> namespace k8s-cluster-1" mgr "profile rbd 
pool=<pool> namespace=k8s-cluster-1" -o 
/etc/ceph/client.rbd.name.admin.keyring


    caps: [mon] profile rbd
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx 
pool=<pool> namespace=k8s-cluster-1


ceph auth get-or-create client.rbd.name.user mon "profile rbd" osd 
"allow class-read object_prefix rbd_children, allow rwx 
pool=<pool> namespace=k8s-cluster-1" -o 
/etc/ceph/client.rbd.name.user.keyring


    caps: [mon] profile rbd
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx 
pool=<pool> namespace=k8s-cluster-1


Capabilities required for ceph-csi-cephfs and ceph-csi-rbd are described 
here [3].


This should get you started. Let me know if you see any clever/safer 
caps to use.


Regards,

Frédéric.

[1] 
https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-cephfs/values.yaml#L20
[2] 
https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-rbd/values.yaml#L20

[3] https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md

--
Best regards,

Frédéric Nass
Direction du Numérique
Sous-direction Infrastructures et Services

Tel: 03.72.74.11.35

On 20/01/2022 at 09:26, Michal Strnad wrote:

Hi,

We are using CephFS in our Kubernetes clusters and now we are trying 
to optimize the permissions/caps in our keyrings. Every guide we found 
contains something like: create the file system by specifying the 
desired settings for the metadata pool, data pool and an admin keyring 
with access to the entire file system ... Is there a better way, where 
we don't need an admin key but only a restricted key? What are you 
using in your environments?


Multiple file systems aren't an option for us.

Thanks for your help

Regards,
Michal Strnad



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CephFS keyrings for K8s

2022-01-20 Thread Burkhard Linke

Hi,

On 1/20/22 9:26 AM, Michal Strnad wrote:

Hi,

We are using CephFS in our Kubernetes clusters and now we are trying 
to optimize the permissions/caps in our keyrings. Every guide we found 
contains something like: create the file system by specifying the 
desired settings for the metadata pool, data pool and an admin keyring 
with access to the entire file system ... Is there a better way, where 
we don't need an admin key but only a restricted key? What are you 
using in your environments?


The 'ceph fs authorize' CLI command can generate keys suitable for your 
use case. You can restrict the access scope to subdirectories etc.
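
For example, something along these lines should produce a key confined 
to a single directory tree; the filesystem name, client ID and path 
below are placeholders:

ceph fs authorize <fs_name> client.k8s-cluster-1 /volumes/csi-k8s-cluster-1 rw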



See https://docs.ceph.com/en/pacific/cephfs/client-auth/  (or the pages 
for your current release).



We use the CSI cephfs plugin in our main k8s cluster, and it is working 
fine with those keys.



Regards,

Burkhard Linke


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CephFS keyrings for K8s

2022-01-20 Thread Michal Strnad

Addendum: we are using Nautilus on the Ceph side.

Michal Strnad


On 1/20/22 9:26 AM, Michal Strnad wrote:

Hi,

We are using CephFS in our Kubernetes clusters and now we are trying to 
optimize the permissions/caps in our keyrings. Every guide we found 
contains something like: create the file system by specifying the 
desired settings for the metadata pool, data pool and an admin keyring 
with access to the entire file system ... Is there a better way, where 
we don't need an admin key but only a restricted key? What are you 
using in your environments?


Multiple file systems aren't an option for us.

Thanks for your help

Regards,
Michal Strnad


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io