[ceph-users] Re: CephFS: Isolating folders for different users

2023-01-02 Thread Venky Shankar
Hi Jonas,

On Mon, Jan 2, 2023 at 10:52 PM Jonas Schwab
 wrote:
>
> Thank you very much! Works like a charm, except for one thing: I gave my
> clients the MDS caps 'allow rws path=<path>' to also be able
> to create snapshots from the client, but `mkdir .snap/test` still returns
>  mkdir: cannot create directory ‘.snap/test’: Operation not permitted
>
> Do you have an idea what might be the issue here?

If you are using cephfs subvolumes, it's a good idea to take snapshots via

ceph fs subvolume snapshot create ...

since there is some subvolume jugglery done which might deny taking
snapshots at arbitrary levels.
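A minimal sketch of the workflow Venky describes; the volume, subvolume, and snapshot names below are invented placeholders, not values from this thread:

```shell
# Sketch: snapshot a subvolume through the mgr "volumes" interface rather
# than running `mkdir .snap/...` inside the mount. Names are placeholders.
VOL=cephfs
SUBVOL=subvol_a
SNAP=snap-2023-01-02

# On a live cluster you would run this command directly; here we only
# assemble and print it, since no cluster is assumed.
CMD="ceph fs subvolume snapshot create $VOL $SUBVOL $SNAP"
echo "$CMD"
```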

>
> Best regards,
> Jonas
>
> PS: A happy new year to everyone!
>
> On 23.12.22 10:05, Kai Stian Olstad wrote:
> > On 22.12.2022 15:47, Jonas Schwab wrote:
> >> Now the question: Since I established this setup more or less through
> >> trial and error, I was wondering if there is a more elegant/better
> >> approach than what is outlined above?
> >
> > You can use namespace so you don't need separate pools.
> > Unfortunately the documentation is sparse on the subject, I use it
> > with subvolume like this
> >
> >
> > # Create a subvolume
> >
> > ceph fs subvolume create <volume> <subvolume>
> > --pool_layout <pool name> --namespace-isolated
> >
> > The subvolume is created with the namespace fsvolumens_<subvolume>
> > You can also find the name with
> >
> > ceph fs subvolume info <volume> <subvolume> | jq -r
> > .pool_namespace
> >
> >
> > # Create a user with access to the subvolume and the namespace
> >
> > ## First find the path to the subvolume
> >
> > ceph fs subvolume getpath <volume> <subvolume>
> >
> > ## Create the user
> >
> > ceph auth get-or-create client.<name> mon 'allow r' mds 'allow
> > rw path=<path>' osd 'allow rw pool=<pool name> namespace=fsvolumens_<subvolume>'
> >
> >
> > I have found this by looking at how Openstack does it and some trial
> > and error.
> >
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io



-- 
Cheers,
Venky



[ceph-users] Re: CephFS: Isolating folders for different users

2023-01-02 Thread Robert Gallop
One side effect of using subvolumes is that you can then only take a snap
at the subvolume level, nothing further down the tree.

I find you can use the same path in the auth caps without the subvolume,
unless I'm missing something in this thread.

On Mon, Jan 2, 2023 at 10:21 AM Jonas Schwab <
jonas.sch...@physik.uni-wuerzburg.de> wrote:

> Thank you very much! Works like a charm, except for one thing: I gave my
> clients the MDS caps 'allow rws path=<path>' to also be able
> to create snapshots from the client, but `mkdir .snap/test` still returns
>  mkdir: cannot create directory ‘.snap/test’: Operation not permitted
>
> Do you have an idea what might be the issue here?
>
> Best regards,
> Jonas
>
> PS: A happy new year to everyone!
>
> On 23.12.22 10:05, Kai Stian Olstad wrote:
> > On 22.12.2022 15:47, Jonas Schwab wrote:
> >> Now the question: Since I established this setup more or less through
> >> trial and error, I was wondering if there is a more elegant/better
> >> approach than what is outlined above?
> >
> > You can use namespace so you don't need separate pools.
> > Unfortunately the documentation is sparse on the subject, I use it
> > with subvolume like this
> >
> >
> > # Create a subvolume
> >
> > ceph fs subvolume create <volume> <subvolume>
> > --pool_layout <pool name> --namespace-isolated
> >
> > The subvolume is created with the namespace fsvolumens_<subvolume>
> > You can also find the name with
> >
> > ceph fs subvolume info <volume> <subvolume> | jq -r
> > .pool_namespace
> >
> >
> > # Create a user with access to the subvolume and the namespace
> >
> > ## First find the path to the subvolume
> >
> > ceph fs subvolume getpath <volume> <subvolume>
> >
> > ## Create the user
> >
> > ceph auth get-or-create client.<name> mon 'allow r' mds 'allow
> > rw path=<path>' osd 'allow rw pool=<pool name> namespace=fsvolumens_<subvolume>'
> >
> >
> > I have found this by looking at how Openstack does it and some trial
> > and error.
> >
> >
>


[ceph-users] Re: CephFS: Isolating folders for different users

2023-01-02 Thread Jonas Schwab
Thank you very much! Works like a charm, except for one thing: I gave my 
clients the MDS caps 'allow rws path=<path>' to also be able
to create snapshots from the client, but `mkdir .snap/test` still returns

    mkdir: cannot create directory ‘.snap/test’: Operation not permitted

Do you have an idea what might be the issue here?

Best regards,
Jonas

PS: A happy new year to everyone!

On 23.12.22 10:05, Kai Stian Olstad wrote:

On 22.12.2022 15:47, Jonas Schwab wrote:

Now the question: Since I established this setup more or less through
trial and error, I was wondering if there is a more elegant/better
approach than what is outlined above?


You can use namespace so you don't need separate pools.
Unfortunately the documentation is sparse on the subject, I use it 
with subvolume like this



# Create a subvolume

    ceph fs subvolume create <volume> <subvolume>
    --pool_layout <pool name> --namespace-isolated


The subvolume is created with the namespace fsvolumens_<subvolume>
You can also find the name with

    ceph fs subvolume info <volume> <subvolume> | jq -r
    .pool_namespace



# Create a user with access to the subvolume and the namespace

## First find the path to the subvolume

    ceph fs subvolume getpath <volume> <subvolume>

## Create the user

    ceph auth get-or-create client.<name> mon 'allow r' mds 'allow
    rw path=<path>' osd 'allow rw pool=<pool name> namespace=fsvolumens_<subvolume>'



I have found this by looking at how Openstack does it and some trial 
and error.






[ceph-users] Re: CephFS: Isolating folders for different users

2022-12-24 Thread Milind Changire
You could try creating Subvolumes as well:
https://docs.ceph.com/en/latest/cephfs/fs-volumes/
As usual, ceph caps and data layout semantics apply to Subvolumes as well.


On Thu, Dec 22, 2022 at 8:19 PM Jonas Schwab <
jonas.sch...@physik.uni-wuerzburg.de> wrote:

> Hello everyone,
>
> I would like to setup my CephFS with different directories exclusively
> accessible by corresponding clients. By this, I mean e.g. /dir_a only
> accessible by client.a and /dir_b only by client.b.
>
>  From the documentation I gathered, having client caps like
>
> client.a
>  key: 
>  caps: [mds] allow rw fsname=cephfs path=/dir_a
>  caps: [mon] allow r fsname=cephfs
>  caps: [osd] allow rw tag cephfs data=cephfs
>
> client.b
>  key: 
>  caps: [mds] allow rw fsname=cephfs path=/dir_b
>  caps: [mon] allow r fsname=cephfs
>  caps: [osd] allow rw tag cephfs data=cephfs
>
> is not enough, since it does only restrict the clients' access to the
> metadata pool. So to restrict access to the data, I create pools for
> each of the directories, e.g. cephfs_a_data and cephfs_b_data. To make
> the data end up on the right pool, I set attributes through cephfs-shell:
>
> setxattr /dir_a ceph.dir.layout.pool cephfs_a_data
> setxattr /dir_b ceph.dir.layout.pool cephfs_b_data
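For reference, the same layout change can be made with setfattr on a regular kernel or FUSE mount instead of cephfs-shell; the mount point below is an assumption, and the commands are only printed, not executed:

```shell
# Equivalent of the cephfs-shell setxattr calls, using setfattr against a
# mounted filesystem. /mnt/cephfs is a placeholder mount point.
MNT=/mnt/cephfs
CMD_A="setfattr -n ceph.dir.layout.pool -v cephfs_a_data $MNT/dir_a"
CMD_B="setfattr -n ceph.dir.layout.pool -v cephfs_b_data $MNT/dir_b"
echo "$CMD_A"
echo "$CMD_B"
```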
>
> Through trial and error, I found out the following client caps work with
> this setup:
>
> client.a
>  key: 
>  caps: [mds] allow rw fsname=cephfs path=/dir_a
>  caps: [mon] allow r fsname=cephfs
>  caps: [osd] allow rwx pool=cephfs_a_data
>
> client.b
>  key: 
>  caps: [mds] allow rw fsname=cephfs path=/dir_b
>  caps: [mon] allow r fsname=cephfs
>  caps: [osd] allow rwx pool=cephfs_b_data
>
> With only rw on osds, I was not able to write in the mounted dirs.
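The per-directory caps above follow a fixed pattern, so they can be scripted; a sketch, where the client and pool names follow Jonas's example and everything else is an assumption:

```shell
# Build the `ceph auth get-or-create` invocation for one isolated client,
# following the caps pattern listed above. Run only against a real cluster;
# here the command is just assembled and printed.
client=a
FSNAME=cephfs
AUTH_CMD="ceph auth get-or-create client.$client \
mon 'allow r fsname=$FSNAME' \
mds 'allow rw fsname=$FSNAME path=/dir_$client' \
osd 'allow rwx pool=${FSNAME}_${client}_data'"
echo "$AUTH_CMD"
```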
>
> Now the question: Since I established this setup more or less through
> trial and error, I was wondering if there is a more elegant/better
> approach than what is outlined above?
>
> Thank you for your help!
>
> Best regards,
> Jonas
>
>


-- 
Milind


[ceph-users] Re: CephFS: Isolating folders for different users

2022-12-23 Thread Kai Stian Olstad

On 22.12.2022 15:47, Jonas Schwab wrote:

Now the question: Since I established this setup more or less through
trial and error, I was wondering if there is a more elegant/better
approach than what is outlined above?


You can use namespace so you don't need separate pools.
Unfortunately the documentation is sparse on the subject, I use it with 
subvolume like this



# Create a subvolume

ceph fs subvolume create <volume> <subvolume>
--pool_layout <pool name> --namespace-isolated


The subvolume is created with the namespace fsvolumens_<subvolume>
You can also find the name with

ceph fs subvolume info <volume> <subvolume> | jq -r
.pool_namespace
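The `.pool_namespace` lookup can be simulated offline; below, a canned JSON document stands in for the `subvolume info` output (the field name matches the thread, the value is invented), and python3 is used in place of jq:

```shell
# Stand-in for the JSON that `ceph fs subvolume info` would return from a
# real cluster; only the field we care about is included here.
INFO='{"pool_namespace": "fsvolumens_subvol_a"}'
# Extract the value; python3 is used here in case jq is not installed.
NS=$(printf '%s' "$INFO" | python3 -c 'import json,sys; print(json.load(sys.stdin)["pool_namespace"])')
echo "$NS"
```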



# Create a user with access to the subvolume and the namespace

## First find the path to the subvolume

ceph fs subvolume getpath <volume> <subvolume>

## Create the user

ceph auth get-or-create client.<name> mon 'allow r' mds 'allow
rw path=<path>' osd 'allow rw pool=<pool name>
namespace=fsvolumens_<subvolume>'
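Putting the pieces together: the namespace name is derived from the subvolume name via the fsvolumens_ prefix noted above. A sketch, where the subvolume name, pool, and getpath result are all placeholders:

```shell
# Derive the namespace from the subvolume name and assemble the caps for
# the user. All values below are invented placeholders; on a real cluster
# the path would come from `ceph fs subvolume getpath`.
SUBVOL=subvol_a
POOL=cephfs_data                         # placeholder data pool
SUBVOL_PATH=/volumes/_nogroup/subvol_a   # placeholder getpath result
NS="fsvolumens_${SUBVOL}"
CAPS="mon 'allow r' mds 'allow rw path=$SUBVOL_PATH' osd 'allow rw pool=$POOL namespace=$NS'"
echo "ceph auth get-or-create client.$SUBVOL $CAPS"
```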



I have found this by looking at how Openstack does it and some trial and 
error.



--
Kai Stian Olstad