Hi,

I am testing CephFS file layouts.
I created a user that should have write access to only one pool:

client.puppet
        key:zzz
        caps: [mon] allow r
        caps: [osd] allow rwx pool=puppet
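For reference, a user with those caps can be created along these lines (a sketch; the key name, caps, and pool name match the listing above — mon 'allow r' is needed so the client can read the cluster maps and find the OSDs):

```shell
# Create (or fetch) a client key limited to rwx on the "puppet" pool.
ceph auth get-or-create client.puppet \
    mon 'allow r' \
    osd 'allow rwx pool=puppet'
```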

I also created another pool in which, once things are configured correctly, I 
would expect this user to be allowed to do nothing.
By the way: the "ceph fs ls" output looks inconsistent depending on whether 
the cephfs is mounted (I used a locally compiled kmod-ceph rpm):

[root@ceph0 ~]# ceph fs ls
name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet ]
(umount /mnt ...)
[root@ceph0 ~]# ceph fs ls
name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet root ]

So, I have this pool named "root" that I added to the cephfs filesystem as a 
second data pool.
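For completeness, the extra pool was added roughly like this (a sketch; the pg counts are illustrative, and on this 0.87 release the mds subcommand is used — newer releases use "ceph fs add_data_pool"):

```shell
# Create the pool and expose it to cephfs as an additional data pool.
ceph osd pool create root 64 64
ceph mds add_data_pool root
```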
I then set the directory layout xattrs:

[root@ceph0 ~]# getfattr -n ceph.dir.layout /mnt/root
getfattr: Removing leading '/' from absolute path names
# file: mnt/root
ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 
pool=root"
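That layout was set by pointing the directory at the "root" pool; roughly:

```shell
# New files created under /mnt/root inherit this layout at creation time
# and will store their data objects in the "root" pool.
setfattr -n ceph.dir.layout.pool -v root /mnt/root
```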

I'm therefore assuming client.puppet should not be allowed to read or write 
anything in /mnt/root, whose data goes to the "root" pool... but that is not 
the case.
On another machine, where I mounted cephfs using the client.puppet key, I can 
do this:

The mount was done with the client.puppet key; the admin key is not deployed 
on that node:
1.2.3.4:6789:/ on /mnt type ceph 
(rw,relatime,name=puppet,secret=<hidden>,nodcache)

[root@dev7248 ~]# echo "not allowed" > /mnt/root/secret.notfailed
[root@dev7248 ~]#
[root@dev7248 ~]# cat /mnt/root/secret.notfailed
not allowed

And I can even see the xattrs inherited from the parent directory:
[root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed
getfattr: Removing leading '/' from absolute path names
# file: mnt/root/secret.notfailed
ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 
pool=root"

Whereas on the node where I mounted cephfs as the ceph admin, the file reads 
back empty:
[root@ceph0 ~]# cat /mnt/root/secret.notfailed
[root@ceph0 ~]# ls -l /mnt/root/secret.notfailed
-rw-r--r-- 1 root root 12 Mar  3 15:27 /mnt/root/secret.notfailed

After some time, the file also reads back empty on the "puppet client" host:
[root@dev7248 ~]# cat /mnt/root/secret.notfailed
[root@dev7248 ~]#
(but the metadata remained?)

Also, as an unprivileged user, I can take ownership of a "secret" file by 
changing the pool in its extended attribute:

[root@dev7248 ~]# setfattr -n ceph.file.layout.pool -v puppet 
/mnt/root/secret.notfailed
[root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed
getfattr: Removing leading '/' from absolute path names
# file: mnt/root/secret.notfailed
ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 
pool=puppet"

But fortunately, I haven't yet succeeded in reading that file...
So my question is: what am I doing wrong?

Final question, for those who have read this far: it appears that before 
creating the cephfs filesystem, I used the "puppet" pool to store a test rbd 
image. And it appears I cannot list the cephfs objects in that pool, whereas 
I can list those in the newly created "root" pool:

[root@ceph0 ~]# rados -p puppet ls
test.rbd
rbd_directory
[root@ceph0 ~]# rados -p root ls
1000000000a.00000000
1000000000b.00000000
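Incidentally, those object names in the "root" pool can be tied back to files: CephFS data objects are named `<inode in hex>.<object index>`, so something like this (a sketch, using the test file from above) derives a file's first object name:

```shell
# CephFS stores file data as RADOS objects named <inode-hex>.<object-index>.
f=/mnt/root/secret.notfailed   # path from the example above
ino=$(stat -c %i "$f")         # inode number, in decimal
printf '%x.00000000\n' "$ino"  # name of the file's first data object
```

That should make it possible to check whether a given rados object belongs to a given cephfs file.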

Bug, or feature ?

Thanks && regards


P.S.: ceph release:

[root@dev7248 ~]# rpm -qa '*ceph*'
kmod-libceph-3.10.0-0.1.20150130gitee04310.el7.centos.x86_64
libcephfs1-0.87-0.el7.centos.x86_64
ceph-common-0.87-0.el7.centos.x86_64
ceph-0.87-0.el7.centos.x86_64
kmod-ceph-3.10.0-0.1.20150130gitee04310.el7.centos.x86_64
ceph-fuse-0.87.1-0.el7.centos.x86_64
python-ceph-0.87-0.el7.centos.x86_64
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
