Writes to mounted Ceph FS fail silently if client has no write capability on data pool

2012-07-05 Thread Florian Haas
Hi everyone,

please enlighten me if I'm misinterpreting something, but I think the
Ceph FS layer could handle the following situation better.

How to reproduce (this is on a 3.2.0 kernel):

1. Create a client, mine is named test, with the following capabilities:

client.test
        key: key
        caps: [mds] allow
        caps: [mon] allow r
        caps: [osd] allow rw pool=testpool

Note the client only has access to a single pool, testpool.
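A client with these caps can be created with something along these lines (a sketch only; the exact auth subcommand syntax differs between releases, and the keyring path is just an example):

ceph auth get-or-create client.test mds 'allow' mon 'allow r' \
    osd 'allow rw pool=testpool' -o /etc/ceph/test.keyring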

2. Export the client's secret and mount a Ceph FS.
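Exporting the secret can be done with something like this (again a sketch, assuming the keyring file from the example above; check ceph-authtool's usage output for your version):

ceph-authtool -p -n client.test /etc/ceph/test.keyring > /etc/ceph/test.secret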

mount -t ceph -o name=test,secretfile=/etc/ceph/test.secret
daisy,eric,frank:/ /mnt

This succeeds, despite us not even having read access to the data pool.

3. Write something to a file.

root@alice:/mnt# echo hello world > hello.txt
root@alice:/mnt# cat hello.txt

This too succeeds.

4. Sync and clear caches.

root@alice:/mnt# sync
root@alice:/mnt# echo 3 > /proc/sys/vm/drop_caches

5. Check file size and contents.

root@alice:/mnt# ls -la
total 5
drwxr-xr-x  1 root root    0 Jul  5 17:15 .
drwxr-xr-x 21 root root 4096 Jun 11 09:03 ..
-rw-r--r--  1 root root   12 Jul  5 17:15 hello.txt
root@alice:/mnt# cat hello.txt
root@alice:/mnt#

Note the reported file size is unchanged, but the file is empty.

Checking the data pool with client.admin credentials obviously shows
that that pool is empty, so objects are never written. Interestingly,
cephfs hello.txt show_location does list an object_name, identifying
an object which doesn't exist.
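In case it helps anyone retrace this, the checks were along these lines (a sketch; I'm assuming the default data pool is the one named data):

rados -p data ls                    # with client.admin credentials; comes back empty
cephfs hello.txt show_location      # run from /mnt; lists an object_name
rados -p data stat <object_name>    # that object doesn't exist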

Is there any way to make the client fail with -EIO, -EPERM,
-EOPNOTSUPP or whatever else is appropriate, rather than pretending to
write when it can't?

Also, going down the rabbit hole, how would this behavior change if I
used cephfs to set the default layout on some directory to use a
different pool?
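Something like this, I mean (a sketch only; /mnt/somedir and the pool id are placeholders, and I haven't double-checked the option names, so see the cephfs tool's usage output):

cephfs /mnt/somedir set_layout -p <pool-id>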

All thoughts appreciated.

Cheers,
Florian


Re: Writes to mounted Ceph FS fail silently if client has no write capability on data pool

2012-07-05 Thread Gregory Farnum
On Thu, Jul 5, 2012 at 10:40 AM, Florian Haas flor...@hastexo.com wrote:
> Hi everyone,
>
> please enlighten me if I'm misinterpreting something, but I think the
> Ceph FS layer could handle the following situation better.
>
> How to reproduce (this is on a 3.2.0 kernel):
>
> 1. Create a client, mine is named test, with the following capabilities:
>
> client.test
>         key: key
>         caps: [mds] allow
>         caps: [mon] allow r
>         caps: [osd] allow rw pool=testpool
>
> Note the client only has access to a single pool, testpool.
>
> 2. Export the client's secret and mount a Ceph FS.
>
> mount -t ceph -o name=test,secretfile=/etc/ceph/test.secret
> daisy,eric,frank:/ /mnt
>
> This succeeds, despite us not even having read access to the data pool.
>
> 3. Write something to a file.
>
> root@alice:/mnt# echo hello world > hello.txt
> root@alice:/mnt# cat hello.txt
>
> This too succeeds.
>
> 4. Sync and clear caches.
>
> root@alice:/mnt# sync
> root@alice:/mnt# echo 3 > /proc/sys/vm/drop_caches
>
> 5. Check file size and contents.
>
> root@alice:/mnt# ls -la
> total 5
> drwxr-xr-x  1 root root    0 Jul  5 17:15 .
> drwxr-xr-x 21 root root 4096 Jun 11 09:03 ..
> -rw-r--r--  1 root root   12 Jul  5 17:15 hello.txt
> root@alice:/mnt# cat hello.txt
> root@alice:/mnt#
>
> Note the reported file size is unchanged, but the file is empty.
>
> Checking the data pool with client.admin credentials obviously shows
> that that pool is empty, so objects are never written. Interestingly,
> cephfs hello.txt show_location does list an object_name, identifying
> an object which doesn't exist.
>
> Is there any way to make the client fail with -EIO, -EPERM,
> -EOPNOTSUPP or whatever else is appropriate, rather than pretending to
> write when it can't?

There definitely are, but I don't think we're going to fix that until
we get to working seriously on the filesystem. Create a bug! ;)

 Also, going down the rabbit hole, how would this behavior change if I
 used cephfs to set the default layout on some directory to use a
 different pool?

I'm not sure what you're asking here — if you have access to the
metadata server, you can change the pool that new files go into, and I
think you can set the pool to be whatever you like (and we should
probably harden all this, too). So you can fix it if it's a problem,
but you can also turn it into a problem.
Is that what you were after?
-Greg


Re: Writes to mounted Ceph FS fail silently if client has no write capability on data pool

2012-07-05 Thread Florian Haas
On Thu, Jul 5, 2012 at 10:01 PM, Gregory Farnum g...@inktank.com wrote:
>> Also, going down the rabbit hole, how would this behavior change if I
>> used cephfs to set the default layout on some directory to use a
>> different pool?
>
> I'm not sure what you're asking here — if you have access to the
> metadata server, you can change the pool that new files go into, and I
> think you can set the pool to be whatever you like (and we should
> probably harden all this, too). So you can fix it if it's a problem,
> but you can also turn it into a problem.

I am aware that I would be able to do this.

My question was more along the lines of: if the pool that data is
written to can be set on a per-file or per-directory basis, and we can
also set read and write permissions per pool, what would the proper
filesystem behavior be? Hide files the mounting user doesn't have
read access to? Return -EIO or -EPERM on writes to files stored in
pools we can't write to? Fail the mount if we're missing some
permission on any file or directory in the fs? All of these sound
painful in one way or another, so I'm having trouble envisioning what
the correct behavior would look like.

Florian


Re: Writes to mounted Ceph FS fail silently if client has no write capability on data pool

2012-07-05 Thread Gregory Farnum
On Thu, Jul 5, 2012 at 1:25 PM, Florian Haas flor...@hastexo.com wrote:
> On Thu, Jul 5, 2012 at 10:01 PM, Gregory Farnum g...@inktank.com wrote:
>>> Also, going down the rabbit hole, how would this behavior change if I
>>> used cephfs to set the default layout on some directory to use a
>>> different pool?
>>
>> I'm not sure what you're asking here — if you have access to the
>> metadata server, you can change the pool that new files go into, and I
>> think you can set the pool to be whatever you like (and we should
>> probably harden all this, too). So you can fix it if it's a problem,
>> but you can also turn it into a problem.
>
> I am aware that I would be able to do this.
>
> My question was more along the lines of: if the pool that data is
> written to can be set on a per-file or per-directory basis, and we can
> also set read and write permissions per pool, what would the proper
> filesystem behavior be? Hide files the mounting user doesn't have
> read access to? Return -EIO or -EPERM on writes to files stored in
> pools we can't write to? Fail the mount if we're missing some
> permission on any file or directory in the fs? All of these sound
> painful in one way or another, so I'm having trouble envisioning what
> the correct behavior would look like.

Ah, yes. My feeling would be that we want to treat it like a local
file they aren't allowed to access — i.e., return EPERM. I *think* that
is what will actually happen if they try to read those files, but the
write path works a bit differently (since the writes are flushed out
asynchronously) and so we would need to introduce some smarts into the
client to check the pool permissions and proactively apply them on any
attempted access.
-Greg