Re: [ceph-users] CephFS: files never stored on OSDs

2014-02-28 Thread Florent Bautista
Hi Sage,

Thank you for your answer. I do not see anything about that...

root@test2:~# ceph auth list
installed auth entries:

mds.0
        key: AQCfOw9TgF4QNBAAkiVjKh5sGPULV8ZsO4/q1A==
        caps: [mds] allow
        caps: [mon] allow rwx
        caps: [osd] allow *
osd.0
        key: AQCnbgtTKAdABBAAIjnQLlzMnXg2...
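The "allow *" osd cap shown above covers every pool, so on this output the key itself should not be pool-restricted. To inspect just the key used for the mount instead of the whole list (a sketch, assuming the kernel client was mounted with the admin key):

        ceph auth get client.admin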

Re: [ceph-users] CephFS: files never stored on OSDs

2014-02-28 Thread Florent Bautista
Okay... I forgot that! Thank you both, Gregory & Michael!

I had to set all layout options to make it work:

        cephfs /mnt/ceph set_layout -p 4 -s 4194304 -u 4194304 -c 1

On 02/28/2014 04:52 PM, Michael J. Kidd wrote:
> Seems that you may also need to tell CephFS to use the new pool
> instead of...
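For the record, the flags map onto the file layout fields: -p is the target pool id, -s the object size in bytes, -u the stripe unit, and -c the stripe count, so the command above writes 4 MB objects with no striping into pool 4. The legacy cephfs tool should also let you read the layout back (a sketch, assuming the same mount point):

        # show the layout new files under the mount root will inherit
        cephfs /mnt/ceph show_layout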

Re: [ceph-users] CephFS: files never stored on OSDs

2014-02-28 Thread Gregory Farnum
By default your filesystem data is stored in the "data" pool, ID 0. You can change to a different pool (for files going forward, not existing ones) by setting the root directory's layout via the ceph.layout.pool virtual xattr, but it doesn't look like you've done that yet. Until then, you've got tw...
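A minimal sketch of the xattr route, assuming the filesystem is mounted at /mnt/ceph and the new pool is named "data2" (both names illustrative; depending on the release the attribute may be spelled ceph.dir.layout.pool, and the value may need to be a pool name or a numeric pool id):

        # point the root directory's layout at the new pool, for files created from now on
        setfattr -n ceph.layout.pool -v data2 /mnt/ceph
        # read it back to confirm
        getfattr -n ceph.layout.pool /mnt/ceph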

Re: [ceph-users] CephFS: files never stored on OSDs

2014-02-28 Thread Michael J. Kidd
Seems that you may also need to tell CephFS to use the new pool instead of the default. After CephFS is mounted, run:

        # cephfs /mnt/ceph set_layout -p 4

Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services

On Fri, Feb 28, 2014 at 9:12 AM, Sage Weil wrote:
> Hi Florent,
>
> I...
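One prerequisite worth noting: the MDS generally refuses a layout that points at a pool it has not been told about, so the new pool usually has to be registered as a data pool first. A sketch for the CLI of this era, assuming the new pool has id 4:

        # make pool id 4 usable as a CephFS data pool
        ceph mds add_data_pool 4
        # then point the layout at it
        cephfs /mnt/ceph set_layout -p 4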

Re: [ceph-users] CephFS: files never stored on OSDs

2014-02-28 Thread Sage Weil
Hi Florent,

It sounds like the capability for the user you are authenticating as does not have access to the new OSD data pool. Try doing

        ceph auth list

and see if there is an osd cap that mentions the data pool but not the new pool you created; that would explain your symptoms.

sage

On Fr...
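If the cap does turn out to be pool-restricted, it can be widened with ceph auth caps. A sketch with illustrative names (client.cephfs and the pool "data2" are assumptions, not from this thread):

        # show the caps for one client
        ceph auth get client.cephfs
        # re-grant caps so the osd cap also covers the new pool
        ceph auth caps client.cephfs mds 'allow' mon 'allow r' \
            osd 'allow rwx pool=data, allow rwx pool=data2'

Note that ceph auth caps replaces the entity's full cap set, so the existing mds and mon caps have to be restated alongside the new osd cap.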

[ceph-users] CephFS: files never stored on OSDs

2014-02-28 Thread Florent Bautista
Hi all,

Today I'm testing CephFS with the client-side kernel driver. My installation is composed of 2 nodes, each one with a monitor and an OSD. One of them is also an MDS.

root@test2:~# ceph -s
    cluster 42081905-1a6b-4b9e-8984-145afe0f22f6
     health HEALTH_OK
     monmap e2: 2 mons at {0=192.168...
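One quick way to check whether file data is landing in any pool at all (a sketch, assuming the filesystem is mounted at /mnt/ceph): write a test file through the mount, then compare per-pool usage before and after.

        # write ~100 MB through the CephFS mount
        dd if=/dev/zero of=/mnt/ceph/testfile bs=1M count=100
        sync
        # per-pool object counts and space used
        rados df
        ceph df

If the object counts in every data pool stay at zero while the file is visible in the mount, the data is only being buffered client-side rather than reaching the OSDs.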