Hi Sage,

Thank you for your answer.

I do not see such a cap in the output...

root@test2:~# ceph auth list
installed auth entries:

mds.0
    key: AQCfOw9TgF4QNBAAkiVjKh5sGPULV8ZsO4/q1A==
    caps: [mds] allow
    caps: [mon] allow rwx
    caps: [osd] allow *
osd.0
    key: AQCnbgtTKAdABBAAIjnQLlzMnXg2Mej/uLiZdw==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQB9YwdTWERbCBAADmaGmSiQV7Gh8Xj86mT3+w==
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQDWQAZTuPxQJxAA028V1ly+pezshrWza8ahZA==
    caps: [mds] allow
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQDZQAZTuNHLLBAAIEkKJ0VC2cTsbsLvBaLRCQ==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
    key: AQDZQAZTMJfnHhAAkElTx+9CQKf6UV+T3lKGOw==
    caps: [mon] allow profile bootstrap-osd
client.test
    key: AQC9YhBT8CE9GhAAdgDiVLGIIgEleen4vkOp5w==
    caps: [mds] allow
    caps: [mon] allow *
>>    caps: [osd] allow * pool=CephFS
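
Now that I look at it, client.test is only allowed on the CephFS pool, not on the default 'data' pool (pool 0). If the file data is actually landing in 'data', that might explain the symptoms. I suppose I could widen the cap to test it, something like this (just my guess, and as far as I know "ceph auth caps" replaces all caps for the client, so everything has to be repeated):

ceph auth caps client.test mds 'allow' mon 'allow *' osd 'allow * pool=data, allow * pool=CephFS'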



root@test2:~# ceph osd dump | grep pool
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 3 'CephTest' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 76 owner 0
>>pool 4 'CephFS' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 54 owner 0
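
From what I understand, even after "ceph mds add_data_pool" the filesystem keeps writing file data to pool 0 'data' unless the layout says otherwise. I guess I can at least confirm that the MDS knows about pool 4 with (assuming the field is still called data_pools on my version):

ceph mds dump | grep data_pools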


I want the data stored in the CephFS pool. Is that the right way to do it?
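
Or do I also need to set a layout so that new files actually go to that pool? From what I've read, something like this on the mounted directory might do it (assuming the client kernel supports the layout xattrs, which is probably true for 3.11 but not for 3.2; please correct me if I'm wrong):

setfattr -n ceph.dir.layout.pool -v CephFS /mnt/ceph
getfattr -n ceph.dir.layout /mnt/ceph    # to check; only files created afterwards should be affected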


The metadata is stored successfully; the metadata pool is growing...

On 02/28/2014 03:12 PM, Sage Weil wrote:
> Hi Florent,
>
> It sounds like the capability for the user you are authenticating as does 
> not have access to the new OSD data pool.  Try doing
>
>  ceph auth list
>
> and see if there is an osd cap that mentions the data pool but not the new 
> pool you created; that would explain your symptoms.
>
> sage
>
> On Fri, 28 Feb 2014, Florent Bautista wrote:
>
>> Hi all,
>>
>> Today I'm testing CephFS with the client-side kernel driver.
>>
>> My installation is composed of 2 nodes, each one with a monitor and an OSD.
>> One of them is also an MDS.
>>
>> root@test2:~# ceph -s
>>     cluster 42081905-1a6b-4b9e-8984-145afe0f22f6
>>      health HEALTH_OK
>>      monmap e2: 2 mons at {0=192.168.0.202:6789/0,1=192.168.0.200:6789/0},
>> election epoch 18, quorum 0,1 0,1
>>      mdsmap e15: 1/1/1 up {0=0=up:active}
>>      osdmap e82: 2 osds: 2 up, 2 in
>>       pgmap v4405: 384 pgs, 5 pools, 16677 MB data, 4328 objects
>>             43473 MB used, 2542 GB / 2584 GB avail
>>                  384 active+clean
>>
>>
>> I added the data pool to the MDS: ceph mds add_data_pool 4
>>
>> Then I created a keyring for my client:
>>
>> ceph --id admin --keyring /etc/ceph/ceph.client.admin.keyring auth
>> get-or-create client.test mds 'allow' osd 'allow * pool=CephFS' mon 'allow
>> *' > /etc/ceph/ceph.client.test.keyring
>>
>>
>> And I mount the FS with:
>>
>> mount -o name=test,secret=AQC9YhBT8CE9GhAAdgDiVLGIIgEleen4vkOp5w==,noatime
>> -t ceph 192.168.0.200,192.168.0.202:/ /mnt/ceph
>>
>>
>> The client is either Debian 7.4 (kernel 3.2) or Ubuntu 13.10 (kernel 3.11).
>>
>> The mount is OK. I can write files to it, and I can see the files on every
>> client that has mounted it.
>>
>> BUT...
>>
>> Where are my files stored?
>>
>> My pool stays at 0 disk usage in the output of rados df.
>>
>> The disk usage of the OSDs never grows...
>>
>> What did I miss?
>>
>> When client A writes a file, I get "Operation not permitted" when client B
>> reads it, even if I "sync" the FS.
>>
>> That sounds very strange to me; I think I missed something, but I don't know
>> what. Of course, there are no errors in the logs.
>>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
