Hello,
On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook stable-4.0, I have an issue with CephFS. I can create a folder and
create empty files, but I cannot write any data to them, as if I were not allowed to write to the cephfs_data pool.
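As a first isolation step, I can try writing an object directly into the cephfs_data pool with the same client key, to separate the OSD cap from the MDS layer. A sketch, assuming the keyring is at /etc/ceph/dslab2020.client.cephfsadmin.keyring (hypothetical path, to be adjusted to the real location):

$ rados --cluster dslab2020 -n client.cephfsadmin \
        --keyring /etc/ceph/dslab2020.client.cephfsadmin.keyring \
        -p cephfs_data put test-object /etc/hostname

If the OSD cap does not match the pool, this put should fail with a permission error, while a working cap would create the object.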
$ ceph -s
  cluster:
    id:     fded5bb5-62c5-4a88-b62c-0986d7c7ac09
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum iccluster039,iccluster041,iccluster042 (age 23h)
    mgr: iccluster039(active, since 21h), standbys: iccluster041, iccluster042
    mds: cephfs:3 {0=iccluster043=up:active,1=iccluster041=up:active,2=iccluster042=up:active}
    osd: 24 osds: 24 up (since 22h), 24 in (since 22h)
    rgw: 1 daemon active (iccluster043.rgw0)

  data:
    pools:   9 pools, 568 pgs
    objects: 800 objects, 225 KiB
    usage:   24 GiB used, 87 TiB / 87 TiB avail
    pgs:     568 active+clean
The two CephFS pools:
$ ceph osd pool ls detail | grep cephfs
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 83 lfor 0/0/81
flags hashpspool stripe_width 0 expected_num_objects 1 application cephfs
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 48 flags hashpspool
stripe_width 0 expected_num_objects 1 pg_autoscale_bias 4 pg_num_min 16
recovery_priority 5 application cephfs
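One more thing I can check is the application metadata on the data pool, since, as far as I understand, the tag-based OSD cap generated by fs authorize (allow rw tag cephfs data=<fs name>, see client.fsadmin below) is matched against it:

$ ceph osd pool application get cephfs_data

If this does not report the cephfs application with a data=<fs name> entry, a tag-based cap would not grant access to the pool (again, that is my understanding, I may be wrong).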
The status of the CephFS filesystem:
$ ceph fs status
cephfs - 1 clients
======
+------+--------+--------------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+--------------+---------------+-------+-------+
| 0 | active | iccluster043 | Reqs: 0 /s | 34 | 18 |
| 1 | active | iccluster041 | Reqs: 0 /s | 12 | 16 |
| 2 | active | iccluster042 | Reqs: 0 /s | 10 | 13 |
+------+--------+--------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
| Pool | type | used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 4608k | 27.6T |
| cephfs_data | data | 0 | 27.6T |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
# mkdir folder
# echo "foo" > bar
-bash: echo: write error: Operation not permitted
# ls -al
total 4
drwxrwxrwx 1 root root 2 Jan 22 07:30 .
drwxr-xr-x 28 root root 4096 Jan 21 09:25 ..
-rw-r--r-- 1 root root 0 Jan 22 07:30 bar
drwxrwxrwx 1 root root 1 Jan 21 16:49 folder
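To rule out a file layout pointing at another pool, I can also read the layout from the client (the pool= field should be cephfs_data); a sketch, run in the same directory:

# getfattr -n ceph.file.layout bar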
# df -hT .
Filesystem                                      Type  Size  Used  Avail  Use%  Mounted on
10.90.38.15,10.90.38.17,10.90.38.18:/dslab2020  ceph   28T     0    28T    0%  /cephfs
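For completeness, I can also confirm which cephx identity the kernel client is actually mounted with (the name= mount option) via:

# grep ceph /proc/mounts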
I tried two client configurations:
$ ceph --cluster dslab2020 fs authorize cephfs client.cephfsadmin / rw
[snip]
$ ceph auth get client.fsadmin
exported keyring for client.fsadmin
[client.fsadmin]
key = [snip]
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=cephfs"
$ ceph --cluster dslab2020 fs authorize cephfs client.cephfsadmin / rw
[snip]
$ ceph auth caps client.cephfsadmin mds "allow rw" mon "allow r" osd "allow rw tag cephfs pool=cephfs_data "
updated caps for client.cephfsadmin
$ ceph auth get client.cephfsadmin
exported keyring for client.cephfsadmin
[client.cephfsadmin]
key = [snip]
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rw tag cephfs pool=cephfs_data "
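I am not sure the mixed form "allow rw tag cephfs pool=cephfs_data " (with the trailing space inside the quotes) is even valid; for comparison, fs authorize gave client.fsadmin "allow rw tag cephfs data=cephfs". If useful, I can reset the caps to one of the two standard forms; a sketch, assuming the filesystem name is cephfs:

$ ceph --cluster dslab2020 auth caps client.cephfsadmin mds "allow rw" mon "allow r" osd "allow rw tag cephfs data=cephfs"

or, pinning the pool explicitly:

$ ceph --cluster dslab2020 auth caps client.cephfsadmin mds "allow rw" mon "allow r" osd "allow rw pool=cephfs_data"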
I don't know where to look to get more information about this issue. Can anyone help
me? Thanks
Best regards,
--
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io