Re: [ceph-users] mds directory pinning, status display

2019-09-13 Thread Patrick Donnelly
On Fri, Sep 13, 2019 at 7:09 AM thoralf schulze  wrote:
>
> hi there,
>
> while debugging metadata servers reporting slow requests, we took a stab
> at pinning directories of a cephfs like so:
>
> setfattr -n ceph.dir.pin -v 1 /tubfs/kubernetes/
> setfattr -n ceph.dir.pin -v 0 /tubfs/profiles/
> setfattr -n ceph.dir.pin -v 0 /tubfs/homes
>
> on the active mds for rank 0, we can see all pinnings as expected:
>
> ceph daemon /var/run/[rank0].asok get subtrees | jq -c
> '.[]|select(.dir.path|contains("/"))|[.dir.path, .export_pin, .auth_first]'
> ["/kubernetes",1,1]
> ["/homes",0,0]
> ["/profiles",0,0]
>
> while the active mds for rank 1 reports back its own pinnings only:
>
> ceph daemon /var/run/[rank1].asok get subtrees | jq -c
> '.[]|select(.dir.path|contains("/"))|[.dir.path, .export_pin, .auth_first]'
> ["/kubernetes",1,1]
> ["/.ctdb",-1,1]
>
> is this to be expected? anecdotal data indicate that the pinning does
> work as intended.

Each MDS rank can only see the subtrees that border the ones it is
authoritative for. Therefore, you need to gather the subtree dumps from
all ranks and merge them to see the entire distribution. This could be
made simpler by showing the information in the upcoming `ceph fs top`
display. I've created a tracker ticket:
https://tracker.ceph.com/issues/41824
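Until that lands, the gather-and-merge step can be scripted. A minimal sketch in Python, assuming you have already saved each rank's `get subtrees` JSON output (the inline sample dumps below just mirror the shapes shown in the email; in practice you would load one file per rank):

```python
import json

# Hypothetical per-rank dumps in the shape the email shows; in practice,
# collect these with `ceph daemon mds.<id> get subtrees` on each rank.
rank_dumps = [
    # rank 0
    '[{"dir": {"path": "/kubernetes"}, "export_pin": 1, "auth_first": 1},'
    ' {"dir": {"path": "/homes"}, "export_pin": 0, "auth_first": 0},'
    ' {"dir": {"path": "/profiles"}, "export_pin": 0, "auth_first": 0}]',
    # rank 1
    '[{"dir": {"path": "/kubernetes"}, "export_pin": 1, "auth_first": 1},'
    ' {"dir": {"path": "/.ctdb"}, "export_pin": -1, "auth_first": 1}]',
]

merged = {}
for dump in rank_dumps:
    for subtree in json.loads(dump):
        path = subtree["dir"]["path"]
        if path:  # skip the root entry, like the jq contains("/") filter
            # Ranks agree on the subtrees they share a view of, so a
            # later rank simply overwrites the same entry.
            merged[path] = (subtree["export_pin"], subtree["auth_first"])

for path, (pin, auth) in sorted(merged.items()):
    print(f"{path}: export_pin={pin} auth_first={auth}")
```

With the two dumps above this prints one line per distinct subtree, so pins that only one rank reports (like `/.ctdb`) still show up in the merged view.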

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] mds directory pinning, status display

2019-09-13 Thread thoralf schulze
hi there,

while debugging metadata servers reporting slow requests, we took a stab
at pinning directories of a cephfs like so:

setfattr -n ceph.dir.pin -v 1 /tubfs/kubernetes/
setfattr -n ceph.dir.pin -v 0 /tubfs/profiles/
setfattr -n ceph.dir.pin -v 0 /tubfs/homes

on the active mds for rank 0, we can see all pinnings as expected:

ceph daemon /var/run/[rank0].asok get subtrees | jq -c
'.[]|select(.dir.path|contains("/"))|[.dir.path, .export_pin, .auth_first]'
["/kubernetes",1,1]
["/homes",0,0]
["/profiles",0,0]

while the active mds for rank 1 reports back its own pinnings only:

ceph daemon /var/run/[rank1].asok get subtrees | jq -c
'.[]|select(.dir.path|contains("/"))|[.dir.path, .export_pin, .auth_first]'
["/kubernetes",1,1]
["/.ctdb",-1,1]

is this to be expected? anecdotal data indicate that the pinning does
work as intended.

thank you very much & with kind regards,
thoralf.
