I’m using CephFS with the cephfs-hadoop shim to replace HDFS in a system
I’ve been experimenting with.

I’ve noticed that many of my CephFS clients have a ‘num_caps’ value of
16385, as seen when running ‘session ls’ on the active MDS. This is exactly
one more than the default ‘client_cache_size’ of 16384, so I presume the
two are related, though I haven’t found any documentation to corroborate
this.
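
For reference, here is roughly how I’m pulling the per-client cap counts,
run on the MDS host. This is a minimal Python sketch: ‘mds.a’ is a
placeholder for the active MDS name, and the ‘id’, ‘num_caps’, and
‘client_metadata’ fields are what my ‘session ls’ output shows, which may
vary across Ceph versions.

    #!/usr/bin/env python
    # Sketch: dump per-client cap counts from the active MDS.
    # 'mds.a' is a placeholder; JSON field names may vary by Ceph version.
    import json
    import subprocess

    out = subprocess.check_output(['ceph', 'daemon', 'mds.a', 'session', 'ls'])
    sessions = json.loads(out)

    # Sort so the clients pinning the most caps show up first.
    for s in sorted(sessions, key=lambda s: s.get('num_caps', 0), reverse=True):
        meta = s.get('client_metadata', {})
        print('client.%s  num_caps=%s  host=%s' % (
            s['id'], s.get('num_caps'), meta.get('hostname', '?')))

I believe newer releases can fetch the same data remotely with ‘ceph tell
mds.<name> session ls’, though I’ve only used the admin socket form myself.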

What I was hoping to do is track down which Ceph client is actually holding
all of these caps, but since my system schedules work dynamically and
multiple clients can run on the same host, it’s not obvious how to
associate the client ‘id’ reported by ‘session ls’ with any one process on
a given host.
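
The closest I’ve come is a workaround: if every client is started with an
admin socket configured in its ceph.conf section, e.g.

    [client]
        admin socket = /var/run/ceph/$cluster-$name.$pid.asok

then I can walk the sockets on a host and ask each one for its global id.
This is a hedged sketch: the ‘mds_sessions’ asok command and its ‘id’ field
are what I see on my version, and the pid only appears in the filename
because of the $pid metavariable above.

    #!/usr/bin/env python
    # Sketch: on a client host, match an MDS-reported client id to a local
    # pid by querying each client admin socket. Assumes the asok filename
    # embeds the pid (via $pid) and that the socket supports 'mds_sessions'.
    import glob
    import json
    import re
    import subprocess
    import sys

    target = int(sys.argv[1])  # client id taken from 'session ls'

    for asok in glob.glob('/var/run/ceph/*.asok'):
        try:
            out = subprocess.check_output(
                ['ceph', 'daemon', asok, 'mds_sessions'])
        except subprocess.CalledProcessError:
            continue  # not a CephFS client socket, or command unsupported
        if json.loads(out).get('id') == target:
            m = re.search(r'\.(\d+)\.asok$', asok)
            print('client.%d -> %s (pid %s)' % (
                target, asok, m.group(1) if m else 'unknown'))

That requires reconfiguring and restarting every client, though, which is
awkward in a dynamically scheduled system.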

Are there steps I can follow to trace a client ‘id’ back to a process id?

-Chris