Hi,

>> Did the fs have lots of mount/umount?

Not too many. I have around 300 ceph-fuse clients (12.2.2 and 12.2.4), and the
ceph cluster is 12.2.2.
Maybe when a client reboots, but that doesn't happen very often.


>> We recently found a memory leak
>> bug in that area https://github.com/ceph/ceph/pull/20148

OK, thanks. Are client sessions only opened/closed at mount/umount?
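
In the meantime, I can at least look at the current client sessions on the
active MDS through the admin socket (a quick check, the mds id 2 below is the
one from this cluster):

# ceph daemon mds.2 session ls

Counting the entries should tell me whether the number of sessions roughly
matches the ~300 mounted clients.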



I have another cluster with 64 ceph-fuse clients, and its MDS memory usage is around 500 MB.
(with the default mds_cache_memory_limit, no tuning, and the ceph cluster is 12.2.4
instead of 12.2.2)

The clients there are also ceph-fuse 12.2.2 and 12.2.4.
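
For reference, the effective cache limit on that MDS can be queried through the
admin socket; something like this (the mds id is just a placeholder):

# ceph daemon mds.<id> config get mds_cache_memory_limit

If nothing is overridden it should report the luminous default (1 GB, if I
remember correctly), which would be consistent with the ~500 MB residency.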



I'll try upgrading this buggy MDS to 12.2.4 to see if that helps.
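
Before and after the upgrade I'll also try to capture heap statistics from the
MDS, to make the growth easier to track (this assumes the daemon uses tcmalloc,
which is the default build):

# ceph tell mds.2 heap stats
# ceph tell mds.2 heap release

The second command asks tcmalloc to return freed memory to the OS; it sometimes
shrinks the RSS noticeably without a restart.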

----- Original Message -----
From: "Zheng Yan" <uker...@gmail.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Friday, March 23, 2018 01:08:46
Subject: Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?

Did the fs have lots of mount/umount? We recently found a memory leak 
bug in that area https://github.com/ceph/ceph/pull/20148 

Regards 
Yan, Zheng 

On Thu, Mar 22, 2018 at 5:29 PM, Alexandre DERUMIER <aderum...@odiso.com> 
wrote: 
> Hi, 
> 
> I've been running cephfs for 2 months now, 
> 
> and my active mds memory usage is around 20G now (still growing). 
> 
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND 
> ceph 1521539 10.8 31.2 20929836 20534868 ? Ssl janv.26 8573:34 
> /usr/bin/ceph-mds -f --cluster ceph --id 2 --setuser ceph --setgroup ceph 
> 
> 
> this is on luminous 12.2.2 
> 
> the only tuning done is: 
> 
> mds_cache_memory_limit = 5368709120 
> 
> 
> (5GB). I know it's a soft limit, but 20G seems quite huge vs 5GB .... 
> 
> 
> Is this normal? 
> 
> 
> 
> 
> # ceph daemon mds.2 perf dump mds 
> { 
>     "mds": { 
>         "request": 1444009197, 
>         "reply": 1443999870, 
>         "reply_latency": { 
>             "avgcount": 1443999870, 
>             "sum": 1657849.656122933, 
>             "avgtime": 0.001148095 
>         }, 
>         "forward": 0, 
>         "dir_fetch": 51740910, 
>         "dir_commit": 9069568, 
>         "dir_split": 64367, 
>         "dir_merge": 58016, 
>         "inode_max": 2147483647, 
>         "inodes": 2042975, 
>         "inodes_top": 152783, 
>         "inodes_bottom": 138781, 
>         "inodes_pin_tail": 1751411, 
>         "inodes_pinned": 1824714, 
>         "inodes_expired": 7258145573, 
>         "inodes_with_caps": 1812018, 
>         "caps": 2538233, 
>         "subtrees": 2, 
>         "traverse": 1591668547, 
>         "traverse_hit": 1259482170, 
>         "traverse_forward": 0, 
>         "traverse_discover": 0, 
>         "traverse_dir_fetch": 30827836, 
>         "traverse_remote_ino": 7510, 
>         "traverse_lock": 86236, 
>         "load_cent": 144401980319, 
>         "q": 49, 
>         "exported": 0, 
>         "exported_inodes": 0, 
>         "imported": 0, 
>         "imported_inodes": 0 
>     } 
> } 
> _______________________________________________ 
> ceph-users mailing list 
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
