"num_leases": 0,
"num_caps": 26,
"num_leases": 0,
"num_caps": 2,
"num_leases": 0,
"num_caps": 2012,
"num_leases": 0,
"num_caps": 10120,
"num_leases": 0,
"
I worked around it by failing the MDS, because I read somewhere about
restarting it. However, it would be nice to know what causes this and
how to prevent it.
ceph mds fail c
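For what it's worth, the per-session cap counts above can be pulled out of the `ceph daemon mds.c session ls` JSON and ranked, which makes it easy to spot which client is holding the most caps before resorting to an MDS fail. A minimal sketch (the `id`/`num_caps` field names and the sample numbers are assumptions based on the truncated output in this thread):

```python
# Sketch: rank CephFS clients by held capabilities, using the JSON
# shape shown earlier in this thread ("num_caps"/"num_leases" per
# session). In practice the input would come from:
#   ceph daemon mds.c session ls
# run on the MDS host; field names are assumptions, not verified here.
import json

def top_cap_holders(session_ls_json, n=5):
    """Return (session id, num_caps) pairs, largest cap count first."""
    sessions = json.loads(session_ls_json)
    ranked = sorted(sessions, key=lambda s: s.get("num_caps", 0),
                    reverse=True)
    return [(s.get("id"), s.get("num_caps", 0)) for s in ranked[:n]]

# Made-up example data mirroring the cap counts quoted above:
sample = json.dumps([
    {"id": 4101, "num_caps": 26, "num_leases": 0},
    {"id": 4102, "num_caps": 10120, "num_leases": 0},
    {"id": 4103, "num_caps": 2012, "num_leases": 0},
])
print(top_cap_holders(sample))
```

The client at the top of that list is the one the MDS is most likely asking to release caps; checking it first is usually cheaper than failing the whole MDS.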
-Original Message-
Subject: [ceph-users] Re: Luminous 12.2.12 "clients failing to respond to capab
These are not excessive values, are they? How do I resolve this?
[@~]# ceph daemon mds.c cache status
{
"pool": {
"items": 266303962,
"bytes": 7599982391
}
}
[@~]# ceph daemon mds.c objecter_requests
{
"ops": [],
"linger_ops": [],
"pool_ops": [],
"pool_st