I am again getting "1 clients failing to respond to capability release" 
and "1 MDSs report slow requests". It looks like these warnings 
disappear if I stop nfs-ganesha and reappear if I start it again. 
However, if you look at the sessions[0], the cap counts are all quite 
low; the sessions with 10120, 2012 and 1816 caps are direct cephfs 
mounts.
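
(For completeness, the test is nothing more than stopping and starting 
the service; unit name as shipped with the CentOS 7 packages.)

systemctl stop nfs-ganesha     # health warnings clear shortly after
systemctl start nfs-ganesha    # warnings reappear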

The only thing I changed recently is that I increased the pg_num of 
some pools. Updating nfs-ganesha from 2.6 to 2.7 does not resolve the 
issue. I have been running this setup for many months without problems.
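
For reference, the pg change amounted to something like this (pool name 
and target count are illustrative, not my exact values):

ceph osd pool set cephfs_data pg_num 256
ceph osd pool set cephfs_data pgp_num 256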

1. Does this affect other clients?
2. Why am I suddenly seeing this? I did not even reboot the servers in 
127 days.


[0]
[@c01 ~]# ceph daemon mds.a session ls | egrep 'num_leases|num_caps'
        "num_leases": 0,
        "num_caps": 2,
        "num_leases": 0,
        "num_caps": 48,
        "num_leases": 0,
        "num_caps": 4,
        "num_leases": 0,
        "num_caps": 1,
        "num_leases": 0,
        "num_caps": 9,
        "num_leases": 0,
        "num_caps": 4,
        "num_leases": 0,
        "num_caps": 26,
        "num_leases": 0,
        "num_caps": 2,
        "num_leases": 0,
        "num_caps": 2012,
        "num_leases": 0,
        "num_caps": 10120,
        "num_leases": 0,
        "num_caps": 1816,
        "num_leases": 0,
        "num_caps": 8,
        "num_leases": 0,
        "num_caps": 7,



-----Original Message-----
Subject: [ceph-users] Re: Luminous 12.2.12 "clients failing to respond 
to capability release" & "MDSs report slow requests" error

 
Worked around it by failing the MDS, because I read somewhere about 
restarting it. However, it would be nice to know what causes this and 
how to prevent it.

ceph mds fail c
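
A less drastic option (not tried here) might be to evict only the 
offending session rather than failing the whole MDS, using the client 
id from session ls, e.g.:

# evict a single stuck session (id taken from the session ls further
# down); note an evicted client may be blacklisted and need a remount
ceph tell mds.c client evict id=3634861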



-----Original Message-----
Subject: [ceph-users] Re: Luminous 12.2.12 "clients failing to respond 
to capability release" & "MDSs report slow requests" error

These do not look like excessive values, do they? How do I resolve this? 


[@~]# ceph daemon mds.c cache status
{
    "pool": {
        "items": 266303962,
        "bytes": 7599982391
    }
} 
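
The pool bytes above (~7.6 GB) only mean something relative to the 
configured cache limit, which can be read from the same admin socket 
(mds_cache_memory_limit defaults to 1 GiB on Luminous):

ceph daemon mds.c config get mds_cache_memory_limit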

[@~]# ceph daemon mds.c objecter_requests
{
    "ops": [],
    "linger_ops": [],
    "pool_ops": [],
    "pool_stat_ops": [],
    "statfs_ops": [],
    "command_ops": []
}


-----Original Message-----
Subject: [ceph-users] Re: Luminous 12.2.12 "clients failing to respond 
to capability release"


On the client I have this (the fields are allocated file handles, free 
allocated handles, and the system-wide maximum):

[@~]# cat /proc/sys/fs/file-nr
10368      0     381649

-----Original Message-----

Subject: [ceph-users] Luminous 12.2.12 "clients failing to respond to 
capability release"


I am getting this error.

In two sessions[0] num_caps is high (I assume the error is about 
num_caps). I am using a default Luminous and a default CentOS 7 with 
the stock 3.10 kernel.

Do I really still need to move to a non-stock kernel to resolve this? I 
read that in posts from 2016 and 2019, yet I also saw some posts on 
tuning the MDS.
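
For reference, the MDS tuning those posts usually mention is the cache 
size; on Luminous that would be injected roughly like this (value 
illustrative, nothing I have applied yet):

ceph tell mds.c injectargs '--mds_cache_memory_limit=4294967296'  # 4 GiB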


[0]
ceph daemon mds.c session ls


    {
        "id": 3634861,
        "num_leases": 0,
        "num_caps": 2183,
        "state": "open",
        "request_load_avg": 181,
        "uptime": 2545057.775869,
        "replay_requests": 0,
        "completed_requests": 0,
        "reconnecting": false,


    {
        "id": 3644756,
        "num_leases": 0,
        "num_caps": 2080,
        "state": "open",
        "request_load_avg": 32,
        "uptime": 2545057.775869,
        "replay_requests": 0,
        "completed_requests": 0,
        "reconnecting": false,
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
