These do not look like excessive values, do they? How can this be resolved?

[@~]# ceph daemon mds.c cache status
{
    "pool": {
        "items": 266303962,
        "bytes": 7599982391
    }
} 

[@~]# ceph daemon mds.c objecter_requests
{
    "ops": [],
    "linger_ops": [],
    "pool_ops": [],
    "pool_stat_ops": [],
    "statfs_ops": [],
    "command_ops": []
}
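A quick sketch to put the "cache status" numbers above in context. ASSUMPTION: the 1 GiB figure below is what I believe the Luminous default for mds_cache_memory_limit is; check the actual value on your cluster with `ceph daemon mds.c config get mds_cache_memory_limit` before drawing conclusions.

```python
# Sketch: compare the cache pool size above against an ASSUMED
# default mds_cache_memory_limit of 1 GiB (verify on your cluster).
cache = {"items": 266303962, "bytes": 7599982391}  # from the output above
assumed_limit = 1 * 1024**3  # 1 GiB, assumed Luminous default

used_gib = cache["bytes"] / 1024**3
print(f"cache usage: {used_gib:.1f} GiB across {cache['items']:,} items")
if cache["bytes"] > assumed_limit:
    print("cache is well above the assumed default limit; "
          "confirm mds_cache_memory_limit before tuning clients")
```

If the limit really is still at the default, a 7+ GiB cache pool would itself be worth investigating.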


-----Original Message-----
Subject: [ceph-users] Re: Luminous 12.2.12 "clients failing to respond 
to capability release"


On the client I have this 

[@~]# cat /proc/sys/fs/file-nr
10368      0     381649
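For what it's worth, the three file-nr fields are allocated file handles, allocated-but-unused handles, and the system-wide maximum (fs.file-max), so this client is nowhere near handle exhaustion. A small sketch reading the line quoted above:

```python
# Sketch: interpret /proc/sys/fs/file-nr (allocated, unused, max).
line = "10368      0     381649"  # value from the client above
allocated, unused, maximum = (int(x) for x in line.split())
in_use = allocated - unused
print(f"{in_use} file handles in use out of {maximum} "
      f"({in_use / maximum:.1%})")
```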

-----Original Message-----

Subject: [ceph-users] Luminous 12.2.12 "clients failing to respond to 
capability release"


I am getting this error 

In two sessions[0] num_caps is high (I assume the error refers to 
num_caps). I am using a default Luminous install and a default CentOS 7 
with the stock 3.10 kernel. 

Do I really still need to switch to a non-stock kernel to resolve this? 
I read that in posts from 2016 and 2019, but I also saw some posts about 
tuning the MDS.


[0]
ceph daemon mds.c session ls


    {
        "id": 3634861,
        "num_leases": 0,
        "num_caps": 2183,
        "state": "open",
        "request_load_avg": 181,
        "uptime": 2545057.775869,
        "replay_requests": 0,
        "completed_requests": 0,
        "reconnecting": false,


    {
        "id": 3644756,
        "num_leases": 0,
        "num_caps": 2080,
        "state": "open",
        "request_load_avg": 32,
        "uptime": 2545057.775869,
        "replay_requests": 0,
        "completed_requests": 0,
        "reconnecting": false,
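To find the worst offenders when there are more sessions than the two shown here, the session ls JSON can be ranked by num_caps. A sketch, with a sample trimmed from the output above hardcoded in place of the real command output:

```python
import json

# Sketch: rank clients by num_caps from `ceph daemon mds.c session ls`.
# The sample is trimmed from the output above; in practice feed in the
# real JSON (e.g. via subprocess) instead of hardcoding it.
sample = """[
  {"id": 3634861, "num_caps": 2183, "state": "open"},
  {"id": 3644756, "num_caps": 2080, "state": "open"}
]"""

sessions = json.loads(sample)
for s in sorted(sessions, key=lambda s: s["num_caps"], reverse=True):
    print(f"client {s['id']}: {s['num_caps']} caps, state={s['state']}")
```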
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io