Thanks for your reply.

Yes, I have already set it:

[mds]
mds_max_caps_per_client = 10485760     # default is 1048576


I think the current value is already big enough per client. Do I need to
keep increasing it?
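
Before raising it again, I will double-check on one MDS whether the running
daemon actually recognizes the option and how many caps each session is
holding. A rough sketch of what I have in mind (mds.<name> is just a
placeholder, and I have not confirmed the exact output format on 12.2.8):

    # does the running MDS know this option at all?
    ceph daemon mds.<name> config get mds_max_caps_per_client

    # per-session cap counts as seen by the MDS
    ceph daemon mds.<name> session ls | grep -E '"id"|"num_caps"'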

Thanks.

Patrick Donnelly <pdonn...@redhat.com> wrote on Sat, Oct 19, 2019 at 6:30 AM:

> Hello Lei,
>
> On Thu, Oct 17, 2019 at 8:43 PM Lei Liu <liul.st...@gmail.com> wrote:
> >
> > Hi cephers,
> >
> > We have some Ceph clusters using CephFS in production (mounted with the
> > kernel client), but several clients often keep a lot of caps (millions)
> > unreleased.
> > I know this is because the clients fail to complete the cache release;
> > errors might have been encountered, but there are no logs.
> >
> > client kernel version is 3.10.0-957.21.3.el7.x86_64
> > ceph version is mostly v12.2.8
> >
> > ceph status shows:
> >
> > x clients failing to respond to cache pressure
> >
> > client kernel debug shows:
> >
> > # cat /sys/kernel/debug/ceph/a00cc99c-f9f9-4dd9-9281-43cd12310e41.client11291811/caps
> > total 23801585
> > avail 1074
> > used 23800511
> > reserved 0
> > min 1024
> >
> > mds config:
> > [mds]
> > mds_max_caps_per_client = 10485760
> > # 50G
> > mds_cache_memory_limit = 53687091200
> >
> > I want to know whether some Ceph configuration can solve this problem?
>
> mds_max_caps_per_client is new in Luminous 12.2.12. See [1]. You need
> to upgrade.
>
> [1] https://tracker.ceph.com/issues/38130
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Senior Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
>
