Re: [ceph-users] kernel cephfs - too many caps used by client

2019-10-24 Thread Patrick Donnelly
It's not clear to me what the problem is. Please try increasing the
debugging on your MDS and share a snippet (privately to me, if you
wish). Other information would also be helpful, such as `ceph status`
output and what kind of workloads these clients are running.
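
For reference, a minimal way to gather that might look like the following;
"mds.a" is only a placeholder for your active MDS daemon name, and debug
level 10 is just a reasonable starting point:

# raise MDS log verbosity at runtime (the default is 1/5; set it back when done)
ceph tell mds.a injectargs '--debug_mds 10 --debug_ms 1'

# overall cluster state
ceph status

# run on the MDS host: per-client session list, including how many caps each holds
ceph daemon mds.a session ls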

On Fri, Oct 18, 2019 at 7:22 PM Lei Liu  wrote:
>
> Only the OSDs are on v12.2.8; all of the MDS and MON daemons are on v12.2.12.
>
> # ceph versions
> {
>     "mon": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 3
>     },
>     "mgr": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 4
>     },
>     "osd": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 24,
>         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 203
>     },
>     "mds": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 5
>     },
>     "rgw": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 1
>     },
>     "overall": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 37,
>         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 203
>     }
> }
>
> On Sat, Oct 19, 2019 at 10:09 AM Lei Liu  wrote:
>>
>> Thanks for your reply.
>>
>> Yes, I have already set it:
>>
>>> [mds]
>>> mds_max_caps_per_client = 10485760 # default is 1048576
>>
>>
>> I think the current value is already large enough per client. Do I need to
>> increase it further?
>>
>> Thanks.
>>
>> On Sat, Oct 19, 2019 at 6:30 AM Patrick Donnelly  wrote:
>>>
>>> Hello Lei,
>>>
>>> On Thu, Oct 17, 2019 at 8:43 PM Lei Liu  wrote:
>>> >
>>> > Hi cephers,
>>> >
>>> > We have some Ceph clusters that use CephFS in production (mounted with the
>>> > kernel client), but several clients often keep a large number of caps
>>> > (millions) unreleased.
>>> > I know this is because the clients fail to complete the cache release;
>>> > errors may have occurred, but there are no logs.
>>> >
>>> > client kernel version is 3.10.0-957.21.3.el7.x86_64
>>> > ceph version is mostly v12.2.8
>>> >
>>> > ceph status shows:
>>> >
>>> > x clients failing to respond to cache pressure
>>> >
>>> > client kernel debug shows:
>>> >
>>> > # cat 
>>> > /sys/kernel/debug/ceph/a00cc99c-f9f9-4dd9-9281-43cd12310e41.client11291811/caps
>>> > total 23801585
>>> > avail 1074
>>> > used 23800511
>>> > reserved 0
>>> > min 1024
>>> >
>>> > mds config:
>>> > [mds]
>>> > mds_max_caps_per_client = 10485760
>>> > # 50G
>>> > mds_cache_memory_limit = 53687091200
>>> >
>>> > I want to know whether any Ceph configuration settings can solve this problem.
>>>
>>> mds_max_caps_per_client is new in Luminous 12.2.12. See [1]. You need
>>> to upgrade.
>>>
>>> [1] https://tracker.ceph.com/issues/38130
>>>
>>> --
>>> Patrick Donnelly, Ph.D.
>>> He / Him / His
>>> Senior Software Engineer
>>> Red Hat Sunnyvale, CA
>>> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>>>


-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D



Re: [ceph-users] kernel cephfs - too many caps used by client

2019-10-18 Thread Lei Liu
Only the OSDs are on v12.2.8; all of the MDS and MON daemons are on v12.2.12.

# ceph versions
{
    "mon": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 4
    },
    "osd": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 24,
        "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 203
    },
    "mds": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 5
    },
    "rgw": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 1
    },
    "overall": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 37,
        "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 203
    }
}
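
If it helps, the releases reported by the connected clients can also be listed, e.g.:

ceph features    # summarizes daemons and connected clients by reported feature release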

On Sat, Oct 19, 2019 at 10:09 AM Lei Liu  wrote:

> Thanks for your reply.
>
> Yes, I have already set it:
>
> [mds]
>> mds_max_caps_per_client = 10485760 # default is 1048576
>
>
> I think the current value is already large enough per client. Do I need
> to increase it further?
>
> Thanks.
>
> On Sat, Oct 19, 2019 at 6:30 AM Patrick Donnelly  wrote:
>
>> Hello Lei,
>>
>> On Thu, Oct 17, 2019 at 8:43 PM Lei Liu  wrote:
>> >
>> > Hi cephers,
>> >
>> > We have some Ceph clusters that use CephFS in production (mounted with the
>> > kernel client), but several clients often keep a large number of caps
>> > (millions) unreleased.
>> > I know this is because the clients fail to complete the cache release;
>> > errors may have occurred, but there are no logs.
>> >
>> > client kernel version is 3.10.0-957.21.3.el7.x86_64
>> > ceph version is mostly v12.2.8
>> >
>> > ceph status shows:
>> >
>> > x clients failing to respond to cache pressure
>> >
>> > client kernel debug shows:
>> >
>> > # cat
>> /sys/kernel/debug/ceph/a00cc99c-f9f9-4dd9-9281-43cd12310e41.client11291811/caps
>> > total 23801585
>> > avail 1074
>> > used 23800511
>> > reserved 0
>> > min 1024
>> >
>> > mds config:
>> > [mds]
>> > mds_max_caps_per_client = 10485760
>> > # 50G
>> > mds_cache_memory_limit = 53687091200
>> >
>> > I want to know whether any Ceph configuration settings can solve this problem.
>>
>> mds_max_caps_per_client is new in Luminous 12.2.12. See [1]. You need
>> to upgrade.
>>
>> [1] https://tracker.ceph.com/issues/38130
>>
>> --
>> Patrick Donnelly, Ph.D.
>> He / Him / His
>> Senior Software Engineer
>> Red Hat Sunnyvale, CA
>> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>>
>>


Re: [ceph-users] kernel cephfs - too many caps used by client

2019-10-18 Thread Lei Liu
Thanks for your reply.

Yes, I have already set it:

[mds]
> mds_max_caps_per_client = 10485760 # default is 1048576


I think the current value is already large enough per client. Do I need
to increase it further?
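
For reference, one way to see which clients are actually holding the caps is
via the MDS admin socket, e.g. ("mds.a" stands in for the active MDS daemon name):

# run on the MDS host: each session entry includes how many caps that client holds
ceph daemon mds.a session ls

# MDS-side counters (inode and cap totals are in the mds_mem / mds sections)
ceph daemon mds.a perf dump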

Thanks.

On Sat, Oct 19, 2019 at 6:30 AM Patrick Donnelly  wrote:

> Hello Lei,
>
> On Thu, Oct 17, 2019 at 8:43 PM Lei Liu  wrote:
> >
> > Hi cephers,
> >
> > We have some Ceph clusters that use CephFS in production (mounted with the
> > kernel client), but several clients often keep a large number of caps
> > (millions) unreleased.
> > I know this is because the clients fail to complete the cache release;
> > errors may have occurred, but there are no logs.
> >
> > client kernel version is 3.10.0-957.21.3.el7.x86_64
> > ceph version is mostly v12.2.8
> >
> > ceph status shows:
> >
> > x clients failing to respond to cache pressure
> >
> > client kernel debug shows:
> >
> > # cat
> /sys/kernel/debug/ceph/a00cc99c-f9f9-4dd9-9281-43cd12310e41.client11291811/caps
> > total 23801585
> > avail 1074
> > used 23800511
> > reserved 0
> > min 1024
> >
> > mds config:
> > [mds]
> > mds_max_caps_per_client = 10485760
> > # 50G
> > mds_cache_memory_limit = 53687091200
> >
> > I want to know whether any Ceph configuration settings can solve this problem.
>
> mds_max_caps_per_client is new in Luminous 12.2.12. See [1]. You need
> to upgrade.
>
> [1] https://tracker.ceph.com/issues/38130
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Senior Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
>


Re: [ceph-users] kernel cephfs - too many caps used by client

2019-10-18 Thread Patrick Donnelly
Hello Lei,

On Thu, Oct 17, 2019 at 8:43 PM Lei Liu  wrote:
>
> Hi cephers,
>
> We have some Ceph clusters that use CephFS in production (mounted with the
> kernel client), but several clients often keep a large number of caps
> (millions) unreleased.
> I know this is because the clients fail to complete the cache release;
> errors may have occurred, but there are no logs.
>
> client kernel version is 3.10.0-957.21.3.el7.x86_64
> ceph version is mostly v12.2.8
>
> ceph status shows:
>
> x clients failing to respond to cache pressure
>
> client kernel debug shows:
>
> # cat 
> /sys/kernel/debug/ceph/a00cc99c-f9f9-4dd9-9281-43cd12310e41.client11291811/caps
> total 23801585
> avail 1074
> used 23800511
> reserved 0
> min 1024
>
> mds config:
> [mds]
> mds_max_caps_per_client = 10485760
> # 50G
> mds_cache_memory_limit = 53687091200
>
> I want to know whether any Ceph configuration settings can solve this problem.

mds_max_caps_per_client is new in Luminous 12.2.12. See [1]. You need
to upgrade.

[1] https://tracker.ceph.com/issues/38130
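
Once the MDS daemons are on 12.2.12 you can double-check that the option is
live on the running daemon, for example ("mds.a" being your daemon name):

ceph daemon mds.a version                             # confirm the daemon is actually running 12.2.12
ceph daemon mds.a config get mds_max_caps_per_client  # confirm the option is recognized and set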

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D



[ceph-users] kernel cephfs - too many caps used by client

2019-10-17 Thread Lei Liu
Hi cephers,

We have some Ceph clusters that use CephFS in production (mounted with the
kernel client), but several clients often keep a large number of caps
(millions) unreleased.
I know this is because the clients fail to complete the cache release;
errors may have occurred, but there are no logs.

client kernel version is 3.10.0-957.21.3.el7.x86_64
ceph version is mostly v12.2.8

ceph status shows:

x clients failing to respond to cache pressure

client kernel debug shows:

# cat
/sys/kernel/debug/ceph/a00cc99c-f9f9-4dd9-9281-43cd12310e41.client11291811/caps
total 23801585
avail 1074
used 23800511
reserved 0
min 1024

mds config:
[mds]
mds_max_caps_per_client = 10485760
# 50G
mds_cache_memory_limit = 53687091200

I want to know whether any Ceph configuration settings can solve this problem.

Any suggestions?
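
Is manually dropping the client's dentry/inode cache (which, as far as I
understand, makes the kernel client release the corresponding caps back to
the MDS) the only workaround? For example, on an affected client host:

sync
echo 2 > /proc/sys/vm/drop_caches   # frees reclaimable dentries and inodes on this host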

Thanks.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com