> On Feb 3, 2016, at 17:39, Michael Metz-Martini | SpeedPartner GmbH 
> <m...@speedpartner.de> wrote:
> 
> Hi,
> 
> On 03.02.2016 at 10:26, Gregory Farnum wrote:
>> On Tue, Feb 2, 2016 at 10:09 PM, Michael Metz-Martini | SpeedPartner
>> GmbH <m...@speedpartner.de> wrote:
>>> Putting some higher load on the cluster via CephFS leads to messages
>>> like "mds0: Client X failing to respond to capability release" after a
>>> few minutes. Requests from other clients start to block after a while.
>>> Rebooting the named client resolves the issue.
>> There are some bugs around this functionality, but I *think* your
>> clients are new enough it shouldn't be an issue.
>> However, it's entirely possible your clients are actually making use
>> of enough inodes that the MDS server is running into its default
>> limits. If your MDS has memory available, you probably want to
>> increase the cache size from its default of 100k inodes (mds cache size = X).
> mds_cache_size is already set to 4000000, so it's a lot higher than
> usual (Google said I should increase it ...).
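> 
> For reference, a sketch of how we apply it (the [mds] section lives in
> the usual /etc/ceph/ceph.conf; the mds id in the runtime command is a
> placeholder for ours):
> 
>   # ceph.conf on the MDS host
>   [mds]
>   mds cache size = 4000000
> 
>   # or injected at runtime, without a restart:
>   ceph tell mds.0 injectargs '--mds-cache-size 4000000'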
> 
>> Or maybe your kernels are too old; Zheng would know.
> We're already far away from the CentOS dist kernel, but upgrading the
> clients to 4.4.x should be possible if that might help.
> 

The MDS log should contain messages like:

client.XXXX isn't responding to mclientcaps(revoke)

Please send these messages to us.
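
For example, you could collect them with something like this (assuming
the default log location; the mds name in the file name may differ on
your setup):

  grep "isn't responding to mclientcaps" /var/log/ceph/ceph-mds.*.log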

Regards
Yan, Zheng



> -- 
> Kind regards
> Michael
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
