I've seen issues with client reconnects on older kernels, yeah. They
sometimes get stuck after a network failure.
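
Since the cutoff is the client kernel version, a quick sanity check on each
client host can help. This is just a sketch (the "4.4.0" value is a
placeholder; on a real client you'd pass in "$(uname -r)"); it compares a
version string against the 4.14 recommendation using `sort -V`:

```shell
#!/bin/sh
# Sketch: check whether a client kernel meets the >= 4.14 recommendation
# for multiple active MDSes. "4.4.0" below is a placeholder; on a real
# client you'd use "$(uname -r)" instead.
kernel_ok() {
  required="4.14"
  current="$1"
  # sort -V orders version strings numerically; if the required version
  # sorts first (or is equal), the client kernel is new enough.
  [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]
}

if kernel_ok "4.4.0"; then
  echo "OK for multi-MDS"
else
  echo "older than 4.14: prefer a single active MDS until clients are upgraded"
fi
```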

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Apr 30, 2020 at 10:19 PM Gregory Farnum <gfar...@redhat.com> wrote:

> On Tue, Apr 28, 2020 at 11:52 AM Robert LeBlanc <rob...@leblancnet.us>
> wrote:
> >
> > In the Nautilus manual it recommends >= 4.14 kernel for multiple active
> > MDSes. What are the potential issues for running the 4.4 kernel with
> > multiple MDSes? We are in the process of upgrading the clients, but at
> > times overrun the capacity of a single MDS server.
>
> I don't think this is documented specifically; you'd have to go
> through the git logs. I talked with the team: 4.14 was the current
> upstream kernel when we marked multi-MDS as stable, and it gets the
> general stream of ongoing fixes that always applies there.
>
> There aren't any known issues that will cause file consistency to
> break or anything; I'd be more worried about clients having issues
> reconnecting when their network blips or an MDS fails over.
> -Greg
>
> >
> > MULTIPLE ACTIVE METADATA SERVERS
> > <https://docs.ceph.com/docs/nautilus/cephfs/kernel-features/#multiple-active-metadata-servers>
> >
> > The feature has been supported since the Luminous release. It is
> > recommended to use Linux kernel clients >= 4.14 when there are multiple
> > active MDS.
> > Thank you,
> > Robert LeBlanc
> > ----------------
> > Robert LeBlanc
> > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >