On Fri, Apr 5, 2019 at 9:55 AM Deepshikha Khandelwal <dkhan...@redhat.com> wrote:
> On Fri, Apr 5, 2019 at 12:16 PM Michael Scherer <msche...@redhat.com> wrote:
>> On Thursday, April 4, 2019 at 18:24 +0200, Michael Scherer wrote:
>>> On Thursday, April 4, 2019 at 19:10 +0300, Yaniv Kaul wrote:
>>>> I'm not convinced this is solved. Just had what I believe is a
>>>> similar failure:
>>>>
>>>> *00:12:02.532* A dependency job for rpc-statd.service failed. See 'journalctl -xe' for details.
>>>> *00:12:02.532* mount.nfs: rpc.statd is not running but is required for remote locking.
>>>> *00:12:02.532* mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
>>>> *00:12:02.532* mount.nfs: an incorrect mount option was specified
>>>>
>>>> (of course, it can always be my patch!)
>>>>
>>>> https://build.gluster.org/job/centos7-regression/5384/console
>>>
>>> Same issue, different builder (206). I will check them all, as the
>>> issue is more widespread than I expected (or it popped up since the last
>>> time I checked).
>>
>> Deepshikha noticed that the issue came back on one server
>> (builder202) after a reboot, so the rpcbind issue is not related to the
>> network initscript one, and the RCA continues.
>>
>> We are looking for another workaround that involves fiddling with the
>> socket (until we find out why it uses IPv6 at boot, but not afterwards,
>> when IPv6 is disabled).
>>
>> Maybe we could run the test suite on a node without all the IPv6
>> disabling, to see if that causes an issue?
>
> Has our regression test suite started supporting IPv6 now? If not, this
> investigation would lead to further issues.

I suspect not yet. But we would certainly like to at some point, to ensure we run with IPv6 as well!
Y.

>> --
>> Michael Scherer
>> Sysadmin, Community Infrastructure and Platform, OSAS
>>
>> _______________________________________________
>> Gluster-infra mailing list
>> gluster-in...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-infra
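For context on the socket workaround being discussed: on systems where IPv6 is disabled via sysctl, the stock rpcbind.socket unit can fail at boot because it tries to bind the IPv6 wildcard address, which then drags down rpc-statd and any NFS mount needing remote locking. A sketch of the kind of systemd drop-in that is sometimes used for this (the drop-in path and exact listener list below are assumptions, not what we have deployed on the builders):

```ini
# /etc/systemd/system/rpcbind.socket.d/no-ipv6.conf  (hypothetical path)
# Override rpcbind.socket so it only binds IPv4, avoiding the boot-time
# failure when net.ipv6.conf.all.disable_ipv6=1 is set.
[Socket]
# An empty assignment clears the listener lists inherited from the stock unit
ListenStream=
ListenDatagram=
# Re-add only the IPv4 and local listeners
ListenStream=0.0.0.0:111
ListenDatagram=0.0.0.0:111
ListenStream=/run/rpcbind.sock
```

After dropping the file in place, `systemctl daemon-reload` followed by `systemctl restart rpcbind.socket` would pick it up. This would only paper over the symptom, though; it does not explain why the unit binds IPv6 at boot but not after a manual restart.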
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel