> Andrew Beattie
> Software Defined Storage - IT Specialist
>
> Phone: 614-2133-7927
> E-mail: abeat...@au1.ibm.com
>
> ----- Original message -----
> From: Matt Weil <mw...@wustl.edu>
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> To: <gpfsug-discuss@spectrumscale.org>
> nodes are RH/SLES clients
Could you elaborate further?
With Regards,
Ravi K Komanduri
From: Matt Weil <mw...@wustl.edu>
To: <gpfsug-discuss@spectrumscale.org>
Date: 01/04/2017 07:00 AM
Subject: Re: [gpfsug-discuss] CES nodes mount nfsv3 not responding
Sent by: gpfsug-discuss-boun...@spectrumscale.org
Andrew Beattie
> Software Defined Storage - IT Specialist
> Phone: 614-2133-7927
> E-mail: abeat...@au1.ibm.com <mailto:abeat...@au1.ibm.com>
>
>
>
> ----- Original message -----
> From: Matt Weil <mw...@wustl.edu>
>
--
> From: Matt Weil <mw...@wustl.edu>
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> To: <gpfsug-discuss@spectrumscale.org>
> Cc:
> Subject: [gpfsug-discuss] CES nodes mount nfsv3 not responding
> Date: Wed, Jan 4, 2017 6:27 AM
>
Andrew,
You may have been stung by:
2.34 What considerations are there when running on SELinux?
https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html?view=kc#selinux
I've seen this issue on a customer site myself.
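If SELinux is the suspect, a quick way to confirm the node's mode is to read the enforcement flag directly (a portable sketch; getenforce is not always installed, and the /sys/fs/selinux path assumes a reasonably modern RHEL/SLES kernel):

```shell
# Read the SELinux enforcement flag from sysfs:
# 1 = enforcing, 0 = permissive; the file is absent when SELinux
# is disabled or not built into the kernel.
if [ -r /sys/fs/selinux/enforce ]; then
    if [ "$(cat /sys/fs/selinux/enforce)" = "1" ]; then
        echo "SELinux: enforcing"
    else
        echo "SELinux: permissive"
    fi
else
    echo "SELinux: disabled or not present"
fi
```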
Matt,
Could you increase the logging verbosity and check
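For reference, raising Ganesha's verbosity amounts to something like the following LOG stanza in ganesha.conf (a sketch only; on CES clusters the file is generated, so the change would normally be applied through the cluster tooling, e.g. something like mmnfs config change LOG_LEVEL=FULL_DEBUG, whose exact subcommand spelling varies between releases, rather than a hand edit):

```
# Sketch: ganesha.conf LOG stanza at maximum verbosity.
# FULL_DEBUG is extremely chatty; drop back to EVENT once done.
LOG {
    Default_Log_Level = FULL_DEBUG;
}
```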
Matt,
What operating system are you running?
I have an open PMR at present with something very similar
Whenever we publish an NFS export via the protocol nodes, the NFS service stops, although we have no issues publishing SMB exports.
I'm waiting on some testing by the customer but L3 support
On Tue, 03 Jan 2017 14:27:17 -0600, Matt Weil said:
> this follows the IP, whatever node the IP lands on. The ganesha.nfsd
> process seems to stop working. Any ideas? There is nothing helpful in
> the logs.
Does it in fact "stop working", or are you just having a mount issue?
This follows the IP, whatever node the IP lands on. The ganesha.nfsd process seems to stop working. Any ideas? There is nothing helpful in the logs.
time mount ces200:/vol/aggr14/temp403 /mnt/test
mount.nfs: mount system call failed

real    1m0.000s
user    0m0.000s
sys     0m0.010s
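When a v3 mount hangs like this, one way to separate "Ganesha is dead" from a network or export problem is to probe the RPC services on the CES address directly (a sketch; ces200 is the address from the mount example above):

```shell
# Probe the NFSv3-side RPC services on the CES node. If ganesha.nfsd
# has stopped responding, these probes fail even though the node pings.
command -v rpcinfo >/dev/null 2>&1 || { echo "rpcinfo not installed"; exit 0; }
for svc in nfs mountd; do
    if rpcinfo -t ces200 "$svc" 3 >/dev/null 2>&1; then
        echo "$svc v3: responding"
    else
        echo "$svc v3: NOT responding"
    fi
done
```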