Re: [Gluster-users] NFS not start on localhost

2014-11-07 Thread Jason Russler
I've run into this as well, after installing hosted-engine for oVirt on a
gluster volume. The only way to get things working again for me was to manually
de-register nlockmgr from the portmapper (rpcinfo -d ...) and then restart
glusterd. After that, gluster's NFS registers successfully. I don't really get
what's going on, though.
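
For reference, the de-registration can be done like this (a sketch only:
100021 is the standard nlockmgr program number, and the version numbers
should match whatever your own rpcinfo -p output lists):

# rpcinfo -d 100021 1
# rpcinfo -d 100021 3
# rpcinfo -d 100021 4
# systemctl restart glusterd.service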

- Original Message -
From: Sven Achtelik sven.achte...@mailpool.us
To: gluster-users@gluster.org
Sent: Friday, November 7, 2014 5:28:32 PM
Subject: Re: [Gluster-users] NFS not start on localhost



Hi everyone,

I’m facing the exact same issue on my installation. The nfs.log entries
indicate that something is blocking the gluster NFS server from registering
with rpcbind.

[root@ovirt-one ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd
    100003    3   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100021    3   udp  34343  nlockmgr
    100021    4   udp  34343  nlockmgr
    100021    3   tcp  54017  nlockmgr
    100021    4   tcp  54017  nlockmgr
    100024    1   udp  39097  status
    100024    1   tcp  53471  status
    100021    1   udp    715  nlockmgr

I’m sure that I’m not using the system NFS server, and I didn’t mount any NFS
share.
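
A quick way to double-check that (a sketch; nfs-server.service is the
kernel NFS unit name on EL7, adjust if yours differs) is to confirm the
kernel NFS services are inactive while the nlockmgr entries above
(program 100021) are still registered:

# systemctl status nfs-server.service nfs-lock.service
# rpcinfo -p | grep 100021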



@Tibor: Did you solve that issue somehow?

Best,

Sven


Hi,
Thank you for your reply.
I followed your recommendations, but nothing changed.
There is nothing new in the nfs.log.
[root@node0 glusterfs]# reboot
Connection to 172.16.0.10 closed by remote host.
Connection to 172.16.0.10 closed.
[tdemeter@sirius-31 ~]$ ssh root@172.16.0.10
root@172.16.0.10's password:
Last login: Mon Oct 20 11:02:13 2014 from 192.168.133.106
[root@node0 ~]# systemctl status nfs.target
nfs.target - Network File System Server
Loaded: loaded (/usr/lib/systemd/system/nfs.target; disabled)
Active: inactive (dead)
[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process                              Port   Online   Pid
------------------------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0    50160  Y        3271
Brick gs01.itsmart.cloud:/gluster/engine1    50160  Y        595
NFS Server on localhost                      N/A    N        N/A
Self-heal Daemon on localhost                N/A    Y        3286
NFS Server on gs01.itsmart.cloud             2049   Y        6951
Self-heal Daemon on gs01.itsmart.cloud       N/A    Y        6958

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
[root@node0 ~]# systemctl status
Display all 262 possibilities? (y or n)
[root@node0 ~]# systemctl status nfs-lock
nfs-lock.service - NFS file locking service.
Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
Active: inactive (dead)
[root@node0 ~]# systemctl stop nfs-lock
[root@node0 ~]# systemctl restart gluster
glusterd.service    glusterfsd.service  gluster.mount
[root@node0 ~]# systemctl restart gluster
glusterd.service    glusterfsd.service  gluster.mount
[root@node0 ~]# systemctl restart glusterfsd.service
[root@node0 ~]# systemctl restart glusterd.service
[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process                              Port   Online   Pid
------------------------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0    50160  Y        5140
Brick gs01.itsmart.cloud:/gluster/engine1    50160  Y        2037
NFS Server on localhost                      N/A    N        N/A
Self-heal Daemon on localhost                N/A    N        N/A
NFS Server on gs01.itsmart.cloud             2049   Y        6951
Self-heal Daemon on gs01.itsmart.cloud       N/A    Y        6958
Any other idea? 
Tibor 
- Original Message -
 On Mon, Oct 20, 2014 at 09:04:28AM +0200, Demeter Tibor wrote: 
  Hi, 
  
  This is the full nfs.log after delete and reboot. 
  It refers to a portmap registration problem. 
  
  [root@node0 glusterfs]# cat nfs.log 
  [2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main] 
  0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 
  (/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p 
  /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S 
  /var/run/567e0bba7ad7102eae3049e2ad6c3ed7.socket) 
  [2014-10-20 06:48:43.22] I [socket.c:3561:socket_init] 
  0-socket.glusterfsd: SSL support is NOT enabled 
  [2014-10-20 06:48:43.224475] I [socket.c:3576:socket_init] 
  0-socket.glusterfsd: using system polling thread 
  

Re: [Gluster-users] NFS not start on localhost

2014-11-07 Thread Jason Russler
Thanks, Niels. Yes, CentOS 7. It's been driving me nuts. Much better.

- Original Message -
From: Niels de Vos nde...@redhat.com
To: Jason Russler jruss...@redhat.com
Cc: Sven Achtelik sven.achte...@mailpool.us, gluster-users@gluster.org
Sent: Friday, November 7, 2014 9:32:11 PM
Subject: Re: [Gluster-users] NFS not start on localhost

On Fri, Nov 07, 2014 at 07:51:47PM -0500, Jason Russler wrote:
 I've run into this as well, after installing hosted-engine for oVirt
 on a gluster volume. The only way to get things working again for me
 was to manually de-register nlockmgr from the portmapper (rpcinfo -d ...)
 and then restart glusterd. After that, gluster's NFS registers
 successfully. I don't really get what's going on, though.

Is this on RHEL/CentOS 7? A couple of days back someone on IRC had an
issue with this as well. We found out that rpcbind.service uses the
-w option by default (for warm restarts): registered services are
written to a cache file, and upon reboot those services get
re-registered automatically, even when they are not running.
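
You can check whether your unit passes -w (a quick sketch; the unit
path below is where EL7 ships it, adjust if your distribution differs):

# grep ExecStart /usr/lib/systemd/system/rpcbind.service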

The solution was something like this:

# cp /usr/lib/systemd/system/rpcbind.service /etc/systemd/system/
* edit /etc/systemd/system/rpcbind.service and remove the -w
  option
# systemctl daemon-reload
# systemctl restart rpcbind.service
# systemctl restart glusterd.service
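
An equivalent fix that survives package updates is a systemd drop-in
(a sketch, assuming the stock unit starts rpcbind with -w as described
above; the drop-in resets ExecStart and starts rpcbind without it):

# mkdir -p /etc/systemd/system/rpcbind.service.d
# printf '[Service]\nExecStart=\nExecStart=/sbin/rpcbind\n' \
    > /etc/systemd/system/rpcbind.service.d/no-warm-start.conf
# systemctl daemon-reload
# systemctl restart rpcbind.service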

I am not sure why -w was added by default, but it does not seem to
play nice with Gluster/NFS. Gluster/NFS does not want to break other
registered services, so it bails out when something is already
registered.

HTH,
Niels

 