Re: [Gluster-users] NFS not start on localhost

2014-11-07 Thread Sven Achtelik
Hi everyone,

I’m facing the exact same issue on my installation. The nfs.log entries indicate
that something is blocking the Gluster NFS server from registering with
rpcbind.

[root@ovirt-one ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd
    100003    3   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100021    3   udp  34343  nlockmgr
    100021    4   udp  34343  nlockmgr
    100021    3   tcp  54017  nlockmgr
    100021    4   tcp  54017  nlockmgr
    100024    1   udp  39097  status
    100024    1   tcp  53471  status
    100021    1   udp    715  nlockmgr

I’m sure that I’m not using the system NFS server, and I didn’t mount any NFS
share.

@Tibor: Did you solve that issue somehow?

Best,

Sven



Hi,

Thank you for your reply.

I did your recommendations, but there are no changes.

In the nfs.log there are no new things.

[root@node0 glusterfs]# reboot
Connection to 172.16.0.10 closed by remote host.
Connection to 172.16.0.10 closed.
[tdemeter@sirius-31 ~]$ ssh root@172.16.0.10
root@172.16.0.10's password:
Last login: Mon Oct 20 11:02:13 2014 from 192.168.133.106
[root@node0 ~]# systemctl status nfs.target
nfs.target - Network File System Server
   Loaded: loaded (/usr/lib/systemd/system/nfs.target; disabled)
   Active: inactive (dead)

[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0       50160   Y       3271
Brick gs01.itsmart.cloud:/gluster/engine1       50160   Y       595
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     Y       3286
NFS Server on gs01.itsmart.cloud                2049    Y       6951
Self-heal Daemon on gs01.itsmart.cloud          N/A     Y       6958

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

[root@node0 ~]# systemctl status
Display all 262 possibilities? (y or n)
[root@node0 ~]# systemctl status nfs-lock
nfs-lock.service - NFS file locking service.
   Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
   Active: inactive (dead)

[root@node0 ~]# systemctl stop nfs-lock
[root@node0 ~]# systemctl restart gluster
glusterd.service    glusterfsd.service  gluster.mount
[root@node0 ~]# systemctl restart gluster
glusterd.service    glusterfsd.service  gluster.mount
[root@node0 ~]# systemctl restart glusterfsd.service
[root@node0 ~]# systemctl restart glusterd.service
[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0       50160   Y       5140
Brick gs01.itsmart.cloud:/gluster/engine1       50160   Y       2037
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A
NFS Server on gs01.itsmart.cloud                2049    Y       6951
Self-heal Daemon on gs01.itsmart.cloud          N/A     Y       6958

Any other idea?

Tibor

- Original message -

 On Mon, Oct 20, 2014 at 09:04:28AM +0200, Demeter Tibor wrote:

  Hi,

  This is the full nfs.log after delete & reboot.
  It refers to a portmap registration problem.

  [root at
Re: [Gluster-users] NFS not start on localhost

2014-11-07 Thread Jason Russler
I've run into this as well, after installing hosted-engine for oVirt on a
gluster volume. The only way to get things working again for me was to manually
de-register nlockmgr from the portmapper (rpcinfo -d ...) and then restart
glusterd. After that, Gluster's NFS registers successfully. I don't really get
what's going on, though.
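A hedged sketch of that manual workaround, shown as a dry run (`echo` prefixes)
so the commands can be reviewed before running them as root; the nlockmgr
versions (1, 3, 4) are assumptions read off the `rpcinfo -p` listing quoted in
this thread:

```shell
# Dry-run sketch: deregister each nlockmgr (RPC program 100021) version
# from rpcbind, then restart glusterd so Gluster/NFS can re-register.
# Drop the leading "echo" to actually execute (requires root).
for vers in 1 3 4; do
    echo rpcinfo -d 100021 "$vers"
done
echo systemctl restart glusterd.service
```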

- Original Message -
From: Sven Achtelik sven.achte...@mailpool.us
To: gluster-users@gluster.org
Sent: Friday, November 7, 2014 5:28:32 PM
Subject: Re: [Gluster-users] NFS not start on localhost



[quoted message trimmed]

Re: [Gluster-users] NFS not start on localhost

2014-11-07 Thread Niels de Vos
On Fri, Nov 07, 2014 at 07:51:47PM -0500, Jason Russler wrote:
 I've run into this as well. After installing hosted-engine for ovirt
 on a gluster volume. The only way to get things working again for me
 was to manually de-register (rpcinfo -d ...) nlockmgr from the
 portmapper and then restart glusterd. Then gluster's NFS successfully
 registers. I don't really get what's going on though.

Is this on RHEL/CentOS 7? A couple of days back someone on IRC had an
issue with this as well. We found out that rpcbind.service uses the
-w option by default (for warm-restart). Registered services are
written to a cache file, and upon reboot those services get
re-registered automatically, even when not running.

The solution was something like this:

# cp /usr/lib/systemd/system/rpcbind.service /etc/systemd/system/
* edit /etc/systemd/system/rpcbind.service and remove the -w
  option
# systemctl daemon-reload
# systemctl restart rpcbind.service
# systemctl restart glusterd.service

I am not sure why -w was added by default, but it does not seem to
play nice with Gluster/NFS. Gluster/NFS does not want to break other
registered services, so it bails out when something is registered
already.
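In script form, that edit can be rehearsed first; a minimal sketch that strips
the -w flag from a stand-in copy of the ExecStart line (the line itself is
modeled on the `systemctl status rpcbind.service` output posted elsewhere in
this thread), not from the live unit file:

```shell
# Rehearse the unit-file edit on a local stand-in before touching
# /etc/systemd/system: strip the -w (warm-restart) flag from ExecStart.
printf 'ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS}\n' > rpcbind.exec.local
sed 's/ -w / /' rpcbind.exec.local
```

On the real host the remaining steps are the daemon-reload and service
restarts listed above.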

HTH,
Niels

 
 - Original Message -
 From: Sven Achtelik sven.achte...@mailpool.us
 To: gluster-users@gluster.org
 Sent: Friday, November 7, 2014 5:28:32 PM
 Subject: Re: [Gluster-users] NFS not start on localhost
 
 
 
 [quoted message trimmed]

Re: [Gluster-users] NFS not start on localhost

2014-11-07 Thread Jason Russler
Thanks, Niels. Yes, CentOS7. It's been driving me nuts. Much better.

- Original Message -
From: Niels de Vos nde...@redhat.com
To: Jason Russler jruss...@redhat.com
Cc: Sven Achtelik sven.achte...@mailpool.us, gluster-users@gluster.org
Sent: Friday, November 7, 2014 9:32:11 PM
Subject: Re: [Gluster-users] NFS not start on localhost

[quoted message trimmed]

Re: [Gluster-users] NFS not start on localhost

2014-10-23 Thread Niels de Vos
The only way I can manage to hit this issue too is by mounting an
NFS export on the Gluster server that starts the Gluster/NFS process.
There is no crash happening on my side; Gluster/NFS just fails to
start.

Steps to reproduce:
1. mount -t nfs nas.example.net:/export /mnt
2. systemctl start glusterd

After this, the error about being unable to register NLM4 is in
/var/log/glusterfs/nfs.log.

This is expected, because the Linux kernel NFS-server requires an NLM
service in portmap/rpcbind (nlockmgr). You can verify what process
occupies the service slot in rpcbind like this:

1. list the rpc-programs and their port numbers

# rpcinfo -p

2. check the process that listens on the TCP-port for nlockmgr (port
   32770 was returned by the command from point 1)

# netstat -nlpt | grep -w 32770

If the right column in the output lists 'glusterfs', then the
Gluster/NFS process could register successfully and is handling the NLM4
calls. However, if the right column contains a single '-', the Linux
kernel module 'lockd' is handling the NLM4 calls. Gluster/NFS cannot
work together with the Linux kernel NFS client (mountpoint) or the Linux
kernel NFS server.
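The two checks can be glued together; a minimal sketch that pulls the nlockmgr
TCP port out of saved `rpcinfo -p` output (sample rows copied from this
thread — on a live host, pipe `rpcinfo -p` in directly and feed the resulting
port to `netstat -nlpt`):

```shell
# Extract the TCP port registered for nlockmgr from `rpcinfo -p` output.
# Sample pasted from this thread; live: rpcinfo -p | awk '...'
rpcinfo_sample='   program vers proto   port  service
    100021    3   tcp  54017  nlockmgr
    100021    4   tcp  54017  nlockmgr
    100003    3   tcp   2049  nfs'
printf '%s\n' "$rpcinfo_sample" |
    awk '$5 == "nlockmgr" && $3 == "tcp" { print $4; exit }'
```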

Does this help? If something is unclear, post the output of the above
commands and tell us what further details you want to see clarified.

Cheers,
Niels


On Mon, Oct 20, 2014 at 12:53:46PM +0200, Demeter Tibor wrote:
 
 [quoted message trimmed]

Re: [Gluster-users] NFS not start on localhost

2014-10-23 Thread Gene Liverman
Could you also provide the output of this command:
$ mount | column -t

--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!


On Oct 23, 2014 10:07 AM, Niels de Vos nde...@redhat.com wrote:

 [quoted message trimmed]

Re: [Gluster-users] NFS not start on localhost

2014-10-20 Thread Vijay Bellur

On 10/19/2014 06:56 PM, Niels de Vos wrote:

On Sat, Oct 18, 2014 at 01:24:12PM +0200, Demeter Tibor wrote:

Hi,

[root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log
[2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init] 0-nfs-server: 
initializing translator failed
[2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate] 0-graph: 
init failed
pending frames:
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-10-18 07:41:06
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.5.2


This definitely is a gluster/nfs issue. For whatever reason, the
gluster/nfs server crashes :-/ The log does not show enough details;
some more lines before this are needed.



I wonder if the crash is due to a cleanup after the translator 
initialization failure. The complete logs might help in understanding 
why the initialization failed.


-Vijay

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS not start on localhost

2014-10-20 Thread Demeter Tibor
Hi,

This is the full nfs.log after delete & reboot.
It refers to a portmap registration problem.

[root@node0 glusterfs]# cat nfs.log
[2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: 
Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /var/run/567e0bba7ad7102eae3049e2ad6c3ed7.socket)
[2014-10-20 06:48:43.22] I [socket.c:3561:socket_init] 0-socket.glusterfsd: 
SSL support is NOT enabled
[2014-10-20 06:48:43.224475] I [socket.c:3576:socket_init] 0-socket.glusterfsd: 
using system polling thread
[2014-10-20 06:48:43.224654] I [socket.c:3561:socket_init] 0-glusterfs: SSL 
support is NOT enabled
[2014-10-20 06:48:43.224667] I [socket.c:3576:socket_init] 0-glusterfs: using 
system polling thread
[2014-10-20 06:48:43.235876] I [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 
0-rpc-service: Configured rpc.outstanding-rpc-limit with value 16
[2014-10-20 06:48:43.254087] I [socket.c:3561:socket_init] 0-socket.nfs-server: 
SSL support is NOT enabled
[2014-10-20 06:48:43.254116] I [socket.c:3576:socket_init] 0-socket.nfs-server: 
using system polling thread
[2014-10-20 06:48:43.255241] I [socket.c:3561:socket_init] 0-socket.nfs-server: 
SSL support is NOT enabled
[2014-10-20 06:48:43.255264] I [socket.c:3576:socket_init] 0-socket.nfs-server: 
using system polling thread
[2014-10-20 06:48:43.257279] I [socket.c:3561:socket_init] 0-socket.nfs-server: 
SSL support is NOT enabled
[2014-10-20 06:48:43.257315] I [socket.c:3576:socket_init] 0-socket.nfs-server: 
using system polling thread
[2014-10-20 06:48:43.258135] I [socket.c:3561:socket_init] 0-socket.NLM: SSL 
support is NOT enabled
[2014-10-20 06:48:43.258157] I [socket.c:3576:socket_init] 0-socket.NLM: using 
system polling thread
[2014-10-20 06:48:43.293724] E [rpcsvc.c:1314:rpcsvc_program_register_portmap] 
0-rpc-service: Could not register with portmap
[2014-10-20 06:48:43.293760] E [nfs.c:332:nfs_init_versions] 0-nfs: Program  
NLM4 registration failed
[2014-10-20 06:48:43.293771] E [nfs.c:1312:init] 0-nfs: Failed to initialize 
protocols
[2014-10-20 06:48:43.293777] E [xlator.c:403:xlator_init] 0-nfs-server: 
Initialization of volume 'nfs-server' failed, review your volfile again
[2014-10-20 06:48:43.293783] E [graph.c:307:glusterfs_graph_init] 0-nfs-server: 
initializing translator failed
[2014-10-20 06:48:43.293789] E [graph.c:502:glusterfs_graph_activate] 0-graph: 
init failed
pending frames:
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-10-20 06:48:43
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.5.2
[root@node0 glusterfs]# systemctl status portma
portma.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
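For what it's worth, the error chain in a log like the one above can be picked
out mechanically; a small sketch over sample lines pasted from this log (on the
node itself you would read /var/log/glusterfs/nfs.log instead):

```shell
# Count the error-level ("] E [") entries in a saved nfs.log excerpt.
# The three lines below are pasted from the log above.
log='[2014-10-20 06:48:43.293724] E [rpcsvc.c:1314:rpcsvc_program_register_portmap] 0-rpc-service: Could not register with portmap
[2014-10-20 06:48:43.293760] E [nfs.c:332:nfs_init_versions] 0-nfs: Program NLM4 registration failed
[2014-10-20 06:48:43.293771] E [nfs.c:1312:init] 0-nfs: Failed to initialize protocols'
printf '%s\n' "$log" | grep -c '] E \['
```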



Also I have checked the rpcbind service.

[root@node0 glusterfs]# systemctl status rpcbind.service 
rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
   Active: active (running) since Mon 2014-10-20 08:48:39 CEST; 2min 52s ago
  Process: 1940 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited, 
status=0/SUCCESS)
 Main PID: 1946 (rpcbind)
   CGroup: /system.slice/rpcbind.service
   └─1946 /sbin/rpcbind -w

Oct 20 08:48:39 node0.itsmart.cloud systemd[1]: Starting RPC bind service...
Oct 20 08:48:39 node0.itsmart.cloud systemd[1]: Started RPC bind service.

The restart does not solve this problem.

I think this is the problem. Why is the portmap process shown as exited?


On node1 is ok:

[root@node1 ~]# systemctl status rpcbind.service 
rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
   Active: active (running) since Fri 2014-10-17 19:15:21 CEST; 2 days ago
 Main PID: 1963 (rpcbind)
   CGroup: /system.slice/rpcbind.service
   └─1963 /sbin/rpcbind -w

Oct 17 19:15:21 node1.itsmart.cloud systemd[1]: Starting RPC bind service...
Oct 17 19:15:21 node1.itsmart.cloud systemd[1]: Started RPC bind service.



Thanks in advance

Tibor



- Original message -
 On 10/19/2014 06:56 PM, Niels de Vos wrote:
  On Sat, Oct 18, 2014 at 01:24:12PM +0200, Demeter Tibor wrote:
  Hi,
 
  [root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log
  [2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init]
  0-nfs-server: initializing translator failed
  [2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate]
  0-graph: init failed
  pending frames:
  frame : type(0) op(0)
 
  patchset: git://git.gluster.com/glusterfs.git
  signal received: 11
  time of crash: 2014-10-18 07:41:06
  configuration details:
  argp 1
  backtrace 1
  dlfcn 1
  fdatasync 1
  libpthread 1
  llistxattr 1
  setfsid 1
  spinlock 1
  epoll.h 1
  xattr.h 1
  st_atim.tv_nsec 1
  

Re: [Gluster-users] NFS not start on localhost

2014-10-20 Thread Demeter Tibor
Also it's funny, because meanwhile portmap is listening on localhost.

[root@node0 log]# netstat -tunlp | grep 111
tcp        0      0 0.0.0.0:111     0.0.0.0:*       LISTEN      4709/rpcbind
tcp6       0      0 :::111          :::*            LISTEN      4709/rpcbind
udp        0      0 0.0.0.0:111     0.0.0.0:*                   4709/rpcbind
udp6       0      0 :::111          :::*                        4709/rpcbind

Demeter Tibor 



- Original message -
 Hi,
 
 This is the full nfs.log after a delete & reboot.
 It refers to a portmap registration problem.
 
 [root@node0 glusterfs]# cat nfs.log
 [2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main]
 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2
 (/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
 /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
 /var/run/567e0bba7ad7102eae3049e2ad6c3ed7.socket)
 [2014-10-20 06:48:43.22] I [socket.c:3561:socket_init]
 0-socket.glusterfsd: SSL support is NOT enabled
 [2014-10-20 06:48:43.224475] I [socket.c:3576:socket_init]
 0-socket.glusterfsd: using system polling thread
 [2014-10-20 06:48:43.224654] I [socket.c:3561:socket_init] 0-glusterfs: SSL
 support is NOT enabled
 [2014-10-20 06:48:43.224667] I [socket.c:3576:socket_init] 0-glusterfs: using
 system polling thread
 [2014-10-20 06:48:43.235876] I
 [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured
 rpc.outstanding-rpc-limit with value 16
 [2014-10-20 06:48:43.254087] I [socket.c:3561:socket_init]
 0-socket.nfs-server: SSL support is NOT enabled
 [2014-10-20 06:48:43.254116] I [socket.c:3576:socket_init]
 0-socket.nfs-server: using system polling thread
 [2014-10-20 06:48:43.255241] I [socket.c:3561:socket_init]
 0-socket.nfs-server: SSL support is NOT enabled
 [2014-10-20 06:48:43.255264] I [socket.c:3576:socket_init]
 0-socket.nfs-server: using system polling thread
 [2014-10-20 06:48:43.257279] I [socket.c:3561:socket_init]
 0-socket.nfs-server: SSL support is NOT enabled
 [2014-10-20 06:48:43.257315] I [socket.c:3576:socket_init]
 0-socket.nfs-server: using system polling thread
 [2014-10-20 06:48:43.258135] I [socket.c:3561:socket_init] 0-socket.NLM: SSL
 support is NOT enabled
 [2014-10-20 06:48:43.258157] I [socket.c:3576:socket_init] 0-socket.NLM:
 using system polling thread
 [2014-10-20 06:48:43.293724] E
 [rpcsvc.c:1314:rpcsvc_program_register_portmap] 0-rpc-service: Could not
 register with portmap
 [2014-10-20 06:48:43.293760] E [nfs.c:332:nfs_init_versions] 0-nfs: Program
 NLM4 registration failed
 [2014-10-20 06:48:43.293771] E [nfs.c:1312:init] 0-nfs: Failed to initialize
 protocols
 [2014-10-20 06:48:43.293777] E [xlator.c:403:xlator_init] 0-nfs-server:
 Initialization of volume 'nfs-server' failed, review your volfile again
 [2014-10-20 06:48:43.293783] E [graph.c:307:glusterfs_graph_init]
 0-nfs-server: initializing translator failed
 [2014-10-20 06:48:43.293789] E [graph.c:502:glusterfs_graph_activate]
 0-graph: init failed
 pending frames:
 frame : type(0) op(0)
 
 patchset: git://git.gluster.com/glusterfs.git
 signal received: 11
 time of crash: 2014-10-20 06:48:43
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 3.5.2
 [root@node0 glusterfs]# systemctl status portma
 portma.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
 
 
 
 Also I have checked the rpcbind service.
 
 [root@node0 glusterfs]# systemctl status rpcbind.service
 rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
Active: active (running) since h 2014-10-20 08:48:39 CEST; 2min 52s ago
   Process: 1940 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited,
   status=0/SUCCESS)
  Main PID: 1946 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─1946 /sbin/rpcbind -w
 
 okt 20 08:48:39 node0.itsmart.cloud systemd[1]: Starting RPC bind service...
 okt 20 08:48:39 node0.itsmart.cloud systemd[1]: Started RPC bind service.
 
 Restarting does not solve the problem.
 
 
 I think this is the problem. Why has the portmap service exited?
 
 
 On node1 it is OK:
 
 [root@node1 ~]# systemctl status rpcbind.service
 rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
Active: active (running) since p 2014-10-17 19:15:21 CEST; 2 days ago
  Main PID: 1963 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─1963 /sbin/rpcbind -w
 
 okt 17 19:15:21 node1.itsmart.cloud systemd[1]: Starting RPC bind service...
 okt 17 19:15:21 node1.itsmart.cloud systemd[1]: Started RPC bind service.
 
 
 
 Thanks in advance
 
 Tibor
 
 
 
 - Original message -
  On 10/19/2014 06:56 PM, Niels de Vos 

Re: [Gluster-users] NFS not start on localhost

2014-10-20 Thread Niels de Vos
On Mon, Oct 20, 2014 at 09:04:28AM +0200, Demeter Tibor wrote:
 Hi,
 
 This is the full nfs.log after a delete & reboot.
 It refers to a portmap registration problem.
 
 [root@node0 glusterfs]# cat nfs.log
 [2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main] 
 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 
 (/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p 
 /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S 
 /var/run/567e0bba7ad7102eae3049e2ad6c3ed7.socket)
 [2014-10-20 06:48:43.22] I [socket.c:3561:socket_init] 
 0-socket.glusterfsd: SSL support is NOT enabled
 [2014-10-20 06:48:43.224475] I [socket.c:3576:socket_init] 
 0-socket.glusterfsd: using system polling thread
 [2014-10-20 06:48:43.224654] I [socket.c:3561:socket_init] 0-glusterfs: SSL 
 support is NOT enabled
 [2014-10-20 06:48:43.224667] I [socket.c:3576:socket_init] 0-glusterfs: using 
 system polling thread
 [2014-10-20 06:48:43.235876] I 
 [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured 
 rpc.outstanding-rpc-limit with value 16
 [2014-10-20 06:48:43.254087] I [socket.c:3561:socket_init] 
 0-socket.nfs-server: SSL support is NOT enabled
 [2014-10-20 06:48:43.254116] I [socket.c:3576:socket_init] 
 0-socket.nfs-server: using system polling thread
 [2014-10-20 06:48:43.255241] I [socket.c:3561:socket_init] 
 0-socket.nfs-server: SSL support is NOT enabled
 [2014-10-20 06:48:43.255264] I [socket.c:3576:socket_init] 
 0-socket.nfs-server: using system polling thread
 [2014-10-20 06:48:43.257279] I [socket.c:3561:socket_init] 
 0-socket.nfs-server: SSL support is NOT enabled
 [2014-10-20 06:48:43.257315] I [socket.c:3576:socket_init] 
 0-socket.nfs-server: using system polling thread
 [2014-10-20 06:48:43.258135] I [socket.c:3561:socket_init] 0-socket.NLM: SSL 
 support is NOT enabled
 [2014-10-20 06:48:43.258157] I [socket.c:3576:socket_init] 0-socket.NLM: 
 using system polling thread
 [2014-10-20 06:48:43.293724] E 
 [rpcsvc.c:1314:rpcsvc_program_register_portmap] 0-rpc-service: Could not 
 register with portmap
 [2014-10-20 06:48:43.293760] E [nfs.c:332:nfs_init_versions] 0-nfs: Program  
 NLM4 registration failed

The above line suggests that there already is a service registered at
portmapper for the NLM4 program/service. This happens when the kernel
module 'lockd' is loaded. The kernel NFS-client and NFS-server depend on
this, but unfortunately it conflicts with the Gluster/nfs server.

Could you verify that the module is loaded?
 - use 'lsmod | grep lockd' to check the modules
 - use 'rpcinfo | grep nlockmgr' to check the rpcbind registrations

Make sure that you do not mount any NFS exports on the Gluster server.
Unmount all NFS mounts.

You mentioned you are running CentOS-7, which is systemd based. You
should be able to stop any conflicting NFS services like this:

 # systemctl stop nfs-lock.service
 # systemctl stop nfs.target
 # systemctl disable nfs.target

If all these services cleanup themselves, you should be able to start
the Gluster/nfs service:

  # systemctl restart glusterd.service

In case some bits are still lingering around, it might be easier to
reboot after disabling the 'nfs.target'.

 [2014-10-20 06:48:43.293771] E [nfs.c:1312:init] 0-nfs: Failed to initialize 
 protocols
 [2014-10-20 06:48:43.293777] E [xlator.c:403:xlator_init] 0-nfs-server: 
 Initialization of volume 'nfs-server' failed, review your volfile again
 [2014-10-20 06:48:43.293783] E [graph.c:307:glusterfs_graph_init] 
 0-nfs-server: initializing translator failed
 [2014-10-20 06:48:43.293789] E [graph.c:502:glusterfs_graph_activate] 
 0-graph: init failed
 pending frames:
 frame : type(0) op(0)
 
 patchset: git://git.gluster.com/glusterfs.git
 signal received: 11
 time of crash: 2014-10-20 06:48:43
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 3.5.2
 [root@node0 glusterfs]# systemctl status portma
 portma.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
 
 
 
 Also I have checked the rpcbind service.
 
 [root@node0 glusterfs]# systemctl status rpcbind.service 
 rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
Active: active (running) since h 2014-10-20 08:48:39 CEST; 2min 52s ago
   Process: 1940 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited, 
 status=0/SUCCESS)
  Main PID: 1946 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─1946 /sbin/rpcbind -w
 
 okt 20 08:48:39 node0.itsmart.cloud systemd[1]: Starting RPC bind service...
 okt 20 08:48:39 node0.itsmart.cloud systemd[1]: Started RPC bind service.
 
 Restarting does not solve the problem.
 
 
 I think this is the problem. Why has the portmap service exited?

The 'portmap' service has been replaced with 'rpcbind' since RHEL-6.
They have the same functionality.

Re: [Gluster-users] NFS not start on localhost

2014-10-20 Thread Demeter Tibor

Hi,

Thank you for your reply.

I followed your recommendations, but nothing changed.

There is nothing new in nfs.log.


[root@node0 glusterfs]# reboot
Connection to 172.16.0.10 closed by remote host.
Connection to 172.16.0.10 closed.
[tdemeter@sirius-31 ~]$ ssh root@172.16.0.10
root@172.16.0.10's password: 
Last login: Mon Oct 20 11:02:13 2014 from 192.168.133.106
[root@node0 ~]# systemctl status nfs.target 
nfs.target - Network File System Server
   Loaded: loaded (/usr/lib/systemd/system/nfs.target; disabled)
   Active: inactive (dead)

[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process PortOnline  Pid
--
Brick gs00.itsmart.cloud:/gluster/engine0   50160   Y   3271
Brick gs01.itsmart.cloud:/gluster/engine1   50160   Y   595
NFS Server on localhost N/A N   N/A
Self-heal Daemon on localhost   N/A Y   3286
NFS Server on gs01.itsmart.cloud2049Y   6951
Self-heal Daemon on gs01.itsmart.cloud  N/A Y   6958
 
Task Status of Volume engine
--
There are no active volume tasks
 
[root@node0 ~]# systemctl status 
Display all 262 possibilities? (y or n)
[root@node0 ~]# systemctl status nfs-lock
nfs-lock.service - NFS file locking service.
   Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
   Active: inactive (dead)

[root@node0 ~]# systemctl stop nfs-lock
[root@node0 ~]# systemctl restart gluster
glusterd.serviceglusterfsd.service  gluster.mount   
[root@node0 ~]# systemctl restart gluster
glusterd.serviceglusterfsd.service  gluster.mount   
[root@node0 ~]# systemctl restart glusterfsd.service 
[root@node0 ~]# systemctl restart glusterd.service 
[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process PortOnline  Pid
--
Brick gs00.itsmart.cloud:/gluster/engine0   50160   Y   5140
Brick gs01.itsmart.cloud:/gluster/engine1   50160   Y   2037
NFS Server on localhost N/A N   N/A
Self-heal Daemon on localhost   N/A N   N/A
NFS Server on gs01.itsmart.cloud2049Y   6951
Self-heal Daemon on gs01.itsmart.cloud  N/A Y   6958
 

Any other ideas?

Tibor
- Original message -
 On Mon, Oct 20, 2014 at 09:04:28AM +0200, Demeter Tibor wrote:
  Hi,
  
  This is the full nfs.log after a delete & reboot.
  It refers to a portmap registration problem.
  
  [root@node0 glusterfs]# cat nfs.log
  [2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main]
  0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2
  (/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
  /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
  /var/run/567e0bba7ad7102eae3049e2ad6c3ed7.socket)
  [2014-10-20 06:48:43.22] I [socket.c:3561:socket_init]
  0-socket.glusterfsd: SSL support is NOT enabled
  [2014-10-20 06:48:43.224475] I [socket.c:3576:socket_init]
  0-socket.glusterfsd: using system polling thread
  [2014-10-20 06:48:43.224654] I [socket.c:3561:socket_init] 0-glusterfs: SSL
  support is NOT enabled
  [2014-10-20 06:48:43.224667] I [socket.c:3576:socket_init] 0-glusterfs:
  using system polling thread
  [2014-10-20 06:48:43.235876] I
  [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured
  rpc.outstanding-rpc-limit with value 16
  [2014-10-20 06:48:43.254087] I [socket.c:3561:socket_init]
  0-socket.nfs-server: SSL support is NOT enabled
  [2014-10-20 06:48:43.254116] I [socket.c:3576:socket_init]
  0-socket.nfs-server: using system polling thread
  [2014-10-20 06:48:43.255241] I [socket.c:3561:socket_init]
  0-socket.nfs-server: SSL support is NOT enabled
  [2014-10-20 06:48:43.255264] I [socket.c:3576:socket_init]
  0-socket.nfs-server: using system polling thread
  [2014-10-20 06:48:43.257279] I [socket.c:3561:socket_init]
  0-socket.nfs-server: SSL support is NOT enabled
  [2014-10-20 06:48:43.257315] I [socket.c:3576:socket_init]
  0-socket.nfs-server: using system polling thread
  [2014-10-20 06:48:43.258135] I [socket.c:3561:socket_init] 0-socket.NLM:
  SSL support is NOT enabled
  [2014-10-20 06:48:43.258157] I [socket.c:3576:socket_init] 0-socket.NLM:
  using system polling thread
  [2014-10-20 06:48:43.293724] E
  [rpcsvc.c:1314:rpcsvc_program_register_portmap] 0-rpc-service: Could not
  register with portmap
  [2014-10-20 06:48:43.293760] E [nfs.c:332:nfs_init_versions] 0-nfs: Program
  NLM4 registration failed
 
 The above line suggests that 

Re: [Gluster-users] NFS not start on localhost

2014-10-19 Thread Niels de Vos
On Sat, Oct 18, 2014 at 01:24:12PM +0200, Demeter Tibor wrote:
 Hi, 
 
 [root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log 
 [2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init] 
 0-nfs-server: initializing translator failed 
 [2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate] 
 0-graph: init failed 
 pending frames: 
 frame : type(0) op(0) 
 
 patchset: git://git.gluster.com/glusterfs.git 
 signal received: 11 
 time of crash: 2014-10-18 07:41:06
 configuration details:
 argp 1 
 backtrace 1 
 dlfcn 1 
 fdatasync 1 
 libpthread 1 
 llistxattr 1 
 setfsid 1 
 spinlock 1 
 epoll.h 1 
 xattr.h 1 
 st_atim.tv_nsec 1 
 package-string: glusterfs 3.5.2 

This definitely is a gluster/nfs issue. For whatever reason, the
gluster/nfs server crashes :-/ The log does not show enough detail;
some more lines before this point are needed.

There might be an issue where the NFS RPC-services can not register. I
think I have seen similar crashes before, but never found the cause. You
should check with the 'rpcinfo' command to see if there are any NFS
RPC-services registered (nfs, lockd, mount, lockmgr). If there are any,
verify that there are no other nfs processes running, this includes
NFS-mounts in /etc/fstab and similar.
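One way to run the checks described above; this is a sketch, and the exact registrations and mounts will differ per host:

```shell
# Look for NFS-related RPC programs already registered with rpcbind.
rpcinfo -p | grep -E 'nfs|nlockmgr|mountd|status'

# Look for mounted NFS filesystems that would have loaded the kernel
# lockd module (field 3 of /proc/mounts is the filesystem type;
# this also matches nfsd if the kernel NFS server has run).
awk '$3 ~ /^nfs/' /proc/mounts

# Look for NFS entries in /etc/fstab that get mounted at boot.
grep -w nfs /etc/fstab
```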

Could you file a bug and attach the full (gzipped) nfs.log? Try to explain
as many details of the setup as you can, and add a link to the archives
of this thread. Please post the URL of the bug in a response to this
thread. A crashing process is never good, even when it could be caused
by external processes.

Link to file a bug:
- 
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=nfs&version=3.5.2

Thanks,
Niels


 
 Regards,
 
 Demeter Tibor 
 
 Email: tdemeter @itsmart.hu 
 Skype: candyman_78 
 Phone: +36 30 462 0500 
 Web: www.itsmart.hu
 
 IT SMART KFT. 
 2120 Dunakeszi Wass Albert utca 2. I. em 9. 
 Telefon: +36 30 462-0500 Fax: +36 27 637-486 
 
 [EN] This message and any attachments are confidential and privileged and 
 intended for the use of the addressee only. If you have received this 
 communication in error, please notify the sender by reply e-mail and delete 
 this message from your system. Please note that Internet e-mail guarantees 
 neither the confidentiality nor the proper receipt of the message sent. The 
 data deriving from our correspondence with you are included in a file of 
 ITSMART Ltd which exclusive purpose is to manage the communications of the 
 company; under the understanding that, in maintaining said correspondence, 
 you authorize the treatment of such data for the mentioned purpose. You are 
 entitled to exercise your rights of access, rectification, cancellation and 
 opposition by addressing such written application to address above. 
 
 - Original message -
 
  Maybe share the last 15-20 lines of your /var/log/glusterfs/nfs.log for the
  consideration of everyone on the list? Thanks.
 
  From: Demeter Tibor tdeme...@itsmart.hu;
  To: Anirban Ghoshal chalcogen_eg_oxy...@yahoo.com;
  Cc: gluster-users gluster-users@gluster.org;
  Subject: Re: [Gluster-users] NFS not start on localhost
  Sent: Sat, Oct 18, 2014 10:36:36 AM
 
  
  Hi,
 
   I've tried these things:
  
   - nfs.disable on-off
   - iptables disabled
   - volume stop-start
  
   but it's the same.
   When I make a new volume, everything is fine.
   After a reboot, NFS won't listen on localhost (only on the server that has brick0).
  
   CentOS 7 with the latest oVirt
 
  Regards,
 
  Tibor
 
  - Original message -
 
   It happens with me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`.
   You will probably find something out that will help your cause. In 
   general,
   if you just wish to start the thing up without going into the why of it,
   try
   `gluster volume set engine nfs.disable on` followed by ` gluster volume 
   set
   engine nfs.disable off`. It does the trick quite often for me because it 
   is
    a polite way to ask mgmt/glusterd to try and respawn the nfs server process
    if need be. But keep in mind that this will cause a (albeit small) service
    interruption for all clients accessing volume engine over nfs.
  
 
   Thanks,
  
   Anirban
  
 
   On Saturday, 18 October 2014 1:03 AM, Demeter Tibor tdeme...@itsmart.hu
   wrote:
  
 
   Hi,
  
 
    I have set up GlusterFS with NFS support

Re: [Gluster-users] NFS not start on localhost

2014-10-18 Thread Demeter Tibor
Hi, 

I've tried these things:

- nfs.disable on-off
- iptables disabled
- volume stop-start

but it's the same.
When I make a new volume, everything is fine.
After a reboot, NFS won't listen on localhost (only on the server that has brick0).

CentOS 7 with the latest oVirt

Regards, 

Tibor 

- Original message -

 It happens with me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`.
 You will probably find something that will help your cause. In general,
 if you just wish to start the thing up without going into the why of it, try
 `gluster volume set engine nfs.disable on` followed by `gluster volume set
 engine nfs.disable off`. It does the trick quite often for me because it is
 a polite way to ask mgmt/glusterd to try and respawn the nfs server process
 if need be. But keep in mind that this will cause a (albeit small) service
 interruption for all clients accessing volume engine over nfs.
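The nfs.disable toggle described above, as a sketch (volume name `engine` taken from this thread; note the brief NFS interruption it causes for clients):

```shell
# Toggle nfs.disable off and on to politely ask glusterd to
# respawn the Gluster/nfs server process for the volume.
# This causes a brief NFS outage for clients of the volume.
gluster volume set engine nfs.disable on
gluster volume set engine nfs.disable off

# Check whether the NFS server shows up as online afterwards.
gluster volume status engine | grep 'NFS Server'
```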

 Thanks,
 Anirban

 On Saturday, 18 October 2014 1:03 AM, Demeter Tibor tdeme...@itsmart.hu
 wrote:

 Hi,

 I have set up GlusterFS with NFS support.

 I don't know why, but after a reboot NFS does not listen on localhost,
 only on gs01.

 [root@node0 ~]# gluster volume info engine

 Volume Name: engine
 Type: Replicate
 Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: gs00.itsmart.cloud:/gluster/engine0
 Brick2: gs01.itsmart.cloud:/gluster/engine1
 Options Reconfigured:
 storage.owner-uid: 36
 storage.owner-gid: 36
 performance.quick-read: off
 performance.read-ahead: off
 performance.io-cache: off
 performance.stat-prefetch: off
 cluster.eager-lock: enable
 network.remote-dio: enable
 cluster.quorum-type: auto
 cluster.server-quorum-type: server
 auth.allow: *
 nfs.disable: off

 [root@node0 ~]# gluster volume status engine
 Status of volume: engine
 Gluster process Port Online Pid
 --
 Brick gs00.itsmart.cloud:/gluster/engine0 50158 Y 3250
 Brick gs01.itsmart.cloud:/gluster/engine1 50158 Y 5518
 NFS Server on localhost N/A N N/A
 Self-heal Daemon on localhost N/A Y 3261
 NFS Server on gs01.itsmart.cloud 2049 Y 5216
 Self-heal Daemon on gs01.itsmart.cloud N/A Y 5223

 Can anybody help me?

 Thanks in advance.

 Tibor

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS not start on localhost

2014-10-18 Thread Anirban Ghoshal
Maybe share the last 15-20 lines of your /var/log/glusterfs/nfs.log for the
consideration of everyone on the list? Thanks.

Re: [Gluster-users] NFS not start on localhost

2014-10-18 Thread Demeter Tibor
Hi, 

[root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log 
[2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init] 0-nfs-server: 
initializing translator failed 
[2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate] 0-graph: 
init failed 
pending frames: 
frame : type(0) op(0) 

patchset: git://git.gluster.com/glusterfs.git 
signal received: 11 
time of crash: 2014-10-18 07:41:06
configuration details:
argp 1 
backtrace 1 
dlfcn 1 
fdatasync 1 
libpthread 1 
llistxattr 1 
setfsid 1 
spinlock 1 
epoll.h 1 
xattr.h 1 
st_atim.tv_nsec 1 
package-string: glusterfs 3.5.2 

Regards,

Demeter Tibor

- Original message -

 Maybe share the last 15-20 lines of your /var/log/glusterfs/nfs.log for the
 consideration of everyone on the list? Thanks.

 From: Demeter Tibor tdeme...@itsmart.hu;
 To: Anirban Ghoshal chalcogen_eg_oxy...@yahoo.com;
 Cc: gluster-users gluster-users@gluster.org;
 Subject: Re: [Gluster-users] NFS not start on localhost
 Sent: Sat, Oct 18, 2014 10:36:36 AM

 
 Hi,

  I've tried these things:
 
  - nfs.disable on-off
  - iptables disabled
  - volume stop-start
 
  but it's the same.
  When I make a new volume, everything is fine.
  After a reboot, NFS won't listen on localhost (only on the server that has brick0).
 
  CentOS 7 with the latest oVirt

 Regards,

 Tibor

 - Original message -

  It happens with me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`.
  You will probably find something out that will help your cause. In general,
  if you just wish to start the thing up without going into the why of it,
  try
  `gluster volume set engine nfs.disable on` followed by ` gluster volume set
  engine nfs.disable off`. It does the trick quite often for me because it is
  a polite way to ask mgmt/glusterd to try and respawn the nfs server process
  if need be. But keep in mind that this will cause a (albeit small) service
  interruption for all clients accessing volume engine over nfs.
 

  Thanks,
 
  Anirban
 

  On Saturday, 18 October 2014 1:03 AM, Demeter Tibor tdeme...@itsmart.hu
  wrote:
 

  Hi,
 

   I have set up GlusterFS with NFS support.
 

  I don't know why, but after a reboot the nfs does not listen on localhost,
  only on gs01.
 

   [root@node0 ~]# gluster volume info engine
  
   Volume Name: engine
   Type: Replicate
   Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
   Status: Started
   Number of Bricks: 1 x 2 = 2
   Transport-type: tcp
   Bricks:
   Brick1: gs00.itsmart.cloud:/gluster/engine0
   Brick2: gs01.itsmart.cloud:/gluster/engine1
   Options Reconfigured:
   storage.owner-uid: 36
   storage.owner-gid: 36
   performance.quick-read: off
   performance.read-ahead: off
   performance.io-cache: off
   performance.stat-prefetch: off
   cluster.eager-lock: enable
   network.remote-dio: enable
   cluster.quorum-type: auto
   cluster.server-quorum-type: server
   auth.allow: *
   nfs.disable: off
 

   [root@node0 ~]# gluster volume status engine
   Status of volume: engine
   Gluster process Port Online Pid
   --
   Brick gs00.itsmart.cloud:/gluster/engine0 50158 Y 3250
   Brick gs01.itsmart.cloud:/gluster/engine1 50158 Y 5518
   NFS Server on localhost N/A N N/A
   Self-heal Daemon on localhost N

Re: [Gluster-users] NFS not start on localhost

2014-10-18 Thread Justin Clift
Hmmm, do you have any custom translators installed, or have you been trying
out GlusterFlow?

I used to get crashes of the NFS translator (looking like this) when I was
getting GlusterFlow up and running, when everything wasn't quite set up
correctly.

If you don't have any custom translators installed (or trying out
GlusterFlow), ignore this. ;)

Regards and best wishes,

Justin Clift


- Original Message -
 Hi,
 
 [root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log
 [2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init]
 0-nfs-server: initializing translator failed
 [2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate]
 0-graph: init failed
 pending frames:
 frame : type(0) op(0)
 
 patchset: git://git.gluster.com/glusterfs.git
 signal received: 11
 time of crash: 2014-10-18 07:41:06
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 3.5.2
 
 Regards,
 
 Demeter Tibor
 
  - Original message -
 

-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

Re: [Gluster-users] NFS not start on localhost

2014-10-18 Thread Demeter Tibor

I'm sorry, but I don't know what you mean by the NFS translator.

I followed the oVirt hosted-engine setup howto and installed glusterfs,
etc. from scratch, so it is a CentOS 7 minimal install. The nfs-utils package is
installed but disabled, so it does not run as a service.

So it is a simple Gluster volume, and oVirt uses it as the NFS store for
hosted-engine-setup.
When I did the whole setup, everything was fine. After a reboot there is no NFS
on localhost (or on the local IP), only on node1. But my hosted engine can only
run from that host.


Maybe it is an oVirt bug?

Thanks

Tibor


- Original message -
 Hmmm, do you have any custom translators installed, or have you been trying
 out GlusterFlow?
 
 I used to get crashes of the NFS translator (looking like this) when I was
 getting GlusterFlow up and running, when everything wasn't quite set up
 correctly.
 
 If you don't have any custom translators installed (or trying out
 GlusterFlow), ignore this. ;)
 
 Regards and best wishes,
 
 Justin Clift
 
 
 - Original Message -
  Hi,
  
  [root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log
  [2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init]
  0-nfs-server: initializing translator failed
  [2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate]
  0-graph: init failed
  pending frames:
  frame : type(0) op(0)
  
  patchset: git://git.gluster.com/glusterfs.git
  signal received: 11
  time of crash: 2014-10-18 07:41:06
  configuration details:
  argp 1
  backtrace 1
  dlfcn 1
  fdatasync 1
  libpthread 1
  llistxattr 1
  setfsid 1
  spinlock 1
  epoll.h 1
  xattr.h 1
  st_atim.tv_nsec 1
  package-string: glusterfs 3.5.2
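The `initializing translator failed` error followed by signal 11 above often points to the Gluster NFS server failing to register its RPC programs with rpcbind at startup. A hedged recovery sketch, assuming a systemd-based host (CentOS 7 in this thread) with standard `rpcbind` and `glusterd` units; unit names may differ on your distro:

```shell
# Recovery sketch (assumption: systemd units named rpcbind and glusterd).
# Restart rpcbind first so stale program registrations are cleared, then
# glusterd so it respawns the Gluster NFS server, which re-registers.
if command -v systemctl >/dev/null 2>&1 && command -v rpcinfo >/dev/null 2>&1; then
    systemctl restart rpcbind
    systemctl restart glusterd
    sleep 2
    # Verify the NFS program is registered again
    rpcinfo -p | grep -w nfs || echo "NFS still not registered; check nfs.log"
else
    echo "systemctl/rpcinfo not available; adapt these steps to your system"
fi
```

Restarting rpcbind before glusterd matters: a registration left behind by a crashed NFS process can otherwise block the new one.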
  
  Regards,
  
  Demeter Tibor
  
  Email: tdemeter@itsmart.hu
  Skype: candyman_78
  Phone: +36 30 462 0500
  Web: www.itsmart.hu
  
  IT SMART KFT.
  2120 Dunakeszi Wass Albert utca 2. I. em 9.
  Telefon: +36 30 462-0500 Fax: +36 27 637-486
  
  [EN] This message and any attachments are confidential and privileged and
  intended for the use of the addressee only. If you have received this
  communication in error, please notify the sender by reply e-mail and
  delete
  this message from your system. Please note that Internet e-mail guarantees
  neither the confidentiality nor the proper receipt of the message sent. The
  data deriving from our correspondence with you are included in a file of
  ITSMART Ltd whose exclusive purpose is to manage the communications of the
  company; under the understanding that, in maintaining said correspondence,
  you authorize the treatment of such data for the mentioned purpose. You are
  entitled to exercise your rights of access, rectification, cancellation and
  opposition by addressing such written application to address above.
  
  - Original message -
  
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
 --
 GlusterFS - http://www.gluster.org
 
 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.
 
 My personal twitter: twitter.com/realjustinclift
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] NFS not start on localhost

2014-10-17 Thread Demeter Tibor

Hi, 

I have set up a GlusterFS volume with NFS support. 

I don't know why, but after a reboot NFS does not listen on localhost, only 
on gs01. 




[root@node0 ~]# gluster volume info engine 

Volume Name: engine 
Type: Replicate 
Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3 
Status: Started 
Number of Bricks: 1 x 2 = 2 
Transport-type: tcp 
Bricks: 
Brick1: gs00.itsmart.cloud:/gluster/engine0 
Brick2: gs01.itsmart.cloud:/gluster/engine1 
Options Reconfigured: 
storage.owner-uid: 36 
storage.owner-gid: 36 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.stat-prefetch: off 
cluster.eager-lock: enable 
network.remote-dio: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
auth.allow: * 
nfs.disable: off 


[root@node0 ~]# gluster volume status engine 
Status of volume: engine 
Gluster process Port Online Pid 
-- 
Brick gs00.itsmart.cloud:/gluster/engine0 50158 Y 3250 
Brick gs01.itsmart.cloud:/gluster/engine1 50158 Y 5518 
NFS Server on localhost N/A N N/A 
Self-heal Daemon on localhost N/A Y 3261 
NFS Server on gs01.itsmart.cloud 2049 Y 5216 
Self-heal Daemon on gs01.itsmart.cloud N/A Y 5223 
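`NFS Server on localhost N/A N N/A` in output like the above means the Gluster NFS process is not running on this node. One common culprit is the kernel NFS server (or a stale registration) already holding the NFS program numbers in rpcbind. A quick diagnostic sketch, assuming the CentOS 7 unit name `nfs-server`, which may differ elsewhere:

```shell
# Diagnostic sketch: who owns the NFS registrations, and is the kernel
# NFS server enabled? (It must be off for Gluster's built-in NFS server.)
rpcinfo -p 2>/dev/null | grep -wE 'nfs|mountd|nlockmgr' \
    || echo "no NFS-related programs registered with rpcbind"
if command -v systemctl >/dev/null 2>&1; then
    state=$(systemctl is-enabled nfs-server 2>/dev/null || true)
    if [ "$state" = "enabled" ]; then
        echo "kernel nfs-server is enabled: disable it before using Gluster NFS"
    else
        echo "kernel nfs-server not enabled (good for Gluster NFS)"
    fi
fi
```

If the kernel NFS server owns port 2049 at boot, the Gluster NFS translator cannot register and dies, which matches the N/A status seen here.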






Can anybody help me? 


Thanks in advance. 




Tibor 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS not start on localhost

2014-10-17 Thread Niels de Vos
On Fri, Oct 17, 2014 at 09:33:06PM +0200, Demeter Tibor wrote:
 
 Hi, 
 
 I have set up a GlusterFS volume with NFS support. 
 
 I don't know why, but after a reboot NFS does not listen on localhost, 
 only on gs01. 

You should be able to find some hints in /var/log/glusterfs/nfs.log.

HTH,
Niels
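A small sketch of that log check, assuming the default log path used elsewhere in this thread; it surfaces error/warning lines first and falls back to a raw tail:

```shell
# Pull recent error (E) / warning (W) lines from the Gluster NFS log,
# e.g. "[2014-10-18 07:41:06.136035] E [graph.c:307:...]".
NFS_LOG=/var/log/glusterfs/nfs.log
if [ -r "$NFS_LOG" ]; then
    errs=$(grep -E '\] E \[|\] W \[' "$NFS_LOG" | tail -n 20)
    if [ -n "$errs" ]; then
        printf '%s\n' "$errs"
    else
        tail -n 20 "$NFS_LOG"   # no E/W lines: show the raw tail instead
    fi
else
    echo "no readable log at $NFS_LOG (path may differ on your install)"
fi
```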

 
 
 
 
 [root@node0 ~]# gluster volume info engine 
 
 Volume Name: engine 
 Type: Replicate 
 Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3 
 Status: Started 
 Number of Bricks: 1 x 2 = 2 
 Transport-type: tcp 
 Bricks: 
 Brick1: gs00.itsmart.cloud:/gluster/engine0 
 Brick2: gs01.itsmart.cloud:/gluster/engine1 
 Options Reconfigured: 
 storage.owner-uid: 36 
 storage.owner-gid: 36 
 performance.quick-read: off 
 performance.read-ahead: off 
 performance.io-cache: off 
 performance.stat-prefetch: off 
 cluster.eager-lock: enable 
 network.remote-dio: enable 
 cluster.quorum-type: auto 
 cluster.server-quorum-type: server 
 auth.allow: * 
 nfs.disable: off 
 
 
 [root@node0 ~]# gluster volume status engine 
 Status of volume: engine 
 Gluster process Port Online Pid 
 --
  
 Brick gs00.itsmart.cloud:/gluster/engine0 50158 Y 3250 
 Brick gs01.itsmart.cloud:/gluster/engine1 50158 Y 5518 
 NFS Server on localhost N/A N N/A 
 Self-heal Daemon on localhost N/A Y 3261 
 NFS Server on gs01.itsmart.cloud 2049 Y 5216 
 Self-heal Daemon on gs01.itsmart.cloud N/A Y 5223 
 
 
 
 
 
 
 Can anybody help me? 
 
 
 Thanks in advance. 
 
 
 
 
 Tibor 

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS not start on localhost

2014-10-17 Thread Anirban Ghoshal
It happens to me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`; you 
will probably find something there that will help your cause. In general, if you 
just wish to start the thing up without going into the why of it, try `gluster 
volume set engine nfs.disable on` followed by `gluster volume set engine 
nfs.disable off`. It often does the trick for me because it is a polite way to 
ask mgmt/glusterd to try and respawn the NFS server process if need be. 
But keep in mind that this will cause an (albeit small) service interruption for 
all clients accessing the engine volume over NFS.
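The toggle described above might look like this on the node (assumes the `gluster` CLI and the volume name `engine` from this thread; expect a brief NFS interruption for clients of that volume):

```shell
VOL=engine    # assumption: the volume name used in this thread
if command -v gluster >/dev/null 2>&1; then
    # Toggling nfs.disable politely asks glusterd to stop, then respawn,
    # the NFS server process serving this volume.
    gluster volume set "$VOL" nfs.disable on
    gluster volume set "$VOL" nfs.disable off
    sleep 2
    gluster volume status "$VOL" | grep 'NFS Server' \
        || echo "NFS server still not up; check /var/log/glusterfs/nfs.log"
else
    echo "gluster CLI not found; run this on a Gluster node"
fi
```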

Thanks, 
Anirban


On Saturday, 18 October 2014 1:03 AM, Demeter Tibor tdeme...@itsmart.hu wrote:
 




Hi,

I have set up a GlusterFS volume with NFS support.

I don't know why, but after a reboot NFS does not listen on localhost, only 
on gs01.


[root@node0 ~]# gluster volume info engine

Volume Name: engine
Type: Replicate
Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gs00.itsmart.cloud:/gluster/engine0
Brick2: gs01.itsmart.cloud:/gluster/engine1
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
auth.allow: *
nfs.disable: off

[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process Port Online Pid
--
Brick gs00.itsmart.cloud:/gluster/engine0 50158 Y 3250
Brick gs01.itsmart.cloud:/gluster/engine1 50158 Y 5518
NFS Server on localhost N/A N N/A
Self-heal Daemon on localhost N/A Y 3261
NFS Server on gs01.itsmart.cloud 2049 Y 5216
Self-heal Daemon on gs01.itsmart.cloud N/A Y 5223



Can anybody help me?

Thanks in advance.

Tibor
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users