[Gluster-users] Move one brick to new host

2014-11-15 Thread Demeter Tibor

Hi, 

I have two nodes (node0 and node1) with oVirt 3.5, and I created a replicated 
volume with one brick on each server. 

Now I have added a third node (node2) and I would like to pull node1 out of the 
whole system. Currently that is impossible because there is a brick on node1. 
How can I move the brick from node1 to node2? 

I am wondering whether I can add a new brick on node2 to the volume and, after 
the synchronisation, simply remove the brick from node1. 

Is that possible? 
If not, how can it be done? 
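
For reference, a rough sketch of that approach with the Gluster CLI; the volume
name and brick paths are assumptions (borrowed from a later thread), and the key
point is waiting for heal info to show zero entries before removing the old brick:

    gluster peer probe node2
    gluster volume add-brick g1sata replica 3 node2:/data/sata/brick
    gluster volume heal g1sata info          # wait until all bricks show 0 entries
    gluster volume remove-brick g1sata replica 2 node1:/data/sata/brick force
    gluster peer detach node1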

Thanks 
Tibor 



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] self-heal daemon not running

2014-11-09 Thread Demeter Tibor

Hi, 

I have a 2-node replicated volume under oVirt 3.5. 



My self-heal daemon is not running, and I have a lot of unhealed VM images on my 
GlusterFS volume. 



[root@node1 ~]# gluster volume heal g1sata info 
Brick node0.itsmart.cloud:/data/sata/brick/ 
 
Number of entries: 1 

Brick node1.itsmart.cloud:/data/sata/brick/ 
 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/6788e53a-750d-4566-8579-37f586a0f306/2f62334e-39dc-4ffa-9102-51289588c42b - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/12ff021d-4075-4032-979c-685520dc1895/4051ffec-3dd2-495d-989b-eefb9fe92221 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/c9dbc63e-b9a2-43aa-b433-8c53ce824492/bb0efb35-5164-4b22-9bed-5daeacf97129 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/388c14f5-5690-4eae-a7dc-76d782ad8acc/0059a2c2-f8b1-4979-8321-41422d9a469f - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/2cb7ee4b-5c43-45e7-b13e-18aa3df0ef66/c0cd0554-ac37-4feb-803c-d1207219e3a1 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/1bb441b8-84a2-4d5b-bd29-f57b100bbce4/095230c2-0411-44cf-a085-3c929e4ca9b6 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/e3751092-3f6a-4aa6-b569-2a2fb4ae294a/133b2d17-2a2a-4ec3-b26a-4fd685aa2b78 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/1535497b-d6ca-40e3-84b0-85f55217cbc9/144ddc5c-be25-4d5e-91a4-a0864ea2a10e - Possibly undergoing heal 
Number of entries: 9 




Status of volume: g1sata 
Gluster process                            Port    Online  Pid
------------------------------------------------------------------
Brick 172.16.0.10:/data/sata/brick         49152   Y       27983
Brick 172.16.0.11:/data/sata/brick         49152   Y       2581
NFS Server on localhost                    2049    Y       14209
Self-heal Daemon on localhost              N/A     Y       14225
NFS Server on 172.16.0.10                  2049    Y       27996
Self-heal Daemon on 172.16.0.10            N/A     Y       28004

Task Status of Volume g1sata 
------------------------------------------------------------------
There are no active volume tasks 





[root@node1 ~]# rpm -qa|grep gluster 
glusterfs-libs-3.5.2-1.el6.x86_64 
glusterfs-cli-3.5.2-1.el6.x86_64 
glusterfs-rdma-3.5.2-1.el6.x86_64 
glusterfs-server-3.5.2-1.el6.x86_64 
glusterfs-3.5.2-1.el6.x86_64 
glusterfs-api-3.5.2-1.el6.x86_64 
glusterfs-fuse-3.5.2-1.el6.x86_64 
vdsm-gluster-4.16.7-1.gitdb83943.el6.noarch 




CentOS 6.5; the firewall is disabled and SELinux is in permissive mode. 







I did a service restart on each node, but that didn't help. 




I also have some split-brained files. 
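
For reference, a rough sequence that is sometimes used to respawn a missing
self-heal daemon and to list the split-brain entries (a sketch, not a
guaranteed fix):

    gluster volume start g1sata force            # respawns missing shd/nfs processes, does not touch data
    gluster volume status g1sata                 # check the Self-heal Daemon lines again
    gluster volume heal g1sata full              # trigger a full heal
    gluster volume heal g1sata info split-brain  # entries that need manual resolution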




Could someone help me? 

Thanks 




Tibor 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS not start on localhost

2014-10-20 Thread Demeter Tibor

Hi,

Thank you for your reply.

I followed your recommendations, but there is no change.

There is nothing new in the nfs.log.


[root@node0 glusterfs]# reboot
Connection to 172.16.0.10 closed by remote host.
Connection to 172.16.0.10 closed.
[tdemeter@sirius-31 ~]$ ssh root@172.16.0.10
root@172.16.0.10's password: 
Last login: Mon Oct 20 11:02:13 2014 from 192.168.133.106
[root@node0 ~]# systemctl status nfs.target 
nfs.target - Network File System Server
   Loaded: loaded (/usr/lib/systemd/system/nfs.target; disabled)
   Active: inactive (dead)

[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process                            Port    Online  Pid
------------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0  50160   Y       3271
Brick gs01.itsmart.cloud:/gluster/engine1  50160   Y       595
NFS Server on localhost                    N/A     N       N/A
Self-heal Daemon on localhost              N/A     Y       3286
NFS Server on gs01.itsmart.cloud           2049    Y       6951
Self-heal Daemon on gs01.itsmart.cloud     N/A     Y       6958

Task Status of Volume engine
------------------------------------------------------------------
There are no active volume tasks
 
[root@node0 ~]# systemctl status 
Display all 262 possibilities? (y or n)
[root@node0 ~]# systemctl status nfs-lock
nfs-lock.service - NFS file locking service.
   Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
   Active: inactive (dead)

[root@node0 ~]# systemctl stop nfs-lock
[root@node0 ~]# systemctl restart gluster
glusterd.serviceglusterfsd.service  gluster.mount   
[root@node0 ~]# systemctl restart gluster
glusterd.serviceglusterfsd.service  gluster.mount   
[root@node0 ~]# systemctl restart glusterfsd.service 
[root@node0 ~]# systemctl restart glusterd.service 
[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process                            Port    Online  Pid
------------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0  50160   Y       5140
Brick gs01.itsmart.cloud:/gluster/engine1  50160   Y       2037
NFS Server on localhost                    N/A     N       N/A
Self-heal Daemon on localhost              N/A     N       N/A
NFS Server on gs01.itsmart.cloud           2049    Y       6951
Self-heal Daemon on gs01.itsmart.cloud     N/A     Y       6958
 

Any other ideas?
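
For reference, a sequence that sometimes clears this on systemd hosts, assuming
the kernel NFS server is not needed (it competes with Gluster's built-in NFS for
the portmapper registrations); this is a sketch, not a definitive fix:

    systemctl stop nfs-server nfs-lock
    systemctl disable nfs-server nfs-lock     # keep kernel NFS out of the way (assumes it is not needed)
    systemctl enable rpcbind
    systemctl start rpcbind                   # the portmapper must be up before glusterd starts
    systemctl restart glusterd                # lets the Gluster NFS server re-register
    gluster volume status engine              # "NFS Server on localhost" should now show a port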

Tibor








- Original message -
> On Mon, Oct 20, 2014 at 09:04:28AM +0200, Demeter Tibor wrote:
> > Hi,
> > 
> > This is the full nfs.log after delete & reboot.
> > It refers to a portmap registration problem.
> > 
> > [root@node0 glusterfs]# cat nfs.log
> > [2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main]
> > 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2
> > (/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
> > /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
> > /var/run/567e0bba7ad7102eae3049e2ad6c3ed7.socket)
> > [2014-10-20 06:48:43.22] I [socket.c:3561:socket_init]
> > 0-socket.glusterfsd: SSL support is NOT enabled
> > [2014-10-20 06:48:43.224475] I [socket.c:3576:socket_init]
> > 0-socket.glusterfsd: using system polling thread
> > [2014-10-20 06:48:43.224654] I [socket.c:3561:socket_init] 0-glusterfs: SSL
> > support is NOT enabled
> > [2014-10-20 06:48:43.224667] I [socket.c:3576:socket_init] 0-glusterfs:
> > using system polling thread
> > [2014-10-20 06:48:43.235876] I
> > [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured
> > rpc.outstanding-rpc-limit with value 16
> > [2014-10-20 06:48:43.254087] I [socket.c:3561:socket_init]
> > 0-socket.nfs-server: SSL support is NOT enabled
> > [2014-10-20 06:48:43.254116] I [socket.c:3576:socket_init]
> > 0-socket.nfs-server: using system polling thread
> > [2014-10-20 06:48:43.255241] I [socket.c:3561:socket_init]
> > 0-socket.nfs-server: SSL support is NOT enabled
> > [2014-10-20 06:48:43.255264] I [socket.c:3576:socket_init]
> > 0-socket.nfs-server: using system polling thread
> > [2014-10-20 06:48:43.257279] I [socket.c:3561:socket_init]
> > 0-socket.nfs-server: SSL support is NOT enabled
> > [2014-10-20 06:48:43.257315] I [socket.c:3576:socket_init]
> > 0-socket.nfs-server: using system polling thread
> > [2014-10-20 06:48:43.258135] I [socket.c:3561:socket_init] 0-socket.NLM:
> > SSL support is NOT enabled
> > [2014-10-20 06:48:43.258157] I [socket.c:3576:socket_i

Re: [Gluster-users] NFS not start on localhost

2014-10-20 Thread Demeter Tibor
It's also funny, because meanwhile the portmapper is listening on localhost.

[root@node0 log]# netstat -tunlp | grep 111
tcp    0   0 0.0.0.0:111    0.0.0.0:*    LISTEN   4709/rpcbind
tcp6   0   0 :::111         :::*         LISTEN   4709/rpcbind
udp    0   0 0.0.0.0:111    0.0.0.0:*             4709/rpcbind
udp6   0   0 :::111         :::*                  4709/rpcbind
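
rpcbind listening on port 111 only shows that the portmapper itself is up;
whether the Gluster NFS programs actually registered with it could be checked
with something like:

    rpcinfo -p localhost     # look for nfs (100003), mountd (100005) and nlockmgr (100021) entries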

Demeter Tibor 



- Original message -
> Hi,
> 
> This is the full nfs.log after delete & reboot.
> It refers to a portmap registration problem.
> 
> [root@node0 glusterfs]# cat nfs.log
> [2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2
> (/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
> /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
> /var/run/567e0bba7ad7102eae3049e2ad6c3ed7.socket)
> [2014-10-20 06:48:43.22] I [socket.c:3561:socket_init]
> 0-socket.glusterfsd: SSL support is NOT enabled
> [2014-10-20 06:48:43.224475] I [socket.c:3576:socket_init]
> 0-socket.glusterfsd: using system polling thread
> [2014-10-20 06:48:43.224654] I [socket.c:3561:socket_init] 0-glusterfs: SSL
> support is NOT enabled
> [2014-10-20 06:48:43.224667] I [socket.c:3576:socket_init] 0-glusterfs: using
> system polling thread
> [2014-10-20 06:48:43.235876] I
> [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured
> rpc.outstanding-rpc-limit with value 16
> [2014-10-20 06:48:43.254087] I [socket.c:3561:socket_init]
> 0-socket.nfs-server: SSL support is NOT enabled
> [2014-10-20 06:48:43.254116] I [socket.c:3576:socket_init]
> 0-socket.nfs-server: using system polling thread
> [2014-10-20 06:48:43.255241] I [socket.c:3561:socket_init]
> 0-socket.nfs-server: SSL support is NOT enabled
> [2014-10-20 06:48:43.255264] I [socket.c:3576:socket_init]
> 0-socket.nfs-server: using system polling thread
> [2014-10-20 06:48:43.257279] I [socket.c:3561:socket_init]
> 0-socket.nfs-server: SSL support is NOT enabled
> [2014-10-20 06:48:43.257315] I [socket.c:3576:socket_init]
> 0-socket.nfs-server: using system polling thread
> [2014-10-20 06:48:43.258135] I [socket.c:3561:socket_init] 0-socket.NLM: SSL
> support is NOT enabled
> [2014-10-20 06:48:43.258157] I [socket.c:3576:socket_init] 0-socket.NLM:
> using system polling thread
> [2014-10-20 06:48:43.293724] E
> [rpcsvc.c:1314:rpcsvc_program_register_portmap] 0-rpc-service: Could not
> register with portmap
> [2014-10-20 06:48:43.293760] E [nfs.c:332:nfs_init_versions] 0-nfs: Program
> NLM4 registration failed
> [2014-10-20 06:48:43.293771] E [nfs.c:1312:init] 0-nfs: Failed to initialize
> protocols
> [2014-10-20 06:48:43.293777] E [xlator.c:403:xlator_init] 0-nfs-server:
> Initialization of volume 'nfs-server' failed, review your volfile again
> [2014-10-20 06:48:43.293783] E [graph.c:307:glusterfs_graph_init]
> 0-nfs-server: initializing translator failed
> [2014-10-20 06:48:43.293789] E [graph.c:502:glusterfs_graph_activate]
> 0-graph: init failed
> pending frames:
> frame : type(0) op(0)
> 
> patchset: git://git.gluster.com/glusterfs.git
> signal received: 11
> time of crash: 2014-10-20 06:48:43
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.5.2
> [root@node0 glusterfs]# systemctl status portma
> portma.service
>Loaded: not-found (Reason: No such file or directory)
>Active: inactive (dead)
> 
> 
> 
> Also I have checked the rpcbind service.
> 
> [root@node0 glusterfs]# systemctl status rpcbind.service
> rpcbind.service - RPC bind service
>Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
>Active: active (running) since h 2014-10-20 08:48:39 CEST; 2min 52s ago
>   Process: 1940 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited,
>   status=0/SUCCESS)
>  Main PID: 1946 (rpcbind)
>CGroup: /system.slice/rpcbind.service
>└─1946 /sbin/rpcbind -w
> 
> okt 20 08:48:39 node0.itsmart.cloud systemd[1]: Starting RPC bind service...
> okt 20 08:48:39 node0.itsmart.cloud systemd[1]: Started RPC bind service.
> 
> The restart does not solve this problem.
> 
> 
> I think this is the problem. Why is the portmap status "exited"?
> 
> 
> On node1 it is OK:
> 
> [root@node1 ~]# systemctl status rpcbind.service
> rpcbind.service - RPC bind service
>Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
>

Re: [Gluster-users] NFS not start on localhost

2014-10-20 Thread Demeter Tibor
Hi,

This is the full nfs.log after delete & reboot.
It refers to a portmap registration problem.

[root@node0 glusterfs]# cat nfs.log
[2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: 
Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /var/run/567e0bba7ad7102eae3049e2ad6c3ed7.socket)
[2014-10-20 06:48:43.22] I [socket.c:3561:socket_init] 0-socket.glusterfsd: 
SSL support is NOT enabled
[2014-10-20 06:48:43.224475] I [socket.c:3576:socket_init] 0-socket.glusterfsd: 
using system polling thread
[2014-10-20 06:48:43.224654] I [socket.c:3561:socket_init] 0-glusterfs: SSL 
support is NOT enabled
[2014-10-20 06:48:43.224667] I [socket.c:3576:socket_init] 0-glusterfs: using 
system polling thread
[2014-10-20 06:48:43.235876] I [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 
0-rpc-service: Configured rpc.outstanding-rpc-limit with value 16
[2014-10-20 06:48:43.254087] I [socket.c:3561:socket_init] 0-socket.nfs-server: 
SSL support is NOT enabled
[2014-10-20 06:48:43.254116] I [socket.c:3576:socket_init] 0-socket.nfs-server: 
using system polling thread
[2014-10-20 06:48:43.255241] I [socket.c:3561:socket_init] 0-socket.nfs-server: 
SSL support is NOT enabled
[2014-10-20 06:48:43.255264] I [socket.c:3576:socket_init] 0-socket.nfs-server: 
using system polling thread
[2014-10-20 06:48:43.257279] I [socket.c:3561:socket_init] 0-socket.nfs-server: 
SSL support is NOT enabled
[2014-10-20 06:48:43.257315] I [socket.c:3576:socket_init] 0-socket.nfs-server: 
using system polling thread
[2014-10-20 06:48:43.258135] I [socket.c:3561:socket_init] 0-socket.NLM: SSL 
support is NOT enabled
[2014-10-20 06:48:43.258157] I [socket.c:3576:socket_init] 0-socket.NLM: using 
system polling thread
[2014-10-20 06:48:43.293724] E [rpcsvc.c:1314:rpcsvc_program_register_portmap] 
0-rpc-service: Could not register with portmap
[2014-10-20 06:48:43.293760] E [nfs.c:332:nfs_init_versions] 0-nfs: Program  
NLM4 registration failed
[2014-10-20 06:48:43.293771] E [nfs.c:1312:init] 0-nfs: Failed to initialize 
protocols
[2014-10-20 06:48:43.293777] E [xlator.c:403:xlator_init] 0-nfs-server: 
Initialization of volume 'nfs-server' failed, review your volfile again
[2014-10-20 06:48:43.293783] E [graph.c:307:glusterfs_graph_init] 0-nfs-server: 
initializing translator failed
[2014-10-20 06:48:43.293789] E [graph.c:502:glusterfs_graph_activate] 0-graph: 
init failed
pending frames:
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-10-20 06:48:43
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.5.2
[root@node0 glusterfs]# systemctl status portma
portma.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)



Also I have checked the rpcbind service.

[root@node0 glusterfs]# systemctl status rpcbind.service 
rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
   Active: active (running) since h 2014-10-20 08:48:39 CEST; 2min 52s ago
  Process: 1940 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited, 
status=0/SUCCESS)
 Main PID: 1946 (rpcbind)
   CGroup: /system.slice/rpcbind.service
   └─1946 /sbin/rpcbind -w

okt 20 08:48:39 node0.itsmart.cloud systemd[1]: Starting RPC bind service...
okt 20 08:48:39 node0.itsmart.cloud systemd[1]: Started RPC bind service.

The restart does not solve this problem.


I think this is the problem. Why is the portmap status "exited"?
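
As an aside, the code=exited status of the ExecStart process is most likely
normal: /sbin/rpcbind forks, so the launcher exits with status 0 while the
daemon keeps running as the Main PID (1946 above). A quick sanity check could
be something like:

    systemctl is-active rpcbind
    rpcinfo -p localhost | grep -E '100003|100021'   # NFS / NLM registrations from the Gluster NFS server, if any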


On node1 it is OK:

[root@node1 ~]# systemctl status rpcbind.service 
rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
   Active: active (running) since p 2014-10-17 19:15:21 CEST; 2 days ago
 Main PID: 1963 (rpcbind)
   CGroup: /system.slice/rpcbind.service
   └─1963 /sbin/rpcbind -w

okt 17 19:15:21 node1.itsmart.cloud systemd[1]: Starting RPC bind service...
okt 17 19:15:21 node1.itsmart.cloud systemd[1]: Started RPC bind service.



Thanks in advance

Tibor



- Original message -
> On 10/19/2014 06:56 PM, Niels de Vos wrote:
> > On Sat, Oct 18, 2014 at 01:24:12PM +0200, Demeter Tibor wrote:
> >> Hi,
> >>
> >> [root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log
> >> [2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init]
> >> 0-nfs-server: initializing translator failed
> >> [2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate]
> >> 0-graph: init failed
> >> pending frames:
> >> frame : type(0) op(0)
> >>
> >> patchset: git://git.gluster.com/glusterfs.git
> >> signal received: 11
> >> t

Re: [Gluster-users] NFS not start on localhost

2014-10-18 Thread Demeter Tibor

I'm sorry, but I don't know what you mean by "nfs translator".

I followed the oVirt hosted-engine setup howto and installed glusterfs etc. 
from scratch, so it is a CentOS 7 minimal install. The nfs-utils package is 
installed but disabled, so it does not run as a service.

So it is a simple Gluster volume, and oVirt uses it as the NFS store for 
hosted-engine-setup. 
When I did the whole setup, everything was fine. After a reboot there is no NFS 
on localhost (or on the local IP), only on node1, but my hosted engine can only 
run from this host.


Maybe it is an oVirt bug?

Thanks

Tibor


- Original message -
> Hmmm, do you have any custom translators installed, or have you been trying
> out GlusterFlow?
> 
> I used to get crashes of the NFS translator (looks like this) when I was
> getting GlusterFlow up and running, when everything wasn't quite setup
> correctly.
> 
> If you don't have any custom translators installed (or trying out
> GlusterFlow), ignore this. ;)
> 
> Regards and best wishes,
> 
> Justin Clift
> 
> 
> - Original Message -
> > Hi,
> > 
> > [root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log
> > [2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init]
> > 0-nfs-server: initializing translator failed
> > [2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate]
> > 0-graph: init failed
> > pending frames:
> > frame : type(0) op(0)
> > 
> > patchset: git://git.gluster.com/glusterfs.git
> > signal received: 11
> > time of crash: 2014-10-18 07:41:06
> > configuration details:
> > argp 1
> > backtrace 1
> > dlfcn 1
> > fdatasync 1
> > libpthread 1
> > llistxattr 1
> > setfsid 1
> > spinlock 1
> > epoll.h 1
> > xattr.h 1
> > st_atim.tv_nsec 1
> > package-string: glusterfs 3.5.2
> > 
> > Regards,
> > 
> > Demeter Tibor
> > 
> > Email: tdemeter@itsmart.hu
> > Skype: candyman_78
> > Phone: +36 30 462 0500
> > Web: www.itsmart.hu
> > 
> > IT SMART KFT.
> > 2120 Dunakeszi Wass Albert utca 2. I. em 9.
> > Telefon: +36 30 462-0500 Fax: +36 27 637-486
> > 
> > 
> > - Original message -
> > 
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> --
> GlusterFS - http://www.gluster.org
> 
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
> 
> My personal twitter: twitter.com/realjustinclift
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS not start on localhost

2014-10-18 Thread Demeter Tibor
Hi, 

[root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log 
[2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init] 0-nfs-server: 
initializing translator failed 
[2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate] 0-graph: 
init failed 
pending frames: 
frame : type(0) op(0) 

patchset: git://git.gluster.com/glusterfs.git 
signal received: 11 
time of crash: 2014-10-18 07:41:06
configuration details: 
argp 1 
backtrace 1 
dlfcn 1 
fdatasync 1 
libpthread 1 
llistxattr 1 
setfsid 1 
spinlock 1 
epoll.h 1 
xattr.h 1 
st_atim.tv_nsec 1 
package-string: glusterfs 3.5.2 

Regards, 

Demeter Tibor 

Email: tdemeter@itsmart.hu 
Skype: candyman_78 
Phone: +36 30 462 0500 
Web: www.itsmart.hu 

IT SMART KFT. 
2120 Dunakeszi Wass Albert utca 2. I. em 9. 
Telefon: +36 30 462-0500 Fax: +36 27 637-486 


- Original message -

> Maybe share the last 15-20 lines of your /var/log/glusterfs/nfs.log for the
> consideration of everyone on the list? Thanks.

> From: Demeter Tibor ;
> To: Anirban Ghoshal ;
> Cc: gluster-users ;
> Subject: Re: [Gluster-users] NFS not start on localhost
> Sent: Sat, Oct 18, 2014 10:36:36 AM

> 
> Hi,

> I've tried out these things:

> - nfs.disable on-of
> - iptables disable
> - volume stop-start

> but it's the same.
> So, when I make a new volume, everything is fine.
> After a reboot the NFS server won't listen on localhost (only on the server that has brick0).

> CentOS 7 with the latest oVirt

> Regards,

> Tibor

- Original message -

> > It happens with me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`.
> > You will probably find something out that will help your cause. In general,
> > if you just wish to start the thing up without going into the why of it,
> > try
> > `gluster volume set engine nfs.disable on` followed by ` gluster volume set
> > engine nfs.disable off`. It does the trick quite often for me because it is
> > a polite way to ask mgmt/glusterd to try and respawn the nfs server process
> > if need be. But, keep in mind that this will call a (albeit small) service
> > interruption to all clients accessing volume engine over nfs.
> 

> > Thanks,
> 
> > Anirban
> 

> > On Saturday, 18 October 2014 1:03 AM, Demeter Tibor 
> > wrote:
> 

> > Hi,
> 

> > I have made a GlusterFS volume with NFS support.
> 

> > I don't know why, but after a reboot the nfs does not listen on localhost,
> > only on gs01.
> 

> > [root@node0 ~]# gluster volume info engine
> 

> > Volume Name: engine
> 
> > Type: Replicate
> 
> > Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
> 
> > Status: Started
> 
> > Number of Bricks: 1 x 2 = 2
> 
> > Transport-type: tcp
> 
> > Bricks:
> 
> > Brick1: gs00.itsmart.cloud:/gluster/engine0
> 
> > Brick2: gs01.itsmart.cloud:/gluster/engine1
> 
> > Options Reconfigured:
> 
> > storage.owner-uid: 36
> 
> > storage.owner-gid: 36
> 
> > performance.quick-read: off
> 
> > performance.read-ahead: off
> 
> > performance.io-cache: off
> 
> > performance.stat-prefetch: off
> 
> > cluster.eager-lock: enable
> 
> > network.remote-dio: enable
> 
> > cluster.quorum-type: auto
> 
> > cluster.server-quorum-type: server
> 
> >

Re: [Gluster-users] NFS not start on localhost

2014-10-18 Thread Demeter Tibor
Hi, 

I've tried out these things: 

- nfs.disable on-of 
- iptables disable 
- volume stop-start 

but it's the same. 
So, when I make a new volume, everything is fine. 
After a reboot the NFS server won't listen on localhost (only on the server that has brick0). 

CentOS 7 with the latest oVirt 

Regards, 

Tibor 

- Original message -

> It happens with me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`.
> You will probably find something out that will help your cause. In general,
> if you just wish to start the thing up without going into the why of it, try
> `gluster volume set engine nfs.disable on` followed by ` gluster volume set
> engine nfs.disable off`. It does the trick quite often for me because it is
> a polite way to ask mgmt/glusterd to try and respawn the nfs server process
> if need be. But, keep in mind that this will call a (albeit small) service
> interruption to all clients accessing volume engine over nfs.

> Thanks,
> Anirban

> On Saturday, 18 October 2014 1:03 AM, Demeter Tibor 
> wrote:

> Hi,

> I have made a GlusterFS volume with NFS support.

> I don't know why, but after a reboot the nfs does not listen on localhost,
> only on gs01.

> [root@node0 ~]# gluster volume info engine

> Volume Name: engine
> Type: Replicate
> Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gs00.itsmart.cloud:/gluster/engine0
> Brick2: gs01.itsmart.cloud:/gluster/engine1
> Options Reconfigured:
> storage.owner-uid: 36
> storage.owner-gid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> auth.allow: *
> nfs.disable: off

> [root@node0 ~]# gluster volume status engine
> Status of volume: engine
> Gluster process                            Port    Online  Pid
> ------------------------------------------------------------------
> Brick gs00.itsmart.cloud:/gluster/engine0  50158   Y       3250
> Brick gs01.itsmart.cloud:/gluster/engine1  50158   Y       5518
> NFS Server on localhost                    N/A     N       N/A
> Self-heal Daemon on localhost              N/A     Y       3261
> NFS Server on gs01.itsmart.cloud           2049    Y       5216
> Self-heal Daemon on gs01.itsmart.cloud     N/A     Y       5223

> Can anybody help me?

> Thanks in advance.

> Tibor

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] NFS not start on localhost

2014-10-17 Thread Demeter Tibor

Hi, 

I have made a GlusterFS volume with NFS support. 

I don't know why, but after a reboot NFS does not listen on localhost, only 
on gs01. 




[root@node0 ~]# gluster volume info engine 

Volume Name: engine 
Type: Replicate 
Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3 
Status: Started 
Number of Bricks: 1 x 2 = 2 
Transport-type: tcp 
Bricks: 
Brick1: gs00.itsmart.cloud:/gluster/engine0 
Brick2: gs01.itsmart.cloud:/gluster/engine1 
Options Reconfigured: 
storage.owner-uid: 36 
storage.owner-gid: 36 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.stat-prefetch: off 
cluster.eager-lock: enable 
network.remote-dio: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
auth.allow: * 
nfs.disable: off 


[root@node0 ~]# gluster volume status engine 
Status of volume: engine 
Gluster process                            Port    Online  Pid
------------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0  50158   Y       3250
Brick gs01.itsmart.cloud:/gluster/engine1  50158   Y       5518
NFS Server on localhost                    N/A     N       N/A
Self-heal Daemon on localhost              N/A     Y       3261
NFS Server on gs01.itsmart.cloud           2049    Y       5216
Self-heal Daemon on gs01.itsmart.cloud     N/A     Y       5223






Can anybody help me? 


Thanks in advance. 




Tibor 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] bonding question

2014-09-29 Thread Demeter Tibor
Hi Alex,

Thank you for your replies.

It will be in a production environment, so I need a reliable solution.
Maximum throughput is important, but stability comes first in this 
project.

I don't know yet whether rr, mode=6, or mode=4 (with switch support) is the 
right choice for us.
(I have a D-Link DGS-1510 switch; I think it can do any 802.3ad mode.)
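
For reference, a minimal mode=4 (802.3ad/LACP) bond on CentOS 7 could look
roughly like the sketch below; the slave NIC names (em1..em3) and the
layer3+4 hash policy are assumptions, and the switch side has to be configured
for LACP as well:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
    ONBOOT=yes
    BOOTPROTO=none
    MTU=9000

    # /etc/sysconfig/network-scripts/ifcfg-em1 (repeat for em2 and em3)
    DEVICE=em1
    TYPE=Ethernet
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
    MTU=9000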

Thanks a lot!

Regards,



Tibor

- Original message -
> Yes, but even with rr it's still one tcp connection. At layer2 it gets
> distributed over multiple physical links. TCP doesn't care or notice
> (except for retransmissions as I mentioned before).
> 
> This is one advantage of iSCSI/FCoE/FC/SCSI etc in that you can use
> "multipath" which is transparent, scales per-link close to linear and is
> part of the storage protocol (ie multiple abstract paths between
> initiators and targets) rather than the network stack.
> 
> You could serve up iSCSI from files on a mounted via FUSE from a
> glusterfs cluster, which would enable multipath, but I've only ever seen
> a demo of this on YouTube and I was not convinced that on its own it
> would be crash-consistent or resistant to gluster split-brain. Anyone
> else that's tried this is welcome to put me right on this.
> 
> Cheers
> 
> Alex
> 
> 
> On 29/09/14 15:10, Demeter Tibor wrote:
> > Hi,
> >
> > I would like to use glusterfs as ovirt-vmstore.
> > In this case, will a VM running on one compute node use only one
> > TCP connection?
> >
> > Thanks
> >
> >
> >
> >
> > - Original message -
> >> OK, I understand this is a network-based solution, but I think 100 MB/s is
> >> possible with one NIC too.
> >> I'm just wondering whether my bonding isn't working properly.
> >> You should test with multiple clients/dd streams.
> >>
> >> http://serverfault.com/questions/569060/link-aggregation-lacp-802-3ad-max-throughput/
> >>
> >> rr
> >>
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >>
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] bonding question

2014-09-29 Thread Demeter Tibor
In the GlusterFS documentation the recommended mode is mode=6. 
My switch (D-Link DGS-1510) supports 802.3ad; in this case, is that better 
than mode=6?
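
Whichever mode ends up configured, the kernel reports what was actually
negotiated, so it is easy to verify:

    cat /proc/net/bonding/bond0    # shows the active mode, LACP partner details and per-slave state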


Tibor

- Original message -
> Indeed. Only the rr (round robin) mode will get higher performance on a
> single stream. It also means that packets may be received out-of-order
> which can cause retransmissions (so it should never be used for UDP
> services like SIP/RTP). AFAIK it only works with Cisco etherchannel and
> does not scale well.
> 
> Multiple streams are balanced using the XOR of the two endpoint MAC
> addresses in mode 4. This can be changed to include L3 data (eg src/dest
> IP) but switch support is again limited for the alternate algo. I know
> my kit can't be changed to add L3 data. As long as you have multiple
> clients the default mode 4 will scale almost linearly and will be
> guaranteed to work across any switch that supports LACP.
> 
> Cheers
> 
> Alex
> 
> On 29/09/14 15:03, Reinis Rozitis wrote:
> >> OK, I understand this is a network-based solution, but I think
> >> 100 MB/s is possible with one NIC too.
> >> I'm just wondering whether my bonding isn't working properly.
> >
> > You should test with multiple clients/dd streams.
> >
> > http://serverfault.com/questions/569060/link-aggregation-lacp-802-3ad-max-throughput/
> >
> >
> > rr
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] bonding question

2014-09-29 Thread Demeter Tibor
Hi, 

I would like to use GlusterFS as the oVirt VM store.
In this case, will a VM running on one compute node use only one TCP 
connection?
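
One rough way to check on a compute node is to count the client's TCP
connections; a Gluster FUSE client normally keeps one connection per brick, for
example:

    ss -tnp | grep glusterfs    # established connections from the FUSE client to the brick processes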

Thanks




- Original message -
> > OK, I understand this is a network-based solution, but I think 100 MB/s is
> > possible with one NIC too.
> > I'm just wondering whether my bonding isn't working properly.
> 
> You should test with multiple clients/dd streams.
> 
> http://serverfault.com/questions/569060/link-aggregation-lacp-802-3ad-max-throughput/
> 
> rr
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] bonding question

2014-09-29 Thread Demeter Tibor

Hi, 

I ran some short tests with GlusterFS and bonding, but I have performance issues. 

Environment: 

- bonding mode=4 (with switch support) or mode=6 
- CentOS 7 
- VLANs 
- two servers with 4 NICs per node: one NIC on the internet (this is the default 
route) and 3 NICs in a bonded interface 
- MTU 9000 on all interfaces (bonds, VLANs, eths, etc.), MTU 9216 on the 
switch ports 
- each host's VLAN interfaces can ping every host on the VLAN subnets and on the 
non-VLAN subnets 
- the volume uses the bonded VLAN interfaces as bricks 



[root@node1 lock]# gluster vol info 

Volume Name: meta 
Type: Replicate 
Volume ID: f4d026e7-3edd-442f-9207-f0a849acebf5 
Status: Started 
Number of Bricks: 1 x 2 = 2 
Transport-type: tcp 
Bricks: 
Brick1: gs00.itsmart.cloud:/gluster/meta0 
Brick2: gs01.itsmart.cloud:/gluster/meta1 


I did this test: 



[root@node0 lock]# dd if=/dev/zero of=/mnt/lock/disk bs=1M count=1000 
conv=fdatasync 
1000+0 records in 
1000+0 records out 
1048576000 bytes (1,0 GB) copied, 10,3035 s, 102 MB/s 
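
As suggested elsewhere in the thread, a multi-stream variant of the same test
shows whether the bond spreads flows across links; a rough sketch, assuming the
volume stays mounted at /mnt/lock (keep in mind that with the default layer2
hash, flows between the same two hosts may still share one link):

    for i in 1 2 3 4; do
      dd if=/dev/zero of=/mnt/lock/disk$i bs=1M count=1000 conv=fdatasync &
    done
    wait    # the aggregate throughput of the four writers is what matters here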




I compared it with a local HDD speed test: 




[root@node0 lock]# dd if=/dev/zero of=/home/disk bs=1M count=1000 
conv=fdatasync 
1000+0 records in 
1000+0 records out 
1048576000 bytes (1,0 GB) copied, 3,04411 s, 344 MB/s 




OK, I understand this is a network-based solution, but I think 100 MB/s should 
be possible with one NIC too. 

I'm just wondering whether my bonding isn't working properly. 

What do you think, is this OK? 




The port utilization on the switch is minimal; there is heavier traffic on only 
two ports. 




Thanks in advance. 







Tibor 





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [ovirt-users] Raid-5 like gluster method?

2014-09-28 Thread Demeter Tibor
Hi, 
Do you recommend this feature for a production environment? 
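
For reference, the 3.6 disperse volume type mentioned in the reply quoted below
is created roughly like this (a sketch with hypothetical brick paths; disperse 3
with redundancy 1 gives RAID-5-like usable capacity of two bricks out of three):

    gluster volume create dispvol disperse 3 redundancy 1 \
        node0:/data/brick node1:/data/brick node2:/data/brick
    gluster volume start dispvol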
Tibor 

- Original message -

> use the 3.6 disperse feature, but it is beta2 now; you could use it when it is GA

> On Wed, Sep 24, 2014 at 2:55 PM, Sahina Bose < sab...@redhat.com > wrote:

> > [+gluster-users]
> 

> > On 09/24/2014 11:59 AM, Demeter Tibor wrote:
> 

> > > Hi,
> > 
> 

> > > Is there any method in GlusterFS like RAID-5?
> > 
> 

> > > I have three nodes, and each node has 5 TB of disk. I would like to utilize
> > > all of the space with redundancy, like RAID-5.
> > 
> 
> > > If that is not possible, can I get RAID-6-like redundancy within three nodes
> > > (two bricks per node)?
> > 
> 
> > > Thanks in advance,
> > 
> 

> > > Tibor
> > 
> 

> > > ___
> > 
> 
> > > Users mailing list us...@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > 
> 

> > ___
> 
> > Gluster-users mailing list
> 
> > Gluster-users@gluster.org
> 
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [ovirt-users] Raid-5 like gluster method?

2014-09-24 Thread Demeter Tibor
Hi, 

Could anybody help me? 

Tibor 

> [+gluster-users]

> On 09/24/2014 11:59 AM, Demeter Tibor wrote:

> > Hi,
> 

> > Is there any method in GlusterFS like RAID-5?
> 

> > I have three nodes, and each node has 5 TB of disk. I would like to utilize
> > all of the space with redundancy, like RAID-5.
> 
> > If that is not possible, can I get RAID-6-like redundancy within three nodes
> > (two bricks per node)?
> 
> > Thanks in advance,
> 

> > Tibor
> 

> > ___
> 
> > Users mailing list us...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users