Hello and thank you so far,
What I have recognized is that running with more than one NIC confuses
glusterfs 3.5. I never saw this on my glusterfs 3.4 and 3.2 systems, which
are still working.
So I wiped gluster clean with yum erase glusterfs* and set it up again:
Logged in to both of my nodes in the 135 subnet, e.g.:
ssh 192.168.135.36 (centclust1)  (172.17.2.30 is the 2nd NIC)
ssh 192.168.135.46 (centclust2)  (172.17.2.31 is the 2nd NIC)
Started gluster on both nodes: service glusterd start.
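(If glusterd should also come up after a reboot, the usual CentOS 6 way is chkconfig glusterd on.)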
Did the peer probe on 192.168.135.36/centclust1:
gluster peer probe 192.168.135.46   (previously I had done gluster peer probe centclust2)
This resulted in:
[root@centclust1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.135.46
Uuid: c395c15d-5187-4e5b-b680-57afcb88b881
State: Peer in Cluster (Connected)

[root@centclust2 backup]# gluster peer status
Number of Peers: 1

Hostname: 192.168.135.36
Uuid: 94d5903b-ebe9-40d6-93bf-c2f2e92909a0
State: Peer in Cluster (Connected)
The significant difference: gluster now shows the IP of both nodes.
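
(For reference, glusterd also stores the resolved peer address on disk, so a quick way to confirm what it recorded, assuming the default state directory, is:

cat /var/lib/glusterd/peers/*

The hostname entry in there should now carry the 192.168.135.x address instead of a name that resolves to the wrong NIC.)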

Now I created the replicated volume:
gluster volume create smbcluster replica 2 transport tcp  
192.168.135.36:/sbu/glusterfs/export  192.168.135.46:/sbu/glusterfs/export
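
(Before starting it, gluster volume info smbcluster is a quick way to confirm the replica count and that both bricks are listed with their 192.168.135.x addresses.)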
Started the volume (gluster volume start smbcluster), then checked:
gluster volume status
Status of volume: smbcluster
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.135.36:/sbu/glusterfs/export              49152   Y       27421
Brick 192.168.135.46:/sbu/glusterfs/export              49152   Y       12186
NFS Server on localhost                                 2049    Y       27435
Self-heal Daemon on localhost                           N/A     Y       27439
NFS Server on 192.168.135.46                            2049    Y       12200
Self-heal Daemon on 192.168.135.46                      N/A     Y       12204

Task Status of Volume smbcluster
------------------------------------------------------------------------------
There are no active volume tasks
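
(To be sure the bricks are answering on the right interface, the check KP suggested further down in this thread applies here as well:

netstat -ntap | grep gluster

On each node this should show the glusterfsd brick listening on 49152 and the connections established to the 192.168.135.x peer rather than to the 172.17.2.x address.)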

Mounted the volume on both nodes:

Centclust1:mount -t glusterfs 192.168.135.36:/smbcluster /mntgluster -o acl
Centclust2:mount -t glusterfs 192.168.135.46:/smbcluster /mntgluster -o acl
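
A quick way to verify the replication (the same test as in the earlier mails) is to create a file on one mount and check that it shows up on the other:

touch /mntgluster/test.txt    # on centclust1
ls /mntgluster                # on centclust2 - test.txt should appear here too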

And BINGO up and running!!!!!!!


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de




-----Original Message-----
From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
Sent: Wednesday, 30 July 2014 16:52
To: muel...@tropenklinik.de
Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5

Daniel,

I didn't get a chance to follow up with
debugging this issue. I will look into this and get back to you. I suspect that 
there is something different about the network layer behaviour in your setup.

~KP

----- Original Message -----
> Just another test:
> [root@centclust1 sicherung]# getfattr -d -e hex -m . /sicherung/bu
> getfattr: Removing leading '/' from absolute path names
> # file: sicherung/bu
> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c6
> 55f743a733000 
> trusted.afr.smbbackup-client-0=0x000000000000000000000000
> trusted.afr.smbbackup-client-1=0x000000000000000200000001
> trusted.gfid=0x00000000000000000000000000000001
> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
> trusted.glusterfs.volume-id=0x6f51d002e634437db58d9b952693f1df
> 
> [root@centclust2 glusterfs]# getfattr -d -e hex -m . /sicherung/bu
> getfattr: Removing leading '/' from absolute path names
> # file: sicherung/bu
> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c6
> 55f743a733000
> trusted.afr.smbbackup-client-0=0x000000000000000200000001
> trusted.afr.smbbackup-client-1=0x000000000000000000000000
> trusted.gfid=0x00000000000000000000000000000001
> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
> trusted.glusterfs.volume-id=0x6f51d002e634437db58d9b952693f1df
> 
> Is this ok?
> 
> After long testing and an /etc/init.d/network restart, the replication 
> worked once, for a short time, and then stopped!?
> Any idea???????
> 
> 
> EDV Daniel Müller
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> 
> "Der Mensch ist die Medizin des Menschen"
> 
> 
> 
> 
> -----Original Message-----
> From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
> Sent: Wednesday, 30 July 2014 11:09
> To: muel...@tropenklinik.de
> Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
> Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5
> 
> Could you provide the output of the following command?
> 
> netstat -ntap | grep gluster
> 
> This should tell us if glusterfsd processes (bricks) are listening on 
> all interfaces.
> 
> ~KP
> 
> ----- Original Message -----
> > Just one idea:
> > I added a second NIC with a 172.17.2... address on both machines.
> > Could this cause the trouble!?
> > 
> > EDV Daniel Müller
> > 
> > Leitung EDV
> > Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
> > 72076 Tübingen
> > Tel.: 07071/206-463, Fax: 07071/206-499
> > eMail: muel...@tropenklinik.de
> > Internet: www.tropenklinik.de
> > 
> > 
> > 
> > 
> > -----Original Message-----
> > From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
> > Sent: Wednesday, 30 July 2014 09:29
> > To: muel...@tropenklinik.de
> > Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
> > Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5
> > 
> > Daniel,
> > 
> > From a quick look, I see that glustershd and the NFS client are 
> > unable to connect to one of the bricks. This results in data 
> > from the mounts being written to the local brick only.
> > I should have asked this before: could you provide the brick logs as well?
> > 
> > Could you also try to connect to the bricks using telnet?
> > For example, from centclust1: telnet centclust2 <brick-port>.
> > 
> > ~KP
> > 
> > ----- Original Message -----
> > > Here are my logs. I have disabled SSL in the meantime, but it is the 
> > > same situation. No replication!?
> > > 
> > > 
> > > 
> > > EDV Daniel Müller
> > > 
> > > Leitung EDV
> > > Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
> > > 72076 Tübingen
> > > Tel.: 07071/206-463, Fax: 07071/206-499
> > > eMail: muel...@tropenklinik.de
> > > Internet: www.tropenklinik.de
> > > 
> > > 
> > > 
> > > 
> > > 
> > > -----Original Message-----
> > > From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
> > > Sent: Wednesday, 30 July 2014 08:56
> > > To: muel...@tropenklinik.de
> > > Cc: gluster-users@gluster.org; gluster-devel-boun...@gluster.org
> > > Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5
> > > 
> > > Could you attach the entire mount and glustershd log files to this 
> > > thread?
> > > 
> > > ~KP
> > > 
> > > ----- Original Message -----
> > > > NO ONE!??
> > > > This is an entry of my glustershd.log:
> > > > [2014-07-30 06:40:59.294334] W
> > > > [client-handshake.c:1846:client_dump_version_cbk] 0-smbbackup-client-1:
> > > > received RPC status error
> > > > [2014-07-30 06:40:59.294352] I [client.c:2229:client_rpc_notify]
> > > > 0-smbbackup-client-1: disconnected from 172.17.2.31:49152. 
> > > > Client process will keep trying to connect to glusterd until 
> > > > brick's port is available
> > > > 
> > > > 
> > > > This is from mnt-sicherung.log:
> > > > [2014-07-30 06:40:38.259850] E [socket.c:2820:socket_connect]
> > > > 1-smbbackup-client-0: connection attempt on 172.17.2.30:24007
> > > > failed, (Connection timed out)
> > > > [2014-07-30 06:40:41.275120] I [rpc-clnt.c:1729:rpc_clnt_reconfig]
> > > > 1-smbbackup-client-0: changing port to 49152 (from 0)
> > > > 
> > > > [root@centclust1 sicherung]# gluster --remote-host=centclust1 peer status
> > > > Number of Peers: 1
> > > > 
> > > > Hostname: centclust2
> > > > Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
> > > > State: Peer in Cluster (Connected)
> > > > 
> > > > [root@centclust1 sicherung]# gluster --remote-host=centclust2 peer status
> > > > Number of Peers: 1
> > > > 
> > > > Hostname: 172.17.2.30
> > > > Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
> > > > State: Peer in Cluster (Connected)
> > > > 
> > > > [root@centclust1 ssl]# ps aux | grep gluster
> > > > root     13655  0.0  0.0 413848 16872 ?        Ssl  08:10   0:00
> > > > /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
> > > > root     13958  0.0  0.0 12139920 44812 ?      Ssl  08:11   0:00
> > > > /usr/sbin/glusterfsd -s centclust1.tplk.loc --volfile-id 
> > > > smbbackup.centclust1.tplk.loc.sicherung-bu -p 
> > > > /var/lib/glusterd/vols/smbbackup/run/centclust1.tplk.loc-sicherung-bu.
> > > > pid -S /var/run/4c65260e12e2d3a9a5549446f491f383.socket
> > > > --brick-name /sicherung/bu -l
> > > > /var/log/glusterfs/bricks/sicherung-bu.log
> > > > --xlator-option
> > > > *-posix.glusterd-uuid=99fe6a2c-df7e-4475-a7bc-a35abba620fb
> > > > --brick-port
> > > > 49152 --xlator-option smbbackup-server.listen-port=49152
> > > > root     13972  0.0  0.0 815748 58252 ?        Ssl  08:11   0:00
> > > > /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p 
> > > > /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log 
> > > > -S /var/run/ee6f37fc79b9cb1968eca387930b39fb.socket
> > > > root     13976  0.0  0.0 831160 29492 ?        Ssl  08:11   0:00
> > > > /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd 
> > > > -p /var/lib/glusterd/glustershd/run/glustershd.pid -l 
> > > > /var/log/glusterfs/glustershd.log -S 
> > > > /var/run/aa970d146eb23ba7124e6c4511879850.socket --xlator-option 
> > > > *replicate*.node-uuid=99fe6a2c-df7e-4475-a7bc-a35abba620fb
> > > > root     15781  0.0  0.0 105308   932 pts/1    S+   08:47   0:00 grep
> > > > gluster
> > > > root     29283  0.0  0.0 451116 56812 ?        Ssl  Jul29   0:21
> > > > /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p 
> > > > /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log 
> > > > -S /var/run/a7fcb1d1d3a769d28df80b85ae5d13c4.socket
> > > > root     29287  0.0  0.0 335432 25848 ?        Ssl  Jul29   0:21
> > > > /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd 
> > > > -p /var/lib/glusterd/glustershd/run/glustershd.pid -l 
> > > > /var/log/glusterfs/glustershd.log -S 
> > > > /var/run/833e60f976365c2a307f92fb233942a2.socket --xlator-option 
> > > > *replicate*.node-uuid=64b1a7eb-2df3-47bd-9379-39c29e5a001a
> > > > root     31698  0.0  0.0 1438392 57952 ?       Ssl  Jul29   0:12
> > > > /usr/sbin/glusterfs --acl --volfile-server=centclust1.tplk.loc
> > > > --volfile-id=/smbbackup /mnt/sicherung
> > > > 
> > > > [root@centclust2 glusterfs]#  ps aux | grep gluster
> > > > root      1561  0.0  0.0 1481492 60152 ?       Ssl  Jul29   0:12
> > > > /usr/sbin/glusterfs --acl --volfile-server=centclust2.tplk.loc
> > > > --volfile-id=/smbbackup /mnt/sicherung
> > > > root     15656  0.0  0.0 413848 16832 ?        Ssl  08:11   0:01
> > > > /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
> > > > root     15942  0.0  0.0 12508704 43860 ?      Ssl  08:11   0:00
> > > > /usr/sbin/glusterfsd -s centclust2.tplk.loc --volfile-id 
> > > > smbbackup.centclust2.tplk.loc.sicherung-bu -p 
> > > > /var/lib/glusterd/vols/smbbackup/run/centclust2.tplk.loc-sicherung-bu.
> > > > pid -S /var/run/40a554af3860eddd5794b524576d0520.socket
> > > > --brick-name /sicherung/bu -l
> > > > /var/log/glusterfs/bricks/sicherung-bu.log
> > > > --xlator-option
> > > > *-posix.glusterd-uuid=4f15e9bd-9b5a-435b-83d2-4ed202c66b11
> > > > --brick-port
> > > > 49152 --xlator-option smbbackup-server.listen-port=49152
> > > > root     15956  0.0  0.0 825992 57496 ?        Ssl  08:11   0:00
> > > > /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p 
> > > > /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log 
> > > > -S /var/run/602d1d8ba7b80ded2b70305ed7417cf5.socket
> > > > root     15960  0.0  0.0 841404 26760 ?        Ssl  08:11   0:00
> > > > /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd 
> > > > -p /var/lib/glusterd/glustershd/run/glustershd.pid -l 
> > > > /var/log/glusterfs/glustershd.log -S 
> > > > /var/run/504d01c7f7df8b8306951cc2aaeaf52c.socket --xlator-option
> > > > *replicate*.node-uuid=4f15e9bd-9b5a-435b-83d2-4ed202c66b11
> > > > root     17728  0.0  0.0 105312   936 pts/0    S+   08:48   0:00 grep
> > > > gluster
> > > > root     32363  0.0  0.0 451100 55584 ?        Ssl  Jul29   0:21
> > > > /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p 
> > > > /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log 
> > > > -S /var/run/73054288d1cadfb87b4b9827bd205c7b.socket
> > > > root     32370  0.0  0.0 335432 26220 ?        Ssl  Jul29   0:21
> > > > /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd 
> > > > -p /var/lib/glusterd/glustershd/run/glustershd.pid -l 
> > > > /var/log/glusterfs/glustershd.log -S 
> > > > /var/run/de1427ce373c792c76c38b12c106f029.socket --xlator-option
> > > > *replicate*.node-uuid=83e6d78c-0119-4537-8922-b3e731718864
> > > > 
> > > > 
> > > > 
> > > > 
> > > > Leitung EDV
> > > > Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
> > > > 72076 Tübingen
> > > > Tel.: 07071/206-463, Fax: 07071/206-499
> > > > eMail: muel...@tropenklinik.de
> > > > Internet: www.tropenklinik.de
> > > > 
> > > > 
> > > > 
> > > > -----Original Message-----
> > > > From: Daniel Müller [mailto:muel...@tropenklinik.de]
> > > > Sent: Tuesday, 29 July 2014 16:02
> > > > To: 'gluster-users@gluster.org'
> > > > Subject: Strange issue concerning glusterfs 3.5.1 on centos 6.5
> > > > 
> > > > Dear all,
> > > > 
> > > > there is a strange issue with centos 6.5 and glusterfs 3.5.1:
> > > > 
> > > >  glusterd -V
> > > > glusterfs 3.5.1 built on Jun 24 2014 15:09:41
> > > > Repository revision: git://git.gluster.com/glusterfs.git
> > > > Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/> 
> > > > GlusterFS comes with ABSOLUTELY NO WARRANTY.
> > > > It is licensed to you under your choice of the GNU Lesser 
> > > > General Public License, version 3 or any later version (LGPLv3 
> > > > or later), or the GNU General Public License, version 2 (GPLv2), 
> > > > in all cases as published by the Free Software Foundation
> > > > 
> > > > I am trying to set up a replicated 2-brick volume on two centos 6.5 servers.
> > > > I can probe fine and my nodes report no errors:
> > > >  
> > > > [root@centclust1 mnt]# gluster peer status
> > > > Number of Peers: 1
> > > > 
> > > > Hostname: centclust2
> > > > Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
> > > > State: Peer in Cluster (Connected)
> > > > 
> > > > [root@centclust2 sicherung]# gluster peer status
> > > > Number of Peers: 1
> > > > 
> > > > Hostname: 172.17.2.30
> > > > Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
> > > > State: Peer in Cluster (Connected)
> > > > 
> > > > Now I set up a replicated volume on an XFS disk: /dev/sdb1 on
> > > > /sicherung type xfs (rw)
> > > > 
> > > > gluster volume create smbbackup replica 2 transport tcp 
> > > > centclust1.tplk.loc:/sicherung/bu 
> > > > centclust2.tplk.loc:/sicherung/bu
> > > > 
> > > > gluster volume status smbbackup reports ok:
> > > > 
> > > > [root@centclust1 mnt]# gluster volume status smbbackup
> > > > Status of volume: smbbackup
> > > > Gluster process                                         Port    Online  Pid
> > > > ------------------------------------------------------------------------------
> > > > Brick centclust1.tplk.loc:/sicherung/bu                 49152   Y       31969
> > > > Brick centclust2.tplk.loc:/sicherung/bu                 49152   Y       2124
> > > > NFS Server on localhost                                 2049    Y       31983
> > > > Self-heal Daemon on localhost                           N/A     Y       31987
> > > > NFS Server on centclust2                                2049    Y       2138
> > > > Self-heal Daemon on centclust2                          N/A     Y       2142
> > > > 
> > > > Task Status of Volume smbbackup
> > > > ------------------------------------------------------------------------------
> > > > There are no active volume tasks
> > > > 
> > > > [root@centclust2 sicherung]# gluster volume status smbbackup
> > > > Status of volume: smbbackup
> > > > Gluster process                                         Port    Online  Pid
> > > > ------------------------------------------------------------------------------
> > > > Brick centclust1.tplk.loc:/sicherung/bu                 49152   Y       31969
> > > > Brick centclust2.tplk.loc:/sicherung/bu                 49152   Y       2124
> > > > NFS Server on localhost                                 2049    Y       2138
> > > > Self-heal Daemon on localhost                           N/A     Y       2142
> > > > NFS Server on 172.17.2.30                               2049    Y       31983
> > > > Self-heal Daemon on 172.17.2.30                         N/A     Y       31987
> > > > 
> > > > Task Status of Volume smbbackup
> > > > ------------------------------------------------------------------------------
> > > > There are no active volume tasks
> > > > 
> > > > I mounted the vol on both servers with:
> > > > 
> > > > mount -t glusterfs centclust1.tplk.loc:/smbbackup /mnt/sicherung -o acl
> > > > mount -t glusterfs centclust2.tplk.loc:/smbbackup /mnt/sicherung -o acl
> > > > 
> > > > But when I write to /mnt/sicherung the files are not replicated 
> > > > to the other node in any way!??
> > > > 
> > > > They stay on the local server in /mnt/sicherung and /sicherung/bu.
> > > > On each node separately:
> > > > [root@centclust1 sicherung]# pwd
> > > > /mnt/sicherung
> > > > 
> > > > [root@centclust1 sicherung]# touch test.txt
> > > > [root@centclust1 sicherung]# ls
> > > > test.txt
> > > > [root@centclust2 sicherung]# pwd
> > > > /mnt/sicherung
> > > > [root@centclust2 sicherung]# ls
> > > > more.txt
> > > > [root@centclust1 sicherung]# ls -la /sicherung/bu
> > > > total 0
> > > > drwxr-xr-x.  3 root root  38 29. Jul 15:56 .
> > > > drwxr-xr-x.  3 root root  15 29. Jul 14:31 ..
> > > > drw-------. 15 root root 142 29. Jul 15:56 .glusterfs
> > > > -rw-r--r--.  2 root root   0 29. Jul 15:56 test.txt
> > > > [root@centclust2 sicherung]# ls -la /sicherung/bu
> > > > total 0
> > > > drwxr-xr-x. 3 root root 38 29. Jul 15:32 .
> > > > drwxr-xr-x. 3 root root 15 29. Jul 14:31 ..
> > > > drw-------. 7 root root 70 29. Jul 15:32 .glusterfs
> > > > -rw-r--r--. 2 root root  0 29. Jul 15:32 more.txt
> > > > 
> > > > 
> > > > 
> > > > Greetings
> > > > Daniel
> > > > 
> > > > 
> > > > 
> > > > EDV Daniel Müller
> > > > 
> > > > Leitung EDV
> > > > Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
> > > > 72076 Tübingen
> > > > Tel.: 07071/206-463, Fax: 07071/206-499
> > > > eMail: muel...@tropenklinik.de
> > > > Internet: www.tropenklinik.de
> > > > 
> > > > 
> > > > 
> > > > 

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
