Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread Steve Postma
You're right, Eco.
I am only able to telnet on port 24007; ports 24009, 24010 and 24011 are all
connection refused. iptables is not running on any of the machines.


mseas-data: 24007 and 24009 open; 24010 and 24011 closed
nas-0-0: 24007 open; 24009, 24010 and 24011 closed
nas-0-1: 24007 open; 24009, 24010 and 24011 closed



Steve

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 6:32 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,
On 11/20/2012 02:43 PM, Steve Postma wrote:
 Hi Eco,
 I believe you are asking that I run

 find /mount/glusterfs > /dev/null

 only? That should take care of the issue?
Meaning, run a recursive find against the client mount point
(/mount/glusterfs is used as an example in the docs). This should solve
the specific issue of the files not being visible.
However, the issue of the disk space discrepancy is different. From the
df output, the only filesystem with 18GB is / on the mseas-data node, I
assume this is where you are mounting from?
If so, then the issue goes back to one of connectivity: most likely the
gluster bricks are still not being reached, which may actually
be the root cause of both problems.

Can you confirm that iptables is off on all hosts (and from any client
you would connect from)? I had seen your previous tests with telnet,
was this done from and to all hosts from the client machine?
Make sure that at a minimum you can hit 24007, 24009, 24010 and 24011.
This will test the management port and the expected initial port for
each of the bricks in the volume.
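A quick way to run that check from each client is a small shell loop using bash's built-in /dev/tcp device (the `check_port` helper is my own sketch, not a gluster tool; hostnames and ports are the ones discussed above):

```shell
# Probe a TCP port and report "open" or "closed". Uses bash's /dev/tcp
# pseudo-device, so neither telnet nor nc is required.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}

# Run from each client against every server in the volume:
for host in mseas-data nas-0-0 nas-0-1; do
  for port in 24007 24009 24010 24011; do
    check_port "$host" "$port"
  done
done
```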


Thanks,

Eco

 Thanks for your time,
 Steve

 
 From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org]
 on behalf of Eco Willson [ewill...@redhat.com]
 Sent: Tuesday, November 20, 2012 5:39 PM
 To: gluster-users@gluster.org
 Subject: Re: [Gluster-users] FW: cant mount gluster volume

 Steve,

 On 11/20/2012 01:32 PM, Steve Postma wrote:

 [root@mseas-data gdata]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/sda1 18G 6.6G 9.7G 41% /
 /dev/sda6 77G 49G 25G 67% /scratch
 /dev/sda3 18G 3.8G 13G 24% /var
 /dev/sda2 18G 173M 16G 2% /tmp
 tmpfs 3.9G 0 3.9G 0% /dev/shm
 /dev/mapper/the_raid-lv_home
 3.0T 2.2T 628G 79% /home
 glusterfs#mseas-data:/gdata
 15T 14T 606G 96% /gdata


 [root@nas-0-0 ~]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/sda3 137G 33G 97G 26% /
 /dev/sda1 190M 24M 157M 14% /boot
 tmpfs 2.0G 0 2.0G 0% /dev/shm
 /dev/sdb1 21T 19T 1.5T 93% /mseas-data-0-0

 [root@nas-0-1 ~]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/sda3 137G 34G 97G 26% /
 /dev/sda1 190M 24M 157M 14% /boot
 tmpfs 2.0G 0 2.0G 0% /dev/shm
 /dev/sdb1 21T 19T 1.3T 94% /mseas-data-0-1


 Thanks for confirming.

 cat of /etc/glusterfs/glusterd.vol from backup

 [root@mseas-data glusterd]# cat 
 /root/mseas_backup/etc/glusterfs/glusterd.vol
 volume management
     type mgmt/glusterd
     option working-directory /etc/glusterd
     option transport-type socket,rdma
     option transport.socket.keepalive-time 10
     option transport.socket.keepalive-interval 2
 end-volume


 The vol file for 2.x would be in /etc/glusterfs/<volume name>.vol I believe.
 It should contain an entry similar to this output for each of the servers
 toward the top of the file.

 The article you referenced is looking for the words glusterfs-volgen in a vol
 file. I have used locate and grep, but can find no such entry in any .vol
 files.


 This would not appear if the glusterfs-volgen command wasn't used during 
 creation. The main consideration is to ensure that you have the command in 
 step 5:

 find /mount/glusterfs > /dev/null

 - Eco

 Thanks




 
 From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org]
 on behalf of Eco Willson [ewill...@redhat.com]
 Sent: Tuesday, November 20, 2012 4:03 PM
 To: gluster-users@gluster.org
 Subject: Re: [Gluster-users] FW: cant mount gluster volume

 Steve,



 On 11/20/2012 12:03 PM, Steve Postma wrote:


 They do show the expected size. I have a backup of /etc/glusterd and
 /etc/glusterfs from before the upgrade.


 Can we see the vol file from the 2.x install and the output of df -h for
 each of the bricks?


 It's interesting that gluster volume info shows the correct path for each
 machine.

 These are the correct

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread Steve Postma
Eco,
they all appear to be using 24007 and 24009; none of them are listening on
24010 or 24011.
Steve

[root@nas-0-0 ~]# lsof | grep 24010
[root@nas-0-0 ~]# lsof | grep 24011
[root@nas-0-0 ~]# lsof | grep 24009
glusterfs 3536  root   18u  IPv4 143541  0t0  TCP 10.1.1.10:1022->gluster-data:24009 (ESTABLISHED)
[root@nas-0-0 ~]# lsof | grep 24007
glusterd  3515  root    6u  IPv4 143469  0t0  TCP nas-0-0:24007->nas-0-0:1022 (ESTABLISHED)
glusterd  3515  root    8u  IPv4  77801  0t0  TCP *:24007 (LISTEN)
glusterd  3515  root   12u  IPv4 143805  0t0  TCP 10.1.1.10:1020->gluster-data:24007 (ESTABLISHED)
glusterfs 3536  root    7u  IPv4 143468  0t0  TCP nas-0-0:1022->nas-0-0:24007 (ESTABLISHED)
glusterfs 3536  root   16u  IPv4 399743  0t0  TCP 10.1.1.10:1023->gluster-0-0:24007 (SYN_SENT)
glusterfs 3536  root   17u  IPv4 399745  0t0  TCP 10.1.1.10:1021->gluster-0-1:24007 (SYN_SENT)



[root@nas-0-1 ~]# lsof | grep 24007
glusterd  3447  root    6u  IPv4  77189  0t0  TCP nas-0-1:24007->nas-0-1:1021 (ESTABLISHED)
glusterd  3447  root    8u  IPv4  11540  0t0  TCP *:24007 (LISTEN)
glusterd  3447  root   10u  IPv4 317363  0t0  TCP 10.1.1.11:1022->gluster-0-0:24007 (SYN_SENT)
glusterd  3447  root   12u  IPv4  77499  0t0  TCP 10.1.1.11:1023->gluster-data:24007 (ESTABLISHED)
glusterfs 3468  root    7u  IPv4  77188  0t0  TCP nas-0-1:1021->nas-0-1:24007 (ESTABLISHED)
glusterfs 3468  root   17u  IPv4 317361  0t0  TCP 10.1.1.11:1019->gluster-0-1:24007 (SYN_SENT)
[root@nas-0-1 ~]# lsof | grep 24009
glusterfs 3468  root   18u  IPv4  77259  0t0  TCP 10.1.1.11:1021->gluster-data:24009 (ESTABLISHED)
[root@nas-0-1 ~]# lsof | grep 24010
[root@nas-0-1 ~]# lsof | grep 24011

glusterfs  4301  root   16u  IPv4 586766  TCP 10.1.1.2:1021->gluster-0-0:24007 (SYN_SENT)
glusterfs  4301  root   17u  IPv4 586768  TCP 10.1.1.2:1020->gluster-0-1:24007 (SYN_SENT)
glusterfs 17526  root    8u  IPv4 205563  TCP mseas-data.mit.edu:1015->mseas-data.mit.edu:24007 (ESTABLISHED)
[root@mseas-data ~]# lsof | grep 24009
glusterfs  4008  root   10u  IPv4  77692  TCP *:24009 (LISTEN)
glusterfs  4008  root   13u  IPv4 148473  TCP gluster-data:24009->gluster-data:1018 (ESTABLISHED)
glusterfs  4008  root   14u  IPv4  82251  TCP gluster-data:24009->10.1.1.10:1022 (ESTABLISHED)
glusterfs  4008  root   15u  IPv4  82440  TCP gluster-data:24009->10.1.1.11:1021 (ESTABLISHED)
glusterfs  4008  root   16u  IPv4 205600  TCP gluster-data:24009->gluster-data:1023 (ESTABLISHED)
glusterfs  4008  root   17u  IPv4 218671  TCP 10.1.1.2:24009->10.1.1.1:1018 (ESTABLISHED)
glusterfs  4301  root   18u  IPv4 148472  TCP gluster-data:1018->gluster-data:24009 (ESTABLISHED)
glusterfs 17526  root   12u  IPv4 205599  TCP gluster-data:1023->gluster-data:24009 (ESTABLISHED)
[root@mseas-data ~]# lsof | grep 24010
[root@mseas-data ~]# lsof | grep 24011
[root@mseas-data ~]#





Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread Eco Willson

Steve,

The simplest way to troubleshoot (assuming that the nodes are not in 
production) would be:


1) Unmount the volume from the clients
2) Stop gluster
3) `killall gluster{,d,fs,fsd}`
4) Start gluster again

Afterwards, try to telnet to the ports again; they would be expected to work.
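As a dry-run sketch of that sequence (the service name `glusterd` and the mount point `/gdata` are my guesses from the thread, not confirmed):

```shell
# Dry-run sketch of the restart sequence. run() only prints each command;
# drop the echo to execute for real (as root, with the nodes out of
# production). Service name and mount point are assumptions.
run() { echo "+ $*"; }

run umount /gdata                          # 1) unmount on each client
run service glusterd stop                  # 2) stop gluster
run killall glusterd glusterfs glusterfsd  # 3) kill any leftover processes
run service glusterd start                 # 4) start gluster again
```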

Thanks,

Eco




Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread Steve Postma

Eco, after stopping gluster and restarting, same results as before: telnet can
connect to 24007 but none of the other ports. I noticed one machine has a
process running that the other two do not. PID 22603 refers to --volfile-id
gdata.gluster-data.data and is only running on the one machine. Is this
correct?




[root@mseas-data ~]# ps -ef | grep gluster
root 22582     1  0 15:00 ?  00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
root 22603     1  0 15:00 ?  00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id gdata.gluster-data.data -p /var/lib/glusterd/vols/gdata/run/gluster-data-data.pid -S /tmp/e3eac7ce95e786a3d909b8fc65ed2059.socket --brick-name /data -l /var/log/glusterfs/bricks/data.log --xlator-option *-posix.glusterd-uuid=22f1102a-08e6-482d-ad23-d8e063cf32ed --brick-port 24009 --xlator-option gdata-server.listen-port=24009
root 22609     1  0 15:00 ?  00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /tmp/d5c892de43c28a1ee7481b780245b789.socket
root 22690 22511  0 15:01 pts/0  00:00:00 grep gluster



[root@nas-0-0 ~]# ps -ef | grep gluster
root 7943    1  3 14:43 ?  00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
root 7965    1  0 14:43 ?  00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /tmp/8f87e178e9707e4694ee7a2543c66db9.socket
root 7976 7898  0 14:43 pts/1  00:00:00 grep gluster
[root@nas-0-0 ~]#
[root@nas-0-1 ~]# ps -ef | grep gluster
root 7567    1  4 14:47 ?  00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
root 7589    1  0 14:47 ?  00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /tmp/6054da6605d9f9d1c1e99252f1d235a6.socket
root 7600 7521  0 14:47 pts/2  00:00:00 grep gluster


Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread David Coulson
I would be concerned about the connections in a SYN_SENT state. It would be
helpful if this were done with the -n flag, so no DNS lookups are done and we
could see the real IPs.
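To pull just the stuck connections out of saved `lsof -n` output, a small awk filter works (the sample file below is illustrative, adapted from the output quoted in this thread):

```shell
# Write a small sample of lsof-style output (illustrative lines based on
# the thread), then extract the src->dst field of every SYN_SENT entry,
# i.e. connections whose TCP handshake never completed.
cat > /tmp/lsof_sample.txt <<'EOF'
glusterfs 3536 root 16u IPv4 399743 0t0 TCP 10.1.1.10:1023->gluster-0-0:24007 (SYN_SENT)
glusterfs 3536 root 18u IPv4 143541 0t0 TCP 10.1.1.10:1022->gluster-data:24009 (ESTABLISHED)
EOF

awk '/SYN_SENT/ { for (i = 1; i <= NF; i++) if ($i ~ /->/) print $i }' /tmp/lsof_sample.txt
# prints: 10.1.1.10:1023->gluster-0-0:24007
```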




Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread David Coulson
 :1021 (ESTABLISHED)
 glusterfs 22603  root   15u  IPv4 670783  TCP 10.0.0.2:24009->10.1.1.11:1021 (ESTABLISHED)
 glusterfs 22609  root   18u  IPv4 665089  TCP 10.0.0.2:1022->10.0.0.2:24009 (ESTABLISHED)
 [root@mseas-data ~]# lsof -n | grep 24010
 [root@mseas-data ~]# lsof -n | grep 24011
 [root@mseas-data ~]#
 
 
 
 
 
 
 From: David Coulson [da...@davidcoulson.net]
 Sent: Wednesday, November 21, 2012 3:20 PM
 To: Steve Postma
 Cc: Eco Willson; gluster-users@gluster.org
 Subject: Re: [Gluster-users] FW: cant mount gluster volume
 
 I would be concerned about the connections in a SYN_SENT state. Would be
 helpful if this was done with the -n flag so no DNS and we could see the
 real IPs.
 
 On 11/21/12 2:49 PM, Steve Postma wrote:
 Eco,
 they all appear to be using 24007 and 24009, none of them are running on 
 24010 or 24011.
 Steve
 
 [root@nas-0-0mailto:root@nas-0-0 ~]# lsof | grep 24010
 [root@nas-0-0mailto:root@nas-0-0 ~]# lsof | grep 24011
 [root@nas-0-0mailto:root@nas-0-0 ~]# lsof | grep 24009
 glusterfs 3536 root 18u IPv4 143541 0t0 TCP 
 10.1.1.10:1022-gluster-data:24009 (ESTABLISHED)
 [root@nas-0-0mailto:root@nas-0-0 ~]# lsof | grep 24007
 glusterd 3515 root 6u IPv4 143469 0t0 TCP nas-0-0:24007-nas-0-0:1022 
 (ESTABLISHED)
 glusterd 3515 root 8u IPv4 77801 0t0 TCP *:24007 (LISTEN)
 glusterd 3515 root 12u IPv4 143805 0t0 TCP 
 10.1.1.10:1020-gluster-data:24007 (ESTABLISHED)
 glusterfs 3536 root 7u IPv4 143468 0t0 TCP nas-0-0:1022-nas-0-0:24007 
 (ESTABLISHED)
 glusterfs 3536 root 16u IPv4 399743 0t0 TCP 
 10.1.1.10:1023-gluster-0-0:24007 (SYN_SENT)
 glusterfs 3536 root 17u IPv4 399745 0t0 TCP 
 10.1.1.10:1021-gluster-0-1:24007 (SYN_SENT)
 
 
 
 [root@nas-0-1mailto:root@nas-0-1 ~]# lsof | grep 24007
 glusterd 3447 root 6u IPv4 77189 0t0 TCP nas-0-1:24007-nas-0-1:1021 
 (ESTABLISHED)
 glusterd 3447 root 8u IPv4 11540 0t0 TCP *:24007 (LISTEN)
 glusterd 3447 root 10u IPv4 317363 0t0 TCP 10.1.1.11:1022-gluster-0-0:24007 
 (SYN_SENT)
 glusterd 3447 root 12u IPv4 77499 0t0 TCP 10.1.1.11:1023-gluster-data:24007 
 (ESTABLISHED)
 glusterfs 3468 root 7u IPv4 77188 0t0 TCP nas-0-1:1021-nas-0-1:24007 
 (ESTABLISHED)
 glusterfs 3468 root 17u IPv4 317361 0t0 TCP 
 10.1.1.11:1019-gluster-0-1:24007 (SYN_SENT)
 [root@nas-0-1mailto:root@nas-0-1 ~]# lsof | grep 24009
 glusterfs 3468 root 18u IPv4 77259 0t0 TCP 
 10.1.1.11:1021-gluster-data:24009 (ESTABLISHED)
 [root@nas-0-1mailto:root@nas-0-1 ~]# lsof | grep 24010
 [root@nas-0-1mailto:root@nas-0-1 ~]# lsof | grep 24011
 
 glusterfs 4301 root 16u IPv4 586766 TCP 10.1.1.2:1021-gluster-0-0:24007 
 (SYN_SENT)
 glusterfs 4301 root 17u IPv4 586768 TCP 10.1.1.2:1020-gluster-0-1:24007 
 (SYN_SENT)
 glusterfs 17526 root 8u IPv4 205563 TCP 
 mseas-data.mit.edu:1015-mseas-data.mit.edu:24007 (ESTABLISHED)
 [root@mseas-datamailto:root@mseas-data ~]# lsof | grep 24009
 glusterfs 4008 root 10u IPv4 77692 TCP *:24009 (LISTEN)
 glusterfs 4008 root 13u IPv4 148473 TCP 
 gluster-data:24009-gluster-data:1018 (ESTABLISHED)
 glusterfs 4008 root 14u IPv4 82251 TCP gluster-data:24009-10.1.1.10:1022 
 (ESTABLISHED)
 glusterfs 4008 root 15u IPv4 82440 TCP gluster-data:24009-10.1.1.11:1021 
 (ESTABLISHED)
 glusterfs 4008 root 16u IPv4 205600 TCP 
 gluster-data:24009-gluster-data:1023 (ESTABLISHED)
 glusterfs 4008 root 17u IPv4 218671 TCP 10.1.1.2:24009-10.1.1.1:1018 
 (ESTABLISHED)
 glusterfs 4301 root 18u IPv4 148472 TCP 
 gluster-data:1018-gluster-data:24009 (ESTABLISHED)
 glusterfs 17526 root 12u IPv4 205599 TCP 
 gluster-data:1023-gluster-data:24009 (ESTABLISHED)
 [root@mseas-datamailto:root@mseas-data ~]# lsof | grep 24010
 [root@mseas-datamailto:root@mseas-data ~]# lsof | grep 24011
 [root@mseas-datamailto:root@mseas-data ~]#
 
 
 
 
 From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of Steve Postma [spos...@ztechnet.com]
 Sent: Wednesday, November 21, 2012 10:19 AM
 To: Eco Willson; gluster-users@gluster.org
 Subject: Re: [Gluster-users] FW: cant mount gluster volume
 
 You're right, Eco.
 I am only able to telnet on port 24007; ports 24009, 24010 and 24011 are all 
 connection refused. Iptables is not running on any of the machines.
 
 
 mseas-data: 24007 and 24009 open; 24010 and 24011 closed
 nas-0-0: 24007 open; 24009, 24010 and 24011 closed
 nas-0-1: 24007 open; 24009, 24010 and 24011 closed
 
 
 
 Steve
 
 From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of Eco Willson [ewill...@redhat.com

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread Steve Postma
Wow, thank you David and Eco for your help!

10.0.0.10 and 10.0.0.11 were fibre interfaces on two of the bricks. Recently it 
was determined that these interfaces were bad hardware, but IP traffic was still 
being routed to those addresses rather than to working hardware. After updates 
and reboots the cards were dropped from the OS. I missed this.

Changed host files on the 3 machines and can now mount and see the entire 
volume.

Thank You again for your help!

Steve



From: David Coulson [da...@davidcoulson.net]
Sent: Wednesday, November 21, 2012 3:46 PM
To: Steve Postma
Cc: Eco Willson; gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

What are 10.0.0.10 and 10.0.0.11? Seems those are part of the problem. Can you 
set rp_filter to 0 on all your interfaces?
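For reference, rp_filter is toggled through sysctl. A minimal sketch that only prints the commands rather than applying them (the interface list, including eth0, is an assumption; pipe the output to `sh` as root to apply):

```shell
# Print (rather than apply) the sysctl commands that disable reverse-path
# filtering; the interface names here are assumptions -- adjust per host.
gen_rp_filter_cmds() {
    for iface in all default eth0; do
        echo "sysctl -w net.ipv4.conf.${iface}.rp_filter=0"
    done
}

gen_rp_filter_cmds
```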

Sent from my iPad

On Nov 21, 2012, at 3:33 PM, Steve Postma 
spos...@ztechnet.com wrote:

 Hi David, as requested,

 Thanks,
 Steve


 [root@nas-0-0 ~]# lsof -n | grep 24007
 glusterd 7943 root 6u IPv4 468358 0t0 TCP 127.0.0.1:24007->127.0.0.1:1021 
 (ESTABLISHED)
 glusterd 7943 root 8u IPv4 402704 0t0 TCP *:24007 (LISTEN)
 glusterd 7943 root 10u IPv4 472454 0t0 TCP 10.1.1.10:1022->10.0.0.11:24007 
 (SYN_SENT)
 glusterd 7943 root 12u IPv4 468316 0t0 TCP 10.1.1.10:1023->10.0.0.2:24007 
 (ESTABLISHED)
 glusterfs 7965 root 7u IPv4 468357 0t0 TCP 127.0.0.1:1021->127.0.0.1:24007 
 (ESTABLISHED)
 glusterfs 7965 root 16u IPv4 472450 0t0 TCP 10.1.1.10:1020->10.0.0.10:24007 
 (SYN_SENT)
 [root@nas-0-0 ~]# lsof -n | grep 24009
 glusterfs 7965 root 18u IPv4 468428 0t0 TCP 10.1.1.10:1021->10.0.0.2:24009 
 (ESTABLISHED)
 [root@nas-0-0 ~]# lsof -n | grep 24010
 [root@nas-0-0 ~]# lsof -n | grep 24011
 [root@nas-0-0 ~]#


 [root@nas-0-1 ~]# lsof -n | grep 24007
 glusterd 7567 root 6u IPv4 388277 0t0 TCP 127.0.0.1:24007->127.0.0.1:1021 
 (ESTABLISHED)
 glusterd 7567 root 8u IPv4 322628 0t0 TCP *:24007 (LISTEN)
 glusterd 7567 root 12u IPv4 388240 0t0 TCP 10.1.1.11:1023->10.0.0.2:24007 
 (ESTABLISHED)
 glusterfs 7589 root 7u IPv4 388276 0t0 TCP 127.0.0.1:1021->127.0.0.1:24007 
 (ESTABLISHED)
 glusterfs 7589 root 17u IPv4 392099 0t0 TCP 10.1.1.11:1019->10.0.0.11:24007 
 (SYN_SENT)
 [root@nas-0-1 ~]# lsof -n | grep 24009
 glusterfs 7589 root 18u IPv4 388347 0t0 TCP 10.1.1.11:1021->10.0.0.2:24009 
 (ESTABLISHED)
 [root@nas-0-1 ~]# lsof -n | grep 24010
 [root@nas-0-1 ~]# lsof -n | grep 24011
 [root@nas-0-1 ~]#



 [root@mseas-data ~]# lsof -n | grep 24007
 glusterd 22582 root 6u IPv4 664645 TCP 127.0.0.1:24007->127.0.0.1:1021 
 (ESTABLISHED)
 glusterd 22582 root 8u IPv4 598942 TCP *:24007 (LISTEN)
 glusterd 22582 root 10u IPv4 664647 TCP 127.0.0.1:24007->127.0.0.1:1020 
 (ESTABLISHED)
 glusterd 22582 root 15u IPv4 664723 TCP 18.38.1.23:24007->10.1.255.240:1023 
 (ESTABLISHED)
 glusterd 22582 root 16u IPv4 664725 TCP 18.38.1.23:24007->10.1.255.175:1023 
 (ESTABLISHED)
 glusterd 22582 root 17u IPv4 664726 TCP 18.38.1.23:24007->10.1.255.160:1023 
 (ESTABLISHED)
 glusterd 22582 root 18u IPv4 664727 TCP 18.38.1.23:24007->10.1.255.189:1023 
 (ESTABLISHED)
 glusterd 22582 root 19u IPv4 664728 TCP 18.38.1.23:24007->10.1.255.188:1023 
 (ESTABLISHED)
 glusterd 22582 root 20u IPv4 664729 TCP 18.38.1.23:24007->10.1.255.203:1023 
 (ESTABLISHED)
 glusterd 22582 root 21u IPv4 664731 TCP 18.38.1.23:24007->10.1.255.197:1023 
 (ESTABLISHED)
 glusterd 22582 root 22u IPv4 664733 TCP 18.38.1.23:24007->10.1.255.157:1023 
 (ESTABLISHED)
 glusterd 22582 root 23u IPv4 664734 TCP 18.38.1.23:24007->10.1.255.186:1023 
 (ESTABLISHED)
 glusterd 22582 root 24u IPv4 664735 TCP 18.38.1.23:24007->10.1.255.130:1023 
 (ESTABLISHED)
 glusterd 22582 root 25u IPv4 664737 TCP 18.38.1.23:24007->10.1.255.211:1023 
 (ESTABLISHED)
 glusterd 22582 root 26u IPv4 664738 TCP 18.38.1.23:24007->10.1.255.150:1023 
 (ESTABLISHED)
 glusterd 22582 root 27u IPv4 664740 TCP 18.38.1.23:24007->10.1.255.168:1023 
 (ESTABLISHED)
 glusterd 22582 root 28u IPv4 664741 TCP 18.38.1.23:24007->10.1.255.215:1023 
 (ESTABLISHED)
 glusterd 22582 root 29u IPv4 664742 TCP 18.38.1.23:24007->10.1.255.208:1023 
 (ESTABLISHED)
 glusterd 22582 root 30u IPv4 664743 TCP 18.38.1.23:24007->10.1.255.164:1023 
 (ESTABLISHED)
 glusterd 22582 root 31u IPv4 664744 TCP 18.38.1.23:24007->10.1.255.156:1023 
 (ESTABLISHED)
 glusterd 22582 root 32u IPv4 664745 TCP 18.38.1.23:24007->10.1.255.236:1023 
 (ESTABLISHED)
 glusterd 22582 root 33u IPv4 664746 TCP 18.38.1.23:24007->10.1.255.193:1023 
 (ESTABLISHED)
 glusterd 22582 root 34u IPv4 664747 TCP 18.38.1.23:24007->10.1.255.217:1023 
 (ESTABLISHED)
 glusterd 22582 root 35u IPv4 664749 TCP 18.38.1.23:24007->10.1.255.228:1023 
 (ESTABLISHED)
 glusterd 22582 root 36u IPv4 664750 TCP 18.38.1.23:24007->10.1.255.202:1023 
 (ESTABLISHED)
 glusterd 22582 root 37u

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread Eco Willson

Steve,

On 11/21/2012 12:13 PM, Steve Postma wrote:

Eco, after stopping Gluster and restarting, same results as before: telnet can 
connect to 24007 but to none of the other ports. I noticed one machine has a 
process running that the other two do not. PID 22603 refers to --volfile-id 
gdata.gluster-data.data and is only running on the one machine. Is this 
correct?
If you have a client mount on this machine, then this is expected. If 
24009 is available then that is fine; one port is consumed per brick, but 
in instances where gluster has restarted for some reason, the port can 
increment. Is the df -h output on the client mount correct now, or still 
showing 18GB?


- Eco




[root@mseas-data ~]# ps -ef | grep gluster
root 22582 1  0 15:00 ?00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root 22603 1  0 15:00 ?00:00:00 /usr/sbin/glusterfsd -s 
localhost --volfile-id gdata.gluster-data.data -p 
/var/lib/glusterd/vols/gdata/run/gluster-data-data.pid -S 
/tmp/e3eac7ce95e786a3d909b8fc65ed2059.socket --brick-name /data -l 
/var/log/glusterfs/bricks/data.log --xlator-option 
*-posix.glusterd-uuid=22f1102a-08e6-482d-ad23-d8e063cf32ed --brick-port 24009 
--xlator-option gdata-server.listen-port=24009
root 22609 1  0 15:00 ?00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/d5c892de43c28a1ee7481b780245b789.socket
root 22690 22511  0 15:01 pts/000:00:00 grep gluster



[root@nas-0-0 ~]# ps -ef | grep gluster
root  7943 1  3 14:43 ?00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root  7965 1  0 14:43 ?00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/8f87e178e9707e4694ee7a2543c66db9.socket
root  7976  7898  0 14:43 pts/100:00:00 grep gluster
[root@nas-0-0 ~]#
[root@nas-0-1 ~]# ps -ef | grep gluster
root  7567 1  4 14:47 ?00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root  7589 1  0 14:47 ?00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/6054da6605d9f9d1c1e99252f1d235a6.socket
root  7600  7521  0 14:47 pts/200:00:00 grep gluster

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Wednesday, November 21, 2012 2:52 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The simplest way to troubleshoot (assuming that the nodes are not in
production) would be:

1) unmounting from the clients
2) stopping gluster
3) `killall gluster{,d,fs,fsd}`
4) Start gluster again

Try to telnet to the ports again afterwards; this time they would be expected to work.
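The telnet check above can be scripted. A minimal sketch using bash's built-in /dev/tcp (hostnames and ports taken from this thread; adjust to your environment):

```shell
# Report open/closed for each management/brick port on each gluster node.
# Uses bash's /dev/tcp pseudo-device; a failed connect prints "closed".
check_port() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo closed
}

for host in mseas-data nas-0-0 nas-0-1; do
    for port in 24007 24009 24010 24011; do
        echo "$host:$port $(check_port "$host" "$port")"
    done
done
```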

Thanks,

Eco



On 11/21/2012 07:19 AM, Steve Postma wrote:

You're right, Eco.
I am only able to telnet on port 24007; ports 24009, 24010 and 24011 are all 
connection refused. Iptables is not running on any of the machines.


mseas-data: 24007 and 24009 open; 24010 and 24011 closed
nas-0-0: 24007 open; 24009, 24010 and 24011 closed
nas-0-1: 24007 open; 24009, 24010 and 24011 closed



Steve

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 6:32 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,
On 11/20/2012 02:43 PM, Steve Postma wrote:

Hi Eco,
I believe you are asking that I run

find /mount/glusterfs > /dev/null

only? That should take care of the issue?

Meaning, run a recursive find against the client mount point
(/mount/glusterfs is used as an example in the docs). This should solve
the specific issue of the files not being visible.
However, the issue of the disk space discrepancy is different. From the
df output, the only filesystem with 18GB is / on the mseas-data node, I
assume this is where you are mounting from?
If so, then the issue goes back to one of connectivity, the gluster
bricks most likely are still not being connected to, which may actually
be the root cause of both problems.

Can you confirm that iptables is off on all hosts (and from any client
you would connect from)? I had seen your previous tests with telnet,
was this done from and to all hosts from the client machine?
Make sure that at a minimum you can hit 24007, 24009, 24010 and 24011.
This will test the management port and the expected initial port for
each of the bricks in the volume.


Thanks,

Eco


Thanks for your time,
Steve


From: 
gluster-users-boun

[Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Steve Postma
 I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. 
I was able to overcome those issues and mount the export on my node. Thanks to 
all for your help.

However, I can only view the portion of files that is directly stored on the 
one brick in the cluster. The other bricks do not seem to be replicating, though 
gluster reports the volume as up.

[root@mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



The brick we are attaching to has this in the fstab file.
/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0


but mount -a does not appear to do anything.
I have to run mount -t xfs  /dev/mapper/the_raid-lv_data /data
manually to mount it.
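One detail worth flagging in the fstab entry above: `noauto` tells `mount -a` to skip that line, which matches the behavior described (nothing happens on `mount -a`, while an explicit `mount -t xfs` works). A sketch of the entry with `noauto` removed, assuming the intent is for the brick to mount automatically:

```
/dev/mapper/the_raid-lv_data  /data  xfs  quota  1 0
```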



Any help with troubleshooting why we are only seeing data from 1 brick of 3 
would be appreciated,
Thanks,
Steve Postma








From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: gluster-users@gluster.org
Subject: cant mount gluster volume

 I am still unable to mount a new 3.3.1 glusterfs install. I have tried from 
one of the actual machines in the cluster to itself, as well as from various 
other clients. They all seem to be failing in the same part of the process.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,

The volume is a pure distribute:


Type: Distribute

In order to have files replicate, you need
1) to have a number of bricks that is a multiple of the replica count, 
e.g., for your three node configuration, you would need two bricks per 
node to set up replica two.  You could set up replica 3, but you will 
take a performance hit in doing so.

2) to add a replica count during the volume creation, e.g.
`gluster volume create <vol name> replica 2 server1:/export server2:/export`
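As a sketch of the brick-count rule above (server and brick names here are made up, not from this thread), a small wrapper that refuses to emit a create command unless the brick count is a multiple of the replica count:

```shell
# Emit a `gluster volume create` command only when the brick count is a
# multiple of the replica count; all names below are hypothetical examples.
build_create_cmd() {
    replica=$1; shift
    if [ $(( $# % replica )) -ne 0 ]; then
        echo "error: $# bricks is not a multiple of replica $replica" >&2
        return 1
    fi
    echo "gluster volume create gvol replica $replica $*"
}

# replica 2 with two bricks per node across two nodes (4 bricks total)
build_create_cmd 2 \
    server1:/export/brick1 server2:/export/brick1 \
    server1:/export/brick2 server2:/export/brick2
```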

From the volume info you provided, the export directories are different 
for all three nodes:


Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data
 

Which node are you trying to mount to /data?  If it is not the 
gluster-data node, then it will fail if there is not a /data directory.  
In this case, it is a good thing, since mounting to /data on gluster-0-0 
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume 
mount and the gluster mount point.  In this case, you are mounting the 
brick.
In order to see all the files, you would need to mount the volume with 
the native client, or NFS.

For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>
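If these mounts should persist across reboots, the equivalent /etc/fstab entries would look roughly like this (the `/mnt/gluster` mount point and the `_netdev` option are assumptions, not from the thread; use one line or the other, not both):

```
gluster-data:/gdata  /mnt/gluster  glusterfs  defaults,_netdev  0 0
gluster-data:/gdata  /mnt/gluster  nfs        vers=3,_netdev    0 0
```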


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:

  I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 
installed.

I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. 
I was able to overcome those issues and mount the export on my node. Thanks to 
all for your help.

However, I can only view the portion of files that is directly stored on the 
one brick in the cluster. The other bricks do not seem to be replicating, though 
gluster reports the volume as up.

[root@mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



The brick we are attaching to has this in the fstab file.
/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0


but mount -a does not appear to do anything.
I have to run mount -t xfs  /dev/mapper/the_raid-lv_data /data
manually to mount it.



Any help with troubleshooting why we are only seeing data from 1 brick of 3 
would be appreciated,
Thanks,
Steve Postma








From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: gluster-users@gluster.org
Subject: cant mount gluster volume

  I am still unable to mount a new 3.3.1 glusterfs install. I have tried from 
one of the actual machines in the cluster to itself, as well as from various 
other clients. They all seem to be failing in the same part of the process.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Steve Postma
Hi Eco, thanks for your help.

If I run on brick 1:
mount -t glusterfs gluster-data:/gdata /gdata

it mounts but appears as an 18 GB partition with nothing in it.


I can mount it from the client, but again, there is nothing in it.



Before the upgrade this was a 50 TB gluster volume. Was that volume information 
lost with the upgrade?

The file structure appears intact on each brick.


Steve



From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 1:29 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The volume is a pure distribute:

 Type: Distribute
In order to have files replicate, you need
1) to have a number of bricks that is a multiple of the replica count,
e.g., for your three node configuration, you would need two bricks per
node to set up replica two. You could set up replica 3, but you will
take a performance hit in doing so.
2) to add a replica count during the volume creation, e.g.
`gluster volume create <vol name> replica 2 server1:/export server2:/export`

From the volume info you provided, the export directories are different
for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data


Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume
mount and the gluster mount point. In this case, you are mounting the
brick.
In order to see all the files, you would need to mount the volume with
the native client, or NFS.
For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:
 I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 
 installed.

 I had some mounting issues yesterday, from a rocks 6.2 install to the 
 cluster. I was able to overcome those issues and mount the export on my node. 
 Thanks to all for your help.

 However, I can only view the portion of files that is directly stored on the 
 one brick in the cluster. The other bricks do not seem to be replicating, though 
 gluster reports the volume as up.

 [root@mseas-data ~]# gluster volume info
 Volume Name: gdata
 Type: Distribute
 Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
 Status: Started
 Number of Bricks: 3
 Transport-type: tcp
 Bricks:
 Brick1: gluster-0-0:/mseas-data-0-0
 Brick2: gluster-0-1:/mseas-data-0-1
 Brick3: gluster-data:/data



 The brick we are attaching to has this in the fstab file.
 /dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0


 but mount -a does not appear to do anything.
 I have to run mount -t xfs /dev/mapper/the_raid-lv_data /data
 manually to mount it.



 Any help with troubleshooting why we are only seeing data from 1 brick of 3 
 would be appreciated,
 Thanks,
 Steve Postma







 
 From: Steve Postma
 Sent: Monday, November 19, 2012 3:29 PM
 To: gluster-users@gluster.org
 Subject: cant mount gluster volume

 I am still unable to mount a new 3.3.1 glusterfs install. I have tried from 
 one of the actual machines in the cluster to itself, as well as from various 
 other clients. They all seem to be failing in the same part of the process.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,




Does df -h show the expected directories on each server, and do they 
show the expected size?


If the file


On 11/20/2012 11:09 AM, Steve Postma wrote:

Hi Eco, thanks for your help.

If I run on brick 1:
mount -t glusterfs gluster-data:/gdata /gdata

it mounts but appears as an 18 GB partition with nothing in it
To confirm, are the export directories mounted properly on all three 
servers?
Does df -h show the expected directories on each server, and do they 
show the expected size?

Does gluster volume info show the same output on all three servers?


I can mount it from the client, but again, there is nothing in it.



Before upgrade this was a 50 TB gluster volume. Was that volume information 
lost with upgrade?
Do you have the old vol files from before the upgrade?  It would be good 
to see them to make sure the volume got recreated properly.

The file structure appears intact on each brick.
As long as the file structure is intact, you will be able to recreate 
the volume although it may require a potentially painful rsync in the 
worst case.


- Eco





Steve



From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 1:29 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The volume is a pure distribute:


Type: Distribute

In order to have files replicate, you need
1) to have a number of bricks that is a multiple of the replica count,
e.g., for your three node configuration, you would need two bricks per
node to set up replica two. You could set up replica 3, but you will
take a performance hit in doing so.
2) to add a replica count during the volume creation, e.g.
`gluster volume create <vol name> replica 2 server1:/export server2:/export`

 From the volume info you provided, the export directories are different
for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data


Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume
mount and the gluster mount point. In this case, you are mounting the
brick.
In order to see all the files, you would need to mount the volume with
the native client, or NFS.
For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:

I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. 
I was able to overcome those issues and mount the export on my node. Thanks to 
all for your help.

However, I can only view the portion of files that is directly stored on the 
one brick in the cluster. The other bricks do not seem to be replicating, though 
gluster reports the volume as up.

[root@mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



The brick we are attaching to has this in the fstab file.
/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0


but mount -a does not appear to do anything.
I have to run mount -t xfs /dev/mapper/the_raid-lv_data /data
manually to mount it.



Any help with troubleshooting why we are only seeing data from 1 brick of 3 
would be appreciated,
Thanks,
Steve Postma








From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: gluster-users@gluster.org
Subject: cant mount gluster volume

I am still unable to mount a new 3.3.1 glusterfs install. I have tried from one 
of the actual machines in the cluster to itself, as well as from various other 
clients. They all seem to be failing in the same part of the process.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Steve Postma
They do show the expected size. I have a backup of /etc/glusterd and /etc/glusterfs 
from before the upgrade.


It's interesting that gluster volume info shows the correct path for each 
machine.

These are the correct mountpoints on each machine, and from each machine I can 
see the files and structure.


[root@mseas-data data]# gluster volume info

Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data




From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 3:02 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,




Does df -h show the expected directories on each server, and do they
show the expected size?

If the file


On 11/20/2012 11:09 AM, Steve Postma wrote:
 Hi Eco, thanks for your help.

 If I run on brick 1:
 mount -t glusterfs gluster-data:/gdata /gdata

 it mounts but appears as an 18 GB partition with nothing in it
To confirm, are the export directories mounted properly on all three
servers?
Does df -h show the expected directories on each server, and do they
show the expected size?
Does gluster volume info show the same output on all three servers?

 I can mount it from the client, but again, there is nothing in it.



 Before upgrade this was a 50 TB gluster volume. Was that volume information 
 lost with upgrade?
Do you have the old vol files from before the upgrade? It would be good
to see them to make sure the volume got recreated properly.
 The file structure appears intact on each brick.
As long as the file structure is intact, you will be able to recreate
the volume although it may require a potentially painful rsync in the
worst case.

- Eco




 Steve


 
 From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of Eco Willson [ewill...@redhat.com]
 Sent: Tuesday, November 20, 2012 1:29 PM
 To: gluster-users@gluster.org
 Subject: Re: [Gluster-users] FW: cant mount gluster volume

 Steve,

 The volume is a pure distribute:

 Type: Distribute
 In order to have files replicate, you need
 1) to have a number of bricks that is a multiple of the replica count,
 e.g., for your three node configuration, you would need two bricks per
 node to set up replica two. You could set up replica 3, but you will
 take a performance hit in doing so.
 2) to add a replica count during the volume creation, e.g.
 `gluster volume create <vol name> replica 2 server1:/export server2:/export`

 From the volume info you provided, the export directories are different
 for all three nodes:

 Brick1: gluster-0-0:/mseas-data-0-0
 Brick2: gluster-0-1:/mseas-data-0-1
 Brick3: gluster-data:/data


 Which node are you trying to mount to /data? If it is not the
 gluster-data node, then it will fail if there is not a /data directory.
 In this case, it is a good thing, since mounting to /data on gluster-0-0
 or gluster-0-1 would not accomplish what you need.
 To clarify, there is a distinction to be made between the export volume
 mount and the gluster mount point. In this case, you are mounting the
 brick.
 In order to see all the files, you would need to mount the volume with
 the native client, or NFS.
 For the native client:
 mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
 For NFS:
 mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>


 Thanks,

 Eco
 On 11/20/2012 09:42 AM, Steve Postma wrote:
 I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 
 installed.

 I had some mounting issues yesterday, from a rocks 6.2 install to the 
 cluster. I was able to overcome those issues and mount the export on my 
 node. Thanks to all for your help.

 However, I can only view the portion of files that is directly stored on the 
 one brick in the cluster. The other bricks do not seem to be replicating, 
 though gluster reports the volume as up.

 [root@mseas-data ~]# gluster volume info
 Volume Name: gdata
 Type: Distribute
 Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
 Status: Started
 Number of Bricks: 3
 Transport-type: tcp
 Bricks:
 Brick1: gluster-0-0:/mseas-data-0-0
 Brick2: gluster-0-1:/mseas-data-0-1
 Brick3: gluster-data:/data



 The brick we are attaching to has this in the fstab file.
 /dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0


 but mount -a does not appear to do anything.
 I have to run mount -t xfs /dev/mapper/the_raid-lv_data /data
 manually to mount it.



 Any help with troubleshooting why we are only seeing data from 1 brick of 3 
 would

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,



On 11/20/2012 12:03 PM, Steve Postma wrote:

They do show the expected size. I have a backup of /etc/glusterd and /etc/glusterfs 
from before the upgrade.
Can we see the vol file from the 2.x install and the output of df -h for 
each of the bricks?


It's interesting that gluster volume info shows the correct path for each 
machine.

These are the correct mountpoints on each machine, and from each machine I can 
see the files and structure.
If the volume was created in a different order than before, then it is 
expected you would be able to see the files only from the backend 
directories and not from the client mount.
If this is the case, recreating the volume in the correct order should 
show the files from the mount.
If the volume was recreated properly, make sure you have followed the 
upgrade steps to go from versions prior to 3.1:

http://www.gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide

This would explain why the files can't be viewed from the client, but 
the size discrepancy isn't expected if we see the expected output from 
df for the bricks.





[root@mseas-data data]# gluster volume info

Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data




From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 3:02 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,




Does df -h show the expected directories on each server, and do they
show the expected size?

If the file


On 11/20/2012 11:09 AM, Steve Postma wrote:

Hi Eco, thanks for your help.

If I run on brick 1:
mount -t glusterfs gluster-data:/gdata /gdata

it mounts but appears as an 18 GB partition with nothing in it

To confirm, are the export directories mounted properly on all three
servers?
Does df -h show the expected directories on each server, and do they
show the expected size?
Does gluster volume info show the same output on all three servers?

I can mount it from the client, but again, there is nothing in it.



Before the upgrade this was a 50 TB gluster volume. Was that volume information 
lost with the upgrade?

Do you have the old vol files from before the upgrade? It would be good
to see them to make sure the volume got recreated properly.

The file structure appears intact on each brick.

As long as the file structure is intact, you will be able to recreate
the volume, although it may require a potentially painful rsync in the
worst case.

- Eco




Steve



From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of 
Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 1:29 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The volume is a pure distribute:


Type: Distribute

In order to have files replicate, you need
1) a number of bricks that is a multiple of the replica count,
e.g., for your three node configuration, you would need two bricks per
node to set up replica 2. You could set up replica 3, but you will
take a performance hit in doing so.
2) a replica count specified during volume creation, e.g.
`gluster volume create <volname> replica 2 server1:/export server2:/export`

 From the volume info you provided, the export directories are different
for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data


Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume
mount and the gluster mount point. In this case, you are mounting the
brick.
In order to see all the files, you would need to mount the volume with
the native client, or NFS.
For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:

I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. 
I was able to overcome those issues and mount the export on my node. Thanks to 
all for your help.

However, I can only view the portion of files that is directly stored on the 
one brick in the cluster. The other bricks do

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Steve Postma
 [root@mseas-data gdata]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda1  18G  6.6G  9.7G  41% /
/dev/sda6  77G   49G   25G  67% /scratch
/dev/sda3  18G  3.8G   13G  24% /var
/dev/sda2  18G  173M   16G   2% /tmp
tmpfs 3.9G 0  3.9G   0% /dev/shm
/dev/mapper/the_raid-lv_home
  3.0T  2.2T  628G  79% /home
glusterfs#mseas-data:/gdata
   15T   14T  606G  96% /gdata


[root@nas-0-0 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda3 137G   33G   97G  26% /
/dev/sda1 190M   24M  157M  14% /boot
tmpfs 2.0G 0  2.0G   0% /dev/shm
/dev/sdb1  21T   19T  1.5T  93% /mseas-data-0-0

[root@nas-0-1 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda3 137G   34G   97G  26% /
/dev/sda1 190M   24M  157M  14% /boot
tmpfs 2.0G 0  2.0G   0% /dev/shm
/dev/sdb1  21T   19T  1.3T  94% /mseas-data-0-1



 cat of /etc/glusterfs/glusterd.vol from backup

[root@mseas-data glusterd]# cat /root/mseas_backup/etc/glusterfs/glusterd.vol
volume management
type mgmt/glusterd
option working-directory /etc/glusterd
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
end-volume




The article you referenced is looking for the words "glusterfs-volgen" in a vol 
file. I have used locate and grep, but can find no such entry in any .vol files.


Thanks





From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 4:03 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,



On 11/20/2012 12:03 PM, Steve Postma wrote:
 They do show the expected size. I have a backup of /etc/glusterd and 
 /etc/glusterfs from before the upgrade.
Can we see the vol file from the 2.x install and the output of df -h for
each of the bricks?


Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,

On 11/20/2012 01:32 PM, Steve Postma wrote:

  [root@mseas-data gdata]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda1  18G  6.6G  9.7G  41% /
/dev/sda6  77G   49G   25G  67% /scratch
/dev/sda3  18G  3.8G   13G  24% /var
/dev/sda2  18G  173M   16G   2% /tmp
tmpfs 3.9G 0  3.9G   0% /dev/shm
/dev/mapper/the_raid-lv_home
   3.0T  2.2T  628G  79% /home
glusterfs#mseas-data:/gdata
15T   14T  606G  96% /gdata


[root@nas-0-0 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda3 137G   33G   97G  26% /
/dev/sda1 190M   24M  157M  14% /boot
tmpfs 2.0G 0  2.0G   0% /dev/shm
/dev/sdb1  21T   19T  1.5T  93% /mseas-data-0-0

[root@nas-0-1 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda3 137G   34G   97G  26% /
/dev/sda1 190M   24M  157M  14% /boot
tmpfs 2.0G 0  2.0G   0% /dev/shm
/dev/sdb1  21T   19T  1.3T  94% /mseas-data-0-1

Thanks for confirming.



  cat of /etc/glusterfs/glusterd.vol from backup

[root@mseas-data glusterd]# cat /root/mseas_backup/etc/glusterfs/glusterd.vol
volume management
 type mgmt/glusterd
 option working-directory /etc/glusterd
 option transport-type socket,rdma
 option transport.socket.keepalive-time 10
 option transport.socket.keepalive-interval 2
end-volume
The vol file for 2.x would be in /etc/glusterfs/<volume name>.vol, I 
believe. It should contain an entry similar to this output for each of 
the servers toward the top of the file.




The article you referenced is looking for the words "glusterfs-volgen" in a vol 
file. I have used locate and grep, but can find no such entry in any .vol files.
This would not appear if the glusterfs-volgen command wasn't used during 
creation.  The main consideration is to ensure that you run the command 
in step 5:


find /mount/glusterfs > /dev/null

- Eco



Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Steve Postma
Hi Eco,
I believe you are asking that I run

find /mount/glusterfs > /dev/null

only? That should take care of the issue?

Thanks for your time,
Steve


From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 5:39 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume


Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Patrick Haley


Hi Eco,
I believe you are asking that I run

find /mount/glusterfs > /dev/null

only? That should take care of the issue?

Thanks for your time,
Steve
__

Thanks for all the help you have been giving Steve.  This has
been invaluable to us.  I hate to impose, but Steve has left for
the day, will you be available to answer questions tomorrow
afternoon (EST) too?  If not, is there someone else we should
direct our question to?

Thanks again.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  pha...@mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,
On 11/20/2012 02:43 PM, Steve Postma wrote:

Hi Eco,
I believe you are asking that I run

find /mount/glusterfs > /dev/null

only? That should take care of the issue?
Meaning, run a recursive find against the client mount point 
(/mount/glusterfs is used as an example in the docs).  This should solve 
the specific issue of the files not being visible.
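The recursive find can be rehearsed on a scratch directory before running it against the real client mount; a minimal sketch, using a temporary directory as a stand-in for the actual mount point (the path /mnt/glusterfs is only an example name):

```shell
# Stand-in for the glusterfs client mount point (e.g. /mnt/glusterfs);
# a temporary directory is used so the sketch is safe to run anywhere.
mnt=$(mktemp -d)
mkdir -p "$mnt/dir1/dir2"
touch "$mnt/dir1/dir2/file"

# find walks and stats every path under the directory; on a gluster
# client mount this forces a lookup of each file, which is what makes
# previously invisible entries appear.
find "$mnt" > /dev/null && echo "recursive lookup finished"
```

Against the real volume the same step is just `find <client mount point> > /dev/null` run once after mounting.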
However, the issue of the disk space discrepancy is different.  From the 
df output, the only filesystem with 18GB is / on the mseas-data node; I 
assume this is where you are mounting from?
If so, then the issue goes back to one of connectivity: the gluster 
bricks most likely are still not being connected to, which may actually 
be the root cause of both problems.


Can you confirm that iptables is off on all hosts (and from any client 
you would connect from)?  I had seen your previous tests with telnet, 
was this done from and to all hosts from the client machine?
Make sure that at a minimum you can hit 24007, 24009, 24010 and 24011.  
This will test the management port and the expected initial port for 
each of the bricks in the volume.
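That check can be scripted instead of using telnet by hand; a sketch using the hostnames from this thread and bash's /dev/tcp pseudo-device (an assumption that the client shell is bash with coreutils timeout available):

```shell
# Report whether a TCP port on a host accepts connections.
check_port() {
  local host=$1 port=$2
  # /dev/tcp/<host>/<port> makes bash attempt a TCP connect; a 3 second
  # timeout keeps filtered (silently dropped) ports from hanging the loop.
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}

# Management port plus the expected initial brick ports for this volume.
for host in mseas-data nas-0-0 nas-0-1; do
  for port in 24007 24009 24010 24011; do
    check_port "$host" "$port"
  done
done
```

Run it from the client machine and from each server; with iptables off, any port that still reports closed points at a brick process that is not listening rather than at a firewall.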



Thanks,

Eco

