Steve,


On 11/20/2012 12:03 PM, Steve Postma wrote:
They do show the expected size. I have a backup of /etc/glusterd and /etc/glusterfs
from before the upgrade.
Can we see the vol file from the 2.x install and the output of df -h for each of the bricks?

It's interesting that "gluster volume info" shows the correct path for each
machine.

These are the correct mountpoints on each machine, and from each machine I can 
see the files and structure.
If the volume was created in a different order than before, then it is expected that you would be able to see the files only from the backend directories and not from the client mount. If this is the case, recreating the volume in the correct order should show the files from the mount.

If the volume was recreated properly, make sure you have followed the upgrade steps for versions prior to 3.1:
http://www.gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide

This would explain why the files can't be viewed from the client, but the size discrepancy isn't expected if we see the expected output from df for the bricks.
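If the brick order does turn out to be wrong, a minimal sketch of recreating the volume might look like the following. This assumes the original order can be confirmed from the backed-up 2.x vol file, and relies on the fact that deleting and recreating a plain distribute volume does not touch the data sitting in the brick directories:

```shell
# Hypothetical recreation in the original brick order -- verify the order
# against the old vol file first, and unmount all clients before starting.
gluster volume stop gdata
gluster volume delete gdata
gluster volume create gdata \
    gluster-0-0:/mseas-data-0-0 \
    gluster-0-1:/mseas-data-0-1 \
    gluster-data:/data
gluster volume start gdata
```

These are cluster-side admin commands, so treat them as a sketch to adapt, not a script to paste.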



[root@mseas-data data]# gluster volume info

Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



________________________________
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 3:02 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,




Does df -h show the expected directories on each server, and do they
show the expected size?

If the file


On 11/20/2012 11:09 AM, Steve Postma wrote:
Hi Eco, thanks for your help.

If I run on brick 1:
mount -t glusterfs gluster-data:/gdata /gdata

it mounts but appears as a 18 GB partition with nothing in it
To confirm, are the export directories mounted properly on all three
servers?
Does df -h show the expected directories on each server, and do they
show the expected size?
Does gluster volume info show the same output on all three servers?
I can mount it from the client, but again, there is nothing in it.



Before the upgrade this was a 50 TB gluster volume. Was that volume information
lost with the upgrade?
Do you have the old vol files from before the upgrade? It would be good
to see them to make sure the volume got recreated properly.
The file structure appears intact on each brick.
As long as the file structure is intact, you will be able to recreate
the volume, although it may require a potentially painful rsync in the
worst case.
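That worst case would be roughly the following sketch (volume name and paths illustrative): create a fresh volume on new, empty brick directories, then copy the data in through a client mount so the new volume places the files itself.

```shell
# Hypothetical worst-case recovery: copy from an old brick directory into a
# new, empty volume via its client mount point (repeat per old brick).
mount -t glusterfs gluster-data:/gdata-new /mnt/gdata-new
rsync -aHAX /data/ /mnt/gdata-new/
```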

- Eco



Steve


________________________________
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of
Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 1:29 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The volume is a pure distribute:

Type: Distribute
In order to have files replicate, you need
1) to have a number of bricks that is a multiple of the replica count,
e.g., for your three node configuration, you would need two bricks per
node to set up replica two. You could set up replica 3, but you will
take a performance hit in doing so.
2) to add a replica count during the volume creation, e.g.
`gluster volume create <vol name> replica 2 server1:/export server2:/export`
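For this three-node setup specifically, a replica 2 sketch would need six bricks (two per node, as above). Brick order matters: consecutive bricks form a replica pair, so the illustrative layout below is arranged so that no pair lands on the same server.

```shell
# Hypothetical replica 2 layout -- paths are illustrative. Consecutive
# bricks pair up, so each pair below spans two different servers.
gluster volume create gdata-rep replica 2 \
    gluster-0-0:/export/brick1 gluster-0-1:/export/brick1 \
    gluster-data:/export/brick1 gluster-0-0:/export/brick2 \
    gluster-0-1:/export/brick2 gluster-data:/export/brick2
```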

 From the volume info you provided, the export directories are different
for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data


Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume
mount and the gluster mount point. In this case, you are mounting the
brick.
In order to see all the files, you would need to mount the volume with
the native client, or NFS.
For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>
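To make either mount persistent across reboots, the matching /etc/fstab entries would look roughly like this (mount point illustrative; _netdev defers the mount until the network is up):

```
# Native client:
gluster-data:/gdata  /mnt/gdata  glusterfs  defaults,_netdev  0 0
# NFS:
gluster-data:/gdata  /mnt/gdata  nfs  vers=3,_netdev  0 0
```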


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:
I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a Rocks 6.2 install to the cluster.
I was able to overcome those issues and mount the export on my node. Thanks to
all for your help.

However, I can only view the portion of files that is directly stored on the
one brick in the cluster. The other bricks do not seem to be replicating, though
gluster reports the volume as up.

[root@mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



The brick we are attaching to has this in the fstab file.
/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0


but "mount -a" does not appear to do anything.
I have to run "mount -t xfs /dev/mapper/the_raid-lv_data /data"
manually to mount it.
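One likely reason: the fstab line above includes "noauto", and mount -a skips any entry marked noauto by design. Dropping that option should let mount -a (and boot-time mounting) pick the brick up, i.e. the line becomes:

```
/dev/mapper/the_raid-lv_data /data xfs quota 1 0
```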



Any help with troubleshooting why we are only seeing data from 1 brick of 3 
would be appreciated,
Thanks,
Steve Postma







________________________________
From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: 
gluster-users@gluster.org
Subject: cant mount gluster volume

I am still unable to mount a new 3.3.1 glusterfs install. I have tried from one 
of the actual machines in the cluster to itself, as well as from various other 
clients. They all seem to be failing in the same part of the process.

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users