Dear Craig,

I'm afraid I wasn't able to start a self-heal the way you suggested. I tested the following:

----------
WHAT I DID
----------
I created on fs7:/storage/7:
user_7_1  user_7_2  user_7_3

and on fs8:/storage/8:
user_8_1  user_8_2  user_8_3

and filled all of the directories with some small files and subdirectories.

Then, on fs8:
gluster volume create heal_me transport tcp 192.168.101.246:/storage/8 192.168.101.247:/storage/7

Then on fs8 and afterwards on fs7:
mount -t glusterfs localhost:/heal_me /tempmount/
cd /tempmount
find . | xargs stat >>/dev/null 2>&1
umount /tempmount

All went well, with no error messages or anything. The output of `find . | xargs stat` is probably too long to post here, but there were no error messages or anything else that looked suspicious to me.
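One thing I noticed while reading the man pages (just a thought, not something you suggested): `xargs` splits its input on whitespace, so file names containing spaces would make `stat` look at the wrong paths. If that can happen, a null-delimited variant should be safer:

# -print0/-0 keep paths with spaces intact
find . -print0 | xargs -0 stat >/dev/null 2>&1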

-------
RESULTS
-------
ls fs8:/storage/8
user_8_1  user_8_2  user_8_3

ls fs7:/storage/7
user_7_1  user_7_2  user_7_3  user_8_1  user_8_2  user_8_3

ls client:/storage/gluster
user_8_1  user_8_2  user_8_3

ls fs7:/tempmount
user_8_1  user_8_2  user_8_3

ls fs8:/tempmount
user_8_1  user_8_2  user_8_3

Unmounting and remounting have no effect.
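In case it helps with debugging: as far as I understand, the self-heal is supposed to write extended attributes on the bricks, so dumping them directly might show whether anything happened. The exact trusted.* names depend on the Gluster version, so this is only a rough check (getfattr comes from the attr package):

# run as root on each server; trusted.* attributes only show up with -m .
getfattr -d -m . -e hex /storage/7/user_7_1
getfattr -d -m . -e hex /storage/8/user_8_1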

Both servers run Ubuntu Server 10.04, the client runs CentOS 5; 64-bit all around.

Thanks and regards,
Daniel


On 12/03/2010 10:10 AM, Craig Carl wrote:
Daniel -
If you want to export existing data you will need to run the self-heal
process so the extended attributes can get written. While this should work
without any issues, it isn't an officially supported process, so please
make sure you have complete and up-to-date backups.

After you have set up and started the Gluster volume, mount it locally on
one of the servers using `mount -t glusterfs localhost:/<volname> /<some
temporary mount>`. cd into the root of the mount point and run `find . |
xargs stat >>/dev/null 2>&1` to start a self-heal.

Also, the command you used to create the volume should not have worked;
it is missing a volume name: `gluster volume create <VOLNAME> transport
tcp fs7:/storage/7 fs8:/storage/8`. Typo, maybe?
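For example, with a placeholder name filled in (the name itself is arbitrary, pick whatever suits you), the create and the usual start step would look like:

# "institute" is only an example volume name
gluster volume create institute transport tcp fs7:/storage/7 fs8:/storage/8
gluster volume start institute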

Please let us know how it goes, and please let me know if you have any
other questions.

Thanks,

Craig

--
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.c...@gmail.com
Twitter - @gluster
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/




On 12/02/2010 11:38 PM, Daniel Zander wrote:
Dear all,

At our institute we currently have six file servers, each one of them
individually mounted via NFS on ~20 clients. The structure on the
servers and the clients is the following:

/storage/1/<user_directories> (NFS export from FS1)
/storage/2/<user_directories> (NFS export from FS2)
etc ...
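For illustration, each client's /etc/fstab currently carries one line per server, roughly like this (server names and mount options simplified, not our exact config):

# one NFS mount per file server, on every client
fs1:/storage/1  /storage/1  nfs  defaults  0 0
fs2:/storage/2  /storage/2  nfs  defaults  0 0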

Recently, we decided that we would like to migrate this to GlusterFS
so that we can have one big storage directory on the clients. Let's
call it

/gluster/<user_directories>

I tried to set up a Gluster volume with two empty file servers and it
worked without any problems. I could easily mount it on a client and
use it (using the native GlusterFS mount).

If we now want to migrate the entire institute, it would be very
convenient if existing folders could easily be included in a new
volume. I tried to do this, but did not succeed.

Here's a short description of what I tried:

Existing folders:
on fs7: /storage/7/user_1, user_2
on fs8: /storage/8/user_3, user_4

gluster volume create transport tcp fs7:/storage/7, fs8:/storage/8

I hoped to see on the client:
/gluster/user_1
/gluster/user_2
/gluster/user_3
/gluster/user_4

The creation was successful, and the volume could be started and mounted.
On the client, however, I could only find (via "ls /gluster") the
directories user_1 and user_2. But when I tried "cd /gluster/user_3",
it succeeded! Now "ls /gluster" showed me user_1, user_2 and user_3.
Unfortunately, user_3's subdirectories and files were still invisible,
but with the above-mentioned trick I could make them visible.

This is, however, not an option, as there are too many users and the
file structures are too complicated to do this manually. In any case,
it seems like voodoo to me.
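If the trick really just needs every path to be looked up once, I suppose it could be scripted rather than done by hand; something like this should force a lookup on every file and directory through the mount (untested guesswork on my part, with /gluster being the client mount point):

# stat every path once to trigger the lookups
find /gluster | xargs stat >/dev/null 2>&1

Even if that works, though, it feels like a workaround rather than a proper solution.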

Is it possible to include all of the existing directories in the new
GlusterFS volume? If so, how?

Thank you in advance for your efforts,
Regards,
Daniel
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
