Re: [Gluster-users] Start a new volume with pre-existing directories

2010-12-07 Thread Dan Bretherton



Date: Tue, 07 Dec 2010 09:15:06 +0100
From: Daniel Zander zan...@ekp.uni-karlsruhe.de
Subject: Re: [Gluster-users] Start a new volume with pre-existing
 directories
To: gluster-users@gluster.org
Message-ID: 4cfded0a.5030...@ekp.uni-karlsruhe.de
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Dear all,

as there have been no further questions or suggestions, I assume that
you are just as out of ideas as I am. Maybe the logfiles of the two
bricks will help. They were recorded while I created a new volume,
started it, mounted it on FS8, ran `find . | xargs stat > /dev/null 2>&1`
and unmounted again. Then the same on FS7. Finally, I mounted the
volume on a client.
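For reference, the sequence was roughly the following (the volume was
created and started once, then the mount/stat/unmount part was run on
FS8 and afterwards on FS7; names and paths as in my earlier mail, the
start command written from memory):

gluster volume create heal_me transport tcp 192.168.101.246:/storage/8 192.168.101.247:/storage/7
gluster volume start heal_me
mount -t glusterfs localhost:/heal_me /tempmount
cd /tempmount
find . | xargs stat > /dev/null 2>&1
cd /
umount /tempmount

And here are the logfiles: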

-
 FS8
-

[2010-12-07 09:00:26.86494] W [graph.c:274:gf_add_cmdline_options]
heal_me-server: adding option 'listen-port' for volume 'heal_me-server'
with value '24022'
[2010-12-07 09:00:26.87247] W
[rpc-transport.c:566:validate_volume_options] tcp.heal_me-server: option
'listen-port' is deprecated, preferred is
'transport.socket.listen-port', continuing with correction
Given volfile:
+--+
1: volume heal_me-posix
2: type storage/posix
3: option directory /storage/8
4: end-volume
5:
6: volume heal_me-access-control
7: type features/access-control
8: subvolumes heal_me-posix
9: end-volume
   10:
   11: volume heal_me-locks
   12: type features/locks
   13: subvolumes heal_me-access-control
   14: end-volume
   15:
   16: volume heal_me-io-threads
   17: type performance/io-threads
   18: option thread-count 16
   19: subvolumes heal_me-locks
   20: end-volume
   21:
   22: volume /storage/8
   23: type debug/io-stats
   24: subvolumes heal_me-io-threads
   25: end-volume
   26:
   27: volume heal_me-server
   28: type protocol/server
   29: option transport-type tcp
   30: option auth.addr./storage/8.allow *
   31: subvolumes /storage/8
   32: end-volume

+--+
[2010-12-07 09:00:30.168852] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.246:1023
[2010-12-07 09:00:30.240014] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.247:1022
[2010-12-07 09:01:17.729708] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.246:1019
[2010-12-07 09:02:27.588813] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.247:1017
[2010-12-07 09:03:05.394282] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.203:1008


-
 FS7
-
[2010-12-07 08:59:04.673533] W [graph.c:274:gf_add_cmdline_options]
heal_me-server: adding option 'listen-port' for volume 'heal_me-server'
with value '24022'
[2010-12-07 08:59:04.674068] W
[rpc-transport.c:566:validate_volume_options] tcp.heal_me-server: option
'listen-port' is deprecated, preferred is
'transport.socket.listen-port', continuing with correction
Given volfile:
+--+
1: volume heal_me-posix
2: type storage/posix
3: option directory /storage/7
4: end-volume
5:
6: volume heal_me-access-control
7: type features/access-control
8: subvolumes heal_me-posix
9: end-volume
   10:
   11: volume heal_me-locks
   12: type features/locks
   13: subvolumes heal_me-access-control
   14: end-volume
   15:
   16: volume heal_me-io-threads
   17: type performance/io-threads
   18: option thread-count 16
   19: subvolumes heal_me-locks
   20: end-volume
   21:
   22: volume /storage/7
   23: type debug/io-stats
   24: subvolumes heal_me-io-threads
   25: end-volume
   26:
   27: volume heal_me-server
   28: type protocol/server
   29: option transport-type tcp
   30: option auth.addr./storage/7.allow *
   31: subvolumes /storage/7
   32: end-volume

+--+
[2010-12-07 08:59:08.717715] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.247:1023
[2010-12-07 08:59:08.757648] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.246:1021
[2010-12-07 08:59:56.274677] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.246:1020
[2010-12-07 09:01:06.130142] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.247:1020
[2010-12-07 09:01:43.945880] I [server-handshake.c:535:server_setvolume]
heal_me-server: accepted client from 192.168.101.203:1007


Any help is greatly appreciated,
Regards,
Daniel



On 12/03/2010 01:24 PM, Daniel

Re: [Gluster-users] Start a new volume with pre-existing directories

2010-12-03 Thread Craig Carl

Daniel -
If you want to export existing data you will need to run the self
heal process so extended attributes can get written. While this should
work without any issues, it isn't an officially supported process, so
please make sure you have complete and up-to-date backups.


After you have set up and started the Gluster volume, mount it locally on
one of the servers using `mount -t glusterfs localhost:/volname
/some temporary mount`. cd into the root of the mount point and run
`find . | xargs stat > /dev/null 2>&1` to start a self heal.
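To check whether the attributes were actually written you can look at the
backend directories directly. A rough sketch (attribute names can vary
between releases, so treat this as an informal check rather than an
official procedure; run as root on the servers):

getfattr -d -m . -e hex /storage/7
getfattr -d -m . -e hex /storage/7/user_1

On a distribute volume each directory should show a trusted.glusterfs.dht
entry once the layout has been written.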


Also, the command you used to create the volume should not have worked;
it is missing a volume name - gluster volume create VOLNAME transport
tcp fs7:/storage/7, fs8:/storage/8 - typo maybe?
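For a plain distributed volume the create command would look something
like this (bricks separated by spaces, no commas; VOLNAME is whatever
name you choose):

gluster volume create VOLNAME transport tcp fs7:/storage/7 fs8:/storage/8
gluster volume start VOLNAME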


Please let us know how it goes, and please let me know if you have any 
other questions.


Thanks,

Craig

--
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.c...@gmail.com
Twitter - @gluster
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/




On 12/02/2010 11:38 PM, Daniel Zander wrote:

Dear all,

at our institute, we currently have 6 file servers, each one of them 
individually mounted via NFS on ~ 20 clients. The structure on the 
servers and the clients is the following:


/storage/1/user_directories (NFS export from FS1)
/storage/2/user_directories (NFS export from FS2)
etc ...

Recently, we decided that we would like to migrate this to glusterFS, 
so that we can have one big storage directory on the clients. Let's 
call it


/gluster/user_directories

I tried to set up a gluster volume with two empty fileservers and it 
worked without any problems. I could easily mount it on a client and 
use it (using the native glusterFS mount).


If we now want to migrate the entire institute, it would be very
convenient if existing folders could easily be included in a new
volume. I tried to do this, but I did not succeed.


Here's a short description of what I tried:

Existing folders:
on fs7: /storage/7/user_1,user_2
on fs8: /storage/8/user_3,user_4

gluster volume create transport tcp fs7:/storage/7, fs8:/storage/8

I hoped to see on the client:
/gluster/user_1
/gluster/user_2
/gluster/user_3
/gluster/user_4

The creation was successful, the volume could be started and mounted. 
On the client, however, I could only find (via ls /gluster) the 
directories user_1 and user_2. But when I tried  cd /gluster/user_3, 
it succeeded! Now ls /gluster showed me user_1, user_2 and user_3. 
Unfortunately, user_3's subdirectories and files were still invisible, 
but with the above mentioned trick, I could make them visible.


This is, however, not an option, as there are too many users and the
file structures are too complicated to do this manually. In any case,
it seems like voodoo to me.


Is it possible to include all of the existing directories in the new 
glusterFS volume? If yes: how?


Thank you in advance for your efforts,
Regards,
Daniel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] Start a new volume with pre-existing directories

2010-12-03 Thread Daniel Zander

Hi!

 Also, the command you used to create the volume should not have worked;
 it is missing a volume name - gluster volume create VOLNAME transport
 tcp fs7:/storage/7, fs8:/storage/8 - typo maybe?

Yes, typo. Sorry ...

Unfortunately, we do not have the storage capacity for a complete 
backup. If we should decide to take the risk, I will let you know how it 
goes.


Thanks for your help,
Daniel



On 12/03/2010 10:10 AM, Craig Carl wrote:

Daniel -
If you want to export existing data you will need to run the self heal
process so extended attributes can get written. While this should work
without any issues, it isn't an officially supported process, so please
make sure you have complete and up-to-date backups.

After you have set up and started the Gluster volume, mount it locally on
one of the servers using `mount -t glusterfs localhost:/volname /some
temporary mount`. cd into the root of the mount point and run `find . |
xargs stat > /dev/null 2>&1` to start a self heal.

Also, the command you used to create the volume should not have worked;
it is missing a volume name - gluster volume create VOLNAME transport
tcp fs7:/storage/7, fs8:/storage/8 - typo maybe?

Please let us know how it goes, and please let me know if you have any
other questions.

Thanks,

Craig

--
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.c...@gmail.com
Twitter - @gluster
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/




On 12/02/2010 11:38 PM, Daniel Zander wrote:

Dear all,

at our institute, we currently have 6 file servers, each one of them
individually mounted via NFS on ~ 20 clients. The structure on the
servers and the clients is the following:

/storage/1/user_directories (NFS export from FS1)
/storage/2/user_directories (NFS export from FS2)
etc ...

Recently, we decided that we would like to migrate this to glusterFS,
so that we can have one big storage directory on the clients. Let's
call it

/gluster/user_directories

I tried to set up a gluster volume with two empty fileservers and it
worked without any problems. I could easily mount it on a client and
use it (using the native glusterFS mount).

If we now want to migrate the entire institute, it would be very
convenient if existing folders could easily be included in a new
volume. I tried to do this, but I did not succeed.

Here's a short description of what I tried:

Existing folders:
on fs7: /storage/7/user_1,user_2
on fs8: /storage/8/user_3,user_4

gluster volume create transport tcp fs7:/storage/7, fs8:/storage/8

I hoped to see on the client:
/gluster/user_1
/gluster/user_2
/gluster/user_3
/gluster/user_4

The creation was successful, the volume could be started and mounted.
On the client, however, I could only find (via ls /gluster) the
directories user_1 and user_2. But when I tried cd /gluster/user_3,
it succeeded! Now ls /gluster showed me user_1, user_2 and user_3.
Unfortunately, user_3's subdirectories and files were still invisible,
but with the above mentioned trick, I could make them visible.

This is, however, not an option, as there are too many users and the
file structures are too complicated to do this manually. In any case,
it seems like voodoo to me.

Is it possible to include all of the existing directories in the new
glusterFS volume? If yes: how?

Thank you in advance for your efforts,
Regards,
Daniel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] Start a new volume with pre-existing directories

2010-12-03 Thread Daniel Zander

Dear Craig,

I'm afraid I wasn't able to start a self heal the way you suggested. I 
tested the following:


--
WHAT I DID
--
I created on fs7:/storage/7
user_7_1  user_7_2  user_7_3

and on fs8:/storage/8
user_8_1  user_8_2  user_8_3

and filled all of the directories with some small files and subdirectories.

Then, on fs8:
gluster volume create heal_me transport tcp 192.168.101.246:/storage/8 
192.168.101.247:/storage/7


Then on fs8 and afterwards on fs7:
mount -t glusterfs localhost:/heal_me /tempmount/
cd /tempmount
find . | xargs stat > /dev/null 2>&1
umount /tempmount

All went well, no error messages or anything. The output of `find . |
xargs stat` is probably too long to post here, but it contained nothing
that looked suspicious to me.


---
RESULTS
---
ls fs8:/storage/8
user_8_1  user_8_2  user_8_3

ls fs7:/storage/7
user_7_1  user_7_2  user_7_3  user_8_1  user_8_2  user_8_3

ls client:/storage/gluster
user_8_1  user_8_2  user_8_3

ls fs7:/tempmount
user_8_1  user_8_2  user_8_3

ls fs8:/tempmount
user_8_1  user_8_2  user_8_3

Unmounting and remounting has no effect.

Both servers run Ubuntu Server 10.04, the client runs CentOS 5, 64-bit all around.
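If it helps with debugging, I could also dump the extended attributes
from the brick directories on both servers; I assume something like

getfattr -d -m . -e hex /storage/7
getfattr -d -m . -e hex /storage/8

(run as root) would show whether the find/stat run created any
trusted.glusterfs.* attributes.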

Thanks and regards,
Daniel


On 12/03/2010 10:10 AM, Craig Carl wrote:

Daniel -
If you want to export existing data you will need to run the self heal
process so extended attributes can get written. While this should work
without any issues, it isn't an officially supported process, so please
make sure you have complete and up-to-date backups.

After you have set up and started the Gluster volume, mount it locally on
one of the servers using `mount -t glusterfs localhost:/volname /some
temporary mount`. cd into the root of the mount point and run `find . |
xargs stat > /dev/null 2>&1` to start a self heal.

Also, the command you used to create the volume should not have worked;
it is missing a volume name - gluster volume create VOLNAME transport
tcp fs7:/storage/7, fs8:/storage/8 - typo maybe?

Please let us know how it goes, and please let me know if you have any
other questions.

Thanks,

Craig

--
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.c...@gmail.com
Twitter - @gluster
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/




On 12/02/2010 11:38 PM, Daniel Zander wrote:

Dear all,

at our institute, we currently have 6 file servers, each one of them
individually mounted via NFS on ~ 20 clients. The structure on the
servers and the clients is the following:

/storage/1/user_directories (NFS export from FS1)
/storage/2/user_directories (NFS export from FS2)
etc ...

Recently, we decided that we would like to migrate this to glusterFS,
so that we can have one big storage directory on the clients. Let's
call it

/gluster/user_directories

I tried to set up a gluster volume with two empty fileservers and it
worked without any problems. I could easily mount it on a client and
use it (using the native glusterFS mount).

If we now want to migrate the entire institute, it would be very
convenient if existing folders could easily be included in a new
volume. I tried to do this, but I did not succeed.

Here's a short description of what I tried:

Existing folders:
on fs7: /storage/7/user_1,user_2
on fs8: /storage/8/user_3,user_4

gluster volume create transport tcp fs7:/storage/7, fs8:/storage/8

I hoped to see on the client:
/gluster/user_1
/gluster/user_2
/gluster/user_3
/gluster/user_4

The creation was successful, the volume could be started and mounted.
On the client, however, I could only find (via ls /gluster) the
directories user_1 and user_2. But when I tried cd /gluster/user_3,
it succeeded! Now ls /gluster showed me user_1, user_2 and user_3.
Unfortunately, user_3's subdirectories and files were still invisible,
but with the above mentioned trick, I could make them visible.

This is, however, not an option, as there are too many users and the
file structures are too complicated to do this manually. In any case,
it seems like voodoo to me.

Is it possible to include all of the existing directories in the new
glusterFS volume? If yes: how?

Thank you in advance for your efforts,
Regards,
Daniel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] Start a new volume with pre-existing directories

2010-12-03 Thread Craig Carl

Can you send the output of -

`gluster volume info all`
`gluster peer status`

from a gluster storage server and

`mount` from the client?

Craig


On 12/03/2010 02:50 AM, Daniel Zander wrote:

Dear Craig,

I'm afraid I wasn't able to start a self heal the way you suggested. I 
tested the following:


--
WHAT I DID
--
I created on fs7:/storage/7
user_7_1  user_7_2  user_7_3

and on fs8:/storage/8
user_8_1  user_8_2  user_8_3

and filled all of the directories with some small files and 
subdirectories.


Then, on fs8:
gluster volume create heal_me transport tcp 192.168.101.246:/storage/8 
192.168.101.247:/storage/7


Then on fs8 and afterwards on fs7:
mount -t glusterfs localhost:/heal_me /tempmount/
cd /tempmount
find . | xargs stat > /dev/null 2>&1
umount /tempmount

All went well, no error messages or anything. The output of `find . |
xargs stat` is probably too long to post here, but it contained nothing
that looked suspicious to me.


---
RESULTS
---
ls fs8:/storage/8
user_8_1  user_8_2  user_8_3

ls fs7:/storage/7
user_7_1  user_7_2  user_7_3  user_8_1  user_8_2  user_8_3

ls client:/storage/gluster
user_8_1  user_8_2  user_8_3

ls fs7:/tempmount
user_8_1  user_8_2  user_8_3

ls fs8:/tempmount
user_8_1  user_8_2  user_8_3

Unmounting and remounting has no effect.

Both servers run Ubuntu Server 10.04, the client runs CentOS 5, 64-bit
all around.


Thanks and regards,
Daniel


On 12/03/2010 10:10 AM, Craig Carl wrote:

Daniel -
If you want to export existing data you will need to run the self heal
process so extended attributes can get written. While this should work
without any issues, it isn't an officially supported process, so please
make sure you have complete and up-to-date backups.

After you have set up and started the Gluster volume, mount it locally on
one of the servers using `mount -t glusterfs localhost:/volname /some
temporary mount`. cd into the root of the mount point and run `find . |
xargs stat > /dev/null 2>&1` to start a self heal.

Also, the command you used to create the volume should not have worked;
it is missing a volume name - gluster volume create VOLNAME transport
tcp fs7:/storage/7, fs8:/storage/8 - typo maybe?

Please let us know how it goes, and please let me know if you have any
other questions.

Thanks,

Craig

--
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.c...@gmail.com
Twitter - @gluster
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/ 






On 12/02/2010 11:38 PM, Daniel Zander wrote:

Dear all,

at our institute, we currently have 6 file servers, each one of them
individually mounted via NFS on ~ 20 clients. The structure on the
servers and the clients is the following:

/storage/1/user_directories (NFS export from FS1)
/storage/2/user_directories (NFS export from FS2)
etc ...

Recently, we decided that we would like to migrate this to glusterFS,
so that we can have one big storage directory on the clients. Let's
call it

/gluster/user_directories

I tried to set up a gluster volume with two empty fileservers and it
worked without any problems. I could easily mount it on a client and
use it (using the native glusterFS mount).

If we now want to migrate the entire institute, it would be very
convenient if existing folders could easily be included in a new
volume. I tried to do this, but I did not succeed.

Here's a short description of what I tried:

Existing folders:
on fs7: /storage/7/user_1,user_2
on fs8: /storage/8/user_3,user_4

gluster volume create transport tcp fs7:/storage/7, fs8:/storage/8

I hoped to see on the client:
/gluster/user_1
/gluster/user_2
/gluster/user_3
/gluster/user_4

The creation was successful, the volume could be started and mounted.
On the client, however, I could only find (via ls /gluster) the
directories user_1 and user_2. But when I tried cd /gluster/user_3,
it succeeded! Now ls /gluster showed me user_1, user_2 and user_3.
Unfortunately, user_3's subdirectories and files were still invisible,
but with the above mentioned trick, I could make them visible.

This is, however, not an option, as there are too many users and the
file structures are too complicated to do this manually. In any case,
it seems like voodoo to me.

Is it possible to include all of the existing directories in the new
glusterFS volume? If yes: how?

Thank you in advance for your efforts,
Regards,
Daniel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Start a new volume with pre-existing directories

2010-12-03 Thread Daniel Zander

Hi!

Can you send the output of -

`gluster volume info all`
`gluster peer status`

from a gluster storage server and

`mount` from the client?

Certainly

--
r...@ekpfs8:~# gluster volume info all
Volume Name: heal_me
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.101.246:/storage/8
Brick2: 192.168.101.247:/storage/7
--
r...@ekpfs8:~# gluster peer status
Number of Peers: 1

Hostname: 192.168.101.247
Uuid: b36ce6e3-fa14-4d7e-bc4a-170a59a6f4f5
State: Peer in Cluster (Connected)
--
[r...@ekpbelle ~]# mount
[ ... ]
glusterfs#192.168.101.246:/heal_me on /storage/gluster type fuse 
(rw,allow_other,default_permissions,max_read=131072)

--

Regards,
Daniel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users