Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Tejas N. Bhise
Thanks, Ian.

Fred - the method Ian describes, using self heal, is a good way of doing it 
while maintaining gluster semantics and not playing around directly with the
backend.

In a future release, with dynamic volume management, one will be able to do 
such things with simple commands while keeping the data online the whole time. 

Regards,
Tejas.

- Original Message -
From: "Ian Rogers" 
To: gluster-users@gluster.org
Sent: Wednesday, April 14, 2010 8:43:19 PM
Subject: Re: [Gluster-users] Maintenance mode for bricks

On 14/04/2010 13:20, Fred Stober wrote:
> On Wednesday 14 April 2010, Tejas N. Bhise wrote:
>
>> Fred,
>>
>> Would you like to tell us more about the use case? Like why would you want
>> to do this? If we take a brick out, it would not be possible to get it
>> back in (with the existing data).
>>  
> Ok, here is our use case:
> We have a small test system running on 3 file servers. cluster/distribute is
> used to give a flat view of the file servers. We now have the problem that one
> file server is going to be replaced with a larger one. Therefore we want to
> put the old file server into read-only mode to rsync the files to the new
> server. Unfortunately this will take ~2 days. During this time it would be
> nice to keep the glusterfs in read/write mode.
>
> If I understand it correctly, I should be able to use "lookup-unhashed" to
> reintegrate the new file server into the existing file system when we switch
> off the old server.
>
> Cheers,
> Fred
>
>

Could you use gluster to put the new server and old one into a 
cluster/replicate pair so it looks just like one server to the 
cluster/distribute above it? Then do rsync or let gluster copy 
everything across with a "self heal". When the new one is up to date 
just disable the old one and remove the cluster/replicate.
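
For what it's worth, Ian's suggestion might look roughly like this as a client volfile fragment. All hostnames, volume names, and brick paths below are made up, and "server2"/"server3" stand in for the other two existing exports:

```
# Sketch only: wrap the old and new exports in cluster/replicate so the
# distribute layer above still sees a single subvolume in the same position.
volume old
  type protocol/client
  option transport-type tcp
  option remote-host oldserver.example.org
  option remote-subvolume brick
end-volume

volume new
  type protocol/client
  option transport-type tcp
  option remote-host newserver.example.org
  option remote-subvolume brick
end-volume

volume migrate-pair
  type cluster/replicate
  subvolumes old new
end-volume

volume dht
  type cluster/distribute
  subvolumes migrate-pair server2 server3
end-volume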

-- 
www.ContactClean.com
Making changing email address as easy as clicking a mouse.
Helping you keep in touch.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Ian Rogers

On 14/04/2010 13:20, Fred Stober wrote:

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
   

Fred,

Would you like to tell us more about the use case? Like why would you want
to do this? If we take a brick out, it would not be possible to get it
back in (with the existing data).
 

Ok, here is our use case:
We have a small test system running on 3 file servers. cluster/distribute is
used to give a flat view of the file servers. We now have the problem that one
file server is going to be replaced with a larger one. Therefore we want to
put the old file server into read-only mode to rsync the files to the new
server. Unfortunately this will take ~2 days. During this time it would be
nice to keep the glusterfs in read/write mode.

If I understand it correctly, I should be able to use "lookup-unhashed" to
reintegrate the new file server into the existing file system when we switch
off the old server.

Cheers,
Fred

   


Could you use gluster to put the new server and old one into a 
cluster/replicate pair so it looks just like one server to the 
cluster/distribute above it? Then do rsync or let gluster copy 
everything across with a "self heal". When the new one is up to date 
just disable the old one and remove the cluster/replicate.
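
The "self heal" step mentioned above was, in GlusterFS of this era, typically triggered from a client by stat'ing every file through the mount point. A sketch (the mount path is a placeholder; adjust to your client mount):

```shell
# Walk the mount and stat() every entry; on a replicate volume each lookup
# triggers self-heal of that file onto the out-of-date subvolume.
MOUNT=${MOUNT:-/mnt/glusterfs}
find "$MOUNT" -print0 | xargs -0 stat > /dev/null
```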


--
www.ContactClean.com
Making changing email address as easy as clicking a mouse.
Helping you keep in touch.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Fred Stober
Dear Tejas,

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Ok, and how many clients mount these volume(s)? Asking so I can
> understand how many would need to remount if the config is changed.
The file system is currently mounted on ~25 of our computing nodes and 5 
portal machines.

Cheers,
Fred

-- 
Fred-Markus Stober
sto...@cern.ch
Institute of Experimental Nuclear Physics
Karlsruhe Institute of Technology
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Tejas N. Bhise
Ok, and how many clients mount these volume(s)? Asking so I can understand 
how many would need to remount if the config is changed.


- Original Message -
From: "Fred Stober" 
To: "Tejas N. Bhise" 
Cc: gluster-users@gluster.org
Sent: Wednesday, April 14, 2010 7:00:56 PM
Subject: Re: [Gluster-users] Maintenance mode for bricks

Dear Tejas,

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Ok, so if I understand correctly, you want to *replace* an existing
> export with a new export on a new machine, while keeping all data online and
> keeping the source export of the replacement in read-only mode so it can be copied off.
Exactly.

> Are there other processes also doing a read off the read only
> export/sub-volume, besides the rsync ?
Yes, there is some activity ...

The goal is to keep the whole process as transparent as possible to the users 
who read/write to the flat space. Since our users mostly have a 
write-once read-many usage pattern, it should be possible to keep them happy.

Regards,
Fred

-- 
Fred-Markus Stober
sto...@cern.ch
Karlsruhe Institute of Technology
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Fred Stober
Dear Tejas,

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Ok, so if I understand correctly, you want to *replace* an existing
> export with a new export on a new machine, while keeping all data online and
> keeping the source export of the replacement in read-only mode so it can be copied off.
Exactly.

> Are there other processes also doing a read off the read only
> export/sub-volume, besides the rsync ?
Yes, there is some activity ...

The goal is to keep the whole process as transparent as possible to the users 
who read/write to the flat space. Since our users mostly have a 
write-once read-many usage pattern, it should be possible to keep them happy.

Regards,
Fred

-- 
Fred-Markus Stober
sto...@cern.ch
Karlsruhe Institute of Technology
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Tejas N. Bhise
Ok, so if I understand correctly, you want to *replace* an existing export with 
a new export on a new machine, while keeping all data online and keeping the 
source export of the replacement in read-only mode so it can be copied off. Are there 
other processes also doing a read off the read-only export/sub-volume, besides 
the rsync?

Regards,
Tejas.

- Original Message -
From: "Fred Stober" 
To: "Tejas N. Bhise" 
Cc: gluster-users@gluster.org
Sent: Wednesday, April 14, 2010 5:50:33 PM
Subject: Re: [Gluster-users] Maintenance mode for bricks

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Fred,
>
> Would you like to tell us more about the use case? Like why would you want
> to do this? If we take a brick out, it would not be possible to get it
> back in (with the existing data).

Ok, here is our use case:
We have a small test system running on 3 file servers. cluster/distribute is 
used to give a flat view of the file servers. We now have the problem that one 
file server is going to be replaced with a larger one. Therefore we want to 
put the old file server into read-only mode to rsync the files to the new 
server. Unfortunately this will take ~2 days. During this time it would be 
nice to keep the glusterfs in read/write mode.

If I understand it correctly, I should be able to use "lookup-unhashed" to 
reintegrate the new file server into the existing file system when we switch 
off the old server.

Cheers,
Fred

-- 
Fred-Markus Stober
sto...@cern.ch
Karlsruhe Institute of Technology
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Fred Stober
On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Fred,
>
> Would you like to tell us more about the use case? Like why would you want
> to do this? If we take a brick out, it would not be possible to get it
> back in (with the existing data).

Ok, here is our use case:
We have a small test system running on 3 file servers. cluster/distribute is 
used to give a flat view of the file servers. We now have the problem that one 
file server is going to be replaced with a larger one. Therefore we want to 
put the old file server into read-only mode to rsync the files to the new 
server. Unfortunately this will take ~2 days. During this time it would be 
nice to keep the glusterfs in read/write mode.

If I understand it correctly, I should be able to use "lookup-unhashed" to 
reintegrate the new file server into the existing file system when we switch 
off the old server.
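
For reference, "lookup-unhashed" is an option on the distribute translator; in a volfile it would look something like this (volume and subvolume names are placeholders). With it set, a lookup that misses on the hashed subvolume is broadcast to all subvolumes, which is what lets files sitting on a brick other than the one the name hashes to still be found after the server swap:

```
volume dht
  type cluster/distribute
  option lookup-unhashed yes
  subvolumes server1 server2 server3
end-volume
```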

Cheers,
Fred

-- 
Fred-Markus Stober
sto...@cern.ch
Karlsruhe Institute of Technology
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Tejas N. Bhise
Fred,

Would you like to tell us more about the use case? Like why would you want to 
do this? If we take a brick out, it would not be possible to get it back in 
(with the existing data). 

Regards,
Tejas.
- Original Message -
From: "Fred Stober" 
To: gluster-users@gluster.org
Sent: Wednesday, April 14, 2010 4:57:24 PM
Subject: [Gluster-users] Maintenance mode for bricks

Dear all,

Is there an easy way to put a storage brick, which is part of a dht volume, 
into some kind of read-only maintenance mode, while keeping the whole dht 
volume in read/write state?
Currently it almost works, but files are still scheduled to go to the server 
in maintenance mode, and in this case you get an error. It should be possible 
to write to another brick instead.

Sincerely,

-- 
Fred-Markus Stober
fred.sto...@kit.edu
Karlsruhe Institute of Technology
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Fred Stober
Dear all,

Is there an easy way to put a storage brick, which is part of a dht volume, 
into some kind of read-only maintenance mode, while keeping the whole dht 
volume in read/write state?
Currently it almost works, but files are still scheduled to go to the server 
in maintenance mode, and in this case you get an error. It should be possible 
to write to another brick instead.
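
To illustrate why creates fail rather than spill over: distribute picks the target brick purely from a hash of the file name, so a name that hashes to the read-only brick has nowhere else to go unless the translator is taught to fall back. A toy sketch of the idea (generic name-hashing, not GlusterFS's actual hash or layout logic):

```python
import zlib

BRICKS = ["server1", "server2", "server3"]  # hypothetical brick names

def pick_brick(filename: str) -> str:
    """Choose a brick from the file name alone, DHT-style."""
    return BRICKS[zlib.crc32(filename.encode()) % len(BRICKS)]

# Placement is deterministic: the same name always maps to the same brick,
# so if that brick is read-only the create simply fails.
assert pick_brick("data.txt") == pick_brick("data.txt")
```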

Sincerely,

-- 
Fred-Markus Stober
fred.sto...@kit.edu
Karlsruhe Institute of Technology
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users