Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Fred Stober
Dear Tejas,

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Ok, and how many clients mount these volume(s)? Asking so I can
> understand how many need to remount if the config is changed.

The file system is currently mounted on ~25 of our computing nodes and 5 portal machines.

Cheers, Fred --

Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Fred Stober
Dear Tejas,

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Ok, so if I understand correctly, you want to *replace* an existing
> export with a new export on a new machine, while keeping all data online and
> keeping the source export of the replacement in read-only mode so it can be copied off.

Exactly.

Re: [Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Fred Stober
On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Fred,
>
> Would you like to tell us more about the use case? Like, why would you want
> to do this? If we take a brick out, it would not be possible to get it
> back in (with the existing data).

Ok, here is our use case: We have a small test s

[Gluster-users] Maintenance mode for bricks

2010-04-14 Thread Fred Stober
Dear all,

Is there an easy way to put a storage brick, which is part of a DHT volume, into some kind of read-only maintenance mode, while keeping the whole DHT volume in read/write state? Currently it almost works, but files are still scheduled to go to the server in maintenance mode and in t
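[Editor's note: for readers finding this thread later, GlusterFS releases after this discussion (3.x with the glusterd CLI, which did not yet exist in April 2010) added commands that address this use case directly: `remove-brick ... start` drains a brick by migrating its data to the remaining bricks while the volume stays online, and `replace-brick` swaps one export for another. A hypothetical sketch, with placeholder volume and brick names, not commands from this thread:]

```shell
# Drain a brick out of a DHT volume; the volume stays read/write
# while data migrates off the brick being removed.
gluster volume remove-brick testvol server1:/export/brick1 start
gluster volume remove-brick testvol server1:/export/brick1 status   # wait for "completed"
gluster volume remove-brick testvol server1:/export/brick1 commit

# Alternatively, replace an export with one on a new machine.
gluster volume replace-brick testvol server1:/export/brick1 \
    server2:/export/brick1 commit force
```

[These commands require a running Gluster cluster and the exact syntax varies between releases; consult the administration guide for the version in use.]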