All right, so we can shrink the file system by removing an OST. The
manual has useful information about OST failure/removal, but I have a
few related questions about it.
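
For reference, here is my rough understanding of the removal procedure
from the manual; the fsname, OST UUID, paths, and device number below
are just placeholders from my reading, not our actual configuration:

    # find files with objects on the OST that is being removed
    lfs find --obd testfs-OST0003_UUID /mnt/testfs > /tmp/ost3-files

    # on the MDS, deactivate the corresponding OSC so no new objects
    # are allocated on that OST (device number taken from 'lctl dl')
    lctl --device 9 deactivate

    # then migrate each listed file by copying and renaming it, e.g.
    cp /mnt/testfs/some/file /mnt/testfs/some/file.tmp
    mv /mnt/testfs/some/file.tmp /mnt/testfs/some/file

Please correct me if any of that is off.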

The manual has a note in failover chapter 8-4, for stopping a client
process that waits indefinitely, saying the OST should be explicitly
marked "inactive" on the clients: 'lctl --device <failed OSC device on
the client> deactivate'. But a note in chapter 4-18 says: "Do not
deactivate the OST on the clients. Doing so causes errors (EIOs) and
the copy out to fail." This is a bit confusing. So what should we do
when an OST fails? And when should we deactivate the OST (or, to be
precise, the OSC?) on a client?
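
In case it matters, this is how I would expect to locate and
deactivate the OSC on a client; the device number here is only an
example, taken from the 'lctl dl' listing:

    # list configured devices and find the OSC for the failed OST
    lctl dl | grep osc

    # deactivate that OSC by its device number, e.g. 7
    lctl --device 7 deactivate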

Could you please elaborate on configuring failover when making a new
file system? The mkfs.lustre command does not have a --failover
switch, but rather a --failnode switch. So do we just need to specify
'--failnode=<ip.addr.of.another....@interface>', or something else?
What is the correct method?
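
That is, is something along these lines the right idea? The fsname,
NIDs, and device below are made up purely for illustration:

    # format an OST, declaring a failover partner for it
    mkfs.lustre --fsname=testfs --ost \
        --mgsnode=10.0.0.1@tcp0 \
        --failnode=10.0.0.3@tcp0 \
        /dev/sdb

    # bring the OST into service
    mount -t lustre /dev/sdb /mnt/ost0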

And do we need to configure this (spare) OST for the file system, and
does it need to be active/mounted, while running the above mkfs.lustre
command?

-
CS.


On Mon, Jul 13, 2009 at 12:56 PM, Brian J. Murrell<brian.murr...@sun.com> wrote:
> On Mon, 2009-07-13 at 12:51 -0500, Carlos Santana wrote:
>> Does lustre support shrinking of file system size - online or offline?
>> I read online is not supported, but I couldn't find any info for the
>> offline shrinking. My guess is that it is not supported. Please
>> correct me if I am wrong.
>
> You can shrink the filesystem by simply removing an OST.  Of course, if
> there are objects on that OST, you need to move them off first, or you
> will lose the files (or parts of files, in the case of striped files)
> those objects are members of.
>
> The manual, this list, and bugzilla have all discussed how to move
> files off an OST in pretty great detail.  Please check them for the
> specifics.
>
> b.