On Mar 29, 2020, at 20:04, Gong-Do Hwang
<grover.hw...@gmail.com> wrote:
Thanks, Andreas,
I ran "mkfs.lustre --ost --reformat --fsname lfs_home --index 6 --mgsnode
10.10.0.10@o2ib --servicenode 10.10.0.13@o2ib --failnode 10.10.0.14@o2ib
/dev/mapper/mpathx", and at that time /dev/mapp
It would be useful if you provided the actual error messages, so we can see
where the problem is.
What command did you run on the OST?
Does the OST still show that it has data in it (e.g. "df" or "dumpe2fs -h"
shows lots of used blocks)?
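A minimal sketch of those checks, assuming the OST is the /dev/mapper/mpathx
device mentioned below and /mnt/ost6 is a hypothetical mount point:

    # Read-only dump of the ldiskfs superblock; comparing "Block count:"
    # with "Free blocks:" shows whether the device still holds data.
    dumpe2fs -h /dev/mapper/mpathx

    # If the OST can still be mounted, df on its mount point shows usage.
    df -h /mnt/ost6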
On Mar 25, 2020, at 10:05, Gong-Do Hwang
<grover.
Dear Lustre,
Months ago, when I tried to add a new disk to my new Lustre FS, I
accidentally targeted mkfs.lustre at a then-mounted OST partition of
another Lustre FS. Weirdly enough, the command went through, and without
paying attention to it, I unmounted the partition months later and couldn't
moun
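One read-only check worth trying in this situation (a sketch, not a command
from the thread; the device name is assumed): tunefs.lustre --dryrun prints
the target's current Lustre label and parameters without modifying anything,
which shows which filesystem the partition now claims to belong to.

    # --dryrun reads and prints the on-disk Lustre target data
    # (fsname, index, mount options) without writing anything.
    tunefs.lustre --dryrun /dev/mapper/mpathx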
It is usually best to use the newest e2fsprogs release, since it has the most
fixes. This is currently 1.42.7.wc2, though we are just in the process of
releasing 1.42.9.wc1.
That said, I would not run e2fsck on the failing device. That can cause extra
stress on the device and cause it to fail so
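A common way to avoid that stress (my sketch of the approach, with
hypothetical paths, not commands from the thread) is to image the ailing
device with GNU ddrescue and run e2fsck on the copy:

    # ddrescue tolerates bad sectors; the mapfile makes the copy resumable.
    ddrescue /dev/mapper/mpathx /backup/ost.img /backup/ost.map

    # Check the image instead of the failing disk.
    e2fsck -f /backup/ost.img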
Hi,
I have one of 10 OSTs with an underlying hardware failure (not catastrophic
yet, just flaky). Initially e2fsck resolved the errors, but the last pass came
up with "short read", and I am dealing with hardware issues on the ailing OST.
I have inactivated this OST, but the remaining data is not of much use without
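For reference, inactivating an OST is typically done from the MDS; a sketch
for recent Lustre versions, with the fsname and OST index as placeholders:

    # Stop the MDS from creating new objects on the ailing OST;
    # data on the other OSTs remains reachable.
    lctl set_param osp.myfs-OST0003-osc-MDT0000.active=0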
Hi,
I'm trying to figure out the best way to recover a failed OST. Basically,
we have 10 OSTs, each with DRBD + HA on top of RAID 6, so it's fairly
redundant and supposed to be solid.
Just to note, for the other post that asked about this configuration: it is
working OK, and the performanc
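If the OST's data turns out to be unrecoverable, one standard path (a sketch
with placeholder names, NIDs, and device; not taken from this thread) is to
format a replacement target that reuses the old index:

    # --replace reuses the old index so the MGS does not register
    # this as a brand-new OST.
    mkfs.lustre --ost --replace --index=3 --fsname=myfs \
        --mgsnode=10.10.0.10@o2ib /dev/mapper/mpathy

Files that had objects on the lost OST will then return I/O errors and need
to be restored from backup.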