Date: Wed, 8 Sep 1999 18:11:46 +0900
From: Sang-yong Suh <[EMAIL PROTECTED]>
By the way, I need help with one more thing. The /dev/md0 partition has a lot
of entries in its lost+found directory even after "rm -f"; it has
9484 entries now. When I try rm, it says "Operation not permitted".
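One likely cause of "Operation not permitted" from rm, even as root, is that the recovered inodes in lost+found carry the ext2 immutable or append-only attribute. A minimal sketch, assuming the e2fsprogs lsattr/chattr tools; the /mnt/md0 mount point and the #-prefixed file names are assumptions for illustration, not from the thread:

```shell
# Hypothetical cleanup sketch: rm fails with "Operation not permitted"
# when the ext2 immutable (i) or append-only (a) attribute is set.
cd /mnt/md0/lost+found

# Inspect the attributes; an 'i' in the flags column marks immutable files.
lsattr . | head

# Clear the immutable/append-only attributes, then retry the removal.
# (./#* rather than #* so the shell does not treat '#' as a comment.)
chattr -ia ./#*
rm -f ./#*
```

If lsattr shows an 'i' flag on those entries, clearing it with chattr should let rm proceed; chattr needs root (CAP_LINUX_IMMUTABLE) on the mounted filesystem.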
mingo,
Thank you very much for your help.
I have now recovered almost all of the data. There was no need to recreate
the RAID array or the ext2 filesystem.
The e2fsck program is good. It recovered the filesystem
successfully. I answered "no" to its question:
Clone duplicate/bad blocks? No
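For reference, a check along those lines might look like the following sketch; the device name is from the thread, and the filesystem must be unmounted (or mounted read-only) first:

```shell
# Hedged sketch of the repair sequence described above.
# First a read-only pass, to see what e2fsck would change without touching anything:
e2fsck -n /dev/md0

# Then a forced interactive check. Answering "no" to
# "Clone duplicate/bad blocks?" leaves the multiply-claimed blocks alone
# instead of duplicating possibly-garbage data into each claiming inode.
e2fsck -f /dev/md0
```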
Ingo wrote:
> I will make the
> double-disk failure more graceful, it makes no sense to play hardball at
> that time anymore. We might be strict wrt. failures happening on a
> redundant array, but if it's in degraded mode we should either shut the
> array down immediately (as suggested before), or
On Wed, 8 Sep 1999, David van der Spoel wrote:
> Sep 6 13:10:01 yfs kernel: (skipping faulty sdh1 )
> Sep 6 13:10:01 yfs kernel: (skipping faulty sdc1 )
> i.e., a disk fails, recovery kicks in, and produces a hopefully correct
> array in degraded mode. Then when writing the superblock somet
Hi,
I think this bit is the crucial one (and identical to my problem):
Sep 6 13:10:01 yfs kernel: md: recovery thread got woken up ...
Sep 6 13:10:01 yfs kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode
Sep 6 13:10:01 yfs kernel: md: recovery thread finished ..
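The degraded state those messages describe can also be confirmed from /proc/mdstat; a sketch (the status line below is illustrative, not taken from the thread):

```shell
# In degraded mode, one slot in the [U...] map shows '_' instead of 'U',
# and the [n/m] counter shows fewer working members than configured.
cat /proc/mdstat

# Illustrative degraded RAID5 output (made-up device names and sizes):
#   md0 : active raid5 sdb1[0] sdd1[2]
#         17767296 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
```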
On Wed, 8 Sep 1999 [EMAIL PROTECTED] wrote:
> Sang-yong wrote:
> >Actually, the second disk never failed; md misunderstood...
>
> Oops! I checked older logs and found that part of my second disk
> broke four days before the major failure happened. There was a
> single block I/O error on it.
Sang-yong wrote:
>Actually, the second disk never failed; md misunderstood...
Oops! I checked older logs and found that part of my second disk
broke four days before the major failure happened. There was a
single block I/O error on it.
--
sysuh
Sep 2 10:06:22 yfs kernel: scsi : aborting c
On Tue, Sep 07, 1999 at 06:29:21PM +0200, [EMAIL PROTECTED] wrote:
>
> On Wed, 8 Sep 1999 [EMAIL PROTECTED] wrote:
>
> > I have exactly the same problem. My startup message was as follows:
> >
> > Sep 7 12:28:41 yfs kernel: md: kicking non-fresh sdc1 from array!
> > Sep 7 12:28:41 yfs ke
| good. I suppose that 1% is due to the filesystem data getting corrupted
| due to the double-disk failure.
Just an idea for you guys, if you're daring and have the right parts:
a few months ago, I had a double-disk failure on a RAID4 array (it happened on power-up).
We ended up taking one of the f
On Wed, 8 Sep 1999 [EMAIL PROTECTED] wrote:
> I have exactly the same problem. My startup message was as follows:
>
> Sep 7 12:28:41 yfs kernel: md: kicking non-fresh sdc1 from array!
> Sep 7 12:28:41 yfs kernel: unbind
> Sep 7 12:28:41 yfs kernel: export_rdev(sdc1)
> Sep 7 12:28:4