Hello,
Is the mailing list archive available anywhere?
Greetings,
Marian Porwol
On Wed, 8 Sep 1999, David van der Spoel wrote:
> Sep 6 13:10:01 yfs kernel: (skipping faulty sdh1 )
> Sep 6 13:10:01 yfs kernel: (skipping faulty sdc1 )
> i.e., a disk fails, recovery kicks in, and produces a hopefully correct
> array in degraded mode. Then when writing the superblock somet
Hi,
I think this bit is the crucial one (and identical to my problem):
Sep 6 13:10:01 yfs kernel: md: recovery thread got woken up ...
Sep 6 13:10:01 yfs kernel: md0: no spare disk to reconstruct array! --
continuing in degraded mode
Sep 6 13:10:01 yfs kernel: md: recovery thread finished ..
On Wed, 8 Sep 1999 [EMAIL PROTECTED] wrote:
> Sang-yong wrote:
> >Actually, the second disk never failed, but md misunderstood...
>
> Oops! I checked older logs and found that part of my second disk
> was broken four days before the major failure happened. There was a
> single block I/O error on it.
Sang-yong wrote:
>Actually, the second disk never failed, but md misunderstood...
Oops! I checked older logs and found that part of my second disk
was broken four days before the major failure happened. There was a
single block I/O error on it.
--
sysuh
Sep 2 10:06:22 yfs kernel: scsi : aborting c
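Checking older logs for early I/O errors, as sysuh did above, is easy to script. A minimal sketch, assuming a syslog-style file; the grep patterns are guesses based on the messages quoted in this thread, and exact kernel wording varies by driver and version:

```shell
# scan_log FILE: print recent disk-trouble lines from a syslog-style file.
# The patterns below are assumptions taken from the log excerpts in this
# thread; extend the list for your drivers and kernel version.
scan_log() {
    grep -iE "i/o error|aborting command|skipping faulty|non-fresh" "$1" | tail -n 20
}
```

Running something like `scan_log /var/log/messages` daily would have surfaced the single-block I/O error days before the array failed.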
Hi,
I'm using a tyan thunder X dual Xeon 450 with a built-on Adaptec
AIC-7896 chipset. It's dual channel, SMP, and totally stable... not a
single worry. I would HIGHLY recommend it.
Kenneth P. Persing
Voice: 7EC26321
Cell: 222D7BCFD
Fax: 24721FE18
On Tue, 7 Sep 1999, Chris Mauritz wrote:
On Tue, Sep 07, 1999 at 06:29:21PM +0200, [EMAIL PROTECTED] wrote:
>
> On Wed, 8 Sep 1999 [EMAIL PROTECTED] wrote:
>
> > I have exactly the same problem. My startup message was as follows:
> >
> > Sep 7 12:28:41 yfs kernel: md: kicking non-fresh sdc1 from array!
> > Sep 7 12:28:41 yfs ke
With the following configuration, any attempt to access /dev/md1 will
lock the process in D (disk sleep) state:
raiddev /dev/md0
raid-level 1
nr-raid-disks 2
nr-spare-disks 0
persistent-superblock 1
chunk-size 64
d
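For reference, a complete two-disk RAID-1 stanza, including the device lines the truncated excerpt above is missing, might look like this (the partition names /dev/sda1 and /dev/sdb1 are placeholders for your actual devices):

```
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              64
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
```

Each `device` line must be followed by its `raid-disk` index; a missing or misnumbered entry is one common cause of hangs when accessing the array.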
Hi Hubert!
I use a Symbios Logic U2W controller for my swraid. Upgraded from a
Symbios Logic U-SCSI controller, and what can I say: plugged it in, it worked.
They have only one channel and no LVD-to-HVD bridge, though.
Marc
--
Marc Mutz <[EMAIL PROTECTED]>  http://marc.mutz.com/
Univ
How do I get it to properly detect my RAID1 set on boot-up? I think the
problem relates to the fact that RAID1 is not compiled into the kernel,
I have it as a module. However I think it's possible with lilo to use a
ramdisk to load modules at boot time much like is done for SCSI.
However I don't
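The ramdisk approach mentioned above works through an initrd: build an initial ramdisk that contains the raid1 module and point lilo at it. A sketch of the lilo.conf side, assuming the kernel and image paths below (they are placeholders, not your actual file names):

```
# /etc/lilo.conf fragment -- kernel and initrd paths are placeholders
image=/boot/vmlinuz-2.2.12
    label=linux
    root=/dev/md0
    initrd=/boot/initrd-raid.img   # ramdisk containing the raid1 module
```

On Red Hat-derived systems something like `mkinitrd --with=raid1 /boot/initrd-raid.img 2.2.12` builds such an image (mkinitrd variants differ between distributions, so check yours); rerun lilo afterwards.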
I am currently using a Mylex ExtremeRAID controller with good results. If
you haven't been to this page, please check it out:
http://www.dandelion.com/Linux/DAC960.html
And if you haven't read this FAQ, do so also:
http://www.dandelion.com/Linux/README.DAC960
Here is an excerpt about
Hello everybody,
I am following "The Software-RAID HOWTO" to make a system with / on a RAID device.
I have a problem after unmounting the original (from the "install" disk) /boot.
When I mount the boot device on /mnt/newroot/boot:
#mount /mnt/newroot/boot /boot
mount: /mnt/newroot/boot is not a block device
cat
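For what it's worth, the failing command above has its arguments reversed: mount expects the block device first, then the mount point, which is exactly why it complains that /mnt/newroot/boot "is not a block device". A sketch of the intended sequence, where the device names are assumptions based on a typical HOWTO setup:

```shell
# Mount the RAID root first, then the original boot partition inside it.
mount /dev/md0 /mnt/newroot            # new root filesystem on the RAID device
mount /dev/sda1 /mnt/newroot/boot      # original /boot partition (assumed name)
```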
> From [EMAIL PROTECTED] Tue Sep 7 11:50:39 1999
>
> What is the most reliable LVD SCSI controller for Linux?
>
> (I use several Buslogic controllers, but as far as I know they don't
> have an LVD version, which is absolutely necessary for long SCSI chains,
> and my Buslogic controllers went in
OK, no matter what I do or with any combo of drives I get this.
I am running Mandrake 6.0 w/ kernel 2.2.12 (tried 2.2.9) with MD support and
raid1 compiled in.
Tried the Mandrake rpm raidtools and raidtools 19990824.
The /etc/raidtab is exactly like the HOWTOs and examples.
I am trying to
| good. I suppose that 1% is due to the filesystem data getting corrupted
| due to the double-disk failure.
just an idea for you guys if you're daring and have the right parts:
a few months ago, I had a double-disk failure on a RAID4 array (happened on power-up).
We ended up taking one of the f
On Wed, 8 Sep 1999 [EMAIL PROTECTED] wrote:
> I have exactly the same problem. My startup message was as follows:
>
> Sep 7 12:28:41 yfs kernel: md: kicking non-fresh sdc1 from array!
> Sep 7 12:28:41 yfs kernel: unbind
> Sep 7 12:28:41 yfs kernel: export_rdev(sdc1)
> Sep 7 12:28:4
What is the most reliable LVD SCSI controller for Linux?
(I use several Buslogic controllers, but as far as I know they don't
have an LVD version, which is absolutely necessary for long SCSI chains,
and my Buslogic controllers went into an infinite reset loop several times,
which raid cannot prote
On Tue, 7 Sep 1999, paul wrote:
> What do these messages mean? I get them in my /var/log/messages every once
> in a while (but not at boot).
>
>
> It appears to be saying that my partitions overlap one another but
> that isn't the case.
>
> >md: syncing RAID array md5
> >md: minimum _guara
It means the partitions exist on the same physical disk, so it would be
detrimental to attempt to synchronize them in parallel.
Kevin C.
>
> What do these messages mean? I get them in my /var/log/messages every once
> in a while (but not at boot).
>
>
> It appears to be saying that my part
Hello, mingo and Tso,
Please help. I had exactly the same problem, i.e., one of my RAID-5 array
disks crashed, and another one was marked bad. However, I was
able to recover from it. Unfortunately, it is not a perfect recovery:
e2fsck is not working. :-(
In message <[EMAIL PROTECTED]>
mingo
What do these messages mean? I get them in my /var/log/messages every once
in a while (but not at boot).
It appears to be saying that my partitions overlap one another but
that isn't the case.
>md: syncing RAID array md5
>md: minimum _guaranteed_ reconstruction speed: 100 KB/sec.
>md: usin
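The "minimum _guaranteed_ reconstruction speed" message refers to the md resync throttle: the driver reserves at least that much bandwidth per device for rebuilding. On later kernels this floor is tunable through proc; the paths below are from newer md drivers and are an assumption here, since patched 2.2 trees may name them differently or not expose them at all:

```shell
# Inspect and raise the md resync speed floor (KB/sec per device).
# Paths are from newer kernels; they may not exist on patched 2.2 trees.
cat /proc/sys/dev/raid/speed_limit_min
echo 1000 > /proc/sys/dev/raid/speed_limit_min   # requires root
```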
On 09/04/1999 10:11 +0200, [EMAIL PROTECTED] wrote:
>>
>> On Fri, 3 Sep 1999, Tim Walberg wrote:
>>
>> > I want to mirror the NT partition when I'm running Linux.
>> > Obviously, the mirror will be broken when I boot under NT,
>>
>> Eg. if you have sda and sdb
On Tue, 7 Sep 1999, David van der Spoel wrote:
> I run a 2.2.12 kernel + latest RAID (990824) + latest knfsd (1.4.7)
> The OS is on an IDE disk; I have an 8 x 9 GB SCSI disk RAID-5 array with
> 2 SCSI controllers, each having four disks (four internal, four external).
> After three days uptime (and rea
Hi,
I run a 2.2.12 kernel + latest RAID (990824) + latest knfsd (1.4.7)
The OS is on an IDE disk; I have an 8 x 9 GB SCSI disk RAID-5 array with
2 SCSI controllers, each having four disks (four internal, four external).
After three days' uptime (and reading 50 GB of data from tapes) the following
crash occu
>> Original Message <<
On 9/6/99, 10:25:34 PM, <[EMAIL PROTECTED]> wrote regarding Re: Raid
Problems:
> install the old raidtools (0.5)
> or use the newest (0.9) with the 0.9 kernel patches
Seconded.
Raid tools in Mandrake 6.0 are broken.
> On Mon, 6
Hi all,
OK, I know the advantages of software RAID. But wouldn't it be a viable
option to use two cheap IDE drives with a hardware RAID controller for the
OS and put the data on a fast software RAID-5 array?
I've read about an IDE RAID 1 Controller (Araid99-300, www.top101usa.com)
that is working under