Can it be done with LILO (version 21, as shipped with Slackware 7.0)? I had to
move /boot and the kernels to /dev/sdc6, a small leftover at the end of a
FAT16 drive, and LILO installs and boots beautifully that way. Root is
/dev/md0, sda1+sdb1 RAID-0 (working great and fast, thanks guys), but as long
as
What does this mean? Where is the overlapping?
Christophe (Please CC answers to me, I'm not subscribed to the list)
Dec 9 10:14:11 localhost kernel: md: serializing resync, md1 has overlapping physical units with md2!
Dec 9 10:14:11 localhost kernel: md: serializing resync, md0 has
What does this mean?
It would thrash the disk heads to keep moving from one partition to another
between each disk operation, so when re-syncing disks, the md driver ensures
that only one thread is using a particular disk (spindle) at a time.
Where is the overlapping?
Not really overlapping,
I just tried making the same change on my system to see if it would help
me, but the symptoms stayed the same. If a drive is attached to the IDE3 or
IDE4 channels, the system locks up during bootup. One difference: I am
using the BE-6 motherboard.
Best Regards,
Robert Laughlin
On Wed, 8 Dec 1999,
Thanks for writing back, Michael. Yes, it compiled cleanly, and I have a
short script that copies over the new kernel and runs lilo for me, so I
don't have to remember all the steps... :)
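A helper like the one mentioned can be sketched as a small shell function. All paths, the kernel image location, and the version label below are my assumptions, not details from the message:

```shell
#!/bin/sh
# Sketch of a "copy the new kernel and re-run lilo" helper.
# Paths and the version label are assumptions -- adjust for your setup.
install_kernel() {
    src=${1:-/usr/src/linux/arch/i386/boot/bzImage}   # freshly built image
    ver=${2:-custom}
    cp "$src" "/boot/vmlinuz-$ver"
    cp /usr/src/linux/System.map "/boot/System.map-$ver"
    /sbin/lilo    # rebuild the boot map so LILO can find the new kernel
}
# install_kernel     # run as root after a kernel build
```

Re-running /sbin/lilo is the step people most often forget, since LILO stores raw block addresses rather than reading the filesystem at boot time.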
I have written Andre a number of times providing him with details about my
problem, I have also added some
Interesting. I never had lockups during boot, only during heavy IDE load.
Just a stupid question: you did make sure that the change was cleanly
compiled in and installed and all? I assume you probably did, but I've
missed steps before when not really watching what I was doing and sat
Hi!
I set up a system with RAID-0 using 2.2.13-ac3 and raidtools 0.90.990824
(the current Debian potato package, compiled for slink).
My main goal in building this raid is stability and fault-tolerance. And
indeed, the Linux software RAID system looks very promising -- I just
found one problem.
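For reference, a two-disk RAID-0 /etc/raidtab for raidtools 0.90 looks roughly like this; the device names and chunk size are illustrative, not taken from the message:

```
# /etc/raidtab -- illustrative two-disk RAID-0 set for raidtools 0.90
raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            32        # KB per stripe chunk
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1
```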
Bruno Prior wrote on 3/12/99 9:39:
This is a little unorthodox,
but try the following
Thanks for the clear explanation. I gave it a try, but it didn't work. It may be
because I used the 'old' RAID code as compiled into the standard 2.2.13 kernel. I've been
very confused by the various RAID docs
I'm running RAID 1.
Here is a portion of my messages log file:
=
Dec 2 09:56:25 mislxsrv kernel: attempt to access beyond end of device
Dec 2 09:56:25 mislxsrv kernel: 08:05: rw=0, want=333199165, limit=6144831
Dec 2 09:56:25 mislxsrv kernel: raid1: Disk failure on sda5, disabling
At 11:18 AM 12/9/1999 -0500, you wrote:
==
Now what? Does this mean that the disk is bad? I have other RAID 1s
living on the same two disks and they have not slipped into degraded
mode. Should I just try to raidhotadd the device back into the RAID
and see if it corrects itself?
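If the underlying disk turns out to be healthy, the raidtools sequence for putting the failed member back is roughly the following. The device names are taken from the log above but the md device is a guess; verify against /proc/mdstat before running anything:

```shell
#!/bin/sh
# Sketch: re-insert a failed mirror half with raidtools 0.90.
# Wrapped in a function so nothing runs by accident; run as root.
readd_mirror() {
    md=${1:-/dev/md0}
    part=${2:-/dev/sda5}
    cat /proc/mdstat             # confirm which array is degraded
    raidhotremove "$md" "$part"  # drop the faulty member from the set
    raidhotadd "$md" "$part"     # re-insert it; a full resync starts
    cat /proc/mdstat             # watch the recovery progress
}
```

If the same partition fails again during the resync, that points at real media trouble rather than a transient error.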
Thanks
Hi,
I am in a bit of a pickle.
I am just setting up a system and have put in two identical disks as hda and hdc. fdisk
sees hda as having 255 heads and 63 sectors, but hdc as having 16 heads and 63 sectors!
Now this gives me a big problem, in that I bought two identical drives so that I could
[ Thursday, December 9, 1999 ] David Cooley wrote:
It could be a drive problem, but it could also be that, since data is spread
across the drives of an array, with only 2 drives removing one removed
half the data. Without at least 3 drives making up the array, it has
nothing to
Hi there,
I want to build an array of four IBM DJNA 15 GB hard disks on an Abit
PE6 with an ATA/66 controller. The array should be RAID-5; what do you know
about the performance? I mean in general, not only for this specific
configuration. Is the source stable and usable for production?
Tim Niemueller wrote:
hi everybody,
I have a problem with creating a new RAID-1 array.
Sorry, small mistake: this is the right version. The other one was from
before I partitioned hda...
-
Failure:
handling MD device /dev/md0
analyzing super-block
disk 0:
Tim Niemueller wrote:
I had _exactly_ the same problem. The only way I found around it was to
go into the BIOS and manually set both disks to NORMAL instead of LBA.
Attempting to set both to LBA didn't work. This is with RedHat 6.0 with
the 2.2.5-22 kernel. Perhaps this is fixed in a later kernel?
--
Stephen
hi everybody,
I have a problem with creating a new RAID-1 array.
Software:
SuSE 6.3 with Std.Kernel 2.2.13-SMP (non-SuSE)
Hardware:
Abit BP6 with 2x400MHz Celeron (SMP-Kernel)
2x 15 GB IBM DJNA (IDE)
hda:
Disk /dev/hda: 255 heads, 63 sectors, 1869 cylinders
Units = cylinders of 16065 * 512
Check your BIOS settings; see if one drive is in LBA mode, etc.
Just ask your BIOS to autodetect the drives.
I believe the BIOS has three ways of accessing drives; each way makes the hard drive
report different heads/sectors/etc.
David Robinson.
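A quick way to compare what the kernel currently reports for the two drives; the device names are assumed from the original message:

```shell
#!/bin/sh
# Print the geometry line fdisk reports for each drive, side by side.
compare_geometry() {
    for d in /dev/hda /dev/hdc; do
        echo "== $d =="
        fdisk -l "$d" 2>/dev/null | grep -i heads  # e.g. "255 heads, 63 sectors, ..."
    done
}
# compare_geometry    # needs root; output should match for identical disks
```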
Lyndon David wrote:
Hi,
I am in a bit of a
Remo Strotkamp wrote:
As far as I remember, for bonnie you should use sizes at least 4 times the
RAM you have. As you have 1 GB of RAM (methinks that is
cool, methinks I want this too), and the max size for bonnie is 2 GB,
you are in a certain dilemma here.
Have you reduced the
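One common workaround for that dilemma is to boot with less visible RAM, so that bonnie's 2 GB file cap is still several times the memory size. The mem= boot option and bonnie's -d/-s flags (scratch directory, file size in MB) below are my assumptions about the versions in use:

```shell
#!/bin/sh
# Sketch: benchmark a RAID mount with classic bonnie while keeping the
# test file >= 4x RAM. Boot with e.g. "linux mem=256M" at the LILO
# prompt first, then run something like:
run_bonnie() {
    dir=${1:-/mnt/raid}          # scratch directory on the array
    bonnie -d "$dir" -s 2000     # 2000 MB test file, just under the cap
}
```

Reducing visible RAM also keeps the page cache from absorbing most of the I/O, which is the whole point of oversizing the test file.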
Tim, I have been trying for months to get the ATA/66 channels to
work on an ABIT BE6 motherboard with no success. At the moment, despite
trying numerous kernel versions, numerous patch combinations and most
recently two different BIOS versions, my system still locks up on boot when
a drive is
I had exactly the same experience with the 2.2.13-ac3 kernel; only setting the disks to
NORMAL made fdisk see them as two identical disks.
Does this influence the speed of the disks in any way?
On Fri, 10 Dec 1999, Stephen Walton wrote:
I had _exactly_ the same problem. The only way I found around it