On Thu, Apr 12, 2007 at 02:57:57PM +0200, Brice Figureau wrote:
Now, I don't know why all the UUIDs are equal (my other machines are not
affected).
I think at some point, either in sarge or in testing between sarge and
etch, a version of mdadm was included which had this bug (all arrays had
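For anyone hitting the same duplicate-UUID bug: mdadm can write a fresh
UUID at assembly time. A minimal sketch, assuming the affected array is
/dev/md1 with members /dev/sda2 and /dev/sdb2 (substitute your own devices):

    mdadm --stop /dev/md1
    # --update=uuid chooses a random new UUID when no --uuid= is supplied
    mdadm --assemble /dev/md1 --update=uuid /dev/sda2 /dev/sdb2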
Neil Brown wrote:
On Thursday March 1, [EMAIL PROTECTED] wrote:
You can only grow a RAID5 array in Linux as of 2.6.20 AFAIK.
There are two dimensions for growth.
You can increase the amount of each device that is used, or you can
increase the number of devices.
You are correct that
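Those two dimensions map onto two --grow invocations. A sketch, assuming
/dev/md0 is a 4-disk RAID-5 and /dev/sde1 is a new member (names are
placeholders, not from this thread):

    # dimension 1: use more of each existing member device
    mdadm --grow /dev/md0 --size=max
    # dimension 2: add a device, then reshape across it
    # (per the post above, RAID-5 reshape wants 2.6.20 or later;
    # depending on version/layout a --backup-file may also be required)
    mdadm --add /dev/md0 /dev/sde1
    mdadm --grow /dev/md0 --raid-devices=5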
On Fri, Apr 13, 2007 at 10:15:05AM +0200, Laurent CARON wrote:
Neil Brown wrote:
On Thursday March 1, [EMAIL PROTECTED] wrote:
You can only grow a RAID5 array in Linux as of 2.6.20 AFAIK.
There are two dimensions for growth.
You can increase the amount of each device that is used,
I managed to mess up a RAID-5 array by mdadm --add-ing a few failed disks
back, trying to get the array running again. Unfortunately, --add didn't do
what I expected, but instead made spares out of the failed disks. The
disks failed due to loose SATA cabling and the data inside should be
fairly
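For cable-induced failures like this, the usual advice is a forced
assemble rather than --add, since --add on a failed-but-populated disk
turns it into a spare. A sketch, with device names as placeholders:

    mdadm --stop /dev/md0
    # --force assembles despite mismatched event counts, using the
    # freshest superblocks; check the filesystem read-only first
    mdadm --assemble --force /dev/md0 /dev/sd[a-d]1
    fsck -n /dev/md0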
Dear All,
I have an 8-drive RAID-5 array running under 2.6.11. This morning it
bombed out, and when I brought it up again, two drives had incorrect
event counts:
sda1: 0.8258715
sdb1: 0.8258715
sdc1: 0.8258715
sdd1: 0.8258715
sde1: 0.8258715
sdf1: 0.8258715
sdg1: 0.8258708
sdh1: 0.8258716
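The per-device event counts above can be read straight from the
superblocks, which makes it easy to see that sdg1 is stale and sdh1 is
one event ahead of the rest. A sketch:

    # print each member and its Events line from the 0.90 superblock
    mdadm --examine /dev/sd[a-h]1 | grep -E '^/dev|Events'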
Lasse Kärkkäinen wrote:
I managed to mess up a RAID-5 array by mdadm --add-ing a few failed disks
back, trying to get the array running again. Unfortunately, --add didn't
do what I expected, but instead made spares out of the failed disks. The
disks failed due to loose SATA cabling and the data
On Friday April 13, [EMAIL PROTECTED] wrote:
Dear All,
I have an 8-drive RAID-5 array running under 2.6.11. This morning it
bombed out, and when I brought it up again, two drives had incorrect
event counts:
sda1: 0.8258715
sdb1: 0.8258715
sdc1: 0.8258715
sdd1: 0.8258715
sde1:
On Friday April 13, [EMAIL PROTECTED] wrote:
Lasse Kärkkäinen wrote:
disk 0, o:1, dev:sdc1
disk 1, o:1, dev:sde1
disk 2, o:1, dev:sdg1
disk 3, o:1, dev:sdi1
disk 4, o:1, dev:sdh1
disk 5, o:1, dev:sdf1
disk 6, o:1, dev:sdd1
I gather that I need a way to alter the superblocks
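One way that has been used to rewrite the superblocks wholesale (a
sketch, not from this thread, and destructive if any parameter is wrong):
recreate the array over the same devices in exactly the order printed
above, with --assume-clean so no resync touches the data. The chunk size
here is an assumption; use the value from your old mdadm --examine output.

    mdadm --create /dev/md0 --level=5 --raid-devices=7 --chunk=64 \
          --assume-clean /dev/sdc1 /dev/sde1 /dev/sdg1 /dev/sdi1 \
          /dev/sdh1 /dev/sdf1 /dev/sdd1
    fsck -n /dev/md0    # verify read-only before mounting anything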
Hi,
Thanks for the answer,
On Fri, 2007-04-13 at 09:32 +0200, Iustin Pop wrote:
On Thu, Apr 12, 2007 at 02:57:57PM +0200, Brice Figureau wrote:
Now, I don't know why all the UUIDs are equal (my other machines are not
affected).
I think at some point either in sarge or in testing between
John == John Stoffel [EMAIL PROTECTED] writes:
This is an update email: my system is now up and running properly,
though with some caveats.
John I've just installed a new SATA controller and a pair of 320Gb
John disks into my system. Went great. I'm running 2.6.21-rc6, with
John the ATA
Neil Brown wrote:
On Friday April 13, [EMAIL PROTECTED] wrote:
Dear All,
I have an 8-drive RAID-5 array running under 2.6.11. This morning it
bombed out, and when I brought it up again, two drives had incorrect
event counts:
sda1: 0.8258715
sdb1: 0.8258715
sdc1: 0.8258715
sdd1:
Bill == Bill Davidsen [EMAIL PROTECTED] writes:
Is there any way I can interrupt the command I used:
mdadm --grow /dev/md0 --size=#
when I now know I should have used the --size=max parameter instead,
but it wasn't in the man page or the online help. Oh well...
I tried removing
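For the record, the variant the poster wanted (a sketch, assuming the
array is /dev/md0):

    # use all available space on every member instead of a fixed size
    mdadm --grow /dev/md0 --size=max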
Neil Brown wrote:
I'll see what I can do :-)
The problem could be resolved by removing one of the two external SATA
controllers (a PCI card with an ALI M5283) and using kernel 2.6.20.6.
Only removing the ALI PCI card brought the numbering scheme back in line,
so the old (degraded) array became
Ok--I got moved into my new place and am back and running on the 'net.
I sat down for a few hours and attempted to write a script to try all
possible combinations of drives...but I have to admit that I'm lost.
I have 8 drives in the array--and I can output every possible
combination of those.
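A brute-force sketch of that idea in shell, with try_order and the device
list as placeholders to adapt; every attempt rewrites superblocks, so save
mdadm --examine output for all drives before starting. Since order
matters, there are 8! = 40320 orderings to test, not combinations:

    #!/bin/bash
    # Hypothetical brute force over RAID-5 member orderings.
    DEVICES=(/dev/sd[a-h]1)    # placeholder member list

    try_order() {
        mdadm --stop /dev/md0 2>/dev/null
        # --assume-clean skips the resync; chunk size is a guess to adapt
        mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=64 \
              --assume-clean "$@" &&
            fsck -n /dev/md0 >/dev/null 2>&1 &&
            echo "candidate order: $*"
    }

    # perm PREFIX... -- REST...: recurse until REST is empty, then test
    perm() {
        local -a prefix=() rest=()
        while [ "$1" != "--" ]; do prefix+=("$1"); shift; done
        shift; rest=("$@")
        if [ ${#rest[@]} -eq 0 ]; then
            try_order "${prefix[@]}"
            return
        fi
        local i
        for ((i = 0; i < ${#rest[@]}; i++)); do
            perm "${prefix[@]}" "${rest[$i]}" -- \
                 "${rest[@]:0:i}" "${rest[@]:i+1}"
        done
    }

    perm -- "${DEVICES[@]}"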