Try to assemble the array instead. --run tries to start a partially built
array, but --stop deactivated the array and
released all its resources, so --run will NOT work; use --assemble (-A) instead.
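A minimal sketch of the difference; the device names (/dev/md0, /dev/sd[b-e]1) are assumptions, substitute your own members:

```shell
# After "mdadm --stop /dev/md0" the array no longer exists in the kernel,
# so --run has nothing to start. Re-assemble it from its member disks:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Or let mdadm find the members via /etc/mdadm.conf or superblock scan:
mdadm --assemble --scan
```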
Good luck
Jim
-- Original Message ---
From: "Ian Brown" <[EMAIL PROTECTED]>
To: "G
Use LVM; that way you can resize the volumes.
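A rough sketch of the resize workflow; the volume group and LV names (vg0, data) and sizes are made up for illustration:

```shell
# Create a logical volume on an existing volume group and put a
# filesystem on it:
lvcreate -L 500G -n data vg0
mkfs.ext3 /dev/vg0/data

# Later, when more space is needed, grow the LV and then the filesystem
# -- no repartitioning of the underlying disks required:
lvextend -L +100G /dev/vg0/data
resize2fs /dev/vg0/data
```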
-- Original Message ---
From: Chris Allen <[EMAIL PROTECTED]>
To: linux-raid@vger.kernel.org
Sent: Sun, 25 Jun 2006 23:37:01 +0100
Subject: Multiple raids on one machine?
> Back to my 12 terabyte fileserver, I have decided to split the
The disks contain a ~2TB PostgreSQL database, so I was unable to copy to another
system. The --force worked as suggested
by Neil.
Thanks for the reply
Jim
-- Original Message ---
From: Dan Stromberg <[EMAIL PROTECTED]>
To: Neil Brown <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED], li
Neil,
Thanks for the reply. The --force worked great; md0 is syncing now. I will run
testing against my database once the
sync completes, in about 400 minutes.
Jim
-- Original Message ---
From: Neil Brown <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent
I believe there were data access errors on the console (scrolling too fast to
read). I will try the --force and see what
happens.
-- Original Message ---
From: Neil Brown <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wed, 16 Nov 2005 10:16:22 +1
All,
I have a 13-disk raid 5 set with 4 disks marked as "clean" and the rest marked
as "dirty". When I run the following command
to start the raid set (md0) I get an error. Any ideas on how to recover?
This is a debian sarge system running kernel 2.6.8-1-686-smp and mdadm version
v1.4.0 - 29 Oct
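A hedged sketch of the forced assemble that the follow-ups in this thread report worked; the exact member list is an assumption (13 SATA disks on one controller):

```shell
# With only 4 of 13 members marked "clean", mdadm refuses a normal
# assemble. --force tells it to trust the most recent superblocks and
# bring the array up anyway; a resync follows, after which the data
# should be checked:
mdadm --assemble --force /dev/md0 /dev/sd[b-n]1
```

Forcing assembly risks using a stale disk, so it is a last resort when the array cannot be rebuilt from a copy.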
All,
I have a 13-disk (250G each) software raid 5 set using one 16-port Adaptec SATA
controller.
I am very happy with the performance. The reason I went with the 13-disk raid 5
set was the space, NOT performance.
I have a single postgresql database that is over 2 TB with about 500 GB free