I had two disks taken off from the three-disk raid set (hda,hdb,hdc - hdc
failed by itself, hdb was marked bad with raidsetfaulty). When the new disks
were added back to the set (with raidhotadd), only two of them became active
and one remained as a hot spare.
Is there a way to reconfigure all three disks as active?
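One commonly suggested approach for this situation (a sketch only; /dev/md0 and the device names are assumptions, and growing the active-device count works for RAID-1 in kernels of this era) is to tell md the array should have three active members, so the hot spare gets promoted and resynced:

```shell
# Sketch; /dev/md0 is a placeholder for the actual array.
# First check which disks are active and which is the spare.
mdadm --detail /dev/md0

# Ask for three active raid devices; the hot spare should then
# be pulled in and a resync started.
mdadm --grow /dev/md0 --raid-devices=3

# Watch the resync progress.
cat /proc/mdstat
```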
>> >I had an mdadm device running fine, and had created my own scripts for
>> >shutting it down and such. I upgraded my distro, and all of a sudden it
>> >decided to start initializing md devices on its own, including one
>> >that I want removed.
>>
>They were indeed set to raid autodetect.
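The kernel's in-kernel autodetection only assembles partitions whose type is 0xfd ("Linux raid autodetect"), so changing the partition type stops an array from being started at boot. A sketch, assuming the members are partitions such as /dev/sda1:

```shell
# Sketch; device name is an assumption. Change the partition type
# from fd (Linux raid autodetect) to 83 (plain Linux) so the kernel
# no longer auto-assembles this member at boot.
fdisk /dev/sda
# inside fdisk: t -> select the partition -> enter 83 -> w to write
```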
Hi all,
Some time ago, I wanted to set up a software RAID-1 between hdc2 and
hdg2. However, not being familiar with mdadm and software RAID, I ran a
couple of bad commands. I don't remember the exact commands, but it's in
the order of setting /dev/hdc, /dev/hdc2, /dev/hdg and /dev/hdg2 in
th
Hello Neil,
On Tue, 15 Nov 2005, NeilBrown wrote:
Following are two patches for md in 2.6.14-mm2 that are suitable to go
into the 2.6.15-rc series.
The first adds a date to the deprecation of START_ARRAY ioctl.
The second fixes a recently introduced problem that causes md threads
to p
Why do I need to run mdadm -A -ap /dev/md_d0 /dev/sda /dev/sdb to start
my array every time I boot? This seems like I will never be able to boot
from this array if I have to run this command.
What do I have to do to autostart this array on boot?
Thanks,
Spencer
On Tue, Nov 15, 2005 at 09:50:59AM -0700, Spencer Tuttle wrote:
> What do I have to do to autostart this array on boot?
Read Documentation/md.txt in your kernel source. It has lots of cool
options for setting up kernel assembly at boot.
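For distributions that assemble arrays from an init script rather than via in-kernel autodetect, the usual persistent configuration lives in /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on Debian). A sketch, reusing the device names from Spencer's command:

```shell
# Generate ARRAY lines for the running arrays and append them to
# the config file (path varies by distribution).
mdadm --detail --scan >> /etc/mdadm.conf

# The file then looks roughly like:
#   DEVICE /dev/sda /dev/sdb
#   ARRAY /dev/md_d0 UUID=...   (UUID taken from the real array)
#
# With that in place, the boot scripts only need:
mdadm --assemble --scan
```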
If you have a reasonably recent array with persistent superblocks
>Some time ago, I wanted to set up a software RAID-1 between hdc2 and
>hdg2. However, not being familiar with mdadm and software RAID, I ran a
>couple of bad commands. I don't remember the exact commands, but it's in
> the order of setting /dev/hdc, /dev/hdc2, /dev/hdg and /dev/hdg2 in
>the sa
all,
I have a 13-disk RAID-5 set with 4 disks marked as "clean" and the rest marked
as dirty. When I run the following command
to start the raid set (md0) I get an error. Any ideas on how to recover?
This is a debian sarge system running kernel 2.6.8-1-686-smp and mdadm version
v1.4.0 - 29 Oct
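The usual recovery for an array whose members disagree about being clean is a forced assembly, which reconciles the event counts. A hedged sketch with placeholder device names:

```shell
# Sketch; /dev/md0 and the member list are placeholders.
# --force assembles despite most members being marked dirty.
mdadm --assemble --force /dev/md0 /dev/sd[a-m]1

# Verify the result before putting any load on the array.
cat /proc/mdstat
mdadm --detail /dev/md0
```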
On Tuesday November 15, [EMAIL PROTECTED] wrote:
> I had two disks taken off from the three-disk raid set (hda,hdb,hdc - hdc
> failed by itself, hdb was marked bad with raidsetfaulty). When the new disks
> were added back to the set (with raidhotadd), only two of them became active
> and one remain
On Tuesday November 15, [EMAIL PROTECTED] wrote:
>
> I'm wondering what would be the easiest way to correct this. If
> possible, I'd prefer not having to start from scratch.
>
It's not entirely clear to me what is happening. In particular, why
md is trying to bind '/disc,*' to an array. Ma
Ross Vandegrift wrote:
> On Tue, Nov 15, 2005 at 09:50:59AM -0700, Spencer Tuttle wrote:
> > What do I have to do to autostart this array on boot?
> Read Documentation/md.txt in your kernel source. It has lots of cool
> options for setting up kernel assembly at boot.
> If you have a reasonably recent arr
On Tuesday November 15, [EMAIL PROTECTED] wrote:
> Hi,
>
> Yesterday I installed the new 2.6.15-rc1 kernel to test ATA passthrough to
> get smartctl working on my SATA disks. After boot I noticed a rather
> high load of ~5. I checked with top and ps, but no processes were running
> taking up CPU, an
On Tuesday November 15, [EMAIL PROTECTED] wrote:
> all,
>
> I have a 13-disk RAID-5 set with 4 disks marked as "clean" and the
> rest marked as dirty.
An important question to answer is: how did this happen?
> When I do the following command
> to start the raid set (md0) I get an error. Any id
I believe there were data access errors on the console (scrolling too fast to
read). I will try the force and see what
happens.
-- Original Message ---
From: Neil Brown <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wed, 16 Nov 2005 10:16:22 +1
Thanks for your help Andrew. I'm not sure if I did something wrong, but
I'm having some problems...
On 2005-11-15, at 13:44, Andrew Burgess wrote:
> Look at:
> mdadm -E /dev/hdc
> If it has a superblock, zero it with 'mdadm --zero-superblock /dev/hdc'
> Same for hdg
I did this, rebooted and the s
On Wed, 2005-11-16 at 10:16 +1100, Neil Brown wrote:
> On Tuesday November 15, [EMAIL PROTECTED] wrote:
> > all,
> >
> > I have a 13-disk RAID-5 set with 4 disks marked as "clean" and the
> > rest marked as dirty.
>
> An important question to answer is: how did this happen?
>
> > When I do the
Neil,
Thanks for the reply; the --force worked great and md0 is syncing now. I will run
testing against my database once the
sync completes in 400 minutes.
Jim
-- Original Message ---
From: Neil Brown <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent
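A resync like this can be watched as it runs, rather than waiting out the full 400 minutes; /proc/mdstat reports percent complete, speed, and estimated time remaining:

```shell
# Poll the resync status once a minute.
watch -n 60 cat /proc/mdstat
```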
The disks contain a ~2TB PostgreSQL database, so I was unable to copy to another
system. The force worked as suggested
by Neil.
Thanks for the reply
Jim
-- Original Message ---
From: Dan Stromberg <[EMAIL PROTECTED]>
To: Neil Brown <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED], li
On Wed, Nov 16, 2005 at 10:01:10AM +1100, you [Neil Brown] wrote:
> On Tuesday November 15, [EMAIL PROTECTED] wrote:
> > I had two disks taken off from the three-disk raid set (hda,hdb,hdc - hdc
> > failed by itself, hdb was marked bad with raidsetfaulty). When the new disks
> > were added back to