Re: performance limitations of linux raid

2000-05-05 Thread Christopher E. Brown

On Fri, 5 May 2000, Michael Robinton wrote:

> > > > 
> > > > Not entirely, there is a fair bit more CPU overhead running an
> > > > IDE bus than a proper SCSI one.
> > > 
> > > A "fair" bit on a 500 MHz+ processor is really negligible.
> > 
> > 
> > Ahem, a fair bit on a 500 MHz CPU is ~30%.  I have watched a
> > *single* UDMA66 drive (with read-ahead, multiblock I/O, 32-bit mode, and
> > DMA transfers enabled) on a 2.2.14 + IDE + RAID patched kernel take over 30%
> > of the CPU during disk activity.  The same system with a 4 x 28G RAID0
> > set running would be < 0.1% idle during large copies.  An identically
> > configured system with UltraWide SCSI instead of IDE sits ~95% idle
> > during the same ops.
> 
> Try turning on DMA

Ahem, try re-reading the above, line 3 first word!


---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





RE: IDE Controllers

2000-05-05 Thread Gregory Leblanc

> -Original Message-
> From: Andre Hedrick [mailto:[EMAIL PROTECTED]]
> Sent: Friday, May 05, 2000 7:59 PM
> To: Gary E. Miller
> Cc: Linux Kernel; Linux RAID
> Subject: Re: IDE Controllers
> 
> What you do not know is that there will be a drive in the future that
> will have a native SCSI overlay and front end.  This will 
> have a SCB->ATA
> converter/emulation.  This will require setup and booting as a SCSI
> device.  FUN, heh??

Bleah, why?  I haven't figured out why there are all those layers hiding
IDE behind SCSI yet.  The history of Linux seems to point towards the IDE
support being better than the SCSI support, and yet the CD-R/W devices
work through the SCSI interface, and it looks like now the disks will too.
Obviously, I don't keep up on all of the kernel developments (I've still
got a full-time job to keep track of), but I'm still interested.
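The sort of thing I mean, in case anyone hasn't run into it; hdc is just
an example device:

  # lilo.conf: hand the ATAPI burner on hdc to the ide-scsi emulation layer
  append="hdc=ide-scsi"
  # after rebooting, the burner shows up as a SCSI device:
  cdrecord -scanbus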
Greg




Re: IDE Controllers

2000-05-05 Thread Andre Hedrick


What you do not know is that there will be a drive in the future that
will have a native SCSI overlay and front end.  This will have a SCB->ATA
converter/emulation.  This will require setup and booting as a SCSI
device.  FUN, heh??


Andre Hedrick
The Linux ATA/IDE guy




trouble with lilo on /dev/hdc

2000-05-05 Thread Jason Lin

Hi there:
My raid1 is running fine on /dev/hda(boot disk) and
/dev/hdc.

(Modified /etc/lilo.conf to add one section for initrd.raid1.img.)
Two commands issued:
 mkinitrd /boot/initrd.raid1.img --with raid1 2.2.12-20
 lilo -v        # no warning

Then, power down, remove /dev/hdc, power up.
/dev/hda is able to boot up. /proc/mdstat indicates
it's running on partitions from /dev/hda only.
Everything works as expected.
(Partitions on /dev/hda and /dev/hdc are symmetrical)


But if raid1 is running and /dev/hdc is the boot disk, with the same
steps repeated, I can't boot up.  I see "01 01 01 ..." scrolling
continuously on the screen.

"Same steps" means the following:
Modified /etc/lilo.conf from the previous operation (the same lilo.conf
used for /dev/hda, except "hda" replaced by "hdc").
Two commands issued:
 mkinitrd /boot/initrd.raid1.img --with raid1 2.2.12-20
 lilo -v        # warning: /dev/hdc is not the first disk

Then, power down, remove /dev/hda,  power up.
Can't boot up!!

What am I missing here?
Is "lilo -v" the problelm?
Any help is appreciated.

J.





-
Info with /dev/hda  being boot disk.

/etc]$ cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hda7[0] 513984 blocks [2/1] [U_]
md1 : active raid1 hda8[0] 513984 blocks [2/1] [U_]
unused devices: <none>
/etc]$  

cat /etc/lilo.conf
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
default=linux_raid1

image=/boot/vmlinuz-2.2.12-20
label=linux
initrd=/boot/initrd-2.2.12-20.img
read-only
root=/dev/hda11

image=/boot/vmlinuz-2.2.12-20
label=linux_raid1
initrd=/boot/initrd.raid1.img
read-only
root=/dev/hda11   



cat /etc/raidtab
# Config file for raid-1 device.
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              4
    persistent-superblock   1

    device                  /dev/hdc7
    raid-disk               0

    device                  /dev/hda7
    raid-disk               1


raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              4
    persistent-superblock   1

    device                  /dev/hdc8
    raid-disk               0

    device                  /dev/hda8
    raid-disk               1


-

-
Info with /dev/hdc as boot disk

cat /etc/lilo.conf
boot=/dev/hdc
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
default=linux_raid1

image=/boot/vmlinuz-2.2.12-20
label=linux
initrd=/boot/initrd-2.2.12-20.img
read-only
root=/dev/hdc11

image=/boot/vmlinuz-2.2.12-20
label=linux_raid1
initrd=/boot/initrd.raid1.img
read-only
root=/dev/hdc11   
-




Re: How to remove a disk from Raidset which has not yet failed?

2000-05-05 Thread Jason Lin

I have Red Hat 6.1, but raidsetfaulty doesn't seem to work for me.  Am I
missing something?  My /sbin/raidsetfaulty is linked to /sbin/raidstart.

/home/mcajalin/ftp_hdc/raid1]# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md1 : active raid1 hdc8[1] hda8[0] 513984 blocks [2/2]
[UU]
md0 : active raid1 hdc7[1] hda7[0] 513984 blocks [2/2]
[UU]
unused devices: <none>

/home/mcajalin/ftp_hdc/raid1]# ls -l
/sbin/raidsetfaulty
lrwxr--r--   1 root root   15 May  5 10:36
/sbin/raidsetfaulty -> /sbin/raidstart

/home/mcajalin/ftp_hdc/raid1]#  /sbin/raidsetfaulty
/dev/md0  /dev/hda7
Unknown command /sbin/raidsetfaulty
usage: raidsetfaulty [--all] [--configfile] [--help]
[--version] [-achv] *
/home/mcajalin/ftp_hdc/raid1]#
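The only other thing I can think to check (just guessing here) is whether
the binary behind the symlink is recent enough to even recognize the
raidsetfaulty name, since the raidtools names all seem to point at one
shared program:

  # which raidtools release is the shared binary from?
  /sbin/raidstart --version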





--- Neil Brown <[EMAIL PROTECTED]> wrote:
> raidsetfaulty /dev/mdx /dev/sdy1

> NeilBrown







Re: IDE Controllers

2000-05-05 Thread Gary E. Miller

Yo Andre!

I know that; I am using Mingo's RAID-1 patch.  Sort of figured that
should be obvious on the linux-raid group...

RGDS
GARY

On Fri, 5 May 2000, Andre Hedrick wrote:

> RUDE surprise for you
> 
> Hardware RAID 1 under Promise is not hardware!
> 
> Details... drives and host BIOS rev (Promise)?
> 
> On Fri, 5 May 2000, Gary E. Miller wrote:
> 
> > Yo Andre!
> > 
> > 2.2.14 did not work for me.  I have a dual PIII with onboard UDMA33
> > controller running RAID1.  Very stable.  When I just moved the
> > two drives to a Promise Ultra66 the system became very unstable
> > (uptimes in minutes). YMMV.

---
Gary E. Miller Rellim 20340 Empire Ave, Suite E-3, Bend, OR 97701
[EMAIL PROTECTED]  Tel:+1(541)382-8588 Fax: +1(541)382-8676





Re: IDE Controllers

2000-05-05 Thread Andre Hedrick


RUDE surprise for you

Hardware RAID 1 under Promise is not hardware!

Details... drives and host BIOS rev (Promise)?

On Fri, 5 May 2000, Gary E. Miller wrote:

> Yo Andre!
> 
> 2.2.14 did not work for me.  I have a dual PIII with onboard UDMA33
> controller running RAID1.  Very stable.  When I just moved the
> two drives to a Promise Ultra66 the system became very unstable
> (uptimes in minutes). YMMV.
> 
> RGDS
> GARY

Andre Hedrick
The Linux ATA/IDE guy




Re: IDE Controllers

2000-05-05 Thread Gary E. Miller

Yo Andre!

2.2.14 did not work for me.  I have a dual PIII with onboard UDMA33
controller running RAID1.  Very stable.  When I just moved the
two drives to a Promise Ultra66 the system became very unstable
(uptimes in minutes). YMMV.

RGDS
GARY

On Fri, 5 May 2000, Andre Hedrick wrote:

> http://www.linux-ide.org/
> 
> Yes PDC20246/PDC20262 are SMP safe.
> 
> On Fri, 5 May 2000, Edward Muller wrote:
> 
> > I was wondering which add in PCI IDE controllers are good to use and SMP safe
> > with a 2.2.14 or 2.2.15 kernel. I did some looking for Ultra66 controllers and
> > the only thing I could find that was supported was the Ultra66 from
> > Promise.

---
Gary E. Miller Rellim 20340 Empire Ave, Suite E-3, Bend, OR 97701
[EMAIL PROTECTED]  Tel:+1(541)382-8588 Fax: +1(541)382-8676




Re: performance limitations of linux raid

2000-05-05 Thread Mel Walters



"Christopher E. Brown" wrote:

> On Thu, 4 May 2000, Michael Robinton wrote:
> > >
> > > Not entirely, there is a fair bit more CPU overhead running an
> > > IDE bus than a proper SCSI one.
> >
> > A "fair" bit on a 500 MHz+ processor is really negligible.
>
> Ahem, a fair bit on a 500 MHz CPU is ~30%.  I have watched a
> *single* UDMA66 drive (with read-ahead, multiblock I/O, 32-bit mode, and
> DMA transfers enabled) on a 2.2.14 + IDE + RAID patched kernel take over 30%
> of the CPU during disk activity.  The same system with a 4 x 28G RAID0
> set running would be < 0.1% idle during large copies.  An identically
> configured system with UltraWide SCSI instead of IDE sits ~95% idle
> during the same ops.

Can I ask how you are checking the CPU utilization?  I have a 200 MHz K6
with both IDE and SCSI drives running raid0.  Bonnie results on both the
IDE RAID array and the SCSI array show very low CPU usage, less than 2%.
This is on an HX motherboard that doesn't support UDMA in the BIOS (PIO
Mode 4 max); DMA is enabled in Linux though.  Actually, the 1.3 GB Maxtor
IDE drive is FASTER in Bonnie than a Quantum 2.1 GB SCSI (sym53c8xx
driver) when tested non-RAID.  I have two of those 2.1 GB drives in
raid0, so the SCSI array is faster overall, of course.

On a second line of thought, RAID CPU overhead seems really low.  When I
wanted more space on the 1.3 GB Maxtor, I added another IDE drive (a
fairly old 540 MB) in a linear array alongside the 1.3 (since it is
slower than the Maxtor, raid0 would be stupid).  A Bonnie test comparing
the 1.3 on its own to (1.3 + 0.5) with raid linear shows some tests
slower by about 5%, and some faster by about 5%.  I realize the Bonnie
test was mainly writing to the 1.3; however, it still has to go through
the RAID layer.  This tells me that appending drives (even slower ones)
to get more space doesn't affect performance (much) compared to the
single drive.

One more question: how do you tell how much CPU time something compiled
into the kernel (e.g. masquerading or a SCSI driver) or loaded as a
module (e.g. bridging) is using?  Top seems to show only userspace
programs.  Is there any way to check a particular driver compiled into
the kernel?
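The best I have come up with is watching total kernel time with vmstat
(or sampling /proc/stat) around a test run, but that is system-wide
rather than per driver; a rough sketch (the Bonnie invocation is just an
example):

  # aggregate CPU counters: user nice system idle, in jiffies
  grep '^cpu ' /proc/stat
  bonnie -s 512
  grep '^cpu ' /proc/stat
  # or just watch the "sy" and "id" columns once a second during the run:
  vmstat 1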

A little off topic, but oh well...






Re: IDE Controllers

2000-05-05 Thread Andre Hedrick


http://www.linux-ide.org/

Yes PDC20246/PDC20262 are SMP safe.

On Fri, 5 May 2000, Edward Muller wrote:

> I was wondering which add-in PCI IDE controllers are good to use and SMP safe
> with a 2.2.14 or 2.2.15 kernel. I did some looking for Ultra66 controllers and
> the only thing I could find that was supported was the Ultra66 from
> Promise. Checking on their site, they state that the driver is included with
> 2.2.10 and later, but in 2.2.15 I could not find a reference to an Ultra66
> driver in the kernel sources (or I was being very stupid :-) ). Plus, the driver
> that I downloaded from Promise's website states that it isn't SMP safe.
> 
> So, to sum up a long story: I have four drives using RAID and would like to
> migrate the two slaves to their own controllers (making them single masters,
> of course), both for performance reasons and for RAID stability reasons.
> 
> 
> What PCI IDE controllers have good support in the stable (2.2.14/2.2.15) tree?
> 
> -- 
> Edward Muller
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
> 
> 

Andre Hedrick
The Linux ATA/IDE guy




Re: performance limitations of linux raid

2000-05-05 Thread Michael Robinton

> > > 
> > >   Not entirely, there is a fair bit more CPU overhead running an
> > > IDE bus than a proper SCSI one.
> > 
> > A "fair" bit on a 500 MHz+ processor is really negligible.
> 
> 
>   Ahem, a fair bit on a 500 MHz CPU is ~30%.  I have watched a
> *single* UDMA66 drive (with read-ahead, multiblock I/O, 32-bit mode, and
> DMA transfers enabled) on a 2.2.14 + IDE + RAID patched kernel take over 30%
> of the CPU during disk activity.  The same system with a 4 x 28G RAID0
> set running would be < 0.1% idle during large copies.  An identically
> configured system with UltraWide SCSI instead of IDE sits ~95% idle
> during the same ops.

Try turning on DMA



Re: performance limitations of linux raid

2000-05-05 Thread Christopher E. Brown

On Thu, 4 May 2000, Michael Robinton wrote:
> > 
> > Not entirely, there is a fair bit more CPU overhead running an
> > IDE bus than a proper SCSI one.
> 
> A "fair" bit on a 500 MHz+ processor is really negligible.


Ahem, a fair bit on a 500 MHz CPU is ~30%.  I have watched a
*single* UDMA66 drive (with read-ahead, multiblock I/O, 32-bit mode, and
DMA transfers enabled) on a 2.2.14 + IDE + RAID patched kernel take over 30%
of the CPU during disk activity.  The same system with a 4 x 28G RAID0
set running would be < 0.1% idle during large copies.  An identically
configured system with UltraWide SCSI instead of IDE sits ~95% idle
during the same ops.

 ---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





Re: raid1 question

2000-05-05 Thread D. Lance Robinson



Ben Ross wrote:

> Hi All,
>
> I'm using a raid1 setup with the raidtools 0.90 and mingo's raid patch
> against the 2.2.15 kernel.

...

> My concern is that if /dev/sdb1 really crashes and I replace it with
> another fresh disk, partition it the same as before, and do a resync,
> everything on /dev/sdc1 (raid-disk 1) will be deleted.

There is a big difference between a resync from either a mkraid or a dirty
restart and a resync to a spare disk.  When resyncing to a spare, the device
is in degraded mode and the driver knows which disks have valid data on them,
and it only reads from those.  The spare is only written to, and is only read
from once the resync completes.  In the case of a mkraid or dirty restart,
the driver picks one disk to read and sticks with it, for consistency's sake,
until the resync is complete.
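
So for the crashed-/dev/sdb1 case you describe, the rough sequence (device
names are just for illustration) would be to partition the replacement the
same as before and hot-add it; the reconstruction then reads only from the
surviving mirror:

  # add the freshly partitioned replacement to the degraded array
  raidhotadd /dev/md0 /dev/sdb1
  # the surviving disk is only read and the new partition only written;
  # watch the reconstruction here
  cat /proc/mdstat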

<>< Lance.





RE: performance limitations of linux raid

2000-05-05 Thread Carruth, Rusty


> From: Gregory Leblanc [mailto:[EMAIL PROTECTED]]
>
> ..., that would suck up a lot more host CPU processing power than
> the 3 SCSI channels that you'd need to get 12 drives and avoid bus
>saturation.  

not to mention the obvious bus slot loading problem ;-)

rc



RE: performance limitations of linux raid

2000-05-05 Thread Gregory Leblanc

> -Original Message-
> From: Michael Robinton [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 04, 2000 10:31 PM
> To: Christopher E. Brown
> Cc: Chris Mauritz; bug1; [EMAIL PROTECTED]
> Subject: Re: performance limitations of linux raid
> 
> On Thu, 4 May 2000, Christopher E. Brown wrote:
> 
> > On Wed, 3 May 2000, Michael Robinton wrote:
> > 
> > > The primary limitation is probably the rotational speed of the disks and
> > > how fast you can rip data off the drives.  For instance, the big IBM
> > > drives (20 - 40 GB) have a limitation of about 27 MB/s for both the 7200
> > > and 10k rpm models.  The drives to come will have to make trade-offs
> > > between density and speed, as the technologies in the works have upper
> > > constraints on one or the other.  So... given enough controllers (either
> > > SCSI on-disk or individual IDE), the limit will be related to the
> > > bandwidth of the disk interface rather than the speed of the processor
> > > it's talking to.
> > 
> > Not entirely, there is a fair bit more CPU overhead running an
> > IDE bus than a proper SCSI one.
> 
> A "fair" bit on a 500 MHz+ processor is really negligible.

Not if you've got 12 IDE channels with 1 drive each in a couple of big RAID
arrays.  Even if all of those were mirrors (since that takes the least host
CPU, RAID-wise), they would suck up a lot more host CPU processing power
than the 3 SCSI channels that you'd need to attach 12 drives and avoid bus
saturation.
Greg



Re: IDE Controllers

2000-05-05 Thread Matt Valites

On  5 May, Edward Muller wrote:
> I was wondering which add-in PCI IDE controllers are good to use and SMP safe
> with a 2.2.14 or 2.2.15 kernel. I did some looking for Ultra66 controllers and
> the only thing I could find that was supported was the Ultra66 from
> Promise. Checking on their site, they state that the driver is included with
> 2.2.10 and later, but in 2.2.15 I could not find a reference to an Ultra66
> driver in the kernel sources (or I was being very stupid :-) ). Plus, the driver
> that I downloaded from Promise's website states that it isn't SMP safe.

I can't tell you about the SMP support, but the driver is NOT included
in the standard source tree.  You need to get your hands on the IDE
patch from "The Linux IDE Guy".  All his work is included in the 2.3.xx
series and should be in the 2.4 kernel (much like the RAID code).
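
Roughly what that looks like, if it helps (the patch file name below is
made up; grab whatever the current one on http://www.linux-ide.org/ is
called):

  cd /usr/src/linux
  patch -p1 < /tmp/ide.2.2.15.patch        # the downloaded IDE patch
  # then enable the Promise PDC20246/PDC20262 option under "Block devices"
  # and rebuild/install the kernel as usual
  make menuconfig
  make dep bzImage modules modules_install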

This card has worked well for me, but I have yet to try it in a
software RAID using the "new" RAID patches.

  
> What PCI IDE controllers have good support in the stable (2.2.14/2.2.15) tree? 
Check out the patch; there are at least 2 other cards.




-- 
Matt Valites([EMAIL PROTECTED])
The Axium - http://www.axium.net
The Internet is full.  Go away.




RE: performance limitations of linux raid

2000-05-05 Thread Carruth, Rusty

(I really hate how Outlook makes you answer in FRONT of the message,
what a dumb design...)

Well, without spending the time I should on thinking through my answer,
I'll say there are many things which impact performance, most of which
we've seen talked about here:

1 - how fast can you get data off the media?
2 - related - does the data just happen to be in drive cache?
3 - how fast can you get data from the drive to the controller?
4 - how fast can you get data from the controller into system RAM?
5 - how fast can you get that data to the user?

(assuming reads - writes are similar, but reversed - for the most part)

Number 1 relates to rotational delay, seek times, and other things I'm
probably forgetting.  (Like sector skewing (is that the right term? I
forget!), where you try to put the 'next' sector where it's ready to be
read by the head, on a sequential read, just after the system has gotten
around to asking for that sector.  Boy, I can see the flames coming
already! ;-)

Number 2 relates to how smart your drive is; too smart a drive can
actually slow you down by being in the wrong place, reading data you
don't want, when you go ask it for data somewhere else.

Number 3 relates not only to the obvious issue of how fast the SCSI bus
is, but how congested it is.  If you have 15 devices which can sustain a
data rate (including rotational delays and cache hits) of 10
megabytes/sec, and your SCSI bus can only pass 20 MB/sec, then you
should not put more than 2 of those devices on that bus - thus requiring
more and more controllers...  (And I'm ignoring any issues of
contention, as I'm not familiar enough with the low levels of SCSI to
know about it.)

Number 4 relates to your system bus bandwidth, DMA speed, system bus
loading, etc.

Number 5 relates to how fast your CPU is, how well written the driver
is, and other things I'm probably forgetting.  (Like, can the OS
actually HANDLE 2 things going on at once, and do floppy accesses take
priority over later requests for hard disk accesses?)
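
(A quick-and-dirty way to look at a couple of these in isolation is
hdparm's timing flags; rough numbers only, and the device name below is
just an example:)

  # -T times reads straight out of the buffer cache (roughly numbers 4 and 5),
  # -t times reads from the device itself (roughly numbers 1 through 4)
  hdparm -T /dev/hda
  hdparm -t /dev/hda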

So maximizing performance is not a 1-variable exercise.   And you don't
always
have the control you'd like over all the variables.

And paying too much attention to only one while ignoring the others can
easily cause you to make really silly statements like: "Wow, I've really
got a fast system here - I have an ULTRA DMA 66 drive on my P133 - really
screams.  And with that nice new 199x CD-ROM drive with it as secondary -
wow, I really SCREAM through those CDs!"  Um, well, sure, uh-huh.  Most of
you on this list see the obvious errors there - I've seen some pretty
smart people do similar things (but not so obvious to most folks) by
missing some of the above issues.

Well, this is way longer than I expected, so I'll quit before I get
into any MORE trouble than I probably am already!

rusty


-Original Message-
From: Bob Gustafson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 04, 2000 5:18 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: performance limitations of linux raid


I think the original answer was more to the point of Performance Limitation.

The mechanical delays inherent in the disk rotation are much slower than
the electronic or optical speeds in the connection between disk and
computer.

If you had a huge bank of semiconductor memory, or a huge cache or buffer
which was really helping (i.e., you wanted the information that was already
in the cache or buffer), then things would get more complicated.

BobG



IDE Controllers

2000-05-05 Thread Edward Muller

I was wondering which add-in PCI IDE controllers are good to use and SMP safe
with a 2.2.14 or 2.2.15 kernel. I did some looking for Ultra66 controllers and
the only thing I could find that was supported was the Ultra66 from
Promise. Checking on their site, they state that the driver is included with
2.2.10 and later, but in 2.2.15 I could not find a reference to an Ultra66
driver in the kernel sources (or I was being very stupid :-) ). Plus, the driver
that I downloaded from Promise's website states that it isn't SMP safe.

So, to sum up a long story: I have four drives using RAID and would like to
migrate the two slaves to their own controllers (making them single masters,
of course), both for performance reasons and for RAID stability reasons.


What PCI IDE controllers have good support in the stable (2.2.14/2.2.15) tree?

-- 
Edward Muller
[EMAIL PROTECTED]
[EMAIL PROTECTED]




raid1 question

2000-05-05 Thread Ben Ross


Hi All,

I'm using a raid1 setup with the raidtools 0.90 and mingo's raid patch
against the 2.2.15 kernel.

My question is how does the raid driver decide which disk in the mirror to
use as the source for synchronization when mkraid is used?

I tried a few experiments to see how it behaved. The /etc/raidtab is as
follows:

raiddev /dev/md0
raid-level  1
nr-raid-disks   2
nr-spare-disks  0
chunk-size  4
persistent-superblock   1
device  /dev/sdb1
raid-disk   0
device  /dev/sdc1
raid-disk   1

My first experiment was to shutdown, unplug /dev/sdc1, and bring up the
system again. The raid driver happily ran the mirror in degraded mode. I
wrote some files onto /dev/md0, rebooted the machine, with
/dev/sdc1 plugged back in. /dev/md0 continued to run in degraded mode
until I ran mkraid to resynchronize the mirror. The files that were
written to /dev/md0 while it was running in degraded mode were preserved.
:)

I then tried the same experiment, but this time unplugged /dev/sdb1 so
/dev/sdc1 (which became /dev/sdb1) ran in degraded mode after the reboot.
I wrote some files onto /dev/md0 while it was in this state. After
plugging the original /dev/sdb1 in after a reboot, the mirror continued to
run in degraded mode, but this time using /dev/sdc1, because the
persistent superblock was more up to date. I then ran mkraid to
resynchronise the mirror again, but this time, the files written to
/dev/md0 in degraded mode (really /dev/sdc1) had disappeared!
:(

So, when /dev/md0 is set up with raid1, does it always use raid-disk 0 as
the source for a resync?

My concern is that if /dev/sdb1 really crashes and I replace it with
another fresh disk, partition it the same as before, and do a resync,
everything on /dev/sdc1 (raid-disk 1) will be deleted.

If this is the case, I presume the solution would be to change the SCSI
ID of /dev/sdc1 so it becomes /dev/sdb1 in the mirror, making it
raid-disk 0?
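
(Or, if raid-disk 0 really is always the resync source, I suppose the
defensive version would be to mark the fresh disk as a failed-disk in
/etc/raidtab before re-running mkraid, so that only /dev/sdc1 can be
read, and then hot-add the new partition afterwards.  Untested on my
part, but something like:)

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              4
        persistent-superblock   1
        device                  /dev/sdb1
        # the replacement disk; marked failed so it is never a resync source
        failed-disk             0
        device                  /dev/sdc1
        raid-disk               1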

Thanks,
Ben.