Re: [PATCH 001 of 6] md: Send online/offline uevents when an md array starts/stops.

2006-11-09 Thread Michael Tokarev
Neil Brown wrote:
[/dev/mdx...]
>> (much like how /dev/ptmx is used to create /dev/pts/N entries.)
[]
> I have the following patch sitting in my patch queue (since about
> March).
> It does what you suggest via /sys/module/md-mod/parameters/MAGIC_FILE
> which is the only md-specific part of the /sys namespace that I could
> find.
> 
> However I'm not at all convinced that it is a good idea.  I would much
> rather have mdadm control device naming than leave it up to udev.

This is again the same "device naming" question that pops up every time
someone mentions udev.  And as usual, I'm suggesting the following, which
should - hopefully - make everyone happy:

  create kernel names *always*, be it /dev/mdN or /dev/sdF or whatever,
  so that things like /proc/partitions, /proc/mdstat etc will be useful.
  For this, the ideal solution - IMHO - is to have a mini-devfs-like filesystem
  mounted as /dev, so that it is possible to have "bare" names without any
  help from external programs like udev, but I don't want to start another
  flamewar here, esp. since it's off-topic to *this* discussion.
  Note /dev/mdN is as good as /dev/md/N - because only a few active devices
  appear in /dev, there's no "risk" of having "too many" files in /dev, hence
  no need to put them into subdirs like /dev/md/, /dev/sd/ etc.

  if so desired, create *symlinks* at /dev with appropriate user-controlled
  names to those official kernel device nodes.  Be it like /dev/disk/by-label/
  or /dev/cdrom0 or whatever.
  The links can be created by mdadm, OR by udev - in this case, it's really
  irrelevant.  Udev rules already do a good job of creating the /dev/disk/
  hierarchy, and that seems sufficient - I see no reason to create other
  device nodes (symlinks) from mdadm.
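  To illustrate the split (the label and link name below are made-up examples):
  the kernel keeps the real node, and the user-friendly name is just a symlink
  that either a udev rule or mdadm could create:

    ln -s ../../md1 /dev/disk/by-label/home    # what a udev rule would do
    ls -l /dev/disk/by-label/home              # ... -> ../../md1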

By the way, unlike /dev/sdE and /dev/hdF entries, /dev/mdN nodes are pretty
stable.  Even if SCSI disks get reordered, mdadm finds the component devices
by UUID (if "DEVICE partitions" is given in the config file), and /dev/md1
keeps pointing to the same "logical partition" (the same filesystem or data)
regardless of how you shuffle your disks (IF mdadm was able to find all the
components and assemble the array, anyway).  So sometimes I use md/mdadm on
systems WITHOUT any "raided" drives, but where I suspect disk devices may
change for whatever reason - I just create raid0 "arrays" composed of a single
partition each and let mdadm find them in /dev/sd* and assemble stable-numbered
/dev/mdN devices - without any help from udev or anything else (I for one
dislike udev for several reasons).
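A minimal sketch of that trick (the partition name and md number are examples
only, and exact option spelling may vary between mdadm versions):

  # wrap one partition in a single-device raid0 "array" to get a stable name
  mdadm --create /dev/md1 --level=raid0 --raid-devices=1 --force /dev/sda3
  # from then on, assemble by UUID no matter where the disk ends up:
  mdadm --assemble /dev/md1 --uuid=<UUID printed by mdadm --examine /dev/sda3>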

> And in any case, we have the semantic that opening an md device-file
> creates the device, and we cannot get rid of that semantic without a
> lot of warning and a lot of pain.  And adding a new semantic isn't
> really going to help.

I don't think so.  With the new semantics in place, we have two options (provided
the current semantics stays; I don't see a strong reason why it should be
removed, except for the bloat):

 a) with the new mdadm utilizing the new semantics, there's nothing to change
    in udev -- it will all Just Work: mdadm opens /dev/md-control-node
    (whatever it's called) and assembles devices using that, and during
    assembly udev will receive proper events about new "disks" appearing and
    will handle them as usual.

 b) without the new mdadm, it will work as before (now).  And in this case,
    let's not send any udev events, as mdadm has already created the nodes etc.

So if a user wants neat and nice md/udev integration, the way to go is case "a".
If it's not required, either case will do.

Sure, eventually, long term, support for case "b" can be removed.  Or not -
depending on how things are implemented: when done properly, both cases will
call the same routine(s), but case "b" will just skip sending uevents, so the
ioctl handlers become two- or one-liners (two in case "a" and one in case "b"),
which isn't bloat really ;)

/mjt


Re: [PATCH 001 of 6] md: Send online/offline uevents when an md array starts/stops.

2006-11-09 Thread Michael Tokarev
Michael Tokarev wrote:
> Neil Brown wrote:
> [/dev/mdx...]
[]
>> And in any case, we have the semantic that opening an md device-file
>> creates the device, and we cannot get rid of that semantic without a
>> lot of warning and a lot of pain.  And adding a new semantic isn't
>> really going to help.
> 
> I don't think so.  With the new semantics in place, we have two options
> (provided the current semantics stays; I don't see a strong reason why it
> should be removed, except for the bloat):
> 
>  a) with the new mdadm utilizing the new semantics, there's nothing to change
>     in udev -- it will all Just Work: mdadm opens /dev/md-control-node
>     (whatever it's called) and assembles devices using that, and during
>     assembly udev will receive proper events about new "disks" appearing and
>     will handle them as usual.
> 
>  b) without the new mdadm, it will work as before (now).  And in this case,
>     let's not send any udev events, as mdadm has already created the nodes etc.

Forgot to add.  This is an important point: do NOT change the current behaviour
wrt uevents, i.e. don't add uevents for the current semantics at all.  Only send
uevents (and in this case they will be normal "add" and "remove" events) when
assembling arrays "the new way", using the (stable!) /dev/mdcontrol misc device,
after the RUN_ARRAY and STOP_ARRAY actions have been performed.

/mjt

> So if a user wants neat and nice md/udev integration, the way to go is case "a".
> If it's not required, either case will do.



safely removing RAID information?

2006-11-09 Thread Benjamin Schieder
Hi list.

I have the following 'leftover setup' from testing:

---snip---
/dev/hda:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 533fb996:8989f533:2dc6aaa9:74fdd762
  Creation Time : Sat Jun 24 08:37:43 2006
     Raid Level : raid5
    Device Size : 244198464 (232.89 GiB 250.06 GB)
     Array Size : 488396928 (465.77 GiB 500.12 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0

    Update Time : Fri Jul  7 17:49:52 2006
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : c0941be6 - correct
         Events : 0.1780

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1      34        0        1      active sync

   0     0      33       64        0      active sync
   1     1      34        0        1      active sync
   2     2      34       64        2      active sync
/dev/hda1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 3559ffcf:14eb9889:3826d6c2:c13731d7
  Creation Time : Fri Jul  7 20:12:10 2006
     Raid Level : raid1
    Device Size : 497856 (486.27 MiB 509.80 MB)
     Array Size : 497856 (486.27 MiB 509.80 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0

    Update Time : Thu Nov  9 05:12:56 2006
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 76d7e7e8 - correct
         Events : 0.1990


      Number   Major   Minor   RaidDevice State
this     1       3        1        1      active sync   /dev/hda1

   0     0       3       65        0      active sync   /dev/hdb1
   1     1       3        1        1      active sync   /dev/hda1
   2     2      22        1        2      active sync   /dev/hdc1

---snap---

Now, /dev/hda1 (and hda[35678]) are parts of active RAID arrays.
/dev/hda (whole disk) was part of an array I set up for testing/playing
around with mdadm. I then created the partitions and set up RAIDs for them.

Is it safe to issue the following command now:

[EMAIL PROTECTED]:~# mdadm --zero-superblock /dev/hda

or will that nuke my setup along with my precious data?
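Before deciding, it may help to see that the two superblocks are distinct (an
inspection-only sketch; it does not by itself prove that zeroing is safe):

  mdadm --examine /dev/hda  | grep UUID    # old test array  (533fb996:...)
  mdadm --examine /dev/hda1 | grep UUID    # live raid1      (3559ffcf:...)
  cat /proc/mdstat                         # which arrays are actually running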
My software versions are the following:

[EMAIL PROTECTED]:/usr/src/wip/installer/package/base/installer# mdadm --version
mdadm - v2.4 - 30 March 2006
[EMAIL PROTECTED]:/usr/src/wip/installer/package/base/installer# uname -a
Linux ceres 2.6.17.7-rock-dragon #2 SMP Wed Aug 30 17:21:35 CEST 2006 i686 GNU/Linux


Thanks in advance,
Benjamin
-- 
#!/bin/sh #!/bin/bash #!/bin/tcsh #!/bin/csh #!/bin/kiss #!/bin/ksh
#!/bin/pdksh #!/usr/bin/perl #!/usr/bin/python #!/bin/zsh #!/bin/ash

Feel at home? Got some of them? Want to show some magic?

http://shellscripts.org




Randomly kernel panic with MPT SPI driver 3.04.01

2006-11-09 Thread Rémi VM

Hello,

I get random kernel panics with a 2.6.18.1 kernel (MPT SPI driver
3.04.01) when I start my server with an LSI 53C1030:
---
Fusion MPT base driver 3.04.01
Copyright (c) 1999-2005 LSI Logic Corporation
Fusion MPT SPI Host driver 3.04.01
mptbase: Initiating ioc0 bringup
mptbase: ioc0: ERROR - Doorbell ACK timeout (count=4999), IntStatus=8000!
mptbase: ioc0: ERROR - Doorbell ACK timeout (count=4999), IntStatus=8000!
mptbase: ioc0: ERROR - Diagnostic reset FAILED! (102h)
mptbase: ioc0 NOT READY WARNING!
mptbase: WARNING - ioc0 did not initialize properly! (-1)
mptspi: probe of :02:05.0 failed with error -1
mptbase: Initiating ioc1 bringup
--

My server : Intel SR1400SYS
http://developer.intel.com/design/servers/chassis/sr1400/index.htm

Thanks for your help !


[urgent] supermicro ejecting disks

2006-11-09 Thread Louis-David Mitterrand
Hello,

We recently changed our main server to a Supermicro with 6 scsi disks in 
soft raid6 with kernel 2.6.18. 

After running OK for a few days, 3 disks were suddenly ejected from the 
raid6. We are now trying to reassemble the partition in another box but 
keep getting a "superblock corrupted" error on the fs. We're in a pretty 
bad state right now, as this is a production machine. Of course we have 
backups but restoring them will take some time.

Does anyone have any idea of what has happened and how to fix it?
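(A generic first-look sketch while waiting for better advice - the device and
array names below are placeholders, not our real ones:)

  mdadm --examine /dev/sd[a-f]1            # compare event counters and states
  mdadm --assemble --force /dev/md0 /dev/sd[a-f]1
  fsck -n /dev/md0                         # read-only fs check before mounting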

Thanks,


[consultant needed] (was: supermicro ejecting disks)

2006-11-09 Thread Louis-David Mitterrand
I forgot to add that, to help us solve this, we are ready to hire a paid 
consultant; please contact me by mail or phone at +33.1.46.47.21.30.

Thanks

On Thu, Nov 09, 2006 at 03:18:11PM +0100, Louis-David Mitterrand wrote:
> Hello,
> 
> We recently changed our main server to a Supermicro with 6 scsi disks in 
> soft raid6 with kernel 2.6.18. 
> 
> After running OK for a few days, 3 disks were suddenly ejected from the 
> raid6. We are now trying to reassemble the partition in another box but 
> keep getting a "superblock corrupted" error on the fs. We're in a pretty 
> bad state right now, as this is a production machine. Of course we have 
> backups but restoring them will take some time.
> 
> Does anyone have any idea of what has happened and how to fix it?
> 
> Thanks,


Too much ECC?

2006-11-09 Thread Dexter Filmore
I just ran smartctl -d ata on my SATA disks (Samsung) and got these raw values:

195 Hardware_ECC_Recovered  3344107
195 Hardware_ECC_Recovered  2786896
195 Hardware_ECC_Recovered  617712
195 Hardware_ECC_Recovered  773986

Looking at a 5-year-old 40GB Maxtor that's not been cooled too well, I see "3"
as the raw value.
Should I be worried or am I just not properly reading this?

Dex


-- 
-BEGIN GEEK CODE BLOCK-
Version: 3.12
GCS d--(+)@ s-:+ a- C UL++ P+>++ L+++> E-- W++ N o? K-
w--(---) !O M+ V- PS+ PE Y++ PGP t++(---)@ 5 X+(++) R+(++) tv--(+)@ 
b++(+++) DI+++ D- G++ e* h>++ r* y?
--END GEEK CODE BLOCK--

http://www.stop1984.com
http://www.againsttcpa.com


Re: [solved] (was: supermicro ejecting disks)

2006-11-09 Thread Louis-David Mitterrand
On Thu, Nov 09, 2006 at 03:27:31PM +0100, Louis-David Mitterrand wrote:
> I forgot to add that, to help us solve this, we are ready to hire a paid 
> consultant; please contact me by mail or phone at +33.1.46.47.21.30

Update: we eventually succeeded in reassembling the partition, with two 
missing disks.


md manpage of mdadm 2.5.6

2006-11-09 Thread Joachim Wagner
Hi Neil,

In man -l mdadm-2.5.6/md.4 I read

"Firstly, after an unclear shutdown, the resync process will consult the 
bitmap and only resync those blocks that correspond to bits in the bitmap 
that are set. This can dramatically increase resync time."

IMHO, "increase" should be changed to "decrease" or "time" to "speed". 

Regards,
Joachim


Re: Too much ECC?

2006-11-09 Thread Gabor Gombas
On Thu, Nov 09, 2006 at 03:30:55PM +0100, Dexter Filmore wrote:

> 195 Hardware_ECC_Recovered  3344107

For some models that's perfectly normal.

> Looking at a 5 year old 40GB Maxtor that's not been cooled too well I see "3" 
> as the raw value.

Different technology, different vendor, different meaning of the
attribute.

> Should I be worried or am I just not properly reading this?

If the other attributes are OK then the raw value of
Hardware_ECC_Recovered has not much meaning.
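A quick way to sanity-check the attributes that usually matter more (the device
name is an example; attribute names differ slightly between vendors):

  smartctl -d ata -A /dev/sda | egrep -i 'reallocat|pending|uncorrect'
  smartctl -d ata -H /dev/sda              # overall health self-assessment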

Gabor

-- 
 MTA SZTAKI Computer and Automation Research Institute
 Hungarian Academy of Sciences


Re: mdadm-2.5.4 issues and 2.6.18.1 kernel md issues

2006-11-09 Thread Doug Ledford
On Thu, 2006-11-09 at 16:32 +1100, Neil Brown wrote:
> On Thursday November 2, [EMAIL PROTECTED] wrote:
> > If I use mdadm 2.5.4 to create a version 1 superblock raid1 device, it
> > starts a resync.  If I then reboot the computer part way through, when
> > it boots back up, the resync gets cancelled and the array is considered
> > clean.  This is against a 2.6.18.1 kernel.
> 
> I cannot reproduce that (I tried exactly 2.6.18.1).
> Do you have kernel logs of the various stages?

No, I don't.  It could however be related to the fact that I built the
array, let it sync completely, then rebuilt the array with a new
superblock without doing an mdadm --zero-superblock on each device first
(I was playing with different options).  After that second build, it was
about 66% done resyncing when I decided to reboot and try out the
"restart from where it left off" feature of the version 1 superblock, and
when I rebooted, to my dismay, it was marked as clean and no sync was in
progress.  So, I used mdadm to fail/remove/add the second device from
the array.  To my surprise, it went in clean *again*.  This time I think
that was intentional, though: I don't think I had dirtied the array
between the fail and add, so the generation counts matched and mdadm or
the kernel decided it was safe to put it back in the array without a
resync (I know a *lot* of our customers are going to question the safety
of that...the logic may be sound, but it's going to make them nervous
when a fail/remove/add cycle doesn't trigger a resync, so I don't know
if you've documented the exact logic in use when doing that, but it's no
doubt going to fall under scrutiny).  So, the next time I did a
fail/remove/zero-superblock/add cycle and that triggered the full resync
that I wanted.  So, my guess is that because I didn't zero the
superblocks between the initial creation and recreation with a different
--name option (I think that was the only thing I changed), it may have
triggered something in the code path that detects what *should* be a
safe remove/add cycle and stopped the resync on reboot from happening.
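(For reference, the cycle that did force the full resync was essentially the
following - the device names are examples, not the ones actually used:)

  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1
  mdadm --zero-superblock /dev/sdb1
  mdadm /dev/md0 --add /dev/sdb1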

> > 
> > If I create a version 1 superblock raid1 array, mdadm -D <component
> > device> says that the device is not part of a raid array (and likewise
> > the kernel autorun facility fails to find the device).
> 
>    mdadm -D <component device>
> is incorrect usage.  You want
>    mdadm -E <component device>
> or
>    mdadm -D <array device>
> 
> in-kernel autorun does not work with version 1 metadata as it does not
> store a 'preferred minor'.
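(Concretely, the distinction is - device names being examples only:)

  mdadm -E /dev/sdb1     # examine the superblock on a component device
  mdadm -D /dev/md0      # query an assembled array device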

No, but it has a name, so theoretically, it could assemble it and use
the name component without regard to the minor number.  Regardless, now
that FC6 is out the door, I want to make some change for FC7 and I'm
trying to get the changes in place so we don't use autorun, so it may
very well be a moot point as far as we are concerned (although the
messages from the kernel that the devices don't have a superblock might
confuse people; I think it would be better to print out a kernel message
to the effect of "autorun doesn't work on version 1 superblocks: skipping"
instead of "no superblock found" or whatever it says now).

> > If I create a version 1 superblock raid1 array, mdadm -E <component
> > device> sees the superblock.  If I then run mdadm -E --brief on that
> > same device, it prints out the one-line ARRAY line, but it misprints the
> > UUID such that it is a 10-digit hex number: 8-digit hex number: 8-digit
> > hex number: 6-digit hex number.
> 
> Oops, so it does.  Fix below.  Thanks.

Thanks.

> >  It also prints the mdadm device in the
> > ARRAY line as /dev/md/# whereas mdadm -D --brief prints the device
> > as /dev/md#.  Consistency would be nice.
> 
> It would be nice ... but how important is it really?
> If you create the same array with --name=fred, then
> -Eb will give /dev/md/fred, while -Db will give /dev/md2.
> Both are right in a sense.
> 
> Of course if you say
>   mdadm -Db /dev/md/2
> then you get /dev/md/2.
> You only have /dev/md2 forced on you with
>   mdadm -Ds
> In that case mdadm doesn't really know what name you want to use for
> the md device ... I guess it could scan /dev.

Well, given the clarification above about the difference between -D and
-E, and that no minor information is stored in version 1 superblocks, I
can see where -Eb has no choice but to use the name and omit the minor
information entirely (unless you did something like checking to see if
that array was currently assembled, and if so what minor it is currently
on, but that's overkill IMO).  However, what that means to me is that in
future products, I need to teach the OS to ignore major/minor of the
device and *only* use the name if we switch to version 1 superblocks
(which I would like to do).  The reason is that if you ever lose your
mdadm.conf file, you can't get consistent major/minor information back
by going to the device, only the name information.  So, ignore
major/minor completely, let the md stack use whatever minor it wants
from boot to boot, and r

Re: New features?

2006-11-09 Thread Neil Brown
On Friday November 3, [EMAIL PROTECTED] wrote:
> On Fri, Nov 03, 2006 at 02:39:31PM +1100, Neil Brown wrote:
> 
> > mdadm could probably be changed to be able to remove the device
> > anyway.  The only difficulty is: how do you tell it which device to
> > remove", given that there is no name in /dev to use.
> > Suggestions?
> 
> Major:minor? If /sys/block still holds an entry for the removed disk,
> then the user can figure it out from the name. Or mdadm could just
> accept a path under /sys/block instead of a device node.

I like the /sys/block idea.  So if given a directory we look for a
'dev' file and read major:minor from that.
I guess I could also just allow
  mdadm /dev/mdX --remove failed

and all failed devices get removed.
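Roughly, the user-visible side might then be (the sysfs path is illustrative,
and both mdadm forms below are the proposals above, not something that exists
yet):

  # if /sys/block still has an entry for the vanished disk, its 'dev' file
  # gives the major:minor that identifies it:
  cat /sys/block/sdb/dev          # e.g. prints 8:16
  mdadm /dev/mdX --remove 8:16    # hypothetical: remove by major:minor
  mdadm /dev/mdX --remove failed  # hypothetical: drop all failed components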

Thanks for the suggestion.
NeilBrown


How to recover from massive disk failure

2006-11-09 Thread Jacob Schmidt Madsen
Hey

I have 2 controllers: Promise SATA150 TX4 and Promise SATA300 TX4.

2 disks are connected to the SATA150 and 4 disks are connected to SATA300.

The 6 disks are part of a raid5 array.

Today the SATA300 controller failed and the 4 disks were excluded from the 
array.

The array was inactive the moment the 4 disks were excluded. After rebooting 
all controllers and disks are online again.

But now I have trouble starting the array and I could really use some input 
from smarter minds on this list.

Here's some output i gathered:

# cat /proc/mdstat:
md5 : active raid5 sdg1[6](F) sdf1[7](F) sde1[8](F) sdd1[9](F) sdc1[5] sdb1[4]
1562842880 blocks level 5, 64k chunk, algorithm 2 [6/2] [____UU]

The 4 failed disks are the ones connected to the failed controller.

The following is after the reboot.

# cat /proc/mdstat:
md5 : inactive sdb1[4] sdc1[5] sdg1[3] sdf1[2] sde1[1] sdd1[0]
1875411456 blocks

I then did the following in the hope that it would help:

# mdadm -S /dev/md5
mdadm: stopped /dev/md5
# mdadm -As /dev/md5
mdadm: /dev/md5 assembled from 2 drives - not enough to start the array.

No luck!

What can I do to get it up and running again?

Thanks!


Re: How to recover from massive disk failure

2006-11-09 Thread Neil Brown
On Friday November 10, [EMAIL PROTECTED] wrote:
> # mdadm -As /dev/md5
> mdadm: /dev/md5 assembled from 2 drives - not enough to start the array.
> 
> No luck!
> 
> What can i do to get it up and running again?

Add a "--force" flag.

NeilBrown


Re: How to recover from massive disk failure

2006-11-09 Thread Jacob Schmidt Madsen
Thank you very much for the quick answer!

One last one...

This is how my entry in mdadm.conf looks:
DEVICE /dev/sd[bcdefg]1
ARRAY /dev/md5 level=raid5 num-devices=6 
UUID=a4a5dae9:04a09c3a:cd3fd7be:b754f4fe 
devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1

Does the order of the devices matter?

Thanks again!

On Friday 10 November 2006 01:54, Neil Brown wrote:
> On Friday November 10, [EMAIL PROTECTED] wrote:
> > # mdadm -As /dev/md5
> > mdadm: /dev/md5 assembled from 2 drives - not enough to start the array.
> >
> > No luck!
> >
> > What can i do to get it up and running again?
>
> Add a "--force" flag.
>
> NeilBrown


Re: How to recover from massive disk failure

2006-11-09 Thread Neil Brown
On Friday November 10, [EMAIL PROTECTED] wrote:
> Thank you very much for the quick answer!
> 
> One last one...
> 
> This is how my entry in mdadm.conf looks:
> DEVICE /dev/sd[bcdefg]1
> ARRAY /dev/md5 level=raid5 num-devices=6 
> UUID=a4a5dae9:04a09c3a:cd3fd7be:b754f4fe 
> devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1
> 
> Does the order of the devices matter?

No, but it is best not to have them.
i.e. get rid of the "devices=" bit and change the DEVICE line to
   DEVICE /dev/sd?1
SCSI device names can change if the hardware config changes, and you
want mdadm to find the devices no matter where they are.
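The resulting entry would then look something like this (UUID copied from the
one quoted above):

   DEVICE /dev/sd?1
   ARRAY /dev/md5 level=raid5 num-devices=6 UUID=a4a5dae9:04a09c3a:cd3fd7be:b754f4fe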

NeilBrown