Re: MD devices renaming or re-ordering question

2007-09-20 Thread Maurice Hilarius
Bill Davidsen wrote:
> ..
> I'm not clear on what you mean by a "plain disk" followed by a list of
> partitions. If that means putting all your initial data on a single
> disk without RAID protection, that's a far worse idea in my experience
> than splitting arrays across controllers.
It is easy enough for now to mirror the boot drive.
I worded it this way to make it clear the migration is not a booting issue.
>> The remaining 15 disks are configured as :
>> sdb1 through sde1 as md0 ( 4 devices/partitions)
>> sdf1 through sdp1 as md1 (10 devices/partitions)
>> I want to add a 2nd controller, and 4 more drives, to the md0 device.
>>
>> But, I do not want md0 to be "split" across the 2 controllers this way.
>> I prefer to do the split on md1
>>   
> Move the md0 drives to the 2nd controller, add more.
Yes, that is one way, involving some hardware swapping and more downtime.
>> Other than starting from scratch, the best solution would be to add the
>> disks to md0, then to "magically" turn md0 into md1, and md1 into md0
>>   
>
> Unless you want to practice doing critical config changes, why? Moving
> the drives won't affect their names, at least not unless you have done
> something like configure by physical partition name instead of UUID.
> Doing that for more than a few drives is a learning experience waiting
> to happen. If that's the case, backup your mdadm.conf file and
> reconfigure using UUID, then start moving things around.
OK, where may I learn more on using UUID for drive identification?
I have always assembled a RAID using the syntax /dev/sdxx (sd drive
letter and partition number).
I take it there is a way to identify the UUID of a drive and partition,
and to assemble and maintain arrays using that syntax instead?
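
Something like this, perhaps? A minimal sketch of what I picture the UUID
approach to look like -- untested, and the device names are only placeholders:

  # show the array UUID recorded in a member's superblock
  mdadm --examine /dev/sdb1 | grep UUID

  # or have mdadm emit config lines keyed by UUID
  mdadm --examine --scan >> /etc/mdadm.conf

  # assemble by UUID rather than by /dev/sdXN names
  mdadm --assemble /dev/md0 --uuid=<array-uuid> /dev/sd[b-e]1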

I hope that this will also get me past the problem of sometimes running
out of letters in the 26-character alphabet!
I never thought I would see the day when I would have a problem with more
than 24 drives.. OK, so I show my age there!
> ..
> Then consider the performance vs. reliability issues of having all
> drives on a single controller.
> Multiple controllers give you more points of failure unless you are
> mirroring across them, but better peak performance.
Controller reliability does not seem to be an issue. I have rarely seen a
3Ware card fail.
Drives, OTOH, well..
Hence the desire to have duplicated arrays, so we can clone from one MD
to another.

> Note, I'm suggesting evaluating what you are doing only, it may be
> fine, just avoids "didn't think about that" events.
>
Agreed. All good points.
> Well, you asked for suggestions...  ;-)
These are appreciated.
I am still looking, however, for a way to rename an md device.

Another case where it comes up is when I take a set of drives from one
machine and move them to another.
Having conflicting md devices comes to mind..
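
I imagine something along these lines would handle the move -- untested, and
the names are just placeholders for whatever happens to be free on the
target machine:

  # assemble the foreign set under an unused md name, letting mdadm
  # rewrite the preferred minor stored in the 0.90 superblocks
  mdadm --assemble /dev/md4 --update=super-minor /dev/sd[q-t]1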

Thanks Bill




-- 

With our best regards,


Maurice W. Hilarius    Telephone: 01-780-456-9771
Hard Data Ltd.         FAX:       01-780-456-9772
11060 - 166 Avenue     email: [EMAIL PROTECTED]
Edmonton, AB, Canada   http://www.harddata.com/
T5X 1Y3



MD devices renaming or re-ordering question

2007-09-13 Thread Maurice Hilarius
Hi to all.

I wonder if somebody would care to help me to solve a problem?

I have some servers.
They are running CentOS5
This OS has a limitation where the maximum filesystem size is 8TB.

Each server currently has an AMCC/3ware 16-port SATA controller, for a total
of 16 ports / drives.
I am using 750GB drives.

I am exporting the drives as single disks, NOT as hardware RAID.
That is due to the filesystem and controller limitations, among other
reasons.

Each server currently has 16 disks attached to the one controller

I want to add a 2nd controller, and, for now, 4 more disks on it.

I want to have the boot disk as a plain disk, as presently configured as
sda1,2,3

The remaining 15 disks are configured as :
sdb1 through sde1 as md0 ( 4 devices/partitions)
sdf1 through sdp1 as md1 (10 devices/partitions)
I want to add a 2nd controller, and 4 more drives, to the md0 device.

But, I do not want md0 to be "split" across the 2 controllers this way.
I prefer to do the split on md1

Other than starting from scratch, the best solution would be to add the
disks to md0, then to "magically" turn md0 into md1, and md1 into md0

So, the question:
How does one make md1 into md0, and vice versa, without losing the data
on these MDs?
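
The closest I can picture, as a sketch only (untested, and assuming version
0.90 superblocks; the partition lists match the layout above):

  mdadm --stop /dev/md0
  mdadm --stop /dev/md1

  # re-assemble each set under the other's name; --update=super-minor
  # rewrites the preferred minor in the superblocks to match
  mdadm --assemble /dev/md1 --update=super-minor /dev/sd[b-e]1
  mdadm --assemble /dev/md0 --update=super-minor /dev/sd[f-p]1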

Thanks in advance for any suggestions.



-- 
Regards, Maurice


09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0

00001001 11111001 00010001 00000010 10011101 01110100 11100011 01011011
11011000 01000001 01010110 11000101 01100011 01010110 10001000 11000000

base 10: 13,256,278,887,989,457,651,018,865,901,401,704,640


best way to create RAID10 on a CentOS5 install

2007-06-30 Thread Maurice Hilarius
Hello all again.

Extending from an earlier question:
"deliberately degrading RAID1 to a single disk, then back again"

I got some useful answers, which I appreciate.

Taking this the next step, I want to create a RAID10 using 4 disks on a
CentOS install.
I also want to be able to stop and remove a pair of disks periodically,
so I may exchange them as backup media.
Then add new disks and re-start it.

First challenge I see is the actual RAID10 creation in the install.

Second challenge is the syntax to stop the (correct) pair of disks and
remove them, then re-add them and restart the array so that it re-syncs.

Can anyone lend me some syntax and tips please?
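
For what it is worth, the rough shape I have in mind on the mdadm side --
untested, the device names are placeholders, and which two disks form a
redundant pair depends on the layout, so check mdadm -D before pulling
anything:

  # create a 4-disk RAID10 (default near-2 layout)
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1

  # rotate one half of each mirror out for use as backup media
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1

  # later, put disks back in and let the array re-sync
  mdadm /dev/md0 --add /dev/sdb1 /dev/sdd1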


-- 

With our best regards,


Maurice W. Hilarius    Telephone: 01-780-456-9771
Hard Data Ltd.         FAX:       01-780-456-9772
11060 - 166 Avenue     email: [EMAIL PROTECTED]
Edmonton, AB, Canada   http://www.harddata.com/
T5X 1Y3




deliberately degrading RAID1 to a single disk, then back again

2007-06-26 Thread Maurice Hilarius
Good day all.

Scenario:
Pair of identical disks.
partitions:
Disk 0:
/boot - NON-RAIDed
swap
/  - rest of disk

Disk 1:
/boot1 - placeholder to take same space as /boot on disk0 - NON-RAIDed
swap
/  - rest of disk

I created RAID1 over / on both disks, made /dev/md0

From time to time I want to "degrade" back to only a single disk, and turn
off RAID, as the overhead has some cost.
From time to time I want to restore RAID1 function, and re-sync the
pair to current.

Yes, this is a backup scenario..

Are there any recommendations (with mdadm syntax), please?
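
Roughly what I picture, for the record -- untested, and the partition names
are placeholders (assuming / is md0 built from sda3 and sdb3):

  # drop the second disk out of the mirror
  mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3

  # later, attach it (or a fresh disk) again and let md re-sync it
  mdadm /dev/md0 --add /dev/sdb3

  # an internal write-intent bitmap should make the re-sync much cheaper
  # when the same disk comes back (use --re-add in that case)
  mdadm --grow /dev/md0 --bitmap=internal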





-- 

Regards,
Maurice

09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0

00001001 11111001 00010001 00000010 10011101 01110100 11100011 01011011
11011000 01000001 01010110 11000101 01100011 01010110 10001000 11000000

base 10: 13,256,278,887,989,457,651,018,865,901,401,704,640




Thanks! Was:[Re: strange RAID5 problem]

2006-05-10 Thread Maurice Hilarius
Thanks to Neil, Luca, and CaT, who were all a big help.



-- 

With our best regards,


Maurice W. Hilarius    Telephone: 01-780-456-9771
Hard Data Ltd.         FAX:       01-780-456-9772
11060 - 166 Avenue     email: [EMAIL PROTECTED]
Edmonton, AB, Canada   http://www.harddata.com/
T5X 1Y3



Re: strange RAID5 problem

2006-05-09 Thread Maurice Hilarius
Luca Berra wrote:
> ..
>>> I don't believe you, prove it (/proc/partitions)
>>>
>> I understand. Here we go then. Devices in question bracketed with "**":
>>
> ok, now i do.
> is the /dev/sdw1 device file correctly created?
> you could try straceing mdadm to see what happens
>
> what about the other suggestion? trying to stop the array and restart
> it, since it is marked as inactive.
> L.
>
Here is what we ended up doing that fixed it.
Thanks to Neil for the --force; however, even with that,
ALL parameters were needed on the mdadm -C or it still refused.
We used EVMS to rebuild, as that is what originally created the RAID.

mdadm -C /dev/md3 --chunk=256 --level=5 --parity=ls --raid-devices=16
--force /dev/evms/.nodes/sdq1 /dev/evms/.nodes/sdr1
/dev/evms/.nodes/sds1 /dev/evms/.nodes/sdt1 /dev/evms/.nodes/sdu1
/dev/evms/.nodes/sdv1 missing /dev/evms/.nodes/sdx1
/dev/evms/.nodes/sdy1 /dev/evms/.nodes/sdz1 /dev/evms/.nodes/sdaa1
/dev/evms/.nodes/sdab1 /dev/evms/.nodes/sdac1 /dev/evms/.nodes/sdad1
/dev/evms/.nodes/sdae1 /dev/evms/.nodes/sdaf1

Notice we are assembling a device with a "missing" member, and the
devices are in "order" per: mdadm -D /dev/md3

This was the *only* way that it would come up. It was mountable, and the
data seems intact.
We started the rebuild with no errors by simply adding the device
as I mentioned before with -a.

Then sped it up via:

echo "10" > /proc/sys/dev/raid/speed_limit_min

Because frankly we have the resources to do so and need it going as fast
as possible.
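
For anyone following along: those limits are in KiB/s per device, and there
is a matching maximum. The values below are only the sort of numbers we
raise them to, not gospel:

  echo 100000 > /proc/sys/dev/raid/speed_limit_min
  echo 500000 > /proc/sys/dev/raid/speed_limit_max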

-- 

Regards,
Maurice



Re: strange RAID5 problem

2006-05-09 Thread Maurice Hilarius
Luca Berra wrote:
> On Mon, May 08, 2006 at 11:30:52PM -0600, Maurice Hilarius wrote:
>> [EMAIL PROTECTED] ~]# mdadm /dev/md3 -a /dev/sdw1
>>
>> But, I get this error message:
>> mdadm: hot add failed for /dev/sdw1: No such device
>>
>> What? We just made the partition on sdw a moment ago in fdisk. It IS
>> there!
>
> I don't believe you, prove it (/proc/partitions)
>
>
I understand. Here we go then. Devices in question bracketed with "**":

[EMAIL PROTECTED] ~]# cat /proc/partitions
major minor  #blocks  name

   3     0  117220824 hda
   3     1     104391 hda1
   3     2    2008125 hda2
   3     3  115105725 hda3
   3    64  117220824 hdb
   3    65     104391 hdb1
   3    66    2008125 hdb2
   3    67  115105725 hdb3
   8     0  390711384 sda
   8     1  390708801 sda1
   8    16  390711384 sdb
   8    17  390708801 sdb1
   8    32  390711384 sdc
   8    33  390708801 sdc1
   8    48  390711384 sdd
   8    49  390708801 sdd1
   8    64  390711384 sde
   8    65  390708801 sde1
   8    80  390711384 sdf
   8    81  390708801 sdf1
   8    96  390711384 sdg
   8    97  390708801 sdg1
   8   112  390711384 sdh
   8   113  390708801 sdh1
   8   128  390711384 sdi
   8   129  390708801 sdi1
   8   144  390711384 sdj
   8   145  390708801 sdj1
   8   160  390711384 sdk
   8   161  390708801 sdk1
   8   176  390711384 sdl
   8   177  390708801 sdl1
   8   192  390711384 sdm
   8   193  390708801 sdm1
   8   208  390711384 sdn
   8   209  390708801 sdn1
   8   224  390711384 sdo
   8   225  390708801 sdo1
   8   240  390711384 sdp
   8   241  390708801 sdp1
  65     0  390711384 sdq
  65     1  390708801 sdq1
  65    16  390711384 sdr
  65    17  390708801 sdr1
  65    32  390711384 sds
  65    33  390708801 sds1
  65    48  390711384 sdt
  65    49  390708801 sdt1
  65    64  390711384 sdu
  65    65  390708801 sdu1
  65    80  390711384 sdv
  65    81  390708801 sdv1
**
  65    96  390711384 sdw
  65    97  390708801 sdw1
**
  65   112  390711384 sdx
  65   113  390708801 sdx1
  65   128  390711384 sdy
  65   129  390708801 sdy1
  65   144  390711384 sdz
  65   145  390708801 sdz1
  65   160  390711384 sdaa
  65   161  390708801 sdaa1
  65   176  390711384 sdab
  65   177  390708801 sdab1
  65   192  390711384 sdac
  65   193  390708801 sdac1
  65   208  390711384 sdad
  65   209  390708801 sdad1
  65   224  390711384 sdae
  65   225  390708801 sdae1
  65   240  390711384 sdaf
  65   241  390708801 sdaf1
**
   9     0     104320 md0
**
   9     2 5860631040 md2
   9     1  115105600 md1



-- 

Regards,
Maurice



strange RAID5 problem

2006-05-08 Thread Maurice Hilarius
Good evening.

I am having a bit of a problem with a largish RAID5 set.
Now it is looking more and more like I am about to lose all the data on
it, so I am asking (begging?) to see if anyone can help me sort this out.


Here is the scenario: 16 SATA  disks connected to a pair of AMCC(3Ware)
9550SX-12 controllers.

RAID 5, 15 disks, plus 1 hot spare.

SMART started reporting errors on a disk, so it was retired with the
3Ware CLI, then removed and replaced.
The new disk had a JBOD signature added with the 3Ware CLI, then a
single large partition was created with fdisk.

At this point I would expect to be able to add the disk back to the
array by:
[EMAIL PROTECTED] ~]# mdadm /dev/md3 -a /dev/sdw1

But, I get this error message:
mdadm: hot add failed for /dev/sdw1: No such device

What? We just made the partition on sdw a moment ago in fdisk. It IS there!

So, we look around a bit:
# cat /proc/mdstat

md3 : inactive sdq1[0] sdaf1[15] sdae1[14] sdad1[13] sdac1[12] sdab1[11]
sdaa1[10] sdz1[9] sdy1[8] sdx1[7] sdv1[5] sdu1[4] sdt1[3] sds1[2]
sdr1[1]
  5860631040 blocks

Yup, that looks correct, missing sdw1[6]

Looking more:
# mdadm -D /dev/md3

/dev/md3:
        Version : 00.90.01
  Creation Time : Tue Jan 10 19:21:23 2006
     Raid Level : raid5
    Device Size : 390708736 (372.61 GiB 400.09 GB)
   Raid Devices : 16
  Total Devices : 15
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Mon May  8 19:33:36 2006
          State : active, degraded
 Active Devices : 15
Working Devices : 15
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : 771aa4c0:48d9b467:44c847e2:9bc81c43
         Events : 0.1818687

    Number   Major   Minor   RaidDevice State
       0      65        1        0      active sync   /dev/sdq1
       1      65       17        1      active sync   /dev/sdr1
       2      65       33        2      active sync   /dev/sds1
       3      65       49        3      active sync   /dev/sdt1
       4      65       65        4      active sync   /dev/sdu1
       5      65       81        5      active sync   /dev/sdv1
       6       0        0        6      removed
       7      65      113        7      active sync   /dev/sdx1
       8      65      129        8      active sync   /dev/sdy1
       9      65      145        9      active sync   /dev/sdz1
      10      65      161       10      active sync   /dev/sdaa1
      11      65      177       11      active sync   /dev/sdab1
      12      65      193       12      active sync   /dev/sdac1
      13      65      209       13      active sync   /dev/sdad1
      14      65      225       14      active sync   /dev/sdae1
      15      65      241       15      active sync   /dev/sdaf1

That also looks to be as expected.

So, let's try to assemble it again and force sdw1 into it:

[EMAIL PROTECTED] ~]# mdadm
--assemble /dev/md3 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1
/dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1
/dev/sdac1 /dev/sdad1 /dev/sdae1 /dev/sdaf1
mdadm: superblock on /dev/sdw1 doesn't match others - assembly aborted

[EMAIL PROTECTED] ~]# mdadm
--assemble /dev/md3 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1
/dev/sdv1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1
/dev/sdad1 /dev/sdae1 /dev/sdaf1
mdadm: failed to RUN_ARRAY /dev/md3: Invalid argument

[EMAIL PROTECTED] ~]# mdadm
-A /dev/md3 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1
/dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1
/dev/sdad1 /dev/sdae1 /dev/sdaf1
mdadm: device /dev/md3 already active - cannot assemble it

[EMAIL PROTECTED] ~]# cat /proc/mdstat
Personalities : [raid1] [raid5]
md1 : active raid1 hdb3[1] hda3[0]
  115105600 blocks [2/2] [UU]

md2 : active raid5 sdp1[15] sdo1[14] sdn1[13] sdm1[12] sdl1[11] sdk1[10]
sdj1[9] sdi1[8] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
sda1[0]
  5860631040 blocks level 5, 256k chunk, algorithm 2 [16/16] [UUUUUUUUUUUUUUUU]

md3 : inactive sdq1[0] sdaf1[15] sdae1[14] sdad1[13] sdac1[12] sdab1[11]
sdaa1[10] sdz1[9] sdy1[8] sdx1[7] sdv1[5] sdu1[4] sdt1[3] sds1[2]
sdr1[1]
  5860631040 blocks
md0 : active raid1 hdb1[1] hda1[0]
  104320 blocks [2/2] [UU]

unused devices: <none>

[EMAIL PROTECTED] ~]# mdadm /dev/md3 -a /dev/sdw1
mdadm: hot add failed for /dev/sdw1: No such device

OK, let's mount the degraded RAID and try to copy the files to somewhere
else, so we can make it from scratch:

[EMAIL PROTECTED] ~]# mount /dev/md3 /all/boxw16/
/dev/md3: Invalid argument
mount: /dev/md3: can't read superblock

[EMAIL PROTECTED] ~]# fsck /dev/md3
fsck 1.35 (28-Feb-2004)
e2fsck 1.35 (28-Feb-2004)
fsck.ext2: Invalid argument while trying to open /dev/md3

The superblock could not be read..

[EMAIL PROTECTED] ~]# mke2fs -n /dev/md3
mke2fs 1.35 (28-Feb-2004)
mke2fs: Device size reported to be zero.  Invalid partition specified,
or partition table wasn't reread 

Re: RAID5 recovery trouble, bd_claim failed?

2006-04-19 Thread Maurice Hilarius
Nate Byrnes wrote:
> Hi All,
>I'm not sure that is entirely the case. From a hardware
> perspective, I can access all the disks from the OS, via fdisk and dd.
> It is really just mdadm that is failing.  Would I still need to work
> the jumper issue?
>Thanks,
>Nate
>
IF the disks are as we suspect (master and slave relationships) and IF
you now have either a failed or a removed drive, then you  MUST correct
the jumpering.
Sure, you can often see a disk that is misconfigured.
It is almost certain, however, that when you write to it you will simply
cause corruption on it.

Of course, so far this is all speculation, as you have not actually said
what the disks, controller interfaces, jumpering, and so forth actually are.
I was merely speculating, based on what you have said.

No amount of software magic will "cure" a hardware problem..


-- 

With our best regards,


Maurice W. Hilarius    Telephone: 01-780-456-9771
Hard Data Ltd.         FAX:       01-780-456-9772
11060 - 166 Avenue     email: [EMAIL PROTECTED]
Edmonton, AB, Canada   http://www.harddata.com/
T5X 1Y3



Re: RAID5 recovery trouble, bd_claim failed?

2006-04-19 Thread Maurice Hilarius
Nathanial Byrnes wrote:
> Yes, I did not have the funding nor approval to purchase more hardware
> when I set it up (read wife). Once it was working... the rest is
> history.
>
>   

OK, so if you have a pair of IDE disks, jumpered as Master and slave,
and if one fails:

If the Master failed, re-jumper the remaining disk on that cable as
Master, with no slave present.

If the Slave failed, re-jumper the remaining disk on that cable as
Master, with no slave present.

Then you will have the remaining disk working normally, at least.

When you can afford it I suggest buying a controller with enough ports
to support the number of drives you have, with no Master/Slave pairing.

Good luck !

And to the  software guys trying to help: We need to start with the
(obvious) hardware problem, before we advise on how to recover data from
a borked system..
Once he has the jumpering on the drives sorted out, the drive that went
missing will be back again..


-- 

Regards,
Maurice



Re: RAID5 recovery trouble, bd_claim failed?

2006-04-18 Thread Maurice Hilarius
Nathanial Byrnes wrote:
> Hi All,
>   Recently I lost a disk in my raid5 SW array. It seems that it took a
> second disk with it. The other disk appears to still be functional (from
> an fdisk perspective...). I am trying to get the array to work in
> degraded mode via failed-disk in raidtab, but am always getting the
> following error:
>
>   
Let me guess:
IDE disks, in pairs.
Jumpered as Master and Slave.

Right?





-- 

With our best regards,


Maurice W. Hilarius    Telephone: 01-780-456-9771
Hard Data Ltd.         FAX:       01-780-456-9772
11060 - 166 Avenue     email: [EMAIL PROTECTED]
Edmonton, AB, Canada   http://www.harddata.com/
T5X 1Y3



Questions about: Where to find algorithms for RAID5 / RAID6

2006-04-11 Thread Maurice Hilarius
Good day.

I am looking for some information, and hope the readers of this list
might be able to point me in the right direction:

Here is the scenario:
In RAID5 (or RAID6), when a file is written, some parity data is
created (by some form of XOR process, I assume), then that parity data
is written to disk.

I am looking to find the algorithm that is used to create that parity
data and that decides where to place it on the disks.
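
My understanding so far, for what it is worth: in RAID5 the parity chunk of
each stripe is the bytewise XOR of the data chunks in that stripe,
P = D0 xor D1 xor ... xor D(n-1), so any one missing chunk can be rebuilt by
XOR-ing the survivors, and with the default "left-symmetric" layout the
parity chunk rotates to a different disk on each successive stripe. RAID6
adds a second syndrome, Q, computed over the Galois field GF(2^8). I gather
the Linux implementation lives under drivers/md/ in the kernel source, and
H. Peter Anvin's paper "The mathematics of RAID-6" describes the Q algorithm.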

Any help on this is deeply appreciated.

-- 

With our best regards,


Maurice W. Hilarius    Telephone: 01-780-456-9771
Hard Data Ltd.         FAX:       01-780-456-9772
11060 - 166 Avenue     email: [EMAIL PROTECTED]
Edmonton, AB, Canada   http://www.harddata.com/
T5X 1Y3




Re: Real Time Mirroring of a NAS

2006-04-10 Thread Maurice Hilarius
andy liebman wrote:
> ..
> Thanks for your reply, and the suggestions of others. I'm going to
> look into both NBD and DRBD.
>
> Actually, I see that my idea to export an iSCSI target from Server B,
> mount it on A, and just create a RAID1 array with the two block
> devices must be very similar to what DRBD is doing, but my guess is
> that DRBD, with its "heartbeat" signal, is probably more robust at
> error handling. I'd love to hear from somebody who has experience with
> DRBD.
>
> By the way, I use 3ware 9550SX cards. On a 16 drive RAID-5 SATA array,
> I can get sequential reads that top 600 MBs/sec. That's megabytes, not
> megabits. And write speeds are close to 400 MB/sec with the new faster
> on-board XOR processing. And random reads are at least 200 MB/sec. So,
> 10 GbE is a must, really.
>
> Andy
>
Hi Andy.

A couple of other suggestions that may prove helpful:

1) EVMS
http://evms.sourceforge.net/


2) Lustre
http://www.clusterfs.com/
http://www.lustre.org/



-- 

With our best regards,


Maurice W. Hilarius    Telephone: 01-780-456-9771
Hard Data Ltd.         FAX:       01-780-456-9772
11060 - 166 Avenue     email: [EMAIL PROTECTED]
Edmonton, AB, Canada   http://www.harddata.com/
T5X 1Y3



Re: ANNOUNCE: mdadm 2.4 - A tool for managing Soft RAID under Linux

2006-03-30 Thread Maurice Hilarius
Neil Brown wrote:
> I am pleased to announce the availability of
>mdadm version 2.4
> ..
>
> Release 2.4 primarily adds support for increasing the number of
> devices in a RAID5 array, which requires 2.6.17 (or some -rc or -mm
> prerelease).
> ..
Is there a corresponding means to increase the size of a file system to
use this?
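
What I am picturing, in rough form -- untested, placeholder device names, and
assuming an ext3 filesystem sitting directly on the array:

  # add a disk and reshape from 4 to 5 members (needs the 2.6.17 kernel)
  mdadm --add /dev/md0 /dev/sdf1
  mdadm --grow /dev/md0 --raid-devices=5

  # once the reshape completes, grow the filesystem to match
  resize2fs /dev/md0     # or ext2online on older setups
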
> -   Allow --monitor to work with arrays with >28 devices
>   
So, how DO we get past the old 26-device "alphabet limit"?

Thanks, as always, for the great work, Neil.



-- 

With our best regards,


Maurice W. Hilarius    Telephone: 01-780-456-9771
Hard Data Ltd.         FAX:       01-780-456-9772
11060 - 166 Avenue     email: [EMAIL PROTECTED]
Edmonton, AB, Canada   http://www.harddata.com/
T5X 1Y3
