I would echo the RAID 10 suggestion, but we don't know your needs.
On Oct 20, 2011 10:55 PM, "Eric Shubert" <e...@shubes.net> wrote:

> On 10/19/2011 01:06 AM, James Dugger wrote:
>
>> I am trying to build a NAS using Ubuntu Server 10.04. I am using the
>> following system:
>>
>> Intel/Pentium 4 2.6GHz
>> 1GB Ram
>> Silicon Image 4 port SATA/RAID controller (fakeRAID)
>> 4 - 1TB HDD
>>
>> The HDDs are drives I have used in the past to test and build different
>> RAID configs using mdadm. I am trying to build a RAID 1 with 4 disks
>> using the following:
>>
>> Code:
>>
>> mdadm -Cv -l1 -n4 /dev/md0 /dev/sd{b,c,d,e}1
>>
>> When I try this I get a "Device or resource busy" error. When I cat
>> /proc/mdstat I get the following:
>>
>> Quote:
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
>> [raid4] [raid10]
>> md3 : active raid10 sdd1[3] sdb1[1] sdc1[0] sda1[2]
>> 1953519872 blocks 64K chunks 2 near-copies [4/4] [UUUU]
>>
>> unused devices: <none>
>>
>> And when I examine the partitions with mdadm:
>> Code:
>>
>> mdadm --examine /dev/sd{a,b,c,d}1
>>
>> I get:
>> Quote:
>> /dev/sda1:
>> Magic : a92b4efc
>> Version : 00.90.00
>> UUID : 61f714b5:fe18b25f:adc23f65:cc5a51de
>> Creation Time : Sun Jul 10 02:59:45 2011
>> Raid Level : raid10
>> Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>> Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
>> Raid Devices : 4
>> Total Devices : 4
>> Preferred Minor : 3
>>
>> Update Time : Mon Oct 17 11:33:19 2011
>> State : clean
>> Active Devices : 4
>> Working Devices : 4
>> Failed Devices : 0
>> Spare Devices : 0
>> Checksum : 5a46c420 - correct
>> Events : 98
>>
>> Layout : near=2, far=1
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 2 8 17 2 active sync /dev/sdb1
>>
>> 0 0 8 49 0 active sync /dev/sdd1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 17 2 active sync /dev/sdb1
>> 3 3 8 65 3 active sync /dev/sde1
>> /dev/sdb1:
>> Magic : a92b4efc
>> Version : 00.90.00
>> UUID : 61f714b5:fe18b25f:adc23f65:cc5a51de
>> Creation Time : Sun Jul 10 02:59:45 2011
>> Raid Level : raid10
>> Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>> Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
>> Raid Devices : 4
>> Total Devices : 4
>> Preferred Minor : 3
>>
>> Update Time : Mon Oct 17 11:33:19 2011
>> State : clean
>> Active Devices : 4
>> Working Devices : 4
>> Failed Devices : 0
>> Spare Devices : 0
>> Checksum : 5a46c42e - correct
>> Events : 98
>>
>> Layout : near=2, far=1
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 1 8 33 1 active sync /dev/sdc1
>>
>> 0 0 8 49 0 active sync /dev/sdd1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 17 2 active sync /dev/sdb1
>> 3 3 8 65 3 active sync /dev/sde1
>> /dev/sdc1:
>> Magic : a92b4efc
>> Version : 00.90.00
>> UUID : 61f714b5:fe18b25f:adc23f65:cc5a51de
>> Creation Time : Sun Jul 10 02:59:45 2011
>> Raid Level : raid10
>> Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>> Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
>> Raid Devices : 4
>> Total Devices : 4
>> Preferred Minor : 3
>>
>> Update Time : Mon Oct 17 11:33:19 2011
>> State : clean
>> Active Devices : 4
>> Working Devices : 4
>> Failed Devices : 0
>> Spare Devices : 0
>> Checksum : 5a46c43c - correct
>> Events : 98
>>
>> Layout : near=2, far=1
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 0 8 49 0 active sync /dev/sdd1
>>
>> 0 0 8 49 0 active sync /dev/sdd1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 17 2 active sync /dev/sdb1
>> 3 3 8 65 3 active sync /dev/sde1
>> /dev/sdd1:
>> Magic : a92b4efc
>> Version : 00.90.00
>> UUID : 61f714b5:fe18b25f:adc23f65:cc5a51de
>> Creation Time : Sun Jul 10 02:59:45 2011
>> Raid Level : raid10
>> Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>> Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
>> Raid Devices : 4
>> Total Devices : 4
>> Preferred Minor : 3
>>
>> Update Time : Mon Oct 17 11:33:19 2011
>> State : clean
>> Active Devices : 4
>> Working Devices : 4
>> Failed Devices : 0
>> Spare Devices : 0
>> Checksum : 5a46c452 - correct
>> Events : 98
>>
>> Layout : near=2, far=1
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 3 8 65 3 active sync /dev/sde1
>>
>> 0 0 8 49 0 active sync /dev/sdd1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 17 2 active sync /dev/sdb1
>> 3 3 8 65 3 active sync /dev/sde1
>>
>> This indicates that a RAID 10 device, md3, exists and is active. I have
>> tried to stop the device with mdadm -S and even tried to zero out the
>> superblocks for /dev/sd{a,b,c,d}1, but I get a device busy error again.
>>
>> I have reformatted the disks and even zeroed them out using dd; however,
>> the array still comes back. I went into the fakeRAID setup just before the
>> BIOS posts and checked the status of the card. It reports "No RAID device
>> to delete" when I try to delete any configuration on the card. fdisk -l
>> gives the following:
>>
>> Quote:
>> Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x00011e81
>>
>> Device Boot Start End Blocks Id System
>> /dev/sda1 1 121601 976760001 83 Linux
>>
>> Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x5482f9cf
>>
>> Device Boot Start End Blocks Id System
>> /dev/sdb1 1 121601 976760001 83 Linux
>>
>> Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x2d6b9798
>>
>> Device Boot Start End Blocks Id System
>> /dev/sdc1 1 121601 976760001 83 Linux
>>
>> Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x3d04945c
>>
>> Device Boot Start End Blocks Id System
>> /dev/sdd1 1 121601 976760001 83 Linux
>>
>> Disk /dev/sde: 500.1 GB, 500107862016 bytes
>> 255 heads, 63 sectors/track, 60801 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x000997d2
>>
>> Device Boot Start End Blocks Id System
>> /dev/sde1 * 1 32 248832 83 Linux
>> Partition 1 does not end on cylinder boundary.
>> /dev/sde2 32 60802 488134657 5 Extended
>> /dev/sde5 32 60802 488134656 8e Linux LVM
>>
>> Disk /dev/md3: 2000.4 GB, 2000404348928 bytes
>> 2 heads, 4 sectors/track, 488379968 cylinders
>> Units = cylinders of 8 * 512 = 4096 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 65536 bytes / 131072 bytes
>> Disk identifier: 0x00000000
>>
>> Device Boot Start End Blocks Id System
>>
>> Notice the md3 device at the bottom of the fdisk output.
>>
>> Any help with this would be greatly appreciated.
>>
>> Thanks
>>
>> --
>> James
>>
>>
>>
> Wow.
> Where'd md3 come from? It appears to be a raid10 array.
>
> From the --examine output it looks like the device assignments (/dev/sd?) have
> moved around since the array was created (sda now belongs to an array that was
> built from d, c, b, and e).
>
> Have a look at:
> # ls -l /dev/disk/by-id
> and it'll show which drives are assigned to which /dev/sd? letter.
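> For instance, with a hypothetical by-id name, you can resolve a single entry
> to see which letter it has right now:
> # readlink -f /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL-part1
> and it'll print the /dev/sd?1 node that symlink currently points to.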
>
> Then (w/out rebooting) take another crack at clearing things out, with:
> # mdadm --zero-superblock ...
>
> Then re-create/build the array.
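>
> Putting it all together, roughly (just a sketch; I'm assuming md3 is the
> stale array and that the four 1TB data drives are currently sda-sdd, as the
> fdisk output shows, so double-check against by-id first):
> # mdadm --stop /dev/md3                       (stop the stale raid10 array)
> # mdadm --zero-superblock /dev/sd{a,b,c,d}1   (wipe the old superblocks)
> # mdadm --create /dev/md0 --verbose --level=1 --raid-devices=4 /dev/sd{a,b,c,d}1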
>
> P.S. Mirroring 4 drives with raid-1 seems like excessive redundancy to me.
> Any reason for not going with raid-10, or at least a hot spare or two?
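>
> If you did go raid-10, the create line would look roughly like this (same
> drive assumptions as above):
> # mdadm --create /dev/md0 --verbose --level=10 --raid-devices=4 /dev/sd{a,b,c,d}1
> Or, for a plain 2-disk mirror plus two hot spares:
> # mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 --spare-devices=2 /dev/sd{a,b,c,d}1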
>
> --
> -Eric 'shubes'
>
---------------------------------------------------
PLUG-discuss mailing list - PLUG-discuss@lists.plug.phoenix.az.us
To subscribe, unsubscribe, or to change your mail settings:
http://lists.PLUG.phoenix.az.us/mailman/listinfo/plug-discuss
