Re: Problems with RAID build in ubuntu Server 10.04

2011-10-22 Thread James Dugger
Matt, Eric, thanks for your thoughts.  Sorry for the confusion: my mdadm code
reference was wrong; the device names actually changed between the first
install and the reboot.  The boot drive is the smaller 500GB HDD (not part of
the RAID); Ubuntu renamed it from sda to sde after reboot.  I found an ARRAY
line for md3 in the mdadm.conf file.  Once I renamed the mdadm.conf file and
rebooted, cat /proc/mdstat showed no RAID devices active.  I then zeroed the
drives using dd and started over, building a new RAID 10 with the device name
md4 (just to be sure).  So far mdstat recognizes the array and
--examine shows no errors in the array.
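In case it helps anyone else who finds this thread, the steps above amount to
roughly the following sketch. The device names are from my box (double-check
yours before running anything destructive), the .bak name is just an example,
and mdadm options vary somewhat between versions:

```shell
# Stop the stale array so its members are no longer "busy".
mdadm --stop /dev/md3

# Wipe the old md superblocks from each member partition.
mdadm --zero-superblock /dev/sd{a,b,c,d}1

# Belt and braces: zero the start of each disk as well
# (I zeroed more with dd; a few MB at the front is often enough).
for d in /dev/sd{a,b,c,d}; do
    dd if=/dev/zero of="$d" bs=1M count=10
done

# Move the stale config aside so nothing reassembles md3 at boot.
mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak

# Re-create the array as RAID 10 under a fresh name.
mdadm --create --verbose /dev/md4 --level=10 --raid-devices=4 /dev/sd{a,b,c,d}1

# Verify.
cat /proc/mdstat
mdadm --examine /dev/sd{a,b,c,d}1
```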

Thanks again for your thoughts.

On Fri, Oct 21, 2011 at 11:06 AM, Matt Graham danceswithcr...@usa.net wrote:

 [quoted text snipped; Matt's full message appears later in this archive]




-- 
James
---
PLUG-discuss mailing list - PLUG-discuss@lists.plug.phoenix.az.us
To subscribe, unsubscribe, or to change your mail settings:
http://lists.PLUG.phoenix.az.us/mailman/listinfo/plug-discuss

Re: Problems with RAID build in ubuntu Server 10.04

2011-10-21 Thread Stephen
I would echo the RAID 10 suggestion, but we don't know your needs.
On Oct 20, 2011 10:55 PM, Eric Shubert e...@shubes.net wrote:

 [quoted text snipped; Eric's full message, quoting James's original post,
 appears later in this archive]

Re: Problems with RAID build in ubuntu Server 10.04

2011-10-21 Thread Matt Graham
From: Eric Shubert e...@shubes.net
 On 10/19/2011 01:06 AM, James Dugger wrote:
 fdisk -l gives the following:
 /dev/sda1 1 121601 976760001 83 Linux
 /dev/sdb1 1 121601 976760001 83 Linux
 /dev/sdc1 1 121601 976760001 83 Linux
 /dev/sdd1 1 121601 976760001 83 Linux

 /dev/sde1 * 1 32 248832 83 Linux
 /dev/sde2 32 60802 488134657 5 Extended
 /dev/sde5 32 60802 488134656 8e Linux LVM

So sde contains /boot and / , while you'd like to have sda..sdd contain the
RAID.  This should work, but for some reason mdadm says:

 0 0 8 49 0 active sync /dev/sdd1
 1 1 8 33 1 active sync /dev/sdc1
 2 2 8 17 2 active sync /dev/sdb1
 3 3 8 65 3 active sync /dev/sde1

I'm very surprised anything's working at all if it's trying to use sde1 as a
component of the RAID.

 Notice the md3 device at the bottom of the fdisk print out.

md devices can contain partition tables if you really want them to.  Some
people do this; I wouldn't.

 Looks like from the --examine that the device assignments (/dev/sd?) 
 have moved around since the array was created (sda belongs to an array 
 consisting of d,c,b,e).

Hm.  I thought mdadm went by UUIDs within the RAID superblocks, not partition
names.

 Have a look at:
 # ls -l /dev/disk/by-id
 and it'll show which drives are assigned to which /dev/sd? letter.
 
 Then (w/out rebooting) take another crack at clearing things out, with:
 # mdadm --zero-superblock ...

 Then re-create/build the array.

This may not work properly if mdadm has the superblocks tangled up.  You could
zorch the superblocks yourself with dd, something like dd if=/dev/zero
of=/dev/sda bs=1M seek=1013760 , repeated for each of the disks.  (That's
seek 990G out and start writing zeroes; it'll take a lot less time than
dd'ing /dev/zero over the entire 1T disk.)  Then stop md3 (if possible without
rebooting), then recreate it, using the right options for RAID10 and the right
disk names.  I'd change the partition types of the softRAID components to 0xfd
too, just because that makes it a little clearer as to what's going on, but
that may be old-school or deprecated now or something.
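(Editor's note for later readers: with 0.90 metadata, which is what these
superblocks are, the superblock sits in the last 64 KiB of the *partition*,
aligned down to a 64 KiB boundary, so the offset can be computed from the size
fdisk reports rather than estimated. A rough sketch, using the
976760001-block partitions from this thread:)

```shell
# Partition size from fdisk: 976760001 blocks of 1 KiB each.
size_bytes=$(( 976760001 * 1024 ))

# md 0.90 superblock offset within the partition:
# last 64 KiB, aligned down to a 64 KiB (65536-byte) boundary.
sb_offset=$(( size_bytes / 65536 * 65536 - 65536 ))

echo "superblock at byte $sb_offset (about $(( sb_offset / 1073741824 )) GiB in)"
```

On these disks that lands around 931 GiB in, while seek=1013760 with bs=1M
works out to 990 GiB (dd's M is MiB), i.e. past the end of a 931.5 GiB
partition, so it is worth checking the arithmetic against the real device
size. mdadm --zero-superblock does this calculation itself when it cooperates.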

Have a rescue CD handy if your /boot has been eaten by this mdadm
misadventure.  That's workaroundable, just not usually all that fun.

-- 
Matt G / Dances With Crows
The Crow202 Blog:  http://crow202.org/wordpress/
There is no Darkness in Eternity/But only Light too dim for us to see



Re: Problems with RAID build in ubuntu Server 10.04

2011-10-20 Thread Eric Shubert

On 10/19/2011 01:06 AM, James Dugger wrote:

I am trying to build a NAS using Ubuntu Server 10.04. I am using the
following system:

Intel/Pentium 4 2.6GHz
1GB Ram
Silicon Image 4 port SATA/RAID controller (fakeRAID)
4 - 1TB HDD

The HDDs are drives I have used in the past to test and build different
RAID configs using mdadm. I am trying to build a RAID 1 with 4 disks
using the following:

Code:

mdadm -Cv -l1 -n4 /dev/md0 /dev/sd{b,c,d,e}1

When I try this I get a "Device or resource busy" error. When I cat
/proc/mdstat I get the following:

Quote:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md3 : active raid10 sdd1[3] sdb1[1] sdc1[0] sda1[2]
1953519872 blocks 64K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>

And when I examine the partitions with mdadm:
Code:

mdadm --examine /dev/sd{a,b,c,d}1

I get:
Quote:
/dev/sda1:
Magic : a92b4efc
Version : 00.90.00
UUID : 61f714b5:fe18b25f:adc23f65:cc5a51de
Creation Time : Sun Jul 10 02:59:45 2011
Raid Level : raid10
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 3

Update Time : Mon Oct 17 11:33:19 2011
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 5a46c420 - correct
Events : 98

Layout : near=2, far=1
Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 8 17 2 active sync /dev/sdb1

0 0 8 49 0 active sync /dev/sdd1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 17 2 active sync /dev/sdb1
3 3 8 65 3 active sync /dev/sde1
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.00
UUID : 61f714b5:fe18b25f:adc23f65:cc5a51de
Creation Time : Sun Jul 10 02:59:45 2011
Raid Level : raid10
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 3

Update Time : Mon Oct 17 11:33:19 2011
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 5a46c42e - correct
Events : 98

Layout : near=2, far=1
Chunk Size : 64K

Number Major Minor RaidDevice State
this 1 8 33 1 active sync /dev/sdc1

0 0 8 49 0 active sync /dev/sdd1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 17 2 active sync /dev/sdb1
3 3 8 65 3 active sync /dev/sde1
/dev/sdc1:
Magic : a92b4efc
Version : 00.90.00
UUID : 61f714b5:fe18b25f:adc23f65:cc5a51de
Creation Time : Sun Jul 10 02:59:45 2011
Raid Level : raid10
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 3

Update Time : Mon Oct 17 11:33:19 2011
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 5a46c43c - correct
Events : 98

Layout : near=2, far=1
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 49 0 active sync /dev/sdd1

0 0 8 49 0 active sync /dev/sdd1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 17 2 active sync /dev/sdb1
3 3 8 65 3 active sync /dev/sde1
/dev/sdd1:
Magic : a92b4efc
Version : 00.90.00
UUID : 61f714b5:fe18b25f:adc23f65:cc5a51de
Creation Time : Sun Jul 10 02:59:45 2011
Raid Level : raid10
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 3

Update Time : Mon Oct 17 11:33:19 2011
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 5a46c452 - correct
Events : 98

Layout : near=2, far=1
Chunk Size : 64K

Number Major Minor RaidDevice State
this 3 8 65 3 active sync /dev/sde1

0 0 8 49 0 active sync /dev/sdd1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 17 2 active sync /dev/sdb1
3 3 8 65 3 active sync /dev/sde1

It is indicating that there is a RAID 10 device md3 in existence and
active. However, I have tried to stop the device with the -S command and even
tried to zero out the superblock for /dev/sd{a,b,c,d}1, and I get a
device busy error again.

I have reformatted the disks and even zeroed them out using dd, however
it still comes back. I went into the fakeRAID setup just before the
BIOS posts and checked the status of the card. It reports "No
RAID device to delete" when trying to delete any possible configurations
in the card. fdisk -l gives the following:

Quote:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00011e81

Device Boot Start End Blocks Id System
/dev/sda1 1 121601 976760001 83 Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x5482f9cf

Device Boot Start End