Re: Problem with 3xRAID1 to RAID 0

2006-07-12 Thread Jim Klimov
Hello Vladimir,

Tuesday, July 11, 2006, 11:41:31 AM, you wrote:

VS Hi,

VS I created 3 x RAID1, /dev/md1 to /dev/md3, which consist of six identical
VS 200GB HDDs

VS my mdadm --detail --scan looks like

VS Proteus:/home/vladoportos# mdadm --detail --scan
VS ARRAY /dev/md1 level=raid1 num-devices=2
VS    UUID=d1fadb29:cc004047:aabf2f31:3f044905
VS    devices=/dev/sdb,/dev/sda
VS ARRAY /dev/md2 level=raid1 num-devices=2
VS    UUID=38babb4d:92129d4a:94d659f1:3b238c53
VS    devices=/dev/sdc,/dev/sdd
VS ARRAY /dev/md3 level=raid1 num-devices=2
VS    UUID=a0406e29:c1f586be:6b3381cf:086be0c2
VS    devices=/dev/sde,/dev/sdf
VS ARRAY /dev/md0 level=raid1 num-devices=2
VS    UUID=c04441d4:e15d900e:57903584:9eb5fea6
VS    devices=/dev/hdc1,/dev/hdd1


VS and mdadm.conf

VS DEVICE partitions
VS ARRAY /dev/md4 level=raid0 num-devices=3
VS    UUID=1c8291ba:2d83cf54:2698ce30:e49b1e6c
VS    devices=/dev/md1,/dev/md2,/dev/md3
VS ARRAY /dev/md3 level=raid1 num-devices=2
VS    UUID=a0406e29:c1f586be:6b3381cf:086be0c2
VS    devices=/dev/sde,/dev/sdf
VS ARRAY /dev/md2 level=raid1 num-devices=2
VS    UUID=38babb4d:92129d4a:94d659f1:3b238c53
VS    devices=/dev/sdc,/dev/sdd
VS ARRAY /dev/md1 level=raid1 num-devices=2
VS    UUID=d1fadb29:cc004047:aabf2f31:3f044905
VS    devices=/dev/sda,/dev/sdb
VS ARRAY /dev/md0 level=raid1 num-devices=2
VS    UUID=c04441d4:e15d900e:57903584:9eb5fea6
VS    devices=/dev/hdc1,/dev/hdd1



VS As you can see, I created a RAID0 (md4) from md1-3 and it works fine...
VS but I can't get it back after a reboot; I need to create it again...

VS I don't get why it won't assemble at boot... has anybody had a similar problem?
I haven't had a problem like this, but taking a wild guess - did you
try putting the definitions in mdadm.conf in a different order?

In particular, you define md4 before the system knows anything about
the devices md[1-3]...
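
For example, something like this (just a sketch, reordering the ARRAY
lines you already have so the RAID1 sets come before the RAID0 built
on top of them):

   DEVICE partitions
   ARRAY /dev/md0 level=raid1 num-devices=2
      UUID=c04441d4:e15d900e:57903584:9eb5fea6
      devices=/dev/hdc1,/dev/hdd1
   ARRAY /dev/md1 level=raid1 num-devices=2
      UUID=d1fadb29:cc004047:aabf2f31:3f044905
      devices=/dev/sda,/dev/sdb
   ARRAY /dev/md2 level=raid1 num-devices=2
      UUID=38babb4d:92129d4a:94d659f1:3b238c53
      devices=/dev/sdc,/dev/sdd
   ARRAY /dev/md3 level=raid1 num-devices=2
      UUID=a0406e29:c1f586be:6b3381cf:086be0c2
      devices=/dev/sde,/dev/sdf
   ARRAY /dev/md4 level=raid0 num-devices=3
      UUID=1c8291ba:2d83cf54:2698ce30:e49b1e6c
      devices=/dev/md1,/dev/md2,/dev/md3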

You can speed up the checks (I think) by using something like this
instead of a full reboot, except for the final check to see that it
all actually works :)

# stop the stacked array first, then its members
mdadm --stop /dev/md4
mdadm --stop /dev/md3
mdadm --stop /dev/md2
mdadm --stop /dev/md1

# re-assemble everything from the config file
mdadm -As
# or, to try out an alternative config file:
mdadm -Asc /etc/mdadm.conf.test

Also, you seem to have made the md[1-3] devices from whole disks.
Had you made them from partitions, you could:
1) Set the partition type to 0xfd so that a proper kernel could assemble
   your RAID1 sets at boot time and then build md4 correctly even
   with the current config file (see the sketch after this list).
2) Move the submirrors to another disk (e.g. a new, larger one)
   if you needed to rebuild, upgrade, recover, etc., by just making
   a new partition of the same size.
   Also keep in mind that 200GB (and any other) disks from different
   makers and models can vary in size by several tens of megabytes...
   That bit me once with certain 36GB SCSI disks which were somewhat
   larger than any of the competition, so we had to hunt for the same
   model to rebuild our array.
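
For instance (a hypothetical sketch only; repartitioning and re-creating
an array wipes what is on it, so this is for a fresh setup or after a
full backup, and the device names are just placeholders):

   # mark each disk with a single whole-disk partition of type 0xfd
   # (Linux raid autodetect)
   echo ',,fd' | sfdisk /dev/sda
   echo ',,fd' | sfdisk /dev/sdb
   # then build the mirror from the partitions instead of the raw disks
   mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1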

A question to the general public: am I wrong? :)
Are there any actual bonuses to making RAIDs on whole raw disks?

-- 
Best regards,
 Jim Klimov    mailto:[EMAIL PROTECTED]

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Problem with 3xRAID1 to RAID 0

2006-07-12 Thread Mario 'BitKoenig' Holbe
Jim Klimov [EMAIL PROTECTED] wrote:
> Are there any actual bonuses to making RAIDs on whole raw disks?

You usually win 63 sectors (63 x 512 bytes, i.e. roughly 32k): the offset
that the first partition would otherwise leave for the partition table.


regards
   Mario
-- 
*axiom* which sensory input triggered the output action
of removing the IRC chatter with the nick dus from the IRC server
by means of a kill?



Re: Problem with 3xRAID1 to RAID 0

2006-07-12 Thread Christian Pernegger

> Are there any actual bonuses to making RAIDs on whole raw disks?


Not if you're using regular md devices.

For partitionable md arrays using partitions seems a little strange to
me, since you then have partitions on a partition. That'd probably
make it difficult to just mount a single member of a mirror for data
recovery, etc ...

And Neil seems to favour initrd over kernel auto-detection / assembly anyhow.

FWIW, I use whole disks and limit the space used per disk to exactly
the rated capacity, i.e. floor((GB * 10^9) / 1024) blocks with the
--size parameter.
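
For a nominal 200GB disk that comes out to floor(200 * 10^9 / 1024) =
195312500 (--size is given in 1K blocks per device), so something like
this, with placeholder device names:

   mdadm --create /dev/md1 --level=1 --raid-devices=2 \
         --size=195312500 /dev/sda /dev/sdb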

Regards,

C.


Re: only 4 spares and no access to my data

2006-07-12 Thread Molle Bestefich

Karl Voit wrote:
>     if (super == NULL) {
>         fprintf(stderr, Name ": No suitable drives found for %s\n", mddev);
> [...]
>
> Well, I guess the message will be shown if the superblock is not found.


Yes.  No clue why; my best guess is that you've already zeroed the superblock.
What does mdadm --query / --examine say about /dev/sd[abcd]? Are there
superblocks?
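
That is, roughly:

   mdadm --query /dev/sd[abcd]
   mdadm --examine /dev/sd[abcd]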


>     st = guess_super(fd);
>     if (st == NULL) {
>         if (!quiet)
>             fprintf(stderr, Name ": Unrecognised md component device - %s\n",
>                     dev);
>
> Again: this seems to be the case when the superblock is empty.


Yes, it looks like it can't find any usable superblocks.
Maybe you've accidentally zeroed the superblocks on sd[abcd]1 as well?

If you run fdisk -l /dev/sd[abcd], do the partition tables look like
they should / like they used to?

What does mdadm --query / --examine /dev/sd[abcd]1 tell you? Any superblocks?
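
In other words:

   fdisk -l /dev/sd[abcd]            # partition tables still intact?
   mdadm --examine /dev/sd[abcd]1    # any md superblocks on the partitions?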


> Since my miserable failure I am probably too careful *g*
>
> The problem is also that, without deeper background knowledge, I cannot
> predict whether this or that permanently affects the real data on the disks.


My best guess is that it's OK and you won't lose data if you run
--zero-superblock on /dev/sd[abcd] and then create an array on
/dev/sd[abcd]1, but I do find it odd that it suddenly can't find
superblocks on /dev/sd[abcd]1.
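
Roughly this, that is (a sketch only: double-check the device names first,
and the --create line below just uses a 4-disk RAID5 on /dev/md0 as a
placeholder; substitute whatever your array actually was):

   # wipe the stale superblocks on the whole-disk devices
   mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd
   # re-create the array on the partitions (level and md name are placeholders)
   mdadm --create /dev/md0 --level=5 --raid-devices=4 \
         /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1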


> Maybe a person like me starts to think that SW RAID tools like
> mdadm should warn users before permanent changes are executed. If
> mdadm is to be used by regular users (in addition to RAID geeks like you),
> it might be a good idea to prevent data loss. (Meant as a suggestion.)


Perhaps.  Or perhaps mdadm should just tell you that you're doing
something stupid if you try to manipulate arrays on a block device
which seems to contain a partition table.

It's not like it's even remotely useful to create an MD array spanning
the whole disk rather than spanning a partition which spans the whole
disk, anyway.
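
A check like that would not be hard, either; a rough sketch that just
looks for the 0x55AA MBR signature at offset 510 of the device:

   sig=$(dd if=/dev/sda bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
   if [ "$sig" = "55aa" ]; then
       echo "warning: /dev/sda appears to contain a partition table" >&2
   fi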