Re: Help recreating a raid5

2006-04-03 Thread David Greaves
Neil Brown wrote:

On Sunday April 2, [EMAIL PROTECTED] wrote:
  

From some archive reading I understand that I can recreate the array using

   mdadm --create /dev/md1 -l5 -n3 /dev/sdd1 /dev/sdb1 missing

but that I need to specify the correct order for the drives.

I've not used --assume-clean, --force or --run; should I? I assume that
since it's only got 2 of 3 then it won't need the assume-clean.

The detail and dmesg data suggest that the order in the command above
is correct.

Can anyone confirm this?



Yes, that all looks correct.
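One way to double-check the member order before running --create is to
read back the slot each superblock records; a quick sketch:

   mdadm --examine /dev/sdd1
   mdadm --examine /dev/sdb1

The "this" line in each device table shows which slot the disk occupied
in the original array.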
  

Thanks Neil

That seemed to work.
Now I need to find out if I have bad hardware or if there is something
(else) wrong with libata :)

David



How is an IO size determined?

2006-04-03 Thread Raz Ben-Jehuda(caro)
Neil/Jens, hello.
I hope this is not too much bother for you.

Question: how does the pseudo device (/dev/md) change the sizes of the
IOs going down to the disks?

Explanation:
I am using software raid5 with a chunk size of 1024K and 4 disks.
I have added a hook in make_request in order to bypass the raid5 IO
path; I need to control the number and size of the IOs going down to
the disks. The hook looks like this:


static int make_request (request_queue_t *q, struct bio * bi)
{
	...
	if ( bypass_raid5 && bio_data_dir(bi) == READ )
	{
		/* Map the logical array sector onto the physical
		 * sector of the member disk that holds the data. */
		new_sector = raid5_compute_sector(bi->bi_sector,
						  raid_disks,
						  data_disks,
						  &dd_idx,
						  &pd_idx,
						  conf);

		/* Redirect the bio at the member disk and return
		 * non-zero so the block layer resubmits it there. */
		bi->bi_sector = new_sector;
		bi->bi_bdev = conf->disks[dd_idx].rdev->bdev;
		return 1;
	}
	...
}

I have compared the sizes and numbers of the IOs in the deadline
elevator. It seems that a single direct IO read of 1MB to a plain disk
is divided into two 1/2 MB request_t's (even though max_hw_sectors=2048),
whereas when I go through the raid I get three request_t's:
992 sectors, then 64 sectors, then 992 sectors.
I have also recorded the IOs arriving at make_request in this
scenario: eight 124K bios plus an additional 32K one.
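Note that both observations add up to the full 1 MB, so nothing is
lost; only the split points differ:

   992 + 64 + 992 = 2048 sectors x 512 bytes = 1024K
   8 x 124K + 32K = 992K + 32K = 1024K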

The test:
My test is simple. I am reading the device in direct IO mode;
no file system is involved.
Could you explain this? Why am I not getting two 1/2 MB requests?
Could it be the slab cache? (biovec-256)

Thank you
--
Raz


RAID rebuild I/O bandwidth

2006-04-03 Thread Yogesh Pahilwan
Hi Folks,

I am doing some research on calculating I/O performance of a raid array.

I want to test the RAID rebuild.

Can anyone explain what the raid rebuild I/O bandwidth is?
How can I set the rebuild I/O bandwidth to 10MB/sec?
How should I measure the time taken to rebuild a disk?
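Judging from what I have read so far, the rebuild throttles seem to
live under /proc/sys/dev/raid/ (values in KB/sec, per device), so I was
thinking of something like the sketch below; please correct me if this
is not the right way:

   echo 10000 > /proc/sys/dev/raid/speed_limit_min
   echo 10000 > /proc/sys/dev/raid/speed_limit_max
   cat /proc/mdstat    # shows rebuild progress and an estimated finish time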

Thanks in advance,
Yogesh



Cheap Motherboards and Linux RAID

2006-04-03 Thread Solid Computing
If you happen to be unfortunate enough to have also purchased a cheap ASUS
K8N-VM with the Nforce410 chipset: in order to get the software RAID (or
anything, for that matter) to work, you have to disable APIC. This means
APIC modules must not be loaded.
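In practice that usually means booting with the noapic option; for
example, in a GRUB config (the kernel version and root device here are
only illustrative):

   kernel /boot/vmlinuz-2.6.16 ro root=/dev/sda1 noapic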

Joe Olstad,
Solid Computing Corp
Edmonton, Canada
780-710-FAST

- Original Message -
From: Jeff Garzik [EMAIL PROTECTED]
Date: Monday, April 3, 2006 4:28 am
Subject: Re: Softraid controllers and Linux

 Jim Klimov wrote:
  Hello linux-raid,
  
I have tried several cheap RAID controllers recently (namely,
VIA VT6421, Intel 6300ESB and Adaptec/Marvell 885X6081).

The VIA one is a PCI card; the other two are built into a Supermicro
motherboard (E7520/X6DHT-G).
  
The intent was to let the BIOS of the controllers make a RAID1
mirror of two disks independently of an OS to make redundant
multi-OS booting transparent. While DOS and Windows saw their
mirrors as a single block device, Linux (FC5) accessed the
two drives separately on all adapters.
 
 You did not buy a RAID controller.
 
 http://linux-ata.org/faq-sata-raid.html
 
 If you really want to use proprietary RAID on Linux, you may use dmraid,
 but using MD for software RAID is much more robust.
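 
 For the two-disk mirror described above, the md equivalent is a single
 command (the device names here are only illustrative):
 
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1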
 
   Jeff
 
 
 



Re: I dropped 42 Lbs in 4 days

2006-04-03 Thread Technomage
Replied to this one privately. :)

As for the rest: I handle 300 spams a day (once you are on someone's
list, getting off is hard work). Still, to see it here... I found that
rather unusual (hence the comment).

Sorry about that (and yeah, I am fairly conversant with netiquette;
this one just took me by surprise, is all). :)

On Monday 03 April 2006 11:23, David Greaves wrote:


 This would be a phenomenon known as 'spam' - if you don't recognise it
 yet, don't worry, you soon will :)

 Netiquette says you ignore it, not reply to all.

 Mind you, netiquette says a lot of things that fewer and fewer people
 seem to heed...

 Google for netiquette if you are interested.

 David
 (Who knows he shouldn't have replied)


Re: Cheap Motherboards and Linux RAID

2006-04-03 Thread PFC


 If you happen to be unfortunate enough to have also purchased a cheap ASUS
 K8N-VM with the Nforce410 chipset: in order to get the software RAID (or
 anything, for that matter) to work, you have to disable APIC. This means
 APIC modules must not be loaded.

	And if you are also unfortunate enough to have bought some newer Maxtor
SATA harddrives, use the jumper on the drive to revert to SATA150 instead
of SATA300; this will prevent your drives from randomly failing (and
losing all data) every 2-5 days. If your drive has no jumper, you should
be safe (hopefully). This bug is OS-agnostic and hits Windows and Linux
alike, with nForce 3 and 4 chipsets. In the SATA150 position, the machine
has been running smoothly for a few months with no problems whatsoever.



Re: [PATCH] mdadm: monitor event argument passing

2006-04-03 Thread Paul Clements

Neil Brown wrote:

On Friday March 31, [EMAIL PROTECTED] wrote:

I've been looking at the mdadm monitor, and thought it might be useful 
if it allowed extra context information (in the form of command line 
arguments) to be sent to the event program, so instead of just:


# mdadm -F /dev/md0 -p md_event

you could do something like:

# mdadm -F /dev/md0 -p md_event -i some_info

The "-i some_info" would then be passed on the command line to the event
program. Of course, you can usually figure out what the extra context
should be in the event program itself, but it may take more work.



I would recommend the use of environment variables for this:

  md_event_context=some_info mdadm -F /dev/md0 -p md_event



Does that work for you?


Yes, that would work.
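
For completeness, here is a minimal sketch of an event program picking
that up (mdadm invokes the -p program with the event name, the md
device and, where relevant, the member device; the script and variable
names are just illustrations):

   #!/bin/sh
   # md_event: invoked by mdadm -F as: md_event EVENT MD-DEVICE [MEMBER]
   EVENT=$1
   MDDEV=$2
   MEMBER=$3
   # md_event_context is inherited from the caller's environment
   logger "md monitor: $EVENT on $MDDEV ${MEMBER:+member $MEMBER} context=$md_event_context"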

Thanks,
Paul


Re: I dropped 42 Lbs in 4 days

2006-04-03 Thread Matti Aarnio
On Mon, Apr 03, 2006 at 11:04:48AM -0700, Technomage wrote:
 pardon my asking but...
 
 HUH?!?!?

  Sometimes spams do leak through to the lists.
  How and why is explained in the LKML FAQ.

 On Monday 03 April 2006 17:46, Alice wrote:
  I lost 30lbs in
   w eeks
  snip url

/Matti Aarnio


Help needed - RAID5 recovery from Power-fail

2006-04-03 Thread Nigel J. Terry
I wonder if you could help a RAID newbie with a problem.

I had a power failure, and now I can't access my RAID array. It had been
working fine for months until I lost power... Being a fool, I don't have
a full backup, so I really need to get this data back.

I run FC4 (64bit).
I have an array of two disks /dev/sda1 and /dev/sdb1 as a raid5 array
/dev/md0 on top of which I run lvm and mount the whole lot as /home. My
intention was always to add another disk to this array, and I purchased
one yesterday.

When I boot, I get:

md0 is not clean
Cannot start dirty degraded array
failed to run raid set md0


I can provide the following extra information:

# cat /proc/mdstat
Personalities : [raid5]
unused devices: <none>

# mdadm --query /dev/md0
/dev/md0: is an md device which is not active

# mdadm --query /dev/md0
/dev/md0: is an md device which is not active
/dev/md0: is too small to be an md component.

# mdadm --query /dev/sda1
/dev/sda1: is not an md array
/dev/sda1: device 0 in 2 device undetected raid5 md0.  Use mdadm
--examine for more detail.

# mdadm --query /dev/sdb1
/dev/sdb1: is not an md array
/dev/sdb1: device 1 in 2 device undetected raid5 md0.  Use mdadm
--examine for more detail.

# mdadm --examine /dev/md0
mdadm: /dev/md0 is too small for md

# mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 00.90.02
           UUID : c57d50aa:1b3bcabd:ab04d342:6049b3f1
  Creation Time : Thu Dec 15 15:29:36 2005
     Raid Level : raid5
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0

    Update Time : Tue Mar 21 06:25:52 2006
          State : active
 Active Devices : 1
Working Devices : 1
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 2ba99f09 - correct
         Events : 0.1498318

         Layout : left-symmetric
     Chunk Size : 128K

      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       0        0        1      faulty removed

# mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.02
           UUID : c57d50aa:1b3bcabd:ab04d342:6049b3f1
  Creation Time : Thu Dec 15 15:29:36 2005
     Raid Level : raid5
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0

    Update Time : Tue Mar 21 06:23:57 2006
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2ba99e95 - correct
         Events : 0.1498307

         Layout : left-symmetric
     Chunk Size : 128K

      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1

It looks to me like there is no hardware problem, but maybe I am wrong.
I cannot find any /etc/mdadm.conf nor /etc/raidtab file.

How would you suggest I proceed? I'm wary of doing anything (assemble,
build, create) until I am sure it won't reset everything.

Many Thanks

Nigel





Re: Help needed - RAID5 recovery from Power-fail

2006-04-03 Thread Neil Brown
On Monday April 3, [EMAIL PROTECTED] wrote:
 I wonder if you could help a RAID newbie with a problem.
 
 I had a power failure, and now I can't access my RAID array. It had been
 working fine for months until I lost power... Being a fool, I don't have
 a full backup, so I really need to get this data back.
 
 I run FC4 (64bit).
 I have an array of two disks /dev/sda1 and /dev/sdb1 as a raid5 array
 /dev/md0 on top of which I run lvm and mount the whole lot as /home. My
 intention was always to add another disk to this array, and I purchased
 one yesterday.

2 devices in a raid5??  There doesn't seem to be much point in it being
raid5 rather than raid1.

 
 When I boot, I get:
 
 md0 is not clean
 Cannot start dirty degraded array
 failed to run raid set md0

This tells us that the array is both dirty and degraded.  A dirty
degraded array can have undetectable data corruption; that is why md
won't start it for you.
However, with only two devices, data corruption from this cause isn't
actually possible.

The kernel parameter
   md_mod.start_dirty_degraded=1
will bypass this message and start the array anyway.

Alternately:
  mdadm -A --force /dev/md0 /dev/sd[ab]1

 
 # mdadm --examine /dev/sda1
 /dev/sda1:
   Magic : a92b4efc
 Version : 00.90.02
UUID : c57d50aa:1b3bcabd:ab04d342:6049b3f1
   Creation Time : Thu Dec 15 15:29:36 2005
  Raid Level : raid5
Raid Devices : 2
   Total Devices : 2
 Preferred Minor : 0
 
 Update Time : Tue Mar 21 06:25:52 2006
   State : active
  Active Devices : 1

So at 06:25:52, there was only one working device, while...


 
 #mdadm --examine /dev/sdb1
 /dev/sdb1:
   Magic : a92b4efc
 Version : 00.90.02
UUID : c57d50aa:1b3bcabd:ab04d342:6049b3f1
   Creation Time : Thu Dec 15 15:29:36 2005
  Raid Level : raid5
Raid Devices : 2
   Total Devices : 2
 Preferred Minor : 0
 
 Update Time : Tue Mar 21 06:23:57 2006
   State : active
  Active Devices : 2

at 06:23:57 there were two.  (The Events counters tell the same story:
0.1498318 on /dev/sda1 against 0.1498307 on /dev/sdb1, so sda1's
superblock is the more recent one.)

It looks like you lost a drive a while ago. Did you notice?

Anyway, the 'mdadm' command I gave above should get the array working
again for you.  Then you might want to
   mdadm /dev/md0 -a /dev/sdb1
if you trust /dev/sdb.
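
Since you mentioned there is no /etc/mdadm.conf, once the array is
running it is also worth generating one; a common sketch (do check the
output before trusting it):

   echo 'DEVICE partitions' > /etc/mdadm.conf
   mdadm --examine --scan >> /etc/mdadm.conf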

NeilBrown