Re: raid 5 read performance

2006-10-20 Thread Stephan van Hienen
On Sun, 21 May 2006, Neil Brown wrote:

> Please read
> 
> http://www.spinics.net/lists/raid/msg11838.html
> 
> and ask if you have further questions.

Neil,

What is the current status of the slow read performance with 2.6?

Regards,

Stephan 


mdadm 2.2 segmentation fault

2006-01-27 Thread Stephan van Hienen

When I try to start my RAID with mdadm 2.2, it segfaults:

]# mdadm  -A /dev/md0
Segmentation fault

and dmesg shows:
md: md0 stopped.

mdadm 1.12 shows:

]# mdadm  -A /dev/md0
mdadm: /dev/md0 has been started with 15 drives.

After I start md0 with mdadm 1.12, mdadm 2.2 appears to work fine:

]# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Dec  1 16:38:32 2004
     Raid Level : raid5
     Array Size : 2461523456 (2347.49 GiB 2520.60 GB)
    Device Size : 175823104 (167.68 GiB 180.04 GB)
   Raid Devices : 15
  Total Devices : 15
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jan 27 14:06:15 2006
          State : clean
 Active Devices : 15
Working Devices : 15
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : 784fce06:6999eec1:ad90674e:e415169f
         Events : 0.3082763

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
       2       8       17        2      active sync   /dev/sdb1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
       5       8       97        5      active sync   /dev/sdg1
       6       8      113        6      active sync   /dev/sdh1
       7       8      129        7      active sync   /dev/sdi1
       8       8      145        8      active sync   /dev/sdj1
       9       8      161        9      active sync   /dev/sdk1
      10       8      177       10      active sync   /dev/sdl1
      11       8      193       11      active sync   /dev/sdm1
      12       8      209       12      active sync   /dev/sdn1
      13       8      225       13      active sync   /dev/sdo1
      14       8      241       14      active sync   /dev/sdp1


Is there anything I can test or debug?
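
In case it helps, two generic ways to capture more detail on a crash like
this (standard tools, nothing mdadm-specific; a backtrace is most useful
from an mdadm built with debugging symbols):

]# gdb --args mdadm -A /dev/md0
(gdb) run
(gdb) bt

]# strace -f -o mdadm.trace mdadm -A /dev/md0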


Re: Live Read Error Correction W/O Reconstruct?

2006-01-26 Thread Stephan van Hienen

On Tue, 24 Jan 2006, Neil Brown wrote:


> On Monday January 23, [EMAIL PROTECTED] wrote:
>> In 2004, Mr Brown wrote that read errors could be
>> handled without reconstruction. Has this been
>> implemented in 2.6.8? As I understand it, this is the
>> way RAID is supposed to work.
>
> Not in 2.6.8.
> It is implemented in 2.6.15 for raid5 and
> 2.6.16-rc1 for raid1 and raid6.


Neil,

Is anything logged when one of these (correctable) read errors occurs
and gets fixed?
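
(If it is logged, I would expect it in the kernel ring buffer. The exact
message wording is version-dependent, so this is an assumption; I would
grep broadly along these lines:)

]# dmesg | grep -i -E 'raid5|read error'
]# grep -i 'read error' /var/log/messages    # logfile path is distro-dependent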



Re: Better handling of readerrors with raid5.

2005-12-23 Thread Stephan van Hienen

On Thu, 22 Dec 2005, Andrew Burgess wrote:


>> I recovered my raid by using dd_rescue to copy the last failed disk to
>> a spare disk (with 4 read errors), and then doing mdadm -A --force.
>>
>> (so I should have a corrupted file somewhere?)
>
> Or a corrupted filesystem, which e2fsck will find & fix.

The filesystem is XFS, and to make sure I ran xfs_check (which ran for a
while with no output).

After this, dmesg shows after mounting:

XFS mounting filesystem md0
Ending clean XFS mount for filesystem: md0



> Did you write down the bad sectors?  If so, you can use debugfs to find out
> where they are on the filesystem. Computing the filesystem block number from
> the device sector number will be an exercise...


dd_rescue made a logfile:

dd_rescue: (warning): /dev/sdb1 (39297916.0k): Input/output error!
dd_rescue: (warning): /dev/sdb1 (39297916.5k): Input/output error!
dd_rescue: (warning): /dev/sdb1 (39297917.0k): Input/output error!
dd_rescue: (warning): /dev/sdb1 (39297917.5k): Input/output error!
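
For what it's worth, here is a sketch of that exercise for this array
(shell arithmetic only). It assumes the default left-symmetric layout,
256K chunks, a 0.90 superblock (so component data starts at sector 0 of
the partition), and a slot number for /dev/sdb1 read off 'mdadm -D';
all four variables below are assumptions to fill in from the real array:

#!/bin/sh
# Sketch: map a bad block on one raid5 component to a byte offset in
# /dev/md0. NDISKS, CHUNK, SLOT and BADKB are assumed values; take the
# real ones from 'mdadm -D /dev/md0' and the dd_rescue log.
NDISKS=13               # raid devices in the array
CHUNK=$((256 * 1024))   # chunk size in bytes
SLOT=1                  # RaidDevice number of /dev/sdb1
BADKB=39297916          # bad block offset on the component, in KiB

off=$((BADKB * 1024))        # byte offset within the component's data
stripe=$((off / CHUNK))      # stripe this chunk belongs to
inchunk=$((off % CHUNK))     # offset inside the chunk
# left-symmetric: the parity chunk walks backwards one disk per stripe
parity=$(( (NDISKS - 1) - (stripe % NDISKS) ))
if [ "$SLOT" -eq "$parity" ]; then
    echo "chunk holds parity for stripe $stripe; no array data affected"
else
    # data chunks start on the disk after the parity disk and wrap around
    d=$(( (SLOT - parity - 1 + NDISKS) % NDISKS ))
    mdoff=$(( (stripe * (NDISKS - 1) + d) * CHUNK + inchunk ))
    echo "array byte offset $mdoff (4K block $((mdoff / 4096)))"
fi

With the array offset in hand, an XFS tool like xfs_db could in principle
map the block back to an inode, but that is another exercise.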




Re: [PATCH md ] Better handling of readerrors with raid5.

2005-12-22 Thread Stephan van Hienen

On Fri, 16 Sep 2005, NeilBrown wrote:


> TESTERS WANTED!!  SEE BELOW...
>
> This patch changes the behaviour of raid5 when it gets a read error.
> Instead of just failing the device, it tries to find out what should
> have been there, and writes it over the bad block.  For some
> media-errors, this has a reasonable chance of fixing the error.
> If the write succeeds, and a subsequent read succeeds as well, raid5
> decides the address is OK and continues.


Neil,

What is the current status of this patch? Yesterday one of my disks
failed during the night (3ware IDE timeout), and during the rebuild a
second disk reported a read error, so the RAID was down.

I recovered the array by using dd_rescue to copy the last failed disk to
a spare disk (with 4 read errors), and then doing mdadm -A --force.

(So I should have a corrupted file somewhere?)

It looks like this patch would help in situations like this.
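
For the archives, the recovery sequence spelled out as commands (device
names here are examples, not the real ones; --force tells mdadm to
assemble even though the event counts no longer agree):

]# dd_rescue -v -l sdb1.log /dev/sdb1 /dev/sdq1   # copy failing disk to a spare
]# mdadm -A --force /dev/md0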


Re: non-optimal RAID 5 performance with 8 drive array

2005-03-06 Thread Stephan van Hienen
On Sun, 6 Mar 2005, Nicola Fankhauser wrote:

> hi
>
> On Sun, 2005-03-06 at 15:25, Stephan van Hienen wrote:
>> which kernel are you using ?
>
> currently 2.6.8.
>
>> 2.4:
>> write 100MB/s
>> read  140MB/s
>>
>> 2.6:
>> write 100MB/s
>> read  280MB/s
>
> are you sure that 2.6 gives you better read performance than 2.4? it's
> been reported the other way 'round.

Oops, a typo on my side: 2.4 does 280MB/s and 2.6 does 140MB/s.


Re: non-optimal RAID 5 performance with 8 drive array

2005-03-06 Thread Stephan van Hienen
On Tue, 1 Mar 2005, Nicola Fankhauser wrote:

> the array (reading the first 8GiB from /dev/md0 with dd, bs=1024K)
> performs at about 174MiB/s, accessing the array through LVM2 (still
> with bs=1024K) only 86MiB/s.
Nicola,
which kernel are you using?

2.4 vs 2.6 performance on my machine (and the same problem on different
machines), raid5 with 13 disks:

2.4:
write 100MB/s
read  140MB/s

2.6:
write 100MB/s
read  280MB/s
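
A plain sequential read of the raw array, along the lines of Nicola's
test, is the kind of benchmark behind numbers like these (8GiB in 1MiB
blocks):

]# dd if=/dev/md0 of=/dev/null bs=1024k count=8192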


Re: slow raid5 performance kernel 2.6 vs 2.4

2005-03-06 Thread Stephan van Hienen
Is there any news on this problem?
On Tue, 14 Dec 2004, Neil Brown wrote:

> On Sunday December 12, [EMAIL PROTECTED] wrote:
>> Hi,
>>
>> system:
>> P4 2.4GHz HT, 1GB DDR
>> 8x S-ATA 250GB Hitachi on 2 Si3114 controllers
>>
>> bonnie performance with 2.4:
>> block input 126MB/sec
>> with 2.6:
>> block input 90MB/sec
>>
>> saw another posting about slow read performance with raid5 and 2.6
>> any ideas yet how to fix this?
>
> Not yet.  I've been doing some testing which you can read about at
>    http://neilb.web.cse.unsw.edu.au/~neilb/01102979338
> I only get a drop from about 250 to 220 on my hardware, which still
> isn't good.
>
> I've got a few ideas but I'm not sure when I will have a chance to
> follow through with them.  I'm doing a bit more testing first to make
> sure I have a complete picture.
>
> NeilBrown
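
In the meantime, one knob worth checking (an assumption that it helps
here, not a confirmed fix, since the regression itself was still being
investigated): the readahead on the array device, which 2.6 may set
differently from 2.4:

]# blockdev --getra /dev/md0        # current readahead, in 512-byte sectors
]# blockdev --setra 8192 /dev/md0   # try 4MiB, then re-run the read test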