Re: bad performance on RAID 5

2007-01-21 Thread Bill Davidsen

Nix wrote:

On 18 Jan 2007, Bill Davidsen spake thusly:
  

Steve Cousins wrote:


time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024
  

That doesn't give valid (repeatable) results due to caching issues. Go
back to the thread I started on RAID-5 write, and see my results. More
important, the way I got rid of the cache effects (besides an unloaded
system) was:
 sync; time bash -c "dd if=/dev/zero bs=1024k count=2048 of=/mnt/point/file; sync"
I empty the cache, then time the dd including the sync at the
end. Results are far more repeatable.



Recent versions of dd have `oflag=direct' as well, to open the output
with O_DIRECT. (I'm not sure what the state of O_DIRECT on regular files
is though.)

  
Doing the write using the page cache and then just a single sync at the end 
gives a closer estimate of what can be written to the array in general. 
With O_DIRECT every I/O takes place unbuffered, which is more 
typical of database use or similar. My original problem was capturing 
real data at 75MB/s and not being able to write it to an array that 
fast, even taking advantage of the page cache. By going to a huge stripe 
cache, about 20x larger than the default, performance was boosted to a 
more useful level.


I think both measurements are useful, but they aren't measuring the same 
thing.


--
bill davidsen <[EMAIL PROTECTED]>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: bad performance on RAID 5

2007-01-21 Thread Nix
On 18 Jan 2007, Bill Davidsen spake thusly:
> Steve Cousins wrote:
>> time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024
> That doesn't give valid (repeatable) results due to caching issues. Go
> back to the thread I started on RAID-5 write, and see my results. More
> important, the way I got rid of the cache effects (besides an unloaded
> system) was:
>  sync; time bash -c "dd if=/dev/zero bs=1024k count=2048 of=/mnt/point/file; sync"
> I empty the cache, then time the dd including the sync at the
> end. Results are far more repeatable.

Recent versions of dd have `oflag=direct' as well, to open the output
with O_DIRECT. (I'm not sure what the state of O_DIRECT on regular files
is though.)
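
For instance (assuming a GNU dd recent enough to have the oflag option,
and reusing the file name from the test above):

 time dd if=/dev/zero of=/mnt/point/file bs=1024k count=2048 oflag=direct

Since every write then bypasses the page cache, no trailing sync is
needed to make the timing honest.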

-- 
`The serial comma, however, is correct and proper, and abandoning it will
surely lead to chaos, anarchy, rioting in the streets, the Terrorists
taking over, and possibly the complete collapse of Human Civilization.'


RE: bad performance on RAID 5

2007-01-20 Thread Roger Lucas
Hi Sevrin,

Are you sure all the disks are working OK?

We saw a problem here with a 4-disk SATA array.  The array was working
without errors but was _very_ slow.  When we checked each disk individually,
we found one disk was reporting SMART errors and running slow.  We removed
it and replaced it with a new disk and the RAID array ran at full speed
again.  Further tests on the removed disk found that it was having a lot of
problems but just about hanging in there (although taking a long time for
each operation) - hence the RAID array didn't mark it as faulty.

I would check the SMART logs with smartctl to see if anything looks a bit
wrong and try benchmarking the disks individually too.
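
For example (run once per member disk; sda here is just the first of them):

 smartctl -a /dev/sda # health status, attributes and error log
 time dd if=/dev/sda of=/dev/null bs=1024k count=1024 # raw sequential read speed

A dying-but-not-dead disk usually shows reallocated or pending sectors in
the SMART attributes, or reads far slower than its siblings.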

- Roger



Re: bad performance on RAID 5

2007-01-18 Thread dean gaudet
On Wed, 17 Jan 2007, Sevrin Robstad wrote:

> I'm suffering from bad performance on my RAID5.
> 
> a "echo check >/sys/block/md0/md/sync_action"
> 
> gives a speed of only about 5000K/sec, and a HIGH load average:
> 
> # uptime
> 20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52

iostat -kx /dev/sd? 10  ... and sum up the total IO... 

also try increasing sync_speed_min/max

and a loadavg jump like that suggests to me you have other things 
competing for the disk at the same time as the "check".
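
something like (the speed values here are only illustrative, in KB/sec):

 iostat -kx /dev/sd? 10
 echo 10000 > /sys/block/md0/md/sync_speed_min
 echo 100000 > /sys/block/md0/md/sync_speed_max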

-dean


Re: bad performance on RAID 5

2007-01-18 Thread Bill Davidsen

Steve Cousins wrote:

Sevrin Robstad wrote:

I'm suffering from bad performance on my RAID5.

a "echo check >/sys/block/md0/md/sync_action"

gives a speed of only about 5000K/sec, and a HIGH load average:


What do you get when you try something like:

time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024
That doesn't give valid (repeatable) results due to caching issues. Go 
back to the thread I started on RAID-5 write, and see my results. More 
important, the way I got rid of the cache effects (besides an unloaded 
system) was:
 sync; time bash -c "dd if=/dev/zero bs=1024k count=2048 of=/mnt/point/file; sync"
I empty the cache, then time the dd including the sync at the end. 
Results are far more repeatable.
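
Another way to empty the cache first, on kernels that have
/proc/sys/vm/drop_caches (2.6.16 and later):

 sync; echo 3 > /proc/sys/vm/drop_caches
 time bash -c "dd if=/dev/zero bs=1024k count=2048 of=/mnt/point/file; sync"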


I actually was able to create custom arrays on unused devices to play 
with array settings; once you use the array you can't tune as much.
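
For example, a throwaway array on spare partitions (the device names are
placeholders, and anything on them is destroyed):

 mdadm --create /dev/md1 --level=5 --raid-devices=3 --chunk=256 /dev/sdx1 /dev/sdy1 /dev/sdz1
 mdadm --stop /dev/md1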


where /mount-point is where /dev/md0 is mounted.

This will create a 1 GiB file and it will tell you how long it takes 
to create it.  Also, I'd try running Bonnie++ on it to see what the 
different performance values are.


I don't know a lot about the md sync process but I remember having my 
sync action stuck at a low value at one point and it didn't have 
anything to do with the performance of the RAID array in general.


Steve


# uptime
20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52

kernel is 2.6.18.1.2257.fc5
mdadm is v2.5.5

the system consists of an Athlon XP 1.2GHz and two Sil3114 4-port S-ATA 
PCI cards with a total of 6 250GB S-ATA drives connected.


[EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
/dev/md0:
   Version : 00.90.03
 Creation Time : Tue Dec  5 00:33:01 2006
Raid Level : raid5
Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
   Device Size : 243786240 (232.49 GiB 249.64 GB)
  Raid Devices : 6
 Total Devices : 6
Preferred Minor : 0
   Persistence : Superblock is persistent

   Update Time : Wed Jan 17 23:14:39 2007
 State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
 Spare Devices : 0

Layout : left-symmetric
Chunk Size : 256K

  UUID : 27dce477:6f45d11b:77377d08:732fa0e6
Events : 0.58

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
[EMAIL PROTECTED] ~]#


Sevrin 


--
bill davidsen <[EMAIL PROTECTED]>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: bad performance on RAID 5

2007-01-18 Thread Bill Davidsen

Sevrin Robstad wrote:
I've tried to increase the cache size - I can't measure any 
difference.


You probably won't help small writes, but large writes will go faster 
with a stripe cache of size num_disks*chunk_size*2 or larger.
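
For example (8192 is only an illustration; the value counts cache entries
of one page per member disk, so memory use is roughly entries x 4KB x
number of disks):

 cat /sys/block/md0/md/stripe_cache_size
 echo 8192 > /sys/block/md0/md/stripe_cache_size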


Raz Ben-Jehuda(caro) wrote:

did u  increase the stripe cache size ?


On 1/18/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:

Sevrin Robstad wrote:
> I'm suffering from bad performance on my RAID5.
>
> a "echo check >/sys/block/md0/md/sync_action"
>
> gives a speed of only about 5000K/sec, and a HIGH load average:
>
> # uptime
> 20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52
>
> kernel is 2.6.18.1.2257.fc5
> mdadm is v2.5.5
>
> the system consists of an Athlon XP 1.2GHz and two Sil3114 4-port S-ATA
> PCI cards with a total of 6 250GB S-ATA drives connected.
>
> [EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
> /dev/md0:
>Version : 00.90.03
>  Creation Time : Tue Dec  5 00:33:01 2006
> Raid Level : raid5
> Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
>Device Size : 243786240 (232.49 GiB 249.64 GB)
>   Raid Devices : 6
>  Total Devices : 6
> Preferred Minor : 0
>Persistence : Superblock is persistent
>
>Update Time : Wed Jan 17 23:14:39 2007
>  State : clean
> Active Devices : 6
> Working Devices : 6
> Failed Devices : 0
>  Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 256K
>
>   UUID : 27dce477:6f45d11b:77377d08:732fa0e6
> Events : 0.58
>
>     Number   Major   Minor   RaidDevice State
>        0       8        1        0      active sync   /dev/sda1
>        1       8       17        1      active sync   /dev/sdb1
>        2       8       33        2      active sync   /dev/sdc1
>        3       8       49        3      active sync   /dev/sdd1
>        4       8       65        4      active sync   /dev/sde1
>        5       8       81        5      active sync   /dev/sdf1
> [EMAIL PROTECTED] ~]#
>
>
> Sevrin

If they are on the PCI bus, that is about right: you probably should be
getting 10-15MB/s, but 5MB/s is in the right ballpark. If you had each
drive on its own PCI-e controller, then you would get much faster speeds.



--
bill davidsen <[EMAIL PROTECTED]>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: bad performance on RAID 5

2007-01-18 Thread Sevrin Robstad

Mark Hahn wrote:

Chunk Size : 256K


well, that's pretty big.  it means 6*256K is necessary to do a 
whole-stripe update; your stripe cache may be too small to be effective.


If they are on the PCI bus, that is about right: you probably should 
be getting 10-15MB/s, but 5MB/s is in the right ballpark. If you had 
each drive on its own PCI-e controller, then you would get much faster 
speeds.


10-15 seems bizarrely low - one can certainly achieve >100 MB/s
over the PCI bus, so where does the factor of 6-10 come in?
seems like an R6 resync would do 4 reads and 2 writes for every
4 chunks of throughput (so should achieve more like 50 MB/s if the 
main limit is the bus at 100.)


There are two controllers, 4 disks connected to one controller on the 
PCI-bus and 2 disks connected to the other controller.


well, you should probably look at the PCI topology ("lspci -v -t"),
and perhaps even the PCI settings (as well as stripe cache size, 
perhaps nr_requests, etc)




I've tried to increase the stripe_cache_size a lot, no noticeable difference.
I set it to 8096

[EMAIL PROTECTED] md]# lspci -v -t
-[0000:00]-+-00.0  VIA Technologies, Inc. VT8366/A/7 [Apollo KT266/A/333]
           +-01.0-[0000:01]----00.0  ATI Technologies Inc Rage 128 RF/SG AGP
           +-05.0  Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller
           +-07.0  Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller
           +-08.0  Realtek Semiconductor Co., Ltd. RTL-8169 Gigabit Ethernet
           +-11.0  VIA Technologies, Inc. VT8233 PCI to ISA Bridge
           \-11.1  VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE

[EMAIL PROTECTED] md]#





Re: bad performance on RAID 5

2007-01-18 Thread Sevrin Robstad

Steve Cousins wrote:

Sevrin Robstad wrote:

I'm suffering from bad performance on my RAID5.

a "echo check >/sys/block/md0/md/sync_action"

gives a speed of only about 5000K/sec, and a HIGH load average:

What do you get when you try something like:

time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024

where /mount-point is where /dev/md0 is mounted.

This will create a 1 GiB file and it will tell you how long it takes 
to create it.  Also, I'd try running Bonnie++ on it to see what the 
different performance values are.


I don't know a lot about the md sync process but I remember having my 
sync action stuck at a low value at one point and it didn't have 
anything to do with the performance of the RAID array in general.




I ran it a couple of times, and got either about 28MB/s or about 
34MB/s. Strange.
When I run the same test on a single disk connected to another 
controller I get about 60MB/s.



[EMAIL PROTECTED] ~]# time dd if=/dev/zero of=/mnt/gigaraid/1gb.testfile bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 38.1873 seconds, 28.1 MB/s

real    0m38.321s
user    0m0.009s
sys     0m8.602s
[EMAIL PROTECTED] ~]# time dd if=/dev/zero of=/mnt/gigaraid/1gb.testfile bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 31.4377 seconds, 34.2 MB/s

real    0m31.949s
user    0m0.016s
sys     0m8.988s
[EMAIL PROTECTED] ~]# time dd if=/dev/zero of=/mnt/gigaraid/1gb.testfile bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 37.599 seconds, 28.6 MB/s

real    0m38.151s
user    0m0.011s
sys     0m9.291s
[EMAIL PROTECTED] ~]# time dd if=/dev/zero of=/mnt/gigaraid/1gb.testfile bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 31.2569 seconds, 34.4 MB/s

real    0m31.765s
user    0m0.007s
sys     0m8.913s
[EMAIL PROTECTED] ~]# time dd if=/dev/zero of=/mnt/gigaraid/1gb.testfile bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 37.4778 seconds, 28.7 MB/s

real    0m37.923s
user    0m0.009s
sys     0m9.231s


Re: bad performance on RAID 5

2007-01-18 Thread Mark Hahn

gives a speed of only about 5000K/sec, and a HIGH load average:
# uptime
20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52


loadavg is a bit misleading - it doesn't mean you had >11 runnable jobs.
you might just have more jobs waiting on IO, being starved by the 
IO done by resync.



Chunk Size : 256K


well, that's pretty big.  it means 6*256K is necessary to do a 
whole-stripe update; your stripe cache may be too small to be effective.


If they are on the PCI bus, that is about right: you probably should be 
getting 10-15MB/s, but 5MB/s is in the right ballpark. If you had each 
drive on its own PCI-e controller, then you would get much faster speeds.


10-15 seems bizarrely low - one can certainly achieve >100 MB/s
over the PCI bus, so where does the factor of 6-10 come in?
seems like an R6 resync would do 4 reads and 2 writes for every
4 chunks of throughput (so should achieve more like 50 MB/s if 
the main limit is the bus at 100.)


There are two controllers, 4 disks connected to one controller on the PCI-bus 
and 2 disks connected to the other controller.


well, you should probably look at the PCI topology ("lspci -v -t"),
and perhaps even the PCI settings (as well as stripe cache size, 
perhaps nr_requests, etc)
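
for instance (128 is the usual default; repeat for each member disk):

 cat /sys/block/sda/queue/nr_requests
 echo 256 > /sys/block/sda/queue/nr_requests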



Re: bad performance on RAID 5

2007-01-18 Thread Raz Ben-Jehuda(caro)

In order to understand what is going on in your system you should:
1. determine the access pattern to the volume, meaning:
sequential or random access?
sync IO or async IO?
mostly read or mostly write?
are you using small buffers or big buffers?

2. test the controller capability,
meaning:
see if dd'ing each disk in the system separately reduces the total
throughput.
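
A sketch of that test, assuming the six members sda..sdf from the mdadm
output (each dd reads 1GB straight off the raw disk):

 for d in /dev/sd[a-f]; do dd if=$d of=/dev/null bs=1024k count=1024 & done; wait

If each disk manages, say, 60MB/s alone but much less when all run
together, the controllers or the PCI bus are the bottleneck.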


On 1/18/07, Sevrin Robstad <[EMAIL PROTECTED]> wrote:

I've tried to increase the cache size - I can't measure any difference.

Raz Ben-Jehuda(caro) wrote:
> did u  increase the stripe cache size ?
>
>
> On 1/18/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:
>> Sevrin Robstad wrote:
>> > I'm suffering from bad performance on my RAID5.
>> >
>> > a "echo check >/sys/block/md0/md/sync_action"
>> >
>> > gives a speed of only about 5000K/sec, and a HIGH load average:
>> >
>> > # uptime
>> > 20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52
>> >
>> > kernel is 2.6.18.1.2257.fc5
>> > mdadm is v2.5.5
>> >
>> > the system consists of an Athlon XP 1.2GHz and two Sil3114 4-port S-ATA
>> > PCI cards with a total of 6 250GB S-ATA drives connected.
>> >
>> > [EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
>> > /dev/md0:
>> >Version : 00.90.03
>> >  Creation Time : Tue Dec  5 00:33:01 2006
>> > Raid Level : raid5
>> > Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
>> >Device Size : 243786240 (232.49 GiB 249.64 GB)
>> >   Raid Devices : 6
>> >  Total Devices : 6
>> > Preferred Minor : 0
>> >Persistence : Superblock is persistent
>> >
>> >Update Time : Wed Jan 17 23:14:39 2007
>> >  State : clean
>> > Active Devices : 6
>> > Working Devices : 6
>> > Failed Devices : 0
>> >  Spare Devices : 0
>> >
>> > Layout : left-symmetric
>> > Chunk Size : 256K
>> >
>> >   UUID : 27dce477:6f45d11b:77377d08:732fa0e6
>> > Events : 0.58
>> >
>> >     Number   Major   Minor   RaidDevice State
>> >        0       8        1        0      active sync   /dev/sda1
>> >        1       8       17        1      active sync   /dev/sdb1
>> >        2       8       33        2      active sync   /dev/sdc1
>> >        3       8       49        3      active sync   /dev/sdd1
>> >        4       8       65        4      active sync   /dev/sde1
>> >        5       8       81        5      active sync   /dev/sdf1
>> > [EMAIL PROTECTED] ~]#
>> >
>> >
>> > Sevrin
>>
>> If they are on the PCI bus, that is about right: you probably should be
>> getting 10-15MB/s, but 5MB/s is in the right ballpark. If you had each
>> drive on its own PCI-e controller, then you would get much faster speeds.
>>
>
>





--
Raz


Re: bad performance on RAID 5

2007-01-18 Thread Steve Cousins

Sevrin Robstad wrote:

I'm suffering from bad performance on my RAID5.

a "echo check >/sys/block/md0/md/sync_action"

gives a speed of only about 5000K/sec, and a HIGH load average:


What do you get when you try something like:

time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024

where /mount-point is where /dev/md0 is mounted.

This will create a 1 GiB file and it will tell you how long it takes to 
create it.  Also, I'd try running Bonnie++ on it to see what the 
different performance values are.
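
For example (path and size are illustrative; give Bonnie++ a data set of
at least twice RAM so the page cache can't flatter the result):

 bonnie++ -d /mount-point -s 2048 -u root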


I don't know a lot about the md sync process but I remember having my 
sync action stuck at a low value at one point and it didn't have 
anything to do with the performance of the RAID array in general.


Steve


# uptime
20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52

kernel is 2.6.18.1.2257.fc5
mdadm is v2.5.5

the system consists of an Athlon XP 1.2GHz and two Sil3114 4-port S-ATA PCI 
cards with a total of 6 250GB S-ATA drives connected.


[EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
/dev/md0:
   Version : 00.90.03
 Creation Time : Tue Dec  5 00:33:01 2006
Raid Level : raid5
Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
   Device Size : 243786240 (232.49 GiB 249.64 GB)
  Raid Devices : 6
 Total Devices : 6
Preferred Minor : 0
   Persistence : Superblock is persistent

   Update Time : Wed Jan 17 23:14:39 2007
 State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
 Spare Devices : 0

Layout : left-symmetric
Chunk Size : 256K

  UUID : 27dce477:6f45d11b:77377d08:732fa0e6
Events : 0.58

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
[EMAIL PROTECTED] ~]#


Sevrin


--
__
 Steve Cousins, Ocean Modeling GroupEmail: [EMAIL PROTECTED]
 Marine Sciences, 452 Aubert Hall   http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469Phone: (207) 581-4302




Re: bad performance on RAID 5

2007-01-18 Thread Sevrin Robstad

I've tried to increase the cache size - I can't measure any difference.

Raz Ben-Jehuda(caro) wrote:

did u  increase the stripe cache size ?


On 1/18/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:

Sevrin Robstad wrote:
> I'm suffering from bad performance on my RAID5.
>
> a "echo check >/sys/block/md0/md/sync_action"
>
> gives a speed of only about 5000K/sec, and a HIGH load average:
>
> # uptime
> 20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52
>
> kernel is 2.6.18.1.2257.fc5
> mdadm is v2.5.5
>
> the system consists of an Athlon XP 1.2GHz and two Sil3114 4-port S-ATA
> PCI cards with a total of 6 250GB S-ATA drives connected.
>
> [EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
> /dev/md0:
>Version : 00.90.03
>  Creation Time : Tue Dec  5 00:33:01 2006
> Raid Level : raid5
> Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
>Device Size : 243786240 (232.49 GiB 249.64 GB)
>   Raid Devices : 6
>  Total Devices : 6
> Preferred Minor : 0
>Persistence : Superblock is persistent
>
>Update Time : Wed Jan 17 23:14:39 2007
>  State : clean
> Active Devices : 6
> Working Devices : 6
> Failed Devices : 0
>  Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 256K
>
>   UUID : 27dce477:6f45d11b:77377d08:732fa0e6
> Events : 0.58
>
>     Number   Major   Minor   RaidDevice State
>        0       8        1        0      active sync   /dev/sda1
>        1       8       17        1      active sync   /dev/sdb1
>        2       8       33        2      active sync   /dev/sdc1
>        3       8       49        3      active sync   /dev/sdd1
>        4       8       65        4      active sync   /dev/sde1
>        5       8       81        5      active sync   /dev/sdf1
> [EMAIL PROTECTED] ~]#
>
>
> Sevrin

If they are on the PCI bus, that is about right: you probably should be
getting 10-15MB/s, but 5MB/s is in the right ballpark. If you had each
drive on its own PCI-e controller, then you would get much faster speeds.









Re: bad performance on RAID 5

2007-01-18 Thread Sevrin Robstad

Justin Piszcz wrote:

I'm suffering from bad performance on my RAID5.

a "echo check >/sys/block/md0/md/sync_action"

gives a speed of only about 5000K/sec, and a HIGH load average:

# uptime
20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52

kernel is 2.6.18.1.2257.fc5
mdadm is v2.5.5

the system consists of an Athlon XP 1.2GHz and two Sil3114 4-port S-ATA 
PCI cards with a total of 6 250GB S-ATA drives connected.


[EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
/dev/md0:
   Version : 00.90.03
 Creation Time : Tue Dec  5 00:33:01 2006
Raid Level : raid5
Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
   Device Size : 243786240 (232.49 GiB 249.64 GB)
  Raid Devices : 6
 Total Devices : 6
Preferred Minor : 0
   Persistence : Superblock is persistent

   Update Time : Wed Jan 17 23:14:39 2007
 State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
 Spare Devices : 0

Layout : left-symmetric
Chunk Size : 256K

  UUID : 27dce477:6f45d11b:77377d08:732fa0e6
Events : 0.58

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
[EMAIL PROTECTED] ~]#


Sevrin


If they are on the PCI bus, that is about right: you probably should 
be getting 10-15MB/s, but 5MB/s is in the right ballpark. If you had 
each drive on its own PCI-e controller, then you would get much faster 
speeds.





There are two controllers, 4 disks connected to one controller on the 
PCI-bus and 2 disks connected to the other controller.
As you say I should be getting 10-15MB/s, but I'm getting 5MB/s *and* a 
really high load average.


Sevrin


Re: bad performance on RAID 5

2007-01-18 Thread Raz Ben-Jehuda(caro)

did u  increase the stripe cache size ?


On 1/18/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:

Sevrin Robstad wrote:
> I'm suffering from bad performance on my RAID5.
>
> a "echo check >/sys/block/md0/md/sync_action"
>
> gives a speed of only about 5000K/sec, and a HIGH load average:
>
> # uptime
> 20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52
>
> kernel is 2.6.18.1.2257.fc5
> mdadm is v2.5.5
>
> the system consists of an Athlon XP 1.2GHz and two Sil3114 4-port S-ATA
> PCI cards with a total of 6 250GB S-ATA drives connected.
>
> [EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
> /dev/md0:
>Version : 00.90.03
>  Creation Time : Tue Dec  5 00:33:01 2006
> Raid Level : raid5
> Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
>Device Size : 243786240 (232.49 GiB 249.64 GB)
>   Raid Devices : 6
>  Total Devices : 6
> Preferred Minor : 0
>Persistence : Superblock is persistent
>
>Update Time : Wed Jan 17 23:14:39 2007
>  State : clean
> Active Devices : 6
> Working Devices : 6
> Failed Devices : 0
>  Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 256K
>
>   UUID : 27dce477:6f45d11b:77377d08:732fa0e6
> Events : 0.58
>
>     Number   Major   Minor   RaidDevice State
>        0       8        1        0      active sync   /dev/sda1
>        1       8       17        1      active sync   /dev/sdb1
>        2       8       33        2      active sync   /dev/sdc1
>        3       8       49        3      active sync   /dev/sdd1
>        4       8       65        4      active sync   /dev/sde1
>        5       8       81        5      active sync   /dev/sdf1
> [EMAIL PROTECTED] ~]#
>
>
> Sevrin

If they are on the PCI bus, that is about right: you probably should be
getting 10-15MB/s, but 5MB/s is in the right ballpark. If you had each
drive on its own PCI-e controller, then you would get much faster speeds.





--
Raz


Re: bad performance on RAID 5

2007-01-17 Thread Justin Piszcz

Sevrin Robstad wrote:

I'm suffering from bad performance on my RAID5.

a "echo check >/sys/block/md0/md/sync_action"

gives a speed of only about 5000K/sec, and a HIGH load average:

# uptime
20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52

kernel is 2.6.18.1.2257.fc5
mdadm is v2.5.5

the system consists of an Athlon XP 1.2GHz and two Sil3114 4-port S-ATA 
PCI cards with a total of 6 250GB S-ATA drives connected.


[EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
/dev/md0:
   Version : 00.90.03
 Creation Time : Tue Dec  5 00:33:01 2006
Raid Level : raid5
Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
   Device Size : 243786240 (232.49 GiB 249.64 GB)
  Raid Devices : 6
 Total Devices : 6
Preferred Minor : 0
   Persistence : Superblock is persistent

   Update Time : Wed Jan 17 23:14:39 2007
 State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
 Spare Devices : 0

Layout : left-symmetric
Chunk Size : 256K

  UUID : 27dce477:6f45d11b:77377d08:732fa0e6
Events : 0.58

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
[EMAIL PROTECTED] ~]#


Sevrin


If they are on the PCI bus, that is about right: you probably should be 
getting 10-15MB/s, but 5MB/s is in the right ballpark. If you had each 
drive on its own PCI-e controller, then you would get much faster speeds.

