RE: Benchmarks, raid1 (was raid0) performance

2000-06-23 Thread Gregory Leblanc

 -Original Message-
 From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
 Sent: Friday, June 23, 2000 12:36 AM
 To: Gregory Leblanc
 Cc: [EMAIL PROTECTED]
 Subject: Re: Benchmarks, raid1 (was raid0) performance
 
[snip]
   What version of raidtools should I use against a stock 2.2.16
   system with raid-2.2.16-A0 patch running raid1?
  
  The 0.90 ones.  I think that Ingo has some tools in the same place as
  the patches, those should be the right tools.  I'll bet that the
  Software-RAID HOWTO tells where to get the latest tools.  You can find
  it at http://www.LinuxDoc.org/
  Greg
 
 I think you mean the only raid tools there,
 people.redhat.com/mingo/raid-patches/raidtools-dangerous-0.90-2116.tar.gz?
 
 I'm a bit sceptical about using something that's labelled dangerous.
 What is so dangerous about it, and is there any more chance that it will
 break something than the standard release RH 6.2 raid tools?

Nah, RedHat ships with a variant of these tools.  You could probably
(somebody check me here) use the tools that ship with RH6.2 and have them
work just fine.
Greg



Re: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Hugh Bragg

Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
improves read performance right? At what cost?

Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
applying mingo's raid-2.2.16-A0 patch?

What version of raidtools should I use against a stock 2.2.16
system with raid-2.2.16-A0 patch running raid1?

Hugh.



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Gregory Leblanc

 -Original Message-
 From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, June 21, 2000 5:04 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Benchmarks, raid1 (was raid0) performance
 
 Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
 improves read performance right? At what cost?

Only the cost of patching your kernel, I think.  This patch does some nifty
tricks to help pick which disk to read data from, and will double the read
rates from RAID 1, assuming that you don't saturate the bus.  
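
In case it helps, applying it is just the usual kernel-patch routine.  A
rough sketch only - the -p level and file locations below are guesses, so
run the --dry-run first:

    cd /usr/src/linux
    patch -p1 --dry-run < ../raid1readbalance-2.2.15-B2   # check it applies cleanly
    patch -p1 < ../raid1readbalance-2.2.15-B2
    make dep bzImage modules modules_install              # rebuild and install as usual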

 Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
 applying mingo's raid-2.2.16-A0 patch?

I don't see any reason not to apply it, although I haven't tried it with
2.2.16.

 What version of raidtools should I use against a stock 2.2.16
 system with raid-2.2.16-A0 patch running raid1?

The 0.90 ones.  I think that Ingo has some tools in the same place as the
patches, those should be the right tools.  I'll bet that the Software-RAID
HOWTO tells where to get the latest tools.  You can find it at
http://www.LinuxDoc.org/
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)

:  Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
:  applying mingo's raid-2.2.16-A0 patch?
: 
: I don't see any reason not to apply it, although I haven't 
: tried it with 2.2.16.

I have been out of the linux-raid world for a bit, but a 
two-drive RAID1 installation yesterday has brought me back.  
Naturally, when I saw mention of raid1readbalance, I immediately
tried it.

I'm running 2.2.17pre4, and it patched cleanly.  But bonnie++
is showing no change in read performance.  I am using IDE drives,
but they are on separate controllers (/dev/hda, and /dev/hdc) 
with both drives configured as masters.

Anyone have any tricks up their sleeves?



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Gregory Leblanc

 -Original Message-
 From: Diegmueller, Jason (I.T. Dept) [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, June 21, 2000 10:46 AM
 To: 'Gregory Leblanc'; 'Hugh Bragg'; [EMAIL PROTECTED]
 Subject: RE: Benchmarks, raid1 (was raid0) performance
 
 :  Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
 :  applying mingo's raid-2.2.16-A0 patch?
 : 
 : I don't see any reason not to apply it, although I haven't 
 : tried it with 2.2.16.
 
 I have been out of the linux-raid world for a bit, but a 
 two-drive RAID1 installation yesterday has brought me back.  
  Naturally, when I saw mention of raid1readbalance, I immediately
 tried it.
 
 I'm running 2.2.17pre4, and it patched cleanly.  But bonnie++
 is showing no change in read performance.  I am using IDE drives,
 but they are on separate controllers (/dev/hda, and /dev/hdc) 
 with both drives configured as masters.
 
 Anyone have any tricks up their sleeves?

None offhand, but can you post your test configuration/parameters?  Things
like test size, relevant portions of /etc/raidtab, things like that.  I know
this should be a whole big list, but I can't think of all of them right now.
FYI, I don't do IDE RAID (or IDE at all), but it's pretty awesome on SCSI.
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)

: None offhand, but can you post your test configuration/parameters?
: Things like test size, relevant portions of /etc/raidtab, things
: like that.  I know this should be a whole big list, but I can't think
: of all of them right now. FYI, I don't do IDE RAID (or IDE at all),
: but it's pretty awesome on SCSI.

Yes, I'll see if I can't whip all that together tonight.

I do like the SCSI/Software-RAID on Linux setup.  I've got two servers
for old clients at my last job still running Software-RAID5 on an HP
Netserver LXe Pro (one of those has 230+ days of uptime).

Nice and quick, stable; haven't had to endure an actual drive failure
yet, but simulated failures have worked wonderfully.



Re: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Mika Kuoppala

On Wed Jun 21 2000 at 12:46:02 -0500, Diegmueller, Jason (I.T. Dept) wrote:
 :  Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
 :  applying mingo's raid-2.2.16-A0 patch?
 : 
 : I don't see any reason not to apply it, although I haven't 
 : tried it with 2.2.16.
 
 I have been out of the linux-raid world for a bit, but a 
 two-drive RAID1 installation yesterday has brought me back.  
 Naturally, when I saw mention of raid1readbalance, I immediately
 tried it.
 
 I'm running 2.2.17pre4, and it patched cleanly.  But bonnie++
 is showing no change in read performance.  I am using IDE drives,
 but they are on separate controllers (/dev/hda, and /dev/hdc) 
 with both drives configured as masters.
 
 Anyone have any tricks up their sleeves?

Look at Bonnie's seek performance; it should rise.
For single sequential reads, the read balancer doesn't help, and
Bonnie tests only single sequential reads.

If you want to test with multiple I/O threads, try
http://tiobench.sourceforge.net
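
For example, something along these lines (a sketch; the flags are the ones
tiobench.pl itself documents, and the file size should be bigger than RAM):

    ./tiobench.pl --size 800 --block 4096 --threads 1 --threads 4 --threads 8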

-- Mika



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)

: Look at Bonnie's seek performance; it should rise.
: For single sequential reads, the read balancer doesn't help, and
: Bonnie tests only single sequential reads.
: 
: If you want to test with multiple I/O threads, try
: http://tiobench.sourceforge.net

Great, thanks, I'll give this a try!



RE: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Adrian Head

I have seen people complain about similar issues on the kernel mailing
list, so maybe there is an actual kernel problem.

What I have always wanted to know but haven't tested yet is how RAID
performance compares with and without the noatime attribute in /etc/fstab.
I think that when Linux reads a file it also writes the time the file was
accessed, whereas a write is just a write.  I expect that for benchmarks
this would not affect results a lot, since SCSI systems would have the
same overhead - but some people seem to swear by it for news servers and
the like.
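
For reference, a minimal sketch of what that looks like (device and mount
point are made up):

    # /etc/fstab entry with atime updates disabled:
    # /dev/md0   /data   ext2   defaults,noatime   1 2

    # or flip it on an already-mounted filesystem between benchmark runs:
    mount -o remount,noatime /data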

Am I off track?

Adrian Head



 -Original Message-
 From: bug1 [SMTP:[EMAIL PROTECTED]]
 Sent: Tuesday, 13 June 2000 04:52
 To:   [EMAIL PROTECTED]
 Cc:   Ingo Molnar
 Subject:  Benchmarks, raid0 performance, 1,2,3,4 drives
 
 Here are some more benchmarks for raid0 with different numbers of
 elements, all tests done with tiobench.pl -s=800
 
[Adrian Head]  [SNIP] 

 Glenn



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bug1

Adrian Head wrote:
 
 I have seen people complain about similar issues on the kernel mailing
 list, so maybe there is an actual kernel problem.
 
 What I have always wanted to know but haven't tested yet is how RAID
 performance compares with and without the noatime attribute in
 /etc/fstab.  I think that when Linux reads a file it also writes the
 time the file was accessed, whereas a write is just a write.  I expect
 that for benchmarks this would not affect results a lot, since SCSI
 systems would have the same overhead - but some people seem to swear by
 it for news servers and the like.
 
 Am I off track?
 

Um, this would affect benchmarks that use a filesystem.  If you do a dd
if=/dev/hda of=/dev/null it doesn't go through the filesystem, so I would
guess that the access time isn't updated and it wouldn't be an overhead.

hdparm -t works below the filesystem level as well; it's just data, it
doesn't make sense of it, as far as I know.

Glenn



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bug1

Ingo Molnar wrote:
 
 could you send me your /etc/raidtab? I've tested the performance of 4-disk
 RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
 (could you also send the 'hdparm -t /dev/md0' results, do you see a
 degradation in those numbers as well?)
 
 it could either be some special thing in your setup, or an IDE+RAID
 performance problem.
 
 Ingo

I think it might be an IDE bottleneck.

If I use dd to read 800MB from each of my drives individually, the speeds
I get are:

hde=22MB/s
hdg=22MB/s
hdi=18MB/s
hdk=20MB/s

If I do the same test on all four drives simultaneously I get 10MB/s from
each of the four drives.
If I do it on just hde, hdg and hdk I get 13MB/s from each of the three
drives.
If I do it on hde and hdg I get 18MB/s from each (both IDE channels on
one card).
On hdi and hdk I get 15MB/s.

I conclude that on my system there is an IDE saturation point (or
bottleneck) around 40MB/s.

But the same thing happens under 2.2, so it doesn't explain the
performance difference between 2.2 and 2.[34]; there must be another
bottleneck somewhere as well.  Back to the drawing board, I guess.
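
The simultaneous-read test above, roughly as a script (a sketch; device
names are the ones on this box, 800MB per drive):

    # read 800MB from each drive in parallel and time the whole thing
    time ( for d in hde hdg hdi hdk; do
               dd if=/dev/$d of=/dev/null bs=1024k count=800 &
           done; wait )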



Glenn



RE: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Gregory Leblanc

 -Original Message-
 From: bug1 [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, June 13, 2000 10:39 AM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: Benchmarks, raid0 performance, 1,2,3,4 drives
 
 Ingo Molnar wrote:
  
  could you send me your /etc/raidtab? I've tested the 
 performance of 4-disk
  RAID0 on SCSI, and it scales perfectly here, as far as 
 hdparm -t goes.
  (could you also send the 'hdparm -t /dev/md0' results, do you see a
  degradation in those numbers as well?)
  
  it could either be some special thing in your setup, or an IDE+RAID
  performance problem.
  
  Ingo
 
 I think it might be an IDE bottleneck.
 
 If I use dd to read 800MB from each of my drives individually, the
 speeds I get are:
 
 hde=22MB/s
 hdg=22MB/s
 hdi=18MB/s
 hdk=20MB/s
 
 If I do the same test on all four drives simultaneously I get 10MB/s
 from each of the four drives.
 If I do it on just hde, hdg and hdk I get 13MB/s from each of the
 three drives.
 If I do it on hde and hdg I get 18MB/s from each (both IDE channels
 on one card).
 On hdi and hdk I get 15MB/s.
 
 I conclude that on my system there is an IDE saturation point (or
 bottleneck) around 40MB/s.

Didn't the LAND5 people think that there was a bottleneck around 40MB/Sec at
some point?  Anybody know if they were talking about IDE drives?  Seems
quite possible that there aren't any single drives that are hitting this
speed, so it's only showing up with RAID.
Greg



Re: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Jeff Hill

Gregory Leblanc wrote:

--snip--
  I conclude that on my system there is an ide saturation point (or
  bottleneck) around 40MB/s
 Didn't the LAND5 people think that there was a bottleneck around 40MB/Sec at
 some point?  Anybody know if they were talking about IDE drives?  Seems
 quite possible that there aren't any single drives that are hitting this
 speed, so it's only showing up with RAID.
 Greg


Is there any place where benchmark results are listed? I've finally
gotten my RAID-1 running and am trying to see if the performance is what
I should expect or if there is some other issue:

Running "hdparm -t /dev/md0" a few times:

 Timing buffered disk reads:  64 MB in  3.03 seconds = 21.12 MB/sec
 Timing buffered disk reads:  64 MB in  2.65 seconds = 24.15 MB/sec
 Timing buffered disk reads:  64 MB in  3.21 seconds = 19.94 MB/sec

And bonnie:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          800  5402 90.9 13735 13.7  7223 15.0  5502 85.0 14062  8.9 316.7  2.8


I had expected better performance with the system: Adaptec 2940U2W with
2x Seagate Cheetah (LVD) 9.1G drives; single PII 400Mhz; 512MB ECC RAM;
ASUS P3B-F 100Mhz.

I have to say the RAID-1 works very well in my crash tests, and that's
the most important thing.

Sorry for taking this off the original thread.

Regards,

Jeff Hill



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bert hubert

On Tue, Jun 13, 2000 at 04:51:46AM +1000, bug1 wrote:

 Maybe I'm missing something here - why aren't reads just as fast as writes?

I note the same on a 2-way IDE RAID-1 device, with the two disks on
separate buses.

Regards,

bert hubert

-- 
   |  http://www.rent-a-nerd.nl
   | - U N I X -
   |  Inspice et cautus eris - D11T'95



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Henry J. Cobb

Bug1: Maybe I'm missing something here - why aren't reads just as fast as writes?

The cynic in me suggests that the RAID driver has to wait for the
information to be read off the disks, but it doesn't have to wait for the
writes to complete before returning - but I haven't read the code.

-HJC




Re: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Jeff Hill

Gregory Leblanc wrote:
 
 I don't have anything that caliber to compare against, so I can't really
 say.  Should I assume that you don't have Mika's RAID1 read balancing patch?

I have to admit I was ignorant of the patch (I had skimmed the archives,
but not well enough). Searched the archive further, found it, patched it
into 2.2.16-RAID.

However, how nervous should I be about putting it on a production server?
Mika's note says 'experimental'.  This is my main production server, and I
don't have a development machine currently capable of testing RAID1 on
(and even then, a development machine can never get the same drubbing
as production).

That said, it looks like the patch has an impact (although I'm not
familiar with tiobench):

tiobench results before Mika's patch:

        File   Block  Num  Seq Read     Rand Read    Seq Write    Rand Write
 Dir    Size   Size   Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------- ------ ------ ---- ------------ ------------ ------------ ------------
 .      1024   4096   1    11.73  7.35% 1.008  1.54% 10.63  11.2% 1.452  11.8%
 .      1024   4096   2    12.65  7.78% 1.072  1.44% 10.15  10.5% 1.397  12.5%
 .      1024   4096   4    12.95  8.08% 1.177  1.70% 9.671  9.95% 1.393  12.6%
 .      1024   4096   8    12.79  8.45% 1.273  1.85% 9.344  9.89% 1.377  12.8%


tiobench results after Mika's patch:

        File   Block  Num  Seq Read     Rand Read    Seq Write    Rand Write
 Dir    Size   Size   Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------- ------ ------ ---- ------------ ------------ ------------ ------------
 .      1024   4096   1    22.83  14.9% 1.035  0.86% 10.97  11.2% 1.416  13.5%
 .      1024   4096   2    26.66  18.7% 1.263  1.21% 10.42  10.6% 1.395  11.6%
 .      1024   4096   4    27.74  20.2% 1.349  1.20% 9.795  10.0% 1.395  12.2%
 .      1024   4096   8    24.69  20.8% 1.475  1.46% 9.262  9.82% 1.388  12.0%


Thanks for the help.

Jeff Hill
 
  I have to say the RAID-1 works very well in my crash tests, and that's
  the most important thing.
 
 Yep!  Although speed is the biggest reason that I can see for using Software
 RAID over hardware.  Next comes price.
 Greg

-- 

--  HR On-Line:  The Network for Workplace Issues --
http://www.hronline.com - Ph:416-604-7251 - Fax:416-604-4708




Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-12 Thread bug1

Here are some more benchmarks for raid0 with different numbers of
elements, all tests done with tiobench.pl -s=800

Hardware: dual celeron 433, 128MB ram using 2.4.0-test1-ac15+B5 raid
patch, raid drives on two promise udma66 cards (one drive per channel)

Write speed looks decent for 1 and 2 drives, but three or four drives
make only a few percent improvement.
Threaded reads (4 and 8) actually look to be improving linearly, but
they're still slow.

A single drive looks to have a faster read time than reading from a 4-way
raid0.

The comparison between hde5 as a regular partition and hde5 as a single
element in a raid0 array (of 1) seems to show raid adds considerable
overhead to read performance, but reads still aren't as fast as writes on
hde5; this isn't a very practical benchmark anyway.

Maybe I'm missing something here - why aren't reads just as fast as writes?


4-way raid0 (disk hde hdg hdi hdk)

File   Block  Num  Seq Read     Rand Read    Seq Write    Rand Write
Size   Size   Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------ ------ ---- ------------ ------------ ------------ ------------
 800   4096    1   15.26 33.7%  0.561 1.22%  33.25 51.8%  2.071 4.64%
 800   4096    2   8.402 18.1%  0.678 1.56%  33.48 64.1%  2.132 5.25%
 800   4096    4   7.021 14.2%  0.789 1.76%  33.40 68.5%  2.194 5.10%
 800   4096    8   6.248 12.0%  0.885 1.98%  33.25 71.5%  2.237 5.40%

3-way raid0 (disk hde hdg hdi)
File   Block  Num  Seq Read     Rand Read    Seq Write    Rand Write
Size   Size   Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------ ------ ---- ------------ ------------ ------------ ------------
 800   4096    1   16.18 32.8%  0.540 1.86%  31.18 50.1%  1.576 2.92%
 800   4096    2   7.478 15.4%  0.626 1.72%  31.82 60.5%  1.635 2.98%
 800   4096    4   5.128 10.2%  0.709 1.78%  31.85 65.4%  1.665 3.02%
 800   4096    8   4.683 8.89%  0.777 1.82%  31.81 68.0%  1.689 3.08%

2-way raid0 (disk hde hdg)
File   Block  Num  Seq Read     Rand Read    Seq Write    Rand Write
Size   Size   Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------ ------ ---- ------------ ------------ ------------ ------------
 800   4096    1   15.16 30.2%  0.484 1.27%  30.21 46.0%  1.057 2.36%
 800   4096    2   6.783 13.6%  0.547 1.41%  30.13 56.9%  1.088 2.57%
 800   4096    4   3.894 7.69%  0.589 1.42%  29.89 60.5%  1.103 2.51%
 800   4096    8   3.561 6.54%  0.623 1.43%  29.42 62.4%  1.113 2.44%


1-way raid0 (disk hde)

File   Block  Num  Seq Read     Rand Read    Seq Write    Rand Write
Size   Size   Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------ ------ ---- ------------ ------------ ------------ ------------
 800   4096    1   11.17 22.1%  0.458 0.90%  19.06 29.3%  0.551 0.91%
 800   4096    2   2.427 4.20%  0.453 0.91%  18.73 33.3%  0.553 1.31%
 800   4096    4   1.548 2.68%  0.452 0.87%  18.42 35.2%  0.555 1.31%
 800   4096    8   1.076 1.85%  0.459 0.83%  18.00 36.0%  0.555 1.35%


Tiobench on hde5, the partition that did contain the 1-way raid0 above.

File   Block  Num  Seq Read     Rand Read    Seq Write    Rand Write
Size   Size   Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------ ------ ---- ------------ ------------ ------------ ------------
 800   4096    1   21.85 16.9%  0.464 1.12%  19.12 29.5%  0.556 0.96%
 800   4096    2   11.31 9.01%  0.450 0.79%  18.77 33.8%  0.558 1.10%
 800   4096    4   9.506 7.86%  0.451 0.77%  18.45 35.4%  0.560 1.20%
 800   4096    8   8.455 7.25%  0.457 0.84%  17.93 36.3%  0.561 1.17%

The following is to give an idea of the relative performance of the
drives.  If you're wondering, all drives are 7200rpm and in udma66 mode;
the first two are IBM, the second two are Quantum.

hdparm -Tt /dev/hde
 Timing buffer-cache reads:   128 MB in  1.54 seconds = 83.12 MB/sec
 Timing buffered disk reads:  64 MB in  2.92 seconds = 21.92 MB/sec

hdparm -Tt /dev/hdg
 Timing buffer-cache reads:   128 MB in  1.55 seconds = 82.58 MB/sec
 Timing buffered disk reads:  64 MB in  2.90 seconds = 22.07 MB/sec

hdparm -Tt /dev/hdi
 Timing buffer-cache reads:   128 MB in  1.54 seconds = 83.12 MB/sec
 Timing buffered disk reads:  64 MB in  3.33 seconds = 19.22 MB/sec

hdparm -Tt /dev/hdk
 Timing buffer-cache reads:   128 MB in  1.54 seconds = 83.12 MB/sec
 Timing buffered disk reads:  64 MB in  3.28 seconds = 19.51 MB/sec

The end.

Glenn



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-12 Thread Ingo Molnar


could you send me your /etc/raidtab? I've tested the performance of 4-disk
RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
(could you also send the 'hdparm -t /dev/md0' results, do you see a
degradation in those numbers as well?)

it could either be some special thing in your setup, or an IDE+RAID
performance problem.

Ingo




Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-12 Thread bug1

Ingo Molnar wrote:
 
 could you send me your /etc/raidtab? I've tested the performance of 4-disk
 RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
 (could you also send the 'hdparm -t /dev/md0' results, do you see a
 degradation in those numbers as well?)
 
 it could either be some special thing in your setup, or an IDE+RAID
 performance problem.
 
 Ingo

I'm not sure how useful these results are; the numbers seemed to vary by
1MB/s or so between runs, and I do have 128MB of RAM.  I'm not sure if
hdparm is sensitive to RAM size.
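
One cheap way to smooth out that run-to-run variation is just to repeat the
timing a few times and eyeball the spread (a sketch):

    for i in 1 2 3; do hdparm -t /dev/md0; done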

So generally, a 50% increase for a second drive, and then no increase
after that.

I am glad to hear that SCSI scales well; at least that limits the problem
to IDE or to me doing something silly.
Maybe I should try a different motherboard.

4-way raid0 (/dev/hde, /dev/hdg, /dev/hdi, /dev/hdk)
/dev/md0:
 Timing buffer-cache reads:   128 MB in  1.67 seconds = 76.65 MB/sec
 Timing buffered disk reads:  64 MB in  2.09 seconds = 30.62 MB/sec

3-way raid0 (/dev/hde, /dev/hdg, /dev/hdi)
/dev/md0:
 Timing buffer-cache reads:   128 MB in  1.59 seconds = 80.50 MB/sec
 Timing buffered disk reads:  64 MB in  2.15 seconds = 29.77 MB/sec

2-way raid0 (/dev/hde, /dev/hdg)
/dev/md0:
 Timing buffer-cache reads:   128 MB in  1.59 seconds = 80.50 MB/sec
 Timing buffered disk reads:  64 MB in  1.94 seconds = 32.99 MB/sec

I used a 32K chunk size for all the tests I did; here is my raidtab.
To change the number of drives I was testing I just changed
nr-raid-disks and uncommented the next disks, I didn't touch anything
else.

raiddev /dev/md0
raid-level  0
persistent-superblock   1
chunk-size  32  
nr-raid-disks   2
nr-spare-disks  0
device  /dev/hde5
raid-disk   0
device  /dev/hdg5
raid-disk   1
# device/dev/hdi5
# raid-disk 2
# device/dev/hdk5
# raid-disk 3
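
For completeness, the re-test cycle implied above looks roughly like this
(a sketch; mkraid wipes the member partitions, so only do it on scratch data):

    # after editing /etc/raidtab
    mkraid /dev/md0
    cat /proc/mdstat          # check the array came up with the expected disks
    hdparm -t /dev/md0        # repeat the timing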

Thanks

Glenn



My benchmarks

2000-04-25 Thread Douglas Egan

I have an Intel P-III running at 450 MHz on an Intel SE440BX-2
motherboard with a patched 2.2.14 kernel and raidtools 0.90.

The RAID-5 array consists of 3 "Maxtor 51536U3" drives.  One drive is
master on the secondary motherboard IDE port (no slave).
The other 2 are alone on the primary and secondary channels of a Promise
Ultra ATA/66.  The raid is configured as follows:

[root@porgy /proc]# cat mdstat
Personalities : [raid5] 
read_ahead 1024 sectors
md0 : active raid5 hdg1[1] hde1[0] hdc1[2] 10005120 blocks level 5, 64k
chunk, algorithm 0 [3/3] [UUU]
md1 : active raid5 hdg5[2] hde5[1] hdc5[0] 10005120 blocks level 5, 64k
chunk, algorithm 0 [3/3] [UUU]
md2 : active raid5 hdg6[2] hde6[1] hdc6[0] 10004096 blocks level 5, 64k
chunk, algorithm 0 [3/3] [UUU]
unused devices: <none>
[root@porgy /proc]# 

The following test was run on /dev/md2 mounted as /usr1

[degan@porgy tiobench-0.29]$ ./tiobench.pl --block 4096
No size specified, using 510 MB
Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec

        File   Block  Num  Seq Read     Rand Read    Seq Write    Rand Write
 Dir    Size   Size   Thr  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)  Rate (CPU%)
------- ------ ------ ---- ------------ ------------ ------------ ------------
 .      510    4096   1    21.63  10.4% 0.757  0.87% 19.62  17.1% 0.799  2.45%
 .      510    4096   2    22.89  12.0% 0.938  0.90% 20.18  17.7% 0.792  2.38%
 .      510    4096   4    21.85  12.5% 1.113  1.09% 20.37  18.1% 0.784  2.19%
 .      510    4096   8    20.57  13.1% 1.252  1.34% 20.54  18.5% 0.776  2.34%


I am not sure what to make of the results, but I am happy with my RAID
operation.  I only post them FYI.


+----------------------------------+
| Douglas Egan       Wind River    |
|                                  |
| Tel   : 847-837-1530             |
| Fax   : 847-949-1368             |
| HTTP  : http://www.windriver.com |
| Email : [EMAIL PROTECTED]        |
+----------------------------------+



Linux RAID benchmarks

2000-04-07 Thread Karel Volejnik

Hi all,
there are some benchmarks from my tests below - maybe they're interesting.

Configuration:
==
Pentium III (Coppermine) 550MHz, 128MB RAM, MB Galaxy 2000+, integrated
IDE UltraDMA/33, IDE UltraDMA/66 (Promise).  PCI card HotRod/66.

Boot disk is connected to onboard IDE-UltraDMA/33.

Two WD WDC WD273BA (7200rpm,27GB, 2MB internal cache) are connected to
onboard Promise IDE-UltraDMA/66 (one disk per IDE channel).
Two Seagate ST38410A disks (5400rpm, 8GB, 512kB cache) are connected to the
Abit HotRod/66.

Linux kernel 2.2.14 with raid patch (and ide patch).

chunk size 4KB, blocksize (ext2) 4KB.

hdparm for all disks:
 multcount=  0 (off)
 I/O support  =  0 (default 16-bit)
 unmaskirq=  0 (off)
 using_dma=  1 (on)
 keepsettings =  0 (off)
 nowerr   =  0 (off)
 readonly =  0 (off)
 readahead=  8 (on)
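
(Aside: those settings leave multcount and 32-bit I/O off.  If the chipset
copes with it, a hypothetical tuning pass would be something like the line
below - /dev/hde is only an example device, and this was not part of the
runs above:)

    hdparm -m16 -c1 -u1 /dev/hde   # multcount=16, 32-bit I/O, unmask IRQ during I/O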

Seagate alone
-------------
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          500  8881 99.3 18288 13.4  7603 10.8  8562 94.1 17867  6.3 103.6  0.5

Western Digital alone
---------------------
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          500  8890 99.0 23128 17.4  9909 15.4  8656 95.1 21967  7.7 146.1  0.6

Raid-0 (2xSeagate)
------------------
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          500  8618 97.1 42237 34.6 14963 23.7  8777 97.8     2 19.1 110.0  0.6
         1000  8582 96.9 41399 33.7 15262 24.0  8849 98.5 44030 18.6  94.2  0.5

Raid-0 (2xWestern Digital)
--------------------------
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          500  8650 96.9 41756 31.5 16309 26.9  8857 98.1 45439 20.6 149.2  0.5
         1000  8614 96.2 40210 30.5 16661 27.5  8928 99.0 45391 19.9 126.9  0.8

Raid-1 (2xSeagate)
------------------
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          500  8391 95.2 20172 16.1  7912 11.7  8476 94.3 20527  8.5 106.5  0.5
         1000  8381 95.2 19437 15.7  8071 12.1  8724 96.9 20635  8.3  83.6  0.6

Raid-1 (2xWestern Digital)
--------------------------
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          500  8472 96.1 21238 15.6  8984 13.1  8616 95.9 22796 10.6 144.4  0.7
         1000  8430 95.6 20308 15.1  9086 14.1  8802 97.7 22728  9.1 120.7  0.6

Raid-0 (4 disks = 2xWestern Digital, 2x Seagate)
------------------------------------------------
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          500  8881 99.0 57904 44.9 22972 38.7  8917 98.6 84374 44.3 168.2  1.1
         1000  8871 98.9 55508 44.1 22922 37.8  8938 98.7 84509

ExtremeRAID 1100 benchmarks

2000-03-02 Thread Chris Mauritz

Has anyone done any benchmarks with the Mylex ExtremeRAID 1100?  I'm
planning on getting one of the 3 channel ones with 64mb cache.  Initially,
it will be delivered on a dual PIII-750mhz machine with NT, but I'd like to
repurpose this as a Linux file server.  It will have an external enclosure
with 8 18gig 10,000rpm IBM Deskstars and one hot spare.  Can anyone hazard a
guess at the kind of performance I can expect from such an array?

Cheers,

Chris

Chris Mauritz
[EMAIL PROTECTED]




FW: ExtremeRAID 1100 benchmarks

2000-03-02 Thread Kenneth Cornetet

I had a 2 channel 1164 in a dual 450 PIII (256MB ram) with 4 18GB Seagate LVD
10K RPM drives in RAID 5.  With all defaults, except a 4K block size for the
ext2 file system, I got about 22MB/sec reads and writes according to Bonnie.
Best I remember, the 4K block size made a fairly large improvement over
absolute defaults.
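
(For reference, the 4K block size is just mke2fs's -b switch - a sketch,
with a made-up device name:)

    mke2fs -b 4096 /dev/sda1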

I didn't have time to tweak any settings on the 1164


This was a stock redhat 6.0 system.


Unfortunately, box only ran linux for about an hour. Alas, it was destined to be an NT machine.


I wish someone would port Bonnie (or tiotest) to NT.


-Original Message-
From: Chris Mauritz [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, March 02, 2000 11:29 AM
To: [EMAIL PROTECTED]
Subject: ExtremeRAID 1100 benchmarks



Has anyone done any benchmarks with the Mylex ExtremeRAID 1100? I'm
planning on getting one of the 3 channel ones with 64mb cache. Initially,
it will be delivered on a dual PIII-750mhz machine with NT, but I'd like to
repurpose this as a Linux file server. It will have an external enclosure
with 8 18gig 10,000rpm IBM Deskstars and one hot spare. Can anyone hazard a
guess at the kind of performance I can expect from such an array?


Cheers,


Chris


Chris Mauritz
[EMAIL PROTECTED]





Re: FW: ExtremeRAID 1100 benchmarks

2000-03-02 Thread Brian Pomerantz

I'm in the middle of testing this controller on an ES40 (4 CPU Alpha).
I should get some numbers next week.  So far with a 4+p RAID 5 I'm
seeing about 17MB/s write performance with a single chain.  I think
these are only 7200 RPM drives.  I don't really care about read
performance but that was up around 30MB/s.  Next week I should be
testing with 2 cards per server with 3 chains on one and 2 chains on
another.  I'm trying to reach 100MB/s write performance on a single
server.  I really doubt I'll get that with this card, I'm nearly
certain I'll have to go with an external Fibre Channel solution with a
smart enclosure.


BAPper

On Thu, Mar 02, 2000 at 12:58:42PM -0500, Kenneth Cornetet wrote:
 I had a 2 channel 1164 in a dual 450 PIII (256MB ram) with 4 18GB Seagate
 LVD 10K RPM drives in RAID 5. With all defaults, except 4K block size of the
 ext2 file system, I got about 22MB/sec reads and writes according to Bonnie.
 Best I remember, the 4K block size made a fairly large improvement over
 absolute defaults.
 
 I didn't have time to tweak any settings on the 1164
 
 This was a stock redhat 6.0 system.
 
 Unfortunately, box only ran linux for about an hour. Alas, it was destined
 to be an NT machine.
 
 I wish someone would port Bonnie (or tiotest) to NT.
 
 -Original Message-
 From: Chris Mauritz [mailto:[EMAIL PROTECTED]] 
 Sent: Thursday, March 02, 2000 11:29 AM
 To: [EMAIL PROTECTED]
 Subject: ExtremeRAID 1100 benchmarks
 
 
 Has anyone done any benchmarks with the Mylex ExtremeRAID 1100?  I'm
 planning on getting one of the 3 channel ones with 64mb cache.  Initially,
 it will be delivered on a dual PIII-750mhz machine with NT, but I'd like to
 repurpose this as a Linux file server.  It will have an external enclosure
 with 8 18gig 10,000rpm IBM Deskstars and one hot spare.  Can anyone hazard a
 guess at the kind of performance I can expect from such an array?
 
 Cheers,
 
 Chris
 
 Chris Mauritz
 [EMAIL PROTECTED]
 



Re: ExtremeRAID 1100 benchmarks

2000-03-02 Thread James Manning

[ Thursday, March  2, 2000 ] Chris Mauritz wrote:
 Has anyone done any benchmarks with the Mylex ExtremeRAID 1100?  I'm
 planning on getting one of the 3 channel ones with 64mb cache.  Initially,
 it will be delivered on a dual PIII-750mhz machine with NT, but I'd like to
 repurpose this as a Linux file server.  It will have an external enclosure
 with 8 18gig 10,000rpm IBM Deskstars and one hot spare.  Can anyone hazard a
 guess at the kind of performance I can expect from such an array?

This is a very similar setup to the 9-disk 10krpm raid5 extremeraid 1100
benchmarks I mailed the list a while back... search back through some
archives.

James



Re: FW: ExtremeRAID 1100 benchmarks

2000-03-02 Thread Chris Mauritz

I'm inclined to think this controller can do it with 4-5 spindles per
channel.  I'm told the StrongARM processor at 233MHz can really crank on
RAID 5 applications.  Let us know how it works out.

Cheers,

Chris
- Original Message -
From: "Brian Pomerantz" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, March 02, 2000 1:51 PM
Subject: Re: FW: ExtremeRAID 1100 benchmarks


 I'm in the middle of testing this controller on an ES40 (4 CPU Alpha).
 I should get some numbers next week.  So far with a 4+p RAID 5 I'm
 seeing about 17MB/s write performance with a single chain.  I think
 these are only 7200 RPM drives.  I don't really care about read
 performance but that was up around 30MB/s.  Next week I should be
 testing with 2 cards per server with 3 chains on one and 2 chains on
 another.  I'm trying to reach 100MB/s write performance on a single
 server.  I really doubt I'll get that with this card, I'm nearly
 certain I'll have to go with an external Fibre Channel solution with a
 smart enclosure.


 BAPper

 On Thu, Mar 02, 2000 at 12:58:42PM -0500, Kenneth Cornetet wrote:
  I had a 2 channel 1164 in a dual 450 PIII (256MB ram) with 4 18GB
Seagate
  LVD 10K RPM drives in RAID 5. With all defaults, except 4K block size of
the
  ext2 file system, I got about 22MB/sec reads and writes according to
Bonnie.
  Best I remember, the 4K block size made a fairly large improvement over
  absolute defaults.
 
  I didn't have time to tweak any settings on the 1164
 
  This was a stock redhat 6.0 system.
 
  Unfortunately, box only ran linux for about an hour. Alas, it was
destined
  to be an NT machine.
 
  I wish someone would port Bonnie (or tiotest) to NT.
 
  -Original Message-
  From: Chris Mauritz [mailto:[EMAIL PROTECTED]]
  Sent: Thursday, March 02, 2000 11:29 AM
  To: [EMAIL PROTECTED]
  Subject: ExtremeRAID 1100 benchmarks
 
 
  Has anyone done any benchmarks with the Mylex ExtremeRAID 1100?  I'm
  planning on getting one of the 3 channel ones with 64mb cache.
Initially,
  it will be delivered on a dual PIII-750mhz machine with NT, but I'd like
to
  repurpose this as a Linux file server.  It will have an external
enclosure
  with 8 18gig 10,000rpm IBM Deskstars and one hot spare.  Can anyone
hazard a
  guess at the kind of performance I can expect from such an array?
 
  Cheers,
 
  Chris
 
  Chris Mauritz
  [EMAIL PROTECTED]
 





Re: some benchmarks for read-balancing RAID1 (was: Re: Raid0 performance worse than single drive? also was: Re: sw raid 0 - performance problems (old thread; 12 Jan 2000))

2000-02-14 Thread James Manning

[ Sunday, February 13, 2000 ] James Manning wrote:
 I'm going to try adding a --numruns flag for tiobench so we can have an
 automated facility for averaging over a number of runs.  I believe the
 dip at 4 threads is real, but it's worth adding anyway :)

It'll be part of tiotest 0.23, but attached is the relevant section
for anyone that wants to use it sooner.

James


--- tiobench.pl.origMon Feb 14 04:01:53 2000
+++ tiobench.pl Mon Feb 14 04:02:17 2000
@@ -8,13 +8,15 @@
 sub usage {
print "Usage: $0 [options]\n","Available options:\n\t",
 "[--size SizeInMB]+\n\t",
+"[--numruns NumberOfRuns]+\n\t",
 "[--dir TestDir]+\n\t",
 "[--block BlkSizeInBytes]+\n\t",
 "[--seeks TotalSeeks]+\n\t",
 "[--threads NumberOfThreads]+\n\n",
"+ means you can specify this option multiple times to cover multiple\n",
"cases, for instance: $0 --block 4096 --block 8192 will first run\n",
-   "through with a 4KB block size and then again with a 8KB block size\n";
+   "through with a 4KB block size and then again with a 8KB block size.\n",
+   "--numruns specifies over how many runs each test should be averaged\n";
exit(1);
 }
 
@@ -43,17 +45,20 @@
 my $write_mbytes; my $write_time; my $write_utime; my $write_stime;
 my $read_mbytes;  my $read_time;  my $read_utime;  my $read_stime;
 my $seeks;my $seeks_time; my $seeks_utime; my $seeks_stime;
+my $num_runs; my $run_number;
 
 # option parsing
 GetOptions("dir=s@",\@dirs,
"size=i@",\@sizes,
"block=i@",\@blocks,
"seeks=i",\$total_seeks,
+   "numruns=i",\$num_runs,
"threads=i@",\@threads);
 
 usage if $Getopt::Long::error;
 
 # give some default values
+$num_runs=1 unless $num_runs && $num_runs > 0;
 @dirs=qw(.) unless @dirs;
 @blocks=qw(4096) unless @blocks;
 @threads=qw(1 2 4 8) unless @threads;
@@ -90,33 +95,34 @@
foreach $size (@sizes) {
   foreach $block (@blocks) {
  foreach $thread (@threads) {
-my $thread_seeks=int($total_seeks/$thread);
-my $thread_size=int($size/$thread);
-my $run_string = "$tiotest -t $thread -f $thread_size ".
- "-s $thread_seeks -b $block -d $dir -T -W";
-my $prompt = "*** Now Running: $run_string ...";
-print $prompt;
-open(TIOTEST,"$run_string |") or die "Could not run $tiotest";
-   
-   while(<TIOTEST>) {
-   my ($field,$amount,$time,$utime,$stime)=split(/[:,]/);
-   $stat_data{$field}{'amount'}=$amount;
-   $stat_data{$field}{'time'}=$time;
-   $stat_data{$field}{'utime'}=$utime;
-   $stat_data{$field}{'stime'}=$stime;
-   }
-close(TIOTEST);
-
+foreach $run_number (1..$num_runs) {
+   my $thread_seeks=int($total_seeks/$thread);
+   my $thread_size=int($size/$thread);
+   my $run_string = "$tiotest -t $thread -f $thread_size ".
+"-s $thread_seeks -b $block -d $dir -T -W";
+   my $prompt = "Run #$run_number: $run_string";
+   print $prompt;
+   open(TIOTEST,"$run_string |") or die "Could not run $tiotest";
+
+   while(<TIOTEST>) {
+  my ($field,$amount,$time,$utime,$stime)=split(/[:,]/);
+  $stat_data{$field}{'amount'} += $amount;
+  $stat_data{$field}{'time'}   += $time;
+  $stat_data{$field}{'utime'}  += $utime;
+  $stat_data{$field}{'stime'}  += $stime;
+   }
+   close(TIOTEST);
+   print "" x length($prompt); # erase prompt
+}
 for my $field ('read','write','seek') {
-  $stat_data{$field}{'rate'} = 
-  $stat_data{$field}{'amount'} /
-  $stat_data{$field}{'time'};
-  $stat_data{$field}{'cpu'} = 
-  100 * ( $stat_data{$field}{'utime'} +
-  $stat_data{$field}{'stime'} ) / 
+  $stat_data{$field}{'rate'} = 
+ $stat_data{$field}{'amount'} /
+ $stat_data{$field}{'time'};
+  $stat_data{$field}{'cpu'} = 
+ 100 * ( $stat_data{$field}{'utime'} +
+ $stat_data{$field}{'stime'} ) / 
  $stat_data{$field}{'time'};
 }
-print "" x length($prompt); # erase prompt
 write;
  }
   }



Re: some benchmarks for read-balancing RAID1 (was: Re: Raid0 performance worse than single drive? also was: Re: sw raid 0 - performance problems (old thread; 12 Jan 2000))

2000-02-13 Thread James Manning

[ Saturday, February 12, 2000 ] Peter Palfrader aka Weasel wrote:
 So, I finally found time to try the new RAID stuff and speed
 increased :)

Excellent.

 I also tried RAID1 with and without the read-balancing patch:
 The filesystem was always made with a simple "mke2fs dev":

-R stride= could be helpful... since you have numbers w/o it, it'd
be interesting to see any diff. (it shouldn't make a diff in the 
4k chunk size case, but cover it anyway if possible, just to see :)

Since it (afaik) is the number of ext2 blocks gathered before
the ll_rw_blk down to the lower level, it should help efficiency.
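
For concreteness, the usual recipe is stride = RAID chunk size / ext2 block
size; a sketch, where the device and the 64k-chunk/4k-block numbers are only
an example:

    mke2fs -b 4096 -R stride=16 /dev/md0   # 64k chunk / 4k block = 16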

   TCQ Enabled By Default : Disabled

something else possibly worth pursuing...

I'm going to try adding a --numruns flag for tiobench so we can have an
automated facility for averaging over a number of runs.  I believe the
dip at 4 threads is real, but it's worth adding anyway :)

James



IDE RAID0 hdparm benchmarks

1999-09-13 Thread Tom Livingston


Hello folks,

Due to too much coffee while diagnosing another problem... I found myself
unable to sleep.  So I did some ide raid0 benchmarks for everyone to mull
over.

[Note that I did these benchmarks with hdparm and not bonnie, as I actually
had a readable raid5 filesystem on this disk set.  But if I played around
with raidtab's and hdparm I could do some non-destructive benchmarking.]

I am running redhat-6.0+updates, linux kernel 2.2.12, with
raid0145-19990824-2.2.11.gz and 2.2.12.uniform-ide-6.20.hydra.patch.gz.  The
machine is an Abit-BP6, celeron 366, 128MB RAM.  All of the disks tested are
Maxtor 90845D4 5400RPM disks, or a close relative.  ide0-1 is an Intel
PIIX4, ide2-7 are three PDC20246 controllers.

All of these tests were conducted with one disk per channel except for hdb1
which is slave on the same channel as the boot disk (system was otherwise
idle).

First set of tests:  RAID0, 4k chunk size, cpu clocked at 366Mhz.
All hdparms reported:
 Timing buffer-cache reads:   128 MB in  1.67 seconds =76.65 MB/sec

2 disks (hdb1,hdc1)  buffered disk reads : 24.24 MB/sec
3 disks (hdb1,hdc1,hde1)   buffered disk : 31.37 MB/sec
4 disks (hdb1,hdc1,hde1,hdg1)   bfr disk : 35.75 MB/sec
5 disks (hdb1,hdc1,hde1,hdg1,hdi1)  disk : 31.68 MB/sec
6 disks (hdb1,hdc1,hde1,hdg1,hdi1,hdk1)  : 32.82 MB/sec
7 disks (hdb,hdc,hde,hdg,hdi,hdk,hdm): 31.84 MB/sec
8 disks (hdb,hdc,hde,hdg,hdi,hdk,hdm,hdo): 32.00 MB/sec


Same system. same disks, clocked at 550Mhz:
 Timing buffer-cache reads:   128 MB in  1.07 seconds =119.63 MB/sec

2 disks (hdb1,hdc1)  buffered disk reads : 24.81 MB/sec
3 disks (hdb1,hdc1,hde1)   buffered disk : 31.84 MB/sec
4 disks (hdb1,hdc1,hde1,hdg1)   bfr disk : 37.65 MB/sec
5 disks (hdb1,hdc1,hde1,hdg1,hdi1)  disk : 37.87 MB/sec
6 disks (hdb1,hdc1,hde1,hdg1,hdi1,hdk1)  : 35.96 MB/sec
7 disks (hdb,hdc,hde,hdg,hdi,hdk,hdm): 35.16 MB/sec
8 disks (hdb,hdc,hde,hdg,hdi,hdk,hdm,hdo): 32.65 MB/sec

Bonus test, one disk from each controller, clocked at 550:

4 disks (hdc1,hde1,hdi1,hdm1)   bfr disk : 38.79 MB/sec

Completely non-useful bonus test:
10 disk raid5 set, 2 channels contain both master and slave disks for the
set. 128k chunk size, 550Mhz, running in degraded mode.
 Timing buffered disk reads:  64 MB in  3.73 seconds =17.16 MB/sec

Any comments or thoughts?  I don't run raid0 in production, so this is no
bitch fest. But I was pretty deep into my raid setup, and I thought people
might appreciate some numbers.  I was surprised to see the throughput top
out at four disks and then drop lower after that.

Tom





Re: IDE RAID0 hdparm benchmarks

1999-09-13 Thread Glenn McGrath

Yeah, I'm always keen to see results; I've been trying to understand
performance issues with IDE raid0 for a while.

I remember the HOWTO says raid0 can give "near linear" performance
increases.  A few weeks ago I played with a 4-way IDE raid0, and I maxed
out at about the same figures you got.

I think SCSI has similar problems, but this is more likely to be due to
saturating the SCSI bus.  I remember seeing some good figures (70MB/s)
with dual SCSI channels, but I can't remember the details too well
(SCSI is too expensive for me to play with).

Could it be that near linear performance is a bit unrealistic?

I'm not a raid guru, so I too would be interested in other discussion
about maximum raid0 throughput.


 Hello folks,

 Due to too much coffee while diagnosing another problem... I found myself
 unable to sleep.  So I did some ide raid0 benchmarks for everyone to mull
 over.

 [Note that I did these benchmarks with hdparm and not bonnie, as I
 actually had a readable raid5 filesystem on this disk set.  But if I
 played around with raidtab's and hdparm I could do some non-destructive
 benchmarking.]

 I am running redhat-6.0+updates, linux kernel 2.2.12, with
 raid0145-19990824-2.2.11.gz and 2.2.12.uniform-ide-6.20.hydra.patch.gz.
 The machine is an Abit-BP6, celeron 366, 128MB RAM.  All of the disks
 tested are Maxtor 90845D4 5400RPM disks, or a close relative.  ide0-1 is
 an Intel PIIX4, ide2-7 are three PDC20246 controllers.

 All of these tests were conducted with one disk per channel except for
 hdb1 which is slave on the same channel as the boot disk (system was
 otherwise idle).

 First set of tests:  RAID0, 4k chunk size, cpu clocked at 366Mhz.
 All hdparms reported:
  Timing buffer-cache reads:   128 MB in  1.67 seconds =76.65 MB/sec

 2 disks (hdb1,hdc1)  buffered disk reads : 24.24 MB/sec
 3 disks (hdb1,hdc1,hde1)   buffered disk : 31.37 MB/sec
 4 disks (hdb1,hdc1,hde1,hdg1)   bfr disk : 35.75 MB/sec
 5 disks (hdb1,hdc1,hde1,hdg1,hdi1)  disk : 31.68 MB/sec
 6 disks (hdb1,hdc1,hde1,hdg1,hdi1,hdk1)  : 32.82 MB/sec
 7 disks (hdb,hdc,hde,hdg,hdi,hdk,hdm): 31.84 MB/sec
 8 disks (hdb,hdc,hde,hdg,hdi,hdk,hdm,hdo): 32.00 MB/sec


 Same system. same disks, clocked at 550Mhz:
  Timing buffer-cache reads:   128 MB in  1.07 seconds =119.63 MB/sec

 2 disks (hdb1,hdc1)  buffered disk reads : 24.81 MB/sec
 3 disks (hdb1,hdc1,hde1)   buffered disk : 31.84 MB/sec
 4 disks (hdb1,hdc1,hde1,hdg1)   bfr disk : 37.65 MB/sec
 5 disks (hdb1,hdc1,hde1,hdg1,hdi1)  disk : 37.87 MB/sec
 6 disks (hdb1,hdc1,hde1,hdg1,hdi1,hdk1)  : 35.96 MB/sec
 7 disks (hdb,hdc,hde,hdg,hdi,hdk,hdm): 35.16 MB/sec
 8 disks (hdb,hdc,hde,hdg,hdi,hdk,hdm,hdo): 32.65 MB/sec

 Bonus test, one disk from each controller, clocked at 550:

 4 disks (hdc1,hde1,hdi1,hdm1)   bfr disk : 38.79 MB/sec

 Completely non-useful bonus test:
 10 disk raid5 set, 2 channels contain both master and slave disks for the
 set. 128k chunk size, 550Mhz, running in degraded mode.
  Timing buffered disk reads:  64 MB in  3.73 seconds =17.16 MB/sec

 Any comments or thoughts?  I don't run raid0 in production, so this is no
 bitch fest. But I was pretty deep into my raid setup, and I thought people
 might appreciate some numbers.  I was surprised to see the throughput top
 out at four disks and then drop lower after that.

 Tom







Re: RAID0/5/4 benchmarks

1999-09-02 Thread Marc SCHAEFER

Marc SCHAEFER [EMAIL PROTECTED] wrote:
 Now, RAID5 on the same 7 disk set:

               -------Sequential Output-------- ---Sequential Input--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block---
 Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU
          2000 16241 79.7 43569 42.0 19304 46.6 24004 92.0 68733 82.8

Now, that's interesting. It's not THAT bad (43.5 MByte/s writing,
68.7 MByte/s reading). However, what puzzles me is that my CPU is mostly
idle when I would expect it to generate the parity.

For 43 MByte/s of DMA out from RAM, we have the processor reading
at 43 MByte/s from the RAM, and writing at 6 MByte/s. From the RAID0
result, I would have expected 85 MByte/s minus 1/7, which is around
70 MByte/s.
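
(Rough sanity check of that estimate, treating parity as one disk's worth
of the 7-disk RAID0 figure:

    85 MByte/s * 6/7 = ~73 MByte/s

which is where the "around 70 MByte/s" comes from.)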



Re: Benchmarks/Performance.

1999-04-26 Thread John Ronan


On 22-Apr-99 Paul Jakma wrote:

Ok, I ran a few bonnies with different chunk sizes...

Raid5 running on 4 WDC AC31300R's UDMA... Seems to peak at 32k chunks, 4K block
size

Thanks for your replies...

Cheers (time to do the "power removal" test :) )


--
John Ronan [EMAIL PROTECTED], 
  Telecommunications Software Systems Group - WIT, +353-51-302411,
http://www-tssg.wit.ie

Q: How do you know a guy at the beach has a redhead for a girlfriend?
A: She has scratched "Stay off MY TURF!" on his back with her nails.




 Bonnie


Re: Benchmarks/Performance.

1999-04-26 Thread Paul Jakma

On Fri, 23 Apr 1999, John Ronan wrote:

  
  On 22-Apr-99 Paul Jakma wrote:
  
  Ok, I ran a few bonnies with different chunk sizes...
  
  Raid5 running on 4 WDC AC31300R's UDMA... Seems to peak at 32k
  chunks, 4K block size

I've done a bit of benching as well.  The most important thing (on ia32
anyway) seems to be the 4k e2fs block size; it always wins.

  
  Thanks for your replies...
  
  Cheers (time to do the "power removal" test :) )
 
  

-- 
Paul Jakma
[EMAIL PROTECTED]   http://hibernia.clubi.ie
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
Usage: fortune -P [-f] -a [xsz] Q: file [rKe9] -v6[+] file1 ...




Re: Benchmarks/Performance.

1999-04-26 Thread Stephen C. Tweedie

Hi,

On Thu, 22 Apr 1999 20:45:52 +0100 (IST), Paul Jakma [EMAIL PROTECTED]
said:

 i tried this with raid0, and if bonnie is any guide, the optimal
 configuration is 64k chunk size, 4k e2fs block size.  

Going much above 64k will mean that readahead has to work very much
harder to keep all the pipelines full when doing large sequential IOs.
That's why bonnie results can fall off.  However, if you have
independent IOs going on (web/news/mail service or multiuser machines)
then that concurrent activity may still be faster with larger chunk
sizes, as you minimise the chance of any one file access having to cross
multiple disks.

In other words, all benchmarks lie. :)

--Stephen



Re: Benchmarks/Performance.

1999-04-26 Thread Stephen C. Tweedie

Hi,

On Mon, 26 Apr 1999 21:28:20 +0100 (IST), Paul Jakma [EMAIL PROTECTED]
said:

 it was close between 32k and 64k. 128k was noticeably slower (for
 bonnie) so i didn't bother with 256k. 

Fine, but 128k will be noticeably faster for some other tasks.  Like I
said, it depends on whether you prioritise large-file bandwidth over the
ability to serve many IOs at once.

 viz pipelining: would i be right in thinking that a decent scsi
 controller and drives can "pipeline" /far/ better than, eg, a udma
 setup?

Yes, although you eventually run into a different bottleneck: the
filesystem has to serialise every so often while reading its indirection
metadata blocks.  Using a 4k fs blocksize helps there (again, for
squeezing the last few %age points out of sequential readahead).

 ie the optimal chunk size would be higher for a scsi system than for
 an eide/udma setup?

udma can do readahead and multi-sector IOs.  scsi can have limited
tagged queue depths.  Command setup is more expensive on scsi than on
ide.  Which costs dominate really depends on the workload.

--Stephen



Re: benchmarks

1999-04-25 Thread Tim Moore

 - have heard someone say that running two striped ide drives is 2x slower than
   normal ide access... donno...
 ( I use 2 striped 8Gb ide drives for incremental backups of each 64Gb
main server )

2x slower = both on the same IDE channel; 2x faster = each on a different IDE channel



Benchmarks/Performance.

1999-04-22 Thread John Ronan

Hi,
I set up a raid box a while ago and so far it's performed flawlessly...
unfortunately the group I'm in are outgrowing it.  So I'm putting together a
new box and I've got time to test it and benchmark it before putting it into
service.

The machine is a PPRO with 64MB RAM, a vanilla SuSE-6.0 box.  I downloaded
2.2.6, the patches and the raidtools, recompiled the kernel, wrote a raidtab,
ran mkraid and it all seems to work.  (See the attached raidtab; md0 is the
squid cache, md1 is /home, md2 is the main raid 5 array.)

/dev/md2 is raid5 across 4 WDC AC313000R's (I can only work with what I have
in the office).  In the raidtab I gave it a chunk size of 128 and I used the
following mke2fs command:

mke2fs -b 4096 -R stride=32 -m0 /dev/md2

Which, from what I've read, should be pretty much alright... Basically I
suppose what I'm asking is "Am I on the right track?"  I'd really appreciate
some feedback, because once this is put into service it'll be our server for
the next 12 months at least.


--
Bonnie -s 265 on /dev/md1 (2 Seagate ST34371N's on an AHA2940)

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          256  3367 92.5  8991 32.0  3232 17.7  3764 90.3  8410 17.1  77.7  3.5
--

I haven't run bonnie on /dev/md2 yet 'cause it's still synching the array.


Thanks for your time and patience... And keep up the good work... 

--
John Ronan [EMAIL PROTECTED], 
  Telecommunications Software Systems Group - WIT, +353-51-302411,
http://www-tssg.wit.ie

You've had too much coffee when ...   
  you walk 20 miles on your treadmill before you 
  realise it's not plugged in   




 raidtab


Re: Benchmarks/Performance.

1999-04-22 Thread Paul Jakma

On Thu, 22 Apr 1999, Carlos Carvalho wrote:

  John Ronan ([EMAIL PROTECTED]) wrote on 22 April 1999 16:03:
  
   /dev/md2 is raid5 across 4 WDC AC313000R's (I can only work with
   what I have in the office) In the raidtab I gave it a chunk size of
   128 and I used the following mke2fs command.
  
   mke2fs -b 4096 -R stride=32 -m0 /dev/md2

  I'd like to have a way to measure the performance, but I don't know
  how. Doug Ledford recommends using bonnie on a just-created array to
  check performance with various chunk sizes. What bothers me most is
  that it seems that the best settings depend on your particular files
  and usage...

I tried this with raid0, and if bonnie is any guide, the optimal
configuration is 64k chunk size, 4k e2fs block size.  

-- 
Paul Jakma
[EMAIL PROTECTED]   http://hibernia.clubi.ie
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
I haven't lost my mind -- it's backed up on tape somewhere.



benchmarks

1999-04-21 Thread Seth Vidal

I've mostly been a lurker, but recent changes in my company have piqued my
interest in the performance of sw vs hw raid.

Does anyone have some statistics online of sw raid (1,5) vs hw raid
(1,5) on a linux system?

Also, is there any way to have a hot-swappable sw raid system (IDE or SCSI)?

RTFM's and web page pointers are gladly accepted.

thanks
-sv


   



Re: benchmarks

1999-04-21 Thread Josh Fishman

Seth Vidal wrote:
 
 I've mostly been a lurker, but recent changes in my company have piqued my
 interest in the performance of sw vs hw raid.
 
 Does anyone have some statistics online of sw raid (1,5) vs hw raid
 (1,5) on a linux system?

We have a DPT midrange SmartRAID-V and we're going to do testing on two
7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
results as soon as they're available. (Testing will happen on a dual PII
350 w/ 256 MB RAM & a cheezy IDE disk for /, running 2.2.6 (or later).)

What kind of tests would people like to see run? The main test I'm
going for is simply stability under load on biggish file systems &
biggish file operations.

 -- Josh Fishman
NYU / RLab



Re: benchmarks

1999-04-21 Thread Seth Vidal

  I've mostly been a lurker, but recent changes in my company have piqued my
  interest in the performance of sw vs hw raid.
  
  Does anyone have some statistics online of sw raid (1,5) vs hw raid
  (1,5) on a linux system?
 
 We have a DPT midrange SmartRAID-V and we're going to do testing on two
 7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
 results as soon as they're available. (Testing will happen on a dual PII
  350 w/ 256 MB RAM & a cheezy IDE disk for /, running 2.2.6 (or later).)
 
 What kind of tests would people like to see run? The main test I'm
  going for is simply stability under load on biggish file systems &
  biggish file operations.

Stability, and read and write performance speeds.

Possibly optimization for mostly-read situations, mostly-write situations,
and then mixed read/write situations.

-sv



Benchmarks and questions

1999-04-17 Thread Ted Byrd

Hello,

I've successfully installed a software RAID 5 array using kernel 2.2.3
and the 0.90 patch and tools.  I have posted the results of this
endeavor, including benchmarks at:

http://www.idiom.com/~tbyrd/softraid/index.html

It might answer some of the questions I've seen here lately regarding
which software is required to run the latest tools, and provides links
to the latest software and documentation.  It also includes Bonnie
benchmarks for file sizes ranging from 8 MB to 1.5 GB, and would seem
to support earlier statements regarding cache performance variance.
(Though these are probably good numbers for estimating CPU/memory/cache
performance!)

So far, so good!  Cheers to the maintainers!!!

-Ted



Some hw v. sw RAID benchmarks

1999-01-07 Thread Malcolm Beattie

Here for your edification and amusement are some benchmarks comparing
hardware v. software RAID for fairly similar setups.

Sun sell two versions of their 12-disk hot-swap dual-everything disk
array (codename Dilbert):
 * the D1000 is a "dumb" array presenting 6 disks on each of two
   Ultra Wide Differential SCSI busses.
 * the A1000 is similar but has an internal hardware RAID module
   which connects to the two busses internally, does its "RAID thing"
   and presents a single Ultra Wide Differential bus to the outside
   world and talks to an intelligent adapter card on the hosts side.

We have the following configurations which I benchmarked using bonnie:

System 1: A1000 array with 6 x 1 RPM 4G wide SCSI drives and 64MB
  NVRAM cache connected to Sun Ultra 5 with a 270 MHz
  UltraSPARC IIi CPU and 320 MB RAM running Solaris 2.6 via a 
  Symbios 53C875-based card.
System 2: D1000 array with 6 x 1 RPM 9G wide SCSI drives on one
  of its two busses connected to a PC with a 350 MHz PII CPU
  and 512 MB RAM running Linux 2.0.36 with with
  raid-19981214-0.90 RAID patch.

Both systems were set up as a single 6 disk RAID5 group. System 1 had
a standard Solaris UFS filesystem on the resulting 20GB logical drive.
System 2 used chunk-size 64 for its RAID5 configuration (defaults for
other settings) and a single ext2 filesystem (with blocksize 4096 and
stride=16). Bonnie was run on both as the only non-idle process on a
1000 MB file.
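
(For reference, the invocation was along these lines - a sketch, with a
made-up mount point:)

    cd /mnt/raid      # the filesystem under test
    bonnie -s 1000    # 1000 MB file, as described above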

               System 1                System 2
Seq output
----------
  per char      7268 K/s @ 66.7% CPU    5104 K/s @ 88.6% CPU
  block        12850 K/s @ 31.9% CPU   12922 K/s @ 16.4% CPU
  rewrite       8221 K/s @ 45.1% CPU    5973 K/s @ 16.9% CPU

Seq input
---------
  per char      8275 K/s @ 99.2% CPU    5058 K/s @ 96.1% CPU
  block        21856 K/s @ 46.4% CPU   13080 K/s @ 15.2% CPU

Random Seeks    293.0 /s @  8.7% CPU    282.3 /s @  5.7% CPU

--Malcolm

-- 
Malcolm Beattie [EMAIL PROTECTED]
Unix Systems Programmer
Oxford University Computing Services