On an Intel architecture machine you'll never get more than about 80MB/s
regardless of the number of SCSI buses or the speed of the disks. The
PCI bus becomes a bottleneck at this point.
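(Rough numbers, for context: a plain 32-bit/33MHz PCI slot tops out at
4 bytes x 33MHz = ~132MB/s in theory, and sustained disk DMA lands well
below that once bus arbitration and SCSI command overhead are paid for,
so a practical ceiling somewhere around 80MB/s is plausible. 64-bit
and/or 66MHz PCI raises the theoretical limit to 264-528MB/s, which is
part of why the 64-bit-only cards mentioned later in the thread are
interesting.)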
Another consideration, of course. But I think his problem was that he
couldn't get any higher.
I've been following these threads on sw raid over hw raid, etc., with
some curiosity. I also did testing with a Mylex DAC1164P, in my case
using 8 IBM Ultrastar 18ZX drives (10k rpm).
I get the following bonnie results on that system, just using hw raid,
for sequential input, sequential output, and random seeks:

                   input             output            random
                  MB/s   %cpu       MB/s   %cpu       /s      %cpu
   1drive-jbod   19.45   16.3      17.99   16.4      153.90    4.0
   raid0         48.49   42.1      25.48   23.1      431.00    7.4
   raid01        53.23   41.4      21.22   19.0      313.10    9.5
   raid5         52.47   39.3      21.35   19.8      365.60
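For reference, the MB/s figures above are block-sequential numbers; that
kind of test is essentially a timed loop of large write()/read() calls
against one big file. Here's a rough sketch of such a measurement (not
bonnie itself -- the path and file size are placeholders, and the file
needs to be comfortably larger than RAM or you end up timing the page
cache instead of the disks):

/* Crude sequential-throughput test in the spirit of bonnie's block
 * "output"/"input" numbers -- illustrative only, not bonnie itself. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

#define CHUNK   8192
#define NCHUNKS (128 * 1024)          /* 128K chunks * 8KB = 1GB file */

static double now(void)
{
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
        static char buf[CHUNK];
        double t0, t1;
        long i;
        int fd = open("/mnt/test/bigfile", O_RDWR | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) { perror("open"); return 1; }
        memset(buf, 'x', sizeof(buf));

        t0 = now();
        for (i = 0; i < NCHUNKS; i++)
                if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
        fsync(fd);
        t1 = now();
        printf("sequential output: %.2f MB/s\n",
               (double)NCHUNKS * CHUNK / (t1 - t0) / 1e6);

        lseek(fd, 0, SEEK_SET);
        t0 = now();
        for (i = 0; i < NCHUNKS; i++)
                if (read(fd, buf, CHUNK) != CHUNK) { perror("read"); return 1; }
        t1 = now();
        printf("sequential input:  %.2f MB/s\n",
               (double)NCHUNKS * CHUNK / (t1 - t0) / 1e6);

        close(fd);
        return 0;
}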
So in most cases you wrote data much faster than writing it?
Ummm... s/writing/reading/;
:)
James
On Wed, Aug 18, 1999 at 12:18:05PM -0400, James Manning wrote:
Again, your %cpu is high compared to what I've seen. I've never seen
anything at 99%. Anyone else?
My s/w raid5 CPU util has always been between 99 and 100% for writes.
If some kind soul could help me figure out kernel profiling, I'll
profile 2.2.12 doing block s/w raid5 writes.
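For what it's worth, the write-side cost is mostly parity generation:
raid5 has to XOR the data blocks of every stripe it writes. The real
driver uses tuned xor routines (it benchmarks several at boot), but the
bytes touched per write are the same as in the naive loop below, which
is an illustrative sketch only, not the actual md code:

#include <stddef.h>

/* XOR all the data blocks of a stripe together to produce the parity
 * block.  Illustrative only; sizes are in bytes. */
void compute_parity(unsigned char *parity, unsigned char *const data[],
                    size_t ndisks, size_t blocksize)
{
        size_t d, i;

        for (i = 0; i < blocksize; i++)
                parity[i] = 0;
        for (d = 0; d < ndisks; d++)
                for (i = 0; i < blocksize; i++)
                        parity[i] ^= data[d][i];
}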
James
--
On Wed, 18 Aug 1999, James Manning wrote:
I missed the start of this thread, so I don't know what RAID level you're
using. I did some RAID-0 tests with the new Linux RAID code back in March
on a dual 450MHz Xeon box. Throughput on a single LVD bus appears to peak
at about 55MB/s.
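(That squares with the bus arithmetic: Ultra2 LVD is 80MB/s theoretical,
and after SCSI protocol overhead, command turnaround, and the drives'
own media rates, sustained numbers in the mid-50s MB/s on a single bus
are about what people typically report.)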
On Wed, 18 Aug 1999, Gadi Oxman wrote:
I'd recommend verifying whether the following changes affect the s/w
raid-5 performance:
1. A kernel compiled with HZ=1024 instead of HZ=100 -- this
will decrease the latency between "i/o submitted to the raid
layer" and "i/o submitted to
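For anyone wanting to try Gadi's first suggestion: on a stock 2.2 x86
tree the tick rate is a compile-time constant, so the change amounts to
editing one define and rebuilding. A sketch, assuming HZ lives in
include/asm-i386/param.h on your tree as it does on stock 2.2:

/* include/asm-i386/param.h -- illustrative edit only */
#ifndef HZ
#define HZ 1024         /* stock kernels define this as 100 */
#endif

Note that some userspace has historically assumed 100 ticks/sec, so I'd
treat this as a benchmarking experiment rather than a production change.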
Date: Wed, 18 Aug 1999 09:18:00 -0400 (EDT)
From: James Manning [EMAIL PROTECTED]
FWIW, the eXtremeRAID 1100 cards are 64-bit PCI only (as are the ServeRAID
cards in my previous testing). Other testing I've done has shown many
situations where even our quad P6/200 machines (PC
On Fri, 13 Aug 1999, James Manning wrote:
To be honest, I'm still a little confused that I can't seem to find
any s/w or h/w combination that will yield better than 30MB/sec
writes to 20 drives (18.2GB Cheetah-3's operating over 2 channels,
both at 80MB/sec with LVD). Any tips? s/w raid
On Thu, Aug 12, 1999 at 04:47:49PM -0400, James Manning wrote:
...
subsequent mke2fs and bonnie runs seemed fine, so this is most likely
pretty safe to ignore, I suppose.
here are the results for s/w raid5 on top of 4 h/w raid0's (20 drives):
---Sequential Output
disk 0: /dev/rd/c0d0p1, 88866800kB, raid superblock at 88866688kB
disk 1: /dev/rd/c0d1p1, 35547120kB, raid superblock at 35547008kB
disk 2: /dev/rd/c0d2p1, 35547120kB, raid superblock at 35547008kB
disk 3: /dev/rd/c0d3p1, 35547120kB, raid superblock at 35547008kB
disk 4: /dev/rd/c0d4p1,
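As an aside, the superblock offsets in that listing are just the 0.90
persistent-superblock placement rule: the device size in 1K blocks is
rounded down to a 64K boundary and the last 64K chunk is reserved for
the superblock. A small self-contained check of that calculation,
mirroring (as far as I recall) the MD_NEW_SIZE_BLOCKS macro in the raid
patches, using the sizes from the listing above:

#include <stdio.h>

/* 0.90 persistent superblock: 64KB reserved at the end of the device,
 * aligned down to a 64KB boundary; sizes are in 1K blocks. */
#define MD_RESERVED_BLOCKS 64
#define MD_NEW_SIZE_BLOCKS(x) (((x) & ~(MD_RESERVED_BLOCKS - 1UL)) - MD_RESERVED_BLOCKS)

int main(void)
{
        unsigned long sizes[] = { 88866800UL, 35547120UL };  /* kB, from above */
        int i;

        for (i = 0; i < 2; i++)
                printf("%lukB device -> superblock at %lukB\n",
                       sizes[i], MD_NEW_SIZE_BLOCKS(sizes[i]));
        /* prints 88866688 and 35547008, matching the listing */
        return 0;
}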
In my s/w raid5 over h/w raid0 testing, I had just completed
s/w raid5 over 4 h/w raid0's (via Mylex DAC1164P, 5 drives each)
and recorded the bonnie results. After using Mylex's config util
(in a DOS reboot) and making the 20 drives into 10 raid0's with
2 drives each, I did the following:
- killed