On Mon, 14 Aug 2000, Corin Hartland-Swann wrote:
>I have tried this out, and found that the default settings were:
>elevator ID=232 read_latency=128 write_latency=8192 max_bomb_segments=4
(side note: Jens increased bomb segments to 32 in recent 2.2.17)
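For reference, these knobs were runtime-tunable with elvtune from util-linux; a minimal sketch (the device name is an example, and -r/-w are elvtune's read/write latency flags):

```shell
# Sketch: query and set the 2.2 elevator with elvtune (util-linux).
# /dev/hda is an example device; run as root on a 2.2.x-era system.
DEV=/dev/hda
if command -v elvtune >/dev/null 2>&1; then
    elvtune "$DEV"                   # show current read/write latency
    elvtune -r 128 -w 8192 "$DEV"    # set the defaults quoted above
else
    echo "elvtune not available; would run: elvtune -r 128 -w 8192 $DEV"
fi
```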
I think we can apply this patch on top of
>> Setting write_latency to 10,000,000 results in
>> similar throughput, but catastrophic seek performance:
>
>Odd...
I guess it was the tiotest "Seek" bug that I mentioned in the other email.
>to back up the values chosen. But the current defaults do impose performance
>problems, as
> /mnt/ 256 8192 32 13.3228 6.36% 22.8210 19.0% 151.544 0.73%
>
> So we're still seeing a drop in performance with 1 thread, and still
> seeing the same severe degradation 2.2.16 exhibits.
>
>
> Thanks,
>
> Corin
>
Hi, motivated by your earlier comparison between 2.2.1
> ... results in
> similar throughput, but catastrophic seek performance:
Odd...
> Now, does anyone (Andrea in particular) know where the defaults are set? I
include/linux/blkdev.h, ELEVATOR_DEFAULTS.
> assume that setting read_latency to much lower than write_latency was an
> accident, but can
Hi there,
I am CC:ing this to Andrea Arcangeli because he is credited at the top of
drivers/block/ll_rw_blk.c as writing the elevator code.
On Sun, 13 Aug 2000, Jens Axboe wrote:
> On Sun, Aug 13 2000, Corin Hartland-Swann wrote:
> > The fact remains that disk performance is much worse under 2.2.16 and
On Sun, Aug 13 2000, Corin Hartland-Swann wrote:
> The fact remains that disk performance is much worse under 2.2.16 and
> heavy loads than under 2.2.15 - what I was trying to find out was what
A new elevator was introduced into 2.2.16, that may be affecting
results. Try using elvtune.
ence the results at
> > all! d'oh!
>
> Linux is designed to have swap. I doubt anyone cares about how it
> behaves if you cripple it.
Since this is designed to test raw disk performance, I wanted to reduce
any other factors that might influence it. This includes redu
/mnt/ 256 8192 1 23.4496 9.70% 24.1711 20.6% 139.941 0.88%
/mnt/ 256 8192 2 16.9398 7.53% 24.0482 20.3% 136.706 0.69%
/mnt/ 256 8192 4 15.0166 6.82% 23.7892 20.2% 139.922 0.69%
/mnt/ 256 8192 16 13.5901 6.38% 23.2326 19.4% 147.956 0.70%
/mnt/ 256 8192 32 13.3228 6.36% 22.8210 19.0% 151.544 0.73%
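Pulling two numbers out of the table above, the sequential-read drop from 1 thread to 32 threads works out to about 43%:

```shell
# Read MB/s at 1 thread vs 32 threads, from the tiobench table above.
echo "23.4496 13.3228" |
    awk '{ printf "drop: %.1f%%\n", 100 * ($1 - $2) / $1 }'
```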
reveal bottlenecks.
>
> > > I used tiotest to benchmark, using a file size of 256MB, block size of 4K,
> > > and with 1, 2, 4, 16, 32 threads. The performance starts to get hit as
>
> I forgot to add that I ran each test five times so as to get consistent
> results.
OS, or
kernel command line, but derived from the partition table.
Disk geometry is totally unrelated to disk performance.
Andries
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/
of 256MB, block size of 4K,
> > and with 1, 2, 4, 16, 32 threads. The performance starts to get hit as
I forgot to add that I ran each test five times so as to get consistent
results.
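Averaging the five runs is a one-liner; the throughput figures below are invented for illustration:

```shell
# Mean of five per-run throughput figures (sample numbers, not real results).
printf '%s\n' 23.1 23.6 23.4 23.5 23.4 |
    awk '{ sum += $1 } END { printf "avg %.2f MB/s over %d runs\n", sum / NR, NR }'
```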
> does larger blocksizes change the picture at all? I'm wondering whether
> readahead is ef
accesses (on IDE) rather than a RAID problem.
I benchmarked single IDE disk performance on the following setup:
Intel 810E Chipset Motherboard (CA810EAL), Pentium III-667, 32M RAM,
Maxtor DiamondMax Plus 40 40.9GB UDMA66 Disk, Model 54098U8
I have attached the (edited) kernel config I used for all
IDE
> controllers form a raid5 software raid, reiserfs is the filesystem used
> on /dev/md0
>
> I'm a bit disappointed with the read performance being about the same as
> reading from a single disk (using bonnie with size set to 500MB)
>
> - Is bonnie not the right be
the filesystem used
on /dev/md0
I'm a bit disappointed with the read performance being about the same as
reading from a single disk (using bonnie with size set to 500MB)
- Is bonnie not the right benchmark to use here? What may be better ones
- Is there still another kernel patch needed for
> --Sequential Create-- --Random Create--
> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
> 30 174 99 + 93 9417 93 180 99
> -Original Message-
> From: James Manning [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 27, 2000 6:37 PM
> To: Linux Raid list (E-mail)
> Subject: Re: performance statistics for RAID?
>
> [Gregory Leblanc]
> > Is there any chance of keeping track
[Gregory Leblanc]
> Is there any chance of keeping track of these with software RAID?
AFAIK, sct's patch to give sar-like data out of /proc/partitions gives
all of the above stats and more... neat patch :) The user-space tool
should be in the same dir. And, FWIW, I get asked about how people ca
I just read that message from James Manning on some performance tuning, and
it made me think about this. On some of our RAID controllers, they collect
statistics for the RAID volumes. The one that I'm thinking of collects
things like this, except that I've trimmed some of the
> -Original Message-
> From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 23, 2000 12:36 AM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
>
[snip]
> > > What version of raidtools should I use against a stock 2.2.16
Gregory Leblanc wrote:
>
> > -Original Message-
> > From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
> > Sent: Wednesday, June 21, 2000 5:04 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: Benchmarks, raid1 (was raid0) performance
> >
> > Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
: Look at Bonnie's seek performance. It should rise.
: For single sequential reads, readbalancer doesn't help.
: Bonnie tests only single sequential reads.
:
: If you want to test with multiple io threads, try
: http://tiobench.sourceforge.net
Great, thanks, I'll give this a try!
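A multi-threaded run with tiotest would look something like this (the -d/-f/-b/-t flags are from tiotest's usage output; check your version):

```shell
# Sketch: 8 threads, 256 MB file, 4 KB blocks, target directory /mnt.
THREADS=8; SIZE=256; BLK=4096; DIR=/mnt
if command -v tiotest >/dev/null 2>&1; then
    tiotest -d "$DIR" -f "$SIZE" -b "$BLK" -t "$THREADS"
else
    echo "tiotest not installed"
fi
```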
patched cleanly. But bonnie++
> is showing no change in read performance. I am using IDE drives,
> but they are on separate controllers (/dev/hda, and /dev/hdc)
> with both drives configured as masters.
>
> Anyone have any tricks up their sleeves?
Look at Bonnie's seek performance. It should rise.
: None offhand, but can you post your test configuration/parameters?
: Things like test size, relavent portions of /etc/raidtab, things
: like that. I know this should be a whole big list, but I can't think
: of all of them right now. FYI, I don't do IDE RAID (or IDE at all),
: but it's pretty aw
> -Original Message-
> From: Diegmueller, Jason (I.T. Dept) [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 21, 2000 10:46 AM
> To: 'Gregory Leblanc'; 'Hugh Bragg'; [EMAIL PROTECTED]
> Subject: RE: Benchmarks, raid1 (was raid0) performance
nstallation yesterday has brought me back.
Naturally, when I saw mention of raid1readbalance, I immediately
tried it.
I'm running 2.2.17pre4, and it patched cleanly. But bonnie++
is showing no change in read performance. I am using IDE drives,
but they are on separate controllers (/dev/hda, and /dev/hdc)
> -Original Message-
> From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 21, 2000 5:04 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
>
> Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
> improves read performance right? At what cost?
Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
improves read performance right? At what cost?
Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
applying mingo's raid-2.2.16-A0 patch?
What version of raidtools should I use against a stock 2.2.16
system with
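Whichever ordering turns out to be right, the mechanics of stacking the two patches would look like this (tree location and -p level are assumptions; --dry-run tests the fit without changing anything):

```shell
# Sketch: test-apply mingo's raid patch, then the read-balance patch,
# against a 2.2.16 tree. File names are as given in the thread.
SRC=/usr/src/linux
if [ -d "$SRC" ]; then
    ( cd "$SRC" &&
      patch -p1 --dry-run < raid-2.2.16-A0 &&
      patch -p1 --dry-run < raid1readbalance-2.2.15-B2 ) ||
        echo "patches not present or did not apply cleanly"
else
    echo "no kernel tree at $SRC"
fi
```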
FYI!
A whole new line of very low-cost, feature-filled Linux-based NAS appliances.
www.raidzone.com
___
Are you a Techie? Get Your Free Tech Email Address Now!
Many to choose from! Visit http://www.TechEmail.com
> -Original Message-
> From: Jeff Hill [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 1:26 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
>
> Gregory Leblanc wrote:
>
>
> -Original Message-
> From: Jeff Hill [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 3:56 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
>
> Gregory Leblanc wrote:
Gregory Leblanc wrote:
>
> I don't have anything that caliber to compare against, so I can't really
> say. Should I assume that you don't have Mika's RAID1 read balancing patch?
I have to admit I was ignorant of the patch (I had skimmed the archives,
but not well enough). Searched the archive f
Bug1: Maybe I'm missing something here; why aren't reads just as fast as writes?
The cynic in me suggests that the RAID driver has to wait for the
information to be read off the disks, but it doesn't have to wait for the
writes to complete before returning, but I haven't read the code.
-HJC
On Tue, Jun 13, 2000 at 04:51:46AM +1000, bug1 wrote:
> Maybe I'm missing something here; why aren't reads just as fast as writes?
I note the same on a 2 way IDE RAID-1 device, with both disks on a separate
bus.
Regards,
bert hubert
--
| http://www.rent-a-ne
ng about IDE drives? Seems
> quite possible that there aren't any single drives that are hitting this
> speed, so it's only showing up with RAID.
> Greg
Is there any place where benchmark results are listed? I've finally
gotten my RAID-1 running and am trying to se
> -Original Message-
> From: bug1 [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 10:39 AM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid0 performance, 1,2,3,4 drives
>
> Ingo Molnar wrote:
> >
> > could you send me your /etc/raidtab?
Hello,
Just to let you know, I also see very similar IDE-RAID0 performance
problems:
I have RAID0 with two 30G DiamondMax (Maxtor) ATA-66 drives connected to
a Promise Ultra66 controller.
I am using kernel 2.4.0-test1
Ingo Molnar wrote:
>
> could you send me your /etc/raidtab? I've tested the performance of 4-disk
> RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
> (could you also send the 'hdparm -t /dev/md0' results, do you see a
> degradation in those numbers as well?)
Adrian Head wrote:
>
> I have seen people complain about similar issues on the kernel mailing
> list so maybe there is an actual kernel problem.
>
> What I have always wanted to know but haven't tested yet is to test raid
> performance with and without the noatime att
I have seen people complain about similar issues on the kernel mailing
list so maybe there is an actual kernel problem.
What I have always wanted to know but haven't tested yet is to test raid
performance with and without the noatime attribute in /etc/fstab I
think that when Linux re
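For the noatime experiment mentioned above, the option goes in the fourth (options) field of /etc/fstab; the device and mount point here are examples:

```
/dev/md0   /raid   ext2   defaults,noatime   1 2
```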
Ingo Molnar wrote:
>
> could you send me your /etc/raidtab? I've tested the performance of 4-disk
> RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
> (could you also send the 'hdparm -t /dev/md0' results, do you see a
> degradation in those numbers as well?)
could you send me your /etc/raidtab? I've tested the performance of 4-disk
RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
(could you also send the 'hdparm -t /dev/md0' results, do you see a
degradation in those numbers as well?)
it could either be some s
a single
element in a raid0 array (of 1) seems to show raid adds a considerable
overhead to read performance, but still reads aren't as fast as writes on
hde5; this isn't a very practical benchmark anyway.
Maybe I'm missing something here; why aren't reads just as fast as writes?
4-way raid0 (disk
James Manning <[EMAIL PROTECTED]> wrote:
> [Gregory Leblanc]
> > > [root@bod tiobench-0.3.1]# ./tiobench.pl --dir /raid5
> > > No size specified, using 200 MB
> > > Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec
> >
> > Try making the size at least double that of ram.
> Actually,
[Gregory Leblanc]
> Sounds good, James, but Darren said that his machine had 256MB of ram. I
> wouldn't have mentioned it, except that it wasn't using enough, I think.
it tries to stat /proc/kcore currently. no procfs and it'll fail to
get a good number... I've thought about other approaches, t
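The rule of thumb behind that check (test file at least twice physical RAM) can also be read straight from /proc/meminfo rather than /proc/kcore; a sketch, assuming a Linux procfs:

```shell
# MemTotal is reported in kB; suggest a test file of at least 2x RAM, in MB.
awk '/^MemTotal:/ { printf "suggest at least %d MB\n", 2 * $2 / 1024 }' /proc/meminfo
```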
> -Original Message-
> From: James Manning [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 09, 2000 12:46 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]
> Subject: Re: bonnie++ for RAID5 performance statistics
>
>
> [Gregory Leblanc]
> > > [root@bod tiobench-0.3.1]# ./tiobench.pl --dir /raid5
[Gregory Leblanc]
> > [root@bod tiobench-0.3.1]# ./tiobench.pl --dir /raid5
> > No size specified, using 200 MB
> > Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec
>
> Try making the size at least double that of ram.
Actually, I do exactly that, clamping at 200MB and 2000MB currently.
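That clamping logic, as a tiny shell function (floor 200 MB, ceiling 2000 MB):

```shell
# Clamp a requested test size (in MB) into the [200, 2000] range.
clamp() {
    v=$1
    [ "$v" -lt 200 ] && v=200
    [ "$v" -gt 2000 ] && v=2000
    echo "$v"
}
clamp 90      # below the floor: raised to 200
clamp 512     # in range: unchanged
clamp 4096    # above the ceiling: lowered to 2000
```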
3:29 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: bonnie++ for RAID5 performance statistics
> -Original Message-
> From: Darren Evans [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 07, 2000 3:02 AM
> To: [EMAIL PROTECTED]
> Subject: bonnie++ for RAID5 performance statistics
> -Original Message-
> From: Darren Evans [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, June 08, 2000 2:16 AM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]
> Subject: RE: bonnie++ for RAID5 performance statistics
>
> Hi Greg,
>
> Yeah I know sorry about t
> -Original Message-
> From: Darren Evans [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 07, 2000 3:02 AM
> To: [EMAIL PROTECTED]
> Subject: bonnie++ for RAID5 performance statistics
>
> I guess this kind of thing would be great to be detailed in the FAQ.
Di
I guess this kind of thing would be great to be detailed in the FAQ.
Anyone care to swap statistics so I know how valid these are.
This is with an Adaptec AIC-7895 Ultra SCSI host adapter.
Is this good, reasonable or bad timing?
[darren@bod bonnie++-1.00a]$ bonnie++ -d /raid5 -m bod -s 90mb
Every drive got its own UATA66 channel.
With kernel 2.3.47 read performance was about 37.5 MB/s and write at about
33 MB/s. The bad thing whith this kernel was that the filesystem got
corrupt.. :)
Now I'm running 2.4.0-test1-ac8, now it reads with about 20 MB/s and
writes at 30 MB/s, wha
> -Original Message-
> From: octave klaba [mailto:[EMAIL PROTECTED]]
> Sent: Monday, May 15, 2000 7:25 AM
> To: Thomas Scholten
> Cc: Linux Raid Mailingliste
> Subject: Re: How to test raid5 performance best ?
>
> > 1. Which tools should I use to test RAID performance?
Hi,
> 1. Which tools should I use to test RAID performance?
tiotest.
I lost the official url
you can download it from http://ftp.ovh.net/tiotest-0.25.tar.gz
> 2. is it possible to add disks to a raid5 after its been started ?
good question ;)
--
Regards,
oCtAvE
Hello All,
Some days ago I joined the Software-Raid-Club :) I'm now running a SCSI-Raid5
with three 2 GB partitions. I chose a chunk-size of 32 kb. Referring to the
FAQ I'm told to experiment to find the best-performing chunk-size, but I
definitely have no good clue how to test performance :-/
On Fri, 5 May 2000, Michael Robinton wrote:
> > > >
> > > > Not entirely, there is a fair bit more CPU overhead running an
> > > > IDE bus than a proper SCSI one.
> > >
> > > A "fair" bit on a 500mhz+ processor is really negligible.
> >
> >
> > Ehem, a fair bit on a 500Mhz CPU is
ly
writing to the 1.3, however it has to go through the raid layer as well.
This tells me that appending drives (even if slower) to give more space
doesn't affect performance (much) compared to the single drive.
One more question I have is how do you tell how much cpu time something
compiled
> > >
> > > Not entirely, there is a fair bit more CPU overhead running an
> > > IDE bus than a proper SCSI one.
> >
> > A "fair" bit on a 500mhz+ processor is really negligible.
>
>
> Ehem, a fair bit on a 500Mhz CPU is ~ 30%. I have watched a
> *single* UDMA66 drive (with read ahead
On Thu, 4 May 2000, Michael Robinton wrote:
> >
> > Not entirely, there is a fair bit more CPU overhead running an
> > IDE bus than a proper SCSI one.
>
> A "fair" bit on a 500mhz+ processor is really negligible.
Ehem, a fair bit on a 500Mhz CPU is ~ 30%. I have watched a
*single*
> From: Gregory Leblanc [mailto:[EMAIL PROTECTED]]
>
> ..., that would suck up a lot more host CPU processing power than
> the 3 SCSI channels that you'd need to get 12 drives and avoid bus
>saturation.
not to mention the obvious bus slot loading problem ;-)
rc
> -Original Message-
> From: Michael Robinton [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 04, 2000 10:31 PM
> To: Christopher E. Brown
> Cc: Chris Mauritz; bug1; [EMAIL PROTECTED]
> Subject: Re: performance limitations of linux raid
>
> On Thu, 4 May 2000, Ch
(I really hate how Outlook makes you answer in FRONT of the message,
what a dumb design...)
Well, without spending the time I should thinking about my answer, I'll say
there are many things which impact performance, most of which we've seen
talked about here:
1 - how fast c
On Thu, 4 May 2000, Christopher E. Brown wrote:
> On Wed, 3 May 2000, Michael Robinton wrote:
>
> > The primary limitation is probably the rotational speed of the disks and
> > how fast you can rip data off the drives. For instance, the big IBM
> > drives (20 - 40 gigs) have a limitation of ab
I think the original answer was more to the point of Performance Limitation.
The mechanical delays inherent in the disk rotation are much slower than
the electronic or optical speeds in the connection between disk and
computer.
If you had a huge bank of semiconductor memory, or a huge cache or
On Wed, 3 May 2000, Michael Robinton wrote:
> The primary limitation is probably the rotational speed of the disks and
> how fast you can rip data off the drives. For instance, the big IBM
> drives (20 - 40 gigs) have a limitation of about 27mbs for both the 7200
> and 10k rpm models. The Driv
> -Original Message-
> From: Carruth, Rusty [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, May 04, 2000 8:36 AM
> To: [EMAIL PROTECTED]
> Subject: RE: performance limitations of linux raid
>
> > The primary limitation is probably the rotational speed of
> the
On Thu, May 04, 2000 at 08:35:52AM -0700, Carruth, Rusty wrote:
>
> > The primary limitation is probably the rotational speed of the disks and
> > how fast you can rip data off the drives. For instance, ...
>
> Well, yeah, and so whatever happened to optical scsi? I heard that you
> could ge
> The primary limitation is probably the rotational speed of the disks and
> how fast you can rip data off the drives. For instance, ...
Well, yeah, and so whatever happened to optical scsi? I heard that you
could get 1 gbit/sec (or maybe gByte?) xfer, and you could go 1000 meters -
or is thi
The primary limitation is probably the rotational speed of the disks and
how fast you can rip data off the drives. For instance, the big IBM
drives (20 - 40 gigs) have a limitation of about 27mbs for both the 7200
and 10k rpm models. The Drives to come will have to make trade-offs
between dens
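Those per-drive numbers make the bus the bottleneck quickly; for example, four drives at the ~27 MB/s cited above already oversubscribe a 40 MB/s UltraWide bus:

```shell
# 4 drives x 27 MB/s each vs. a 40 MB/s UltraWide SCSI bus.
awk 'BEGIN { printf "4 drives: %d MB/s offered vs 40 MB/s bus\n", 4 * 27 }'
```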
> From [EMAIL PROTECTED] Wed May 3 20:38:05 2000
>
> Umm, I can get 13,000K/sec to/from ext2 from a *single*
> UltraWide Cheeta (best case, *long* reads, no seeks). 100Mbit is only
> 12,500K/sec.
>
>
> A 4 drive UltraWide Cheeta array will top out an UltraWide bus
> at 40MByte/sec
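The 100 Mbit comparison above is plain unit conversion: 100 Mbit/s divided by 8 bits per byte is 12,500 KB/s (decimal units, as in the post):

```shell
# 100 Mbit/s -> KB/s.
awk 'BEGIN { printf "%d KB/s\n", 100 * 1000 * 1000 / 8 / 1000 }'
```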
On Sun, 23 Apr 2000, Chris Mauritz wrote:
> > I wonder what the fastest speed any linux software raid has gotten, it
> > would be great if the limitation was a hardware limitation i.e. cpu,
> > (scsi/ide) interface speed, number of (scsi/ide) interfaces, drive
> > speed. It would be interesting t
If you are looking for one of the highest-performance systems we have ever seen, visit www.raidzone.com. These are real systems.
John
More notes on the 8 IDE drive raid5 system I built, and the 3ware controller.
Edwin Hakkennes wrote:
May I ask how these ide-ports and the attached disks
show up under Redhat 6.2? Are they just standard ATA33 or ATA66 controllers
which can be used in software raid? Or is only the sum of the atta
Hi!
Somebody uttered around Apr 27 2000:
> > Thanks - where can I find the archive?
> >
> One is at http://kernelnotes.org/lnxlists/linux-raid/
>
> > Also - is it a pretty stable patch? (This is a production server)
> >
> I don't know, sorry.
I'm running 2.2.14 with mingos raid patch and th
On Thu, 27 Apr 2000, Corin Hartland-Swann wrote:
>
> Holger,
>
> On Thu, 27 Apr 2000, Holger Kiehl wrote:
> > On Thu, 27 Apr 2000, Corin Hartland-Swann wrote:
> > > I was hoping that RAID-1 would 'stripe' reads between the disks,
> > > increasing read performance to RAID-0 levels, but leaving write
On Thu, 27 Apr 2000, Corin Hartland-Swann wrote:
>
>
> BTW, I was pleased to discover that Linux had absolutely no problems with
> Ultra160-SCSI - 96MB/s isn't bad at all, is it?
>
> I was hoping that RAID-1 would 'stripe' reads between the disks,
> increasing read performance to RAID-0 levels, but leaving write
Holger,
On Thu, 27 Apr 2000, Holger Kiehl wrote:
> On Thu, 27 Apr 2000, Corin Hartland-Swann wrote:
> > I was hoping that RAID-1 would 'stripe' reads between the disks,
> > increasing read performance to RAID-0 levels, but leaving write
> > performance at singl
On Thu, 27 Apr 2000, Corin Hartland-Swann wrote:
> I was hoping that RAID-1 would 'stripe' reads between the disks,
> increasing read performance to RAID-0 levels, but leaving write
> performance at single-disk levels. Does anyone know why it doesn't do
> this?
>
'safe' copy on sdd2.
I'd like the /home partition to be as fast as possible, so I thought that
RAID 0+1 was a good solution (giving approximately quadruple read and
double write performance). After experimenting, I got the following
results:
Read (MB/s)
> 2 Masters from the ASUS P3C2000 CDU Board, 2 Masters on a CMD648
> based PCI controller and 4 Masters on a 3Wave 4 port RAID
> Controller. I don't use the 3Wave as a RAID controller though, just
> as a very good 4 channel Ultra66 board. They also make an 8 channel
> board, and that's what I'm go
The coolest guy you know wrote:
> Clay Claiborne wrote:
> >
> > For what its worth, we recently built an 8 ide drive 280GB raid5 system.
> > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With
> > DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to
> In the last 24 hours I've been getting them when e2fsck runs after
> rebooting. The usual cause of rebooting is an irq causing a lockup, or
> endlessly looping trying to get an irq.
>
> I'm convinced it's my hpt366 controller; I've mentioned my problem in a few
> channels, no luck yet.
>
> I used
> What stripe size, CPU and memory is used here?
System is a dual-cpu PII 450Mhz with 256MB RAM.
Disks are configured with chunk-size of 32kb (ext2 block-size is 4kb).
> Is this a dual CPU system perhaps? Something
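With a 32 kB chunk and 4 kB ext2 blocks, the stride is 8 blocks per chunk; e2fsprogs of this era could be told about it at mkfs time (the exact `mke2fs -R stride=8` spelling is an assumption for your version):

```shell
# stride = RAID chunk size / filesystem block size, in blocks.
awk 'BEGIN { printf "stride=%d\n", (32 * 1024) / 4096 }'
```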
remo strotkamp wrote:
>
> bug1 wrote:
> >
> > Clay Claiborne wrote:
> > >
> > > For what its worth, we recently built an 8 ide drive 280GB raid5 system.
> > > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With
> > > DBENCH and 1 client we got 44.5 MB/sec with 3 clients
6833 99.2 42532 44.4 18397 42.2 7227 98.3 47754 33.0 182.8 1.5
> **
>
> When doing _actual_ work (I/O bound reads on huge data sets), I often
> see sustained read performance as high as 50MB/s.
>
> Tests on the individual drives show 28+ MB/s.
What stripe size, CPU and memory is used here? I hav
On Tue, Apr 25, 2000 at 11:38:59PM +0100, Paul Jakma wrote:
> > Clue: this is the way every RAID controller I know of works these days.
> what??? Do you know what you are talking about?
Yep, I think so.
I think I misunderstood you ("getting called by BIOS").
> a REAL raid contr
On Tue, 25 Apr 2000, Gregory Leblanc wrote:
Then you've never used a RAID card. I've got a number of RAID
cards here, 2 from compaq, 1 from DPT, and another from HP
(really AMI), and all of them implement RAID functions like
striping, double writes (mirroring), and parity calculations fo
On Wed, 26 Apr 2000, Daniel Roesen wrote:
Clue: this is the way every RAID controller I know of works these days.
what??? Do you know what you are talking about? hey,
i've got some $1000 raid cards for you. (my markup is $980).
a REAL raid controller is a *complete computer*
> -Original Message-
> From: Daniel Roesen [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 25, 2000 3:07 PM
> To: [EMAIL PROTECTED]
> Subject: Re: performance limitations of linux raid
>
>
> On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote:
> &
On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote:
> Clue: the Promise IDE RAID controller is NOT a hardware RAID
> controller.
>
> Promise IDE RAID == Software RAID where the software is written by
> Promise and sitting on the ROM on the Promise card getting called by
> the BIOS.
Clue:
On Mon, 24 Apr 2000, Frank Joerdens wrote:
I've been toying with the idea of getting one of those for a while, but
there doesn't seem to be a linux driver for the FastTrack66 (the RAID
card), only for the Ultra66 (the not-hacked IDE controller), and that
driver has only 'Experimental' sta
bug1 wrote:
>
> Clay Claiborne wrote:
> >
> > For what its worth, we recently built an 8 ide drive 280GB raid5 system.
> > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With
> > DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
> > 43MB/sec.
bug1 wrote:
>
> >
> > I don't believe the specs either, because they are for the "ideal" case.
> > However, I think that either your benchmark is flawed, or you've got a
> > crappy controller. I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in
> > a machine at home, and I can easily read at
's
SCSI that can do that ;)
The point is, comparing speed of SCSI vs any IDE variant is like
comparing apples and oranges. That said, comparing two drives of any
variant, and basing their performance upon the rotational speed is also
an error. RPMs are not the sole determining factor. Other factors
in
> A 7200RPM IDE drive is faster than a 5400RPM SCSI drive, and a 10kRPM
> SCSI drive is faster than a 7200RPM drive.
>
> If you have two 7200RPM drives, one scsi and one ide, each on there own
> channel, then they should be about the same speed.
>
Not entirely true - the DMA capabilities of ID
Clay Claiborne wrote:
>
> For what its worth, we recently built an 8 ide drive 280GB raid5 system.
> Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With
> DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
> 43MB/sec.
> The system is a 600Mhz
>
> I don't believe the specs either, because they are for the "ideal" case.
> However, I think that either your benchmark is flawed, or you've got a
> crappy controller. I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in
> a machine at home, and I can easily read at 7MB/sec from it under S
On Mon, 24 Apr 2000, Clay Claiborne wrote:
> For what its worth, we recently built an 8 ide drive 280GB raid5 system.
> Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With
> DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
> 43MB/sec.
> Th
> -Original Message-
> From: Scott M. Ransom [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 24, 2000 6:13 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; Gregory Leblanc; bug1
> Subject: RE: performance limitations of linux raid
>
>
--Sequential Output-- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
6833 99.2 42532 44.4 18397 42.2 7227 98.3 47754 33.0 182.8 1.5
* *
For what it's worth, we recently built an 8 ide drive 280GB raid5 system.
Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With
DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
43MB/sec.
The system is a 600Mhz P-3 on a ASUS P3C2000 with 256MB of