Re: Degrading disk read performance under 2.2.16

2000-08-15 Thread Andrea Arcangeli
On Mon, 14 Aug 2000, Corin Hartland-Swann wrote: >I have tried this out, and found that the default settings were: >elevator ID=232 read_latency=128 write_latency=8192 max_bomb_segments=4 (side note: Jens increased bomb segments to 32 in recent 2.2.17) I think we can apply this patch on top of
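For reference, the parameters quoted above are the ones exposed by the elvtune utility. A hedged sketch (the device name is a placeholder, and the -b flag for bomb segments is assumed from the elvtune of that era):

```shell
# Illustrative only: elvtune (2.2.16-era util-linux) reads and sets the
# new elevator's parameters per block device. /dev/hda is a placeholder.
elvtune /dev/hda                      # print current ID/latency settings
elvtune -r 128 -w 8192 -b 4 /dev/hda  # the defaults quoted in this thread
```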

Re: Degrading disk read performance under 2.2.16

2000-08-14 Thread Andrea Arcangeli
te_latency to 10,000,000 results in >> similar throughput, but catastrophic seek performance: > >Odd... I guess it was the tiotest "Seek" bug that I mentioned in the other email. >to backup the values chosen. But the current defaults do impose performance >problems, as

Re: Degrading disk read performance under 2.2.16

2000-08-14 Thread bug1
8192 32 13.3228 6.36% 22.8210 19.0% 151.544 0.73% > > So we're still seeing a drop in performance with 1 thread, and still > seeing the same severe degradation 2.2.16 exhibits. > > > Thanks, > > Corin > Hi, motivated by your earlier comparison between 2.2.1

Re: Degrading disk read performance under 2.2.16

2000-08-14 Thread Jens Axboe
lts in > similar throughput, but catastrophic seek performance: Odd... > Now, does anyone (Andrea in particular) know where the defaults are set? In include/linux/blkdev.h, ELEVATOR_DEFAULTS. > assume that setting read_latency to much lower than write_latency was an > accident, but can't

Re: Degrading disk read performance under 2.2.16

2000-08-14 Thread Corin Hartland-Swann
Hi there, I am CC:ing this to Andrea Arcangeli because he is credited at the top of drivers/block/ll_rw_blk.c as writing the elevator code. On Sun, 13 Aug 2000, Jens Axboe wrote: > On Sun, Aug 13 2000, Corin Hartland-Swann wrote: > > The fact remains that disk performance is much wo

Re: Degrading disk read performance under 2.2.16

2000-08-13 Thread Jens Axboe
On Sun, Aug 13 2000, Corin Hartland-Swann wrote: > The fact remains that disk performance is much worse under 2.2.16 and > heavy loads than under 2.2.15 - what I was trying to find out was what A new elevator was introduced into 2.2.16, that may be affecting results. Try using elvtun

Re: Degrading disk read performance under 2.2.16

2000-08-13 Thread Corin Hartland-Swann
ence the results at > > all! d'oh! > > Linux is designed to have swap. I doubt anyone cares about how it > behaves if you cripple it. Since this is designed to test raw disk performance, I wanted to reduce any other factors that might influence it. This includes redu

Re: Degrading disk read performance under 2.2.16

2000-08-11 Thread Corin Hartland-Swann
1 23.4496 9.70% 24.1711 20.6% 139.941 0.88% /mnt/ 256 8192 2 16.9398 7.53% 24.0482 20.3% 136.706 0.69% /mnt/ 256 8192 4 15.0166 6.82% 23.7892 20.2% 139.922 0.69% /mnt/ 256 8192 16 13.5901 6.38% 23.2326 19.4% 147.956 0.70% /mnt/ 256 8192 32 13.3228 6.36%

Re: Degrading disk read performance under 2.2.16

2000-08-11 Thread Andre Hedrick
reveal bottlenecks. > > > > I used tiotest to benchmark, using a file size of 256MB, block size of 4K, > > > and with 1, 2, 4, 16, 32 threads. The performance starts to get hit as > > I forgot to add that I ran each test five times so as to get consistent > results.

Re: Degrading disk read performance under 2.2.16

2000-08-11 Thread Andries Brouwer
OS, or kernel command line, but derived from the partition table. Disk geometry is totally unrelated to disk performance. Andries

Re: Degrading disk read performance under 2.2.16

2000-08-11 Thread Corin Hartland-Swann
of 256MB, block size of 4K, > > and with 1, 2, 4, 16, 32 threads. The performance starts to get hit as I forgot to add that I ran each test five times so as to get consistent results. > does larger blocksizes change the picture at all? I'm wondering whether > readahead is ef

Degrading disk read performance under 2.2.16

2000-08-10 Thread Corin Hartland-Swann
accesses (on IDE) rather than a RAID problem. I benchmarked single IDE disk performance on the following setup: Intel 810E Chipset Motherboard (CA810EAL), Pentium III-667, 32M RAM, Maxtor DiamondMax Plus 40 40.9GB UDMA66 Disk, Model 54098U8 I have attached the (edited) kernel config I used for all

Re: Read performance bad in 2.4.0-test5-pre3

2000-07-25 Thread bug1
IDE > controllers form a raid5 software raid, reiserfs is the filesystem used > on /dev/md0 > > I'm a bit disappointed with the read performance being about the same as > reading from a single disk (using bonnie with size set to 500MB) > > - Is bonnie not the right be

Read performance bad in 2.4.0-test5-pre3

2000-07-25 Thread Nils Rennebarth
the filesystem used on /dev/md0 I'm a bit disappointed with the read performance being about the same as reading from a single disk (using bonnie with size set to 500MB) - Is bonnie not the right benchmark to use here? What may be better ones - Is there still another kernel patch needed for

Re: Performance gap between 2.2.14 and 2.4.0-test4 kernels

2000-07-24 Thread bug1
ntial Create-- --Random Create--
> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
> 30 174 99 + 93 9417 93 180 99

RE: performance statistics for RAID?

2000-06-27 Thread Gregory Leblanc
> -Original Message- > From: James Manning [mailto:[EMAIL PROTECTED]] > Sent: Tuesday, June 27, 2000 6:37 PM > To: Linux Raid list (E-mail) > Subject: Re: performance statistics for RAID? > > [Gregory Leblanc] > > Is there any chance of keeping track

Re: performance statistics for RAID?

2000-06-27 Thread James Manning
[Gregory Leblanc] > Is there any chance of keeping track of these with software RAID? AFAIK, sct's patch to give sar-like data out of /proc/partitions gives all of the above stats and more... neat patch :) The user-space tool should be in the same dir. And, FWIW, I get asked about how people ca
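The stock /proc/partitions layout that sct's patch extends can be parsed like this (the sample is inlined so the sketch stands alone; on a patched kernel the sar-like I/O counters would appear as further columns after the name field):

```shell
# Parse the stock /proc/partitions columns: major minor #blocks name.
# Sample data is inlined so this runs anywhere.
sample='major minor  #blocks  name

   3     0   40088160 hda
   3     1   40088128 hda1'
echo "$sample" | awk 'NR > 2 && NF >= 4 { printf "%-6s %10d blocks\n", $4, $3 }'
```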

performance statistics for RAID?

2000-06-27 Thread Gregory Leblanc
I just read that message from James Manning on some performance tuning, and it made me think about this. On some of our RAID controllers, they collect statistics for the RAID volumes. The one that I'm thinking of collects things like this, except that I've trimmed some of the

RE: Benchmarks, raid1 (was raid0) performance

2000-06-23 Thread Gregory Leblanc
> -Original Message- > From: Hugh Bragg [mailto:[EMAIL PROTECTED]] > Sent: Friday, June 23, 2000 12:36 AM > To: Gregory Leblanc > Cc: [EMAIL PROTECTED] > Subject: Re: Benchmarks, raid1 (was raid0) performance > [snip] > > > What version of raidtools shoul

Re: Benchmarks, raid1 (was raid0) performance

2000-06-23 Thread Hugh Bragg
Gregory Leblanc wrote: > > > -Original Message- > > From: Hugh Bragg [mailto:[EMAIL PROTECTED]] > > Sent: Wednesday, June 21, 2000 5:04 AM > > To: [EMAIL PROTECTED] > > Subject: Re: Benchmarks, raid1 (was raid0) performance > > > > Patch h

RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)
: Look at Bonnie's seek performance. It should rise. : For single sequential reads, readbalancer doesn't help. : Bonnie tests only single sequential reads. : : If you want to test with multiple io threads, try : http://tiobench.sourceforge.net Great, thanks, I'll give this a try!

Re: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Mika Kuoppala
patched cleanly. But bonnie++ > is showing no change in read performance. I am using IDE drives, > but they are on separate controllers (/dev/hda, and /dev/hdc) > with both drives configured as masters. > > Anyone have any tricks up their sleeves? Look at Bonnie's seek performance. I

RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)
: None offhand, but can you post your test configuration/parameters? : Things like test size, relevant portions of /etc/raidtab, things : like that. I know this should be a whole big list, but I can't think : of all of them right now. FYI, I don't do IDE RAID (or IDE at all), : but it's pretty aw

RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Gregory Leblanc
> -Original Message- > From: Diegmueller, Jason (I.T. Dept) [mailto:[EMAIL PROTECTED]] > Sent: Wednesday, June 21, 2000 10:46 AM > To: 'Gregory Leblanc'; 'Hugh Bragg'; [EMAIL PROTECTED] > Subject: RE: Benchmarks, raid1 (was raid0) performa

RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)
nstallation yesterday has brought me back. Naturally, when I saw mention of raid1readbalance, I immediately tried it. I'm running 2.2.17pre4, and it patched cleanly. But bonnie++ is showing no change in read performance. I am using IDE drives, but they are on separate controllers (/dev/hda

RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Gregory Leblanc
> -Original Message- > From: Hugh Bragg [mailto:[EMAIL PROTECTED]] > Sent: Wednesday, June 21, 2000 5:04 AM > To: [EMAIL PROTECTED] > Subject: Re: Benchmarks, raid1 (was raid0) performance > > Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2 > i

Re: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Hugh Bragg
Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2 improves read performance right? At what cost? Can/Should I apply the raid1readbalance-2.2.15-B2 patch after applying mingo's raid-2.2.16-A0 patch? What version of raidtools should I use against a stock 2.2.16 system with

New Linux Based HIGH PERFORMANCE UltraATA Nas devices, 200GB to 1.8TB

2000-06-15 Thread Daniel Strumm
FYI! A whole new line of very low cost Linux Based NAS appliances feature filled. www.raidzone.com ___ Are you a Techie? Get Your Free Tech Email Address Now! Many to choose from! Visit http://www.TechEmail.com

RE: Benchmarks, raid1 (was raid0) performance

2000-06-14 Thread Gregory Leblanc
> -Original Message- > From: Jeff Hill [mailto:[EMAIL PROTECTED]] > Sent: Tuesday, June 13, 2000 1:26 PM > To: Gregory Leblanc > Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] > Subject: Re: Benchmarks, raid1 (was raid0) performance > > Gregory Leblanc wrote: > >

RE: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Gregory Leblanc
> -Original Message- > From: Jeff Hill [mailto:[EMAIL PROTECTED]] > Sent: Tuesday, June 13, 2000 3:56 PM > To: Gregory Leblanc > Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] > Subject: Re: Benchmarks, raid1 (was raid0) performance > > Gregory Leblanc wrote: > &

Re: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Jeff Hill
Gregory Leblanc wrote: > > I don't have anything that caliber to compare against, so I can't really > say. Should I assume that you don't have Mika's RAID1 read balancing patch? I have to admit I was ignorant of the patch (I had skimmed the archives, but not well enough). Searched the archive f

Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Henry J. Cobb
Bug1: Maybe I'm missing something here, why aren't reads just as fast as writes? The cynic in me suggests that the RAID driver has to wait for the information to be read off the disks, but it doesn't have to wait for the writes to complete before returning, but I haven't read the code. -HJC

Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bert hubert
On Tue, Jun 13, 2000 at 04:51:46AM +1000, bug1 wrote: > Maybe I'm missing something here, why aren't reads just as fast as writes? I note the same on a 2 way IDE RAID-1 device, with both disks on a separate bus. Regards, bert hubert -- | http://www.rent-a-ne

Re: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Jeff Hill
ng about IDE drives? Seems > quite possible that there aren't any single drives that are hitting this > speed, so it's only showing up with RAID. > Greg Is there any place where benchmark results are listed? I've finally gotten my RAID-1 running and am trying to se

RE: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Gregory Leblanc
> -Original Message- > From: bug1 [mailto:[EMAIL PROTECTED]] > Sent: Tuesday, June 13, 2000 10:39 AM > To: [EMAIL PROTECTED] > Cc: [EMAIL PROTECTED] > Subject: Re: Benchmarks, raid0 performance, 1,2,3,4 drives > > Ingo Molnar wrote: > > > > could

Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Scott M. Ransom
Hello, Just to let you know, I also see very similar IDE-RAID0 performance problems: I have RAID0 with two 30G DiamondMax (Maxtor) ATA-66 drives connected to a Promise Ultra66 controller. I am using kernel 2.4.0-test1

Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bug1
Ingo Molnar wrote: > > could you send me your /etc/raidtab? I've tested the performance of 4-disk > RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes. > (could you also send the 'hdparm -t /dev/md0' results, do you see a > degradation in those n

Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bug1
Adrian Head wrote: > > I have seen people complain about simular issues on the kernel mailing > list so maybe there is an actual kernel problem. > > What I have always wanted to know but haven't tested yet is to test raid > performance with and without the noatime att

RE: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Adrian Head
I have seen people complain about simular issues on the kernel mailing list so maybe there is an actual kernel problem. What I have always wanted to know but haven't tested yet is to test raid performance with and without the noatime attribute in /etc/fstab I think that when Linux re
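The noatime idea above is an /etc/fstab mount option; a hypothetical line for an md array (device, mount point, and fsck ordering are illustrative, not from the thread) would look like:

```
/dev/md0   /data   ext2   defaults,noatime   0   2
```

With noatime set, reads no longer trigger an inode access-time update, which is exactly the write traffic the poster suspects is costing read benchmarks.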

Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-12 Thread bug1
Ingo Molnar wrote: > > could you send me your /etc/raidtab? I've tested the performance of 4-disk > RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes. > (could you also send the 'hdparm -t /dev/md0' results, do you see a > degradation in those n

Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-12 Thread Ingo Molnar
could you send me your /etc/raidtab? I've tested the performance of 4-disk RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes. (could you also send the 'hdparm -t /dev/md0' results, do you see a degradation in those numbers as well?) it could either be some s
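For readers following along, a minimal /etc/raidtab of the kind Ingo is asking for might look like this (devices and chunk size are placeholders for illustration, not the poster's actual configuration):

```
raiddev /dev/md0
    raid-level            0
    nr-raid-disks         4
    persistent-superblock 1
    chunk-size            32
    device                /dev/hde1
    raid-disk             0
    device                /dev/hdf1
    raid-disk             1
    device                /dev/hdg1
    raid-disk             2
    device                /dev/hdh1
    raid-disk             3
```

`hdparm -t /dev/md0` then gives the raw sequential-read figure Ingo mentions.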

Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-12 Thread bug1
a single element in a raid0 array (of 1) seems to show raid adds a considerable overhead to read performance, but still reads aren't as fast as writes on hde5; this isn't a very practical benchmark anyway. Maybe I'm missing something here, why aren't reads just as fast as writes? 4-way raid0 (disk

Re: bonnie++ for RAID5 performance statistics

2000-06-12 Thread Marc SCHAEFER
James Manning <[EMAIL PROTECTED]> wrote: > [Gregory Leblanc] > > > [root@bod tiobench-0.3.1]# ./tiobench.pl --dir /raid5 > > > No size specified, using 200 MB > > > Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec > > > > Try making the size at least double that of ram. > Actually,

Re: bonnie++ for RAID5 performance statistics

2000-06-09 Thread James Manning
[Gregory Leblanc] > Sounds good, James, but Darren said that his machine had 256MB of ram. I > wouldn't have mentioned it, except that it wasn't using enough, I think. it tries to stat /proc/kcore currently. no procfs and it'll fail to get a good number... I've thought about other approaches, t

RE: bonnie++ for RAID5 performance statistics

2000-06-09 Thread Gregory Leblanc
> -Original Message- > From: James Manning [mailto:[EMAIL PROTECTED]] > Sent: Friday, June 09, 2000 12:46 PM > To: Gregory Leblanc > Cc: [EMAIL PROTECTED] > Subject: Re: bonnie++ for RAID5 performance statistics > > > [Gregory Leblanc] > > > [roo

Re: bonnie++ for RAID5 performance statistics

2000-06-09 Thread James Manning
[Gregory Leblanc] > > [root@bod tiobench-0.3.1]# ./tiobench.pl --dir /raid5 > > No size specified, using 200 MB > > Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec > > Try making the size at least double that of ram. Actually, I do exactly that, clamping at 200MB and 2000MB current
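The sizing rule under discussion (test file at least twice RAM, so the page cache cannot serve the reads) can be computed rather than guessed. A sketch reading /proc/meminfo (the thread notes tiobench currently stats /proc/kcore instead); the tiobench invocation and its --size flag are left commented as assumptions, since they need a real target directory:

```shell
# Derive a benchmark size of 2x physical RAM from /proc/meminfo.
mem_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
size_mb=$(( mem_kb * 2 / 1024 ))
echo "benchmark size: ${size_mb} MB"
# ./tiobench.pl --size "$size_mb" --dir /raid5   # flag and dir illustrative
```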

RE: bonnie++ for RAID5 performance statistics

2000-06-09 Thread Darren Evans
3:29 AM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: RE: bonnie++ for RAID5 performance statistics > -Original Message- > From: Darren Evans [mailto:[EMAIL PROTECTED]] > Sent: Wednesday, June 07, 2000 3:02 AM > To: [EMAIL PROTECTED] > Subject: bonnie++ for RAID5 pe

RE: bonnie++ for RAID5 performance statistics

2000-06-08 Thread Gregory Leblanc
> -Original Message- > From: Darren Evans [mailto:[EMAIL PROTECTED]] > Sent: Thursday, June 08, 2000 2:16 AM > To: Gregory Leblanc > Cc: [EMAIL PROTECTED] > Subject: RE: bonnie++ for RAID5 performance statistics > > Hi Greg, > > Yeah I know sorry about t

RE: bonnie++ for RAID5 performance statistics

2000-06-08 Thread Gregory Leblanc
> -Original Message- > From: Darren Evans [mailto:[EMAIL PROTECTED]] > Sent: Wednesday, June 07, 2000 3:02 AM > To: [EMAIL PROTECTED] > Subject: bonnie++ for RAID5 performance statistics > > I guess this kind of thing would be great to be detailed in the FAQ. Di

bonnie++ for RAID5 performance statistics

2000-06-07 Thread Darren Evans
I guess this kind of thing would be great to be detailed in the FAQ. Anyone care to swap statistics so I know how valid these are. This is with an Adaptec AIC-7895 Ultra SCSI host adapter. Is this good, reasonable or bad timing? [darren@bod bonnie++-1.00a]$ bonnie++ -d /raid5 -m bod -s 90mb W

Linux software RAID performance.

2000-06-06 Thread Torbjorn Olander
. Every drive got its own UATA66 channel. With kernel 2.3.47 read performance was about 37.5 MB/s and write at about 33 MB/s. The bad thing with this kernel was that the filesystem got corrupt.. :) Now I'm running 2.4.0-test1-ac8, now it reads with about 20 MB/s and writes at 30 MB/s, wha

RE: How to test raid5 performance best ?

2000-05-15 Thread Gregory Leblanc
> -Original Message- > From: octave klaba [mailto:[EMAIL PROTECTED]] > Sent: Monday, May 15, 2000 7:25 AM > To: Thomas Scholten > Cc: Linux Raid Mailingliste > Subject: Re: How to test raid5 performance best ? > > > 1. Which tools should i use to test raid-

Re: How to test raid5 performance best ?

2000-05-15 Thread octave klaba
Hi, > 1. Which tools should i use to test raid-performance? tiotest. I lost the official url; you can download it from http://ftp.ovh.net/tiotest-0.25.tar.gz > 2. is it possible to add disks to a raid5 after its been started ? good question ;) -- Amicalement, oCtAvE

How to test raid5 performance best ?

2000-05-15 Thread Thomas Scholten
Hello All, some days ago i joined the Software-Raid-Club :) I'm now running a SCSI-Raid5 with 3 2 GB partitions. I chose a chunk-size of 32 kb. Referring to the FAQ i'm told to experiment to find the best-performing chunk-size, but i definitely have no good clue how to test performance :-/

Re: performance limitations of linux raid

2000-05-05 Thread Christopher E. Brown
On Fri, 5 May 2000, Michael Robinton wrote: > > > > > > > > Not entirely, there is a fair bit more CPU overhead running an > > > > IDE bus than a proper SCSI one. > > > > > > A "fair" bit on a 500mhz+ processor is really negligible. > > > > > > Ehem, a fair bit on a 500Mhz CPU is

Re: performance limitations of linux raid

2000-05-05 Thread Mel Walters
ly writing to the 1.3, however it has to go through the raid layer as well. This tells me that appending drives (even if slower) to give more space doesn't affect performance (much) compared to the single drive. One more question I have is how do you tell how much cpu time something compiled

Re: performance limitations of linux raid

2000-05-05 Thread Michael Robinton
> > > > > > Not entirely, there is a fair bit more CPU overhead running an > > > IDE bus than a proper SCSI one. > > > > A "fair" bit on a 500mhz+ processor is really negligible. > > > Ehem, a fair bit on a 500Mhz CPU is ~ 30%. I have watched a > *single* UDMA66 drive (with read ahead

Re: performance limitations of linux raid

2000-05-05 Thread Christopher E. Brown
On Thu, 4 May 2000, Michael Robinton wrote: > > > > Not entirely, there is a fair bit more CPU overhead running an > > IDE bus than a proper SCSI one. > > A "fair" bit on a 500mhz+ processor is really negligible. Ehem, a fair bit on a 500Mhz CPU is ~ 30%. I have watched a *single*

RE: performance limitations of linux raid

2000-05-05 Thread Carruth, Rusty
> From: Gregory Leblanc [mailto:[EMAIL PROTECTED]] > > ..., that would suck up a lot more host CPU processing power than > the 3 SCSI channels that you'd need to get 12 drives and avoid bus >saturation. not to mention the obvious bus slot loading problem ;-) rc

RE: performance limitations of linux raid

2000-05-05 Thread Gregory Leblanc
> -Original Message- > From: Michael Robinton [mailto:[EMAIL PROTECTED]] > Sent: Thursday, May 04, 2000 10:31 PM > To: Christopher E. Brown > Cc: Chris Mauritz; bug1; [EMAIL PROTECTED] > Subject: Re: performance limitations of linux raid > > On Thu, 4 May 2000, Ch

RE: performance limitations of linux raid

2000-05-05 Thread Carruth, Rusty
(I really hate how Outlook makes you answer in FRONT of the message, what a dumb design...) Well, without spending the time I should thinking about my answer, I'll say there are many things which impact performance, most of which we've seen talked about here: 1 - how fast c

Re: performance limitations of linux raid

2000-05-04 Thread Michael Robinton
On Thu, 4 May 2000, Christopher E. Brown wrote: > On Wed, 3 May 2000, Michael Robinton wrote: > > > The primary limitation is probably the rotational speed of the disks and > > how fast you can rip data off the drives. For instance, the big IBM > > drives (20 - 40 gigs) have a limitation of ab

Re: performance limitations of linux raid

2000-05-04 Thread Bob Gustafson
I think the original answer was more to the point of Performance Limitation. The mechanical delays inherent in the disk rotation are much slower than the electronic or optical speeds in the connection between disk and computer. If you had a huge bank of semiconductor memory, or a huge cache or

Re: performance limitations of linux raid

2000-05-04 Thread Christopher E. Brown
On Wed, 3 May 2000, Michael Robinton wrote: > The primary limitation is probably the rotational speed of the disks and > how fast you can rip data off the drives. For instance, the big IBM > drives (20 - 40 gigs) have a limitation of about 27mbs for both the 7200 > and 10k rpm models. The Driv

RE: performance limitations of linux raid

2000-05-04 Thread Gregory Leblanc
> -Original Message- > From: Carruth, Rusty [mailto:[EMAIL PROTECTED]] > Sent: Thursday, May 04, 2000 8:36 AM > To: [EMAIL PROTECTED] > Subject: RE: performance limitations of linux raid > > > The primary limitation is probably the rotational speed of > the

Re: performance limitations of linux raid

2000-05-04 Thread phil
On Thu, May 04, 2000 at 08:35:52AM -0700, Carruth, Rusty wrote: > > > The primary limitation is probably the rotational speed of the disks and > > how fast you can rip data off the drives. For instance, ... > > Well, yeah, and so whatever happened to optical scsi? I heard that you > could ge

RE: performance limitations of linux raid

2000-05-04 Thread Carruth, Rusty
> The primary limitation is probably the rotational speed of the disks and > how fast you can rip data off the drives. For instance, ... Well, yeah, and so whatever happened to optical scsi? I heard that you could get 1 gbit/sec (or maybe gByte?) xfer, and you could go 1000 meters - or is thi

Re: performance limitations of linux raid

2000-05-03 Thread Michael Robinton
The primary limitation is probably the rotational speed of the disks and how fast you can rip data off the drives. For instance, the big IBM drives (20 - 40 gigs) have a limitation of about 27 MB/s for both the 7200 and 10k rpm models. The drives to come will have to make trade-offs between dens

Re: performance limitations of linux raid

2000-05-03 Thread Chris Mauritz
> From [EMAIL PROTECTED] Wed May 3 20:38:05 2000 > > Umm, I can get 13,000K/sec to/from ext2 from a *single* > UltraWide Cheetah (best case, *long* reads, no seeks). 100Mbit is only > 12,500K/sec. > > > A 4 drive UltraWide Cheetah array will top out an UltraWide bus > at 40MByte/sec
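The bus-saturation claim above is simple arithmetic; a sketch using the thread's own figures:

```shell
# Figures quoted in this thread: ~13,000 K/sec sustained from one
# UltraWide Cheetah, ~40 MByte/sec UltraWide bus ceiling.
per_drive_kbs=13000
bus_kbs=40000
drives=4
aggregate_kbs=$(( per_drive_kbs * drives ))
echo "aggregate: ${aggregate_kbs} K/sec, bus ceiling: ${bus_kbs} K/sec"
if [ "$aggregate_kbs" -gt "$bus_kbs" ]; then
    echo "bus-limited: four such drives saturate one UltraWide channel"
fi
```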

Re: performance limitations of linux raid

2000-05-03 Thread Christopher E. Brown
On Sun, 23 Apr 2000, Chris Mauritz wrote: > > I wonder what the fastest speed any linux software raid has gotten, it > > would be great if the limitation was a hardware limitation i.e. cpu, > > (scsi/ide) interface speed, number of (scsi/ide) interfaces, drive > > speed. It would be interesting t

: Re: performance limitations of linux raid

2000-05-02 Thread john kidd
If you are looking for one of the highest performance systems we have ever seen, visit www.raidzone.com. These are real systems. John

Re: performance limitations of linux raid

2000-05-01 Thread Clay Claiborne
More notes on the 8 IDE drive raid5 system I built, and the 3ware controller. Edwin Hakkennes wrote: May I ask how these ide-ports and the attached disks show up under Redhat 6.2? Are they just standard ATA33 or ATA66 controllers which can be used in software raid? Or is only the sum of the atta

Re: RAID-1 Performance

2000-04-27 Thread Peter Palfrader
Hi! Somebody uttered around Apr 27 2000: > > Thanks - where can I find the archive? > > > One is at http://kernelnotes.org/lnxlists/linux-raid/ > > > Also - is it a pretty stable patch? (This is a production server) > > > I don't know, sorry. I'm running 2.2.14 with mingos raid patch and th

Re: RAID-1 Performance

2000-04-27 Thread Holger Kiehl
On Thu, 27 Apr 2000, Corin Hartland-Swann wrote: > > Holger, > > On Thu, 27 Apr 2000, Holger Kiehl wrote: > > On Thu, 27 Apr 2000, Corin Hartland-Swann wrote: > > > I was hoping that RAID-1 would 'stripe' reads between the disks, > > > inc

Re: RAID-1 Performance

2000-04-27 Thread Mika Kuoppala
On Thu, 27 Apr 2000, Corin Hartland-Swann wrote: > > > BTW, I was pleased to discover that Linux had absolutely no problems with > Ultra160-SCSI - 96MB/s isn't bad at all, is it? > > I was hoping that RAID-1 would 'stripe' reads between the disks, > i

Re: RAID-1 Performance

2000-04-27 Thread Corin Hartland-Swann
Holger, On Thu, 27 Apr 2000, Holger Kiehl wrote: > On Thu, 27 Apr 2000, Corin Hartland-Swann wrote: > > I was hoping that RAID-1 would 'stripe' reads between the disks, > > increasing read performance to RAID-0 levels, but leaving write > > performance at singl

Re: RAID-1 Performance

2000-04-27 Thread Holger Kiehl
On Thu, 27 Apr 2000, Corin Hartland-Swann wrote: > I was hoping that RAID-1 would 'stripe' reads between the disks, > increasing read performance to RAID-0 levels, but leaving write > performance at single-disk levels. Does anyone know why it doesn't do > this? >

RAID-1 Performance

2000-04-27 Thread Corin Hartland-Swann
'safe' copy on sdd2. I'd like the /home partition to be as fast as possible, so I thought that RAID 0+1 was a good solution (giving approximately quadruple read and double write performance). After experimenting, I got the following results: Read (MB/s)

Re: performance limitations of linux raid

2000-04-26 Thread Michael
> 2 Masters from the ASUS P3C2000 CDU Board, 2 Masters on a CMD648 > based PCI controller and 4 Masters on a 3ware 4 port RAID > Controller. I don't use the 3ware as a RAID controller though, just > as a very good 4 channel Ultra66 board. They also make an 8 channel > board, and that's what I'm go

Re: performance limitations of linux raid

2000-04-26 Thread Clay Claiborne
The coolest guy you know wrote: > Clay Claiborne wrote: > > > > For what its worth, we recently built an 8 ide drive 280GB raid5 system. > > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With > > DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to

Re: performance limitations of linux raid

2000-04-25 Thread Jure Pečar
> In the last 24 hours I've been getting them when e2fsck runs after > rebooting. Usual cause of rebooting is an irq causing a lockup, or endlessly > looping trying to get an irq. > > I'm convinced it's my hpt366 controller, I've mentioned my problem in a few > channels, no luck yet. > > I used

Re: performance limitations of linux raid

2000-04-25 Thread Scott M. Ransom
> What stripe size, CPU and memory is used here? System is a dual-cpu PII 450Mhz with 256MB RAM. Disks are configured with chunk-size of 32kb (ext2 block-size is 4kb). > Is this a dual CPU system perhaps? Something

Re: performance limitations of linux raid

2000-04-25 Thread bug1
remo strotkamp wrote: > > bug1 wrote: > > > > Clay Claiborne wrote: > > > > > > For what its worth, we recently built an 8 ide drive 280GB raid5 system. > > > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With > > > DBENCH and 1 client we got 44.5 MB/sec with 3 clients

Re: performance limitations of linux raid

2000-04-25 Thread Drake Diedrich
227 98.3 47754 33.0 182.8 1.5 > ** > > When doing _actual_ work (I/O bound reads on huge data sets), I often > see sustained read performance as high as 50MB/s. > > Tests on the individual drives show 28+ MB/s. What stripe size, CPU and memory is used here? I hav

Re: performance limitations of linux raid

2000-04-25 Thread Daniel Roesen
On Tue, Apr 25, 2000 at 11:38:59PM +0100, Paul Jakma wrote: > > Clue: this is the way every RAID controller I know of works these days. > what??? Do you know what are you are talking about? Yep, I think so. I think I misunderstood you ("getting called by BIOS"). > a REAL raid contr

RE: performance limitations of linux raid

2000-04-25 Thread Paul Jakma
On Tue, 25 Apr 2000, Gregory Leblanc wrote: Then you've never used a RAID card. I've got a number of RAID cards here, 2 from compaq, 1 from DPT, and another from HP (really AMI), and all of them implement RAID functions like striping, double writes (mirroring), and parity calculations fo

Re: performance limitations of linux raid

2000-04-25 Thread Paul Jakma
On Wed, 26 Apr 2000, Daniel Roesen wrote: Clue: this is the way every RAID controller I know of works these days. what??? Do you know what are you are talking about? hey, i've got some $1000 raid cards for you. (my markup is $980). a REAL raid controller is a *complete computer*

RE: performance limitations of linux raid

2000-04-25 Thread Gregory Leblanc
> -Original Message- > From: Daniel Roesen [mailto:[EMAIL PROTECTED]] > Sent: Tuesday, April 25, 2000 3:07 PM > To: [EMAIL PROTECTED] > Subject: Re: performance limitations of linux raid > > > On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote: > &

Re: performance limitations of linux raid

2000-04-25 Thread Daniel Roesen
On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote: > Clue: the Promise IDE RAID controller is NOT a hardware RAID > controller. > > Promise IDE RAID == Software RAID where the software is written by > Promise and sitting on the ROM on the Promise card getting called by > the BIOS. Clue:

Re: performance limitations of linux raid

2000-04-25 Thread Paul Jakma
On Mon, 24 Apr 2000, Frank Joerdens wrote: I've been toying with the idea of getting one of those for a while, but there doesn't seem to be a linux driver for the FastTrack66 (the RAID card), only for the Ultra66 (the not-hacked IDE controller), and that driver has only 'Experimental' sta

Re: performance limitations of linux raid

2000-04-25 Thread remo strotkamp
bug1 wrote: > > Clay Claiborne wrote: > > > > For what its worth, we recently built an 8 ide drive 280GB raid5 system. > > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With > > DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about > > 43MB/sec.

Re: performance limitations of linux raid

2000-04-24 Thread Bill Anderson
bug1 wrote: > > > > > I don't believe the specs either, because they are for the "ideal" case. > > However, I think that either your benchmark is flawed, or you've got a > > crappy controller. I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in > > a machine at home, and I can easily read at

Re: performance limitations of linux raid

2000-04-24 Thread Bill Anderson
's SCSI that can do that ;) The point is, comparing speed of SCSI vs any IDE variant is like comparing apples and oranges. That said, comparing two drives of any variant, and basing their performance upon the rotational speed is also an error. RPMs are not the sole determining factor. Other factors in

Re: performance limitations of linux raid

2000-04-24 Thread Seth Vidal
> A 7200RPM IDE drive is faster than a 5400RPM SCSI drive and a 10K RPM > SCSI drive is faster than a 7200RPM drive. > > If you have two 7200RPM drives, one scsi and one ide, each on their own > channel, then they should be about the same speed. > Not entirely true - the DMA capabilities of ID

Re: performance limitations of linux raid

2000-04-24 Thread bug1
Clay Claiborne wrote: > > For what its worth, we recently built an 8 ide drive 280GB raid5 system. > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With > DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about > 43MB/sec. > The system is a 600Mhz

Re: performance limitations of linux raid

2000-04-24 Thread bug1
> > I don't believe the specs either, because they are for the "ideal" case. > However, I think that either your benchmark is flawed, or you've got a > crappy controller. I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in > a machine at home, and I can easily read at 7MB/sec from it under S

Re: performance limitations of linux raid

2000-04-24 Thread Chris Bondy
On Mon, 24 Apr 2000, Clay Claiborne wrote: > For what its worth, we recently built an 8 ide drive 280GB raid5 system. > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With > DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about > 43MB/sec. > Th

RE: performance limitations of linux raid

2000-04-24 Thread Gregory Leblanc
> -Original Message- > From: Scott M. Ransom [mailto:[EMAIL PROTECTED]] > Sent: Monday, April 24, 2000 6:13 PM > To: [EMAIL PROTECTED] > Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; Gregory Leblanc; bug1 > Subject: RE: performance limitations of linux raid > >

RE: performance limitations of linux raid

2000-04-24 Thread Scott M. Ransom
---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
6833 99.2 42532 44.4 18397 42.2 7227 98.3 47754 33.0 182.8 1.5 * *

Re: performance limitations of linux raid

2000-04-24 Thread Clay Claiborne
For what it's worth, we recently built an 8 ide drive 280GB raid5 system. Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about 43MB/sec. The system is a 600Mhz P-3 on an ASUS P3C2000 with 256MB of
