RE: Benchmarks, raid1 (was raid0) performance

2000-06-23 Thread Gregory Leblanc

> -Original Message-
> From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 23, 2000 12:36 AM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
> 
[snip]
> > > What version of raidtools should I use against a stock 2.2.16
> > > system with raid-2.2.16-A0 patch running raid1?
> > 
> > The 0.90 ones.  I think that Ingo has some tools in the 
> same place as the
> > patches, those should be the right tools.  I'll bet that 
> the Software-RAID
> > HOWTO tells where to get the latest tools.  You can find it at
> > http://www.LinuxDoc.org/
> > Greg
> 
> I think you mean the only raid tools there,
> people.redhat.com/mingo/raid-patches/raidtools-dangerous-0.90-2116.tar.gz?
> 
> I'm a bit sceptical about using something that's labelled dangerous.
> What is so dangerous about it, and is it any more likely to break
> something than the standard raid tools that ship with RH 6.2?

Nah, RedHat ships with a variant of these tools.  You could probably
(somebody check me here) use the tools that ship with RH6.2 and have them
work just fine.
Greg



Re: Benchmarks, raid1 (was raid0) performance

2000-06-23 Thread Hugh Bragg

Gregory Leblanc wrote:
> 
> > -Original Message-
> > From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
> > Sent: Wednesday, June 21, 2000 5:04 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: Benchmarks, raid1 (was raid0) performance
> >
> > Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
> > improves read performance right? At what cost?
> 
> Only the cost of patching your kernel, I think.  This patch does some nifty
> tricks to help pick which disk to read data from, and will double the read
> rates from RAID 1, assuming that you don't saturate the bus.
> 
> > Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
> > applying mingo's raid-2.2.16-A0 patch?
> 
> I don't see any reason not to apply it, although I haven't tried it with
> 2.2.16.
> 

OK, thanks, I will try this.

> > What version of raidtools should I use against a stock 2.2.16
> > system with raid-2.2.16-A0 patch running raid1?
> 
> The 0.90 ones.  I think that Ingo has some tools in the same place as the
> patches, those should be the right tools.  I'll bet that the Software-RAID
> HOWTO tells where to get the latest tools.  You can find it at
> http://www.LinuxDoc.org/
> Greg

I think you mean the only raid tools there,
people.redhat.com/mingo/raid-patches/raidtools-dangerous-0.90-2116.tar.gz?

I'm a bit sceptical about using something that's labelled dangerous.
What is so dangerous about it, and is it any more likely to break
something than the standard raid tools that ship with RH 6.2?

Hugh.



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)

: Look at Bonnie's seek performance. It should rise.
: For single sequential reads, the read balancer doesn't help.
: Bonnie tests only single sequential reads.
: 
: If you want to test with multiple I/O threads, try
: http://tiobench.sourceforge.net

Great, thanks, I'll give this a try!



Re: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Mika Kuoppala

On Wed Jun 21 2000 at 12:46:02 -0500, Diegmueller, Jason (I.T. Dept) wrote:
> : > Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
> : > applying mingo's raid-2.2.16-A0 patch?
> : 
> : I don't see any reason not to apply it, although I haven't 
> : tried it with 2.2.16.
> 
> I have been out of the linux-raid world for a bit, but a 
> two-drive RAID1 installation yesterday has brought me back.  
> Naturally, when I saw mention of raid1readbalance, I immediately
> tried it.
> 
> I'm running 2.2.17pre4, and it patched cleanly.  But bonnie++
> is showing no change in read performance.  I am using IDE drives,
> but they are on separate controllers (/dev/hda, and /dev/hdc) 
> with both drives configured as masters.
> 
> Anyone have any tricks up their sleeves?

Look at Bonnie's seek performance. It should rise.
For single sequential reads, the read balancer doesn't help.
Bonnie tests only single sequential reads.

If you want to test with multiple I/O threads, try
http://tiobench.sourceforge.net

-- Mika
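
For reference, the invocation used elsewhere in this thread already covers the
multi-threaded case. A sketch, assuming tiobench.pl is run from a directory on
the array and that the data set comfortably exceeds RAM:

  # run from a directory on the filesystem that lives on the md device;
  # -s is the data set size in MB, and the script reports rows for
  # 1, 2, 4 and 8 threads, as in the tables further down this thread
  cd /mnt/md0
  ./tiobench.pl -s=800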



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)

: None offhand, but can you post your test configuration/parameters?
: Things like test size, relevant portions of /etc/raidtab, things
: like that.  I know there should be a whole big list, but I can't think
: of all of them right now.  FYI, I don't do IDE RAID (or IDE at all),
: but it's pretty awesome on SCSI.

Yes, I'll see if I can't whip all that together tonight.

I do like the SCSI/Software-RAID on Linux setup.  I've got two servers
for old clients at my last job still running Software-RAID5 on an HP
Netserver LXe Pro (one of those has 230+ days of uptime).

Nice and quick, stable, and I haven't had to endure an actual drive failure
yet... but simulated failures have worked wonderfully.



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Gregory Leblanc

> -Original Message-
> From: Diegmueller, Jason (I.T. Dept) [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 21, 2000 10:46 AM
> To: 'Gregory Leblanc'; 'Hugh Bragg'; [EMAIL PROTECTED]
> Subject: RE: Benchmarks, raid1 (was raid0) performance
> 
> : > Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
> : > applying mingo's raid-2.2.16-A0 patch?
> : 
> : I don't see any reason not to apply it, although I haven't 
> : tried it with 2.2.16.
> 
> I have been out of the linux-raid world for a bit, but a 
> two-drive RAID1 installation yesterday has brought me back.  
> Naturally, when I saw mention of raid1readbalance, I immediately
> tried it.
> 
> I'm running 2.2.17pre4, and it patched cleanly.  But bonnie++
> is showing no change in read performance.  I am using IDE drives,
> but they are on separate controllers (/dev/hda, and /dev/hdc) 
> with both drives configured as masters.
> 
> Anyone have any tricks up their sleeves?

None offhand, but can you post your test configuration/parameters?  Things
like test size, relevant portions of /etc/raidtab, things like that.  I know
there should be a whole big list, but I can't think of all of them right now.
FYI, I don't do IDE RAID (or IDE at all), but it's pretty awesome on SCSI.
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Diegmueller, Jason (I.T. Dept)

: > Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
: > applying mingo's raid-2.2.16-A0 patch?
: 
: I don't see any reason not to apply it, although I haven't 
: tried it with 2.2.16.

I have been out of the linux-raid world for a bit, but a 
two-drive RAID1 installation yesterday has brought me back.  
Naturally, when I saw mention of raid1readbalance, I immediately
tried it.

I'm running 2.2.17pre4, and it patched cleanly.  But bonnie++
is showing no change in read performance.  I am using IDE drives,
but they are on separate controllers (/dev/hda, and /dev/hdc) 
with both drives configured as masters.

Anyone have any tricks up their sleeves?



RE: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Gregory Leblanc

> -Original Message-
> From: Hugh Bragg [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 21, 2000 5:04 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
> 
> Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
> improves read performance right? At what cost?

Only the cost of patching your kernel, I think.  This patch does some nifty
tricks to help pick which disk to read data from, and will double the read
rates from RAID 1, assuming that you don't saturate the bus.  

> Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
> applying mingo's raid-2.2.16-A0 patch?

I don't see any reason not to apply it, although I haven't tried it with
2.2.16.

> What version of raidtools should I use against a stock 2.2.16
> system with raid-2.2.16-A0 patch running raid1?

The 0.90 ones.  I think that Ingo has some tools in the same place as the
patches, those should be the right tools.  I'll bet that the Software-RAID
HOWTO tells where to get the latest tools.  You can find it at
http://www.LinuxDoc.org/
Greg
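
Pulling the pieces of this thread together, the sequence being discussed would
look roughly like the following. This is only a sketch: the patch level (-p1
vs. -p0), the source directory, and the raidtools build steps are assumptions
based on the filenames mentioned above, not verified instructions.

  cd /usr/src/linux
  # Ingo's RAID patch first, then Mika's read-balance patch on top of it
  patch -p1 < ../raid-2.2.16-A0
  patch -p1 < ../raid1readbalance-2.2.15-B2
  make menuconfig       # enable RAID-1 under "Multiple devices driver support"
  make dep bzImage modules modules_install

  # the matching 0.90 raidtools, from the same place as the patches
  tar xzf raidtools-dangerous-0.90-2116.tar.gz
  cd raidtools-0.90     # the unpacked directory name may differ
  ./configure && make && make install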



Re: Benchmarks, raid1 (was raid0) performance

2000-06-21 Thread Hugh Bragg

Patch http://www.icon.fi/~mak/raid1/raid1readbalance-2.2.15-B2
improves read performance right? At what cost?

Can/Should I apply the raid1readbalance-2.2.15-B2 patch after
applying mingo's raid-2.2.16-A0 patch?

What version of raidtools should I use against a stock 2.2.16
system with raid-2.2.16-A0 patch running raid1?

Hugh.



RE: Benchmarks, raid1 (was raid0) performance

2000-06-14 Thread Gregory Leblanc

> -Original Message-
> From: Jeff Hill [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 1:26 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
> 
> Gregory Leblanc wrote:
> 
> >>--snip--<<
> > > I conclude that on my system there is an ide saturation point (or
> > > bottleneck) around 40MB/s
> > Didn't the LAND5 people think that there was a bottleneck 
> around 40MB/Sec at
> > some point?  Anybody know if they were talking about IDE 
> drives?  Seems
> > quite possible that there aren't any single drives that are 
> hitting this
> > speed, so it's only showing up with RAID.
> > Greg
> 
> 
> Is there any place where benchmark results are listed?

Not that I know of.  Is there any interest in having these online in some
biggass database?  Assuming that I can manage it, I'll have a server online,
running some SQL server, by next weekend.  I could put things in there,
and provide some basic SQL type passthru from the web.

> I've finally
> gotten my RAID-1 running and am trying to see if the 
> performance is what
> I should expect or if there is some other issue:
> 
> Running "hdparm -t /dev/md0" a few times:
> 
>  Timing buffered disk reads:  64 MB in  3.03 seconds = 21.12 MB/sec
>  Timing buffered disk reads:  64 MB in  2.65 seconds = 24.15 MB/sec
>  Timing buffered disk reads:  64 MB in  3.21 seconds = 19.94 MB/sec

My understanding has always been that hdparm was on crack as far as speed
went.  I've never really taken the time to check, since tiobench does a
beautiful job for what I need, and because tiobench is CONSISTENT.

> And bonnie:
>   ---Sequential Output ---Sequential Input-- --Random--
>   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>   800  5402 90.9 13735 13.7  7223 15.0  5502 85.0 14062  8.9 316.7  2.8
> 
> 
> I had expected better performance with the system: Adaptec 
> 2940U2W with
> 2x Seagate Cheetah (LVD) 9.1G drives; single PII 400Mhz; 
> 512MB ECC RAM;
> ASUS P3B-F 100Mhz.

I don't have anything that caliber to compare against, so I can't really
say.  Should I assume that you don't have Mika's RAID1 read balancing patch?

> I have to say the RAID-1 works very well in my crash tests, and that's
> the most important thing.

Yep!  Although speed is the biggest reason that I can see for using Software
RAID over hardware.  Next comes price.  
Greg



RE: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Gregory Leblanc

> -Original Message-
> From: Jeff Hill [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 3:56 PM
> To: Gregory Leblanc
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid1 (was raid0) performance
> 
> Gregory Leblanc wrote:
> > 
> > I don't have anything that caliber to compare against, so I 
> can't really
> > say.  Should I assume that you don't have Mika's RAID1 read 
> balancing patch?
> 
> I have to admit I was ignorant of the patch (I had skimmed 
> the archives,
> but not well enough). Searched the archive further, found it, 
> patched it
> into 2.2.16-RAID.
> 
> However, how nervous should I be putting it on a production server?
> Mika's note says 'experimental'. This is my main production 
> server and I
> don't have a development machine currently capable of testing RAID1 on
> (and even then, the development machine can never get the 
> same drubbing
> as production). 

I've got it on the machines that I have running RAID in production.  I'm not
aware of any "issues" with the patch, but I'm waiting for pre-releases of 2.4
to stabilize (on SPARC-32, mostly) before I start really reefing on things.
Ingo just posted something saying that the 2.4 code has Mika's patch
integrated, along with some cleanup.  Later,
Greg



Re: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Jeff Hill

Gregory Leblanc wrote:
> 
> I don't have anything that caliber to compare against, so I can't really
> say.  Should I assume that you don't have Mika's RAID1 read balancing patch?

I have to admit I was ignorant of the patch (I had skimmed the archives,
but not well enough). Searched the archive further, found it, patched it
into 2.2.16-RAID.

However, how nervous should I be putting it on a production server?
Mika's note says 'experimental'. This is my main production server and I
don't have a development machine currently capable of testing RAID1 on
(and even then, the development machine can never get the same drubbing
as production). 

That said, it looks like the patch has an impact (although I'm not
familiar with tiobench):

tiobench results before Mika's patch:

 File   Block  Num  Seq Read    Rand Read   Seq Write   Rand Write
  Dir   Size   Size Thr Rate (CPU%) Rate (CPU%) Rate (CPU%) Rate (CPU%)
------- ------ ---- --- ----------- ----------- ----------- -----------
   .    1024   4096  1  11.73 7.35% 1.008 1.54% 10.63 11.2% 1.452 11.8%
   .    1024   4096  2  12.65 7.78% 1.072 1.44% 10.15 10.5% 1.397 12.5%
   .    1024   4096  4  12.95 8.08% 1.177 1.70% 9.671 9.95% 1.393 12.6%
   .    1024   4096  8  12.79 8.45% 1.273 1.85% 9.344 9.89% 1.377 12.8%


tiobench results after Mika's patch:

 File   Block  Num  Seq Read    Rand Read   Seq Write   Rand Write
  Dir   Size   Size Thr Rate (CPU%) Rate (CPU%) Rate (CPU%) Rate (CPU%)
------- ------ ---- --- ----------- ----------- ----------- -----------
   .    1024   4096  1  22.83 14.9% 1.035 0.86% 10.97 11.2% 1.416 13.5%
   .    1024   4096  2  26.66 18.7% 1.263 1.21% 10.42 10.6% 1.395 11.6%
   .    1024   4096  4  27.74 20.2% 1.349 1.20% 9.795 10.0% 1.395 12.2%
   .    1024   4096  8  24.69 20.8% 1.475 1.46% 9.262 9.82% 1.388 12.0%


Thanks for the help.

Jeff Hill
 
> > I have to say the RAID-1 works very well in my crash tests, and that's
> > the most important thing.
> 
> Yep!  Although speed is the biggest reason that I can see for using Software
> RAID over hardware.  Next comes price.
> Greg

-- 

--  HR On-Line:  The Network for Workplace Issues --
http://www.hronline.com - Ph:416-604-7251 - Fax:416-604-4708




Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Henry J. Cobb

bug1: Maybe I'm missing something here, why aren't reads just as fast as writes?

The cynic in me suggests that the RAID driver has to wait for the
information to be read off the disks, but it doesn't have to wait for the
writes to complete before returning; I haven't read the code, though.

-HJC




Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bert hubert

On Tue, Jun 13, 2000 at 04:51:46AM +1000, bug1 wrote:

> Maybe I'm missing something here, why aren't reads just as fast as writes?

I note the same on a 2-way IDE RAID-1 device, with each disk on a separate
bus.

Regards,

bert hubert

-- 
   |  http://www.rent-a-nerd.nl
   | - U N I X -
   |  Inspice et cautus eris - D11T'95



Re: Benchmarks, raid1 (was raid0) performance

2000-06-13 Thread Jeff Hill

Gregory Leblanc wrote:

>>--snip--<<
> > I conclude that on my system there is an ide saturation point (or
> > bottleneck) around 40MB/s
> Didn't the LAND5 people think that there was a bottleneck around 40MB/Sec at
> some point?  Anybody know if they were talking about IDE drives?  Seems
> quite possible that there aren't any single drives that are hitting this
> speed, so it's only showing up with RAID.
> Greg


Is there any place where benchmark results are listed? I've finally
gotten my RAID-1 running and am trying to see if the performance is what
I should expect or if there is some other issue:

Running "hdparm -t /dev/md0" a few times:

 Timing buffered disk reads:  64 MB in  3.03 seconds = 21.12 MB/sec
 Timing buffered disk reads:  64 MB in  2.65 seconds = 24.15 MB/sec
 Timing buffered disk reads:  64 MB in  3.21 seconds = 19.94 MB/sec

And bonnie:
  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  800  5402 90.9 13735 13.7  7223 15.0  5502 85.0 14062  8.9 316.7  2.8


I had expected better performance with the system: Adaptec 2940U2W with
2x Seagate Cheetah (LVD) 9.1G drives; single PII 400Mhz; 512MB ECC RAM;
ASUS P3B-F 100Mhz.

I have to say the RAID-1 works very well in my crash tests, and that's
the most important thing.

Sorry for taking this off the original thread.

Regards,

Jeff Hill



RE: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Gregory Leblanc

> -Original Message-
> From: bug1 [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 13, 2000 10:39 AM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: Benchmarks, raid0 performance, 1,2,3,4 drives
> 
> Ingo Molnar wrote:
> > 
> > could you send me your /etc/raidtab? I've tested the 
> performance of 4-disk
> > RAID0 on SCSI, and it scales perfectly here, as far as 
> hdparm -t goes.
> > (could you also send the 'hdparm -t /dev/md0' results, do you see a
> > degradation in those numbers as well?)
> > 
> > it could either be some special thing in your setup, or an IDE+RAID
> > performance problem.
> > 
> > Ingo
> 
> I think it might be an IDE bottleneck.
> 
> If I use dd to read 800MB from each of my drives individually, the speeds
> I get are
> 
> hde=22MB/s
> hdg=22MB/s
> hdi=18MB/s
> hdk=20MB/s
> 
> If I do the same tests simultaneously, I get 10MB/s from each of the four
> drives.
> If I do the same test on just hde, hdg and hdk, I get 13MB/s from each of
> the three drives.
> If I do it on hde and hdg, I get 18MB/s from each (both IDE channels on
> one card).
> On hdi and hdk I get 15MB/s.
> 
> I conclude that on my system there is an IDE saturation point (or
> bottleneck) around 40MB/s.

Didn't the LAND5 people think that there was a bottleneck around 40MB/Sec at
some point?  Anybody know if they were talking about IDE drives?  Seems
quite possible that there aren't any single drives that are hitting this
speed, so it's only showing up with RAID.
Greg



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Scott M. Ransom

 

Hello,
Just to let you know, I also see very similar IDE-RAID0 performance
problems:

I have RAID0 with two 30G DiamondMax (Maxtor) ATA-66 drives connected to
a Promise Ultra66 controller.

I am using kernel 2.4.0-test1-ac15+B5 raid on a dual PII-450 with 256M
RAM (but I have seen the same problems on all of the 2.3.XX series).

Here are the results from bonnie:

 ---Sequential Output ---Sequential Input-- --Random--
 -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
1200  6813 98.2 41157 42.0 10101 25.9  5205 78.9 14890 27.3 137.8  1.8

Here are the results from hdparm on my drives (just showing one because
they are identical):

/dev/hde:

 Model=Maxtor 53073U6, FwRev=DA620CQ0, SerialNo=K604F9MC
 Config={ Fixed }
 RawCHS=4092/16/63, TrkSize=0, SectSize=0, ECCbytes=57
 BuffType=DualPortCache, BuffSize=2048kB, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=60030432
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4 
 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 *udma4 

/dev/hde:
 multcount=  0 (off)
 I/O support  =  0 (default 16-bit)
 unmaskirq=  0 (off)
 using_dma=  1 (on)
 keepsettings =  0 (off)
 nowerr   =  0 (off)
 readonly =  0 (off)
 readahead=  8 (on)
 geometry = 59554/16/63, sectors = 60030432, start = 0

Performance using 2.2.16 with raid+IDE patches gives very good
performance:

 ---Sequential Output ---Sequential Input-- --Random--
 -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
1200  6813 98.9 40923 40.7 15584 35.3  7154 97.6 40258 26.6 151.5  1.7

Here is my raidtab:

raiddev /dev/md0
  raid-level0
  nr-raid-disks 2
  persistent-superblock 1
  chunk-size32
  device/dev/hdg2
  raid-disk 0
  device/dev/hde2
  raid-disk 1

And here are single-disk and md0 performance tests using hdparm -tT:

/dev/hde:
 Timing buffer-cache reads:   128 MB in  1.13 seconds =113.27 MB/sec
 Timing buffered disk reads:  64 MB in  2.46 seconds = 26.02 MB/sec

/dev/md0:
 Timing buffer-cache reads:   128 MB in  0.94 seconds =136.17 MB/sec
 Timing buffered disk reads:  64 MB in  1.66 seconds = 38.55 MB/sec

Hope this helps sort the matter out...

Scott

-- 
Scott M. Ransom   Address:  Harvard-Smithsonian CfA
Phone:  (617) 495-4142  60 Garden St.  MS 10 
email:  [EMAIL PROTECTED]  Cambridge, MA  02138
PGP Fingerprint: D2 0E D0 10 CD 95 06 DA  EF 78 FE 2B CB 3A D3 53
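
As an aside on the hdparm settings above (MultSect off, 16-bit I/O): the usual
IDE tuning knobs look like the following. Whether they help, or are even safe,
depends on the drive and chipset, so treat this as a sketch and test on
scratch data first.

  # 16-sector multcount, 32-bit I/O, unmask IRQs during disk I/O, keep DMA on
  hdparm -m16 -c1 -u1 -d1 /dev/hde
  hdparm -m16 -c1 -u1 -d1 /dev/hdg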



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bug1

Ingo Molnar wrote:
> 
> could you send me your /etc/raidtab? I've tested the performance of 4-disk
> RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
> (could you also send the 'hdparm -t /dev/md0' results, do you see a
> degradation in those numbers as well?)
> 
> it could either be some special thing in your setup, or an IDE+RAID
> performance problem.
> 
> Ingo

I think it might be an IDE bottleneck.

If I use dd to read 800MB from each of my drives individually, the speeds
I get are

hde=22MB/s
hdg=22MB/s
hdi=18MB/s
hdk=20MB/s

If I do the same tests simultaneously, I get 10MB/s from each of the four
drives.
If I do the same test on just hde, hdg and hdk, I get 13MB/s from each of
the three drives.
If I do it on hde and hdg, I get 18MB/s from each (both IDE channels on
one card).
On hdi and hdk I get 15MB/s.

I conclude that on my system there is an IDE saturation point (or
bottleneck) around 40MB/s.

But the same thing happens under 2.2, so it doesn't explain the
performance difference between 2.2 and 2.[34]; there must be another
bottleneck somewhere as well.  Back to the drawing board, I guess.



Glenn
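
A sketch of the kind of test Glenn describes, with device names taken from his
setup. bs and count give roughly 800MB per read; since dd of this vintage does
not print a transfer rate, divide 800MB by the elapsed time:

  # one drive at a time
  for d in hde hdg hdi hdk; do
      time dd if=/dev/$d of=/dev/null bs=1024k count=800
  done

  # all four at once, to look for a shared bus/driver ceiling
  time (
      for d in hde hdg hdi hdk; do
          dd if=/dev/$d of=/dev/null bs=1024k count=800 &
      done
      wait
  )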



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread bug1

Adrian Head wrote:
> 
> I have seen people complain about similar issues on the kernel mailing
> list, so maybe there is an actual kernel problem.
> 
> What I have always wanted to know, but haven't tested yet, is how RAID
> performance compares with and without the noatime attribute in /etc/fstab.
> I think that when Linux reads a file it writes the time the file was
> accessed, whereas a write is just a write.  I expect that for benchmarks
> this would not affect results a lot, since SCSI systems would have the
> same overhead - but some people seem to swear by it for news servers and
> the like.
> 
> Am I off track?
> 

Um, this would affect benchmarks that use a filesystem.  If you do a dd
if=/dev/hda of=/dev/null it doesn't go through the filesystem, so I would
guess that the atime isn't updated and it wouldn't be an overhead.

hdparm -t works below the filesystem level as well; it's just data, it
doesn't make sense of it, as far as I know.

Glenn
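
For the filesystem-level benchmarks (bonnie, tiobench), the atime overhead
Adrian asks about can be taken out of the picture with noatime. A sketch; the
device, mount point and filesystem type are illustrative:

  # /etc/fstab entry
  /dev/md0   /mnt/raid   ext2   defaults,noatime   0   2

  # or flip it on an already-mounted filesystem without a reboot
  mount -o remount,noatime /mnt/raid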



RE: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-13 Thread Adrian Head

I have seen people complain about similar issues on the kernel mailing
list, so maybe there is an actual kernel problem.

What I have always wanted to know, but haven't tested yet, is how RAID
performance compares with and without the noatime attribute in /etc/fstab.
I think that when Linux reads a file it writes the time the file was
accessed, whereas a write is just a write.  I expect that for benchmarks
this would not affect results a lot, since SCSI systems would have the
same overhead - but some people seem to swear by it for news servers and
the like.

Am I off track?

Adrian Head



> -Original Message-
> From: bug1 [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, 13 June 2000 04:52
> To:   [EMAIL PROTECTED]
> Cc:   Ingo Molnar
> Subject:  Benchmarks, raid0 performance, 1,2,3,4 drives
> 
> Here are some more benchmarks for raid0 with different numbers of
> elements, all tests done with tiobench.pl -s=800
> 
[Adrian Head]  [SNIP] 

> Glenn



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-12 Thread bug1

Ingo Molnar wrote:
> 
> could you send me your /etc/raidtab? I've tested the performance of 4-disk
> RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
> (could you also send the 'hdparm -t /dev/md0' results, do you see a
> degradation in those numbers as well?)
> 
> it could either be some special thing in your setup, or an IDE+RAID
> performance problem.
> 
> Ingo

I'm not sure how useful these results are; the numbers seemed to vary by
1MB/s or so between runs, and I do have 128MB of RAM.  I'm not sure if
hdparm is sensitive to RAM size.

So generally, a 50% increase for a second drive, and then no increase
after that.

I am glad to hear that SCSI scales well; at least that limits the problem
to IDE, or to me doing something silly.
Maybe I should try a different motherboard.

4-way raid0 (/dev/hde, /dev/hdg, /dev/hdi, /dev/hdk)
/dev/md0:
 Timing buffer-cache reads:   128 MB in  1.67 seconds = 76.65 MB/sec
 Timing buffered disk reads:  64 MB in  2.09 seconds = 30.62 MB/sec

3-way raid0 (/dev/hde, /dev/hdg, /dev/hdi)
/dev/md0:
 Timing buffer-cache reads:   128 MB in  1.59 seconds = 80.50 MB/sec
 Timing buffered disk reads:  64 MB in  2.15 seconds = 29.77 MB/sec

2-way raid0 (/dev/hde, /dev/hdg)
/dev/md0:
 Timing buffer-cache reads:   128 MB in  1.59 seconds = 80.50 MB/sec
 Timing buffered disk reads:  64 MB in  1.94 seconds = 32.99 MB/sec

I used a 32K chunk size for all the tests I did; here is my raidtab.
To change the number of drives I was testing, I just changed
nr-raid-disks and uncommented the next disks; I didn't touch anything
else.

raiddev /dev/md0
raid-level  0
persistent-superblock   1
chunk-size  32  
nr-raid-disks   2
nr-spare-disks  0
device  /dev/hde5
raid-disk   0
device  /dev/hdg5
raid-disk   1
# device/dev/hdi5
# raid-disk 2
# device/dev/hdk5
# raid-disk 3

Thanks

Glenn



Re: Benchmarks, raid0 performance, 1,2,3,4 drives

2000-06-12 Thread Ingo Molnar


could you send me your /etc/raidtab? I've tested the performance of 4-disk
RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
(could you also send the 'hdparm -t /dev/md0' results, do you see a
degradation in those numbers as well?)

it could either be some special thing in your setup, or an IDE+RAID
performance problem.

Ingo




Re: Benchmarks/Performance.

1999-04-26 Thread Stephen C. Tweedie

Hi,

On Mon, 26 Apr 1999 21:28:20 +0100 (IST), Paul Jakma <[EMAIL PROTECTED]>
said:

> it was close between 32k and 64k. 128k was noticeably slower (for
> bonnie) so i didn't bother with 256k. 

Fine, but 128k will be noticeably faster for some other tasks.  Like I
said, it depends on whether you prioritise large-file bandwidth over the
ability to serve many IOs at once.

> viz pipelining: would i be right in thinking that a decent scsi
> controller and drives can "pipeline" /far/ better than, eg, a udma
> setup?

Yes, although you eventually run into a different bottleneck: the
filesystem has to serialise every so often while reading its indirection
metadata blocks.  Using a 4k fs blocksize helps there (again, for
squeezing the last few %age points out of sequential readahead).

> ie the optimal chunk size would be higher for a scsi system than for
> an eide/udma setup?

udma can do readahead and multi-sector IOs.  scsi can have limited
tagged queue depths.  Command setup is more expensive on scsi than on
ide.  Which costs dominate really depends on the workload.

--Stephen



Re: Benchmarks/Performance.

1999-04-26 Thread Paul Jakma

On Mon, 26 Apr 1999, Stephen C. Tweedie wrote:

  Hi,
  
  On Thu, 22 Apr 1999 20:45:52 +0100 (IST), Paul Jakma <[EMAIL PROTECTED]>
  said:
  
  > i tried this with raid0, and if bonnie is any guide, the optimal
  > configuration is 64k chunk size, 4k e2fs block size.  
  
  Going much above 64k will mean that readahead has to work very much
  harder to keep all the pipelines full when doing large sequential IOs.

it was close between 32k and 64k. 128k was noticeably slower (for
bonnie) so i didn't bother with 256k. 

viz pipelining: would i be right in thinking that a decent scsi
controller and drives can "pipeline" /far/ better than, eg, a udma
setup?

ie the optimal chunk size would be higher for a scsi system than for
an eide/udma setup?

  In other words, all benchmarks lie. :)
  
think someone should tell "tom's hardware guide". :)

  --Stephen
  
-- 
Paul Jakma
[EMAIL PROTECTED]   http://hibernia.clubi.ie
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
We have not inherited the earth from our parents, we've borrowed it from
our children.



Re: Benchmarks/Performance.

1999-04-26 Thread Stephen C. Tweedie

Hi,

On Thu, 22 Apr 1999 20:45:52 +0100 (IST), Paul Jakma <[EMAIL PROTECTED]>
said:

> i tried this with raid0, and if bonnie is any guide, the optimal
> configuration is 64k chunk size, 4k e2fs block size.  

Going much above 64k will mean that readahead has to work very much
harder to keep all the pipelines full when doing large sequential IOs.
That's why bonnie results can fall off.  However, if you have
independent IOs going on (web/news/mail service or multiuser machines)
then that concurrent activity may still be faster with larger chunk
sizes, as you minimise the chance of any one file access having to cross
multiple disks.

In other words, all benchmarks lie. :)

--Stephen



Re: Benchmarks/Performance.

1999-04-26 Thread Paul Jakma

On Fri, 23 Apr 1999, John Ronan wrote:

  
  On 22-Apr-99 Paul Jakma wrote:
  
  Ok  I ran a few bonnies with different chunk sizes...
  
  Raid5 running on 4 WDC AC31300R's UDMA... Seems to peak at 32k
  chunks, 4K block size

i've done a bit of benching as well. The most important factor (on ia32
anyway) seems to be the 4k e2fs block size; it always wins.

  
  Thanks for your replies...
  
  Cheers (time to do the "power removal" test :) )
 
  

-- 
Paul Jakma
[EMAIL PROTECTED]   http://hibernia.clubi.ie
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
Usage: fortune -P [-f] -a [xsz] Q: file [rKe9] -v6[+] file1 ...




Re: Benchmarks/Performance.

1999-04-26 Thread John Ronan


On 22-Apr-99 Paul Jakma wrote:

Ok  I ran a few bonnies with different chunk sizes...

Raid5 running on 4 WDC AC31300R's UDMA... Seems to peak at 32k chunks,
4K block size.

Thanks for your replies...

Cheers (time to do the "power removal" test :) )


--
John Ronan <[EMAIL PROTECTED]>, 
  Telecommunications Software Systems Group - WIT, +353-51-302411,
http://www-tssg.wit.ie

Q: How do you know a guy at the beach has a redhead for a girlfriend?
A: She has scratched "Stay off MY TURF!" on his back with her nails.
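
For anyone reproducing runs like these, the bonnie invocations behind the
tables in this thread boil down to something like the following sketch; the
directory and machine label are illustrative, and -s should comfortably exceed
RAM so the buffer cache cannot absorb the test:

  bonnie -d /mnt/md2 -s 800 -m raid5-32k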






Re: benchmarks

1999-04-24 Thread Tim Moore

> - have heard someone say that running two striped ide drives is 2x slower than
>   normal ide access... donno...
> ( I use 2 striped 8Gb ide drives for incremental backups of each 64Gb
>   main server )

2x slower = both on same ide; 2x faster = each on different ide



Re: Benchmarks/Performance.

1999-04-22 Thread Paul Jakma

On Thu, 22 Apr 1999, Alvin Oga wrote:

  
  hi paul
  
  can you post your data on a page...
  and/or your commands
  
  if you need a page...I am beginning to think a central place
  to keep random benchmark tests data might be nice...
  ( will give ya free space for raid benchmark data )
  
  have fun
  alvin
  http://www.linux-consulting.com/Raid  see Docs too
  

sure, 

i have it attached. needs a little bit of "beautification".

I can't remember the exact raid configuration. i think it was raid5
across either 3 IBM uw's, or across 3 IBM's uw and a Seagate Empire
fn. i didn't wait for the resync to finish. (don't know how significant
this is).

ck refers to chunk size, fs refers to filesystem block size. 

the figures favour 4k e2fs block size in every case. so there's no
doubt about that one.

wrt chunk size, the figures seem to peak at 64k.

-- 
Paul Jakma
[EMAIL PROTECTED]   http://hibernia.clubi.ie
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
Profanity is the one language all programmers know best.


8k ck, 1024bk

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2069 89.3 11214 46.0  4257 29.7  1312 88.5 10850 30.5 125.0  4.3

8k ck, 2048bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2191 92.7 14996 52.0  4783 29.1  1307 89.2 11718 27.5 139.2  3.6

8k ck, 4096bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2214 92.8 15770 50.1  4858 26.9  1320 89.9 11907 24.1 157.7  3.8

16k chunk, 2048 fs block.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2080 88.4 13153 45.2  4582 27.9  1312 89.2 11504 25.6 148.9  5.1

16k chk, 4096 bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2214 92.9 15944 50.4  4784 27.5  1310 89.2 12183 27.5 173.7  3.9

32k ck, 1k bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  1793 76.8  8292 31.6  2584 18.5  1244 85.0  8063 24.5 161.9  6.2

32k chk, 2k bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2123 89.7 14019 47.5  4593 27.5  1278 86.4 11455 27.3 184.7  5.5

32k ck, 4k bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2138 89.7 16000 52.0  4842 26.3  1287 86.8 11565 25.8 192.1  5.4

64k ck, 2k bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2131 89.8 15114 52.0  4897 30.8  1272 85.9 11757 26.6 198.2  5.0

64k ck, 4k bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2132 89.9 16049 49.8  5197 30.4  1281 86.2 11883 26.3 217.3  4.9

128k ck, 2k bk.

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MachineMB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  100  2113 89.6 14683 51.8  5042 30.6  1257 84.0 11156 23.6 207.3  5.6

128 ck, 4k bk.

  ---Sequential Output ---Sequential Input-- --Random--
  

Re: Benchmarks/Performance.

1999-04-22 Thread Alvin Oga


hi paul

can you post your data on a page...
and/or your commands

if you need a page...I am beginning to think a central place
to keep random benchmark tests data might be nice...
( will give ya free space for raid benchmark data )

have fun
alvin
http://www.linux-consulting.com/Raid  see Docs too

>   John Ronan ([EMAIL PROTECTED]) wrote on 22 April 1999 16:03:
>   
>>/dev/md2 is raid5 across 4 WDC AC313000R's (I can only work with
>>what I have in the office) In the raidtab I gave it a chunk size of
>>128 and I used the following mke2fs command.
>   
>>mke2fs -b 4096 -R stride=32 -m0 /dev/md2
> 
>   I'd like to have a way to measure the performance, but I don't know
>   how. Doug Ledford recommends using bonnie on a just-created array to
>   check performance with various chunck sizes. What bothers me most is
>   check performance with various chunk sizes. What bothers me most is
>   and usage...
> 
> i tried this with raid0, and if bonnie is any guide, the optimal
> configuration is 64k chunk size, 4k e2fs block size.  
> 
> -- 
> Paul Jakma
> [EMAIL PROTECTED] http://hibernia.clubi.ie
> PGP5 key: http://www.clubi.ie/jakma/publickey.txt
> ---
> Fortune:
> I haven't lost my mind -- it's backed up on tape somewhere.
> 
> 



Re: Benchmarks/Performance.

1999-04-22 Thread Paul Jakma

On Thu, 22 Apr 1999, Carlos Carvalho wrote:

  John Ronan ([EMAIL PROTECTED]) wrote on 22 April 1999 16:03:
  
   >/dev/md2 is raid5 across 4 WDC AC313000R's (I can only work with
   >what I have in the office) In the raidtab I gave it a chunk size of
   >128 and I used the following mke2fs command.
  
 >mke2fs -b 4096 -R stride=32 -m0 /dev/md2

  I'd like to have a way to measure the performance, but I don't know
  how. Doug Ledford recommends using bonnie on a just-created array to
  check performance with various chunk sizes. What bothers me most is
  that it seems that the best settings depend on your particular files
  and usage...

i tried this with raid0, and if bonnie is any guide, the optimal
configuration is 64k chunk size, 4k e2fs block size.  

-- 
Paul Jakma
[EMAIL PROTECTED]   http://hibernia.clubi.ie
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
I haven't lost my mind -- it's backed up on tape somewhere.



Re: Benchmarks/Performance.

1999-04-22 Thread Carlos Carvalho

John Ronan ([EMAIL PROTECTED]) wrote on 22 April 1999 16:03:

 >/dev/md2 is raid5 across 4 WDC AC313000R's (I can only work with
 >what I have in the office) In the raidtab I gave it a chunk size of
 >128 and I used the following mke2fs command.

 >mke2fs -b 4096 -R stride=32 -m0 /dev/md2

This does look right, but I have doubts about the chunk size. Linas
discusses this issue in his old raid-howto. Basically, if you use a
small chunk size you divide most files across the disks, which means
bandwidth increases for individual files. However, you also lose
because you cannot parallelize the accesses. If you use a
large chunk size most files will sit on a single disk, so the system
can do several accesses simultaneously. Therefore the best compromise
will depend on the workload and the statistical distribution of file
sizes. There's also the issue that with a small chunk size all
disks will have to work on most files, which will increase head
movement and seek time. Remember that ext2fs tends to coalesce the
blocks for a given file, but tends to spread files all over the
partition. However, you also have to take tagged queueing into account
for good scsi disks, which may produce optimizations at the
driver/disk level. You see the problem is not easy.
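
As a side note on the mke2fs line quoted above: the stride value is just the
RAID chunk size divided by the ext2 block size, so the numbers quoted are
self-consistent. A sketch, not a recommendation for any particular chunk size:

  # chunk-size 128 (KB) in /etc/raidtab, 4 KB ext2 blocks:
  #   stride = 128 KB / 4 KB = 32 filesystem blocks per chunk
  mke2fs -b 4096 -R stride=32 -m 0 /dev/md2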

I'd like to have a way to measure the performance, but I don't know
how. Doug Ledford recommends using bonnie on a just-created array to
check performance with various chunk sizes. What bothers me most is
that it seems that the best settings depend on your particular files
and usage...

Another point: I'd like to build a distribution of file sizes to help
choose the stripe size. I tried to use du for this, but du also
prints totals for directories, which is not what we want in this case.
How can you measure the file sizes of all files (including special
ones) in a filesystem? ls isn't appropriate because of files with holes.
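
One way to dump a raw per-file size list for that kind of distribution; a
sketch using GNU find, where the path is illustrative and %b reports allocated
512-byte blocks, so sparse files do not inflate the numbers:

  # apparent size in bytes, allocated 512-byte blocks, then the path
  find /home -xdev -type f -printf '%s %b %p\n' > filesizes.txt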



Re: benchmarks

1999-04-22 Thread Osma Ahvenlampi

Josh Fishman <[EMAIL PROTECTED]> writes:
> We have a DPT midrange SmartRAID-V and we're going to do testing on two
> 7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
> results as soon as they're available. (Testing will happen on a dual PII
> 350 w/ 256 MB RAM & a cheezy IDE disk for /, running 2.2.6 (or later).)
> 
> What kind of tests would people like to see run? The main test I'm
> going for is simply stability under load on biggish file systems &
> biggish file operations.

Bonnie and iostone benchmarks for a single SCSI disk only as well as
both RAID configurations would be a good start. If you're really
interested, running some standard(ish) database benchmarks would be
nice, too.

-- 
Osma Ahvenlampi



Re: benchmarks

1999-04-21 Thread Scott Laird


You should probably also either test it with a non-DPT SCSI controller, or
test a single disk by itself, to try to factor out the DPT's SCSI
performance as a consideration.


Scott

On Wed, 21 Apr 1999, Josh Fishman wrote:
>
> Seth Vidal wrote:
> > 
> > I've mostly been a lurker but recent changes in my company have piqued my
> > interest in the performance of sw vs hw raid.
> > 
> > Does anyone have some statistics online of sw raid (1,5) vs hw raid
> > (1,5) on a linux system?
> 
> We have a DPT midrange SmartRAID-V and we're going to do testing on two
> 7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
> results as soon as they're available. (Testing will happen on a dual PII
> 350 w/ 256 MB RAM & a cheezy IDE disk for /, running 2.2.6 (or later).)
> 
> What kind of tests would people like to see run? The main test I'm
> going for is simply stability under load on biggish file systems &
> biggish file operations.
> 
>  -- Josh Fishman
> NYU / RLab
> 



Re: benchmarks

1999-04-21 Thread Alvin Oga


hi ya seth/josh

> > I've mostly been a lurker but recent changes in my company have piqued my
> > interest in the performance of sw vs hw raid.
> > 
> > Does anyone have some statistics online of sw raid (1,5) vs hw raid
> > (1,5) on a linux system?

I've been haphazardly collecting some benchmark info... or methodology
http://www.linux-consulting.com/Raid/Docs
- see raid_cmd.uhow2
- see raid_benchmark.txt

- have a s/w raid5 on debian-2.1 w/ 5 4.5Gb fireball(?) scsi3-drives
( linux-2.2.3 w/ patches )

- have 3 other hardware raid boxes too that I could run non-destructive tests on...
- donno how to test them...
- hoping for some non-destructive read/write/seek disk tests

- have heard someone say that running two striped ide drives is 2x slower than 
  normal ide access... donno...
( I use 2 striped 8Gb ide drives for incremental backups of each 64Gb
  main server )

have fun
alvin
http://www.linux-consulting.com/Raid


> We have a DPT midrange SmartRAID-V and we're going to do testing on two
> 7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
> results as soon as they're available. (Testing will happen on a dual PII
> 350 w/ 256 MB RAM & a cheezy IDE disk for /, running 2.2.6 (or later).)
> 
> What kind of tests would people like to see run? The main test I'm
> going for is simply stability under load on biggish file systems &
> biggish file operations.
> 



Re: benchmarks

1999-04-21 Thread Seth Vidal

> > I've mostly been a lurker but recent changes in my company have piqued my
> > interest in the performance of sw vs hw raid.
> > 
> > Does anyone have some statistics online of sw raid (1,5) vs hw raid
> > (1,5) on a linux system?
> 
> We have a DPT midrange SmartRAID-V and we're going to do testing on two
> 7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
> results as soon as they're available. (Testing will happen on a dual PII
> 350 w/ 256 MB RAM & a cheezy IDE disk for /, running 2.2.6 (or later).)
> 
> What kind of tests would people like to see run? The main test I'm
going for is simply stability under load on biggish file systems &
> biggish file operations.

Stability, plus read and write performance speeds.

Possibly also optimization for mostly-read situations, mostly-write situations,
and mixed read/write situations.

-sv



Re: benchmarks

1999-04-21 Thread Josh Fishman

Seth Vidal wrote:
> 
> I've mostly been a lurker but recent changes in my company have piqued my
> interest in the performance of sw vs hw raid.
> 
> Does anyone have some statistics online of sw raid (1,5) vs hw raid
> (1,5) on a linux system?

We have a DPT midrange SmartRAID-V and we're going to do testing on two
7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
results as soon as they're available. (Testing will happen on a dual PII
350 w/ 256 MB RAM & a cheezy IDE disk for /, running 2.2.6 (or later).)

What kind of tests would people like to see run? The main test I'm
going for is simply stability under load on biggish file systems &
biggish file operations.

 -- Josh Fishman
NYU / RLab