Corin Hartland-Swann wrote:
>
> Hi Andre,
>
> The revised comparison between 2.2.15 and 2.4.0-test5 is as follows:
>
> ==> 2.2.15 <==
>
> Dir    Size  BlkSz  Thr#  Read (CPU%)  Write (CPU%)  Seeks (CPU%)
> -----  ----  -----  ----  -----------  ------------  ------------
> /mnt/  256
Nils Rennebarth wrote:
>
> I use the 2.4.0-test5-pre3 kernel together with Andre Hedrick's IDE patch
> ide.2.4.0-t5-2.all.4c.patch.bz2 and reiserfs.
>
> The machine is an Athlon 650, equipped with 256MB of RAM.
> 6 IBM UDMA-66 drives of 46GB each, hanging on three Promise 20262 IDE
> controllers fo
This is interesting; I thought it was only IDE that didn't scale well in
2.[34], but it looks like the problem is more generic than that.
I've cc'ed this to linux-raid.
Glenn
> Gianluca Cecchi wrote:
>
>
> The system:
>
> MB: Supermicro P6SBU (Adaptec 7890 on board)
> CPU: 1 pentium III 500 MHz
> Mem:
Seth Vidal wrote:
>
> So my questions are these:
> Is 90MB/s a reasonable speed to achieve in a raid0 array across,
> say, 5-8 drives?
> What controllers/drives should I be looking at?
I'm a big IDE fan, and have experimented with raid0 a fair bit; I dreamt
of achieving these speeds w
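(A rough sanity check on Seth's 90MB/s figure -- my own back-of-envelope
arithmetic, not from the thread: spread evenly over 5-8 drives it is only
11-18MB/s per drive, which drives of this era can sustain; the tighter
ceiling is usually the shared PCI bus.)

    90 MB/s / 6 drives   = 15 MB/s per drive  (within UDMA-66 media rates)
    32 bits x 33 MHz / 8 = 133 MB/s           (theoretical PCI bus limit)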
I know they aren't implemented, and I know HSM stands for Hierarchical
Storage Management, but that's about it.
Are these features useful, or obsolete, or what?
Any pointers to docs?
Thanks
Glenn
Ingo Molnar wrote:
>
> could you send me your /etc/raidtab? I've tested the performance of 4-disk
> RAID0 on SCSI, and it scales perfectly here, as far as hdparm -t goes.
> (could you also send the 'hdparm -t /dev/md0' results, do you see a
> degradation in those numbers as well?)
>
> it could e
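(For reference, a minimal /etc/raidtab for a 4-disk RAID0 set like the
one Ingo describes might look like the sketch below; the device names
and chunk size are assumptions, not taken from his setup. 'hdparm -t
/dev/md0' then gives the quick sequential-read figure he asks about.)

    raiddev /dev/md0
            raid-level              0
            nr-raid-disks           4
            persistent-superblock   1
            chunk-size              32
            device                  /dev/sda1
            raid-disk               0
            device                  /dev/sdb1
            raid-disk               1
            device                  /dev/sdc1
            raid-disk               2
            device                  /dev/sdd1
            raid-disk               3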
Adrian Head wrote:
>
> I have seen people complain about similar issues on the kernel mailing
> list, so maybe there is an actual kernel problem.
>
> What I have always wanted to do, but haven't yet, is test raid
> performance with and without the noatime attribute in /etc/fstab. I
> th
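(For anyone wanting to try the noatime comparison Adrian mentions: it is
set per mount in /etc/fstab, e.g. the line below; the device, mount
point and filesystem type are placeholders.)

    /dev/md0   /data   ext2   defaults,noatime   1 2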
Here are some more benchmarks for raid0 with different numbers of
elements; all tests were done with tiobench.pl -s=800.
Hardware: dual Celeron 433, 128MB RAM, using the 2.4.0-test1-ac15+B5 raid
patch; raid drives on two Promise UDMA-66 cards (one drive per channel).
Write speed looks decent for 1 and 2 driv
Ingo, you said you were interested in slowdowns relative to 2.2. It
doesn't look good for reads: 2.2 looks to be 4-6 times faster than
2.4-test1. Does this indicate there is something wrong somewhere?
There wasn't much difference with the -B5 patch, but from the looks of it
the patch affected raid1
"Cavanaugh, Craig" wrote:
>
> Try the following
>
> http://www.3ware.com/products/linux3ware.shtml
>
> I have a Promise Ultra66 card that is working great with a couple of WD
> Ultra DMA 66 drives.
>
> On top of that, it's inside a BP6 system, in the slot that shares an IRQ with
> a HPT??? ide co
Dan Hollis wrote:
>
> On Tue, 16 May 2000, bug1 wrote:
> > I've been fighting with the HPT366 controller for ages.
>
> After months of struggling with the HPT366[1], I gave up and installed a
> PDC20262. Now all my devices work perfectly. Even my cdrw and 40x cdrom
>
> > Just last night I got them working fine with (2.2.12 through to
> > 2.2.15)
> > + ide patch on my dual Celeron BP6 using 5 drives: 1 from an Intel
> > channel, 2 from the onboard HPT366 and 2 from the PCI HPT366.
> [Adrian Head] Would you be kind enough to tell me which
> versions of
I've been fighting with the HPT366 controller for ages.
Just last night I got them working fine with (2.2.12 through to 2.2.15)
+ ide patch on my dual Celeron BP6 using 5 drives: 1 from an Intel
channel, 2 from the onboard HPT366 and 2 from the PCI HPT366.
I can do "cat /dev/hdx >/dev/null" (x i
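(A sketch of that test run against all five drives at once, to check
that every channel can stream simultaneously; the drive letters are
assumptions -- substitute the devices actually in the array.)

    # read from each drive in parallel; watch throughput with vmstat
    for d in hda hdc hde hdg hdi; do
        cat /dev/$d > /dev/null &
    done
    wait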
Volker Wysk wrote:
>
> Hello.
>
> I've tried Glenn's tip (thanks!), but all the superblocks still seem to be
> corrupted. This seems quite strange to me, since the volume has not
> been formatted.
>
> Is there anyone familiar with the internals of the RAID system, who could
> tell me what actually
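(One recovery technique with the old raidtools -- not confirmed for
Volker's case, so treat it as a hedged suggestion: if only the
superblocks are damaged, re-running mkraid with a raidtab that exactly
matches the original geometry rewrites just the superblocks and leaves
the data alone. Getting the raidtab wrong here will destroy the array,
hence the deliberately scary flag.)

    mkraid --really-force /dev/md0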
Volker Wysk wrote:
>
> Hello!
>
> RedHat 6.1's graphical install program has destroyed my RAID0 volume,
> which is really bad for me.
>
> I was going to install a second Linux, on a separate partition, and chose
> "create RAID partition", and to *not* format it. After that, I couldn't
> mount i
For those who don't already know, I just noticed that the 2.3.99-pre8 kernel
has raid modes 1 and 5 back in.
I haven't tried it, and it hasn't been tested much yet.
I'm sure someone is interested.
Glenn
remo strotkamp wrote:
>
> bug1 wrote:
> >
> > Clay Claiborne wrote:
> > >
> > > For what it's worth, we recently built an 8-IDE-drive 280GB raid5 system.
> > > Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With
> > >
Clay Claiborne wrote:
>
> For what it's worth, we recently built an 8-IDE-drive 280GB raid5 system.
> Benchmarking with HDBENCH we got 35.7MB/sec read and 29.87MB/sec write. With
> DBENCH and 1 client we got 44.5MB/sec; with 3 clients it dropped down to about
> 43MB/sec.
> The system is a 600MHz
>
> I don't believe the specs either, because they are for the "ideal" case.
> However, I think that either your benchmark is flawed, or you've got a
> crappy controller. I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in
> a machine at home, and I can easily read at 7MB/sec from it under S
Edward Schernau wrote:
>
> Chris Mauritz wrote:
>
> > > I've done some superficial performance tests using dd: 55MB/s write,
> > > 12MB/s read. Interestingly, I did get 42MB/s write using just a 2-way ide
> > > raid0, and got 55MB/s write with one drive per channel on four channels
> > > (I had no
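(A sketch of the kind of dd test described above; the mount point, block
size and count are assumptions. The test file should be larger than RAM
so the read pass isn't served from cache.)

    dd if=/dev/zero of=/mnt/raid/testfile bs=1024k count=512   # write test
    dd if=/mnt/raid/testfile of=/dev/null bs=1024k             # read test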
Chris Mauritz wrote:
>
> > From [EMAIL PROTECTED] Sat Apr 22 21:37:37 2000
> >
> > Hi, I'm just wondering: has anyone really explored the performance
> > limitations of linux raid?
> >
> > Recognising one's limitations is the first step to overcoming them.
> >
> > I've found that relative performan
Hi, I'm just wondering: has anyone really explored the performance
limitations of linux raid?
Recognising one's limitations is the first step to overcoming them.
I've found that relative performance increases are better with fewer
drives.
I've been using raid for a year or so, and I've never managed to
Hi, I want to reconfigure my server fairly dramatically, and I'm trying to
work out how I can do it without great pain.
I currently have 3 drives of ~20GB, and another 20GB and a 6.4GB I want
to include in my array.
I have about 40GB of data currently on the drives, about 10GB on a
raid0, the rest
It doesn't compile if you select it, but "boot with raid" works without
enabling it anyway.
> I also noticed that the "boot with raid" option in the kernel won't compile
> properly in the 2.3.4X series.
>
> thanks,
>
> karl
I just compiled and ran iozone (make linux); it was purely command line.
I haven't worked out how to get the graphics yet.
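(For anyone else trying it, the build steps are roughly as below; the
paths are from the iozone source tarball as I recall it, so treat them
as assumptions. -a runs the automatic test matrix.)

    cd iozone*/src/current
    make linux
    ./iozone -a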
James Manning wrote:
>
> [ Friday, March 3, 2000 ] bug1 wrote:
> > there are a few benchmark progs around
> >
> > bonnie: old benchmark progra
[EMAIL PROTECTED] wrote:
>
> What program do I use for benchmarking?
> gary hostetler
there are a few benchmark progs around:
bonnie: old benchmark program
bonnie++: updated bonnie to reflect modern hardware
tiotest: looks promising, still being developed
iozone: haven't tried this, but
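(As a hedged example of invoking one of these -- the directory, size and
user below are placeholders: bonnie++ wants a test size comfortably
larger than RAM so the cache doesn't flatter the numbers.)

    bonnie++ -d /mnt/raid -s 800 -u nobody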
Michael wrote:
>
> Answered my own question, for those interested:
> > I'm trying to put together a minimal initrd to start a raid1 over
> > raid0 root raid set. I can't seem to get raidstart to start the
> > second raid set. I've done this before with the old raid tools but
> > without the overl
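(A sketch of the /linuxrc ordering a layered set like this needs --
start the raid0 legs before the raid1 that sits on top of them. The
device names are assumptions, and it presumes an /etc/raidtab on the
initrd describing all three arrays.)

    #!/bin/sh
    /sbin/raidstart /dev/md0    # first raid0 leg
    /sbin/raidstart /dev/md1    # second raid0 leg
    /sbin/raidstart /dev/md2    # the raid1 mirror built from md0 and md1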
Hmm, well I took your advice and it seems to have worked.
I haven't done a thorough check for corruption, but there don't seem to be
any major problems.
I had a linear raid partition over 4 drives (1 raid partition on each
drive).
I used ext2resize to shrink the filesystem to below the last disk, t
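(For anyone repeating this, the rough sequence is below; the device and
target size are placeholders, and the filesystem must be unmounted and
clean first. ext2resize takes the new size in filesystem blocks by
default, as I recall, so double-check against its own docs.)

    umount /mnt/array
    e2fsck -f /dev/md0
    ext2resize /dev/md0 10000000    # new size, in filesystem blocks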