On Tue, 19 Jan 1999, Theo Van Dinter wrote:
> | Hmm. I use SCSI on high-performance systems, but if IDE is so bad, why
> | does NASA use IDE? ;)
>
> As far as I know, Beowulf tends to use the network more than the disk, so it
> isn't necessary to have an extremely fast disk subsystem. RAM, CPU, and
> network speeds are much more important.
On Mon, 18 Jan 1999, Tom wrote:
> I disagree. SCSI advantages:
>
> - Higher drive quality. EIDE drives are built cheap, just because that is
> what the market buys. Compare the listed MTBF for a Maxtor DiamondMax to
> the listed MTBF for a Seagate Barracuda 4LP/XL.
On Mon, 18 Jan 1999, Jeremy Wohl wrote:
> On Mon, Jan 18, 1999 at 02:43:31PM -0800, Aaron D. Turner wrote:
> > Performance is going to *suck* with 5 IDE disks, assuming of course you
> > can actually get 3 IDE controllers to work in the same box.
>
> OK, what's the bottleneck here? The track-to-track seeks, bandwidth, and
> cpu use seem to be reasonable.
On Mon, Jan 18, 1999 at 07:04:41PM -0800, Tom wrote:
> Now if Bonnie were an application that accomplished meaningful work, this
> could be useful. Since it doesn't, it just shows the speed of a single
> read/write stream.

Right, well, my application is similar. Rare, large sequential writes.
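
For the curious, here is a minimal sketch in Python of the kind of
single-stream sequential-write test being discussed. It is not Bonnie
itself, and the scratch path and sizes are arbitrary assumptions:

# Single-stream sequential-write test, roughly what a Bonnie-style
# benchmark reports for block writes. Path and sizes are made up; bump
# SIZE_MB past your RAM so the page cache doesn't flatter the result.
import os
import time

TARGET = "/tmp/seqwrite.test"   # hypothetical scratch file
SIZE_MB = 256                   # total data written
CHUNK = 64 * 1024               # 64 KB writes

buf = b"\0" * CHUNK
start = time.time()
with open(TARGET, "wb") as f:
    for _ in range((SIZE_MB * 1024 * 1024) // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())        # wait until the data really hits the disk
elapsed = time.time() - start
print("%.1f MB/s sequential write" % (SIZE_MB / elapsed))
os.remove(TARGET)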
> Linux software RAID5 w/5 EIDE UDMA disks. Crazy?
>
I am currently developing the Linux driver support for RAIDZONE technology.
RAIDZONE is a complete RAID solution that utilizes Ultra ATA (a.k.a. EIDE
UDMA) disk drives.
Hmm. I use SCSI on high-performance systems, but if IDE is so bad, why
does NASA use IDE? ;)
http://beowulf.gsfc.nasa.gov/bds/bds.html
--
I didn't know it was impossible when I did it.
Osma Ahvenlampi <[EMAIL PROTECTED]>
> 11GB DiamondMax? That is probably the 2880 series... a 5400rpm drive.
You're right, this is a 5,400 rpm drive - even more impressive.
On Mon, 18 Jan 1999, reschke wrote:
> Check out:
>
> http://beowulf.gsfc.nasa.gov/bds/disks.html
>
> especially the bit at the bottom of the page which shows almost perfect
> scaling across three IDE disks/channels. Using cheap IDE drives in DMA
> mode completely avoids the CPU overhead penalty.
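
A quick way to check that claim yourself, sketched in Python: compare CPU
time against wall-clock time while streaming reads off a disk, toggling
DMA with hdparm between runs. The device path is hypothetical:

# With DMA on, CPU time should be a small fraction of wall time; in PIO
# mode the CPU does the transfers itself and the fraction climbs.
# Reading the raw device requires root.
import os
import time

DEV = "/dev/hda"                # hypothetical first IDE disk
CHUNK = 1024 * 1024             # 1 MB reads
COUNT = 64                      # stream 64 MB

wall0 = time.time()
user0, sys0 = os.times()[:2]    # CPU seconds used by this process so far
with open(DEV, "rb") as f:
    for _ in range(COUNT):
        f.read(CHUNK)
wall = time.time() - wall0
user1, sys1 = os.times()[:2]
cpu = (user1 - user0) + (sys1 - sys0)
print("%.1fs wall, %.1fs CPU (%.0f%% busy)" % (wall, cpu, 100.0 * cpu / wall))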
I've done a few mirrors with EIDE UDMA drives. It seems to be MUCH faster
to resync an array when the drives are on different channels. Two 6 gig
drives were going to take 20 minutes to resync when on the same channel,
but only 12 minutes when on different channels.
Brian
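
Back-of-the-envelope throughput implied by Brian's numbers, assuming the
whole 6 GB gets copied during the resync (which may not be exactly how
the raid code counts it):

# Resync throughput implied by 20 min vs 12 min for a 6 GB drive.
size_mb = 6 * 1024
for label, minutes in [("same channel", 20), ("different channels", 12)]:
    print("%-19s %.1f MB/s" % (label + ":", size_mb / (minutes * 60.0)))
# same channel:       5.1 MB/s
# different channels: 8.5 MB/s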
Performance is going to *suck* with 5 IDE disks, assuming of course you
can actually get 3 IDE controllers to work in the same box. I guess the
question is what's more important?
1) Saving a few bucks
2) Keeping your hair
:-)
--
Aaron Turner
Linux software RAID5 w/5 EIDE UDMA disks. Crazy?
I assume the problem is getting io requests to occur simultaneously. My
surfing tells me most controllers provide poor same-channel access, yet
independent channels are fine. So three controllers, one disk per channel,
would provide concurrent io's, if the linux eide driver makes use of them.
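
To make the concurrency point concrete, a sketch in Python of issuing
reads to several disks in parallel, one thread per device, the way
one-disk-per-channel would allow. The device paths are hypothetical, and
reading the raw devices needs root:

# One reader thread per disk. A blocking read() releases Python's global
# lock, so the reads genuinely overlap -- but only if each disk sits on
# its own channel/controller rather than sharing a cable.
import threading
import time

DEVICES = ["/dev/hda", "/dev/hdc", "/dev/hde"]  # one disk per channel
CHUNK = 1024 * 1024             # 1 MB reads
COUNT = 64                      # 64 MB from each disk

def stream(dev):
    with open(dev, "rb") as f:
        for _ in range(COUNT):
            f.read(CHUNK)

start = time.time()
threads = [threading.Thread(target=stream, args=(d,)) for d in DEVICES]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
total = COUNT * len(DEVICES)
print("%d MB from %d disks in %.1fs = %.1f MB/s aggregate"
      % (total, len(DEVICES), elapsed, total / elapsed))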