Re: SCSI or IDE

2002-12-01 Thread Russell Coker
On Sun, 1 Dec 2002 07:04, Eric Jennings wrote:
> Can you give us a command to call (using bonnie++ binaries) that will
> give a more real-world test of filesystem and disk performance?  I'd
> like to see how bonnie++ differs from hdparm in results.

Even if you just run "bonnie++" with no parameters, it will be a LOT better 
than hdparm at predicting what real performance is likely to be like.

For any detailed settings you need to have an idea of what your applications 
will do (something I am not aware of and can't easily advise on).

For an even more real-world test, install a mail server and use Postal.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: SCSI or IDE

2002-11-30 Thread Eric Jennings
Hi Russell-

Can you give us a command to call (using bonnie++ binaries) that will 
give a more real-world test of filesystem and disk performance?  I'd 
like to see how bonnie++ differs from hdparm in results.

Thanks-
Eric


> On Sun, 1 Dec 2002 00:30, Thomas Kirk wrote:
> > On Fri, Nov 29, 2002 at 05:07:16PM +0100, Nicolas Bougues wrote:
> > > You should probably try to time the disk reads, not the buffer cache...
> > >
> > > hdparm -t
> >
> > Yes the disk reads is a more realistic real world test :
> >
> > /dev/sda5:
> >  Timing buffer-cache reads:   128 MB in  1.23 seconds =104.07 MB/sec
> > guf:~# hdparm -t /dev/sda5
>
> hdparm is NOT a real world test!
>
> In real world operation you use a file system not direct access to
> the device.
> Bonnie++ is one of many file system benchmarks that you can use to get
> results that are more useful than hdparm.
>
> If you want to look at the performance of a raw device then use zcav (part of
> Bonnie++), it allows you to easily graph the varying performance across a
> partition.






Re: SCSI or IDE

2002-11-30 Thread Russell Coker
On Sun, 1 Dec 2002 00:30, Thomas Kirk wrote:
> On Fri, Nov 29, 2002 at 05:07:16PM +0100, Nicolas Bougues wrote:
> > You should probably try to time the disk reads, not the buffer cache...
> >
> > hdparm -t
>
> Yes the disk reads is a more realistic real world test :
>
> /dev/sda5:
>  Timing buffer-cache reads:   128 MB in  1.23 seconds =104.07 MB/sec
> guf:~# hdparm -t /dev/sda5

hdparm is NOT a real world test!

In real-world operation you use a filesystem, not direct access to the device.  
Bonnie++ is one of many filesystem benchmarks that you can use to get 
results that are more useful than hdparm.
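
The distinction can be sketched in a few lines of Python: timing writes that go 
through the filesystem (buffered I/O plus a final fsync) exercises the same path 
applications use, unlike hdparm's raw-device read. This is only an illustrative 
sketch, not a substitute for Bonnie++; the file size and chunk size are arbitrary 
choices.

```python
import os
import time

def fs_write_rate(path, total_mb=8, chunk_kb=64):
    """Time writes through the filesystem, fsync-ing at the end so the
    data actually reaches the device, and return MB/sec."""
    chunk = b"\0" * (chunk_kb * 1024)
    n_chunks = (total_mb * 1024) // chunk_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_chunks):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the page cache
    elapsed = time.perf_counter() - start
    return total_mb / elapsed
```

Bonnie++ measures far more than this (per-character I/O, rewrites, seeks, file 
creation), but the point stands: the numbers depend on the filesystem, not just 
the raw device.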

If you want to look at the performance of a raw device then use zcav (part of 
Bonnie++); it lets you easily graph the varying performance across a 
partition.
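
What zcav measures can be mimicked (very roughly) against an ordinary file: read 
fixed-size zones at increasing offsets and record MB/sec for each. This is a 
hypothetical sketch only; zcav itself works on the raw device, where outer-track 
zones really are faster than inner ones.

```python
import time

def zone_rates(path, zone_mb=2, zones=4):
    """Read fixed-size zones at increasing offsets and return MB/sec for
    each zone -- roughly the shape of what zcav reports across a device."""
    zone_bytes = zone_mb * 1024 * 1024
    rates = []
    with open(path, "rb") as f:
        for z in range(zones):
            f.seek(z * zone_bytes)
            start = time.perf_counter()
            remaining = zone_bytes
            while remaining > 0:
                data = f.read(min(1 << 20, remaining))
                if not data:  # file shorter than expected
                    break
                remaining -= len(data)
            elapsed = time.perf_counter() - start
            rates.append(zone_mb / elapsed if elapsed > 0 else float("inf"))
    return rates
```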

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-30 Thread Jason Lim
On a pretty loaded system with a 3ware 74xx series card, we're getting:

# hdparm -tT /dev/sda3

/dev/sda3:
 Timing buffer-cache reads:   128 MB in  0.65 seconds =196.92 MB/sec
 Timing buffered disk reads:  64 MB in  1.44 seconds = 44.44 MB/sec

This seems to be more in line with expectations.

Sincerely,
Jason
http://www.zentek-international.com/

- Original Message -
From: "Bulent Murtezaoglu" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, December 01, 2002 11:07 AM
Subject: Re: SCSI or IDE


> >>>>> "TH" == Thomas Kirk <[EMAIL PROTECTED]> writes:
> [...]
> TH> /dev/sdb5: Timing buffer-cache reads: 128 MB in 0.95 seconds
> TH> =134.74 MB/sec
>
> TH> /dev/sdb5: Timing buffered disk reads: 64 MB in 3.42 seconds =
> TH> 18.71 MB/sec
>
> TH> When it comes to real world test my scsibased system is almost
> TH> twice as fast as the idebased one :) [...]
>
> Hmm, the IDE drive in my notebook beats that!
>
> defter:~# hdparm -tT /dev/hda
>
> /dev/hda:
>  Timing buffer-cache reads:   128 MB in  0.55 seconds =232.73 MB/sec
>  Timing buffered disk reads:  64 MB in  3.29 seconds = 19.45 MB/sec
>
> This is an IBM a30p, with a 5200? RPM 2.5" 48 GIG drive.
>
> So what are we concluding from this?  I choose to conclude nothing
> of major significance.
>
> cheers,
>
> BM
>
>






Re: SCSI or IDE

2002-11-30 Thread Bulent Murtezaoglu
>>>>> "TH" == Thomas Kirk <[EMAIL PROTECTED]> writes:
[...]
TH> /dev/sdb5: Timing buffer-cache reads: 128 MB in 0.95 seconds
TH> =134.74 MB/sec

TH> /dev/sdb5: Timing buffered disk reads: 64 MB in 3.42 seconds =
TH> 18.71 MB/sec

TH> When it comes to real world test my scsibased system is almost
TH> twice as fast as the idebased one :) [...]

Hmm, the IDE drive in my notebook beats that!

defter:~# hdparm -tT /dev/hda 

/dev/hda:
 Timing buffer-cache reads:   128 MB in  0.55 seconds =232.73 MB/sec
 Timing buffered disk reads:  64 MB in  3.29 seconds = 19.45 MB/sec

This is an IBM a30p, with a 5200? RPM 2.5" 48 GIG drive.

So what are we concluding from this?  I choose to conclude nothing
of major significance.

cheers,

BM






Re: SCSI or IDE

2002-11-30 Thread Thomas Kirk
On Sun, Dec 01, 2002 at 12:30:16AM +0100, Thomas Kirk wrote:

> Yes the disk reads is a more realistic real world test :
> 
> /dev/sda5:
>  Timing buffer-cache reads:   128 MB in  1.23 seconds =104.07 MB/sec
> guf:~# hdparm -t /dev/sda5
> 
> /dev/sda5:
>  Timing buffered disk reads:  64 MB in  5.84 seconds = 10.96 MB/sec

The same test on a SCSI setup much like the above, just with SCSI disks:

*:~# hdparm -T /dev/sdb5

/dev/sdb5:
 Timing buffer-cache reads:   128 MB in  0.95 seconds =134.74 MB/sec

*:~# hdparm -t /dev/sdb5

/dev/sdb5:
 Timing buffered disk reads:  64 MB in  3.42 seconds = 18.71 MB/sec

When it comes to a real-world test, my SCSI-based system is almost twice
as fast as the IDE-based one :)
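
Taking hdparm's own arithmetic (MB transferred divided by elapsed seconds), the 
two buffered-disk figures in this sub-thread work out as below; "almost twice as 
fast" is really about 1.7x:

```python
def hdparm_rate(mb, seconds):
    """MB/sec exactly as hdparm computes it: size over elapsed time."""
    return mb / seconds

scsi = hdparm_rate(64, 3.42)  # the SCSI /dev/sdb5 run above, ~18.71 MB/sec
ide = hdparm_rate(64, 5.84)   # the IDE /dev/sda5 run quoted earlier, ~10.96 MB/sec
ratio = scsi / ide            # ~1.71, i.e. "almost twice as fast"
```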


-- 
Venlig hilsen/Kind regards
Thomas Kirk
ARKENA
thomas(at)arkena(dot)com
http://www.arkena.com


BOFH excuse #27:

radiosity depletion






Re: SCSI or IDE

2002-11-30 Thread Thomas Kirk
On Fri, Nov 29, 2002 at 05:07:16PM +0100, Nicolas Bougues wrote:

> You should probably try to time the disk reads, not the buffer cache...
> 
> hdparm -t

Yes, the disk reads are a more realistic real-world test:

guf:~# hdparm -T /dev/sda5

/dev/sda5:
 Timing buffer-cache reads:   128 MB in  1.23 seconds =104.07 MB/sec

guf:~# hdparm -t /dev/sda5

/dev/sda5:
 Timing buffered disk reads:  64 MB in  5.84 seconds = 10.96 MB/sec

The above result is from the Ultratrak TX8 (TX8000 now) with 8 x 60 GB IBM
Deskstar 7200 rpm disks in RAID 5. Kind of slow if you ask me.


-- 
Venlig hilsen/Kind regards
Thomas Kirk
ARKENA
thomas(at)arkena(dot)com
http://www.arkena.com


BOFH excuse #411:

Traffic jam on the Information Superhighway.






Re: SCSI or IDE

2002-11-29 Thread Nicolas Bougues
On Fri, Nov 29, 2002 at 07:32:52AM -0800, Eric Jennings wrote:

> http://www.3ware.com/products/benchmarks.asp
> 
> 
> My real world tests below:
> 
> # hdparm -T /dev/sda1
> 
> /dev/sda1:
> Timing buffer-cache reads:   128 MB in  1.01 seconds =126.73 MB/sec
> 

You should probably try to time the disk reads, not the buffer cache...

hdparm -t

BTW, on my RAID5 setup (4 drives) with a 3Ware card, hdparm gives
pretty much the same result as a single drive.

I believe the performance is enhanced for random access, not for the
linear reads that hdparm does.
--
Nicolas Bougues
Axialys Interactive






Re: SCSI or IDE

2002-11-29 Thread Eric Jennings
> > For what it's worth, I'm running a 3Ware 6410 card (4 port IDE RAID-5
> > with three 60 GB 7200 drives) on our development server, and it works
> > flawlessly.  One of the nice features is that it can support email
> > notification of array rebuilds and drive issues/failures, so it's
> > easy to keep on top of any changes to the drives.
>
> Maybe i should give them a try in a near future? The promise Ultratek
> TX8000 can hold 8 disk allthough its not as stable and fast as one
> could wish.

3Ware has 2, 4, 8, and 12 disk cards, so pretty much whatever you need.

> > Not to mention the good performance you get from ordinary IDE drives.
> > A no-brainer to install w/ Debian, and I'll definitely be purchasing
> > more of these for our production servers from here on out.  (But this
> > time I'll be getting the 7500 cards that support ATA-133 drives --
> > should be even faster)
>
> Any benchmarks available? Does anyone have experince with these cards?

3Ware has their own benchmarks, although take them w/ a grain of 
salt, since they were made by the same company selling the card!

http://www.3ware.com/products/benchmarks.asp

My real world tests below:

# hdparm -T /dev/sda1

/dev/sda1:
 Timing buffer-cache reads:   128 MB in  1.01 seconds =126.73 MB/sec

Remember too, I'm using an old 6410 which doesn't have their fancy 
RAID5 speedup technology (forgot what they call it), and I'm only 
using ATA-100 disks. (3 60 GB 7200 Maxtor drives).


cool-
Eric





Re: SCSI or IDE

2002-11-29 Thread Thomas Kirk
Hep

On Thu, Nov 28, 2002 at 10:47:54PM -0800, Eric Jennings wrote:

> For what it's worth, I'm running a 3Ware 6410 card (4 port IDE RAID-5 
> with three 60 GB 7200 drives) on our development server, and it works 
> flawlessly.  One of the nice features is that it can support email 
> notification of array rebuilds and drive issues/failures, so it's 
> easy to keep on top of any changes to the drives.

Maybe I should give them a try in the near future? The Promise Ultratrak
TX8000 can hold 8 disks, although it's not as stable and fast as one
could wish.

> 
> Not to mention the good performance you get from ordinary IDE drives. 
> A no-brainer to install w/ Debian, and I'll definitely be purchasing 
> more of these for our production servers from here on out.  (But this 
> time I'll be getting the 7500 cards that support ATA-133 drives -- 
> should be even faster)

Any benchmarks available? Does anyone have experience with these cards?


-- 
Venlig hilsen/Kind regards
Thomas Kirk
ARKENA
thomas(at)arkena(dot)com
http://www.arkena.com


BOFH excuse #219:

Recursivity.  Call back if it happens again.






Re: SCSI or IDE

2002-11-28 Thread Eric Jennings
For what it's worth, I'm running a 3Ware 6410 card (4 port IDE RAID-5 
with three 60 GB 7200 drives) on our development server, and it works 
flawlessly.  One of the nice features is that it can support email 
notification of array rebuilds and drive issues/failures, so it's 
easy to keep on top of any changes to the drives.

Not to mention the good performance you get from ordinary IDE drives. 
A no-brainer to install w/ Debian, and I'll definitely be purchasing 
more of these for our production servers from here on out.  (But this 
time I'll be getting the 7500 cards that support ATA-133 drives -- 
should be even faster)


Best Regards-
Eric












Re: SCSI or IDE

2002-11-28 Thread Jeremy Zawodny
On Thu, Nov 28, 2002 at 08:16:51AM +0100, Thomas Lamy wrote:
> Thomas Kirk [mailto:[EMAIL PROTECTED]] wrote:
> > 
> > Hep
> > 
> > On Mon, Nov 25, 2002 at 11:57:33AM +1300, Jones, Steven wrote:
> > 
> > > u can get hot swap ide 
> > > 
> > > promise do one (hot swap ide), dunno how good it is mind.
> > 
> > If you are thinking on this one ->
> > http://www.promise.com/product/product_detail_eng.asp?productI
> > d=90&familyId=6
> > 
> > Dont buy it! It as simple as that. 1 year ago i bought one of those
> > bastards from promise and its slooow. Im running it as filer on a
> > debian 3.0 system filesystem xfs and i havent been able to push it to
> > a sustain throughput on more than 3MB/sec. This is with 8 
> > 60GB IBM deskstar
> > 7200rpm disks in raid5. 
> > [...]
> > Next time i have to buy ideraid ill try 3ware for sure.
> 
> I have one ofe those thingies running our local samba server, raid 5 w/ 3+1
> 80 Gig 7200 IBM HDDs. Works flawlessly and fast. hdparm shows the following
> throughput:
> 
>  Timing buffer-cache reads:   128 MB in  0.87 seconds =147.13 MB/sec
>  Timing buffered disk reads:  64 MB in  1.31 seconds = 48.85 MB/sec
> 
> This is on a dual PIII/500 w/ 256 MB.
> 
> Not the cheapest one, but it's actually worth it.

By "one of those thingies" you mean 3ware?
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/






Re: SCSI or IDE

2002-11-28 Thread I. Forbes
Hello Russell 

On 28 Nov 2002 at 13:52, Russell Coker wrote:

> On Thu, 28 Nov 2002 13:15, I. Forbes wrote:
> > - If you have a "glitch" on a drive the raid will mark the partition
> > as defective possibly when there is no permanent damage. You have to
> > reboot the server before you can attempt to bring this partition back
> > on line. Once rebooted you can attempt to re-sync the drives.
> 
> That is strange.  On many occasions I have had a transient error or a failing 
> drive drop out of a RAID but then work fine when I ran raidhotadd...

In my experience, if the drive dropped out due to an error, you have 
to reboot the machine before raidhotadd will re-add it. 
(This may vary between kernel versions.)
 
> > Be very careful to set-up and check your cron scripts. If a drive
> > fails, you need the machine to send an e-mail to an address where you
> > know it is going to be read and acted upon! You do not want that e-
> > mail buried in 1000 other system warnings that get deleted without
> > being read.
> 
> The raidtools2 package comes with a cron script that does well in this regard.

The e-mail generated by raidtools2 is embedded in the "cron.daily" 
report. If you have a bunch of programs that get run by cron.daily 
and generate a lot of output, a critical RAID disk warning can get 
lost in the noise.

I have modified my cron scripts to send a second e-mail directly to 
an address that does not normally get any system messages. This one 
can be cc'd to the client if need be. They like that kind of 
reassurance.
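
A minimal sketch of that idea: build the extra alert as its own message with an 
unmistakable subject, kept separate from the cron.daily digest. The hostname, 
addresses, and wording here are hypothetical, and actually sending the message 
(via sendmail or smtplib) is left out.

```python
from email.message import EmailMessage

def raid_alert(hostname, detail, to_addr, cc_addr=None):
    """Build a dedicated RAID-failure alert message, separate from the
    cron.daily digest so it cannot get lost in the noise."""
    msg = EmailMessage()
    msg["Subject"] = f"RAID FAILURE on {hostname} - action required"
    msg["To"] = to_addr
    if cc_addr:  # e.g. cc the client, for that kind of reassurance
        msg["Cc"] = cc_addr
    msg.set_content(detail)
    return msg
```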

Ian

-
Ian Forbes ZSD
http://www.zsd.co.za
Office: +27 21 683-1388  Fax: +27 21 674-1106
Snail Mail: P.O. Box 46827, Glosderry, 7702, South Africa
-







Re: SCSI or IDE

2002-11-28 Thread uwp
In fact modern SCSI disks are a little bit faster (the 15000 rpm versions),
but they are much more expensive. We solved the problem with RAID boxes
from EasyRAID: IDE disks inside, with hot-plugging support and RAID 5, and
a SCSI connector on the outside. In the machine there's the corresponding
SCSI controller (maybe not as cheap as an IDE controller, but for us the
disks make the point because we need lots of them) and that's it. Peak rate
with 12 x WD JB1000 (100 GB, 8 MB cache, RAID 5 -> ~1 TB diskspace): about
53 MB/s; average is 33-37 MB/s. Fast enough for us (it's a backup system).
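
The capacity figure checks out: RAID 5 spends one disk's worth of space on 
parity, so usable capacity is (n-1) disks. A quick sanity check (the second case 
is the 8 x 60 GB Promise array discussed elsewhere in this thread):

```python
def raid5_usable_gb(disks, disk_gb):
    """Usable RAID 5 capacity: one disk's worth of space goes to parity."""
    if disks < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (disks - 1) * disk_gb

# 12 x 100 GB -> 1100 GB, i.e. the "~1 TB diskspace" above
# 8 x 60 GB   -> 420 GB, matching the "around 400 GB" Promise array
```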

Mermgfurt,
Udo
-- 
  Udo Wolter <-> [EMAIL PROTECTED]               |  /"\
  !!! Free Music Video !!! All Linux made !!!    |  \ /  ASCII RIBBON CAMPAIGN
http://www.dicke-aersche.de/chapterx/video.html  |   X   AGAINST HTML MAIL
   !!! First Music Video made with Linux !!!     |  / \






Re: SCSI or IDE

2002-11-28 Thread Russell Coker
On Thu, 28 Nov 2002 13:15, I. Forbes wrote:
> - If you have a "glitch" on a drive the raid will mark the partition
> as defective possibly when there is no permanent damage. You have to
> reboot the server before you can attempt to bring this partition back
> on line. Once rebooted you can attempt to re-sync the drives.

That is strange.  On many occasions I have had a transient error or a failing 
drive drop out of a RAID but then work fine when I ran raidhotadd...

> Be very careful to set-up and check your cron scripts. If a drive
> fails, you need the machine to send an e-mail to an address where you
> know it is going to be read and acted upon! You do not want that e-
> mail buried in 1000 other system warnings that get deleted without
> being read.

The raidtools2 package comes with a cron script that does well in this regard.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






RE: SCSI or IDE

2002-11-28 Thread I. Forbes
Hello All

We have about a dozen production machines running software RAID1 with 
IDE drives. We have experience going back about year now and we have 
had a number of raid drive failures in that time. 

Good points:

- If a drive fails, the machine carries on running and you can sort 
out the problem at a convenient time. You do not lose any data, and 
there is not much downtime.

Bad points:

- After a drive fails it is not 100% guaranteed that the box will be 
bootable. If the BIOS supports booting off both IDE drives it is a good 
start, but some combinations of drive/controller failures can leave the 
machine unbootable. A cold reboot as opposed to a warm reboot can 
make a difference. It is a good idea to have a boot floppy available; 
this should always work. At worst you may have to disable a drive in 
the BIOS or open the case and swap the IDE cables to get it to boot.

- If you have a "glitch" on a drive, the RAID will mark the partition 
as defective, possibly when there is no permanent damage. You have to 
reboot the server before you can attempt to bring this partition back 
on line. Once rebooted you can attempt to re-sync the drives. If you 
lose sync again in the next few hours, start planning on replacing 
the drive. But I have had a partition drop out, re-booted the 
machine, re-synced, and it worked faultlessly for months. So it is 
definitely worth considering this before you replace the drive.

- You cannot "hot swap" the drives.

Bottom line is I would much rather have a machine with software RAID 
1 than one drive alone. Most of the new machines we build have this 
configuration. 

However, if guaranteed 24/7 operation is your requirement, as opposed 
to security of data and minimized downtime, then you will have to buy 
something exotic that supports hot-swap and has a good reputation.

I have also played with machines with cheap bios based raid which 
proved frustrating. I would much rather use Linux software raid than 
any of these.

Be very careful to set up and check your cron scripts. If a drive 
fails, you need the machine to send an e-mail to an address where you 
know it is going to be read and acted upon! You do not want that 
e-mail buried in 1000 other system warnings that get deleted without 
being read.

Have fun.

Ian



On 28 Nov 2002 at 14:15, Jones, Steven wrote:

> If you lose the primary boot disk on software raid its not bootable in my
> experience.
> 
> I wouldnt use software raid for any prod box for this reason.
> 
> I happen to have 2 x 20g sitting, and since I only need 2 gig ish
> max..
> 
> Steven
> 
> -Original Message-
> From: Russell Coker [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, 28 November 2002 1:35 
> To: Jones, Steven; 'Thomas Kirk'
> Cc: [EMAIL PROTECTED]
> Subject: Re: SCSI or IDE
> 
> 
> On Wed, 27 Nov 2002 23:30, Jones, Steven wrote:
> >
> http://www.promise.com/product/product_detail_eng.asp?productId=93&familyId
> >= 7
> >
> > i was actually looking at one of these.
> >
> > For my simpler needs, data protection is important but there isnt lots of
> > it so 2 x 20 gig disks mirrored is heaps. I would like to keep the uptime
> > up, so was thinking of this solution, anybody tried one? Its for my web
> > server with all of a 128k connection so sucky performance isnt an issue as
> > its bugger all hits.
> 
> If you only need RAID-1 then software RAID is probably best.  It's cheapest 
> and provides much better performance than most hardware RAID's.  Also if you
> 
> only need 20G of storage then you still may want to consider 120G drives, 
> they are much faster than 20G drives.
> 
> > However for another job Im thinking of elsewhere (a 2 node cluster) though
> > it would be a disaster. 3meg a sec just wont cut it, i can get 16 meg off
> a
> > second hand scsi setup for the same dosh.
> 
> You can get 40 meg from a software RAID-1 on IDE drives more easily and 
> cheaply.
> 
> -- 
> http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
> http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
> http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
> http://www.coker.com.au/~russell/  My home page
> 
> 

-
Ian Forbes ZSD
http://www.zsd.co.za
Office: +27 21 683-1388  Fax: +27 21 674-1106
Snail Mail: P.O. Box 46827, Glosderry, 7702, South Africa
-







Re: SCSI or IDE

2002-11-28 Thread Thomas Kirk
Hep

On Thu, Nov 28, 2002 at 04:07:42PM +1100, Donovan Baarda wrote:

> That sounds very crappy... I'm not familiar with this product and it's
> drivers. From the kernel side, does it look like IDE or something else? If
> it looks like IDE, are you actualy using UDMA? The Debian kernels default to
> off... check with;

The array is connected to the server via SCSI. It looks like one very big
SCSI disk, in our case around 400 GB.

> 
> # hdparm /dev/hde
> 
> and see if dma is on.
> 
> I find it hard to believe that the performance could be that bad... there
> must be something else misconfigured.

Sorry, it was 15 MB/sec, but that's still not very good.

I spoke to a guy somewhere in Asia, and he and his friend (they
worked for a large company, I forget the name now) conducted some
benchmark tests that show exactly the same as mine, so I guess it's just
the speed that this array can do. Luckily we use this array for
storage now and not so much for server-backend disk capacity.

> I have heard horror stories about IDE raid when discs actualy die. I think
> the problem is disks can die in almost-pretending-to-be-ok ways. Perhaps

Yeah, maybe? The thing is we will never know, since the Promise array
doesn't tell us much.

> At least it sounds like the guy knew what he was talking about...

Yes indeed, but still I was stunned when I heard what the solution
was. I'm used to working with first-class server hardware and I've never
done anything like this before.


-- 
Venlig hilsen/Kind regards
Thomas Kirk
ARKENA
thomas(at)arkena(dot)com
http://www.arkena.com


BOFH excuse #112:

The monitor is plugged into the serial port






Re: SCSI or IDE

2002-11-28 Thread Thing
On Thu, 28 Nov 2002 22:21, Russell Coker wrote:
> On Thu, 28 Nov 2002 02:15, Jones, Steven wrote:
> > If you lose the primary boot disk on software raid its not bootable in my
> > experience.
>
> That's often the case.  If the disk entirely dies then the BIOS should be
> able to boot from the other disk, but if the disk partially fails then
> it'll probably not be bootable.
>
> But if you want to save money on hardware software RAID-1 is a very good
> option.

I'm afraid I have to disagree. Have you tried to boot off the second disk? 

I'm pretty sure it's always the case, even with a totally dead primary disk. 
I know of no motherboard that can find the second disk and have Linux 
realise it's booting off a different disk than it expects.

Sun boxes will do it, but we aren't talking SPARC here (at least I'm not).

I've tested this scenario (after some clown bought a stack of Compaq 
DP320s against my advice... the PHB liked the price... doh) and on several 
machines I have been unable to get the BIOS to boot the second disk with 
the primary disk removed.

It simply does not work.

If you know of a motherboard that will do it, please let me/us know; it 
might get on my shopping list.

regards

Thing






Re: SCSI or IDE

2002-11-28 Thread Russell Coker
On Thu, 28 Nov 2002 02:15, Jones, Steven wrote:
> If you lose the primary boot disk on software raid its not bootable in my
> experience.

That's often the case.  If the disk entirely dies then the BIOS should be able 
to boot from the other disk, but if the disk partially fails then it'll 
probably not be bootable.

But if you want to save money on hardware software RAID-1 is a very good 
option.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-27 Thread Thomas Lamy
Thomas Kirk [mailto:[EMAIL PROTECTED]] wrote:
> 
> Hep
> 
> On Mon, Nov 25, 2002 at 11:57:33AM +1300, Jones, Steven wrote:
> 
> > u can get hot swap ide 
> > 
> > promise do one (hot swap ide), dunno how good it is mind.
> 
> If you are thinking on this one ->
> http://www.promise.com/product/product_detail_eng.asp?productI
> d=90&familyId=6
> 
> Dont buy it! It as simple as that. 1 year ago i bought one of those
> bastards from promise and its slooow. Im running it as filer on a
> debian 3.0 system filesystem xfs and i havent been able to push it to
> a sustain throughput on more than 3MB/sec. This is with 8 
> 60GB IBM deskstar
> 7200rpm disks in raid5. 
> [...]
> Next time i have to buy ideraid ill try 3ware for sure.

I have one of those thingies running our local Samba server, RAID 5 w/ 3+1
80 GB 7200 rpm IBM HDDs. It works flawlessly and fast. hdparm shows the
following throughput:

 Timing buffer-cache reads:   128 MB in  0.87 seconds =147.13 MB/sec
 Timing buffered disk reads:  64 MB in  1.31 seconds = 48.85 MB/sec

This is on a dual PIII/500 w/ 256 MB.

Not the cheapest one, but it's actually worth it.

Thomas






Re: SCSI or IDE

2002-11-27 Thread Jason Lim
> On Wed, 27 Nov 2002 23:30, Jones, Steven wrote:
> > http://www.promise.com/product/product_detail_eng.asp?productId=93&familyId=7
> >
> > i was actually looking at one of these.
> >
> > For my simpler needs, data protection is important but there isnt lots of
> > it so 2 x 20 gig disks mirrored is heaps. I would like to keep the uptime
> > up, so was thinking of this solution, anybody tried one? Its for my web
> > server with all of a 128k connection so sucky performance isnt an issue as
> > its bugger all hits.
>
> If you only need RAID-1 then software RAID is probably best.  It's cheapest
> and provides much better performance than most hardware RAID's.  Also if you
> only need 20G of storage then you still may want to consider 120G drives,
> they are much faster than 20G drives.

On the other hand... when I was experimenting with all this way back
(like maybe 1-2 years ago... Russell was helping back then too ;-) ), I
found that software RAID in some cases won't work properly if something
has died... for example, from memory, one of the hard disks failed. The
BIOS stalled on that HD at POST and wouldn't continue normally. It didn't
fail over to the 2nd hard disk (this was RAID 1). A hardware RAID setup
would be able to handle this, as the hardware RAID solution would be
designed to prevent such things from completely stalling the system and
preventing startup.

Of course, hardware RAID is not as flexible, and cheapo hardware RAID may
not be much more intelligent than the motherboard's onboard IDE
controller. But it's gotta have at least a bit more intelligence.

> > However for another job Im thinking of elsewhere (a 2 node cluster) though
> > it would be a disaster. 3meg a sec just wont cut it, i can get 16 meg off a
> > second hand scsi setup for the same dosh.
>
> You can get 40 meg from a software RAID-1 on IDE drives more easily and
> cheaply.

Note that you probably won't be able to go above 2 HDs, as you certainly
won't want to put more than 1 HD per ... oh... per cable (what's the
word... per port? per channel?). Putting more than 1 on a channel lowers
performance greatly, so you can forget about doing RAID 5. You COULD go
buy a PCI IDE card, but then if you're going the hardware route you may
as well get a hardware RAID card.

Anyway, just my thoughts, as I've been in a similar situation. Software
RAID is certainly more flexible and may be faster than some hardware IDE
solutions, but it can fail under some situations. It's your own decision,
but once you do it, stick with it, as it DEFINITELY is not fun to move
these kinds of things around ;-)

Sincerely,
Jason
http://www.zentek-international.com/







Re: SCSI or IDE

2002-11-27 Thread Donovan Baarda
On Wed, Nov 27, 2002 at 10:09:48PM +0100, Thomas Kirk wrote:
> Hep
> 
> On Mon, Nov 25, 2002 at 11:57:33AM +1300, Jones, Steven wrote:
[...]
> If you are thinking on this one ->
> http://www.promise.com/product/product_detail_eng.asp?productId=90&familyId=6
> 
> Don't buy it! It's as simple as that. A year ago I bought one of those
> bastards from Promise and it's slooow. I'm running it as a filer on a
> Debian 3.0 system (XFS filesystem) and I haven't been able to push it to a
> sustained throughput of more than 3MB/sec. This is with 8 60GB IBM Deskstar
> 7200rpm disks in RAID 5. Recently a disk crashed on me and the whole

That sounds very crappy... I'm not familiar with this product and its
drivers. From the kernel side, does it look like IDE or something else? If
it looks like IDE, are you actually using UDMA? The Debian kernels default
to DMA off... check with:

# hdparm /dev/hde

and see if DMA is on.

I find it hard to believe that the performance could be that bad... there
must be something else misconfigured.


> array went offline, although the manual says it should continue to
> function. NOT TRUE! I replaced the broken drive with a new one and the
> Promise array began to rebuild; I thought I was home safe. After
> rebuilding for 15 hours the array went offline again and nothing I did
> got it back. I called the local shop where I bought it; nobody could help
> me, and they suggested I contact Promise in the Netherlands. Story
> continues.

I have heard horror stories about IDE RAID when disks actually die. I think
the problem is that disks can die in almost-pretending-to-be-ok ways. Perhaps
SCSI, with its more robust protocol, is more likely to identify when disks
die like this.

However, the recovery problem sounds like something else dodgy...

> Promise in the Netherlands were quite helpful, but what they suggested got
> me pulling out my hair!!! (what I have left of it). They suggested that I
> delete the array, create a new one and save it, then just after saving it
> pull out the power cord in the back so the array wouldn't initialize. I
> could not believe what I was hearing. Pulling out the power cord while the
> array is initializing sounds like a huge hack to me, but I did it just
> because I didn't know what else to do. It actually worked, so now I'm back
> to the good old slow Promise array, and after an xfs_repair my filer was
> up and running again.

At least it sounds like the guy knew what he was talking about...

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






RE: SCSI or IDE

2002-11-27 Thread Jones, Steven
If you lose the primary boot disk on software RAID it's not bootable in my
experience.

I wouldn't use software RAID for any production box for this reason.

I happen to have 2 x 20GB drives sitting around, and since I only need
2GB-ish max...

Steven

-Original Message-
From: Russell Coker [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 28 November 2002 1:35 
To: Jones, Steven; 'Thomas Kirk'
Cc: [EMAIL PROTECTED]
Subject: Re: SCSI or IDE


On Wed, 27 Nov 2002 23:30, Jones, Steven wrote:
> http://www.promise.com/product/product_detail_eng.asp?productId=93&familyId=7
>
> I was actually looking at one of these.
>
> For my simpler needs, data protection is important but there isn't lots of
> it, so 2 x 20 gig disks mirrored is heaps. I would like to keep the uptime
> up, so I was thinking of this solution; anybody tried one? It's for my web
> server with all of a 128k connection, so sucky performance isn't an issue
> as it gets bugger all hits.

If you only need RAID-1 then software RAID is probably best.  It's cheapest
and provides much better performance than most hardware RAIDs.  Also, if you
only need 20G of storage then you may still want to consider 120G drives;
they are much faster than 20G drives.

> However, for another job I'm thinking of elsewhere (a 2-node cluster),
> it would be a disaster. 3MB/sec just won't cut it; I can get 16MB/sec off
> a second hand SCSI setup for the same dosh.

You can get 40 meg from a software RAID-1 on IDE drives more easily and 
cheaply.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-27 Thread Russell Coker
On Wed, 27 Nov 2002 23:30, Jones, Steven wrote:
> http://www.promise.com/product/product_detail_eng.asp?productId=93&familyId=7
>
> i was actually looking at one of these.
>
> For my simpler needs, data protection is important but there isn't lots of
> it, so 2 x 20 gig disks mirrored is heaps. I would like to keep the uptime
> up, so I was thinking of this solution; anybody tried one? It's for my web
> server with all of a 128k connection, so sucky performance isn't an issue
> as it gets bugger all hits.

If you only need RAID-1 then software RAID is probably best.  It's cheapest
and provides much better performance than most hardware RAIDs.  Also, if you
only need 20G of storage then you may still want to consider 120G drives;
they are much faster than 20G drives.

> However, for another job I'm thinking of elsewhere (a 2-node cluster),
> it would be a disaster. 3MB/sec just won't cut it; I can get 16MB/sec off
> a second hand SCSI setup for the same dosh.

You can get 40 meg from a software RAID-1 on IDE drives more easily and 
cheaply.
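For what it's worth, software RAID-1 of that era was usually configured with the raidtools package via /etc/raidtab; a minimal sketch might look like the following (the device names and chunk size are illustrative assumptions, and mdadm is an alternative tool):

```
# /etc/raidtab -- minimal two-disk RAID-1 sketch (hypothetical devices)
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              32
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdc1
    raid-disk               1
```

After building the array with mkraid /dev/md0 and making a filesystem on /dev/md0, put the two drives on separate IDE channels so the mirror writes don't share a cable.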

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-27 Thread Kirk Ismay

- Original Message -
From: "Jones, Steven" <[EMAIL PROTECTED]>
To: "'Thomas Kirk'" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, November 27, 2002 2:30 PM
Subject: RE: SCSI or IDE


>
> http://www.promise.com/product/product_detail_eng.asp?productId=93&familyId=7
>
> i was actually looking at one of these.
>
> For my simpler needs, data protection is important but there isn't lots of
> it, so 2 x 20 gig disks mirrored is heaps. I would like to keep the uptime
> up, so I was thinking of this solution; anybody tried one? It's for my web
> server with all of a 128k connection, so sucky performance isn't an issue
> as it gets bugger all hits.
>
> However, for another job I'm thinking of elsewhere (a 2-node cluster),
> it would be a disaster. 3MB/sec just won't cut it; I can get 16MB/sec off
> a second hand SCSI setup for the same dosh.
>
> Steven

ARCO Products are pretty good. I've used their first-generation DupliDisk
controller for some time. The only drawback with that version is that RAID
rebuilds must be done from a DOS boot disk.

This has been fixed in the DupliDisk 2 product: it does background rebuilds
now, and they have a hot-swappable drive enclosure as well.

http://www.arcoide.com/

That being said, the only reason I'm not using the IDE RAID is that I've
switched to Ultra160 SCSI RAID on my newer systems.  I've used both the Dell
PERC and Mylex SCSI controllers, using RAID-1. IO performance is much better
on the SCSI drives.  I've also blown more IDE drives than SCSI over the
years, especially on high load servers.

That's my $0.02 anyhow :)

Sincerely,
--
Kirk Ismay
System Administrator






RE: SCSI or IDE

2002-11-27 Thread Jones, Steven

http://www.promise.com/product/product_detail_eng.asp?productId=93&familyId=7

I was actually looking at one of these.

For my simpler needs, data protection is important but there isn't lots of
it, so 2 x 20 gig disks mirrored is heaps. I would like to keep the uptime
up, so I was thinking of this solution; anybody tried one? It's for my web
server with all of a 128k connection, so sucky performance isn't an issue
as it gets bugger all hits.

However, for another job I'm thinking of elsewhere (a 2-node cluster),
it would be a disaster. 3MB/sec just won't cut it; I can get 16MB/sec off
a second hand SCSI setup for the same dosh.

Steven

 

-Original Message-
From: Thomas Kirk [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 28 November 2002 10:10 
To: Jones, Steven
Cc: 'John'; Scott; [EMAIL PROTECTED]
Subject: Re: SCSI or IDE


Hep

On Mon, Nov 25, 2002 at 11:57:33AM +1300, Jones, Steven wrote:

> u can get hot swap ide 
> 
> promise do one (hot swap ide), dunno how good it is mind.

If you are thinking on this one ->
http://www.promise.com/product/product_detail_eng.asp?productId=90&familyId=6

Don't buy it! It's as simple as that. A year ago I bought one of those
bastards from Promise and it's slooow. I'm running it as a filer on a
Debian 3.0 system (XFS filesystem) and I haven't been able to push it to a
sustained throughput of more than 3MB/sec. This is with 8 60GB IBM Deskstar
7200rpm disks in RAID 5. Recently a disk crashed on me and the whole
array went offline, although the manual says it should continue to
function. NOT TRUE! I replaced the broken drive with a new one and the
Promise array began to rebuild; I thought I was home safe. After
rebuilding for 15 hours the array went offline again and nothing I did
got it back. I called the local shop where I bought it; nobody could help
me, and they suggested I contact Promise in the Netherlands. Story
continues.

Promise in the Netherlands were quite helpful, but what they suggested got
me pulling out my hair!!! (what I have left of it). They suggested that I
delete the array, create a new one and save it, then just after saving it
pull out the power cord in the back so the array wouldn't initialize. I
could not believe what I was hearing. Pulling out the power cord while the
array is initializing sounds like a huge hack to me, but I did it just
because I didn't know what else to do. It actually worked, so now I'm back
to the good old slow Promise array, and after an xfs_repair my filer was
up and running again.

Next time I have to buy IDE RAID I'll try 3ware for sure.

-- 
Venlig hilsen/Kind regards
Thomas Kirk
ARKENA
thomas(at)arkena(dot)com
http://www.arkena.com


BOFH excuse #283:

Lawn mower blade in your fan need sharpening






Re: SCSI or IDE

2002-11-27 Thread Thomas Kirk
Hep

On Mon, Nov 25, 2002 at 11:57:33AM +1300, Jones, Steven wrote:

> u can get hot swap ide 
> 
> promise do one (hot swap ide), dunno how good it is mind.

If you are thinking on this one ->
http://www.promise.com/product/product_detail_eng.asp?productId=90&familyId=6

Don't buy it! It's as simple as that. A year ago I bought one of those
bastards from Promise and it's slooow. I'm running it as a filer on a
Debian 3.0 system (XFS filesystem) and I haven't been able to push it to a
sustained throughput of more than 3MB/sec. This is with 8 60GB IBM Deskstar
7200rpm disks in RAID 5. Recently a disk crashed on me and the whole
array went offline, although the manual says it should continue to
function. NOT TRUE! I replaced the broken drive with a new one and the
Promise array began to rebuild; I thought I was home safe. After
rebuilding for 15 hours the array went offline again and nothing I did
got it back. I called the local shop where I bought it; nobody could help
me, and they suggested I contact Promise in the Netherlands. Story
continues.

Promise in the Netherlands were quite helpful, but what they suggested got
me pulling out my hair!!! (what I have left of it). They suggested that I
delete the array, create a new one and save it, then just after saving it
pull out the power cord in the back so the array wouldn't initialize. I
could not believe what I was hearing. Pulling out the power cord while the
array is initializing sounds like a huge hack to me, but I did it just
because I didn't know what else to do. It actually worked, so now I'm back
to the good old slow Promise array, and after an xfs_repair my filer was
up and running again.

Next time I have to buy IDE RAID I'll try 3ware for sure.

-- 
Venlig hilsen/Kind regards
Thomas Kirk
ARKENA
thomas(at)arkena(dot)com
http://www.arkena.com


BOFH excuse #283:

Lawn mower blade in your fan need sharpening






Re: SCSI or IDE

2002-11-25 Thread Donovan Baarda
On Mon, Nov 25, 2002 at 06:13:47PM +0100, Toni Mueller wrote:
> 
> Hi,
> 
> On Sun, Nov 24, 2002 at 10:35:37AM -0800, Jeremy Zawodny wrote:
> > On Sun, Nov 24, 2002 at 06:56:34PM +0100, ? ? wrote:
> > > About performance - IDE still uses a lot of the CPU
> > now that most servers are far faster than that, we're talking about
> > what, 1% or maybe 2% of the CPU?
> 
> on my 700 MHz workstation I see up to 30% used by disk I/O.

you are almost certainly not using UDMA... 

The stock Debian kernel still defaults to DMA off because there is some
hardware out there that has problems with it.

Install hwtools and hdparm, and run:

hdparm /dev/hda
hdparm -tT /dev/hda

to see your current settings and benchmark the performance. Then run:

hdparm -m16 -c1 -u1 -d1 /dev/hda
hdparm -tT /dev/hda

to change to dma mode and measure the new performance. The important option
is the '-d1', and '-u1' can be risky on some hardware so you might want to
omit it.

Once you've found your preferred settings, edit /etc/init.d/hwtools to set
them up for you every time you reboot.

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






Re: SCSI or IDE

2002-11-25 Thread Russell Coker
On Mon, 25 Nov 2002 18:22, Toni Mueller wrote:
> On Sun, Nov 24, 2002 at 11:45:21PM +0100, Russell Coker wrote:
> > When you've had a repair-man from the vendor use a hammer to install a
> > CPU you learn to accept that any hardware can be broken no matter how
> > well it's installed.
>
> did he also use a chainsaw to cut his finger nails?

I wish he would use a chainsaw to shave...

> > Yes.  However for bulk IO it's rotational speed multiplied by the number
> > of sectors per track.  A 5400rpm IDE disk with capacity 160G will
> > probably perform better for bulk IO than a 10,000rpm SCSI disk with
> > capacity 36G for this reason.
>
> The average application for most people is decidedly _not_ to have
> bulk I/O, but large numbers of very small I/O operations. Like on
> a news server, a mail server (using maildir), your typical web server
> etc. IMHO the seek times of SCSI drives are shorter not only due to
> rotational speed, but also because of more powerful actuator motors.

When you get a medium sized server from Sun it'll probably have at least 4 
fast-ethernet ports and a disk array that can barely sustain 40MB/s.  Even 
for straight file-serving bulk IO can become a bottleneck on such a system.

If you use a small part of a large drive you get better average access times.
If you buy two 160G ATA 7200rpm drives and use the first 18G of each of them
in a RAID-0 then you should get better performance than a single 10K rpm SCSI
drive can deliver (and it'll cost less, going by the first online computer
store I found in Google).
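Russell's rule of thumb from earlier in the thread (bulk transfer rate is roughly rotational speed times sectors per track) can be sanity-checked in a few lines; the sectors-per-track figures below are illustrative assumptions, not specs of any real drive:

```python
# Sequential throughput ~ revolutions per second * bytes per track.
# The sectors-per-track figures are illustrative assumptions only;
# real drives use zoned recording, so the count varies across the platter.
def bulk_mb_per_sec(rpm, sectors_per_track, sector_bytes=512):
    revs_per_sec = rpm / 60.0
    return revs_per_sec * sectors_per_track * sector_bytes / 1e6

# A dense 5400rpm IDE disk vs. a small, fast 10000rpm SCSI disk:
ide_5400 = bulk_mb_per_sec(5400, 600)    # ~27.6 MB/s
scsi_10k = bulk_mb_per_sec(10000, 300)   # ~25.6 MB/s
```

With these (assumed) densities the slower spindle wins on bulk transfer, which is exactly the point being made.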

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-25 Thread Toni Mueller

Hi,

On Sun, Nov 24, 2002 at 11:45:21PM +0100, Russell Coker wrote:
> When you've had a repair-man from the vendor use a hammer to install a CPU you 
> learn to accept that any hardware can be broken no matter how well it's 
> installed.

did he also use a chainsaw to cut his finger nails?

> Yes.  However for bulk IO it's rotational speed multiplied by the number of 
> sectors per track.  A 5400rpm IDE disk with capacity 160G will probably 
> perform better for bulk IO than a 10,000rpm SCSI disk with capacity 36G for 
> this reason.

The average application for most people is decidedly _not_ to have
bulk I/O, but large numbers of very small I/O operations. Like on
a news server, a mail server (using maildir), your typical web server
etc. IMHO the seek times of SCSI drives are shorter not only due to
rotational speed, but also because of more powerful actuator motors.
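The arithmetic behind this is simple: average random access time is average seek time plus rotational latency (half a revolution on average). A quick sketch, where the seek figures are assumed typical published numbers of the era rather than measurements:

```python
# Average random access time = average seek + half a rotation.
# Seek times below are assumed typical published figures, not measurements.
def access_ms(rpm, avg_seek_ms):
    half_rotation_ms = 0.5 * 60000.0 / rpm  # 60000 ms per minute
    return avg_seek_ms + half_rotation_ms

ide_7200 = access_ms(7200, 9.0)     # ~13.2 ms per small random I/O
scsi_10k = access_ms(10000, 5.5)    # ~8.5 ms per small random I/O
```

For maildir-style workloads the ratio of those two numbers, not the sequential transfer rate, is what you actually feel.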

> It seems that whenever a vendor gets a reputation for high quality they then 
> increase the volume, sub-contract the manufacturing to a country where they 
> can pay the workers $0.10 per hour, and the results are what you would 
> expect.  :(

Unfortunately, you are right on target with this one 8-((


Best,
--Toni++






Re: SCSI or IDE

2002-11-25 Thread Toni Mueller

Hi,

On Sun, Nov 24, 2002 at 10:35:37AM -0800, Jeremy Zawodny wrote:
> On Sun, Nov 24, 2002 at 06:56:34PM +0100, ? ? wrote:
> > About performance - IDE still uses a lot of the CPU
> now that most servers are far faster than that, we're talking about
> what, 1% or maybe 2% of the CPU?

on my 700 MHz workstation I see up to 30% used by disk I/O.

> It's probably more than worth the cost savings on the SCSI premium.

Wrt. reliability, my experience shows (for me) that a SCSI drive
lasts 5 years and an IDE drive lasts 1 year, given similar load
(if not heavier on the SCSI side). So IMHO it's worth the
premium for the savings in trouble. Please also compare vendors'
warranty statements. While you don't want to exercise them, the
recent reductions on the IDE side show the vendors' different levels
of trust in these product lines.


Best,
--Toni++






Re: SCSI or IDE (IBM for RAID)

2002-11-24 Thread Jason Lim
> If you want reliable IDEs get Quantum (whups... they don't exist anymore),
> IBM (whups again... they shut down after they built a paperweight factory
> in Hungary?), or Seagate... perhaps Maxtor (bought out Quantum, didn't
> they?).

Thing with IBM HDs, in my experience, is that some are good from the
start, some are bad from the start. When I was building a big array a
while ago using IBM 120GXP HDs, out of 8 per server, 2 or 3 failed, but
they failed almost straight away (clicking sound). The rest have run
pretty much flawlessly till today.

> The hard thing with computer gear in general is that each generation is a
> totally new generation... A manufacturer's drives can turn from good to
> crap in one batch, and the reverse. About all you can do is check
> www.storagereview.com and solicit advice each time you go to buy something.
>

IBM HDs are well known for having the best RAID performance, due to the
firmware being tuned that way. Check out storagereview.com and you'll see
what I mean. Individually they don't perform much better than the
competition, but in a RAID setup they actually perform better. Not sure
why, as I'd assume fast independent HDs would lead to fast RAID, but
anyway, IBM HDs come out best for RAID in the performance results.







Re: SCSI or IDE

2002-11-24 Thread Jason Lim

>
> pps: last time i needed to build a large raid array for a fileserver, i
> priced both IDE and SCSI solutions.  the SCSI solution was about $15000
> all up (server, external drive box, drives, raid controller, etc).  the
> equivalent IDE solution was about $13000.  i ended up deciding that scsi
> was worth the extra $2000.
>
> btw, prices are in australian dollars, $AUD1.00 is somewhere around
> $US0.55
>
> these days, i may have chosen differently because i could probably have
> got twice the storage capacity for the same price with IDE.

Definitely... the gap is widening between IDE and SCSI.

The actual physical hardware (the disk actuators and such) is usually
manufactured in the same factory, right? So all things being equal
(transportation from the factory, etc.) they should have similar failure
rates; it's just that the SCSI drives have more/better
chips/firmware/software?






Re: SCSI or IDE

2002-11-24 Thread Scott St. John
At 05:39 PM 11/24/2002 -0500, John wrote:

I currently work with an ISP that has mostly IDE on the servers doing
miscellaneous stuff, all SCSI RAID5 on the servers such as database, NFS
and network monitoring. I just like being able to pull a drive hot and
replace it nice and easy in the servers that are most critical to me.


We had a 9 gig SCSI drive on-line for 6 years and this was the first time
it gave us problems. Of course, the problem was that the drive died and held
the configuration for that machine :)  This little box was a Pentium 100
with 64 megs of RAM and did Sendmail/Radius, plus Apache for user home
pages and web mail. It ran BSDi 3.0. Oh yeah, backup DNS as well.

I guess I want to ask this again: I am having serious performance issues
with this Red Hat 7.2 machine running Sendmail. I had 50+ emails last night
complaining about it. One guy knows we ran BSD on the old server and begged
us to go back. I have to move fast until I can get the new equipment ordered.
My thought is to take a dual P3-700 server with a gig of RAM and put Debian
with Postfix on it. I have 3 Maxtor IDE drives in it: two 60 gig and a 120
gig drive.

Since it is Thanksgiving week our traffic will be lower and this would give me
the chance to get the new server up and running and HOPEFULLY give us
much improved performance until I can get the SCSI drives ordered.

-Scott




Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Mon, 25 Nov 2002 00:54, Donovan Baarda wrote:
> I'm pretty sure most device drivers for both IDE and SCSI do some degree of
> command-reordering before issuing the commands down the bus. I wonder how
> much real-world benefit can be gained from drive-level command re-ordering,
> and how many SCSI drives actually bother to implement it well :-)

Last I heard was that they both did it badly.  Commands were re-ordered at the 
block device level (re-ordering commands sent to a RAID device is not 
helpful).

This is separate to re-ordering within the disk.

> The point is, 4 IDE buses will probably match 1 SCSI bus for sustained
> transfer rates: 4x133 = 533MB/sec, more than the fastest SCSI. Throw
> in IDE's crappy per-drive performance, and you get about the same.

To sustain that speed you need 66MHz 64bit PCI, which almost no-one gets.

If you have a single 33MHz card then the entire bus runs at 33MHz, so 
therefore you need an expensive motherboard with multiple PCI buses and RAID 
controller cards to support it.

Running two hardware RAID cards on separate PCI buses and then doing software 
RAID-0 across them to solve PCI bottlenecks is apparently not that uncommon.
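The bus arithmetic behind this is straightforward. The sketch below uses the nominal 33 and 66MHz clocks (the commonly quoted 133MB/s and 533MB/s figures come from the exact 33.33/66.66MHz clocks), and these are theoretical peaks ignoring protocol overhead:

```python
# Theoretical peak PCI bandwidth = clock (MHz) * bus width (bytes),
# ignoring arbitration and protocol overhead.
def pci_mb_per_sec(mhz, bits):
    return mhz * (bits // 8)

plain_pci = pci_mb_per_sec(33, 32)  # 132 MB/s: one ATA/133 channel fills it
wide_pci = pci_mb_per_sec(66, 64)   # 528 MB/s: roughly four ATA/133 channels
```

A single 32bit/33MHz PCI bus is already saturated by one fast channel, which is why the multi-bus motherboards and the software RAID-0 trick above exist.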

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread Donovan Baarda
On Mon, Nov 25, 2002 at 12:29:11AM +0100, Jan-Benedict Glaw wrote:
> On Mon, 2002-11-25 10:17:44 +1100, Donovan Baarda <[EMAIL PROTECTED]>
> wrote in message <[EMAIL PROTECTED]>:
> > On Sun, Nov 24, 2002 at 08:45:04PM +0100, Emilio Brambilla wrote:
> > > hello,
> > > On Sun, 24 Nov 2002, Russell Coker wrote:
[...]
> Command queuing is quite new to ide, and only IBM drives support it up
> to now, but others are to follow...

Ahh, perhaps only the spec supported it, and no actual hardware :-)

> In any case, command queuing makes a big difference when you have lots of
> slow drives sharing a mega-bandwidth bus. IDE has only two drives, so it's
> 
> That's not really right. Command queuing allows you to tell the drive you
> want, say, 10 sectors scattered across the whole drive. If you
> issue 10 synchronous commands, you'll see 10 seeks. Issuing them as
> queued commands will fetch them _all_ within _one_ sweep, if there's good
> firmware on the drive. Only the drive itself knows the optimal order
> of fetching them; the OS only knows some semantics...

I'm pretty sure most device drivers for both IDE and SCSI do some degree of
command-reordering before issuing the commands down the bus. I wonder how
much real-world benefit can be gained from drive-level command re-ordering,
and how many SCSI drives actually bother to implement it well :-)

> not as relevant. I believe most benchmarking shows only a marginal
> performance hit for two IDEs on the same bus (this might be because IDE
> does have a form of command queuing, or it could just be because it
> doesn't make much difference). I know SCSI shows nearly no hit for two
> drives on one bus, but
> 
> Or it is because the benchmark doesn't ask _both_ drives to send their
> very maximum of data...

I'm pretty sure any benchmarks done on this would have been hammering both
drives at once... that would be the point, wouldn't it?

> > when you compare 8 SCSI's on one bus with 8 IDE's on 4 buses, I bet they
> > turn out about the same.
> 
> > If you have 6 or less devices, IDE is just as good as SCSI, and bucketloads
> > cheaper.
> 
> Only true if you don't want your devices to send at their maximum
> speed _all the time_.

The point is, 4 IDE buses will probably match 1 SCSI bus for sustained
transfer rates: 4x133 = 533MB/sec, more than the fastest SCSI. Throw
in IDE's crappy per-drive performance, and you get about the same.

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






Re: SCSI or IDE

2002-11-24 Thread Craig Sanders
On Sun, Nov 24, 2002 at 09:14:14PM +0100, Russell Coker wrote:
> 3ware RAID arrays are affordable and deliver quite satisfactory
> performance.  Usually they are limited by PCI speeds (last time I
> checked they didn't support 66MHz 64bit PCI).

the key is that you need to use cards like these if you want decent
performance out of a multi-drive IDE system.  that's not such a bad
thing, because these cards are relatively inexpensive.

the one thing NOT to do is to have more than one drive per IDE channel.
i.e. don't have IDE slave devices (including disks, tapes, cdroms, etc),
have only master drives on a dedicated IDE controller.  either use a
bunch of cheap PCI IDE controllers (some have as many as 4 per card,
most have 2) to get enough IDE channels for your drives or use something
like the 3ware raid cards which have one IDE controller for each drive.

craig

ps: i still prefer SCSI - a good scsi raid setup will beat a good ide
raid setup anyday (but it will cost more), and i know that i'm using
equipment designed for heavy server loads rather than light desktop
loads.

the price difference between SCSI & IDE is actually increasing.  there
are some huge IDE drives available now at reasonable prices, but the
SCSI versions of the same sizes are a) often delayed for at least
several months and b) not reasonably priced.

pps: last time i needed to build a large raid array for a fileserver, i
priced both IDE and SCSI solutions.  the SCSI solution was about $15000
all up (server, external drive box, drives, raid controller, etc).  the
equivalent IDE solution was about $13000.  i ended up deciding that scsi
was worth the extra $2000.  

btw, prices are in australian dollars, $AUD1.00 is somewhere around $US0.55

these days, i may have chosen differently because i could probably have
got twice the storage capacity for the same price with IDE.


-- 
craig sanders <[EMAIL PROTECTED]>

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch






Re: SCSI or IDE

2002-11-24 Thread Donovan Baarda
On Sun, Nov 24, 2002 at 05:39:53PM -0500, John wrote:
> On Sun, Nov 24, 2002 at 12:38:56PM -0500, Scott wrote:
> > After some talks with the person who handles the books she has given me 
> > the authority to bail on these Netfinity boxes and get something more 
> > supported by Debian.  My question is:  with IDE drives as fast as they are 
> > now does it really pay to go SCSI?  Are there any benefits besides RAID?
> > I understand fault tolerance, but how about performance?
> 
> I have used SCSI and IDE in many levels of the game. I've also used
> filers (Netapp). 
[...]
> I've seen (figuring off the top of my head) a 3:1 IDE/SCSI failure rate
> across all drives/servers/systems. I'm not recalling that many failures
> all told. I can actually only recall two SCSI failure, a 2G WD and a 18G
> IBM. I've had multiple Fujitsu IDE and WD IDE failures, sometimes with the
> replacement drive failing in the same machine (Grrr)

Fujitsu and WD don't make HDD's, they make paperweights... and cheap
nasty paperweights at that.

Actually, I'm probably being a bit harsh on WD... they do probably make some
HDD's at their paperweight factories, but Fujitsu never have.

If you want reliable IDEs get Quantum (whups... they don't exist anymore),
IBM (whups again... they shut down after they built a paperweight factory in
Hungary?), or Seagate... perhaps Maxtor (bought out Quantum, didn't they?).

The hard thing with computer gear in general is that each generation is a
totally new generation... A manufacturer's drives can turn from good to crap
in one batch, and the reverse. About all you can do is check
www.storagereview.com and solicit advice each time you go to buy something.

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






Re: SCSI or IDE

2002-11-24 Thread Jan-Benedict Glaw
On Mon, 2002-11-25 10:17:44 +1100, Donovan Baarda <[EMAIL PROTECTED]>
wrote in message <[EMAIL PROTECTED]>:
> On Sun, Nov 24, 2002 at 08:45:04PM +0100, Emilio Brambilla wrote:
> > hello,
> > On Sun, 24 Nov 2002, Russell Coker wrote:
> [...]
> > SCSI can queue up to 256 commands and reorder them for maximum
> > performance, furthermore SCSI has been developed to be used in the server
> > market, so they are optimized for servers (rescheduling commands and seek
> > patterns of SCSI has been written for this kind of use!)
> 
> There are lots of IDE vs SCSI arguments that are no longer true that still
> surface when this topic is recycled.
> 
> Command Queuing: IDE didn't support command queuing, and SCSI did. I thought
> command queuing had been available in IDE for ages... A quick search of the
> Linux IDE driver source pulls up bucketloads of matches against queue,
> including;

Command queuing is quite new to IDE, and only IBM drives support it up
to now, but others are to follow...

>  * Version 6.00 use per device request queues
>  *  attempt to optimize shared hwgroup performance
>   ::
>  * Version 6.31 Debug Share INTR's and request queue streaming
>  *  Native ATA-100 support

This is Linux's internal queuing, not drive-level queuing...

> In any case, command queuing makes a big difference when you have lots of
> slow drives sharing a mega-bandwidth bus. IDE has only two drives, so it's

That's not really right. Command queuing lets you tell the drive that you
want, say, 10 sectors scattered across the whole drive. If you issue 10
synchronous commands, you'll see 10 seeks. Issuing them as queued commands
will fetch them _all_ within _one_ pass, if there's good firmware on the
drive. Only the drive itself knows the optimal order for fetching them; the
OS only knows some semantics...
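
A toy sketch of that effect (Python; the cylinder numbers, start position,
and cost model are all invented for illustration): serving a scattered queue
in drive-chosen order turns many back-and-forth seeks into one sweep of the
head.

```python
# Toy model of command queuing: the drive reorders queued requests so the
# head sweeps once across the platter instead of seeking back and forth.

def seek_distance(start, cylinders):
    """Total head travel when requests are served in the given order."""
    total, pos = 0, start
    for c in cylinders:
        total += abs(c - pos)
        pos = c
    return total

# Hypothetical requests scattered across the drive, in arrival order.
requests = [880, 12, 640, 90, 700, 30, 500, 250, 970, 410]

naive = seek_distance(0, requests)           # 10 synchronous commands
queued = seek_distance(0, sorted(requests))  # drive reorders: one sweep

print(naive, queued)  # the single sweep travels far less
```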

> not as relevant. I believe most benchmarking shows only a marginal performance
> hit for two IDE drives on the same bus (this might be because IDE does have a
> form of command queuing, or it could just be because it doesn't make much
> difference). I know SCSI shows nearly no hit for two drives on one bus, but

Or it could be that the benchmark doesn't ask _both_ drives to send data at
their maximum rate...

> when you compare 8 SCSI drives on one bus with 8 IDE drives on 4 buses, I
> bet they turn out about the same.

> If you have 6 or fewer devices, IDE is just as good as SCSI, and bucketloads
> cheaper.

Only true if you don't need your devices to send at their maximum speed
_all the time_.

> The IDE raid cards do open up the 6~12 device area to IDE, but I suspect
> SCSI is still slightly less painful, though IDE is definitely cheaper.

That's right :-)

Regards, JBG

-- 
   Jan-Benedict Glaw   [EMAIL PROTECTED]. +49-172-7608481
   "A Free Opinion in a Free Mind          | Against censorship
for a Free State full of Free Citizens"    | on the Internet!
   Shell Script APT-Proxy: http://lug-owl.de/~jbglaw/software/ap2/





Re: SCSI or IDE

2002-11-24 Thread Donovan Baarda

On Sun, Nov 24, 2002 at 08:45:04PM +0100, Emilio Brambilla wrote:
> hello,
> On Sun, 24 Nov 2002, Russell Coker wrote:
[...]
> ATA/IDE drives/controllers lack the ability to perform "command queuing",
> so they are not much fast on many concurrent i/o requests (this feature
> will be introduced in serial-ATA II devices, I think)
> 
> SCSI can queue up to 256 commands and reorder them for maximum
> performance, furthermore SCSI has been developed to be used in the server
> market, so they are optimized for servers (rescheduling commands and seek
> patterns of SCSI has been written for this kind of use!)

There are lots of IDE vs SCSI arguments that are no longer true that still
surface when this topic is recycled.

CPU: IDE in the PIO days used bucketloads of CPU. UDMA ended that three or
four IDE generations ago. It is not unusual to see benchmarks with IDE drives
using less CPU than SCSI drives, though they are pretty much the same now.

Thermal recalibration: Some drives do periodic "recalibrations" that cause a
hiccup in data streaming. This is _not_ an IDE vs SCSI issue, but a drive
issue. Some drives were "multi-media rated", which means they can guarantee
a constant stream of data without "recalibration" hiccups. Many low-end SCSI
drives are mechanically identical to the manufacturer's corresponding IDE
drive, and hence have the same "recalibration" behaviour. I'm not sure what
the current state of affairs with "thermal recalibration" and "multi-media
rated" is, but it wouldn't surprise me if the terms have faded away, as it's
probably not an issue on new drives. Anyone else care to comment?

Command Queuing: IDE didn't support command queuing, and SCSI did. I thought
command queuing had been available in IDE for ages... A quick search of the
Linux IDE driver source pulls up bucketloads of matches against queue,
including:

 * Version 6.00 use per device request queues
 *  attempt to optimize shared hwgroup performance
  ::
 * Version 6.31 Debug Share INTR's and request queue streaming
 *  Native ATA-100 support

And the ataraid.c code includes references to "ataraid_split_request"
commands.  The ide-cd.c code also refers to "cdrom_queue_packet_command".
This might not be actual "command queuing", so perhaps I'm wrong, but I'm
sure I read ages ago that IDE had at least something comparable. Anyone
actually know?

In any case, command queuing makes a big difference when you have lots of
slow drives sharing a mega-bandwidth bus. IDE has only two drives, so it's
not as relevant. I believe most benchmarking shows only a marginal performance
hit for two IDE drives on the same bus (this might be because IDE does have a
form of command queuing, or it could just be because it doesn't make much
difference). I know SCSI shows nearly no hit for two drives on one bus, but
when you compare 8 SCSI drives on one bus with 8 IDE drives on 4 buses, I bet
they turn out about the same.

> It's true that on many entry-level severs IDE is enough for the job (and
> a lot cheeper than SCSI), but on hi-end servers scsi is still a MUST!

Many "high-end" integrated SCSI RAID storage solutions are actually a SCSI
interface to a bunch of IDE disks...

The best way to compare bare single-drive performance is to compare drives
at:

http://www.storagereview.com/

IMHO, the big win of SCSI is a single interface with a proper bus that
supports multiple devices. SCSI can drive a scanner, 2 CD-ROMs, and 4
hard drives off one interface using a single interrupt. UW SCSI can handle
up to 15 devices on one interface and not break a sweat.

If you are going to have more than 6 devices, SCSI is the less painful path
to take, though more expensive.

If you have 6 or fewer devices, IDE is just as good as SCSI, and bucketloads
cheaper.

The IDE raid cards do open up the 6~12 device area to IDE, but I suspect
SCSI is still slightly less painful, though IDE is definitely cheaper.

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 23:39, John wrote:
> There's very little point, for my work, in having IDE on the most mission
> critical servers. We also have a habit of netbooting many of our
> machines. POP/SMTP/HTTP/HTTPS/DNS are done via netboot. This reduces our
> reliance on drives in tons of systems.

It does however increase your reliance on the network.  However it's an 
interesting concept.

One problem with netbooting is that you then become reliant on a single
Filer-type device instead of having multiple independent servers.  If each
server has its own disks running software RAID then a single disk failure
isn't going to cause any great problems, and a total server failure isn't
going to be a big hassle either.

Another problem that has prevented me from doing such things in the past is
that the switches etc. have been run by a different group.  I have been unable
to trust the administrators of the switches not to break things on me...  :(

> I would be happy to know if there
> are controllers and setups that allow hotswappable IDE RAID5 - I'd be
> very interested if there were (please feel free to let me know on or off
> list).

http://www.raidzone.com/Products___Solutions/OpenNAS_Overview/opennas_overview.html

> IF you can get a combination of good IDE drives with good IDE
> controllers that don't peg your CPU usage and money is an issue, go with
> IDE. Never put two RAID1 IDE drives on the same channel (primary or
> secondary). Put one on each for safety.

You can say the same about SCSI.

If you get a high-end RAID product from Sun then you won't have two drives in 
the same RAID set on the same SCSI cable.

One final thing, the performance differences between ReiserFS, Ext3, and XFS 
are far greater than that between IDE and SCSI drives of similar specs.  All 
three file systems perform best for different tasks, benchmark for what you 
plan to do first.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






RE: SCSI or IDE

2002-11-24 Thread Jones, Steven
You can get hot-swap IDE.

Promise do one (hot-swap IDE); dunno how good it is, mind.

Thing

8><--

I currently work with an ISP that has mostly IDE on the servers doing
miscellaneous stuff, all SCSI RAID5 on the servers such as database, NFS
and network monitoring. I just like being able to pull a drive hot and
replace it nice and easy in the servers that are most critical to me. 






Re: SCSI or IDE

2002-11-24 Thread John
On Sun, Nov 24, 2002 at 12:38:56PM -0500, Scott wrote:
> After some talks with the person who handles the books she has given me 
> the authority to bail on these Netfinity boxes and get something more 
> supported by Debian.  My question is:  with IDE drives as fast as they are 
> now does it really pay to go SCSI?  Are there any benefits besides RAID?
> I understand fault tolerance, but how about performance?

I have used SCSI and IDE in many levels of the game. I've also used
filers (Netapp). 

I currently work with an ISP that has mostly IDE on the servers doing
miscellaneous stuff, all SCSI RAID5 on the servers such as database, NFS
and network monitoring. I just like being able to pull a drive hot and
replace it nice and easy in the servers that are most critical to me. 

There's very little point, for my work, in having IDE on the most mission
critical servers. We also have a habit of netbooting many of our
machines. POP/SMTP/HTTP/HTTPS/DNS are done via netboot. This reduces our
reliance on drives in tons of systems. I would be happy to know if there
are controllers and setups that allow hot-swappable IDE RAID5 - I'd be
very interested if there were (please feel free to let me know on or off
list).

At home, where we have a completely overbuilt network (geek!) I have a
server with IDE software RAID1 (dual 40G) and a SCSI RAID5 array that is
external and modular, so I can move/add/remove
drives at will - without losing my uptime on the main machine. My SCSI
array is currently 54G, but will expand again in the spring when I make
some other upgrades and free up more like drives. I also like to add and
remove my SCSI CD-ROMs as well, just cause I have several laying around.


I've seen (figuring off the top of my head) a 3:1 IDE/SCSI failure rate
across all drives/servers/systems. I'm not recalling that many failures
all told. I can actually only recall two SCSI failures, a 2G WD and an 18G
IBM. I've had multiple Fujitsu IDE and WD IDE failures, sometimes with the
replacement drive failing in the same machine (Grrr).


Overall, this would be my recommendation (IMHO - YMMV)

IF you can get a combination of good IDE drives with good IDE
controllers that don't peg your CPU usage, and money is an issue, go with
IDE. Never put two RAID1 IDE drives on the same channel (primary or
secondary); put one on each for safety. For storing mp3s at home or
files locally, IDE is generally well suited and will save you a lot.

If you've got more money and want to see a (actual, not spec) better
MTBF, go with SCSI. Take the time to learn how SCSI works, terminations,
etc. Research block sizes on RAID arrays. Experiment to get the best
speeds. Use multiple controllers if you want. Have proper cooling. 

I think SCSI edges out IDE for reliability, and I think the extra cost is
worth it. And if your data is super mission-critical, just buy a filer
instead and use snapshots. If, as I reread your question, you just want
to know "Is SCSI worth it for speed?" - no, probably not; you can do
well with an intelligently configured IDE system.

$.02, FWIW,

John






Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 22:13, Vector wrote:
> > > You can put a lot more disks on a single SCSI
> > > controller than on an IDE controller, and there (AFAIK, I could be
> > > mistaken) two drives on one bus cannot work simultaneously and share
> > > the bandwidth (which isn't a problem with SCSI; if you have a 160 MB/s
> > > bus, and 3 disks that can make about 40MB/s, you can have all 120MB/s)
> >
> > 3ware IDE controllers support up to 12 drives.  You won't find many SCSI
> > controllers that can do that and deliver acceptable performance (you
> > won't get good performance unless you have 64bit 66MHz PCI).
>
> That is not true.

What are you claiming is not true?

> Ultra2 can't do 160MB/s.  Ultra2 is limited to 80MB/s.  U160 (or Ultra3)
> can do 160MB/s.  And perhaps, yes, Ultra2 vs ATA-133 might be comparable.
> And U320 is out now and can do 320MB/s... such is and has been the evolution
> of both standards.

ATA-133 for two disks (or one disk for 3ware-type devices) is more than
adequate.  U160 for more than 4 disks will be a bottleneck.
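
As a back-of-the-envelope sketch (Python; the ~40MB/s sustained rate per
drive is an assumed, era-typical figure, not a measured one), the aggregate
demand crosses the U160 bus limit at the fifth drive:

```python
# Rough model: aggregate sequential throughput on a shared bus is capped
# by the bus rate. Per-drive rate is a hypothetical sustained figure.
BUS_MB_S = 160   # U160 SCSI bus
DRIVE_MB_S = 40  # assumed sustained rate of one drive

def aggregate(n_drives):
    """Best-case combined throughput of n drives on one shared bus."""
    return min(BUS_MB_S, n_drives * DRIVE_MB_S)

for n in range(1, 7):
    print(f"{n} drives: want {n * DRIVE_MB_S:3d} MB/s, get {aggregate(n):3d} MB/s")
# Beyond 4 drives the bus, not the drives, sets the ceiling.
```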

> > 3)  SCSI termination.
>
> Huh?  I'd honestly have to say this falls into the same category as 1)
> Incompetent administrators.  Get the termination right and it all works
> just fine, which is now easier than ever since controllers have been able

Unfortunately the administrators don't always get a chance to inspect the 
hardware or fiddle with it.

I often don't get to touch the hardware I administer until after it has been 
proven to be broken.

Sun likes to do all the hardware maintenance (it's quite profitable for them).  
Sun employees often aren't able to terminate SCSI properly.

For these and other reasons a company I am working for is abandoning Sun and 
moving to Debian on PC servers.

> to autoterminate for many many years and now they are building terminators
> right into the cable.  And there are other factors like cable quality and
> length.  It's cerntaily more complicated but again, I feel it's worth it
> once you know what you are doing.

Regardless of auto-termination and terminators built into cables, if you 
install the wrong parts then you can still stuff it up.

When you've had a repairman from the vendor use a hammer to install a CPU,
you learn to accept that any hardware can be broken no matter how well it's
installed.

> > SCSI drives tend to have higher rotational speeds than IDE drives and
> > thus
>
> True, and in your first reply on this thread didn't you quote this as one
> of the primary factors determining speed?

Yes.  However for bulk IO it's rotational speed multiplied by the number of
sectors per track.  A 5400rpm IDE disk with a capacity of 160G will probably
perform better for bulk IO than a 10,000rpm SCSI disk with a capacity of 36G,
for this reason.
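
That claim can be sanity-checked with rough numbers (Python; the per-track
sector counts below are invented for illustration, since real drives use
zoned recording with varying sectors per track):

```python
# Sequential transfer rate ~ (rotations per second) * (bytes per track).
# A big, dense platter packs far more sectors per track than an older,
# smaller one, which can outweigh a higher spindle speed.
SECTOR = 512  # bytes

def mb_per_sec(rpm, sectors_per_track):
    """Idealized sustained transfer rate in MB/s (decimal megabytes)."""
    return rpm / 60 * sectors_per_track * SECTOR / 1e6

big_ide = mb_per_sec(5400, 1000)     # hypothetical dense 160G IDE disk
small_scsi = mb_per_sec(10000, 400)  # hypothetical older 36G SCSI disk

print(f"{big_ide:.1f} MB/s vs {small_scsi:.1f} MB/s")
```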

A high rotational speed helps seek times a lot too, but a big cache and a 
battery-backed write-back cache can make up for this (admittedly this isn't 
something you'll see in a typical IDE-RAID solution).

> Hmm, yeah, there's crap in both sectors that's for sure.  I can't say I've
> been a huge fan of IBM drives in the past.

IBM drives used to be really good.  They used to run cool, quietly, and 
reliably.  I've had IBM drives keep working in situations where other brand 
drives failed from heat.

It seems that whenever a vendor gets a reputation for high quality they then 
increase the volume, sub-contract the manufacturing to a country where they 
can pay the workers $0.10 per hour, and the results are what you would 
expect.  :(

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread Vector

- Original Message -
From: Russell Coker <[EMAIL PROTECTED]>
To: Emilio Brambilla <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, November 24, 2002 1:14 PM
Subject: Re: SCSI or IDE


> Organizations such as CERN are using IDE disks for multi-terabyte arrays.

I've heard google uses IDE as well.  Of course, they come in a huge cluster
of cheap workstations, not as a mass storage system.


In an attempt to answer the original question:
As you can see here there is somewhat of a religious war going on.  I don't
much care about the specifics of the religion.  I have always gone with what
worked best for me, which in my case has been SCSI.  If you have a tight
budget, I'm sure you can find an IDE solution that will do you just fine.
If you have a fat budget, try them both and then sell off the one you like
the least and chalk it up to a learning experience.

vec







Re: SCSI or IDE

2002-11-24 Thread Vector

- Original Message -
From: Russell Coker <[EMAIL PROTECTED]>
To: Васил Колев <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Sunday, November 24, 2002 12:39 PM
Subject: Re: SCSI or IDE


> > You can put a lot more disks on a single SCSI
> > controller than on an IDE controller, and there (AFAIK, I could be
> > mistaken) two drives on one bus cannot work simultaneously and share the
> > bandwidth (which isn't a problem with SCSI; if you have a 160 MB/s bus,
> > and 3 disks that can make about 40MB/s, you can have all 120MB/s)
>
> 3ware IDE controllers support up to 12 drives.  You won't find many SCSI
> controllers that can do that and deliver acceptable performance (you won't
> get good performance unless you have 64bit 66MHz PCI).
>

That is not true.

> Do a benchmark of two IDE drives on the one cable and you will discover that
> the performance loss is not very significant.
>
> ATA-133 compared to Ultra2 SCSI at 160MB/s is not much difference.  S-ATA is
> coming out now and supports 150MB/s per drive.
>

Ultra2 can't do 160MB/s.  Ultra2 is limited to 80MB/s.  U160 (or Ultra3) can
do 160MB/s.  And perhaps, yes, Ultra2 vs ATA-133 might be comparable.  And
U320 is out now and can do 320MB/s... such is and has been the evolution of
both standards.

> > And maybe I should say something about reliability: SCSI disks don't
> > die that often, compared to IDE drives, while being used a lot 24x7.
>
> The three biggest causes of data loss that I have seen are:
> 1)  Incompetent administrators.

Amen.

> 2)  Heat.

Halleluja, Brother!

> 3)  SCSI termination.
>

Huh?  I'd honestly have to say this falls into the same category as 1)
Incompetent administrators.  Get the termination right and it all works just
fine, which is now easier than ever since controllers have been able to
autoterminate for many many years, and now they are building terminators
right into the cable.  And there are other factors like cable quality and
length.  It's certainly more complicated, but again, I feel it's worth it
once you know what you are doing.

> SCSI drives tend to have higher rotational speeds than IDE drives and thus

True, and in your first reply on this thread didn't you quote this as one of
the primary factors determining speed?

> produce more heat.  Even when IBM was shipping thousands of broken IDE hard

yes, fans are our friends!

> drives (and hundreds of broken SCSI drives which didn't seem to get any
> press) the data loss caused by defective drives was still far less than any
> of those three factors.

Hmm, yeah, there's crap in both sectors, that's for sure.  I can't say I've
been a huge fan of IBM drives in the past.

vec







Re: SCSI or IDE

2002-11-24 Thread Vector
I can tell you that for the last 10 years I've been using all-SCSI
equipment in all the systems I've built.  I have yet to be disappointed with
the stuff, even though it tends to cost more.  SCSI systems are MUCH more
flexible than IDE systems, and despite all the additions to IDE like
DMA/UDMA, etc., I am still happiest with the SCSI systems.  In fact, my
daughter is now using the first system I ever put SCSI into, 10 years ago,
with the same drives and the same controller, and it's doing nicely.  Every
time they come out with the next-generation IDE stuff I always buy one or
two and test it against the latest SCSI stuff, and I have yet to be
disappointed with the SCSI hardware.  I used to do a lot of CD burning, and
SCSI drives were the only thing that could keep up.  At the time, they
simply didn't make any IDE drives that could operate for extended periods
without doing thermal head recalibration, which consequently meant a buffer
underrun.  Of course, nowadays that's no longer a problem, but it always
seems like the latest features and technology appear in SCSI devices first.

vec


- Original Message -
From: Scott <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, November 24, 2002 10:38 AM
Subject: SCSI or IDE


> After some talks with the person who handles the books she has given me
> the authority to bail on these Netfinity boxes and get something more
> supported by Debian.  My question is:  with IDE drives as fast as they are
> now does it really pay to go SCSI?  Are there any benefits besides RAID?
> I understand fault tolerance, but how about performance?
>
> Thanks,
>
> -Scott
>






Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 20:45, Emilio Brambilla wrote:
> > IDE and SCSI give very similar performance.  Performance is determined by
> > hardware issues such as rotational speed rather than the type of
> > interface.
>
> I agree if you think of a single-drive workstation, not if you think of a
> server with many disks making heavy I/O on the disks.

Organizations such as CERN are using IDE disks for multi-terabyte arrays.

> ATA/IDE drives/controllers lack the ability to perform "command queuing",
> so they are not very fast on many concurrent I/O requests (this feature
> will be introduced in Serial ATA II devices, I think)

Get 10 disks in a RAID array and the ability of a single disk to queue
commands becomes less important; the RAID hardware can do that.

> SCSI can queue up to 256 commands and reorder them for maximum
> performance, furthermore SCSI has been developed to be used in the server
> market, so they are optimized for servers (rescheduling commands and seek
> patterns of SCSI has been written for this kind of use!)

However benchmarks tend not to show any great advantage for SCSI.  If you get 
an affordable SCSI RAID solution then the performance will suck.  Seeing an 
array of 10,000 RPM Ultra2 SCSI disks delivering the same performance as a 
single IDE disk is not uncommon when you have a cheap RAID setup.

Even when your RAID array costs more than your house you may find the 
performance unsatisfactory.

3ware RAID arrays are affordable and deliver quite satisfactory performance.  
Usually they are limited by PCI speeds (last time I checked they didn't 
support 66MHz 64bit PCI).

> It's true that on many entry-level servers IDE is enough for the job (and
> a lot cheaper than SCSI), but on high-end servers SCSI is still a MUST!

SCSI is more expensive, it's not faster, it's not as well supported, and it 
has termination issues.  SCSI is not "a must" unless you buy from Sun or one 
of the other vendors that gives you what costs the most rather than what you 
need...

> btw, rotational speed speaking, how many 15,000 rpm IDE disks are
> there? :-)

None, and not that I care.  As long as Fibre Channel speeds, RAID array
speeds, etc. slow the arrays of SCSI drives I use so much that they deliver
the same performance as a single IDE disk, that's all irrelevant.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread Emilio Brambilla
hello,
On Sun, 24 Nov 2002, Russell Coker wrote:

> IDE and SCSI give very similar performance.  Performance is determined by 
> hardware issues such as rotational speed rather than the type of interface.
I agree if you think of a single-drive workstation, not if you think of a
server with many disks making heavy I/O on the disks.

ATA/IDE drives/controllers lack the ability to perform "command queuing",
so they are not very fast on many concurrent I/O requests (this feature
will be introduced in Serial ATA II devices, I think)

SCSI can queue up to 256 commands and reorder them for maximum
performance, furthermore SCSI has been developed to be used in the server
market, so they are optimized for servers (rescheduling commands and seek
patterns of SCSI has been written for this kind of use!)

It's true that on many entry-level servers IDE is enough for the job (and
a lot cheaper than SCSI), but on high-end servers SCSI is still a MUST!

btw, rotational speed speaking, how many 15,000 rpm IDE disks are
there? :-)

-- 
Saluti,
emilio brambilla






Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 18:56, Васил Колев wrote:
> About performance - IDE still uses a lot of the CPU; SCSI has its own
> processing power.

Please do some benchmarks.  You'll discover that when DMA is enabled and you 
have a good chipset then IDE will not use much CPU.

OTOH if you have an Adaptec 1510 then even accessing a CD-ROM will take 
excessive amounts of CPU time.

In summary:  Good controllers use little CPU time, bad controllers use a lot 
of CPU time.  Doesn't matter whether it's IDE or SCSI.

> You can put a lot more disks on a single SCSI
> controller than on an IDE controller, and there (AFAIK, I could be
> mistaken) two drives on one bus cannot work simultaneously and share the
> bandwidth (which isn't a problem with SCSI; if you have a 160 MB/s bus,
> and 3 disks that can make about 40MB/s, you can have all 120MB/s)

3ware IDE controllers support up to 12 drives.  You won't find many SCSI 
controllers that can do that and deliver acceptable performance (you won't 
get good performance unless you have 64bit 66MHz PCI).

Do a benchmark of two IDE drives on the one cable and you will discover that 
the performance loss is not very significant.

ATA-133 compared to Ultra2 SCSI at 160MB/s is not much difference.  S-ATA is 
coming out now and supports 150MB/s per drive.

Please do some benchmarks before you start talking about performance.

> And maybe I should say something about reliability: SCSI disks don't
> die that often, compared to IDE drives, while being used a lot 24x7.

The three biggest causes of data loss that I have seen are:
1)  Incompetent administrators.
2)  Heat.
3)  SCSI termination.

SCSI drives tend to have higher rotational speeds than IDE drives and thus 
produce more heat.  Even when IBM was shipping thousands of broken IDE hard 
drives (and hundreds of broken SCSI drives which didn't seem to get any 
press) the data loss caused by defective drives was still far less than any 
of those three factors.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread Thing
On Mon, 25 Nov 2002 06:38, Scott wrote:
> After some talks with the person who handles the books she has given me
> the authority to bail on these Netfinity boxes and get something more
> supported by Debian.  My question is:  with IDE drives as fast as they are
> now does it really pay to go SCSI?  Are there any benefits besides RAID?
> I understand fault tolerance, but how about performance?
>
> Thanks,
>
> -Scott

I would be grateful if you could document why / what problems you are having
with the Netfinity kit (for future reference).

IDE is obviously way cheaper than SCSI.  You can go IDE RAID, which I've not
tried yet, but it would give you a mirror, which is what you want really
(read performance will be a bit better too).

Does the load justify SCSI?  If it's not hammered, then hardware IDE RAID is
probably fine.

regards

Thing






Re: SCSI or IDE

2002-11-24 Thread Jeremy Zawodny
On Sun, Nov 24, 2002 at 06:56:34PM +0100, Васил Колев wrote:
>
> About performance - IDE still uses a lot of the CPU

IMHO that argument made a lot more sense when we had 300MHz CPUs.  But
now that most servers are far faster than that, we're talking about
what, 1% or maybe 2% of the CPU?

The savings from skipping the SCSI premium are probably more than worth it.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/






Re: SCSI or IDE

2002-11-24 Thread Васил Колев
About performance - IDE still uses a lot of the CPU; SCSI has its own
processing power. You can put a lot more disks on a single SCSI
controller than on an IDE controller, and there (AFAIK, I could be
mistaken) two drives on one bus cannot work simultaneously and share the
bandwidth (which isn't a problem with SCSI; if you have a 160 MB/s bus,
and 3 disks that can make about 40MB/s, you can have all 120MB/s)

And maybe I should say something about reliability: SCSI disks don't
die that often, compared to IDE drives, while being used a lot 24x7.


On Sun, 2002-11-24 at 18:38, Scott wrote:
> After some talks with the person who handles the books she has given me 
> the authority to bail on these Netfinity boxes and get something more 
> supported by Debian.  My question is:  with IDE drives as fast as they are 
> now does it really pay to go SCSI?  Are there any benefits besides RAID?
> I understand fault tolerance, but how about performance?







Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 18:38, Scott wrote:
> After some talks with the person who handles the books she has given me
> the authority to bail on these Netfinity boxes and get something more
> supported by Debian.  My question is:  with IDE drives as fast as they are
> now does it really pay to go SCSI?  Are there any benefits besides RAID?
> I understand fault tolerance, but how about performance?

IDE and SCSI give very similar performance.  Performance is determined by 
hardware issues such as rotational speed rather than the type of interface.

If you want RAID then 3Ware makes some good IDE RAID products.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page

