I had always found SCSI HDDs would outperform IDE in an environment where
we had multiple concurrent users hitting a shared HDD.  With a single user,
IDE could kick SCSI's ass all day long (not a slow 5,400 rpm IDE though).
SCSI was designed to handle multi-user I/O efficiently.  It was also very
reliable, since SCSI HDDs were designed for multi-user Server use, so
performance and quality were always priorities when designing SCSI HDDs.
Hence, one reason why they are more costly than IDE, or even SATA.

But, with the advent of SATA HDDs I have found SCSI falling by the wayside,
hanging on in larger scale systems with large numbers of users and high I/O
demand.  SATA HDDs seem to do a fine job keeping up with smaller user counts
with moderate to light I/O demand.  The reliability of higher quality SATA
has been found to be on par with SCSI.  But, if you are trying to compare
cheap, lesser quality SATA vs SCSI, well, price does reflect something about
the design internals.  When I get an HDD the first thing I look at is brand
recognition, then the warranty.  I will not purchase an HDD with a 1 year
warranty.  It has to be 3 or 5 years from the manufacturer (not an extended
warranty for extra $) before I even consider it as a contender.

All that said, I have been using SCSI HDDs for many years in my Servers.
Usually the boot HDD is IDE or SATA, but the data HDDs have been SCSIs.
Lately I have been getting SATA/eSATA II HDDs for the data drives, but only
because they are far less costly, have larger capacity for the $, and are
easier to get replacements for than SCSIs.

I also had a recent rash of failing SCSIs about 6 years into their life, and
I did not want to be limited to the smaller capacity per $ of SCSI as
opposed to SATA HDD units.  Thus far I am impressed with the performance of
the SATA HDDs, but I also only have about 12 (soon to be 25 more) PCs
hitting various Servers with automated data processing tasks.  Hardly a
heavy load compared to several dozen humans hammering on a system, otherwise
I would have had to look at SCSI (likely SAS) regardless of $$$.  BTW, the
symptoms of the failing SCSI HDDs included corrupted .cdx files and warnings
on the client PC and Server about losing data during writes.  The write
cache was already off for the SCSI HDDs and local HDDs, and I had been
getting reliable performance for over 5 years.  After dinking around with a
new SCSI HDD I had on hand, and finding I still had problems, I just removed
all SCSIs and popped in a SATA HDD to see if that would resolve the problem.
It did, and my journey using SATA/eSATA took on a life of its own.  I do not
plan to get any more SCSI HDDs for my own Servers.  If I need that kind of
punch I will start looking at fiber-connected SAS ($$$).  I doubt I will
need to go there.
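
For what it is worth, that same drive-level cache setting can be checked
from a script on the Linux/SATA side.  A minimal sketch, assuming a Linux
host with hdparm installed, root access, and an example device name (adjust
for your box):

#!/usr/bin/env python3
# Minimal sketch: report (and optionally disable) a drive's write cache.
# Assumes a Linux host with hdparm installed and root privileges.
import subprocess
import sys

def write_cache(device, disable=False):
    if disable:
        # "hdparm -W 0" turns the drive's volatile write cache off
        subprocess.run(["hdparm", "-W", "0", device], check=True)
    # "hdparm -W" with no value just reports the current setting
    result = subprocess.run(["hdparm", "-W", device],
                            capture_output=True, text=True, check=True)
    sys.stdout.write(result.stdout)

if __name__ == "__main__":
    write_cache("/dev/sdb")                   # report only (example device)
    # write_cache("/dev/sdb", disable=True)   # turn the cache off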

I just ordered another new Server (Dell 840), and decided to go SATA for
boot (the only way it comes unless I went SCSI), and eSATA II (3 Gb/second
transfer rate) for the data HDD (PostgreSQL database).  The Server is not
here yet, but the eSATA HDDs are.  As is the eSATA PCI-Express card with 2
external eSATA ports, and the eSATA cables.  The Server has internal HDD
hot-swap cage construction with two 250GB SATA II HDDs that are set up to
run RAID1.  The eSATA port card supports RAID 0, 1, 5 and 10.  Unfortunately
with Linux I found (supposedly) I can't have RAID5, so I am opting for
RAID1, and the eSATA card supports hot swapping.  So I have a 3rd 250GB
hot-swap HDD for the boot drive set in case one HDD fails, and a 3rd 1.5TB
Seagate Freeagent Extreme external eSATA/USB2/Firewire HDD for the same
reason.  Warranties be damned if I have to wait a week to get a replacement
part <g>.  I never ran RAID1 before, so this will be interesting from both a
performance and failure recovery perspective (I will test the hot swap and
mirror rebuild capabilities before putting the Server into production).  Oh,
setup is going to be interesting also as I plan to run the Server under
Ubuntu Server 8.04 (maybe 8.10, have not decided yet).  The Server OS
options from Dell were Windows, Red Hat or SUSE - but not Ubuntu.  If I run
into a pinch with Ubuntu I guess it is back to SUSE for that one machine,
which is fine although it does have an annual fee.  Could be worse, I could
have had Windows shoved down my throat (again).
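
Part of that pre-production testing will be scripted.  If the mirror ends up
under Linux software RAID (md) rather than the card's own RAID - an
assumption on my part, since the box is not here yet - a quick sketch like
this could flag a degraded array by parsing /proc/mdstat:

#!/usr/bin/env python3
# Rough sketch: warn if any Linux md array is missing a member.
# Assumes md software RAID; array and device names will vary.
import re
import sys

def degraded_arrays(path="/proc/mdstat"):
    try:
        text = open(path).read()
    except FileNotFoundError:
        sys.exit("No /proc/mdstat here - md software RAID is not in use.")
    bad = 0
    # Status lines look like: "244195904 blocks [2/2] [UU]"
    for m in re.finditer(r"\[(\d+)/(\d+)\]\s*\[([U_]+)\]", text):
        wanted, working, flags = int(m.group(1)), int(m.group(2)), m.group(3)
        if working < wanted or "_" in flags:
            bad += 1
    return bad

if __name__ == "__main__":
    count = degraded_arrays()
    if count:
        print("WARNING: %d md array(s) degraded or missing a member" % count)
        sys.exit(1)
    print("All md arrays report every member up")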

If I felt SCSI was the only viable option for my needs I would have gone
that way with the new Server, but the SATA/eSATA configurations I have
worked with and set up in the recent past have been just fine.  Why eSATA?
Quick swapping of HDDs if needed, like due to a production PC failure.  Or,
quick migration to larger HDDs when the day comes (hard to imagine
outgrowing a 1.5TB HDD).  As long as the 3 Gb/second throughput is available
(in theory at least) I am doing better than with USB2 at 480 Mb/second.
USB2 is a bit too slow for the production use I have in mind.  But I am
still going to use USB2 for file backup HDDs, as the connectivity for that
purpose is fast enough, and cheap.  That said, I just ordered 12 of those
Seagate 1.5TB eSATA/USB2/Firewire HDD units for maximum connectivity
flexibility.
Each workstation has an eSATA card, and will use eSATA for its local
database, and USB2 for backup.  eSATA does cost a little more than just USB2
or USB2/Firewire (eSATA II cards are about $35+), but worth it for the eSATA
II performance.
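
Just to put rough numbers on the eSATA vs USB2 comparison, here is the
back-of-the-envelope arithmetic (interface ceilings only - actual drive
throughput will be lower, and the 8b/10b encoding overhead on SATA is my
own addition here):

#!/usr/bin/env python3
# Bus ceilings for the rates mentioned above (not drive speeds).
# SATA uses 8b/10b encoding: 10 bits on the wire carry 8 bits of data.
SATA2_LINE_RATE = 3.0e9   # eSATA II / SATA II signaling rate, bits/second
USB2_LINE_RATE = 480e6    # Hi-Speed USB 2.0 signaling rate, bits/second

sata2_MBps = SATA2_LINE_RATE * 8 / 10 / 8 / 1e6  # ~300 MB/s payload ceiling
usb2_MBps = USB2_LINE_RATE / 8 / 1e6   # ~60 MB/s, before protocol overhead

print("eSATA II ceiling: ~%.0f MB/s" % sata2_MBps)
print("USB 2.0 ceiling:  ~%.0f MB/s" % usb2_MBps)
print("Roughly %.0fx the headroom" % (sata2_MBps / usb2_MBps))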

Okay, that was my 2 cents on SATA vs SCSI, and some other peripheral info
from my experiences.


Gil

> -----Original Message-----
> From: profoxtech-boun...@leafe.com
> [mailto:profoxtech-boun...@leafe.com]on Behalf Of MB Software Solutions
> General Account
> Sent: Monday, January 26, 2009 6:28 PM
> To: profoxt...@leafe.com
> Subject: SCSI drives and VFP data tables
>
>
> We've got a client who's got frequent memo file corruption/problems and I
> noticed they've got a SCSI drive.  He did have write-caching enabled and I
> told him about disabling it (...i had tried, but even though he was logged
> in administrator seemgingly, the option was grayed out and wouldn't let me
> change it).
>
> YEARS AGO, I remember IDE drives being much better/reliable than SCSI
> drives and I wondered if the other devs here could offer any thoughts on
> that around the virtual ProFox watercooler?
>
> <takes a drink>.
>
> Tia!
> --Michael
>
>
>
>
[excessive quoting removed by server]
