Re: I don't understand why SCSI is preferred.

2006-07-17 Thread Duncan Hill
On Wednesday 12 July 2006 20:58, Chris White wrote:

 performance?.  From what I know of MySQL, not really, because MySQL does a
 good amount of work in memory.  The only time I'd see disk access being a
 factor is if you had a large mass of swap/virtual memory.

I have to play with 300 GB of data (and growing).  MySQL cannot keep enough 
of the indexes in memory, unfortunately - not when the index for one of the 
tables is 6 GB.
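For readers tuning a box like this, the knobs in question are the server's index caches; a minimal my.cnf sketch (the sizes are placeholders for illustration, not a recommendation):

```ini
# my.cnf fragment -- sizes are illustrative only
[mysqld]
key_buffer_size         = 2048M   # MyISAM index cache
innodb_buffer_pool_size = 4096M   # InnoDB data + index cache
```

The point is that the cache sizes, as much as the disks, decide how often a 6 GB index has to hit the spindles.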

Whether you use SATA, PATA or SCSI (on the back of FC), the answer for speed 
is spindle rotation speed and the number of heads.  There's a reason the 
older HP9000 boxes used disk packs full of 9 GB drives - heads.  SCSI has the 
advantage (for now, at least) of being designed in a manner that lets it 
do 'things' faster.

Oh, as a small example - the DB server attached to the SAN can pull data 
faster than my personal server, even though the personal server is only 
dealing with one request and the DB/SAN is dealing with hundreds per second 
(and the personal server is no slouch).  Fun to watch all the SAN disk lights 
light up when that happens.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-17 Thread Cory

Duncan Hill wrote:
Oh, as a small example - the DB server attached to the SAN can pull data 
faster than my personal server, even though the personal server is only 
dealing with one request and the DB/SAN is dealing with hundreds per second 
(and the personal server is no slouch).  Fun to watch all the SAN disk lights 
light up when that happens.
What do you use on your SAN?  We're looking at deploying a SAN on our 
Linux MySQL setup.


Cory.




Re: I don't understand why SCSI is preferred.

2006-07-14 Thread living liquid | Christian Meisinger
 We're using Opterons, Linux 2.6.x, and a SiL (Silicon Image) SATA
 chipset whose particular model number I don't have in front of me.
 
 After MUCH MUCH MUCH trial and error we've discovered that:
 1) 2.6.16 substantially alleviates the problem but doesn't eliminate it.
 2) There is a 3Ware card that's MUCH better in this regard.
 
 Personally, I'm not a fan of 3Ware, having lost a RAID array due in no
 small part to a BUG in their firmware (whose existence they knew about
 but, naturally, refused to acknowledge until we presented them with
 proof that it had to be a bug...) but you can control for such variables...


thanks

we use a 3ware 9000 SATA RAID5 controller.
strange.

we have a Xeon CPU here,
so it's not an AMD-specific problem either, I guess.

maybe some strange SMP problem.

BUT we use kernel 2.6.11, so that could be the problem.
ah, and I hate kernel updates :)
I will try a kernel update... sometime ;)



chris




RE: I don't understand why SCSI is preferred.

2006-07-14 Thread Tim Lucia
Let me ask the following (Chris):

Is it wise to partition things such that /data and /binlogs are on two
different partitions?  That way, the binary log(s) can't fill up the /data
drive.  If so, is there a guideline for how much space to reserve for binary
logs?  Or do you just keep the DBs and binlogs together?
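A minimal sketch of what that separation might look like in my.cnf, assuming the two mount points are /data and /binlogs (the paths and the retention value are illustrative, not a recommendation):

```ini
# /etc/my.cnf fragment -- paths are illustrative assumptions
[mysqld]
datadir          = /data/mysql          # data files on the RAID10 volume
log-bin          = /binlogs/mysql-bin   # binary logs on their own partition
expire_logs_days = 7                    # prune old binlogs automatically
```

With the logs on their own filesystem, a runaway binlog can fill /binlogs without taking the datadir down with it.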

TIA,
Tim

 -Original Message-
 From: Tim Lucia [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, July 12, 2006 5:41 PM
 To: 'Chris White'; mysql@lists.mysql.com
 Subject: RE: I don't understand why SCSI is preferred.
 
 
  -Original Message-
  From: Chris White [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, July 12, 2006 5:15 PM
  To: mysql@lists.mysql.com
  Subject: Re: I don't understand why SCSI is preferred.
 
  On Wednesday 12 July 2006 01:13 pm, Tim Lucia wrote:
   I've seen whitepapers from MySQL's web site, co-authored with Dell,
 that
   recommend the hardware optimization be:
  
   1. More Memory
 
  That's a definite
 
  2. Faster Drives (15K RPM is better than 10K)
 
 Well, I guess for any server really, the faster the disk writes the better
 (though let's be honest, you want faster disk writes AND better disk
 integrity).  Generally this is, in my opinion, more suitable for things like
 logging, or the times MySQL actually decides to write to the disk (here's
 where a MySQL person steps in and states when that is ;) ).
 
   3. Faster CPU.
 
 As with most things these days.  A better CPU means less worry about "Oh,
 I wonder if I can do this" and increases the time period between now and
 when you need to scale.
 
  Based on this, we're spec'ing 2950s with 16GB, dual 2.8GHz dual-core Xeons,
  and 146GB 15K (times 6) drives.
 
 Sounds about right.  If you're on a Linux system I also recommend that you
 turn on NPTL (Native POSIX Threading Library), which is done through glibc
 (or by grabbing an rpm/deb/whatever with said support).  As always, don't
 forget the SMP support in the kernel to benefit from the dual-core (I'm
 guessing you probably know this, but hey... never hurts).
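A quick way to check which threading library is in use is to ask glibc directly (this assumes a glibc-based Linux system; on an NPTL build it prints something like "NPTL 2.3", on the old library "linuxthreads-0.10"):

```shell
# Ask glibc which pthread implementation it was built with.
getconf GNU_LIBPTHREAD_VERSION
```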
 
  The plan is to RAID them: 2 x RAID1 for the o/s (/boot, /,
 
  sounds good
 
   /var, and some
 
 It's actually best to shove this on a separate disk.  As the name
 implies, /var is for variable data.  That said, you'll be chucking everything
 and the kitchen sink at it: logs, spools, etc.  These suckers are constantly
 being written to, and let's not forget that some people attack servers
 by shoving data at them, which goes to the logs... which take up space... you
 get the idea.
 
 
 /var would be on a separate partition, on the same physical RAID set --
 sorry, that was obvious to *me* but I didn't say so.
 
 
 
  working space for dumps and restores), and 4 x RAID10 for /data.  Anyone
  have any feedback on this?
 
 Some people use replication servers for backups, others use the same drive.
 I like the idea of a separate backup replication server: if the main one goes
 down, I've got a real, physically separated backup to work with.  In the end
 that's what matters.
 
 The plan is to back up the slave.  I just want to reserve some space in case
 I need a local dump file or something.
 
 
  --
  Chris White
  PHP Programmer/DBloomingOnions
  Interfuel
 
 
 





Re: I don't understand why SCSI is preferred.

2006-07-14 Thread Jon Frisby
It was my impression, from the information we've collected, that our 
problem is very specific to Opteron.  It's possible that your problem 
is actually unrelated. :(


-JF


On Jul 14, 2006, at 7:24 AM, living liquid|Christian Meisinger wrote:


We're using Opterons, Linux 2.6.x, and a SiL (Silicon Image) SATA
chipset whose particular model number I don't have in front of me.

After MUCH MUCH MUCH trial and error we've discovered that:
1) 2.6.16 substantially alleviates the problem but doesn't eliminate it.
2) There is a 3Ware card that's MUCH better in this regard.

Personally, I'm not a fan of 3Ware, having lost a RAID array due in no
small part to a BUG in their firmware (whose existence they knew about
but, naturally, refused to acknowledge until we presented them with
proof that it had to be a bug...) but you can control for such variables...



thanks

we use a 3ware 9000 SATA RAID5 controller.
strange.

we have a Xeon CPU here,
so it's not an AMD-specific problem either, I guess.

maybe some strange SMP problem.

BUT we use kernel 2.6.11, so that could be the problem.
ah, and I hate kernel updates :)
I will try a kernel update... sometime ;)



chris









Re: I don't understand why SCSI is preferred.

2006-07-13 Thread living liquid | Christian Meisinger
 * - For example: We faced a NASTY problem using AMD 64-bit CPUs + SATA +
 Linux where I/O on the system (the WHOLE system, not JUST the SATA
 spindles -- network, PATA, USB, EVERYTHING) would suddenly come to a
 grinding halt (or very nearly halted) randomly when the SATA subsystem
 was under heavy load.  It required a LOT of trial-and-error kernel
 adjustments to find a configuration that did not suffer this problem.

we have the same problem here.
what did you do to solve this problem?
i guess we need to trial-and-error our own kernel configuration
depending on our hardware, but what parameters did you change?

i'm very thankful for any help ... we have NO idea what's wrong :)


best regards chris




Re: I don't understand why SCSI is preferred.

2006-07-13 Thread Jon Frisby
We're using Opterons, Linux 2.6.x, and a SiL (Silicon Image) SATA  
chipset whose particular model number I don't have in front of me.


After MUCH MUCH MUCH trial and error we've discovered that:
1) 2.6.16 substantially alleviates the problem but doesn't eliminate it.
2) There is a 3Ware card that's MUCH better in this regard.

Personally, I'm not a fan of 3Ware, having lost a RAID array due in  
no small part to a BUG in their firmware (whose existence they knew  
about but, naturally, refused to acknowledge until we presented them  
with proof that it had to be a bug...) but you can control for such  
variables...


-JF


On Jul 12, 2006, at 11:48 PM, living liquid | Christian Meisinger wrote:

* - For example: We faced a NASTY problem using AMD 64-bit CPUs + SATA +
Linux where I/O on the system (the WHOLE system, not JUST the SATA
spindles -- network, PATA, USB, EVERYTHING) would suddenly come to a
grinding halt (or very nearly halted) randomly when the SATA subsystem
was under heavy load.  It required a LOT of trial-and-error kernel
adjustments to find a configuration that did not suffer this problem.


we have the same problem here.
what did you do to solve this problem?
i guess we need to trial-and-error our own kernel configuration
depending on our hardware, but what parameters did you change?

i'm very thankful for any help ... we have NO idea what's wrong :)


best regards chris






Re: I don't understand why SCSI is preferred.

2006-07-13 Thread mos

At 03:45 PM 7/12/2006, Jon Frisby wrote:

This REALLY should be an academic concern.  Either you have a system
that can tolerate the failure of a drive, or you do not.  The
frequency of failure rates is pretty much irrelevant:  You can train
incredibly non-technical (inexpensive) people to respond to a pager
and hot-swap a bad drive.
If you are in the position where the typical failure-rate of a class
of drive is of concern to you then either: A) You have a different
problem causing all your drives to fail ultra-fast (heat, electrical
noise, etc) or B) You haven't adequately designed your storage
subsystem.



It all depends how valuable your uptime is. If you double or triple the 
time between hard disk failures, most people would pay extra for that, so 
they buy SCSI drives. You wouldn't take your family car and race it in the 
Indy 500, would you? After a few laps at 150 mph (if you can get it going 
that fast), it will seize up, so you go into the pit stop and do what? Get 
another family car and drive that? And keep doing that until you finish the 
race? Downtime is extremely expensive and embarrassing. Just talk to the 
guys at FastMail, who have had two outages even with hardware RAID in place. 
Recovery doesn't always work as smoothly as you think it should.



Save yourself the headache, and just set up a RAID10 PATA/SATA array
with a hot spare.   Not sure if Linux/FreeBSD/et al support hot-swap
of drives when using software RAID, but if it does then you don't
even need to spend a few hundred bucks on a RAID controller.


Software RAID? Are you serious? No way!

Mike





-JF


On Jul 12, 2006, at 12:11 PM, mos wrote:


At 12:42 PM 7/12/2006, you wrote:

On Tuesday 11 July 2006 19:26, mos wrote:
 SCSI drives are also designed to run 24/7 whereas IDE drives are more
 likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there now being
made for enterprise class, 100% duty cycle operations.  See, for example,
http://www.westerndigital.com/en/products/Products.asp?DriveID=238Language=en

No, I am not affiliated with WD, just had good experience with these drives.
1.2 Million Hours MTBF at 100% duty cycle and a five year warranty.  Not bad.

That's good to hear, but MTBF is really a pie in the sky estimate.
I had an expensive HP tape drive that had something like 20,000 hr
MTBF. Both of my units failed at under 70 hours. HP's estimate was
power on hours (unit powered on and doing nothing), and did NOT
include hours when the tape was in motion. Sheesh.

To get the MTBF estimate, the manufacturer will power on 100 drives
(or more) and time to see when the first one fails. If it fails in
1000 hours, then the MTBF is 100x1000hrs or 100,000 hours. This is
far from being accurate because, as we all know, the older the
drive, the more likely it is to fail. (Especially after the
warranty period has expired, the failure rate is quite high.)

I am hoping the newer SATA II drives will provide SCSI performance
at a reasonable price. It would be interesting to see if anyone has
polled ISPs to see what they're using. I know they charge more (or
at least they used to) for SCSI drives if you are renting a server
from them. It would be interesting to see what their failure rate
is on IDE vs SCSI vs SATA.

Mike







Re: I don't understand why SCSI is preferred.

2006-07-13 Thread Jon Frisby


On Jul 13, 2006, at 3:03 PM, mos wrote:


At 03:45 PM 7/12/2006, Jon Frisby wrote:

This REALLY should be an academic concern.  Either you have a system
that can tolerate the failure of a drive, or you do not.  The
frequency of failure rates is pretty much irrelevant:  You can train
incredibly non-technical (inexpensive) people to respond to a pager
and hot-swap a bad drive.
If you are in the position where the typical failure-rate of a class
of drive is of concern to you then either: A) You have a different
problem causing all your drives to fail ultra-fast (heat, electrical
noise, etc) or B) You haven't adequately designed your storage
subsystem.



It all depends how valuable your uptime is. If you double or triple
the time between hard disk failures, most people would pay extra
for that, so they buy SCSI drives. You wouldn't take your family car
and race it in the Indy 500, would you? After a few laps at 150 mph
(if you can get it going that fast), it will seize up, so you go
into the pit stop and do what? Get another family car and drive that?
And keep doing that until you finish the race? Downtime is
extremely expensive and embarrassing. Just talk to the guys at
FastMail, who have had two outages even with hardware RAID in place.
Recovery doesn't always work as smoothly as you think it should.


Again:  Either your disk sub-system can TOLERATE (read: CONTINUE
OPERATING IN THE FACE OF) a drive failure, or it cannot.  If you
can't hot-swap a dead drive, your system can't tolerate the failure
of a drive.


Your analogy is flawed.  The fact that companies like Google are  
running with incredibly good uptimes while using cheap, commodity  
hardware (including IDE drives!) demonstrates it.


SCSI drives WILL NOT improve your uptime by a factor of 2x or 3x.   
Using a hot-swappable disk subsystem, and having hot-spares WILL.   
Designing your systems without needless single points of failure WILL.




Software RAID? Are you serious? No way!


You make a compelling case for your position, but I'm afraid I still
disagree with you.  *cough*

If you're using RAID10, or other forms of RAID that don't involve
computing a checksum (and the write hole that accompanies it),
there's little need for hardware support.  It won't make things
dramatically faster unless you spend a ton of money on cache -- in
which case you should seriously consider a SAN for the myriad other
benefits it provides.  The reliability introduced by hardware RAID
with battery backups is pretty negligible if you're doing your I/O
right (i.e. you've made sure your drives aren't lying when they say a
write has completed AND you're using fsync -- which MySQL does).
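As a tiny illustration of the fsync step being relied on here: GNU dd's conv=fsync forces the data to stable storage before dd exits (modulo drives that lie about their write cache, which is exactly the caveat above). The temp file is just scratch:

```shell
# Write one 4 KiB block and fsync() it before dd returns,
# the same "make sure it's really on disk" step MySQL performs.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=4096 count=1 conv=fsync 2>/dev/null
stat -c %s "$tmpfile"    # size of the synced file, in bytes
rm -f "$tmpfile"
```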


-JF






Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Daniel da Veiga

On 7/11/06, Brian Dunning [EMAIL PROTECTED] wrote:

My understanding is that SCSI has a faster transfer rate, for
transferring large files. A busy database needs really fast access,


Your statement is partially correct: yes, it has faster transfer
rates, but not only for transferring large files; it accelerates
any access to the disk, because the queue will run faster and
demanding apps will have a better response time (that is all theory,
of course).


for making numerous fast calls all over the disk. Two different,
unrelated things.


SCSI also has a controller that processes, queues and serves the data,
which reduces CPU time and provides faster access. It is also more
fit for high demand, because of its higher spin rates, and it also
runs better in a server environment where there is high load 24/7.



I am more than willing to be called Wrong, slapped, and cast from a
bridge.



Nobody will do that, but you can jump from the bridge yourself for not
googling for "ide scsi sata pata performance". ;) I'm just kidding.

--
Daniel da Veiga
Computer Operator - RS - Brazil
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
--END GEEK CODE BLOCK--




Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Joshua J. Kugler
On Tuesday 11 July 2006 19:26, mos wrote:
 SCSI drives are also designed to run 24/7 whereas IDE drives are more
 likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there now being made 
for enterprise class, 100% duty cycle operations.  See, for example, 
http://www.westerndigital.com/en/products/Products.asp?DriveID=238Language=en  
No, I am not affiliated with WD, just had good experience with these drives.  
1.2 Million Hours MTBF at 100% duty cycle and a five year warranty.  Not bad.

j

-- 
Joshua Kugler   
Lead System Admin -- Senior Programmer
http://www.eeinternet.com
PGP Key: http://pgp.mit.edu/  ID 0xDB26D7CE
PO Box 80086 -- Fairbanks, AK 99708 -- Ph: 907-456-5581 Fax: 907-456-3111




Re: I don't understand why SCSI is preferred.

2006-07-12 Thread mos

At 12:42 PM 7/12/2006, you wrote:

On Tuesday 11 July 2006 19:26, mos wrote:
 SCSI drives are also designed to run 24/7 whereas IDE drives are more
 likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there now being made
for enterprise class, 100% duty cycle operations.  See, for example,
http://www.westerndigital.com/en/products/Products.asp?DriveID=238Language=en 


No, I am not affiliated with WD, just had good experience with these drives.
1.2 Million Hours MTBF at 100% duty cycle and a five year warranty.  Not bad.


That's good to hear, but MTBF is really a pie in the sky estimate. I had 
an expensive HP tape drive that had something like 20,000 hr MTBF. Both of 
my units failed at under 70 hours. HP's estimate was power on hours (unit 
powered on and doing nothing), and did NOT include hours when the tape was 
in motion. Sheesh.


To get the MTBF estimate, the manufacturer will power on 100 drives (or 
more) and time to see when the first one fails. If it fails in 1000 hours, 
then the MTBF is 100x1000hrs or 100,000 hours. This is far from being 
accurate because, as we all know, the older the drive, the more likely it is 
to fail. (Especially after the warranty period has expired, the failure rate 
is quite high.)
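The extrapolation being criticized here can be written out explicitly; this is the naive arithmetic as described, not an endorsement of it:

```shell
# Naive MTBF extrapolation: power on N drives, multiply by the
# hours until the first one fails.
drives=100
hours_to_first_failure=1000
mtbf=$(( drives * hours_to_first_failure ))
echo "Claimed MTBF: ${mtbf} hours"
```

The flaw is the one noted above: failure rates are not constant over a drive's life, so a fleet-hours figure says little about any single aging drive.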


I am hoping the newer SATA II drives will provide SCSI performance at a 
reasonable price. It would be interesting to see if anyone has polled ISP's 
to see what they're using. I know they charge more (or at least they used 
to) for SCSI drives if you are renting a server from them. It would be 
interesting to see what their failure rate is on IDE vs SCSI vs SATA.


Mike 






Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Scott Tanner

 
 I am hoping the newer SATA II drives will provide SCSI performance at a 
 reasonable price. It would be interesting to see if anyone has polled ISP's 
 to see what they're using. I know they charge more (or at least they used 
 to) for SCSI drives if you are renting a server from them. It would be 
 interesting to see what their failure rate is on IDE vs SCSI vs SATA.
 
 Mike 
 
 
  By newer SATA II drives, are you referring to SAS drives?

  There is a great article on Tom's hardware on SAS drives as a
replacement for standard SCSI:
http://www.tomshardware.com/2006/04/07/going_the_sas_storage_way/index.html

  My company is in the process of switching to direct-attached SAS
arrays for our database servers, as part of a scale-out model. We've
done testing between SATA, SCSI, and SAS arrays, and the SCSI and SAS
systems were very comparable. The number of disks in the array seemed
to have a larger effect than the type of disk. SAS also has more fibre-like
features than SCSI, making it better suited for HA environments.

Just something else to consider.

Regards,

Scott Tanner
Sys Admin
www.amientertainment.net






Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Daniel da Veiga

On 7/12/06, mos [EMAIL PROTECTED] wrote:

At 12:42 PM 7/12/2006, you wrote:
On Tuesday 11 July 2006 19:26, mos wrote:
  SCSI drives are also designed to run 24/7 whereas IDE drives are more
  likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there now being made
for enterprise class, 100% duty cycle operations.  See, for example,
http://www.westerndigital.com/en/products/Products.asp?DriveID=238Language=en

No, I am not affiliated with WD, just had good experience with these drives.
1.2 Million Hours MTBF at 100% duty cycle and a five year warranty.  Not bad.

That's good to hear, but MTBF is really a pie in the sky estimate. I had
an expensive HP tape drive that had something like 20,000 hr MTBF. Both of
my units failed at under 70 hours. HP's estimate was power on hours (unit
powered on and doing nothing), and did NOT include hours when the tape was
in motion. Sheesh.

To get the MTBF estimate, the manufacturer will power on 100 drives (or
more) and time to see when the first one fails. If it fails in 1000 hours,
then the MTBF is 100x1000hrs or 100,000 hours. This is far from being
accurate because, as we all know, the older the drive, the more likely it is
to fail. (Especially after the warranty period has expired, the failure rate
is quite high.)

I am hoping the newer SATA II drives will provide SCSI performance at a
reasonable price. It would be interesting to see if anyone has polled ISP's


The answer (short and based on experience) is NO! A SATA drive is no
different from an IDE drive of the same type. I'm sure they'll release
fast and reliable drives based on SATA with different mechanisms
(like the one Joshua pointed out), but most will be IDE-like with a
different interface, and those high-demand drives are fated to cost a lot
more.


to see what they're using. I know they charge more (or at least they used
to) for SCSI drives if you are renting a server from them. It would be
interesting to see what their failure rate is on IDE vs SCSI vs SATA.


That is something only an ISP or corporation would give (and no one
will EVER sign off on it, *lol*). SCSI has one more advantage I forgot to
add to my previous message: the drives can be arranged better in RAID with
hot swap. I can only speak for my company, where the servers all have SCSI
disks (IBM, Dell).

--
Daniel da Veiga
Computer Operator - RS - Brazil
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
--END GEEK CODE BLOCK--




Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Chris White
On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:
 My understanding is that SCSI has a faster transfer rate, for
 transferring large files. A busy database needs really fast access,
 for making numerous fast calls all over the disk. Two different,
 unrelated things.

 I am more than willing to be called Wrong, slapped, and cast from a
 bridge.

Hmm, not sure if the question at hand is being answered.  The topics I've seen 
so far seem to indicate why SCSI is fast.  However, the original question was 
more along the lines of "Does it matter with regards to database 
performance?".  From what I know of MySQL, not really, because MySQL does a 
good amount of work in memory.  The only time I'd see disk access being a 
factor is if you had a large mass of swap/virtual memory.

Now one place where I'm sure it would matter is if you were doing a 
substantial amount of logging, or db dumping to disk.  Then yes, you'd want a 
nice fast disk at that point.

-- 
Chris White
PHP Programmer/DBlowMeAway
Interfuel




RE: I don't understand why SCSI is preferred.

2006-07-12 Thread Tim Lucia
I've seen whitepapers from MySQL's web site, co-authored with Dell, that
recommend the hardware optimization priorities be:

1. More Memory
2. Faster Drives (15K RPM is better than 10K)
3. Faster CPU.

Based on this, we're spec'ing 2950s with 16GB, dual 2.8GHz dual-core Xeons, and
146GB 15K (times 6) drives.

The plan is to RAID them: 2 x RAID1 for the o/s (/boot, /, /var, and some
working space for dumps and restores), and 4 x RAID10 for /data.  Anyone
have any feedback on this?

Tim

 -Original Message-
 From: Chris White [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, July 12, 2006 3:59 PM
 To: mysql@lists.mysql.com
 Subject: Re: I don't understand why SCSI is preferred.
 
 On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:
  My understanding is that SCSI has a faster transfer rate, for
  transferring large files. A busy database needs really fast access,
  for making numerous fast calls all over the disk. Two different,
  unrelated things.
 
  I am more than willing to be called Wrong, slapped, and cast from a
  bridge.
 
 Hmm, not sure if the question at hand is being answered.  The topics I've
 seen so far seem to indicate why SCSI is fast.  However, the original
 question was more along the lines of "Does it matter with regards to
 database performance?".  From what I know of MySQL, not really, because
 MySQL does a good amount of work in memory.  The only time I'd see disk
 access being a factor is if you had a large mass of swap/virtual memory.
 
 Now one place where I'm sure it would matter is if you were doing a
 substantial amount of logging, or db dumping to disk.  Then yes, you'd
 want a nice fast disk at that point.
 
 --
 Chris White
 PHP Programmer/DBlowMeAway
 Interfuel
 





Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Jon Frisby
This REALLY should be an academic concern.  Either you have a system  
that can tolerate the failure of a drive, or you do not.  The  
frequency of failure rates is pretty much irrelevant:  You can train  
incredibly non-technical (inexpensive) people to respond to a pager  
and hot-swap a bad drive.


If you are in the position where the typical failure-rate of a class  
of drive is of concern to you then either: A) You have a different  
problem causing all your drives to fail ultra-fast (heat, electrical  
noise, etc) or B) You haven't adequately designed your storage  
subsystem.


Save yourself the headache, and just set up a RAID10 PATA/SATA array  
with a hot spare.   Not sure if Linux/FreeBSD/et al support hot-swap  
of drives when using software RAID, but if it does then you don't  
even need to spend a few hundred bucks on a RAID controller.
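On Linux, software RAID does support hot spares via md. A sketch of the layout Jon describes, with Linux's mdadm; the device names are hypothetical and the commands need root on real disks, so treat this as an illustration rather than a recipe:

```shell
# Four active disks in RAID10 plus one hot spare (device names are
# placeholders -- adjust for your hardware; requires root).
mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# When an active disk fails, md rebuilds onto the spare automatically;
# watch progress with:
cat /proc/mdstat
```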


-JF


On Jul 12, 2006, at 12:11 PM, mos wrote:


At 12:42 PM 7/12/2006, you wrote:

On Tuesday 11 July 2006 19:26, mos wrote:
 SCSI drives are also designed to run 24/7 whereas IDE drives are more
 likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there now being
made for enterprise class, 100% duty cycle operations.  See, for example,
http://www.westerndigital.com/en/products/Products.asp?DriveID=238Language=en
No, I am not affiliated with WD, just had good experience with these drives.
1.2 Million Hours MTBF at 100% duty cycle and a five year warranty.  Not bad.


That's good to hear, but MTBF is really a pie in the sky estimate.
I had an expensive HP tape drive that had something like 20,000 hr
MTBF. Both of my units failed at under 70 hours. HP's estimate was
power on hours (unit powered on and doing nothing), and did NOT
include hours when the tape was in motion. Sheesh.


To get the MTBF estimate, the manufacturer will power on 100 drives
(or more) and time to see when the first one fails. If it fails in
1000 hours, then the MTBF is 100x1000hrs or 100,000 hours. This is
far from being accurate because, as we all know, the older the
drive, the more likely it is to fail. (Especially after the
warranty period has expired, the failure rate is quite high.)


I am hoping the newer SATA II drives will provide SCSI performance
at a reasonable price. It would be interesting to see if anyone has
polled ISPs to see what they're using. I know they charge more (or
at least they used to) for SCSI drives if you are renting a server
from them. It would be interesting to see what their failure rate
is on IDE vs SCSI vs SATA.


Mike









Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Timothy Murphy
On Wednesday 12 July 2006 20:11, mos wrote:

 To get the MTBF estimate, the manufacturer will power on 100 drives (or
 more) and time to see when the first one fails. If it fails in 1000 hours,
 then the MTBF is 100x1000hrs or 100,000 hours. 

I don't know much statistics,
but I do know that that estimate would not just be inaccurate -
it would be absurdly wrong.

-- 
Timothy Murphy  
e-mail (80k only): tim /at/ birdsnest.maths.tcd.ie
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland




Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Jon Frisby


On Jul 12, 2006, at 12:45 PM, Scott Tanner wrote:





I am hoping the newer SATA II drives will provide SCSI performance  
at a
reasonable price. It would be interesting to see if anyone has  
polled ISP's
to see what they're using. I know they charge more (or at least  
they used
to) for SCSI drives if you are renting a server from them. It  
would be

interesting to see what their failure rate is on IDE vs SCSI vs SATA.

Mike



  By newer SATA II drives, are you referring to SAS drives?


No, typically SATA II is meant to refer to SATA w/ NCQ + doubled  
max throughput.




  My company is in the process of switching to direct-attached SAS
arrays for our database servers, as part of a scale-out model. We've
done testing between SATA, SCSI, and SAS arrays, and the SCSI and SAS
systems were very comparable. The number of disks in the array seemed
to have a larger effect than the type of disk. SAS also has more Fibre
Channel-like features than SCSI, making it better suited for HA environments.


Yeah, that's sort of the conventional wisdom for drive arrays: more  
spindles == faster.  It's roughly analogous to adding CPUs versus  
getting faster CPUs with a workload that's easily parallelizable.   
More spindles means more heads.  More heads means more simultaneous  
seeks, reads, and writes.
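A back-of-the-envelope model of that scaling, with a made-up 5 ms average seek time, purely to show why spindle count dominates for random I/O:

```python
# With random I/O dominated by seek time, aggregate IOPS scales roughly
# linearly with the number of independent spindles (idealized model:
# no controller overhead, perfectly balanced load).
def aggregate_iops(spindles: int, seek_ms: float = 5.0) -> float:
    per_spindle_iops = 1000.0 / seek_ms  # seeks each disk can do per second
    return spindles * per_spindle_iops

print(aggregate_iops(1))  # 200.0
print(aggregate_iops(6))  # 1200.0 -- same disks, six times the random I/O
```

The same spindles would add almost nothing for a single sequential stream, which is why "more spindles == faster" is specifically a random-workload rule.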


-JF






Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Jon Frisby


On Jul 12, 2006, at 12:56 PM, Daniel da Veiga wrote:


On 7/12/06, mos [EMAIL PROTECTED] wrote:

At 12:42 PM 7/12/2006, you wrote:
On Tuesday 11 July 2006 19:26, mos wrote:
  SCSI drives are also designed to run 24/7 whereas IDE drives  
are more

  likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there  
now being made
for enterprise class, 100% duty cycle operations.  See, for  
example,
http://www.westerndigital.com/en/products/Products.asp? 
DriveID=238Language=en


No, I am not affiliated with WD, just had good experience with  
these drives.
1.2 Million Hours MTBF at 100% duty cycle and a five year  
warranty.  Not bad.


That's good to hear, but  MTBF is really a pie in the sky  
estimate. I had
an expensive HP tape drive that had something like 20,000 hr MTBF.  
Both of
my units failed at under 70 hours. HP's estimate was power on  
hours (unit
powered on and doing nothing), and did NOT include hours when the  
tape was

in motion. Sheesh.

To get the MTBF estimate, the manufacturer will power on 100  
drives (or
more) and time to see when the first one fails. If it fails in  
1000 hours,

then the MTBF is 100x1000hrs or 100,000 hours. This is far from being
accurate because as we all know, the older the drive, the more  
likely it is
to fail. (Especially after the warranty period has expired,  
failure rate is

quite highg).

I am hoping the newer SATA II drives will provide SCSI performance  
at a
reasonable price. It would be interesting to see if anyone has  
polled ISP's


The answer (short and based on experience) is NO! A SATA drive is no
different from an IDE drive of the same type. I'm sure they'll release
fast and reliable drives based on SATA with different mechanisms
(like the one Joshua pointed out), but most will be IDE-like with a
different interface, and those high-demand drives are fated to cost a
lot more.


Rule of thumb: if you see a SATA drive that is 18GB, 36GB, 72GB, or  
144GB and costs WAY more per GB than other SATA drives of more normal  
capacities (80GB, 100GB, 120GB, 160GB, 200GB...), then it's probably  
using the same physical drive as a SCSI drive but with a SATA  
interface tacked on instead.



That is something only an ISP or corporation would give (and no one
will EVER sign it, *lol*). SCSI has one more advantage I forgot to add
to my previous message, they can be arranged better in RAID with hot
swap. I can only tell about my company, where servers have all SCSI
disks (IBM, Dell).


Have you had any specific problems with SATA/PATA hot-swap?  We've  
only had problems when we've tried to use a 3ware RAID card and  
tried to do hot-swap...


-JF





Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Chris White
On Wednesday 12 July 2006 01:13 pm, Tim Lucia wrote:
 I've seen whitepapers from MySQL's web site, co-authored with Dell, that
 recommend the hardware optimization be:

 1. More Memory

That's a definite

 2. Faster Drives (15K RPM is better than 10K)

Well, the faster the disk writes, the better for any server really (though 
let's be honest, you want the faster disk writes AND the better-integrity 
disk).  Generally this is, in my opinion, more suitable for things like 
logging, or the times MySQL actually decides to write to the disk (here's 
where a MySQL person steps in and states when that is ;) ).

 3. Faster CPU.

As with most things these days.  A better CPU means less worry about "oh, I 
wonder if I can do this" and increases the time period between now and when 
you need to scale.

 Based on this, we're spec'ing 2950s with 16Gb, dual 2.8 dual-core Xeons,
 and 146Gb 15K (times 6) drives.

Sounds about right.  If you're on a Linux system I also recommend that you 
turn on NPTL (Native POSIX Thread Library), which is done through glibc 
(or by grabbing an rpm/deb/whatever with said support).  As always, don't 
forget the SMP support in the kernel to benefit from the dual-core (I'm 
guessing you probably know this, but hey... never hurts).

 The plan is to RAID them: 2 x RAID1 for the o/s (/boot, /, 

sounds good

 /var, and some 

It's actually best to shove this on a separate disk.  As the name 
implies, /var is for variable data.  That said, you'll be chucking everything 
and the kitchen sink at it: logs, spools, etc.  These suckers are constantly 
being written to, and let's not forget the fact that some people attack 
servers by shoving data at them, which goes to the logs... which take up 
space... you get the idea.

 working space for dumps and restores), and 4 x RAID10 for /data.  Anyone
 have any feedback on this?

Some people use replication servers for backups, others use the same drive.  I 
like the idea of a separate backup replication server: if the main one goes 
down, I've got a real, physically separated backup to work with.  In the end 
that's what matters.

-- 
Chris White
PHP Programmer/DBloomingOnions
Interfuel




Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Jon Frisby


On Jul 12, 2006, at 12:58 PM, Chris White wrote:


On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:

My understanding is that SCSI has a faster transfer rate, for
transferring large files. A busy database needs really fast access,
for making numerous fast calls all over the disk. Two different,
unrelated things.

I am more than willing to be called Wrong, slapped, and cast from a
bridge.


Hmm, not sure if the question at hand is being answered.  The  
topics I've seen
so far seem to indicate why SCSI is fast.  However, the original  
question was

more along the lines of Does it matter with regards to database
performance?.  From what I know of MySQL, not really, because  
MySQL does a
good amount of work in memory.  The only time I'd see disk access  
being a

factor is if you had a large mass of swap/virtual memory.

Now one place where I'm sure it would matter is if you were doing a
substantial amount of logging, or db dumping to disk.  Then yes,  
you'd want a

nice fast disk at that point.


That's just silly.  ALL databases attempt to do as MUCH AS POSSIBLE  
in memory.  The disk is ALWAYS the enemy when it comes to a  
relational database.  The only question is the design of the database  
and of the queries.  If you have some leeway to muck about with the  
design of each, then you can often find ways of making the database  
*do less work* (talk to the disk/RAM less), which is always preferable  
to trying to make the disk faster.


-JF




Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Chris White
On Wednesday 12 July 2006 01:13 pm, Tim Lucia wrote:
 I've seen whitepapers from MySQL's web site, co-authored with Dell, that
 recommend the hardware optimization be:

 1. More Memory
 2. Faster Drives (15K RPM is better than 10K)
 3. Faster CPU.

Oh wait, we forgot #4:

 4. Filesystem

You can have the fastest disk alive, but if your filesystem is doing 
sleep(1000) during every transfer (wildly unlikely, but just an example), 
your data transfer is just plain going to suck.  There are a couple of 
filesystems out there:

Ext2/3

I recommend ext3 here.  It's tried and tested throughout the business 
world, kind of slow at times, but mostly stable in the end.  You'll generally 
see this as the filesystem of choice for those running *NIX-type systems.

XFS

This one does a lot of operations in memory and tries to write to disk as 
infrequently as possible, caching in memory instead.  This does wonders 
for transfer rates, but just remember, memory is temporary storage.  If 
your power goes out, kiss your data goodbye!  If you still want the 
performance, at least put your server behind a nice UPS!

JFS

I use this at home a lot, and it works fairly well.  It seems to be a nice mix 
of speed and stability.  When something does go wrong, fsck takes under 30 
seconds on a 30GB drive.  Unfortunately it doesn't have as much corporate-world 
exposure as ext2/3.  Good for when you're bored on a sunny Tuesday 
and want to try something new.

Fat32/NTFS

Well, this is kind of a quick answer.  Most will straight-up go NTFS nowadays 
(IIRC because of speed and security labels, but I haven't dealt with Windows 
filesystems in a while).

-- 
Chris White
PHP Programmer/DBooyah!
Interfuel




RE: I don't understand why SCSI is preferred.

2006-07-12 Thread Tim Lucia

 -Original Message-
 From: Chris White [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, July 12, 2006 5:15 PM
 To: mysql@lists.mysql.com
 Subject: Re: I don't understand why SCSI is preferred.
 
 On Wednesday 12 July 2006 01:13 pm, Tim Lucia wrote:
  I've seen whitepapers from MySQL's web site, co-authored with Dell, that
  recommend the hardware optimization be:
 
  1. More Memory
 
 That's a definite
 
  2. Faster Drives (15K RPM is better than 10K)
 
 Well, I guess for any server really, the faster the disk writes (Though
 let's
 be honest, the faster the disk writes AND the better integrity disk).
 Generally this is, in my opinion more suitable for things like logging, or
 the times MySQL actually decides to write to the disk (here's where a
 MySQL
 person steps in and states when that is ;) ).
 
  3. Faster CPU.
 
 As with most things these days.  Better CPU means less worry about Oh, I
 wonder if I can do this and increases the time period between now and
 when
 you need to scale.
 
  Based on this, we're spec'ing 2950s with 16Gb, dual 2.8 dual-core Xeons,
  and 146Gb 15K (times 6) drives.
 
 Sounds about right.  If you're on a linux system I also recommend that you
 turn on NPTL (Native Posix Threading Library), which is done through glibc
 (or by grabbing an rpm/deb/whatever with said support).  As always, don't
 forget the SMP support in the kernel to benefit from the dual-core (I'm
 guessing you probably know this, but hey.. never hurts).
 
  The plan is to RAID then 2 x RAID1 for the o/s (/boot, /,
 
 sounds good
 
  /var, and some
 
 It's actually best to shove this on a separate disk.  As the name
 implies, /var is for variable data.  That said, you'll be chucking
 everything
 and the kitchen sink at it.  Logs, spools, etc.  These suckers are
 constantly
 being written to, and let's forgot the fact that some people attack
 servers
 by shoving data at it, which goes to logs.. which take up space.. you get
 the
 idea.


/var would be on a separate partition, on the same physical RAID set --
sorry that was obvious to *me* but I didn't say that. 


 
  working space for dumps and restores), and 4 x RAID10 for /data.  Anyone
  have any feedback on this?
 
 Some people use replication servers for backups, others use the same
 drive.  I
 like the idea of a separate backup replication server as if the main one
 goes
 down, I've got a real physically separated backup to work with.  In the
 end
 that's what matters.

The plan is to backup the slave.  I just want to reserve some space if I
need to have a local dump file or something.

 
 --
 Chris White
 PHP Programmer/DBloomingOnions
 Interfuel
 





Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Chris White
On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:
 My understanding is that SCSI has a faster transfer rate, for
 transferring large files. A busy database needs really fast access,
 for making numerous fast calls all over the disk. Two different,
 unrelated things.

 I am more than willing to be called Wrong, slapped, and cast from a
 bridge.

Be careful on that; databases do more work in memory than anything else.  That 
said, I'd be more worried about your memory capacity.  Now, if you rely 
mainly on swap (virtual) memory, then you might worry more about that :).

-- 
Chris White
PHP Programmer/DBouncingWithJava
Interfuel




Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Chris W

Brian Dunning wrote:

My understanding is that SCSI has a faster transfer rate, for  
transferring large files. 


SCSI is better for EVERYTHING except your budget.  Faster for large 
transfers, small transfers, seek times, and most especially it handles 
requests from multiple threads much better. 


--
Chris W
KE5GIX

Gift Giving Made Easy
Get the gifts you want  
give the gifts they want
One stop wish list for any gift, 
from anywhere, for any occasion!

http://thewishzone.com





Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Scott Haneda
 Brian Dunning wrote:
 
 My understanding is that SCSI has a faster transfer rate, for
 transferring large files.
 
 SCSI is better for EVERYTHING except your budget.  Faster for large
 transfers, small transfers, seek times, and most especially it handles
 requests from multiple threads much better.

Almost everything: they have not hit that capacity point yet; SCSI drives are
all generally much smaller than non-SCSI.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Greg 'groggy' Lehey
On Tuesday, 11 July 2006 at 16:41:24 -0700, Chris White wrote:
 On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:
 My understanding is that SCSI has a faster transfer rate, for
 transferring large files. A busy database needs really fast access,
 for making numerous fast calls all over the disk. Two different,
 unrelated things.

 I am more than willing to be called Wrong, slapped, and cast from a
 bridge.

 Be careful on that, databases do more work in memory than anything
 else.  That said, I'd be more worried about your memory capacity.
 Now, if you rely mainly on swap(virtual) memory, then you might
 worry more on that :).

Clearly when you're working in memory, the kind of disks you use don't
have much influence.

In fact, SCSI disks typically have (marginally) faster access times
than ATA.  They may also have higher transfer rates, but as Brian
observes, this is of marginal interest.

One of the things that we discuss internally from time to time is the
influence of block size on database performance.  On modern disks,
random access to a single 4 kB block takes about 5.1 ms (5 ms seek,
0.1 ms transfer).  Random access to a single 64 kB block takes about
6.6 ms (5 ms seek, 1.6 ms transfer).  Clearly big blocks improve disk
bandwidth; but if you only need 4 kB, the rest doesn't buy you
anything.  That's why we discuss rather than come to any useful
conclusion.
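Greg's figures can be plugged into a toy per-request model (the 40 kB/ms transfer rate is just what his 0.1 ms / 1.6 ms numbers imply; real drives vary):

```python
# Per-request time for a random read = seek time + size / transfer rate,
# using the illustrative numbers from the message above.
SEEK_MS = 5.0
TRANSFER_RATE_KB_PER_MS = 40.0  # 4 kB in 0.1 ms, 64 kB in 1.6 ms

def request_ms(block_kb: float) -> float:
    return SEEK_MS + block_kb / TRANSFER_RATE_KB_PER_MS

def bandwidth_mb_s(block_kb: float) -> float:
    # kB per ms is numerically equal to MB per s
    return block_kb / request_ms(block_kb)

print(request_ms(4))      # 5.1 (ms)
print(request_ms(64))     # 6.6 (ms)
print(bandwidth_mb_s(4))  # ~0.78 MB/s of useful data
print(bandwidth_mb_s(64)) # ~9.7 MB/s -- big blocks win only if you need the bytes
```

The seek term dominates either way, which is the crux of the "discuss rather than conclude" problem: a 16x bigger block costs ~30% more time, but delivers nothing extra if the query only wanted 4 kB.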

Greg
--
Greg Lehey, Senior Software Engineer, Online Backup
MySQL AB, http://www.mysql.com/
Echunga, South Australia
Phone: +61-8-8388-8286   Mobile: +61-418-838-708
VoIP:  sip:[EMAIL PROTECTED], sip:[EMAIL PROTECTED]
Diary http://www.lemis.com/grog/diary.html

Are you MySQL certified?  http://www.mysql.com/certification/




Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Jon Frisby
It's my understanding that the biggest remaining difference has to do  
with SCSI having far superior command queueing capabilities --  
although SATA's command queueing may have closed the gap somewhat --  
which provides for much better real-world performance when you have  
multiple database threads doing work.
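A toy sketch of what command reordering buys (the track numbers are made up, and real TCQ/NCQ also accounts for rotational position, not just track distance):

```python
# Why command queueing helps under concurrent load: servicing queued
# requests in arrival (FIFO) order vs. sorted, elevator-style order.
def total_seek_distance(tracks, start=0):
    """Sum of head movements when servicing requests in the given order."""
    pos, total = start, 0
    for t in tracks:
        total += abs(t - pos)
        pos = t
    return total

pending = [880, 12, 640, 200, 790, 60]       # hypothetical track numbers
print(total_seek_distance(pending))          # 4136 tracks traversed, FIFO
print(total_seek_distance(sorted(pending)))  # 880 tracks, reordered
```

With one request in flight at a time (the single-user case mentioned elsewhere in the thread) there is nothing to reorder, which matches the observation that SCSI's edge shows up mainly on busy multi-threaded servers.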


The bottom line is that (at least in the past -- who knows, perhaps  
the latest-n-greatest SATA gear has truly tipped the scales, although  
I doubt it) you will see better real-world performance with less  
fidgeting* from SCSI (or Fibre Channel, switched or otherwise) in  
terms of access times and throughput than you will from PATA or SATA.


* - For example: We faced a NASTY problem using AMD 64-bit CPUs +  
SATA + Linux where I/O on the system (the WHOLE system, not JUST the  
SATA spindles -- network, PATA, USB, EVERYTHING) would suddenly come  
to a grinding halt (or very nearly halted) randomly when the SATA  
subsystem was under heavy load.  It required a LOT of trial-and-error  
kernel adjustments to find a configuration that did not suffer this  
problem.


As to whether it is PREFERRED, that comes down to your constraints.   
There are some problem domains where it's REALLY REALLY HARD to split  
database load across multiple servers.  There are many problem  
domains where bad or overly-simplistic design patterns are common  
that make scaling to multiple machines hard.  So sometimes you wind  
up in a nasty situation where your only option is to have REALLY fast  
spindles -- in which case, the 10x or 20x price premium for SCSI may  
be unavoidable.


Generally speaking, if you need ultra-fast spindles you should  
probably be re-evaluating your database architecture as you're asking  
for financial and technological pain.


-JF

On Jul 11, 2006, at 4:18 PM, Brian Dunning wrote:

My understanding is that SCSI has a faster transfer rate, for  
transferring large files. A busy database needs really fast access,  
for making numerous fast calls all over the disk. Two different,  
unrelated things.


I am more than willing to be called Wrong, slapped, and cast from a  
bridge.










Re: I don't understand why SCSI is preferred.

2006-07-11 Thread mos

At 06:18 PM 7/11/2006, you wrote:

My understanding is that SCSI has a faster transfer rate, for
transferring large files. A busy database needs really fast access,
for making numerous fast calls all over the disk. Two different,
unrelated things.

I am more than willing to be called Wrong, slapped, and cast from a
bridge.


SCSI controllers have a processor that can queue disk commands, thereby 
freeing up CPU cycles, which makes them ideal for a server. If you are using 
a SCSI drive on a single-user machine then it's not going to be faster, 
and could even be slower than a good IDE drive. I used a lot of SCSI 
drives years ago and paid dearly for the drives and the 
controllers.  SATA II drives may give SCSI a run for their money. But as 
others have said, you can get better database performance just by 
increasing your RAM.


SCSI drives are also designed to run 24/7 whereas IDE drives are more 
likely to fail if used on a busy server. If you really want something fast, 
put the data on a hardware RAM drive. If you think SCSI drives are 
expensive, you ain't seen nothing yet. :)


http://www.anandtech.com/storage/showdoc.aspx?i=2480
http://www.tomshardware.com/2005/12/05/hyperos_dram_hard_drive_on_the_block/
http://www.hyperossystems.co.uk/

Mike

P.S. Don't jump from a bridge, cause I may be driving underneath it at the 
time. 


