>Logic?
Foul! That's NOT evidence.
>
>Mechanical devices have decreasing MTBF when run in hotter environments,
>often at non-linear rates.
I agree that this seems intuitive. But I think taking it as a cast-iron
truth is dangerous.
>Server class drives are designed with a longer lifespan in mind.
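A rough sense of what "non-linear" can mean here: a common rule of thumb (an
assumption used purely for illustration, not a figure from the Google/CMU
studies) is an Arrhenius-style model in which the failure rate roughly doubles
for every 10 degrees C of temperature rise.

    # Sketch only: assumed rule of thumb, failure rate doubling per 10 C rise.
    def relative_failure_rate(temp_c, baseline_c=40.0, doubling_delta_c=10.0):
        # failure rate relative to the rate at the baseline temperature
        return 2.0 ** ((temp_c - baseline_c) / doubling_delta_c)

    for t in (40, 50, 60, 70):
        print("%d C -> ~%.0fx the 40 C failure rate" % (t, relative_failure_rate(t)))

Under that assumption a drive run at 60 C fails roughly four times as often as
the same drive at 40 C, which is why the temperature ratings matter.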
At 11:13 PM 4/7/2007, [EMAIL PROTECTED] wrote:
On Sat, 7 Apr 2007, Ron wrote:
Ron, I think that many people aren't saying cheap==good; what we are
doing is arguing against the idea that expensive==good (and its
corollary, cheap==bad).
Since the buying decision is binary, you either buy high qu
On Sat, Apr 07, 2007 at 08:46:33PM -0400, Ron wrote:
> The Google and CMU studies are =not= based on data drawn from
> businesses where the lesser consequences of an outage are losing
> $10Ks or $100K per minute... ...and where the greater consequences
> include the chance of loss of human life.
I believe that the biggest cause of data loss among people using the
'cheap' drives is that one 'cheap' drive holds the
capacity of 5 or so 'expensive' drives, and since people don't
realize this they don't realize that the time to rebuild the failed
drive onto a hot-spare is
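To put rough numbers on the rebuild-window point, here is a minimal sketch;
the capacities and the sustained rebuild rate are illustrative assumptions,
not figures from this thread.

    # Sketch: how long an array stays degraded while a hot spare rebuilds.
    # Capacities and the 60 MB/s sustained rebuild rate are assumed values.
    def rebuild_hours(capacity_gb, rebuild_mb_per_s=60.0):
        return capacity_gb * 1024.0 / rebuild_mb_per_s / 3600.0

    for name, gb in (("146GB 15k 'expensive' drive", 146),
                     ("750GB 7.2k 'cheap' drive", 750)):
        print("%s: ~%.1f hours to rebuild onto a hot spare" % (name, rebuild_hours(gb)))

The bigger drive leaves the array degraded roughly five times longer, and that
window is when a second failure costs you the data.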
On Sat, 7 Apr 2007, Ron wrote:
Ron, why is it that you assume that anyone who disagrees with you doesn't
work in an environment where they care about the datacenter environment,
and isn't in a field like financial services? And why do you think that we
are just trying to save a few pennies? (t
At 05:42 PM 4/7/2007, [EMAIL PROTECTED] wrote:
On Sat, 7 Apr 2007, Ron wrote:
The reality is that all modern HDs are so good that it's actually
quite rare for someone to suffer a data loss event. The
consequences of such are so severe that the event stands out more
than just the statistics would imply.
On Sat, 7 Apr 2007, Ron wrote:
The reality is that all modern HDs are so good that it's actually quite rare
for someone to suffer a data loss event. The consequences of such are so
severe that the event stands out more than just the statistics would imply.
For those using small numbers of HDs
Given all the data I have personally + all that I have from NOC
personnel, Sys Admins, Network Engineers, Operations Managers, etc my
experience (I do systems architecture consulting that requires me to
interface with many of these on a regular basis) supports a variation
of hypothesis 2. Let'
In summary, it seems one of these is true:
1. Drive manufacturers don't design server drives to be more
reliable than consumer drives
2. Drive manufacturers _do_ design server drives to be more
reliable than consumer drives, but the design doesn't yield significantly
better reliability
Ron wrote:
I read them as soon as they were available. Then I shrugged and noted
YMMV to myself.
1= Those studies are valid for =those= users under =those= users'
circumstances in =those= users' environments.
How well do those circumstances and environments mimic anyone else's?
Exactly,
* Charles Sprickman <[EMAIL PROTECTED]> [070407 00:49]:
> On Fri, 6 Apr 2007, [EMAIL PROTECTED] wrote:
>
> >On Fri, 6 Apr 2007, Scott Marlowe wrote:
> >
> >>Based on experience I think that on average server drives are more
> >>reliable than consumer grade drives, and can take more punishment.
> >
On Fri, 6 Apr 2007, Charles Sprickman wrote:
On Fri, 6 Apr 2007, [EMAIL PROTECTED] wrote:
On Fri, 6 Apr 2007, Scott Marlowe wrote:
> Based on experience I think that on average server drives are more
> reliable than consumer grade drives, and can take more punishment.
this I am not sure about
On Fri, 6 Apr 2007, [EMAIL PROTECTED] wrote:
On Fri, 6 Apr 2007, Scott Marlowe wrote:
Based on experience I think that on average server drives are more
reliable than consumer grade drives, and can take more punishment.
this I am not sure about
I think they should survey Tivo owners next time.
On Fri, 6 Apr 2007, Scott Marlowe wrote:
Based on experience I think that on average server drives are more
reliable than consumer grade drives, and can take more punishment.
this I am not sure about
But,
the variables of manufacturer, model, and the batch often make even more
difference tha
On Fri, Apr 06, 2007 at 03:37:08PM -0400, Ron wrote:
studies. I respect that. Unfortunately the RW is too fast moving
and too messy to wait for a laboratory style study to be completed
before we are called on to make professional decisions on most
issues we face within our work
IME I have to
On Fri, 6 Apr 2007, Scott Marlowe wrote:
Most server drives are rated for 55-60C environmental temperature
operation, which means the drive would be even hotter.
I chuckled when I dug into the details for the drives in my cheap PC; the
consumer drives from Seagate:
http://www.seagate.com/doc
At 02:19 PM 4/6/2007, Michael Stone wrote:
On Fri, Apr 06, 2007 at 12:41:25PM -0400, Ron wrote:
3.based on personal observation, case study reports, or random
investigations rather than systematic scientific evaluation:
anecdotal evidence.
Here you even quote the appropriate definition before ignoring it.
On Fri, Apr 06, 2007 at 12:41:25PM -0400, Ron wrote:
3.based on personal observation, case study
reports, or random investigations rather than
systematic scientific evaluation: anecdotal evidence.
Here you even quote the appropriate definition before ignoring it.
In short, professional advic
At 09:23 AM 4/6/2007, Michael Stone wrote:
On Fri, Apr 06, 2007 at 08:49:08AM -0400, Ron wrote:
Not quite. Each of our professional
experiences is +also+ statistical
evidence. Even if it is a personally skewed sample.
I'm not sure that word means what you think it
means. I think the one you're looking for is "anecdotal".
On Thu, 2007-04-05 at 23:37, Greg Smith wrote:
> On Thu, 5 Apr 2007, Scott Marlowe wrote:
>
> > On Thu, 2007-04-05 at 14:30, James Mansion wrote:
> >> Can you cite any statistical evidence for this?
> > Logic?
>
> OK, everyone who hasn't already needs to read the Google and CMU papers.
> I'll even provide links for you:
On Fri, Apr 06, 2007 at 08:49:08AM -0400, Ron wrote:
Not quite. Each of our professional experiences is +also+
statistical evidence. Even if it is a personally skewed sample.
I'm not sure that word means what you think it means. I think the one
you're looking for is "anecdotal".
My experie
Michael Stone wrote:
On Fri, Apr 06, 2007 at 02:00:15AM -0400, Tom Lane wrote:
It seems hard to believe that the vendors themselves wouldn't burn in
the drives for half a day, if that's all it takes to eliminate a large
fraction of infant mortality. The savings in return processing and
customer
At 07:38 AM 4/6/2007, Michael Stone wrote:
On Thu, Apr 05, 2007 at 11:19:04PM -0400, Ron wrote:
Both statements are the literal truth.
Repeating something over and over again doesn't make it truth. The
OP asked for statistical evidence (presumably real-world field
evidence) to support that assertion.
Tom Lane wrote:
Greg Smith <[EMAIL PROTECTED]> writes:
On Fri, 6 Apr 2007, Tom Lane wrote:
It seems hard to believe that the vendors themselves wouldn't burn in
the drives for half a day, if that's all it takes to eliminate a large
fraction of infant mortality.
I've read that much of the dama
I read them as soon as they were available. Then I shrugged and
noted YMMV to myself.
1= Those studies are valid for =those= users under =those= users'
circumstances in =those= users' environments.
How well do those circumstances and environments mimic anyone else's?
I don't know since the
On Fri, Apr 06, 2007 at 02:00:15AM -0400, Tom Lane wrote:
It seems hard to believe that the vendors themselves wouldn't burn in
the drives for half a day, if that's all it takes to eliminate a large
fraction of infant mortality. The savings in return processing and
customer goodwill would surely
On Thu, Apr 05, 2007 at 11:19:04PM -0400, Ron wrote:
Both statements are the literal truth.
Repeating something over and over again doesn't make it truth. The OP
asked for statistical evidence (presumably real-world field evidence) to
support that assertion. Thus far, all the publicly availab
Greg Smith <[EMAIL PROTECTED]> writes:
> On Fri, 6 Apr 2007, Tom Lane wrote:
>> It seems hard to believe that the vendors themselves wouldn't burn in
>> the drives for half a day, if that's all it takes to eliminate a large
>> fraction of infant mortality.
> I've read that much of the damage that
On Fri, 6 Apr 2007, Tom Lane wrote:
It seems hard to believe that the vendors themselves wouldn't burn in
the drives for half a day, if that's all it takes to eliminate a large
fraction of infant mortality.
I've read that much of the damage that causes hard drive infant mortality
is related t
[EMAIL PROTECTED] writes:
> On Thu, 5 Apr 2007, Ron wrote:
>> Yep. Folks should google "bath tub curve of statistical failure" or
>> similar.
>> Basically, always burn in your drives for at least 1/2 a day before using
>> them in a production or mission critical role.
> for this and your first
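For anyone who hasn't seen it, the "bathtub curve" just says the hazard rate
is high for brand-new units (infant mortality), roughly flat for most of the
service life, and rising again at wear-out; the half-day burn-in is meant to
get you past that first region before the drive goes into production. A toy
sketch, with all constants made up purely for illustration:

    # Toy bathtub curve in relative units: a decaying infant-mortality term,
    # a constant random-failure term, and a rising wear-out term.
    # All constants are illustrative assumptions, not fitted to real drives.
    def relative_hazard(t_hours):
        infant = 10.0 * (12.0 / t_hours)      # dies off after the first day or so
        steady = 1.0                          # constant background failure rate
        wearout = (t_hours / 40000.0) ** 4.0  # climbs steeply near end of life
        return infant + steady + wearout

    for t in (1, 6, 12, 24, 1000, 20000, 50000, 80000):
        print("t=%6d h  relative hazard ~ %8.2f" % (t, relative_hazard(t)))

Only the shape matters: very high in the first hours, near the floor after a
day or so, and climbing again tens of thousands of hours later.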
On Fri, 6 Apr 2007, Ron wrote:
Bear in mind that Google was and is notorious for pushing their environmental
factors to the limit while using the cheapest "PoS" HW they can get their
hands on.
Let's just say I'm fairly sure every piece of HW they were using for those
studies was operating outs
At 11:40 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Ron wrote:
At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Scott Marlowe wrote:
> Server class drives are designed with a longer lifespan in mind.
> > Server class hard drives are rated at higher temperatures
On Thu, 5 Apr 2007, Scott Marlowe wrote:
On Thu, 2007-04-05 at 14:30, James Mansion wrote:
Can you cite any statistical evidence for this?
Logic?
OK, everyone who hasn't already needs to read the Google and CMU papers.
I'll even provide links for you:
http://www.cs.cmu.edu/~bianca/fast07.
On Thu, 5 Apr 2007 [EMAIL PROTECTED] wrote:
> On Thu, 5 Apr 2007, Ron wrote:
> > At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
> >> On Thu, 5 Apr 2007, Scott Marlowe wrote:
> >>
> >> > Server class drives are designed with a longer lifespan in mind.
> >> >
> >> > Server class hard drives are rated at higher temperatures than desktop drives.
On Thu, 5 Apr 2007, Ron wrote:
At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Scott Marlowe wrote:
> Server class drives are designed with a longer lifespan in mind.
>
> Server class hard drives are rated at higher temperatures than desktop
> drives.
these two I question
At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Scott Marlowe wrote:
Server class drives are designed with a longer lifespan in mind.
Server class hard drives are rated at higher temperatures than desktop
drives.
these two I question.
David Lang
Both statements are the literal truth.
On Thu, 5 Apr 2007, Scott Marlowe wrote:
On Thu, 2007-04-05 at 14:30, James Mansion wrote:
Server drives are generally more tolerant of higher temperatures. I.e.
the failure rate for consumer and server class HDs may be about the same
at 40 degrees C, but by the time the internal case temps get up to 60-70
degrees C, the consumer grade drives will likely be failing at a much higher rate.
On Thu, 2007-04-05 at 14:30, James Mansion wrote:
> >Server drives are generally more tolerant of higher temperatures. I.e.
> >the failure rate for consumer and server class HDs may be about the same
> >at 40 degrees C, but by the time the internal case temps get up to 60-70
> >degrees C, the cons
On Thu, 5 Apr 2007, Ron wrote:
At 11:19 AM 4/5/2007, Scott Marlowe wrote:
On Thu, 2007-04-05 at 00:32, Tom Lane wrote:
> "James Mansion" <[EMAIL PROTECTED]> writes:
> > > Right --- the point is not the interface, but whether the drive is
> > > built
> > > for reliability or to hit a low price point.
On Thu, 5 Apr 2007, [EMAIL PROTECTED] wrote:
I'm curious to know why you're on xfs (I've been too chicken to stray from
ext3).
better support for large files (although postgres does tend to try and
keep the file size down by going with multiple files) and also for more
files
the multiple
>Server drives are generally more tolerant of higher temperatures. I.e.
>the failure rate for consumer and server class HDs may be about the same
>at 40 degrees C, but by the time the internal case temps get up to 60-70
>degrees C, the consumer grade drives will likely be failing at a much
>higher
On 5-4-2007 17:58 [EMAIL PROTECTED] wrote:
On Apr 5, 2007, at 4:09 AM, Ron wrote:
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I will
change the controller to a LSI MegaRAID SAS 8408E -- any feedback on
this one?
We ha
At 11:19 AM 4/5/2007, Scott Marlowe wrote:
On Thu, 2007-04-05 at 00:32, Tom Lane wrote:
> "James Mansion" <[EMAIL PROTECTED]> writes:
> >> Right --- the point is not the interface, but whether the drive is built
> >> for reliability or to hit a low price point.
>
> > Personally I take the marketing mumblings about the enterprise drives with a pinch of salt.
On 4/5/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Apr 5, 2007, at 4:09 AM, Ron wrote:
> BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I
will change the controller to a LSI MegaRAID SAS 8408E -- any
feedback on this one?
[EMAIL PROTECTED] wrote:
On Apr 5, 2007, at 4:09 AM, Ron wrote:
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I will
change the controller to a LSI MegaRAID SAS 8408E -- any feedback on
this one?
LSI makes a good controller.
On Apr 5, 2007, at 4:09 AM, Ron wrote:
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I
will change the controller to a LSI MegaRAID SAS 8408E -- any
feedback on this one?
IME, they are usually the worst of the commodity RAID controllers available.
On Apr 5, 2007, at 8:21 AM, Jeff Frost wrote:
I noticed this behavior on the last Areca based 8 disk Raptor
system I built. Putting pg_xlog on a separate partition on the same
logical volume was faster than putting it on the large volume. It
was also faster to have 8xRAID10 for OS+data+pg_xlog
On Thu, 5 Apr 2007, Scott Marlowe wrote:
I've read some recent contrary advice. Specifically advising the
sharing of all files (pg_xlogs, indices, etc..) on a huge raid array
and letting the drives load balance by brute force.
The other, at first almost counter-intuitive result was that puttin
On Thu, 2007-04-05 at 00:32, Tom Lane wrote:
> "James Mansion" <[EMAIL PROTECTED]> writes:
> >> Right --- the point is not the interface, but whether the drive is built
> >> for reliability or to hit a low price point.
>
> > Personally I take the marketing mumblings about the enterprise drives
> >
On Wed, 2007-04-04 at 09:12, [EMAIL PROTECTED] wrote:
> On Apr 3, 2007, at 6:54 PM, Geoff Tolley wrote:
>
> > But what's likely to make the largest difference in the OP's case
> > (many inserts) is write caching, and a battery-backed cache would
> > be needed for this. This will help mask wri
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
IME, they are usually the worst of the commodity RAID controllers available.
I've often seen SW RAID outperform them.
If you are going to use this config, Tyan's n3600M (AKA S2932) MB has
a variant that comes with 8 SAS + 6 SATA II connectors.
[EMAIL PROTECTED] wrote:
In a perhaps fitting compromise, I have decided to go with a hybrid
solution:
8*73GB 15k SAS drives hooked up to Adaptec 4800SAS
PLUS
6*150GB SATA II drives hooked up to mobo (for now)
All wrapped in a 16bay 3U server. My reasoning is that the extra SATA
drives are pra
If the 3U case has a SAS-expander in its backplane (which it probably
has?) you should be able to connect all drives to the Adaptec
controller, depending on the casing's exact architecture etc. That's
another two advantages of SAS, you don't need a controller port for each
harddisk (we have a D
"James Mansion" <[EMAIL PROTECTED]> writes:
>> Right --- the point is not the interface, but whether the drive is built
>> for reliability or to hit a low price point.
> Personally I take the marketing mumblings about the enterprise drives
> with a pinch of salt. The low-price drives HAVE TO be re
>Right --- the point is not the interface, but whether the drive is built
>for reliability or to hit a low price point.
Personally I take the marketing mumblings about the enterprise drives
with a pinch of salt. The low-price drives HAVE TO be reliable too,
because a non-negligible failure rate wi
In a perhaps fitting compromise, I have decided to go with a hybrid
solution:
8*73GB 15k SAS drives hooked up to Adaptec 4800SAS
PLUS
6*150GB SATA II drives hooked up to mobo (for now)
All wrapped in a 16bay 3U server. My reasoning is that the extra SATA
drives are practically free compared t
Problem is :), you can purchase SATA Enterprise Drives.
Problem I would have thought that was a good thing!!! ;-)
Carlos
Joshua D. Drake wrote:
> Bruce Momjian wrote:
> > [EMAIL PROTECTED] wrote:
> >> On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
> difference. OTOH, the SCSI discs were way less reliable than the SATA
> discs, that might have been bad luck.
> >>> Probably bad luck. I find
Bruce Momjian wrote:
[EMAIL PROTECTED] wrote:
On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very reliable, but I don't find
it any more reliable than SATA. That is assuming correct ventilation etc...
[EMAIL PROTECTED] wrote:
> On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
> > >difference. OTOH, the SCSI discs were way less reliable than the SATA
> > >discs, that might have been bad luck.
> > Probably bad luck. I find that SCSI is very reliable, but I don't find
> > it any mo
On 4-4-2007 21:17 [EMAIL PROTECTED] wrote:
FWIW, I've had horrible experiences with Areca drivers on Linux. I've
found them to be unreliable when used with dual AMD64 processors and 4+ GB
of RAM. I've tried kernels 2.6.16 up to 2.6.19... intermittent yet
inevitable ext3 corruptions. 3ware cards, on th
On Apr 4, 2007, at 12:09 PM, Arjen van der Meijden wrote:
If you don't care about such things, it may actually be possible to
build a similar set-up as your SATA-system with 12 or 16 15k rpm
SAS disks or 10k WD Raptor disks. For the sata-solution you can
also consider a 24-port Areca card.
On 4-4-2007 0:13 [EMAIL PROTECTED] wrote:
We need to upgrade a postgres server. I'm not tied to these specific
alternatives, but I'm curious to get feedback on their general qualities.
SCSI
dual xeon 5120, 8GB ECC
8*73GB SCSI 15k drives (PERC 5/i)
(dell poweredge 2900)
This is a SAS set
>sure but for any serious usage one either wants to disable that
>cache (and rely on tagged command queuing, or whatever that is called in the
>SATA II world) or rely on the OS/RAID controller implementing some sort of
>FUA/write barrier feature (which Linux, for example, only does in pretty
>recent kernels)
Does
[EMAIL PROTECTED] wrote:
Perhaps a basic question - but why does the interface matter? :-)
The interface itself matters not so much these days as the drives that
happen to use it. Most manufacturers make both SATA and SCSI lines, are
keen to keep the market segmented, and don't want to canni
[EMAIL PROTECTED] wrote:
for that matter, with 20ish 320G drives, how large would a partition be
that only used the outer physical track of each drive? (almost certainly
multiple logical tracks) if you took the time to set this up you could
eliminate seeking entirely (at the cost of not using yo
[EMAIL PROTECTED] wrote:
Good point. On another note, I am wondering why nobody's brought up
the command-queuing perf benefits (yet). Is this because sata vs scsi
are at par here? I'm finding conflicting information on this -- some
calling sata's ncq mostly crap, others stating the real-wor
On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
> >difference. OTOH, the SCSI discs were way less reliable than the SATA
> >discs, that might have been bad luck.
> Probably bad luck. I find that SCSI is very reliable, but I don't find
> it any more reliable than SATA. That is assuming correct ventilation etc...
Joshua D. Drake wrote:
SATAII brute-forces some of its performance, for
example a 16MB write cache on each drive.
sure but for any serious usage one either wants to disable that
cache(and rely on tagged command queuing or how that is called in SATAII
Why? Assuming we have a B
On Wed, 4 Apr 2007, Peter Kovacs wrote:
But if an individual disk fails in a disk array, sooner than later you
would want to purchase a new fitting disk, walk/drive to the location
of the disk array, replace the broken disk in the array and activate
the new disk. Is this correct?
correct, but
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very reliable, but I don't find
it any more reliable than SATA. That is assuming correct ventilation etc...
Sincerely,
Joshua D. Drake
Andr
* Joshua D. Drake <[EMAIL PROTECTED]> [070404 17:40]:
>
> >Good point. On another note, I am wondering why nobody's brought up the
> >command-queuing perf benefits (yet). Is this because sata vs scsi are at
>
> SATAII has similar features.
>
> >par here? I'm finding conflicting information on
SATAII brute-forces some of its performance, for
example a 16MB write cache on each drive.
sure but for any serious usage one either wants to disable that
cache(and rely on tagged command queuing or how that is called in SATAII
Why? Assuming we have a BBU, why would you turn o
Joshua D. Drake wrote:
Good point. On another note, I am wondering why nobody's brought up
the command-queuing perf benefits (yet). Is this because sata vs scsi
are at
SATAII has similar features.
par here? I'm finding conflicting information on this -- some calling
sata's ncq mostly cr
Good point. On another note, I am wondering why nobody's brought up the
command-queuing perf benefits (yet). Is this because sata vs scsi are at
SATAII has similar features.
par here? I'm finding conflicting information on this -- some calling
sata's ncq mostly crap, others stating the re
At 07:16 AM 4/4/2007, Peter Kovacs wrote:
This may be a silly question but: will not 3 times as many disk drives
mean 3 times higher probability for disk failure?
Yes, all other factors being equal 3x more HDs (24 vs 8) means ~3x
the chance of any specific HD failing.
OTOH, either of these n
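The arithmetic behind the question: if each drive independently fails in a
given year with probability p, the chance that at least one of n drives fails
is 1 - (1 - p)^n, which is roughly n*p while p stays small. A quick check,
with the 3% annual failure rate chosen purely for illustration:

    # Probability of at least one drive failure per year, assuming independent
    # failures and an illustrative 3% annual failure rate per drive.
    afr = 0.03
    for n in (8, 24):
        p_any = 1.0 - (1.0 - afr) ** n
        print("%2d drives: P(at least one failure in a year) ~ %.1f%%" % (n, 100 * p_any))

With 8 drives that is about 22%, with 24 about 52%: a bit less than a clean 3x
because the per-drive risks compound rather than add.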
On Apr 3, 2007, at 6:54 PM, Geoff Tolley wrote:
I don't think the density difference will be quite as high as you
seem to think: most 320GB SATA drives are going to be 3-4 platters,
the most that a 73GB SCSI is going to have is 2, and more likely 1,
which would make the SCSIs more like 50%
* Alvaro Herrera <[EMAIL PROTECTED]> [070404 15:42]:
> Peter Kovacs escribió:
> > But if an individual disk fails in a disk array, sooner than later you
> > would want to purchase a new fitting disk, walk/drive to the location
> > of the disk array, replace the broken disk in the array and activate
On 4-Apr-07, at 8:46 AM, Andreas Kostyrka wrote:
* Peter Kovacs <[EMAIL PROTECTED]> [070404 14:40]:
This may be a silly question but: will not 3 times as many disk
drives
mean 3 times higher probability for disk failure? Also rumor has it
that SATA drives are more prone to fail than SCSI drives.
Peter Kovacs escribió:
> But if an individual disk fails in a disk array, sooner than later you
> would want to purchase a new fitting disk, walk/drive to the location
> of the disk array, replace the broken disk in the array and activate
> the new disk. Is this correct?
Ideally you would have a s
But if an individual disk fails in a disk array, sooner than later you
would want to purchase a new fitting disk, walk/drive to the location
of the disk array, replace the broken disk in the array and activate
the new disk. Is this correct?
Thanks
Peter
On 4/4/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Andreas Kostyrka escribió:
> * Peter Kovacs <[EMAIL PROTECTED]> [070404 14:40]:
> > This may be a silly question but: will not 3 times as many disk drives
> > mean 3 times higher probability for disk failure? Also rumor has it
> > that SATA drives are more prone to fail than SCSI drives. More
> >
* Peter Kovacs <[EMAIL PROTECTED]> [070404 14:40]:
> This may be a silly question but: will not 3 times as many disk drives
> mean 3 times higher probability for disk failure? Also rumor has it
> that SATA drives are more prone to fail than SCSI drives. More
> failures will result, in turn, in mor
This may be a silly question but: will not 3 times as many disk drives
mean 3 times higher probability for disk failure? Also rumor has it
that SATA drives are more prone to fail than SCSI drives. More
failures will result, in turn, in more administration costs.
Thanks
Peter
On 4/4/07, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
> 8*73GB SCSI 15k ...(dell poweredge 2900)...
> 24*320GB SATA II 7.2k ...(generic vendor)...
>
> raid10. Our main requirement is highest TPS (focused on a lot of INSERTS).
> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?
It's worth asking the vendor
On Tue, 3 Apr 2007, Geoff Tolley wrote:
Ron wrote:
At 07:07 PM 4/3/2007, Ron wrote:
> For random IO, the 3ware cards are better than PERC
>
> > Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA II
> drives?
>
> Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K screamers.
You might also ask on:
[EMAIL PROTECTED]
People are pretty candid there.
~BAS
On Tue, 2007-04-03 at 15:13 -0700, [EMAIL PROTECTED] wrote:
> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?
--
Brian A. Seklecki <[EMAIL PROTECTED]>
Collaborative Fusion, Inc.
--
Ron wrote:
At 07:07 PM 4/3/2007, Ron wrote:
For random IO, the 3ware cards are better than PERC
> Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA II
drives?
Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K
screamers.
Example assuming 3.5" HDs and RAID 10
At 07:07 PM 4/3/2007, Ron wrote:
For random IO, the 3ware cards are better than PERC
> Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA
II drives?
Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K screamers.
Example assuming 3.5" HDs and RAID 10 => 4 15K 73GB vs 12 7.2K 320GB
For random IO, the 3ware cards are better than PERC
> Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA II drives?
Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K screamers.
Example assuming 3.5" HDs and RAID 10 => 4 15K 73GB vs 12 7.2K 320GB
The 15K's are 2x f
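A back-of-envelope way to compare the two RAID 10 configurations on random IO;
the per-drive IOPS figures are common rules of thumb I'm assuming here
(roughly 180 IOPS for a 15k rpm drive, 75 IOPS for a 7.2k rpm drive), not
benchmarks from this thread.

    # Rough aggregate random-IO comparison for the two RAID 10 layouts.
    # Per-drive IOPS are rule-of-thumb assumptions; in RAID 10, random reads
    # can use every spindle while random writes effectively use half of them.
    configs = (("8 x 15k SCSI", 8, 180), ("24 x 7.2k SATA", 24, 75))
    for name, n_drives, iops in configs:
        reads = n_drives * iops
        writes = reads // 2
        print("%-15s ~%4d random read IOPS, ~%4d random write IOPS" % (name, reads, writes))

Even with each 15k drive being over twice as fast, the 24-spindle array comes
out ahead on aggregate random IO, which is the point of the example above.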
We need to upgrade a postgres server. I'm not tied to these specific
alternatives, but I'm curious to get feedback on their general
qualities.
SCSI
dual xeon 5120, 8GB ECC
8*73GB SCSI 15k drives (PERC 5/i)
(dell poweredge 2900)
SATA
dual opteron 275, 8GB ECC
24*320GB SATA II 7.2k drives