On 10/08/2011 20:31, Brian Bockelman wrote:
MTTF is a difficult number. Popular papers include:
http://db.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html,
http://labs.google.com/papers/disk_failures.pdf
Ted is assuming a MTTF of 25k hours; I think that's overly pessimistic,
although both papers indicate that MTTF is a …
On 11/08/2011 01:15, Rajiv Chittajallu wrote:
Ted Dunning wrote on 08/10/11 at 10:40:30 -0700:
To be specific, taking a 100 node x 10 disk x 2 TB configuration with drive
MTBF of 1000 days, we should be seeing drive failures on average once per
day. With 1G ethernet and 30MB/s/node dedicated to re-replication, it will
take just over 10 minutes to …
Thanks, this is helpful.
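Ted's back-of-the-envelope numbers can be checked directly. The sketch below is not from the thread; it just redoes the arithmetic under the stated assumptions (independent drive failures, cluster-wide bandwidth pooled for re-replication):

```python
# Back-of-the-envelope check of the failure-rate and re-replication
# numbers from the thread. A sketch, not a failure model: it assumes
# independent drive failures at a constant rate.

nodes = 100
disks_per_node = 10
drive_mtbf_days = 1000  # ~24k hours, close to the 25k-hour MTTF mentioned

total_drives = nodes * disks_per_node
failures_per_day = total_drives / drive_mtbf_days
print(failures_per_day)  # -> 1.0, i.e. one drive failure per day on average

# Re-replicating one failed 2 TB drive with 30 MB/s per node dedicated
# to re-replication, pooled across the whole cluster:
bytes_to_copy = 2e12                 # 2 TB
cluster_rate = nodes * 30e6          # bytes/s
minutes = bytes_to_copy / cluster_rate / 60
print(round(minutes, 1))  # -> 11.1, i.e. "just over 10 minutes"
```

This matches both claims in the quoted message: roughly one failure per day, and a bit over ten minutes to restore replication for a lost drive.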
-----Original Message-----
From: Allen Wittenauer [mailto:a...@apache.org]
Sent: Wednesday, August 10, 2011 4:50 PM
To: general@hadoop.apache.org
Subject: Re: Dedicated disk for operating system
On Aug 10, 2011, at 2:22 AM, Oded Rosen wrote:
Hi,
What is the best …
A short, slightly off-topic question:
Also note that in this configuration one cannot take advantage of the
"keep the machine up at all costs" features in newer Hadoop releases,
which require that root, swap, and the log area be mirrored to be truly
effective. I'm not quite convinced that …
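Allen's point implies mirroring the OS-side partitions rather than the data disks. A hypothetical sketch with Linux software RAID1 (device names and the log path are placeholders, not anything specified in the thread):

```shell
# Hypothetical layout: mirror root, swap, and the log area across two
# drives with mdadm RAID1, leaving the remaining data disks unmirrored
# for HDFS. Adapt device names and mount points to your hardware.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # root
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # log area
```

The design trade-off is the one debated in the thread: two spindles are sacrificed from HDFS capacity so that a single OS-disk failure doesn't take the whole node down.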
On 8/10/11 7:56 AM, Evert Lammerts evert.lamme...@sara.nl wrote:
A short, slightly off-topic question:
On Wed, Aug 10, 2011 at 10:40 AM, Ted Dunning tdunn...@maprtech.com wrote:
To be specific, taking a 100 node x 10 disk x 2 TB configuration with drive
MTBF of 1000 days, we should be seeing drive failures on average once per
day …
For a 10,000 node cluster, however, we should expect the …
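The message is truncated here, but the same arithmetic scales linearly with drive count. This extension of the calculation is mine, not the original author's:

```python
# Extending the back-of-the-envelope model to a 10,000 node cluster
# with the same 10 disks/node and 1000-day drive MTBF. This is only
# the linear scaling of the earlier arithmetic, not a claim from the
# (truncated) original message.
nodes = 10_000
disks_per_node = 10
drive_mtbf_days = 1000

failures_per_day = nodes * disks_per_node / drive_mtbf_days
print(failures_per_day)                 # -> 100.0 failures per day
print(round(24 * 60 / failures_per_day, 1))  # -> 14.4 minutes between failures
```

At that scale, drive replacement stops being an event and becomes a continuous background process, which is presumably where the truncated sentence was headed.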
Luke,
Yes, I do have some data to back this up, but as I mentioned, this was
just a back-of-the-envelope computation. As such, it necessarily ignores
a number of factors.
Can you say what specifically it is that you object to? Is the analysis
pessimistic or optimistic? Are you …