Luke,

Yes, I do have some data to back this up, but as I think I mentioned, this
was just a back-of-the-envelope computation.  As such, it necessarily
ignores a number of factors.

Can you say what specifically it is that you object to?  Is the analysis
pessimistic or optimistic?  Are you seeing lots of correlated failures?  I
presume that your 40,000+ nodes are not in a single cluster and thus have
different failure modes than I was talking about.  Perhaps you could say
more about your situation.

In many installations, duty factor is low enough that average failure rate
can be an order of magnitude lower than what I quoted.  Even so, I don't
feel comfortable using that kind of rate for a computation of this sort.
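
For concreteness, here is a rough sketch of the arithmetic I had in mind
(just the back-of-the-envelope model, assuming independent, uniformly
distributed drive failures; the duty_factor knob below is only a
hypothetical stand-in for the effect I mentioned, not anything measured):

    # Back-of-the-envelope expected failure rate for a cluster of drives.
    # Assumes independent failures and a per-drive MTBF given in days;
    # duty_factor is a hypothetical scaling for lightly loaded drives.
    def expected_failures_per_day(nodes, disks_per_node, mtbf_days,
                                  duty_factor=1.0):
        drives = nodes * disks_per_node
        return drives * duty_factor / mtbf_days

    # 100 nodes x 10 disks, 1000-day MTBF -> about 1 failure per day
    print(expected_failures_per_day(100, 10, 1000))        # 1.0

    # Same cluster at a 10% duty factor -> roughly an order of magnitude lower
    print(expected_failures_per_day(100, 10, 1000, 0.1))   # 0.1
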

On Wed, Aug 10, 2011 at 12:19 PM, Luke Lu <l...@vicaya.com> wrote:

> On Wed, Aug 10, 2011 at 10:40 AM, Ted Dunning <tdunn...@maprtech.com>
> wrote:
> > To be specific, taking a 100 node x 10 disk x 2 TB configuration with
> > drive MTBF of 1000 days, we should be seeing drive failures on average
> > once per day....
> > For a 10,000 node cluster, however, we should expect an average disk
> > failure rate of one failure every 2.5 hours.
>
> Do you have real data to back the analysis? You assume a uniform disk
> failure distribution, which is absolutely not true. I can only say
> that our ops data across 40000+ nodes shows that the above analysis is
> not even close. (This is assuming that the ops know what they are
> doing though :)
>
> __Luke
>
