> Seems like you have done everything and also know everything.

Not everything, but having been an engineer for 25 years, I have done many projects, including digital imaging systems and SCSI systems. What I do know, I know, and what I don't know, I know I don't know. I don't just make things up.
> I don't know how your company (or you) determined MTBF of a RAID0
> system but most companies such as Compaq, IBM, Sun, Adaptec, etc.
> say that MTBF will decrease.

There is only one article I have seen that says this, and I have had discussions with the authors about it. Do you have any references to articles or spec sheets that make this claim? Interestingly enough, MTBF does not derate for adding a second CPU or for adding more memory to the system...

> Exactly because of the reduced MTBF of a system with multiple HDs
> Berkeley has suggested the RAID system.

Is this "study" published anywhere? If so, I'd like to see it.

> The RAID system is supposed to relax the impact of the reduced MTBF.
> That doesn't mean the MTBF becomes higher when a RAID system is
> deployed but it just makes it more likely that the failure can be
> repaired.

Failure recovery is entirely different from MTBF.

> I see though where your (company's) calculation might come from.

The company was Digital, BTW. We had an entire department devoted to MTBF testing...and specifically to storage MTBF assessment.

> You can determine MTBF for a certain device by testing for example
> 10000 drives for 1000 hours and then dividing the total of
> 10000*1000 hours by the number of failures.

That's not really how you determine MTBF; MTBF is an average. You are right, though, that you need a large sample to test.

> Nevertheless, this calculation doesn't apply to RAID as a RAID
> system has to be considered as a single entity.

Exactly, and that is why you don't get any decrease in MTBF by adding drives. It's really simple.

> So you cannot claim that because you have 10 HDs your RAID system is
> working 10*1=10 hours in each single hour. Your RAID system is ONE
> entity and therefore is working only 1 hour each hour it is up.
> Therefore the MTBF decreases.

Why does the MTBF decrease? You have a magical "therefore" that doesn't follow.
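For what it's worth, the fleet-testing calculation quoted above (total accumulated device-hours divided by the number of observed failures) can be sketched in a few lines. This is only an illustration of the arithmetic in the quote, not anyone's official methodology; the failure count of 10 is a made-up figure, while the 10000-drive, 1000-hour test is the one described in the post.

```python
# Sketch of the fleet-testing MTBF estimate quoted above:
# total device-hours divided by observed failures.

def mtbf_estimate(num_devices: int, test_hours: float, failures: int) -> float:
    """Point estimate of MTBF from a fixed-duration fleet test."""
    if failures == 0:
        raise ValueError("no failures observed; the estimate is unbounded")
    return num_devices * test_hours / failures

# 10,000 drives for 1,000 hours with 10 (hypothetical) failures:
# 10,000,000 device-hours / 10 failures.
print(mtbf_estimate(10_000, 1_000.0, 10))  # 1000000.0
```

Note this gives a point estimate only; with few observed failures the confidence interval around it is very wide, which is one reason a large sample matters.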
If you tested 1000 drives by themselves and got an MTBF of, say, 1,000,000 hours, then take those 1000 drives and build 500 RAID 0 systems: your MTBF will NOT decrease notably, if at all, from drive failure. It may decrease from other factors, such as power supply or thermal issues, but not from drive failure.
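One way to check the arithmetic in the example above with a short sketch (all figures are the ones in the example): the expected number of drive failures per hour across the whole fleet depends only on the drive count and the per-drive MTBF, not on how the drives are grouped into arrays. This deliberately counts only drive failures, not array-level availability, which is the separate question the thread is arguing about.

```python
# All figures taken from the example above. Counts fleet-wide expected
# drive failures per hour; array-level availability is not modeled.

DRIVE_MTBF_HOURS = 1_000_000
PER_DRIVE_RATE = 1 / DRIVE_MTBF_HOURS  # expected failures per drive-hour

# 1000 drives running standalone.
standalone_rate = 1000 * PER_DRIVE_RATE

# The same 1000 drives regrouped as 500 two-drive RAID 0 arrays:
# the drive count is unchanged, so the fleet-wide rate is too.
raid_rate = 500 * 2 * PER_DRIVE_RATE

print(standalone_rate == raid_rate)  # True: ~0.001 failures/hour either way
```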