> > assuming honest mtbf numbers, one would expect a similar
> > number of ures for the same io workload on the same size data set
> > as mechanical disks.  since flash drives are much smaller,
> > there would obviously be fewer ures per drive.  but needing
> > 10x more drives, the mtbf would be worse per byte of storage
> > than enterprise sata drives.  so you'd see more overall failures.
> 
> this depends on usage, obviously. i think it misses the point that
> there's plenty of applications where the smaller storage (assuming a
> single unit) is perfectly adequate. i swapped out the HD in my laptop
> for a SD drive: the reduction in size is entirely workable, and the
> other benefits make the trade a big win. there're plenty of
> applications where i need relatively little raw storage: laptops, boot
> media for network terminals, embedded things.
> 
> for large-scale storage, your analysis is much more appropriate. my
> file server remains based on spinning magnetic disks, and i expect
> that's likely to be the case for a long time.

on the other hand, since the ure rate is the same for a
mechanical disk as for your flash drive, one can't claim that
it's "more reliable".  it will return an unreadable error just
as often.  limiting your dataset on a mechanical hard drive
would accomplish the same goal for less cash.  and the afr (dirty secret:
the mtbf number is actually the extrapolated afr^-1) is only
0.4% instead of 0.7%.  at that rate, something else is more
likely to eat your laptop (gartner sez 20%/year).  but that 0.4%
is for the intel enterprise ssd, which costs more than most
laptops.  this article claims that flash is currently less
reliable than old-fashioned disks:

http://www.pcworld.com/article/143558/laptop_flash_drives_hit_by_high_failure_rates.html
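the mtbf/afr relationship mentioned above is just unit conversion; a
minimal sketch (the 0.4% and 0.7% figures are the ones quoted in this
thread, not from a spec sheet):

```python
# sketch of the "mtbf is really the extrapolated afr^-1" point:
# vendors quote mtbf in hours, but it's the inverse of the
# annualized failure rate (afr) scaled to hours per year.

HOURS_PER_YEAR = 24 * 365  # 8760

def mtbf_hours_from_afr(afr):
    """mtbf (hours) = hours/year divided by afr (fraction per year)."""
    return HOURS_PER_YEAR / afr

def afr_from_mtbf_hours(mtbf_hours):
    """inverse of the above: afr (fraction per year) from quoted mtbf."""
    return HOURS_PER_YEAR / mtbf_hours

# the afr figures from above:
print(mtbf_hours_from_afr(0.004))  # 0.4%/yr -> ~2.19 million hours mtbf
print(mtbf_hours_from_afr(0.007))  # 0.7%/yr -> ~1.25 million hours mtbf
```

which is why a million-hour mtbf sounds absurd for a drive that only
ships for a few years: it's a rate, not a lifetime.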

surprising, no?  there are still plenty of reasons to want an
ssd.  it just seems that reliability isn't one of those reasons yet.
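the per-byte ure arithmetic from the top of the thread can be
sketched as follows; the 1-in-10^14-bits rate is an assumed typical
consumer spec, not a number from this thread:

```python
# sketch: expected unrecoverable read errors (ures) for a workload
# depend on bytes read times the quoted ure rate, not on how many
# drives the data is spread across -- so for the same dataset and
# the same rate, flash and mechanical drives see the same expected
# ures, whether it's one big disk or ten small ones.

URE_PER_BIT = 1e-14  # assumption: 1 error per 10^14 bits read

def expected_ures(bytes_read, ure_per_bit=URE_PER_BIT):
    """expected ure count for reading bytes_read bytes once."""
    return bytes_read * 8 * ure_per_bit

tb = 10**12  # 1 TB
print(expected_ures(tb))       # 0.08 expected ures for one full read
print(expected_ures(10 * tb))  # 0.8 for ten drives' worth of reads
```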

- erik
