Hmm, no.  Close, but not really correct.

*all* flash will eventually fail if you write to it enough.  It's physics.

Flash memory stores data in individual memory cells, which are made of 
floating-gate MOSFETs.  Traditionally, each cell had two possible states, so 
each cell stored one bit of data: so-called single-level cell, or SLC, flash 
memory.
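
(As a rough illustration of why bits-per-cell matters for what follows: a cell 
holding b bits has to distinguish 2^b charge levels, so the sensing margin 
between adjacent levels shrinks quickly.  The 3.3 V window below is an 
arbitrary number I picked for the sketch, not a datasheet figure.)

#include <stdio.h>

/* Levels per cell grow as 2^bits, so the margin between adjacent levels
 * shrinks.  The 3.3 V threshold window is an assumed example value. */
int main(void)
{
    const double window_volts = 3.3;
    for (int bits = 1; bits <= 3; bits++) {
        int levels = 1 << bits;                 /* SLC=2, MLC=4, TLC=8 */
        printf("%d bit(s)/cell: %d levels, ~%.2f V between levels\n",
               bits, levels, window_volts / (levels - 1));
    }
    return 0;
}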

MLC flash attempts to store more than one bit per cell.  This works, and it 
significantly reduces the price per unit of capacity, but it is much more prone 
to errors (in technical terms, its bit error rate is higher).  Since computation 
keeps getting cheaper, the solution has been to throw MIPS at the problem: most 
modern MLC controllers run a variant of Bose-Chaudhuri-Hocquenghem (BCH) coding 
(http://www.aqdi.com/bch.pdf) for error detection and correction.
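
To show the detect-and-correct idea without the BCH math, here's a toy 
single-error-correcting Hamming(7,4) codec (my sketch, not what any real 
controller ships): 4 data bits become a 7-bit codeword, and any single flipped 
bit can be located and fixed.  Real controllers do this per sector, with enough 
parity to correct many flipped bits rather than one.

#include <stdio.h>
#include <stdint.h>

/* Toy Hamming(7,4): 4 data bits -> 7-bit codeword, corrects any single
 * bit error.  Illustrates the principle only; production MLC controllers
 * use far stronger codes. */
static uint8_t encode4(uint8_t d)             /* d = 4 data bits */
{
    uint8_t d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
    uint8_t p1 = d0 ^ d1 ^ d3;                /* covers codeword positions 1,3,5,7 */
    uint8_t p2 = d0 ^ d2 ^ d3;                /* covers positions 2,3,6,7 */
    uint8_t p4 = d1 ^ d2 ^ d3;                /* covers positions 4,5,6,7 */
    /* codeword positions 1..7 = p1 p2 d0 p4 d1 d2 d3 */
    return p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) | (d1 << 4) | (d2 << 5) | (d3 << 6);
}

static uint8_t decode7(uint8_t c)             /* returns 4 corrected data bits */
{
    uint8_t b[8];
    for (int i = 1; i <= 7; i++) b[i] = (c >> (i - 1)) & 1;
    int syndrome = (b[1] ^ b[3] ^ b[5] ^ b[7])
                 | ((b[2] ^ b[3] ^ b[6] ^ b[7]) << 1)
                 | ((b[4] ^ b[5] ^ b[6] ^ b[7]) << 2);
    if (syndrome)                             /* syndrome = position of the bad bit */
        b[syndrome] ^= 1;
    return b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3);
}

int main(void)
{
    uint8_t data = 0xB;                       /* 1011 */
    uint8_t cw = encode4(data);
    uint8_t corrupted = cw ^ (1 << 4);        /* flip one bit, as a worn cell might */
    printf("sent 0x%X, recovered 0x%X\n", data, decode7(corrupted));
    return 0;
}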

MLC also tends to be slower, especially in terms of write speed, so the 
controllers tend to interleave access to the flash parts to increase the access 
(especially write) speed.  They also attempt to spread writes across all blocks 
on the flash, remapping block 'N' to block 'M' behind your back, in order to 
provide a service called "wear-leveling".  That last could be (and probably 
should be) a function of the filesystem, but everybody these days wants to run 
over a SATA bus (or PCIe, and even with PCIe about half the time there is a 
SATA controller in front of the flash) and pretend that the flash is really 
something we used to call a 'disk', rather than create a custom filesystem.
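
If you're curious what that remapping looks like, here's a minimal sketch (the 
structure and names are mine, not any particular vendor's FTL): keep a 
logical-to-physical block map plus per-block erase counts, and point each 
rewrite at the least-worn free block.  Real FTLs add garbage collection, 
bad-block tables, and power-loss recovery on top of this.

#include <stdio.h>

#define NBLOCKS 8

/* Toy flash translation layer: logical block N silently lands on whichever
 * physical block has the fewest erases.  Wear-leveling only; no GC, no
 * bad-block handling. */
static int l2p[NBLOCKS];            /* logical -> physical map, -1 = unmapped */
static int erase_count[NBLOCKS];    /* wear per physical block */
static int in_use[NBLOCKS];         /* physical block currently mapped? */

static int least_worn_free_block(void)
{
    int best = -1;
    for (int p = 0; p < NBLOCKS; p++)
        if (!in_use[p] && (best < 0 || erase_count[p] < erase_count[best]))
            best = p;
    return best;
}

static void write_logical(int lblock)
{
    int target = least_worn_free_block();
    if (target < 0)
        return;                     /* out of spares; a real FTL would GC here */
    if (l2p[lblock] >= 0) {         /* old copy is freed... */
        in_use[l2p[lblock]] = 0;
        erase_count[l2p[lblock]]++; /* ...and its block erased before reuse */
    }
    l2p[lblock] = target;           /* block 'N' remapped to 'M' behind your back */
    in_use[target] = 1;
}

int main(void)
{
    for (int i = 0; i < NBLOCKS; i++) l2p[i] = -1;
    for (int i = 0; i < 100; i++)   /* hammer the same logical block */
        write_logical(0);
    for (int p = 0; p < NBLOCKS; p++)
        printf("physical block %d erased %d times\n", p, erase_count[p]);
    return 0;
}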

(FFS: the last word in filesystems (no wait, that was ZFS)... never mind, 
neither has any support for running on top of bare flash.  Discuss.)

SLC flash stores only one bit per cell, so it's significantly more expensive 
for a given capacity.  Moreover, the 'belly' of the flash market is consumers, 
so cubic tons more MLC parts are produced than SLC parts, and the economies of 
scale have kicked in.

SLC is significantly faster, and has more write endurance.  Since higher temps 
cause more leakage of the stored charge, MLC flash tends not to "operate well" 
at industrial temps.

SLC NAND flash is typically rated at about 100k cycles, while MLC NAND flash is 
typically rated at no more than 10k cycles.  Via wear-leveling and 
over-provisioning ('spare blocks') you can increase these numbers, but no 
native flash device is rated in terms of millions of erase cycles.  (There are 
SLC NOR flash parts that are rated to 1 million erase cycles.  You're unlikely 
to want to pay for 500GB of same.)
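
To put those ratings in perspective, here's a back-of-the-envelope lifetime 
estimate.  The capacity, daily write volume, and write-amplification figures 
are numbers I made up for illustration; only the 10k/100k cycle ratings come 
from the discussion above.

#include <stdio.h>

/* Rough endurance math: total writable data = capacity * rated cycles /
 * write amplification; divide by the daily write rate to get a lifetime. */
int main(void)
{
    const double capacity_gb   = 32.0;   /* assumed device size */
    const double writes_gb_day = 20.0;   /* assumed host writes per day */
    const double write_amp     = 3.0;    /* assumed FTL write amplification */
    const double ratings[]     = { 10e3, 100e3 };
    const char  *names[]       = { "MLC (10k cycles)", "SLC (100k cycles)" };

    for (int i = 0; i < 2; i++) {
        double total_writes_gb = capacity_gb * ratings[i] / write_amp;
        double years = total_writes_gb / writes_gb_day / 365.0;
        printf("%-18s ~%.0f years at %.0f GB/day of host writes\n",
               names[i], years, writes_gb_day);
    }
    return 0;
}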

http://www.wdc.com/WDProducts/SSD/whitepapers/en/NAND_Evolution_0812.pdf

Jim

On Mar 21, 2012, at 4:44 PM, Dimitri Alexandris wrote:

> Normal commercial flash will eventually fail. It's not designed for
> this purpose.
> 
> We use only industrial products which include error correction blocks
> and mechanism (transparent to the system), like:
> 
> http://www.ieiworld.com/product_groups/industrial/detail_list.aspx?gid=00001000010000000001&cid=08141368770534315264
> 
> mainly IFM 4000+ / IFM 4400+ series:
> 
> http://www.ieiworld.com/product_groups/industrial/content.aspx?gid=00001000010000000001&cid=08141368770534315264&id=0A221362488516674830
> 
> We get tens of millions of cycles without a single failure, in industrial
> environments (high temp. + vibration).
> 
> 
> 
> On Wed, Mar 21, 2012 at 21:12, Chris Buechler <c...@pfsense.org> wrote:
>> On Wed, Mar 21, 2012 at 2:46 PM, Jeppe Øland <jol...@gmail.com> wrote:
>>>>> I'm getting the following error when logging into the box. It's at the top
>>>>> of the page when presented with the username and password prompt. You can
>>>>> not go past the login page.  pretty sure it's due to faulty hard drives.
>>>> 
>>>> Indeed it is. We discussed this with the vendor you got them from at
>>>> length, seems they got a bad batch of SSDs. Judging by recent
>>>> experiences, I'd stay away from Kingston SSDs.
>>> 
>>> Are you saying you have discussed the issue with Kingston, and that
>>> they admitted problems?
>>> 
>> 
>> No, with the reseller that the OP bought the systems from. He was
>> discussing with Kingston, we had to jump through some hoops to prove a
>> hardware problem. I'm not sure where it went from there.

