On Tue, 16 May 2017 14:21:20 +0200,
Tomasz Torcz <to...@pipebreaker.pl> wrote:

> On Tue, May 16, 2017 at 03:58:41AM +0200, Kai Krakow wrote:
> > On Mon, 15 May 2017 22:05:05 +0200,
> > Tomasz Torcz <to...@pipebreaker.pl> wrote:
> >   
>  [...]  
> > > 
> > >   Let me add my 2 cents.  bcache-writearound does not cache writes
> > > on the SSD, so there are fewer writes overall to the flash. It is
> > > said to prolong the life of the flash drive.
> > >   I've recently switched from bcache-writeback to
> > > bcache-writearound, because my SSD caching drive is at the edge
> > > of its lifetime. I'm using bcache in the following configuration:
> > > http://enotty.pipebreaker.pl/dżogstaff/2016.05.25-opcja2.svg
> > > My SSD is a Samsung SSD 850 EVO 120GB, which I bought exactly
> > > 2 years ago.
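
For the archives: the cache mode can be flipped at runtime through sysfs.
A minimal sketch, assuming the backing device shows up as bcache0 (adjust
the name to your setup); reading the file lists all modes, with the active
one shown in brackets:

  # cat /sys/block/bcache0/bcache/cache_mode
  writethrough [writeback] writearound none
  # echo writearound > /sys/block/bcache0/bcache/cache_mode

When leaving writeback, bcache still has to flush the remaining dirty data
to the backing device, so expect some extra HDD activity right after the
switch.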
> > > 
> > >   Now, according to
> > > http://www.samsung.com/semiconductor/minisite/ssd/product/consumer/850evo.html
> > > the warranty for the 120GB and 250GB models only covers 75 TBW
> > > (terabytes written).
> > 
> > According to your chart, all your data is written twice to bcache.
> > It may have been better to buy two drives, one per mirror. I don't
> > think that SSD firmware does deduplication - so the data is really
> > written twice.
> 
>   I'm aware of that, but 50 GB (I've got a 100GB caching partition)
> is still plenty to cache my ~, some media files, and two small VMs.
> On the other hand, I don't want to overspend. This is just a home
> server.
>   NB: I'm still waiting for btrfs native SSD caching, which was
> planned for the 3.6 kernel 5 years ago :)
> ( 
> https://oss.oracle.com/~mason/presentation/btrfs-jls-12/btrfs.html#/planned-3.6
> )
> 
> > 
> >   
> > > My drive has:
> > > # smartctl -a /dev/sda | grep LBA
> > > 241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       136025596053
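
A quick back-of-the-envelope check on that raw value (assuming it counts
512-byte LBAs, which as far as I know is how Samsung reports this
attribute): 136025596053 * 512 bytes is roughly 69.6 TB, so the raw
counter is already in the same ballpark as the 75 TBW warranty figure
quoted above.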
> > 
> > Doesn't this say "99%" remaining? The threshold is far from being
> > reached...
> > 
> > I'm curious, what is Wear_Leveling_Count reporting?  
> 
> ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
>   9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       18227
>  12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       29
> 177 Wear_Leveling_Count     0x0013   001   001   000    Pre-fail  Always       -       4916
> 
>  Does this 001 mean 1%? If so, SMART contradicts the datasheet. And I
> don't think I should see read errors at 1% wear.

It rather means 1% left, that is 99% wear... Most of these values are
counters running from 100 down to zero, with THRESH being the point at or
below which the attribute is considered failed or failing.

Only a few values work the other way around (like temperature).

Be careful when interpreting raw values: they can be very
manufacturer-specific and are not normalized.
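
If you want a quick way to spot attributes that are getting close to their
threshold, something like this should work (a rough sketch, assuming the
usual smartctl -A column layout with the normalized VALUE in column 4 and
THRESH in column 6):

  # smartctl -A /dev/sda | awk '$1 ~ /^[0-9]+$/ && ($4 - $6) <= 10 {print $2, "VALUE=" $4, "THRESH=" $6}'

With the three attributes you pasted, that flags only Wear_Leveling_Count.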

According to Total_LBAs_Written, the manufacturer thinks the drive could
still take 100x more (only 1% used). But your wear level is almost 100%
(VALUE = 001). I think that attribute isn't really designed around the
flash cell lifetime, but around intermediate components like caches.

So you need to read most values "backwards": It's not a used counter,
but a "what's left" counter.

What does it tell you about reserved blocks usage? Note that there's sort
of a double negation here: a value of 100 means 100% unused, or 0%
used... ;-) Or just put a "minus" in front of those values and think of
them counting up to zero. So on a time axis the drive starts at -100% of
its total lifetime scale, and 0 is the fail point (or whatever THRESH
says).
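
Applied to your output: Wear_Leveling_Count has VALUE=001 and THRESH=000,
so on that axis it sits at about -1, one step away from the fail point,
while Power_On_Hours at VALUE=096 is still way back at -96.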


-- 
Regards,
Kai

Replies to list-only preferred.


