Yes.  The drive treats some portion of its cells as SLC, which, having only 
two charge states, is much faster, and uses that region as a write cache.  
As with any cache-backed drive, if that cache fills up, whether from 
misaligned flush cycles or simply from data arriving faster than it can be 
flushed, you’ll see a performance cliff.
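
If you want to see the cliff for yourself, here’s a minimal sketch of a 
sustained sequential write test (Python; the path and sizes are placeholders, 
and the total should be sized well past any plausible cache):

    #!/usr/bin/env python3
    # Sustained sequential write: incompressible 64 MiB chunks, fsync'd,
    # printing per-chunk throughput.  On a QLC drive, expect the number
    # to collapse once the SLC cache is exhausted.
    import os, time

    TEST_FILE = "/mnt/qlc/testfile"   # placeholder: file on the drive under test
    CHUNK = 64 * 1024 * 1024          # 64 MiB per write
    TOTAL = 200 * 1024 ** 3           # 200 GiB total

    buf = os.urandom(CHUNK)           # incompressible data
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    written = 0
    while written < TOTAL:
        t0 = time.monotonic()
        os.write(fd, buf)
        os.fsync(fd)                  # force it to media, not the page cache
        written += CHUNK
        dt = time.monotonic() - t0
        print(f"{written / 2**30:6.1f} GiB  {CHUNK / dt / 1e6:7.1f} MB/s")
    os.close(fd)

(fio with a large --size and --direct=1 will do the same job more 
rigorously, of course.)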

Also, for reasons I don’t yet understand, as the drive fills up the size of 
that cache area decreases linearly until it hits a certain minimum.  Thus an 
empty drive will have more cache in service than a 70%-full drive.

This article includes 8x10 color glossy pictures with circles and arrows and a 
paragraph on the back describing what each one is about.

https://www.howtogeek.com/428869/ssds-are-getting-denser-and-slower-thanks-to-qlc-flash/

This raises some interesting questions about the proper use of TRIM, both by 
RBD clients and by ceph-osd, especially when dmcrypt is used.
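
For reference, the knobs involved look roughly like this (an illustrative 
sketch, not tested advice; device and keyfile paths are placeholders, and 
enabling discard through dmcrypt has a known information-leak tradeoff):

    # ceph.conf: let BlueStore issue discards to the underlying device
    [osd]
    bdev_enable_discard = true

    # /etc/crypttab: allow dm-crypt to pass discards through
    ceph-data  /dev/sdX  /etc/ceph/keyfile  luks,discard

    # libvirt disk on RBD: let a guest's fstrim reach the image
    <driver name='qemu' type='raw' discard='unmap'/>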

— aad

> Are you saying that the write performance becomes bad (90 MB/s) for 
> long-lasting *continuous* writing? (after filling up a write buffer or such)
> 
> But given time to empty that buffer again, it should write at the normal 
> higher speed again?
> 
> So in applications with enough variation between reading and writing, they 
> could still perform well enough?
> 
> MJ
> 
> On 3/6/20 2:06 PM, vita...@yourcmc.ru wrote:
>> Hi,
>> Current QLC drives are total shit in terms of steady-state performance. 
>> The first 10-100 GB of data is written into the SLC cache, which is fast, 
>> but then the drive switches to its QLC memory and even the linear write 
>> performance drops to ~90 MB/s, which is actually worse than HDDs!
>> So, try running a long linear write test and check the performance after 
>> writing a lot of data.
>>> Last Monday I performed a quick test with those two disks already;
>>> probably not that relevant, but posting it anyway:
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
