I've read that SSDs perform best when you only use some percentage of them
(75%, 50%, etc.), because leaving space unused gives the drive more headroom
to shuffle data around internally and keep things optimal. Those articles
were most likely written with an ordinary filesystem in mind, on an OS that
might not know about TRIM/UNMAP etc.
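
As an aside, before worrying about headroom it's probably worth confirming
that the kernel even reports discard support for the drive in question. A
quick sysfs check, assuming a hypothetical device named sdb, could be as
simple as:

#!/usr/bin/env python3
# Report whether the kernel exposes discard/TRIM for a block device.
# "sdb" is a placeholder; substitute the real device name.
from pathlib import Path

dev = "sdb"
q = Path("/sys/block") / dev / "queue"
max_bytes = int((q / "discard_max_bytes").read_text())
granularity = int((q / "discard_granularity").read_text())
print(f"{dev}: discard_max_bytes={max_bytes}, discard_granularity={granularity}")
print("discard/TRIM supported" if max_bytes > 0 else "no discard support reported")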

Has anyone done any testing of sustained random write throughput on, say, a
60GB flash drive with only 50% of it dedicated to bcache, versus 75% or 100%?
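
For what it's worth, the sort of sweep I'm imagining is: repartition the SSD
so bcache gets 50% / 75% / 100% of it, re-run make-bcache and re-attach, then
hammer the bcache device with sustained 4k random writes via fio and compare.
A rough sketch of the measurement half (device name, runtime and job
parameters are just placeholders, not a tested setup):

#!/usr/bin/env python3
# Run a long 4k random-write fio job against the bcache device and report
# the sustained write bandwidth. Re-run once per cache-partition size.
import json
import subprocess

BCACHE_DEV = "/dev/bcache0"   # placeholder: the assembled bcache device
RUNTIME_SECS = 1800           # long enough to get past the drive's fresh state

def sustained_randwrite_mib_s(target, runtime):
    """Return MiB/s for a sustained 4k random-write job against `target`."""
    cmd = [
        "fio", "--name=randwrite-sustained",
        "--filename=" + target,
        "--rw=randwrite", "--bs=4k",
        "--ioengine=libaio", "--iodepth=32",
        "--direct=1", "--time_based",
        "--runtime=" + str(runtime),
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    return job["write"]["bw"] / 1024.0    # fio reports bw in KiB/s

if __name__ == "__main__":
    print("sustained 4k randwrite: %.1f MiB/s"
          % sustained_randwrite_mib_s(BCACHE_DEV, RUNTIME_SECS))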

Thanks

James