Hi Hugo,

As I re-read it closely (and also the other comments in the thread), I now understand that there is a difference in how nodatacow works even when snapshots are in place.
On autodefrag, I wonder whether there is some more detailed documentation about how autodefrag works. The manual at https://btrfs.wiki.kernel.org/index.php/Mount_options has only a very general statement. What does "detect random IO" really mean? It also talks about defragmenting the file - is it really the whole file that is triggered for defrag, or is the defrag local? I.e., I would understand it as: when writes happen, the surrounding 1MB block is checked and, if it has more than X fragments, it is defragmented - or something like that. Also, does autodefrag work with nodatacow (i.e. with snapshots), or are the two exclusive?

> There's another approach which might be worth testing, which is to
> use autodefrag. This will increase data write I/O, because where you
> have one or more small writes in a region, it will also read and write
> the data in a small neighbourhood around those writes, so the
> fragmentation is reduced. This will improve subsequent read
> performance.
>
> I could also suggest getting the latest kernel you can -- 16.04 is
> already getting on for a year old, and there may be performance
> improvements in upstream kernels which affect your workload. There's
> an Ubuntu kernel PPA you can use to get the new kernels without too
> much pain.
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs"
> in the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
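To make the question concrete, here is a toy sketch of the heuristic I am imagining above. The region size (1 MiB), the fragment threshold, and the "merge into one extent" step are all assumptions of mine for illustration - this is the hypothesis being asked about, not how btrfs actually implements autodefrag:

```python
# Toy model of the hypothesised local-defrag heuristic: after a small
# write, inspect a fixed-size neighbourhood around it and, if that region
# is split into more than THRESHOLD extents, rewrite the region as one
# contiguous extent. Purely illustrative; not actual btrfs behaviour.

REGION = 1 << 20      # 1 MiB neighbourhood (assumption from the mail)
THRESHOLD = 4         # "more than X fragments" -- X is made up here

def extents_in_region(extents, offset):
    """Return the (start, length) extents overlapping the window at offset."""
    lo, hi = offset, offset + REGION
    return [(s, l) for (s, l) in extents if s < hi and s + l > lo]

def maybe_defrag(extents, write_offset):
    """If the neighbourhood of write_offset is too fragmented, coalesce it."""
    frags = extents_in_region(extents, write_offset)
    if len(frags) <= THRESHOLD:
        return extents                      # leave the file alone
    lo = min(s for s, _ in frags)
    hi = max(s + l for s, l in frags)
    rest = [e for e in extents if e not in frags]
    return sorted(rest + [(lo, hi - lo)])   # one merged extent

# Eight 4 KiB fragments inside the first MiB: above threshold, so merged.
frags = [(i * 8192, 4096) for i in range(8)]
print(maybe_defrag(frags, 0))               # -> [(0, 61440)]
```

Whether the real heuristic is anything like this - per-region versus whole-file, and what the trigger condition is - is exactly what I am hoping someone can clarify.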