On 2015-01-12 10:35, P. Remek wrote:
>> Another thing to consider is that the kernel's default I/O scheduler, and
>> the default parameters for that I/O scheduler, are almost always suboptimal
>> for SSDs, and this tends to show far more with BTRFS than anything else.
>> Personally, I've found that using the CFQ I/O scheduler with the following
>> parameters works best for the majority of SSDs:
>> 1. slice_idle=0
>> 2. back_seek_penalty=1
>> 3. back_seek_max set equal to the size of the device in sectors
>> 4. nr_requests and quantum set to the hardware command queue depth

> I will give these suggestions a try, but I don't expect any big gain.
> Notice that the difference between ext4 and BTRFS random-write
> performance is massive: 200,000 IOPS vs. 15,000 IOPS, with the device
> and kernel parameters exactly the same (it is the same machine) for
> both test scenarios. This suggests that something in the Btrfs
> implementation is dragging write performance down.

> Notice also that we did some performance tuning (queue scheduler set
> to noop, IRQ affinity distributed and pinned to specific NUMA nodes
> and cores, etc.).
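
For anyone wanting to reproduce that kind of IRQ pinning, here is a minimal sketch of doing it through /proc. The "nvme0q" name prefix and the CPU list "0-7" are placeholders of my own, not values from the tests above:

#!/usr/bin/env python3
# Minimal sketch: pin a device's IRQs to CPUs on one NUMA node via /proc.
# "nvme0q" and "0-7" are placeholder assumptions; adjust them to the real
# controller's IRQ names and node-local CPUs, and run as root.

IRQ_NAME_PREFIX = "nvme0q"  # placeholder: IRQ names as shown in /proc/interrupts
CPU_LIST = "0-7"            # placeholder: CPUs local to the device's NUMA node

with open("/proc/interrupts") as f:
    for line in f:
        fields = line.split()
        # Data lines start with "NN:" and end with the IRQ's action name.
        if fields and fields[0].endswith(":") and any(
                tok.startswith(IRQ_NAME_PREFIX) for tok in fields[1:]):
            irq = fields[0].rstrip(":")
            try:
                with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as aff:
                    aff.write(CPU_LIST)
                print(f"IRQ {irq} -> CPUs {CPU_LIST}")
            except OSError:
                pass  # kernel-managed IRQs reject affinity changes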

The stuff about the I/O scheduler is more general advice for dealing with SSDs than anything BTRFS-specific. I've found, though, that at least on SATA-connected SSDs (I don't have anywhere near the kind of budget needed for SAS disks, let alone SAS SSDs), the noop I/O scheduler gets better performance on small bursts, but causes horrible latency spikes whenever something needs bulk throughput with random writes (rsync being an excellent example of this).
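
For concreteness, here is a minimal sketch of applying the four CFQ settings listed above through sysfs. The device name "sda" is a placeholder; this assumes CFQ is available for the device's queue, and needs to run as root:

#!/usr/bin/env python3
# Minimal sketch: apply the four CFQ settings from earlier in the thread
# via sysfs. "sda" is a placeholder device name; note that the iosched/
# directory only appears once CFQ is the active scheduler for the queue.

DEV = "sda"  # placeholder: the SSD under test
Q = f"/sys/block/{DEV}/queue"

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

write(f"{Q}/scheduler", "cfq")  # select CFQ so its iosched/ tunables exist

sectors = int(open(f"/sys/block/{DEV}/size").read())               # device size in sectors
qdepth = int(open(f"/sys/block/{DEV}/device/queue_depth").read())  # HW command queue depth

write(f"{Q}/iosched/slice_idle", 0)           # 1. no idling between requests
write(f"{Q}/iosched/back_seek_penalty", 1)    # 2. backward seeks are free on SSDs
write(f"{Q}/iosched/back_seek_max", sectors)  # 3. whole-device backward seek window
write(f"{Q}/nr_requests", qdepth)             # 4. match the hardware queue depth...
write(f"{Q}/iosched/quantum", qdepth)         #    ...for both queue and dispatch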

Something else I thought of after my initial reply: due to the COW nature of BTRFS, you will generally get better metadata-operation performance with shallower directory structures (largely because mtime updates propagate up the directory tree to the root of the filesystem).
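
If you want to sanity-check that on your own filesystem, a quick sketch that times file creation at two directory depths (the depths and file counts are arbitrary illustration values, not from any measurement of mine) could look like:

#!/usr/bin/env python3
# Minimal sketch: time a metadata-heavy workload (file creation) in a
# shallow tree vs. a deep tree. Run it from a directory on a BTRFS mount.
import os, time, tempfile

def time_creates(base, depth, count=2000):
    os.makedirs(base)
    d = base
    for i in range(depth):                 # build a chain of nested dirs
        d = os.path.join(d, f"level{i}")
        os.mkdir(d)
    start = time.perf_counter()
    for i in range(count):                 # create many small files
        open(os.path.join(d, f"f{i}"), "w").close()
    os.sync()                              # flush so the timing is honest
    return time.perf_counter() - start

with tempfile.TemporaryDirectory(dir=".") as tmp:
    shallow = time_creates(os.path.join(tmp, "s"), depth=1)
    deep = time_creates(os.path.join(tmp, "d"), depth=40)
    print(f"shallow: {shallow:.3f}s  deep: {deep:.3f}s")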
