On 08/05/12 13:31, Chris Mason wrote:
[...]
> A few people have already mentioned how btrfs will pack these small
> files into metadata blocks. If you're running btrfs on a single disk,
[...]
> But the cost is increased CPU usage. Btrfs hits memmove and memcpy
> pretty hard when you're using larger blocks.
>
> I suggest using a 16K or 32K block size. You can go up to 64K, it may
> work well if you have beefy CPUs. Example for 16K:
>
> mkfs.btrfs -l 16K -n 16K /dev/xxx

Is that still with "-s 4K"? Might that help SSDs that work in 16 kByte chunks?

And why are memmove and memcpy used more heavily? Does that suggest better optimisation of the (meta)data, or just a greater housekeeping overhead from shuffling data to new offsets?

Regards,

Martin
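To make the question concrete, a sketch of what I mean by combining the two, assuming "-s" (sector size) still defaults to 4K and can be stated alongside the larger leaf/node sizes; "/dev/xxx" is a placeholder device as in Chris's example:

```shell
# Sketch only: 16K metadata leaf (-l) and node (-n) sizes, with the
# data sector size (-s) pinned explicitly at its usual 4K default.
# /dev/xxx is a placeholder -- substitute the real block device.
mkfs.btrfs -s 4K -l 16K -n 16K /dev/xxx
```
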