On 2017-08-31 19:36, Roman Mamedov wrote:
On Thu, 31 Aug 2017 12:43:19 +0200
Marco Lorenzo Crociani <mar...@prismatelecomtesting.com> wrote:

Hi,
this 37T filesystem takes some time to mount. It has 47
subvolumes/snapshots and is mounted with
noatime,compress=zlib,space_cache. Is this normal, given its size?

Just like Han said, this is caused by BLOCK_GROUP_ITEMs scattered across the large extent tree.
So it's hard to improve in the short term.
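To see why this scales with filesystem size, here is a rough back-of-envelope sketch. The numbers are assumptions, not measurements: ~1 GiB per data block group, and one ~10 ms random HDD read per BLOCK_GROUP_ITEM, as if every item sat on a different extent-tree leaf. Real layouts batch better than this, but the shape of the cost is the same.

```python
# Rough estimate of mount-time cost from reading all BLOCK_GROUP_ITEMs.
# Assumptions (hypothetical, for illustration only): ~1 GiB data block
# groups, and each item costs one random HDD read of ~10 ms because the
# items are scattered across extent-tree leaves.

def estimated_mount_seconds(fs_bytes, block_group_bytes=1 << 30,
                            seek_seconds=0.010):
    block_groups = fs_bytes // block_group_bytes
    return block_groups * seek_seconds

fs_bytes = 37 * (1 << 40)  # the ~37 TiB filesystem from the report
print(estimated_mount_seconds(fs_bytes))  # on the order of a few hundred seconds
```

Under these (pessimistic) assumptions a 37 TiB filesystem has ~37888 block groups, which is why mount can take minutes on rotating storage.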

Some ideas, like delaying BLOCK_GROUP_ITEM loading, could greatly improve mount speed. But such an enhancement may affect the extent allocator (that is to say, we couldn't do any writes before at least some BLOCK_GROUP_ITEMs are loaded) and may introduce more bugs.

Other ideas, like a per-chunk extent tree, could also greatly reduce mount time, but would need an on-disk format change. (Well, in fact the btrfs on-disk format was never well designed anyway, so if anyone really does this, please write a comprehensive wiki/white paper for it.)


If you could implement SSD caching in front of your FS (such as lvmcache or
bcache), that would work wonders for performance in general, and especially
for mount times. I have seen amazing results with lvmcache (of just 32 GB) for
a 14 TB FS.
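For reference, attaching an lvmcache along those lines might look like the following sketch. The device and LV names (/dev/sdb, vg0, vg0/data) are placeholders, and this assumes the btrfs filesystem already lives on an LVM logical volume; it is an illustration, not a tested recipe.

```shell
# Add the SSD to the existing volume group (names are examples).
pvcreate /dev/sdb
vgextend vg0 /dev/sdb

# Create a 32G cache pool on the SSD, then attach it as a cache in
# front of the large LV holding the btrfs filesystem.
lvcreate --type cache-pool -L 32G -n cpool vg0 /dev/sdb
lvconvert --type cache --cachepool vg0/cpool vg0/data
```

Hot metadata such as the extent tree tends to stay resident in a cache like this, which matches the mount-time improvement described above.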

That's impressive.
Since the extent tree is a super hot tree (any CoW modifies the extent tree), that makes sense.


As for general performance, with your FS size perhaps you should be using
"space_cache=v2", but I'm not sure whether that will have
any effect on mount time (aside from slowing down the first mount with that option).

Unfortunately, the free space cache is not loaded until it is used (at least for v1), so space_cache may not help mount time much.
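For anyone who wants to try it anyway, enabling the v2 cache (the free space tree, kernel 4.5+) is just a mount option; the device path below is a placeholder:

```shell
# First mount with space_cache=v2 builds the free space tree and can
# be slow; later mounts pick the option up from the existing tree.
mount -o noatime,compress=zlib,space_cache=v2 /dev/sdX /mnt
```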

Thanks,
Qu
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html