On 02/21/2018 03:49 PM, Ellis H. Wilson III wrote:
> On 02/20/2018 08:49 PM, Qu Wenruo wrote:
>>>>> On 2018-02-16 22:12, Ellis H. Wilson III wrote:
>>>>>> $ sudo btrfs-debug-tree -t chunk /dev/sdb | grep CHUNK_ITEM | wc -l
>>>>>> 3454
>>>>>
>> Increasing the node size may reduce the extent tree's size, although it
>> cuts the tree height by at most one level AFAIK.
>>
>> But considering that the higher a node sits in the tree, the more likely
>> it is to be cached, reducing the tree height wouldn't bring much of a
>> performance impact AFAIK.
>>
>> It would be much better, though, if someone could run a real-world
>> benchmark to confirm or disprove my assumption.
> 
> I'm willing to try this if you tell me exactly what you'd like me to do.
>  I've not mucked with nodesize before, so I'd like to avoid changing it
> to something absurd.
> 
>>> Qu's suggestion is actually independent of all the above reasons, but
>>> does kind of fit in with the fourth as another case of preventative
>>> maintenance.
>>
>> My suggestion is to use balance to reduce the number of block groups, so
>> less searching is needed at mount time.
>>
>> It's more like reason 2.
>>
>> But it only helps in cases where heavy fragmentation leaves a lot of
>> chunks under-utilized.
>> Unfortunately, that's not the case for the OP, so my suggestion doesn't
>> apply here.
> 
> I ran the balance all the same, and the number of chunks has not
> changed.  Before 3454, and after 3454:
>  $ sudo btrfs-debug-tree -t chunk /dev/sdb | grep CHUNK_ITEM | wc -l
> 3454
> 
> HOWEVER, the time to mount has gone up significantly, from
> 11.537s to 16.553s, which was very unexpected.  Output from previously
> run commands shows the extent tree metadata grew about 25% due to the
> balance.  Everything else stayed roughly the same, and no additional
> data was added to the system (nor snapshots taken, nor additional
> volumes added, etc):
> 
> Before balance:
> $ sudo ./show_metadata_tree_sizes.py /mnt/btrfs/
> ROOT_TREE           1.14MiB 0(    72) 1(     1)
> EXTENT_TREE       644.27MiB 0( 41101) 1(   131) 2(     1)
> CHUNK_TREE        384.00KiB 0(    23) 1(     1)
> DEV_TREE          272.00KiB 0(    16) 1(     1)
> FS_TREE            11.55GiB 0(754442) 1(  2179) 2(     5) 3(     2)
> CSUM_TREE           3.50GiB 0(228593) 1(   791) 2(     2) 3(     1)
> QUOTA_TREE            0.00B
> UUID_TREE          16.00KiB 0(     1)
> FREE_SPACE_TREE       0.00B
> DATA_RELOC_TREE    16.00KiB 0(     1)
> 
> After balance:
> $ sudo ./show_metadata_tree_sizes.py /mnt/btrfs/
> ROOT_TREE           1.16MiB 0(    73) 1(     1)
> EXTENT_TREE       806.50MiB 0( 51419) 1(   196) 2(     1)
> CHUNK_TREE        384.00KiB 0(    23) 1(     1)
> DEV_TREE          272.00KiB 0(    16) 1(     1)
> FS_TREE            11.55GiB 0(754442) 1(  2179) 2(     5) 3(     2)
> CSUM_TREE           3.49GiB 0(227920) 1(   804) 2(     2) 3(     1)
> QUOTA_TREE            0.00B
> UUID_TREE          16.00KiB 0(     1)
> FREE_SPACE_TREE       0.00B
> DATA_RELOC_TREE    16.00KiB 0(     1)
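
Side note: that "about 25%" matches the two EXTENT_TREE totals pasted
above. A throwaway Python sanity check (values copied straight from the
show_metadata_tree_sizes.py output):

```python
# Rough check of the "grew about 25%" claim, using the EXTENT_TREE
# totals quoted above (before and after the balance).
before_mib = 644.27  # EXTENT_TREE size before balance, in MiB
after_mib = 806.50   # EXTENT_TREE size after balance, in MiB

growth_pct = (after_mib - before_mib) / before_mib * 100
print(f"extent tree grew {growth_pct:.1f}%")  # prints 25.2%
```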

Huh, interesting.

What's the output of `btrfs fi df /mountpoint` and of `grep btrfs
/proc/self/mounts` (does it contain 'ssd'?), and which kernel version is
this? (I got a bit lost in the many messages and subthreads of this
thread.) I also can't find in the thread which exact command "the
balance" refers to.

And what does this tell you?

https://github.com/knorrie/python-btrfs/blob/develop/examples/show_free_space_fragmentation.py

Just to make sure you're not pointlessly shovelling data around on a
filesystem that is already in bad shape.
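
If it's easier to script, the same mount-option check can be done in a
few lines of Python (the sample mount line below is made up for
illustration; on a real system you'd read /proc/self/mounts):

```python
# Hedged sketch: find btrfs entries in /proc/self/mounts-style text and
# check whether their mount options include 'ssd'.

def btrfs_mounts(mounts_text):
    """Yield (device, mountpoint, options) for each btrfs entry."""
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/self/mounts fields: device, mountpoint, fstype, options, ...
        if len(fields) >= 4 and fields[2] == "btrfs":
            yield fields[0], fields[1], fields[3].split(",")

# Illustrative sample line (device, path, and options are made up):
sample = "/dev/sdb /mnt/btrfs btrfs rw,relatime,ssd,space_cache 0 0"
for dev, mnt, opts in btrfs_mounts(sample):
    print(dev, mnt, "ssd" in opts)  # prints: /dev/sdb /mnt/btrfs True
```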

>> BTW, if the OP still wants to try something that could possibly reduce
>> mount time on the same fs, I could try some modifications to the current
>> block group iteration code to see if they make sense.
> 
> I'm glad to try anything if it's helpful to improving BTRFS.  Just let
> me know.
> 
> Best,
> 
> ellis


-- 
Hans van Kranenburg