Here is the btrfs-show-super output:

nazar-pc@nazar-pc ~> sudo btrfs-show-super /dev/sda1
superblock: bytenr=65536, device=/dev/sda1
---------------------------------------------------------
csum            0x1e3c6fb8 [match]
bytenr            65536
flags            0x1
            ( WRITTEN )
magic            _BHRfS_M [match]
fsid            40b8240a-a0a2-4034-ae55-f8558c0343a8
label            Backup
generation        165491
root            143985360896
sys_array_size        226
chunk_root_generation    162837
root_level        1
chunk_root        247023583232
chunk_root_level    1
log_root        0
log_root_transid    0
log_root_level        0
total_bytes        858993459200
bytes_used        276512202752
sectorsize        4096
nodesize        16384
leafsize        16384
stripesize        4096
root_dir        6
num_devices        1
compat_flags        0x0
compat_ro_flags        0x0
incompat_flags        0x169
            ( MIXED_BACKREF |
              COMPRESS_LZO |
              BIG_METADATA |
              EXTENDED_IREF |
              SKINNY_METADATA )
csum_type        0
csum_size        4
cache_generation    165491
uuid_tree_generation    165491
dev_item.uuid        81eee7a6-774e-4bb5-8b72-cebb85a2f2ce
dev_item.fsid        40b8240a-a0a2-4034-ae55-f8558c0343a8 [match]
dev_item.type        0
dev_item.total_bytes    858993459200
dev_item.bytes_used    291072114688
dev_item.io_align    4096
dev_item.io_width    4096
dev_item.sector_size    4096
dev_item.devid        1
dev_item.dev_group    0
dev_item.seek_speed    0
dev_item.bandwidth    0
dev_item.generation    0
It is sad that skinny metadata will only affect new data; I'll probably end up re-creating the filesystem :(

Can I rebalance it, or do something similarly simple, for this purpose?
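
If rebalancing is the way, I guess a metadata-only balance would look roughly
like this (just my guess at the commands; I'm not sure whether balance actually
rewrites the existing extent items into the skinny form):

# rewrite only the metadata block groups of the mounted backup filesystem
sudo btrfs balance start -m /backup_hdd
# check progress from another terminal
sudo btrfs balance status /backup_hdd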

Those are quite typical values for an already heavily used btrfs on a HDD.

That is bad news, since I'm mounting/unmounting a few times during snapshot creation because of how BTRFS works (source code: https://github.com/nazar-pc/just-backup-btrfs/blob/master/just-backup-btrfs.php#L148).
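
Roughly, every backup run does a cycle like this (a simplified sketch of what
the linked script does; the subvolume paths here are made up):

# mount the backup filesystem, take a read-only snapshot, unmount again
sudo mount /backup_hdd
sudo btrfs subvolume snapshot -r /backup_hdd/source /backup_hdd/snapshots/2016-02-24
sudo umount /backup_hdd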

So if 10+20 seconds is typical, then in my case the HDD can be very busy for a minute or sometimes more. This is not good, and it is basically part of, or even the real reason for, my initial question.

Sincerely, Nazar Mokrynskyi
github.com/nazar-pc
Skype: nazar-pc
Diaspora: naza...@diaspora.mokrynskyi.com
Tox: A9D95C9AA5F7A3ED75D83D0292E22ACE84BA40E912185939414475AF28FD2B2A5C8EF5261249

On 24.02.16 23:32, Henk Slager wrote:
On Tue, Feb 23, 2016 at 6:44 PM, Nazar Mokrynskyi <na...@mokrynskyi.com> wrote:
Looks like btrfstune -x did nothing; probably the flag was already set at
creation time. I'm using rcX versions of the kernel all the time and a rolling
version of Ubuntu, so this is very likely to be the case.
The command btrfs-show-super shows the features of the
filesystem. You have 'dummy' single profiles on the HDD fs, and that
gives me a hint that you likely used older tools to create the
fs. The current kernel does not set this feature flag on disk. If the
flag was already set, then there is no difference in performance.
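
For example, setting the flag and then checking whether it shows up would be
something like this (a sketch; the device name is just an example, and
btrfstune needs the filesystem to be unmounted):

# enable skinny metadata extents; a no-op if the flag is already set
sudo btrfstune -x /dev/sdX1
# the decoded incompat_flags list should then contain SKINNY_METADATA
sudo btrfs-show-super /dev/sdX1 | grep -A 6 incompat_flags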

If it was not set, then from now on new metadata extents should be
skinny, which saves on total memory size and processing for (the
larger) filesystems. But for your existing data (snapshot subvolumes
in your case) the metadata is still non-skinny. So you won't
notice an instant difference, only after all existing file blocks are
re-written or removed.
You would probably see a measurable difference if you equally filled 2
filesystems, one with and the other without the flag.

One thing I've noticed is much slower mount/umount on HDD than on SSD:

nazar-pc@nazar-pc ~> time sudo umount /backup
0.00user 0.00system 0:00.01elapsed 36%CPU (0avgtext+0avgdata
7104maxresident)k
0inputs+0outputs (0major+784minor)pagefaults 0swaps
nazar-pc@nazar-pc ~> time sudo mount /backup
0.00user 0.00system 0:00.03elapsed 23%CPU (0avgtext+0avgdata
7076maxresident)k
0inputs+0outputs (0major+803minor)pagefaults 0swaps
nazar-pc@nazar-pc ~> time sudo umount /backup_hdd
0.00user 0.11system 0:01.04elapsed 11%CPU (0avgtext+0avgdata
7092maxresident)k
0inputs+15296outputs (0major+787minor)pagefaults 0swaps
nazar-pc@nazar-pc ~> time sudo mount /backup_hdd
0.00user 0.02system 0:04.45elapsed 0%CPU (0avgtext+0avgdata
7140maxresident)k
14648inputs+0outputs (0major+795minor)pagefaults 0swaps
It is especially long (tens of seconds, with high HDD load) when called
after some time has passed, rather than immediately one after another.

Once it took something like 20 seconds to unmount the filesystem and around 10
seconds to mount it.
Those are quite typical values for an already heavily used btrfs on a HDD.

About memory: 16 GiB of RAM should be enough, I guess :) Can I somehow
measure whether seeking is a problem?
I don't know a tool that can measure seek times, gather statistics
over an extended period of time, and relate that to filesystem-internal
actions. It would be best if all this were done by the HDD
firmware (under command of the filesystem code). One could make a model
of it, I think, but the question is how good that would be for modern drives.
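
As a rough workaround, one can at least watch how busy the disk is while a
mount is running, e.g. with iostat from the sysstat package (just a sketch,
using the mountpoint from your mails):

# in one terminal: extended per-device statistics, refreshed every second
iostat -x 1
# in another terminal: the slow operation to observe
time sudo mount /backup_hdd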