Hello,
If I write a large sequential file in a snapshot, then create another snapshot,
overwrite the file with a small amount of data and delete the first snapshot, the
second snapshot ends up referencing a very large data extent of which only a small
part is used.
For example, if I use the following sequence:
mkfs.btrfs /dev/sdn
mount -o noatime,nodatacow,nospace_cache /dev/sdn /mnt/b
btrfs sub snap /mnt/b /mnt/b/snap1
dd if=/dev/zero of=/mnt/b/snap1/t count=15000 bs=65535
sync
btrfs sub snap /mnt/b/snap1 /mnt/b/snap2
dd if=/dev/zero of=/mnt/b/snap2/t seek=3 count=1 bs=2048
sync
btrfs sub delete /mnt/b/snap1
btrfs-debug-tree /dev/sdn
I see the following data extents:
item 6 key (257 EXTENT_DATA 0) itemoff 3537 itemsize 53
        extent data disk byte 1103101952 nr 194641920
        extent data offset 0 nr 4096 ram 194641920
        extent compression 0
item 7 key (257 EXTENT_DATA 4096) itemoff 3484 itemsize 53
        extent data disk byte 2086129664 nr 4096
        extent data offset 0 nr 4096 ram 4096
        extent compression 0
In item 6, only 4096 bytes of the 194641920-byte (~185 MiB) extent are referenced;
the rest of that space is wasted.
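The same thing should be visible at the filesystem level (a sketch, not from my
original run): after deleting snap1,
  btrfs filesystem df /mnt/b
should still count the whole ~185 MiB pinned extent as Data "used", even though
snap2/t only references 8 KiB.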
If I defragment, e.g. btrfs filesystem defragment /mnt/b/snap2/t, the wasted space
is released. But I can't use defragment, because if I have a few snapshots I need
to run it on each snapshot, which disconnects the relation between the snapshots
and creates multiple copies of the same data.
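Just to illustrate (a sketch, assuming all snapshots are named snap* under /mnt/b
and each contains the file t):
  for snap in /mnt/b/snap*; do
      btrfs filesystem defragment "$snap/t"   # rewrites t in every snapshot
  done
Each run rewrites the file in that snapshot, so the extents are no longer shared
and the data gets duplicated once per snapshot.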
In our tests, which create and delete snapshots while writing data, we end up with
a few GB of disk space wasted.
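The pattern is roughly like this (only a simplified sketch of the test; names and
sizes are made up):
  btrfs sub snap /mnt/b /mnt/b/snap0
  for i in $(seq 0 9); do
      dd if=/dev/zero of=/mnt/b/snap$i/t$i bs=1M count=512                # large sequential write
      sync
      btrfs sub snap /mnt/b/snap$i /mnt/b/snap$((i+1))
      dd if=/dev/zero of=/mnt/b/snap$((i+1))/t$i seek=3 count=1 bs=2048   # small overwrite
      sync
      btrfs sub delete /mnt/b/snap$i                                      # old snapshot removed,
  done                                                                    # large extents stay pinned
Every iteration leaves a large extent that is kept allocated by a tiny reference in
the surviving snapshot, so the waste accumulates.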
Is it possible to limit the size of allocated data extents?
Is it possible to defragment a subvolume without breaking the relations between snapshots?
Any other ideas how to recover the wasted space?
Thanks,
Moshe Melnikov