I've been using btrfs for two months now. Every day between 02:00 and
08:00 I rsync some 300GB of data (millions of files) to a btrfs device
and then make a snapshot. The next day I rsync the same 300GB again,
only slightly changed (rsync in place). The first days it worked
perfectly. Then loadavg (sys load) started to rise. Now,
Yes, yesterday I unmounted the partition and re-mounted it. Nothing
changed this night.
You can look at my load graphs here:
http://img42.imageshack.us/img42/4661/33737291.png
http://img210.imageshack.us/img210/2742/46527625.png
On the second one, blue is SYS load. I bet you can easily spot
I've rebooted the server and run the backup to the btrfs partition
again. It seems the problem is gone; the high sys load does not occur
now. So it is some bug in btrfs... Before the reboot the server had 30
days of uptime, so that's really not much.
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
I'm wondering: is space really freed after deleting a large subvolume?
Will the space be immediately available to other data, like other
subvolumes?
No, subvolume deletion is done in the background, and the space
recovered will be returned for use relatively slowly. This is because
the FS has to go through the metadata, updating reference counts for
each extent in the subvolume to work out whether the space can be
recovered or not.
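The refcounting described above can be sketched as a toy model (the names and structures here are hypothetical illustrations, not btrfs's actual on-disk format): each extent carries a reference count, deleting a snapshot decrements the count of every extent it references, and the space is only recoverable once a count reaches zero.

```python
# Toy model of reference-counted extent reclaim (hypothetical names,
# not btrfs's real metadata). The cost of the deletion is one metadata
# update per referenced extent, which is why it runs in the background.
def delete_snapshot(extent_refcounts, snapshot_extents):
    """Decrement refcounts; return the set of extents whose space is freed."""
    freed = set()
    for ext in snapshot_extents:
        extent_refcounts[ext] -= 1           # one metadata update per extent
        if extent_refcounts[ext] == 0:
            freed.add(ext)                   # no other snapshot still uses it
    return freed

# Two snapshots share extents 1 and 2; extent 3 is unique to the one deleted.
refs = {1: 2, 2: 2, 3: 1}
print(delete_snapshot(refs, [1, 2, 3]))      # {3} -- only extent 3 is freed
```

The point of the sketch is that the work scales with the number of extents the subvolume references, regardless of how much space is ultimately freed.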
And what affects this rate of space reclaiming? When does it happen?
Also, I guess that when it happens it must lower overall I/O
performance and raise loadavg...?
And finally, is the performance overhead the same as deleting the same
number of files on a filesystem like ext3? When you delete a large number of
Only umount does, and it can take a very long time if you have deleted
a large (much-differing) subvolume just before that.
Does it mean that I won't be able to cleanly reboot the machine after
deleting a subvolume with millions of files?
And the most important question is whether deleting a btrfs subvolume
btrfs, like ext4, has support for extents, which can be any size; so
typically if you delete a large file, it occupies only one extent, and
only that one extent needs to be marked as free, which is a lot less I/O.
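The extent argument above is just counting: the metadata work to free space scales with the number of extents, not the number of bytes. A rough sketch (the 128 MiB per-extent cap is an assumption about the maximum data extent size; the exact figure doesn't change the conclusion):

```python
# Back-of-the-envelope: extents to free when deleting files.
# Assumes a hypothetical 128 MiB cap on a single data extent.
def extents_to_free(file_sizes, max_extent_size=128 * 1024 * 1024):
    """Each file needs at least ceil(size / max_extent_size) extents."""
    return sum(-(-size // max_extent_size) for size in file_sizes)

one_big    = [300 * 2**30]        # one 300 GB file
many_small = [4096] * 1_000_000   # a million 4 KB files

print(extents_to_free(one_big))    # 2400 extent frees
print(extents_to_free(many_small)) # 1000000 extent frees: far more I/O
```

Deleting the same 300 GB as a million small files costs hundreds of times more metadata updates than deleting it as one contiguous file, which matches the "many small files" complaint below.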
I know, I know. The issue is with many (small) files.
If you delete a large number of
Just an idea: I don't know if btrfs works like that or not, but the
idea would be that a b-tree filesystem should be able to lose or
discard branches by removing a node. Cut a tree node and the branches
will fall off, and get overwritten as empty space sometime in the
future (just like during data deletion).
If
btrfs is missing recursive subvolume delete. You can't delete a
subvolume if there are other subvolumes/snapshots in it. Also, I think
there are no real tools to find out which directories are
subvolumes/snapshots.
2011/7/11 Stephane Chazelas stephane_chaze...@yahoo.fr:
2011-07-11 02:00:51 +0200, krz...@gmail.com :
I wanted to confirm that btrfs will continue to work on raid1 when one
of the devices is gone.
# create two 2G sparse backing files and make a raid1 btrfs on them
dd if=/dev/null of=img0 bs=1 seek=2G
dd if=/dev/null of=img1 bs=1 seek=2G
mkfs.btrfs -d raid1 -m raid1 img0 img1
losetup /dev/loop1 img0
losetup /dev/loop2 img1
mkdir dir
mount -t btrfs /dev/loop1 dir
# second raid1 pair
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
mkfs.btrfs -d raid1 -m raid1 img5 img6
losetup /dev/loop4 img5
losetup /dev/loop5 img6
btrfs device scan
mount -t btrfs /dev/loop4 dir
umount dir
# drop one half of the mirror, then mount degraded
losetup -d /dev/loop5
mount -t btrfs -o degraded /dev/loop4 dir
umount dir
Thanks.
I don't see a reason why this needs another mount switch. The whole
system would fail to start if the / partition was btrfs raid1, with no
reason to do so.
Documentation says that btrfs-image zeros data. The feature request is
for disabling this. btrfs-image could be used to copy a filesystem to
another drive (for example with snapshots, when copying it file by
file would take a much longer time or would actually not be possible
(snapshots)). btrfs-image in turn
kernel: 2.6.37.6
At least twice I've experienced problems when using btrfs with an ssd
for postgresql db storage. The server gets frozen; I can't even kill
processes using kill -9, and I can't chown the postgresql db dir (it
never finishes). If I initiate a copy operation of the postgresql db
dir to another partition
When I used a btrfs subvolume, without separately mounting it, for the
mysql database dir, I got
Fatal error: Can't open and lock privilege tables: Table 'host' is read only
when starting mysql. When I put the mysql database dir in the root of
the btrfs, not in a subvolume, mysql works fine.
I've checked file
There should be a way to make automatic checkpoints less frequent. On
the busy ssd I have about 7 checkpoints every second. If it were, for
example, once every 5 minutes, then one could set garbage removal to
run every few days.
Also, garbage removal should have an option to clean only if, say, 90%
of the drive is used.
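The arithmetic behind the request is worth spelling out (using the numbers from the post; the 5-minute interval is the poster's hypothetical):

```python
# Checkpoint churn per day at the observed rate vs. the proposed one.
per_second = 7                 # observed on the busy ssd
day = 24 * 60 * 60             # seconds in a day

print(per_second * day)        # 604800 checkpoints/day today
print(day // (5 * 60))         # 288/day at one checkpoint per 5 minutes
```

That is a roughly 2000x reduction in checkpoint frequency, which is why the poster expects garbage removal could then be deferred for days.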