I've been using btrfs for two months now. Every day between 02:00 and
08:00 I rsync some 300GB of data (millions of files) to a btrfs device
and then make a snapshot. The next day I rsync the same 300GB again,
little changed (rsync "in place"). The first days it worked perfectly.
Then loadavg (sys load) started to rise. Now
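For reference, the nightly cycle is roughly the following; the paths,
subvolume layout, and snapshot naming below are only illustrative
placeholders, not my exact setup:

# Sketch of the nightly backup cycle (placeholder paths): rsync updates
# files in place inside a subvolume, then a snapshot of that subvolume
# is taken.
rsync -a --inplace --delete /source/data/ /mnt/backup/current/
btrfs subvolume snapshot /mnt/backup/current \
    /mnt/backup/snapshots/$(date +%Y-%m-%d)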
Yes, yesterday I unmounted the partition and re-mounted it. Nothing
changed this night.
You can look at my load graphs here:
http://img42.imageshack.us/img42/4661/33737291.png
http://img210.imageshack.us/img210/2742/46527625.png
On the second one, blue is SYS load. I bet you can easily spot the time.
I see that this high sys load and loadavg did not appear gradually. It
appeared for the first time about 7 days ago and has been present during
every backup, every night since then.
During the backup, top shows higher CPU load on
4216 root 20 0 000 R 20.3 0.0 284:25.83 btrfs-delayed-m
4222 root
I've rebooted the server and run the backup to the btrfs partition
again. It seems the problem is gone; the high sys load does not occur
now. So it is some bug in btrfs... Before the reboot the server had 30
days of uptime, so it's really not much.
usrquota is very useful for counting (and limiting) the amount of data
put on disk by system users. It's otherwise impossible to quickly
calculate disk usage by system users. Btrfs should support this in the
future. usrquota is supported by ext2/ext3/reiserfs and I guess it
updates its internal database every time a fil
There should be a way to make automatic checkpoints less frequent. On
the busy SSD I have about 7 checkpoints every second. If it were, for
example, once every 5 minutes, then one could set garbage removal to
every few days.
Also, garbage removal should have an option to clean only if, say, 90%
of the drive is used. there
I understand the btrfs intent, but the same command run twice should
not give different results. This really makes snapshot automation hard:
root@sv12 [/ssd]# btrfs subvolume snapshot /ssd/sub1 /ssd/5
Create a snapshot of '/ssd/sub1' in '/ssd/5'
root@sv12 [/ssd]# btrfs subvolume snapshot /ssd/sub1 /ssd/5
Cr
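A minimal sketch of a guard for the snapshot automation (paths as in the
transcript above): skip the snapshot if the target already exists, so a
second run does not quietly create a nested snapshot inside the first
one.

# Guard sketch for a snapshot script; without the check, the second
# invocation puts a new snapshot inside /ssd/5 instead of failing.
if [ -e /ssd/5 ]; then
    echo "/ssd/5 already exists, not snapshotting" >&2
else
    btrfs subvolume snapshot /ssd/sub1 /ssd/5
fi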
When I used a btrfs subvolume, without separately mounting it, for the
mysql database dir, I got
Fatal error: Can't open and lock privilege tables: Table 'host' is read only
when starting mysql. When I put the mysql database dir in the root of
btrfs, not in a subvolume, mysql works fine.
I've checked file fo
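To be concrete about what "separately mounting it" would look like, a
sketch (device and subvolume name are made-up placeholders):

# Hypothetical device and subvolume name, only to illustrate mounting
# the subvolume directly instead of reaching it through the top-level
# btrfs mount.
mount -t btrfs -o subvol=mysql /dev/sdb1 /var/lib/mysql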
kernel: 2.6.37.6
At least twice I've experienced problems when using btrfs on an SSD for
postgresql db storage. The server gets frozen, I can't even kill it
using kill -9, and I can't chown the postgresql db dir (it never gets
done). If I initiate a copy operation of the postgresql db dir to
another partition (withou
btrfs balance takes a very long time on large filesystems. The
possibility of running balance on only one subvolume, or running balance
to reclaim space only after deleting a subvolume, would be useful.
Documentation says that btrfs-image zeros data. The feature request is
for disabling this. btrfs-image could then be used to copy a filesystem
to another drive (for example with snapshots, where copying it file by
file would take much longer or actually was not possible (snapshots)).
btrfs-image in turn c
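What I have in mind is roughly the following sketch (device names are
placeholders; the -r restore flag is per the btrfs-image manual, and the
"keep the data" behaviour is exactly what is being requested):

# Today the data blocks are zeroed in the dump, so the restored
# filesystem is only useful for metadata inspection, not as a copy.
btrfs-image /dev/sdb1 /backup/fs.img
btrfs-image -r /backup/fs.img /dev/sdc1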
2011/7/11 Stephane Chazelas :
> 2011-07-11 02:00:51 +0200, krz...@gmail.com :
>> Documentation says that btrfs-image zeros data. Feature request is for
>> disabling this. btrfs-image could be used to copy filesystem to
>> another drive (for example with snapshots, when copyi
I wanted to confirm that btrfs will continue to work on raid1 when one
of the devices is gone.
dd if=/dev/null of=img0 bs=1 seek=2G
dd if=/dev/null of=img1 bs=1 seek=2G
mkfs.btrfs -d raid1 -m raid1 img0 img1
losetup /dev/loop1 img0
losetup /dev/loop2 img1
mkdir dir
mount -t btrfs /dev/loop1 dir
b
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
mkfs.btrfs -d raid1 -m raid1 img5 img6
losetup /dev/loop4 img5
losetup /dev/loop5 img6
btrfs device scan
mount -t btrfs /dev/loop4 dir
umount dir
losetup -d /dev/loop5
mount -t btrfs -o degraded /dev/loop4 dir
umount dir
lose
Thanks.
I don't see a reason why this needs another mount switch. This would
fail to start the whole system if the / partition was btrfs raid1, with
no reason to do so.
I wonder if it would be possible to implement instant unlinking of a
directory with files in it. Since btrfs is based on b-trees it could be
possible. The filesystem would have to "lose" all information on the
directory and the objects in it, and allow overwriting this information.
This would be a great feature, beca
You can't in all cases plan and predict where there will be a need for
deleting a large number of files. Also, subvolumes are difficult to
maintain, not to mention still quite buggy.
And besides, deleting is transparent to any programming language and
can be done with no special permissions, while subvolume deletion or
creation of course is not.
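For completeness, the subvolume-based alternative that keeps coming up
in this thread looks roughly like this (path is a placeholder):

# Keep data you expect to drop in bulk inside its own subvolume, so the
# whole tree can be detached in one command instead of unlinking every
# file. Needs root (or equivalent) privileges, which is exactly the
# limitation mentioned above.
btrfs subvolume create /data/scratch
# ... many files get written under /data/scratch ...
btrfs subvolume delete /data/scratch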
# uname -a
Linux dhcppc1 3.0.1--std-ipv6-64 #1 SMP Sun Aug 14 17:06:21 CEST 2011 x86_64 x86_64 x86_64 GNU/Linux
mkdir test5
cd test5
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
losetup /dev/loop2 img5
losetup /dev/loop3 img6
mkfs.btrfs -d raid1 -m raid1 /dev/loop2
btrfs is missing a recursive subvolume delete. You can't delete a
subvolume if there are other subvolumes/snapshots in it. Also, I think
there are no real tools to find out which directories are
subvolumes/snapshots.
I'm wondering, is space really freed after deleting a large subvolume?
Will the space be immediately available to other data, like other
subvolumes?
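One way to check this empirically (mount point and subvolume name are
placeholders): delete the subvolume and watch the free-space figures
for a while.

# btrfs filesystem df reports allocation per type, so repeated runs
# show whether the deleted subvolume's space is actually coming back.
btrfs subvolume delete /mnt/pool/old-snapshot
watch -n 10 btrfs filesystem df /mnt/pool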
> No, subvolume deletion is done in the background, and the space
> recovered will be returned for use relatively slowly. This is because
> the FS has to go through the metadata, updating reference counts for
> each extent in the subvolume to work out whether the space can be
> recovered or not.
> And what affects this rate of space reclaiming? When does it happen?
> Also I guess that if it happens it must lower overall IO performance
> and raise loadavg ...?
>
And finally, is the performance overhead the same as deleting the same
number of files on a filesystem like ext3? When you delete a large number of f
>> Only umount does, and it can take a very long time if you have
>> deleted a large (differing a lot) subvolume just before that.
Does it mean that I won't be able to cleanly reboot the machine after
deleting a subvolume with millions of files?
And the most important question is if deleting a btrfs subvol
> btrfs, like ext4, has support for extents, which can be any size, so
> typically if you delete a large file, then it occupies only one extent,
> so only that one extent needs to be marked as free, so a lot less I/O.
I know, I know. The issue is with many (small) files.
> If you delete a large number
Just an idea: I don't know if btrfs works like that or not, but the idea
would be that a b-tree filesystem should be able to "lose" or "discard"
branches by removing a node. Cut a tree node and the branches will fall
off, and get overwritten as empty space sometime in the future (just
like during data deletion).