On Fri, Mar 07, 2014 at 01:13:53AM +, Michael Russo wrote:
Duncan 1i5t5.duncan at cox.net writes:
But if you're not using compression, /that/ can't explain it...
Ha! Well while that was an interesting discussion of fragmentation,
I am only using the default mount options here and
Duncan, thank you for this comprehensive post. Really helpful as always!
[...]
As for restoring, since a snapshot is a copy of the filesystem as it
existed at that point, and the method btrfs exposes for accessing them is
to mount that specific snapshot, to restore an individual file from a
Hugo Mills posted on Fri, 07 Mar 2014 08:02:13 + as excerpted:
On Fri, Mar 07, 2014 at 01:13:53AM +, Michael Russo wrote:
Duncan 1i5t5.duncan at cox.net writes:
But if you're not using compression, /that/ can't explain it...
Ha! Well while that was an interesting discussion
With kernel 3.13.5 (Ubuntu mainline), when plugging in an (evidently
twitchy) USB3 stick with a BTRFS filesystem, I hit an oops in read()
[1].
Full dmesg output is at:
http://quora.org/2014/btrfs-oops.txt
Thanks,
Daniel
-- [1]
IP: 0010:[8135eaf6] [8135eaf6] memcpy+0x6/0x110
On 03/07/2014 05:55 AM, Daniel J Blueman wrote:
With kernel 3.13.5 (Ubuntu mainline), when plugging in an (evidently
twitchy) USB3 stick with a BTRFS filesystem, I hit an oops in read()
[1].
Full dmesg output is at:
Duncan 1i5t5.duncan at cox.net writes:
*But*, btrfs snapshots by themselves remain on the existing btrfs
filesystem, and thus are subject to many of the same risks as the
filesystem itself. As you mentioned, RAID is redundancy, not backup;
snapshots aren't backup either; snapshots are
Thanks Hugo, that makes sense, and maybe leads to a possible way to fix the
issue in future versions of btrfs-convert or a way to handle it in the balance
code.
What I did to find files with extents:
cd /mymedia
find . -type f -print0 | xargs -0 filefrag | grep -v 1\ extent | grep -v 0\
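A quoting-safe variant of that pipeline can be wrapped in a helper. This is only a sketch: list_fragmented is a hypothetical name, and it assumes filefrag's usual per-file output of "<path>: N extents found" (with "1 extent found" for the singular case).

```shell
# Hypothetical helper: list files whose data is split across more than
# one extent. The grep drops filefrag's "0 extents found" and
# "1 extent found" lines, leaving only fragmented files.
list_fragmented() {  # usage: list_fragmented DIR
    find "$1" -type f -print0 \
      | xargs -0 filefrag 2>/dev/null \
      | grep -Ev ': (0 extents|1 extent) found$'
}
```

The -print0/-0 pair keeps filenames with spaces or newlines intact, which the bare backslash-escaped greps above do not guarantee.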
Eric Mesa wrote (ao):
Duncan - thanks for this comprehensive explanation. Through a huge portion of
your reply I was wondering why you and others were saying snapshots
aren't backups. They certainly SEEMED like backups. But now I see that the
problem is one of precise terminology vs
Most of the user-level scripts use /proc/self/mounts for the
disk-path to mount-point to fsid mapping. But when a seed disk is
present, which generally has the lowest devid, /proc/self/mounts
would show the seed disk, and the seed disk has a different fsid from
the actual fsid that's mounted. Due to this,
ioctl(BTRFS_IOC_FS_INFO) returns a num_devices which does not
count the seed device. num_devices is used to calculate
the number of slots for the ioctl(BTRFS_IOC_DEV_INFO) calls,
but ioctl(BTRFS_IOC_DEV_INFO) counts seed devices as well.
Due to this mismatch, btrfs_progs' get_fs_info() hits the bug.
The intended usage of total_devices and num_devices
should be recorded in the comments so that these
two counters can be used correctly as originally
intended. As of now there appear to be slight
deviations/bugs from the original intention; it is
apparent that num_devices does not count
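The mount-table lookup those user-level scripts do can be sketched like this (mount_source is a hypothetical helper; field 1 of the mounts table is the device, field 2 the mount point, and with a seed device attached the device reported for a btrfs mount can be the seed disk, whose fsid differs from the mounted filesystem's):

```shell
# Resolve a mount point to its source device by scanning the mounts
# table; MOUNTS_FILE defaults to /proc/self/mounts.
mount_source() {  # usage: mount_source MOUNTPOINT [MOUNTS_FILE]
    awk -v mp="$1" '$2 == mp { print $1; exit }' "${2:-/proc/self/mounts}"
}
```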
On 02/27/2014 12:58 AM, Miao Xie wrote:
As we know, btrfs flushes as many contiguous pages as
possible, but if all the free spaces are small, we will allocate
the space in several passes, and if something goes wrong with
the space
On 02/27/2014 02:47 AM, Liu Bo wrote:
This is a preparation work, rename waiting_dir_move to
send_dir_node. We'd like to share waiting_dir_move structure in new
did_create_dir() code.
Signed-off-by: Liu Bo <bo.li@oracle.com>
---
v2: fix wrong
On 02/26/2014 09:23 AM, Austin S Hemmelgarn wrote:
Currently, btrfs balance start fails when trying to convert
metadata or system chunks to dup profile on filesystems with
multiple devices. This requires that a conversion from a
multi-device
On 02/24/2014 06:54 AM, Filipe David Borba Manana wrote:
Regression test for btrfs incremental send issue where a rmdir
instruction is sent against an orphan directory inode which is not
empty yet, causing btrfs receive to fail when it attempts to
The error message is confusing:
# btrfs sub delete /mnt/mysub/
Delete subvolume '/mnt/mysub'
ERROR: cannot delete '/mnt/mysub' - Directory not empty
The error message does not make sense to me: it's not a directory
being deleted but a subvolume, and it doesn't matter whether the subvolume is
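That wording appears to come straight from the kernel errno: deleting a non-empty subvolume fails with ENOTEMPTY, and the tool prints the generic libc string for it, the same one rmdir shows for a plain directory (demonstrated here with mktemp on a glibc/coreutils system):

```shell
# Reproduce the generic ENOTEMPTY message with an ordinary directory:
d=$(mktemp -d)
touch "$d/file"
rmdir "$d" 2>&1 | grep -o 'Directory not empty'
rm -rf "$d"
```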
On 02/20/2014 05:08 AM, Miao Xie wrote:
Signed-off-by: Miao Xie <mi...@cn.fujitsu.com>
---
 fs/btrfs/ctree.c   | 25 ++---
 fs/btrfs/ctree.h   | 39 +--
 fs/btrfs/disk-io.c | 33
Alright! After doing:
cd /mymedia; find . -type f | while read file; do mv -v "$file" /dev/shm;
f2=$(basename "$file"); mv -v "/dev/shm/$f2" "$file"; done
I finally moved whatever files out of the single allocation and back onto the
new RAID1 profile:
root@ossy:~# /usr/src/btrfs-progs/btrfs fi df
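A safer shape for that rewrite loop might look like the following sketch (rewrite_tree is a hypothetical helper; copying out and then moving back keeps the file present at all times, unlike the bare mv round-trip, while still forcing the data to be reallocated under the new profile):

```shell
# Copy each file to a scratch directory on another filesystem, then
# move it back over the original so its extents are reallocated.
rewrite_tree() {  # usage: rewrite_tree DIR SCRATCH_DIR
    find "$1" -type f -print0 | while IFS= read -r -d '' file; do
        tmp="$2/$(basename "$file").$$"
        cp -a -- "$file" "$tmp" && mv -- "$tmp" "$file"
    done
}
```

The -print0/read -d '' pairing and the quoted expansions keep filenames with spaces intact.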
Hi there,
I tried to perform an incremental backup as described in
https://btrfs.wiki.kernel.org/index.php/Incremental_Backup between 2 external
USB drives,
The 1st btrfs send foo/snap1 | btrfs receive bar went well, although it took
5-6 times as long as the same workload takes in ZFS.
Then
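For the incremental step that wiki page describes, the second send names the first snapshot as the parent via -p so only the delta crosses the pipe. Sketched here as a command builder (incremental_send_cmd is a hypothetical helper and the paths are illustrative), since actually running it needs root and read-only snapshots:

```shell
# Assemble the incremental pipeline: send only the difference between
# PARENT and CHILD, replaying it into DEST.
incremental_send_cmd() {  # usage: incremental_send_cmd PARENT CHILD DEST
    printf 'btrfs send -p %s %s | btrfs receive %s\n' "$1" "$2" "$3"
}
incremental_send_cmd foo/snap1 foo/snap2 bar
```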