On Sat, Aug 16, 2014 at 03:28:11PM +0800, Miao Xie wrote:
On Fri, 15 Aug 2014 23:36:53 +0800, Liu Bo wrote:
This has been reported and discussed for a long time, and this hang occurs in
both 3.15 and 3.16.
Btrfs has now migrated to the kernel workqueue, but that migration introduced this hang
I've attached the dmesg output from a system running Debian kernel 3.14.13
which locked up. Everything that needed to write to disk was blocked. The
dmesg output didn't catch the first messages, which had scrolled out of the
buffer. As the disk wasn't writable, there was nothing useful in
The ioctl BTRFS_IOC_FS_INFO returns num_devices, which does _not_ include the
seed device, but the following ioctl BTRFS_IOC_DEV_INFO counts and finds the
seed disk when probed. So in userland we hit a count-slot mismatch
bug.
get_fs_info()
::
BUG_ON(ndevs =
Yeah, btrfs filesystem show never worked for this case before, as shown in
the test case below.
mkfs.btrfs /dev/sdb -f
btrfstune -S 1 /dev/sdb
mount /dev/sdb /btrfs
btrfs dev add /dev/sdc /btrfs
btrfs fi show -- fails.
Kindly refer to the commit log for details of the bug and its fix.
Anand Jain (1):
btrfs-progs:
The count as returned by BTRFS_IOC_FS_INFO is the number of slots that
btrfs-progs would allocate for the BTRFS_IOC_DEV_INFO ioctl. Since
BTRFS_IOC_DEV_INFO loops across the seed devices as well, it is better that
the ioctl BTRFS_IOC_FS_INFO returns total_devices instead of num_devices.
As mentioned in the kernel patch
btrfs: ioctl BTRFS_IOC_FS_INFO and
BTRFS_IOC_DEV_INFO mismatched with slots
The count as returned by BTRFS_IOC_FS_INFO is the number of slots that
btrfs-progs would allocate for the BTRFS_IOC_DEV_INFO ioctl. Since
BTRFS_IOC_DEV_INFO would loop across the seed
Hi,
I did a checkout of the latest btrfs progs to repair my damaged filesystem.
Running btrfs restore gives me several "failed to inflate: -6" errors, and it
crashes with some memory corruption. I ran it again with valgrind and got:
valgrind --log-file=x2 -v --leak-check=yes btrfs restore /dev/sda9
Signed-off-by: David Disseldorp dd...@suse.de
---
Documentation/btrfs-subvolume.txt | 2 +-
cmds-subvolume.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/Documentation/btrfs-subvolume.txt
b/Documentation/btrfs-subvolume.txt
index a519131..c8b9928 100644
On 05 August 2014 at 23:32 Zach Brown z...@zabbo.net wrote:
Hello Zach,
Here's an untested patch which
Try testing it. It's easy with virtualization and xfstests.
You'll find that sending to a file fails because each individual file
write call that makes up a
Hi,
I ran the fs_mark test on a single empty hard drive. After the test, the df -h
results are:
/dev/sdk1 917G 39G 832G 5% /ext4
/dev/sdj1 932G 53G 850G 6% /btrfs
The test result for btrfs shows it ran for 15 hours. Note there is no file/dir
remove operation
Good questions, and good comments have already been given.
For another view...
On 17/08/14 13:31, Duncan wrote:
Shriramana Sharma posted on Sun, 17 Aug 2014 14:26:06 +0530 as excerpted:
Hello. One more Q re generic BTRFS behaviour.
https://btrfs.wiki.kernel.org/index.php/Main_Page specifically
MM == Marc MERLIN m...@merlins.org writes:
MM Note 3.16.0 is actually worse than 3.15 for me.
Here (a single partition btrfs), 3.16.0 works fine, but 3.17-rc1 fails again.
My /var/log is also a compressed, single-partition btrfs; that doesn't
show the problem with any version. Just the
btrfs_drop_snapshot() leaves subvolume qgroup items on disk after
completion. This can cause problems with snapshot creation. If a new
snapshot tries to claim the deleted subvolume's id, btrfs will get -EEXIST
from add_qgroup_item() and go read-only. The following commands will
reproduce this
On Sun, Aug 17, 2014 at 03:09:21PM -0500, Eric Sandeen wrote:
Coverity pointed this out; in the newly added
qgroup_subtree_accounting(), if btrfs_find_all_roots()
returns an error, we leak at least the parents pointer,
and possibly the roots pointer, depending on what failure
occurs.
If
This reproduces in an untainted kernel, 3.17.0-0.rc1.git0.1.fc22.x86_64. I
still used btrfs-progs v3.14.2-167-ge514381 to create the new raid5 volume, so
it seems whatever fixed it in for-linus is not in for-linus2.
[ 45.935848] BTRFS info (device sdc): disk space caching is enabled
[
On Mon, Aug 18, 2014 at 05:38:17PM +, Ming Lei wrote:
Hi,
I ran the fs_mark test on a single empty hard drive. After the test, the df
-h results are:
/dev/sdk1 917G 39G 832G 5% /ext4
/dev/sdj1 932G 53G 850G 6% /btrfs
The test result for btrfs
Hi,
Description of the problem:
mount btrfs with an SELinux context, then create a subvolume; the new
subvolume cannot be mounted, even with the same context.
mkfs -t btrfs /dev/sda5
mount -o context=system_u:object_r:nfs_t:s0 /dev/sda5 /mnt/btrfs
btrfs subvolume create /mnt/btrfs/subvol
mount -o
Martin posted on Mon, 18 Aug 2014 19:16:20 +0100 as excerpted:
Also, for the file segment being defragged, abandon any links to other
snapshots, to in effect deliberately replicate the data where appropriate,
so that the data segment is fully defragged.
FWIW, this is the current state.
The initial
Martin posted on Mon, 18 Aug 2014 19:16:20 +0100 as excerpted:
OTOH, I tend to be rather more of an independent partition booster than
many. The biggest reason for that is the "too many eggs in one basket"
problem. Fully separate filesystems on separate partitions...
I do so similarly
Shriramana Sharma posted on Sun, 17 Aug 2014 18:17:48 +0530 as excerpted:
Hello. This is wrt this thread:
http://www.spinics.net/lists/linux-btrfs/msg36639.html
The OP of that thread had not clarified (IMO) what exactly he meant by the
unreliability of btrfs send/receive. Is it only via