Hi,
after scrub start, scrub cancel, umount, and mount of a two-disk raid1
(data + metadata):
[12999.229791] ======================================================
[12999.236029] WARNING: possible circular locking dependency detected
[12999.242261] 4.14.35 #36 Not tainted
[12999.245806]
Hello,
this warning happened during "btrfs subvolume delete" of a
readonly snapshot, after the newest snapshot in the series had
been "btrfs received".
[96857.000284] ------------[ cut here ]------------
[96857.000307] WARNING: CPU: 1 PID: 371 at kernel/locking/lockdep.c:704
Hello,
I just got a BUG on mount of a raid10 fs. /dev/sde was added to
the fs recently and a balance had been started. After a reboot (balance
still running), the fs cannot be mounted any more.
# btrfs fi sh
Label: 'BTR0' uuid: 0ec83db3-4574-4e40-8d57-ebbe9fe246e1
Total devices 5 FS
Hello Satoru and all,
that Oct. report was the only time I've experienced the error, so I
don't have much to add. I can try to answer your questions:
Here are my questions.
1. Is your system btrfs scrub clean?
yes,
2. Is this message shown every boot time?
no, I have seen them only
Hello,
On Mon, 27 Oct 2014 13:44:22 +, Filipe David Manana wrote:
On Mon, Oct 27, 2014 at 12:11 PM, Filipe David Manana
fdman...@gmail.com wrote:
On Mon, Oct 27, 2014 at 11:08 AM, Miao Xie mi...@cn.fujitsu.com wrote:
On Mon, 27 Oct 2014 09:19:52 +, Filipe Manana wrote:
We have a race
Hello,
[...]
This patch seems to fix https://bugzilla.kernel.org/show_bug.cgi?id=64961
for me: I've been testing it together with
[PATCH] Btrfs: fix invalid leaf slot access in btrfs_lookup_extent()
on top of 3.18-rc2 since yesterday, and so far no crashes during balance
or device remove.
Hello,
[$] uname -a
Linux beplan 3.18.0-rc1-next-20141023-ARCH-dirty #1 SMP PREEMPT Sat
Oct 25 22:19:01 FET 2014 x86_64 GNU/Linux
I have a custom kernel config with the debug features disabled.
If needed, I can enable them and recompile the kernel.
After this message, system boot
Hello,
You have mentioned two issues when balance and fi show are running
concurrently
my mail was a bit chaotic, but I get the stalls even on an idle system.
Today I got
parent transid verify failed on 1559973888000 wanted 1819 found 1821
parent transid verify failed on 1559973888000 wanted
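For comparing the wanted/found generations across many such messages, the mismatches can be pulled out of a saved log; a minimal sketch, assuming the messages were captured to a file (the file path is an assumption, the sample line is the one quoted above):

```shell
# Sketch: extracting transid mismatches from a saved kernel log.
# The sample line is from the report above; /tmp/dmesg-sample.txt is a
# stand-in for wherever your dmesg output was saved.
cat > /tmp/dmesg-sample.txt <<'EOF'
parent transid verify failed on 1559973888000 wanted 1819 found 1821
EOF
# Pull out just the wanted/found generation numbers for comparison.
grep -o 'wanted [0-9]* found [0-9]*' /tmp/dmesg-sample.txt
# prints: wanted 1819 found 1821
```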
Hello Gui,
Oh, it seems that there are btrfs filesystems with missing devices
that are causing trouble in the open_ctree_... function.
what do you mean by missing devs? I have no degraded fs.
The time btrfs fi sh spends scanning disks of a filesystem seems to
be proportional to the amount of data
Hello,
the version 3.17 of btrfs-progs has been released.
on a system with a 3-disk raid1 and 4- and 5-disk raid10 filesystems,
btrfs filesystem show now stalls for approx. half a minute after the
listing, just before the version information. During that time, it
often prints something like
[...]
Hello,
one more thing: I just overwrote part of one disk.
btrfs filesystem show could be more helpful diagnosing this:
# btrfs fi sh
Label: 'BTRFSROOT' uuid: d877125e-9b8d-47ea-b57b-7411292fd26c
Total devices 1 FS bytes used 2.91GiB
devid    1 size 29.44GiB used 5.04GiB
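One low-level check that does not depend on btrfs fi sh is reading the superblock magic directly. A hedged sketch against a throwaway image file; the path is an assumption, and the offsets follow the published on-disk format (primary superblock at 64 KiB, with the 8-byte magic "_BHRfS_M" 0x40 bytes into it):

```shell
# Sketch: verifying the primary btrfs superblock magic.
# /tmp/fake.img is a stand-in for the overwritten disk; on real hardware
# you would point if= at the device node instead of writing anything.
img=/tmp/fake.img
truncate -s 1M "$img"
# Simulate an intact superblock by writing the magic where mkfs.btrfs puts it.
printf '_BHRfS_M' | dd of="$img" bs=1 seek=$((65536 + 64)) conv=notrunc 2>/dev/null
# Read the 8 magic bytes back; anything else means that copy was clobbered.
dd if="$img" bs=1 skip=$((65536 + 64)) count=8 2>/dev/null; echo
# prints: _BHRfS_M
```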
Hello,
so far I haven't succeeded in running btrfs balance on a large
skinny-metadata fs -- segfault, kernel bug, reproducible. No such
problems on a ^skinny-metadata fs (same disks, same data). Tried both
several times on 3.17. More info in comments 10,14 in
Hello,
the core of skinny-metadata feature has been merged in 3.10 (Jun 2013)
and has been reportedly used by many people. No major bugs were reported
lately unless I missed them.
so far I haven't succeeded in running btrfs balance on a large
skinny-metadata fs -- segfault, kernel bug,
Hello,
I have trouble finishing btrfs balance on five disk raid10 fs.
I added a disk to a 4x3TB raid10 fs and ran btrfs balance start
/mnt/b3, which segfaulted after a few hours, probably because of the BUG
below. btrfs check does not find any errors, both before the balance
and after reboot (the
Hello,
on a fs with 4 disks, raid10 for data, one drive was failing and has
been removed. After a reboot and 'mount -o degraded...', the fs looks
full, even though before removal of the failed device it was almost
80% free.
root@fs0:~# df -h /mnt/b
Filesystem Size Used Avail Use% Mounted
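Part of the confusion here is arithmetic: raid10 keeps two copies of the data, so losing one of four equal disks drops the usable ceiling by a quarter. A rough sketch, assuming 3 TB (3000 GB) drives as in similar setups mentioned in this thread:

```shell
# Sketch: usable raid10 capacity before/after losing one of four 3TB drives.
# Sizes in GB are illustrative assumptions; btrfs raid10 stores 2 copies,
# so usable data space is roughly half the raw total.
disk_gb=3000
raw_before=$((4 * disk_gb))        # 12000 GB raw
raw_after=$(((4 - 1) * disk_gb))   # 9000 GB raw, degraded
echo "usable before: $((raw_before / 2)) GB"
echo "usable after:  $((raw_after / 2)) GB"
# Note: btrfs cannot allocate NEW raid10 chunks with only 3 devices, which
# is part of why a degraded mount can report no free space at all.
```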