Hi,
these two patches were sent as a part of a larger RFC which aims at
allowing GFP_NOFS allocations to fail to help sort out memory reclaim
issues bound to the current behavior
(http://marc.info/?l=linux-mm&m=143876830616538&w=2).
It is clear that the move to the GFP_NOFS behavior change is a long
From: Michal Hocko mho...@suse.com
Btrfs relies on GFP_NOFS allocation when committing the transaction but
this allocation context is rather weak wrt. reclaim capabilities. The
page allocator currently tries hard to not fail these allocations if
they are small (<=PAGE_ALLOC_COSTLY_ORDER) so this
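As an illustration of the pattern being argued for, here is a minimal userspace analogy (not the btrfs code itself; all names below are invented for the sketch): once small GFP_NOFS allocations are allowed to fail, each call site needs an explicit failure path rather than relying on the allocator to retry forever.

```c
#include <stdlib.h>

/* try_small_alloc() stands in for a kmalloc(GFP_NOFS) that may now
 * return NULL under memory pressure; alloc_with_fallback() shows the
 * caller handling that failure explicitly. */
static void *try_small_alloc(size_t sz, int under_pressure)
{
    if (under_pressure)
        return NULL;        /* allocator gave up instead of looping */
    return malloc(sz);
}

void *alloc_with_fallback(size_t sz, int under_pressure)
{
    void *p = try_small_alloc(sz, under_pressure);
    if (!p)
        p = malloc(sz);     /* explicit fallback path, e.g. a vmalloc-style retry */
    return p;
}
```

The point is only the shape of the caller: the allocation may fail, and the code is ready for it.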
On 19 August 2015 at 11:11, fdman...@kernel.org wrote:
From: Filipe Manana fdman...@suse.com
If we partially clone one extent of a file into a lower offset of the
file, fsync the file, power fail and then mount the fs to trigger log
replay, we can get multiple checksum items in the csum tree
On Tue 18-08-15 19:29:14, Michal Hocko wrote:
On Tue 18-08-15 13:11:44, Chris Mason wrote:
On Tue, Aug 18, 2015 at 12:40:32PM +0200, Michal Hocko wrote:
From: Michal Hocko mho...@suse.com
Btrfs relies on GFP_NOFS allocation when committing the transaction but
since mm: page_alloc:
On Wed, Aug 19, 2015 at 11:11:55AM +0100, fdman...@kernel.org wrote:
From: Filipe Manana fdman...@suse.com
If we partially clone one extent of a file into a lower offset of the
file, fsync the file, power fail and then mount the fs to trigger log
replay, we can get multiple checksum items in
From: Michal Hocko mho...@suse.com
alloc_btrfs_bio relies on GFP_NOFS allocation when committing the
transaction but this allocation context is rather weak wrt. reclaim
capabilities. The page allocator currently tries hard to not fail these
allocations if they are small (<=PAGE_ALLOC_COSTLY_ORDER)
linux 4.1.4 forced read-only during an rsync, complaining about lack
of space, with ~30TB free. Filesystem has 6 snapshots, basically 3
incremental rsync's of 2 different external filesystems. Not sure how
to proceed, balance -dusage=5 then try and remount, doesn't balance
need rw?
# btrfs file
2015-08-19 17:39 GMT+02:00 Leo Unbekandt leo.un...@gmail.com:
Hello everyone,
I've encountered what looks like a nasty bug which occurs when the OOM
killer kills a process while that process is working with the file system.
I've been able to reproduce this issue using docker, by limiting the
Hello Hugo,
thanks for your hint.
On 16.08.2015 16:57, Hugo Mills wrote:
Here's your problem -- you've got a RAID 5 filesystem, which has a
minimum allocation of 2 devices, but only one device has free space on
it for allocation, so no more chunks can be allocated. I'm not sure
how it
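The constraint Hugo describes can be shown with a toy model (this is not the real btrfs chunk allocator; the function and its arguments are invented for illustration): a RAID5 chunk needs stripes on at least 2 distinct devices, so if only one device still has free space, no new chunk can be allocated no matter how large the total free space is.

```c
/* Returns nonzero when a new RAID5 chunk could be allocated: at least
 * two devices must each have room for one more stripe. */
int can_alloc_raid5_chunk(const long long *free_per_dev, int ndev,
                          long long stripe_size)
{
    int usable = 0;
    for (int i = 0; i < ndev; i++)
        if (free_per_dev[i] >= stripe_size)
            usable++;           /* this device can hold one more stripe */
    return usable >= 2;         /* RAID5 minimum stripe count in this model */
}
```

With 500 units free on one device and nothing elsewhere, the allocation fails even though 500 units are nominally free.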
On Wed, Aug 19, 2015 at 1:22 AM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
Timothy Normand Miller wrote on 2015/08/18 22:55 -0400:
On Tue, Aug 18, 2015 at 10:48 PM, Qu Wenruo quwen...@cn.fujitsu.com
wrote:
Timothy Normand Miller wrote on 2015/08/18 22:46 -0400:
On Tue, Aug 18, 2015 at
Hello everyone,
I've encountered what looks like a nasty bug which occurs when the OOM
killer kills a process while that process is working with the file system.
I've been able to reproduce this issue using docker, by limiting the
memory limits of processes and making them crash while working.
(You can
I can't help with your issue, but I also can't resist asking: if I read
the data correctly, you have 2x 30TB disks and 2x 40TB disks. Is that correct?
Are they physical disks or virtual ones?
On 2015-08-19 15:23, E V wrote:
linux 4.1.4 forced read-only during an rsync, complaining about lack
of
On Wed, Aug 19, 2015 at 06:10:06PM +0200, Hendrik Friedel wrote:
Hello Hugo,
thanks for your hint.
On 16.08.2015 16:57, Hugo Mills wrote:
Here's your problem -- you've got a RAID 5 filesystem, which has a
minimum allocation of 2 devices, but only one device has free space on
it for
Hi Hugo,
thanks for your help.
Now the output is:
root@homeserver:/media# btrfs fi df /mnt/__Complete_Disk
Data, RAID5: total=3.79TiB, used=3.78TiB
System, RAID5: total=32.00MiB, used=416.00KiB
Metadata, RAID5: total=6.46GiB, used=4.85GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
On Wed, Aug 19, 2015 at 02:17:39PM +0200, mho...@kernel.org wrote:
Hi,
these two patches were sent as a part of a larger RFC which aims at
allowing GFP_NOFS allocations to fail to help sort out memory reclaim
issues bound to the current behavior
Hi all,
playing with raid5 and btrfs replace I found a BUG. Basically it seems that
if I try to replace a missing disk of a degraded filesystem I get a kernel
BUG. This is reproducible at 100% for me.
To simulate the disk removal, I started qemu and I used the command drive_del
Thanks. I'd consider raid6, but since I'll be backing up to a second
btrfs raid5 array, I think I have sufficient redundancy, since it's
equivalent to RAID 5+1 on paper. I'm doing that rather than something
like raid10 in a single box because I want the redundancy of a second
physical server so I
Hi Qu,
On Fri, Feb 28, 2014 at 4:46 AM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
The original btrfs_workers has thresholding functions to dynamically
create or destroy kthreads.
Though there is no such function in the kernel workqueue, because the workers
are not created manually, we can still use
On Wed, Aug 19, 2015 at 11:41:55AM -0700, Omar Sandoval wrote:
On Wed, Aug 19, 2015 at 07:11:20PM +0200, Goffredo Baroncelli wrote:
Hi all,
playing with raid5 and btrfs replace I found a BUG. Basically it seems
that if I try to replace a missing disk of a degraded filesystem I get
a
On Wed, Aug 19, 2015 at 07:11:20PM +0200, Goffredo Baroncelli wrote:
Hi all,
playing with raid5 and btrfs replace I found a BUG. Basically it seems that
if I try to replace a missing disk of a degraded filesystem I get a
kernel BUG. This is reproducible at 100% for me.
Hi, Goffredo, this
From: Filipe Manana fdman...@suse.com
Hi Chris,
Please consider the following fixes for your integration-4.3 branch.
Nothing unusual. I included any Reviewed-by tags people added and a
test case for xfstests for the file corruption after fsync fix.
Thanks.
The following changes since commit
On 2015-08-19 22:28, Omar Sandoval wrote:
On Wed, Aug 19, 2015 at 11:41:55AM -0700, Omar Sandoval wrote:
On Wed, Aug 19, 2015 at 07:11:20PM +0200, Goffredo Baroncelli wrote:
Hi all,
playing with raid5 and btrfs replace I found a BUG. Basically it seems
that if I try to replace a missing
On Wed, Aug 19, 2015 at 07:37:42PM +0100, fdman...@kernel.org wrote:
From: Filipe Manana fdman...@suse.com
Hi Chris,
Please consider the following fixes for your integration-4.3 branch.
Nothing unusual. I included any Reviewed-by tags people added and a
test case for xfstests for the file
At initialization time, for a threshold-able workqueue, the max_active
of the kernel workqueue should be 1 and grow when it hits the threshold.
But due to the bad naming, there is a 'max_active' for both the kernel
workqueue and the btrfs workqueue,
so the wrong value is given at workqueue initialization.
This patch fixes
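The naming mix-up described above can be sketched like this (the struct and field names are invented for illustration, not the real btrfs ones): both layers carry a "max_active"-style field, and the buggy init handed the btrfs-level upper bound straight to the kernel workqueue instead of starting it at 1 and letting the threshold logic grow it.

```c
struct kernel_wq { int max_active; };   /* concurrency actually in use */
struct btrfs_wq  {
    int limit_active;                   /* cap to grow toward */
    struct kernel_wq kwq;
};

void init_buggy(struct btrfs_wq *wq, int limit)
{
    wq->limit_active = limit;
    wq->kwq.max_active = limit;   /* wrong value: thresholded growth is skipped */
}

void init_fixed(struct btrfs_wq *wq, int limit)
{
    wq->limit_active = limit;
    wq->kwq.max_active = 1;       /* start small, grow when the threshold is hit */
}
```

With two fields of nearly the same name, passing the wrong one compiles silently, which is exactly how this kind of bug slips in.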
Hi Alex.
Thanks for the review.
Comment inlined below.
Alex Lyakas wrote on 2015/08/19 18:46 +0200:
Hi Qu,
On Fri, Feb 28, 2014 at 4:46 AM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
The original btrfs_workers has thresholding functions to dynamically
create or destroy kthreads.
Though there
Hello btrfs mailing list!
I have two questions regarding the use of the NOCOW bit and how this affects
scrub and snapshots.
Question 1: If I apply the NOCOW attribute to a file or directory, how does
that affect my ability to run btrfs scrub?
Question 2: If I apply the NOCOW attribute
Filipe David Manana wrote on 2015/08/19 11:07 +0100:
On Tue, Aug 18, 2015 at 3:03 AM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
The btrfs qgroup reserve code lacks a check for rewrites of dirty pages, causing
every write, even a rewrite of an uncommitted dirty page, to reserve space.
But only written data
The btrfs qgroup reserve code lacks a check for rewrites of dirty pages, causing
every write, even a rewrite of an uncommitted dirty page, to reserve space.
But only written data will free the reserved space, causing the reserved
space to leak.
The bug has existed almost from the beginning of the btrfs qgroup code, but
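The leak can be modeled in a few lines (a simplified sketch; the names are invented and this is not the qgroup code itself): space is reserved per write but freed only once per page at writeback, so reserving again for an already-dirty page leaks one reservation. The fix is to skip reserving when the page is already dirty.

```c
#include <stdbool.h>

struct page_model { bool dirty; };

long reserved_bytes;

void write_page(struct page_model *p, long bytes, bool check_dirty)
{
    if (!check_dirty || !p->dirty)
        reserved_bytes += bytes;    /* buggy path reserves on every write */
    p->dirty = true;
}

void writeback_page(struct page_model *p, long bytes)
{
    if (p->dirty) {
        reserved_bytes -= bytes;    /* only written-back data frees space */
        p->dirty = false;
    }
}
```

Two writes to the same page followed by one writeback leave a nonzero reservation on the unchecked path, and zero once the dirty check is in place.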
On Wed, Aug 19, 2015 at 11:16:10AM +0800, anand jain wrote:
Ok, SCRATCH_OPTIONS might not be the best idea here, so feel free to
drop it.
I have dropped $SCRATCH_OPTIONS. (waiting to submit v8). Thanks.
However, you've still missed the primary reason I suggested
On Tue, Aug 18, 2015 at 3:03 AM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
The btrfs qgroup reserve code lacks a check for rewrites of dirty pages, causing
every write, even a rewrite of an uncommitted dirty page, to reserve space.
But only written data will free the reserved space, causing reserved
space
From: Filipe Manana fdman...@suse.com
If we partially clone one extent of a file into a lower offset of the
file, fsync the file, power fail and then mount the fs to trigger log
replay, we can get multiple checksum items in the csum tree that overlap
each other and result in checksum lookup
From: Filipe Manana fdman...@suse.com
Test that if we fsync a file that got one extent partially cloned into a
lower file offset, after a power failure our file has the same content it
had before the power failure and after the extent cloning operation.
This test is motivated by an issue found
There are two large disks, part of the disks partitioned for MD RAID1
and the rest of the disks partitioned for BtrFs RAID1
One of the disks (/dev/sdd) appears to have failed; there were plenty of
alerts from MD (including dmesg and emails) but nothing from the BtrFs
filesystem.
Could this just
Tsutomu Itoh wrote on 2015/08/19 14:55 +0900:
We need not check path before btrfs_free_path() is called because
path is checked in btrfs_free_path().
Signed-off-by: Tsutomu Itoh t-i...@jp.fujitsu.com
Reviewed-by: Qu Wenruo quwen...@cn.fujitsu.com
BTW, did you check btrfs-progs for the such
On 2015/08/19 16:34, Qu Wenruo wrote:
Tsutomu Itoh wrote on 2015/08/19 14:55 +0900:
We need not check path before btrfs_free_path() is called because
path is checked in btrfs_free_path().
Signed-off-by: Tsutomu Itoh t-i...@jp.fujitsu.com
Reviewed-by: Qu Wenruo quwen...@cn.fujitsu.com
We need not check path before btrfs_free_path() is called because
path is checked in btrfs_free_path().
Signed-off-by: Tsutomu Itoh t-i...@jp.fujitsu.com
---
cmds-check.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/cmds-check.c b/cmds-check.c
index 4fa8709..8019fb0
is BtrFs failing to alert?
Yes, Btrfs does not do that as of now.
The only action it takes is to put the FS into read-only mode. That may
be fine for an ext4 kind of FS, but it's not correct from the btrfs volume
manager perspective.
A fix for that is a work in progress at my end.
two to dig more..
Aug 16 04:41:31 [1082957.226817] BTRFS: error (device sdb) in
__btrfs_free_extent:6235: errno=-28 No space left
Aug 16 04:41:31 [1082957.226819] BTRFS info (device sdb): forced readonly
::
Aug 16 04:41:31 [1082957.289289] BTRFS: error (device sdb) in
Hi, Jonathan Panozzo,
-Original Message-
From: linux-btrfs-ow...@vger.kernel.org
[mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Chris Murphy
Sent: Thursday, August 20, 2015 9:56 AM
To: Jonathan Panozzo j...@lime-technology.com; Btrfs BTRFS
linux-btrfs@vger.kernel.org
Zhao,
Thank you for your response. Two quick follow-up questions:
1: What happens in an unrecoverable data error case? Does the volume get put
into read-only mode?
2: Out of curiosity, why is data checksumming tied to COW?
- Jon
On Aug 19, 2015, at 11:09 PM, Zhao Lei
On Thu, 20 Aug 2015 11:55:43 AM Chris Murphy wrote:
Question 1: If I apply the NOCOW attribute to a file or directory, how
does that affect my ability to run btrfs scrub?
nodatacow includes nodatasum and no compression. So it means these
files are presently immune from scrub checking and