Hi, Jonathan Panozzo
> -----Original Message-----
> From: Jonathan Panozzo [mailto:j...@lime-technology.com]
> Sent: Thursday, August 20, 2015 2:13 PM
> To: Zhao Lei
> Cc: Chris Murphy; Btrfs BTRFS
> Subject: Re: Questions on use of NOCOW impact to subvolumes and snapshots
>
> > On Aug 20, 2015, at 1:03 AM, Zhao Lei wrote:
> >
> > Hi, Jonathan Panozzo
> >
> >> -----Original Message-----
> >> From: Jonathan Panozzo [mailto:j...@lime-technology.com]
> >> Sent: Thursday, August 20, 2015 12:41 PM
> >> To: Zhao Lei
> >> Cc: Chris Murphy; Btrfs BTRFS
> >> Subject: Re: Questions on use
Hi, Jonathan Panozzo
> -----Original Message-----
> From: Jonathan Panozzo [mailto:j...@lime-technology.com]
> Sent: Thursday, August 20, 2015 12:41 PM
> To: Zhao Lei
> Cc: Chris Murphy; Btrfs BTRFS
> Subject: Re: Questions on use of NOCOW impact to subvolumes and snapshots
>
> Zhao,
>
> Th
Add further checks to the btrfs replace start command.
The following tests were added in user space before calling
the ioctl():
1) check that the new disk is greater than or equal in size to the old one
2) check that the source device is either a block device or a
numerical dev-id
These checks are already performed in
Zhao,
Thank you for your response. Two quick follow-up questions:
1: What happens in the case of an unrecoverable data error? Does the volume get put
into read-only mode?
2: Out of curiosity, why is data checksumming tied to COW?
- Jon
> On Aug 19, 2015, at 11:09 PM, Zhao Lei wrote:
>
> Hi, Jo
Hi, Jonathan Panozzo,
> -----Original Message-----
> From: linux-btrfs-ow...@vger.kernel.org
> [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Chris Murphy
> Sent: Thursday, August 20, 2015 9:56 AM
> To: Jonathan Panozzo ; Btrfs BTRFS
>
> Subject: Re: Questions on use of NOCOW impact to s
On Thu, 20 Aug 2015 11:55:43 AM Chris Murphy wrote:
> > Question 1: If I apply the NOCOW attribute to a file or directory, how
> > does that affect my ability to run btrfs scrub?
>
> nodatacow includes nodatasum and no compression. So it means these
> files are presently immune from scrub check a
two to dig more..
Aug 16 04:41:31 [1082957.226817] BTRFS: error (device sdb) in
__btrfs_free_extent:6235: errno=-28 No space left
Aug 16 04:41:31 [1082957.226819] BTRFS info (device sdb): forced readonly
::
Aug 16 04:41:31 [1082957.289289] BTRFS: error (device sdb) in
cleanup_transaction:16
Is Btrfs failing to alert?
Yes, Btrfs does not do that as of now.
The only action it takes is to put the FS into read-only mode. That may
be fine for an ext4-like FS, but it's not correct from the Btrfs volume
manager perspective.
A fix for that is a work in progress at my end.
> [996932.735110]
The btrfs qgroup reserve code lacks a check for rewrites of dirty pages, causing
every write, even one rewriting an uncommitted dirty page, to reserve space.
But only written data will free the reserved space, causing the reserved
space to leak.
The bug has existed almost from the beginning of the btrfs qgroup code, but
no
On Wed, Aug 19, 2015 at 6:44 PM, Jonathan Panozzo wrote:
> Hello btrfs mailing list!
>
> I have two questions regarding the use of the NOCOW bit and how this
> affects scrub and snapshots.
>
> Question 1: If I apply the NOCOW attribute to a file or directory, how does
> that affect my ability
Filipe David Manana wrote on 2015/08/19 11:07 +0100:
On Tue, Aug 18, 2015 at 3:03 AM, Qu Wenruo wrote:
The btrfs qgroup reserve code lacks a check for rewrites of dirty pages, causing
every write, even one rewriting an uncommitted dirty page, to reserve space.
But only written data will free the reserved s
At initialization time, for a threshold-able workqueue, the max_active
of its kernel workqueue should be 1, growing if it hits the threshold.
But due to bad naming, there is a 'max_active' on both the kernel
workqueue and the btrfs workqueue,
so the wrong value is given at workqueue initialization.
This patch fixes
Hi Alex.
Thanks for the review.
Comment inlined below.
Alex Lyakas wrote on 2015/08/19 18:46 +0200:
Hi Qu,
On Fri, Feb 28, 2014 at 4:46 AM, Qu Wenruo wrote:
The original btrfs_workers has thresholding functions to dynamically
create or destroy kthreads.
Though there is no such function in
Hello btrfs mailing list!
I have two questions regarding the use of the NOCOW bit and how this affects
scrub and snapshots.
Question 1: If I apply the NOCOW attribute to a file or directory, how does
that affect my ability to run btrfs scrub?
Question 2: If I apply the NOCOW attribute recu
On Wed, Aug 19, 2015 at 07:37:42PM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> Hi Chris,
>
> Please consider the following fixes for your integration-4.3 branch.
> Nothing unusual. I included any Reviewed-by tags people added and a
> test case for xfstests for the file corruption
On 2015-08-19 22:28, Omar Sandoval wrote:
> On Wed, Aug 19, 2015 at 11:41:55AM -0700, Omar Sandoval wrote:
>> On Wed, Aug 19, 2015 at 07:11:20PM +0200, Goffredo Baroncelli wrote:
>>> Hi all,
>>>
>>> playing with raid5 and "btrfs replace" I found a BUG. Basically it seems
>>> that if I try to repla
On Wed, Aug 19, 2015 at 11:41:55AM -0700, Omar Sandoval wrote:
> On Wed, Aug 19, 2015 at 07:11:20PM +0200, Goffredo Baroncelli wrote:
> > Hi all,
> >
> > playing with raid5 and "btrfs replace" I found a BUG. Basically it seems
> > that if I try to replace a "missing" disk of a "degraded" filesyst
From: Filipe Manana
Hi Chris,
Please consider the following fixes for your integration-4.3 branch.
Nothing unusual. I included any Reviewed-by tags people added and a
test case for xfstests for the file corruption after fsync fix.
Thanks.
The following changes since commit 46cd28555ffaa4016229
On Wed, Aug 19, 2015 at 07:11:20PM +0200, Goffredo Baroncelli wrote:
> Hi all,
>
> playing with raid5 and "btrfs replace" I found a BUG. Basically it seems that
> if I try to replace a "missing" disk of a "degraded" filesystem I get a
> kernel BUG. This is 100% reproducible for me.
Hi, Goffr
On Wed, Aug 19, 2015 at 02:17:39PM +0200, mho...@kernel.org wrote:
> Hi,
> these two patches were sent as a part of a larger RFC which aims at
> allowing GFP_NOFS allocations to fail to help sort out memory reclaim
> issues bound to the current behavior
> (http://marc.info/?l=linux-mm&m=14387683061
Hi Hugo,
thanks for your help.
>> Now the output is:
root@homeserver:/media# btrfs fi df /mnt/__Complete_Disk
Data, RAID5: total=3.79TiB, used=3.78TiB
System, RAID5: total=32.00MiB, used=416.00KiB
Metadata, RAID5: total=6.46GiB, used=4.85GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Thanks. I'd consider raid6, but since I'll be backing up to a second
btrfs raid5 array, I think I have sufficient redundancy, since it's
equivalent to raid 5+1 on paper. I'm doing that rather than something
like raid10 in a single box because I want the redundancy of a second
physical server so I c
Hi all,
playing with raid5 and "btrfs replace" I found a BUG. Basically it seems that
if I try to replace a "missing" disk of a "degraded" filesystem I get a kernel
BUG. This is 100% reproducible for me.
To simulate the disk removal, I started qemu and I used the command "drive_del
drive-vi
Hi Qu,
On Fri, Feb 28, 2014 at 4:46 AM, Qu Wenruo wrote:
> The original btrfs_workers has thresholding functions to dynamically
> create or destroy kthreads.
>
> Though there is no such function in kernel workqueue because the worker
> is not created manually, we can still use the workqueue_set_
2015-08-19 17:39 GMT+02:00 Leo Unbekandt :
> Hello everyone,
>
> I've encountered what looks like a nasty bug which occurs when the OOM
> killer kills a process while that process is working with the file system.
> I've been able to reproduce this issue using docker, by limiting the
> memory limits of
On Wed, Aug 19, 2015 at 1:22 AM, Qu Wenruo wrote:
>
>
> Timothy Normand Miller wrote on 2015/08/18 22:55 -0400:
>>
>> On Tue, Aug 18, 2015 at 10:48 PM, Qu Wenruo
>> wrote:
>>>
>>>
>>>
>>> Timothy Normand Miller wrote on 2015/08/18 22:46 -0400:
On Tue, Aug 18, 2015 at 9:32 PM, Qu We
On Wed, Aug 19, 2015 at 06:10:06PM +0200, Hendrik Friedel wrote:
> Hello Hugo,
>
> thanks for your hint.
>
> On 16.08.2015 16:57, Hugo Mills wrote:
> >Here's your problem -- you've got a RAID 5 filesystem, which has a
> >minimum allocation of 2 devices, but only one device has free space on
>
Hello Hugo,
thanks for your hint.
On 16.08.2015 16:57, Hugo Mills wrote:
Here's your problem -- you've got a RAID 5 filesystem, which has a
minimum allocation of 2 devices, but only one device has free space on
it for allocation, so no more chunks can be allocated. I'm not sure
how it ended
I can't help with your issue, but I can't resist asking: if I read
the data correctly, you have 2x 30TB disks and 2x 40TB disks. Is that correct?
Are they physical disks or virtual ones?
On 2015-08-19 15:23, E V wrote:
> linux 4.1.4 forced read-only during an rsync, complaining about lack
> of s
Hello everyone,
I've encountered what looks like a nasty bug which occurs when the OOM
killer kills a process while that process is working with the file system.
I've been able to reproduce this issue using docker, by limiting the
memory limits of processes and making them crash while working.
(You can f
linux 4.1.4 forced read-only during an rsync, complaining about lack
of space, with ~30TB free. Filesystem has 6 snapshots, basically 3
incremental rsync's of 2 different external filesystems. Not sure how
to proceed, balance -dusage=5 then try and remount, doesn't balance
need rw?
# btrfs file us
On Tue 18-08-15 19:29:14, Michal Hocko wrote:
> On Tue 18-08-15 13:11:44, Chris Mason wrote:
> > On Tue, Aug 18, 2015 at 12:40:32PM +0200, Michal Hocko wrote:
> > > From: Michal Hocko
> > >
> > > Btrfs relies on GFP_NOFS allocation when commiting the transaction but
> > > since "mm: page_alloc: d
On 19 August 2015 at 11:11, wrote:
> From: Filipe Manana
>
> If we partially clone one extent of a file into a lower offset of the
> file, fsync the file, power fail and then mount the fs to trigger log
> replay, we can get multiple checksum items in the csum tree that overlap
> each other and r
Hi,
these two patches were sent as a part of a larger RFC which aims at
allowing GFP_NOFS allocations to fail to help sort out memory reclaim
issues bound to the current behavior
(http://marc.info/?l=linux-mm&m=143876830616538&w=2).
It is clear that the move to the GFP_NOFS behavior change is a long t
From: Michal Hocko
Btrfs relies on GFP_NOFS allocation when committing the transaction but
this allocation context is rather weak wrt. reclaim capabilities. The
page allocator currently tries hard to not fail these allocations if
they are small (<=PAGE_ALLOC_COSTLY_ORDER) so this is not a problem
From: Michal Hocko
alloc_btrfs_bio relies on GFP_NOFS allocation when committing the
transaction but this allocation context is rather weak wrt. reclaim
capabilities. The page allocator currently tries hard to not fail these
allocations if they are small (<=PAGE_ALLOC_COSTLY_ORDER) but it can
sti
On Wed, Aug 19, 2015 at 11:11:55AM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> If we partially clone one extent of a file into a lower offset of the
> file, fsync the file, power fail and then mount the fs to trigger log
> replay, we can get multiple checksum items in the csum tre
From: Filipe Manana
Test that if we fsync a file that got one extent partially cloned into a
lower file offset, after a power failure our file has the same content it
had before the power failure and after the extent cloning operation.
This test is motivated by an issue found in btrfs that is fi
From: Filipe Manana
If we partially clone one extent of a file into a lower offset of the
file, fsync the file, power fail and then mount the fs to trigger log
replay, we can get multiple checksum items in the csum tree that overlap
each other and result in checksum lookup failures later. Those f
On Tue, Aug 18, 2015 at 3:03 AM, Qu Wenruo wrote:
> The btrfs qgroup reserve code lacks a check for rewrites of dirty pages, causing
> every write, even one rewriting an uncommitted dirty page, to reserve space.
>
> But only written data will free the reserved space, causing the reserved
> space to leak.
>
> The bug
There are two large disks, with part of each disk partitioned for MD RAID1
and the rest partitioned for Btrfs RAID1.
One of the disks (/dev/sdd) appears to have failed; there were plenty of
alerts from MD (including dmesg and emails) but nothing from the Btrfs
filesystem.
Could this just
We need not check path before btrfs_free_path() is called because
path is checked in btrfs_free_path().
Signed-off-by: Tsutomu Itoh
---
cmds-check.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/cmds-check.c b/cmds-check.c
index 4fa8709..8019fb0 100644
--- a/cmds-check.c
On 2015/08/19 16:34, Qu Wenruo wrote:
Tsutomu Itoh wrote on 2015/08/19 14:55 +0900:
We need not check path before btrfs_free_path() is called because
path is checked in btrfs_free_path().
Signed-off-by: Tsutomu Itoh
Reviewed-by: Qu Wenruo
Thanks for the review.
BTW, did you check btrfs
Tsutomu Itoh wrote on 2015/08/19 14:55 +0900:
We need not check path before btrfs_free_path() is called because
path is checked in btrfs_free_path().
Signed-off-by: Tsutomu Itoh
Reviewed-by: Qu Wenruo
BTW, did you check btrfs-progs for such cleanups too?
Thanks,
Qu
---
fs/btrfs/dev-re