Hi Qu,
I don't think I have seen this before, and I don't know why I wrote it,
maybe to test encryption; in any case it was all
with default options.
But now I can reproduce it, and it looks like balance fails to
start with an I/O error even though the mount succeeds.
On Tue 07-02-17 09:51:50, Dave Chinner wrote:
> On Mon, Feb 06, 2017 at 07:47:43PM +0100, Michal Hocko wrote:
> > On Mon 06-02-17 10:32:37, Darrick J. Wong wrote:
[...]
> > > I prefer to keep the "...yet we are likely to be under GFP_NOFS..."
> > > wording of the old comment because it captures
The original csum error message only outputs the inode number, offset,
checksum and expected checksum.
However, no root objectid is output, which sometimes makes debugging
quite painful in the multi-subvolume case (including relocation).
Also the checksum output is decimal, which seldom makes
Hi Anand,
I found that the btrfs/125 test case can only pass if space cache is enabled.
With the nospace_cache or space_cache=v2 mount option, it gets
blocked forever with the following call stack (the only blocked process):
[11382.046978] btrfs D11128 6705 6057 0x
Commit 4c63c2454ef incorrectly assumed that returning -ENOIOCTLCMD would
cause the native ioctl to be called. The ->compat_ioctl callback is
expected to handle all ioctls, not just compat variants. As a result,
when using 32-bit userspace on 64-bit kernels, everything except those
three ioctls
On 1/9/17 6:28 AM, David Sterba wrote:
> On Fri, Jan 06, 2017 at 12:22:34PM -0500, Joseph Salisbury wrote:
>> A kernel bug report was opened against Ubuntu [0]. This bug was fixed
>> by the following commit in v4.7-rc1:
>>
>> commit 4c63c2454eff996c5e27991221106eb511f7db38
>>
>> Author: Luke
At 02/07/2017 12:09 AM, Goldwyn Rodrigues wrote:
Hi Qu,
On 02/05/2017 07:45 PM, Qu Wenruo wrote:
At 02/04/2017 09:47 AM, Jorg Bornschein wrote:
February 4, 2017 1:07 AM, "Goldwyn Rodrigues" wrote:
Quota support was indeed active -- and it warned me that the
On 05/02/17 12:08, Kai Krakow wrote:
> Wrong. If you tend to not be in control of the permissions below a
> mountpoint, you prevent access to it by restricting permissions on a
> parent directory of the mountpoint. It's that easy and it always has
> been. That is standard practice. While your
On Mon, Feb 06, 2017 at 07:47:43PM +0100, Michal Hocko wrote:
> On Mon 06-02-17 10:32:37, Darrick J. Wong wrote:
> > On Mon, Feb 06, 2017 at 06:44:15PM +0100, Michal Hocko wrote:
> > > On Mon 06-02-17 07:39:23, Matthew Wilcox wrote:
> > > > On Mon, Feb 06, 2017 at 03:07:16PM +0100, Michal Hocko
On Mon, 6 Feb 2017 07:30:31 -0500,
"Austin S. Hemmelgarn" wrote:
> > How about mounting the receiver below a directory only traversable
> > by root (chmod og-rwx)? Backups shouldn't be directly accessible by
> > ordinary users anyway.
> There are perfectly legitimate
On Mon 06-02-17 11:51:11, Darrick J. Wong wrote:
> On Mon, Feb 06, 2017 at 07:47:43PM +0100, Michal Hocko wrote:
> > On Mon 06-02-17 10:32:37, Darrick J. Wong wrote:
> > > On Mon, Feb 06, 2017 at 06:44:15PM +0100, Michal Hocko wrote:
> > > > On Mon 06-02-17 07:39:23, Matthew Wilcox wrote:
> > > >
On 2017-02-03 11:44, Lakshmipathi.G wrote:
> Hi.
>
> Came across this thread
> https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg55161.html
> Exploring the possibility of adding test scripts around this area using
> dump-tree & corrupt-block. But
> unable to figure out how to get parity
On Mon, Feb 06, 2017 at 07:47:43PM +0100, Michal Hocko wrote:
> On Mon 06-02-17 10:32:37, Darrick J. Wong wrote:
> > On Mon, Feb 06, 2017 at 06:44:15PM +0100, Michal Hocko wrote:
> > > On Mon 06-02-17 07:39:23, Matthew Wilcox wrote:
> > > > On Mon, Feb 06, 2017 at 03:07:16PM +0100, Michal Hocko
On Mon 06-02-17 10:32:37, Darrick J. Wong wrote:
> On Mon, Feb 06, 2017 at 06:44:15PM +0100, Michal Hocko wrote:
> > On Mon 06-02-17 07:39:23, Matthew Wilcox wrote:
> > > On Mon, Feb 06, 2017 at 03:07:16PM +0100, Michal Hocko wrote:
> > > > +++ b/fs/xfs/xfs_buf.c
> > > > @@ -442,17 +442,17 @@
On Mon, Feb 06, 2017 at 06:44:15PM +0100, Michal Hocko wrote:
> On Mon 06-02-17 07:39:23, Matthew Wilcox wrote:
> > On Mon, Feb 06, 2017 at 03:07:16PM +0100, Michal Hocko wrote:
> > > +++ b/fs/xfs/xfs_buf.c
> > > @@ -442,17 +442,17 @@ _xfs_buf_map_pages(
> > > bp->b_addr = NULL;
> > >
On Mon, Feb 06, 2017 at 12:42:01AM +0300, Alexander Tomokhov wrote:
> Is it possible, having two drives to do raid1 for metadata but keep data on a
> single drive only?
Chris had a patch for doing basically this that we were testing
internally, but I don't think he ever sent it to the mailing
On Mon 06-02-17 07:39:23, Matthew Wilcox wrote:
> On Mon, Feb 06, 2017 at 03:07:16PM +0100, Michal Hocko wrote:
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -442,17 +442,17 @@ _xfs_buf_map_pages(
> > bp->b_addr = NULL;
> > } else {
> > int retried = 0;
> > - unsigned
On Mon, Feb 06, 2017 at 08:26:54AM -0800, Liu Bo wrote:
> On Mon, Feb 06, 2017 at 02:50:18PM +0900, takafumi-sslab wrote:
> >
> >
> > On 2017/02/06 12:35, Liu Bo wrote:
> > > a) __extent_writepage has handled the case when nr == 0.
> >
> > Yes, I agree this.
> >
> > > b) when nr == 1, the page
On Mon, Feb 06, 2017 at 02:50:18PM +0900, takafumi-sslab wrote:
>
>
> On 2017/02/06 12:35, Liu Bo wrote:
> > a) __extent_writepage has handled the case when nr == 0.
>
> Yes, I agree this.
>
> > b) when nr == 1, the page is marked with writeback bit and added into a
> > bio, thus we have
Hi Qu,
On 02/05/2017 07:45 PM, Qu Wenruo wrote:
>
>
> At 02/04/2017 09:47 AM, Jorg Bornschein wrote:
>> February 4, 2017 1:07 AM, "Goldwyn Rodrigues" wrote:
>>
>>
>> Quota support was indeed active -- and it warned me that the qgroup
>> data was inconsistent.
>>
>>
On Mon, Feb 06, 2017 at 03:07:16PM +0100, Michal Hocko wrote:
> +++ b/fs/xfs/xfs_buf.c
> @@ -442,17 +442,17 @@ _xfs_buf_map_pages(
> bp->b_addr = NULL;
> } else {
> int retried = 0;
> - unsigned noio_flag;
> + unsigned nofs_flag;
>
>
On Mon 06-02-17 07:24:00, Matthew Wilcox wrote:
> On Mon, Feb 06, 2017 at 03:34:50PM +0100, Michal Hocko wrote:
> > This part is not needed for the patch, strictly speaking but I wanted to
> > make the code more future proof.
>
> Understood. I took an extra bit myself for marking the radix tree
On Mon, Feb 06, 2017 at 03:34:50PM +0100, Michal Hocko wrote:
> This part is not needed for the patch, strictly speaking but I wanted to
> make the code more future proof.
Understood. I took an extra bit myself for marking the radix tree as
being used for an IDR (so the radix tree now uses 4
On Mon 06-02-17 06:26:41, Matthew Wilcox wrote:
> On Mon, Feb 06, 2017 at 03:07:13PM +0100, Michal Hocko wrote:
> > While we are at it, also make sure that the radix tree doesn't
> > accidentally override tags stored in the upper part of the gfp_mask.
>
> > diff --git a/lib/radix-tree.c
On Mon, Feb 06, 2017 at 03:07:13PM +0100, Michal Hocko wrote:
> While we are at it, also make sure that the radix tree doesn't
> accidentally override tags stored in the upper part of the gfp_mask.
> diff --git a/lib/radix-tree.c b/lib/radix-tree.c
> index 9dc093d5ef39..7550be09f9d6 100644
> ---
From: Michal Hocko
The current implementation of the reclaim lockup detection can lead to
false positives, and those do happen; they usually lead to tweaking the
code to silence lockdep by using GFP_NOFS even though the context
could use __GFP_FS just fine. See
From: Michal Hocko
xfs defined PF_FSTRANS to declare a scoped GFP_NOFS semantic quite
some time ago. We would like to make this concept more generic and use
it for other filesystems as well. Let's start by giving the flag the
more generic name PF_MEMALLOC_NOFS, which is in line
on next-20170206
Diffstat says
fs/jbd2/journal.c | 7 +++
fs/jbd2/transaction.c | 11 +++
fs/xfs/kmem.c | 12 ++--
fs/xfs/kmem.h | 2 +-
fs/xfs/libxfs/xfs_btree.c | 2 +-
fs/xfs/xfs_aops.c | 6 +++---
fs/xfs/xfs_buf.c | 8
From: Michal Hocko
Now that we have the memalloc_nofs_{save,restore} API we can mark the whole
transaction context as implicitly GFP_NOFS. All allocations will
automatically inherit GFP_NOFS this way. This means that we do not have
to mark any of those requests with GFP_NOFS and
From: Michal Hocko
kmem_zalloc_large and _xfs_buf_map_pages use the memalloc_noio_{save,restore}
API to prevent reclaim recursion into the fs because vmalloc can
invoke unconditional GFP_KERNEL allocations and these functions might be
called from NOFS contexts. The
From: Michal Hocko
kjournald2 is central to the transaction commit processing. As such, any
potential allocation from this kernel thread has to be GFP_NOFS. Make
sure to mark the whole kernel thread GFP_NOFS with memalloc_nofs_save.
Suggested-by: Jan Kara
From: Michal Hocko
GFP_NOFS context is currently used for the following 5 reasons:
- to prevent deadlocks when a lock held by the allocating
context would be needed during memory reclaim
- to prevent stack overflows during reclaim
On 2017-02-04 16:10, Kai Krakow wrote:
On Sat, 04 Feb 2017 20:50:03 +,
"Jorg Bornschein" wrote:
February 4, 2017 1:07 AM, "Goldwyn Rodrigues"
wrote:
Yes, please check if disabling quotas makes a difference in
execution time of btrfs balance.
Just
On 2017-02-05 23:26, Duncan wrote:
Hans van Kranenburg posted on Sun, 05 Feb 2017 22:55:42 +0100 as
excerpted:
On 02/05/2017 10:42 PM, Alexander Tomokhov wrote:
Is it possible, having two drives to do raid1 for metadata but keep
data on a single drive only?
Nope.
Would be a really nice
On 2017-02-05 06:54, Kai Krakow wrote:
On Wed, 1 Feb 2017 17:43:32 +,
Graham Cobb wrote:
On 01/02/17 12:28, Austin S. Hemmelgarn wrote:
On 2017-02-01 00:09, Duncan wrote:
Christian Lupien posted on Tue, 31 Jan 2017 18:32:58 -0500 as
excerpted:
[...]
I'm just a
I am sorry for forgetting to write the reproduction steps.
I injected the ftrace logging code and the fault into the Linux kernel
v4.10-rc7.
The diff is too long for pasting here.
So, I put the repository of the kernel here.
https://github.com/tk1012/linux-for-reproduce-btrfs-failure.git
And
At 02/06/2017 05:14 PM, Jorg Bornschein wrote:
February 6, 2017 1:45 AM, "Qu Wenruo"
Would you please provide the kernel version?
v4.9 introduced a bad fix for qgroup balance, which doesn't completely fix
qgroup byte leaking,
but also hugely slows down the balance
February 6, 2017 1:45 AM, "Qu Wenruo"
> Would you please provide the kernel version?
>
> v4.9 introduced a bad fix for qgroup balance, which doesn't completely fix
> qgroup byte leaking,
> but also hugely slows down the balance process:
>
I'm a bit behind the times: