On 19.01.2017 20:21, David Sterba wrote:
> On Wed, Jan 18, 2017 at 12:31:26AM +0200, Nikolay Borisov wrote:
>> So here is a new set of patches cleaning up tree-log function
>> w.r.t inode vs btrfs_inode. There are still some which remain
>> but I didn't find compelling arguments to cleaning the
At 01/20/2017 12:38 PM, Chris Murphy wrote:
All of my Btrfs file systems, including new ones, have errors
according to lowmem mode, and no errors reported at all for original
mode. So which is correct? If lowmem mode is correct, then there are
obviously kernel bugs that are causing problems rig
On Thu, Jan 12, 2017 at 03:13:37AM +, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> Test that an incremental send operation works after moving a directory
> into a new parent directory, deleting its previous parent directory and
> creating a new inode that has the same inode number as
All of my Btrfs file systems, including new ones, have errors
according to lowmem mode, and no errors reported at all for original
mode. So which is correct? If lowmem mode is correct, then there are
obviously kernel bugs that are causing problems right away, even on
minutes old file systems.
I ca
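For anyone wanting to reproduce this comparison, the shape of the two runs is sketched below. The device path is a placeholder, and the commands are only printed here rather than executed; without --repair, both check modes are read-only:

```shell
# Placeholder device; substitute the actual unmounted filesystem.
DEV=/dev/sdX

# Print the two check invocations being compared; both are read-only.
for mode in original lowmem; do
    echo "btrfs check --mode=$mode $DEV"
done
```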
This is the RFC proposal for refactoring btrfs-corrupt-block.
As Lakshmipathi and the xfstests folks commented on the btrfs ML, we need
btrfs-corrupt-block either to craft test cases or for later RAID56
corruption test cases.
However, the current btrfs-corrupt-block has several
problem
At 01/19/2017 06:06 PM, Sebastian Gottschall wrote:
Hello
I have a question. After a power outage my system turned into an
unrecoverable state using btrfs (kernel 4.9).
Since I'm now running --init-extent-tree for 3 days, I'm asking how long
this process normally takes and why it outputs mill
Hi,
after upgrading this powerpc32 box from 4.10-rc2 to -rc4, the message
below occured a few hours after boot. Full dmesg and .config:
http://nerdbynature.de/bits/4.10-rc4/
Any ideas?
Thanks,
Christian.
Faulting instruction address: 0xc02d4584
Oops: Kernel access of bad area, sig: 11 [#1]
On 2017-01-19 13:23, Roman Mamedov wrote:
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
I was wondering, from a point of view of data safety, if there is any
difference between using dup or making a raid1 from two partitions in
the same disk. This is thinking on having some p
On Thu, Jan 19, 2017 at 12:15 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Chris Murphy posted on Wed, 18 Jan 2017 14:30:28 -0700 as excerpted:
>
>> On Wed, Jan 18, 2017 at 2:07 PM, Jon wrote:
>>> So, I had a raid 1 btrfs system setup on my laptop. Recently I upgraded
>>> the drives and wanted to ge
On Wed, Jan 18, 2017 at 12:31:26AM +0200, Nikolay Borisov wrote:
> So here is a new set of patches cleaning up tree-log function
> w.r.t inode vs btrfs_inode. There are still some which remain
> but I didn't find compelling arguments to cleaning them up, so
> I've left them unchanged. This time
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
> I was wondering, from a point of view of data safety, if there is any
> difference between using dup or making a raid1 from two partitions in
> the same disk. This is thinking on having some protection against the
> typical aging H
I don't know if it is btrfs related, but I'm getting
hard freezes on 4.8.17.
So I went back to 4.8.14 (with an identical .config file).
It is one of my kernels which is known to have been trouble
free for a long time.
Since they are hard lockups for real, I can't provide
anything... Does anyone experience an
Hey Qu.
On Wed, 2017-01-18 at 16:48 +0800, Qu Wenruo wrote:
> To Christoph,
>
> Would you please try this patch, and to see if it suppress the block
> group
> warning?
I did another round of fsck in both modes (original/lowmem), first
WITHOUT your patch, then WITH it... both on progs version 4.9.
On 2017-01-19 11:39, Alejandro R. Mosteo wrote:
Hello list,
I was wondering, from a point of view of data safety, if there is any
difference between using dup or making a raid1 from two partitions in
the same disk. This is thinking on having some protection against the
typical aging HDD that sta
Hello list,
I was wondering, from the point of view of data safety, whether there is
any difference between using dup or making a raid1 from two partitions on
the same disk. The idea is to have some protection against the typical
aging HDD that starts to develop bad sectors.
On a related note, I see
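For reference, the two layouts being compared would be created roughly as below. The device names are placeholders and the mkfs commands are only printed, not executed; note that allowing -d dup on a single device requires a reasonably recent btrfs-progs:

```shell
# Placeholders for a whole disk and two partitions on the same disk.
DISK=/dev/sdX
P1=/dev/sdX1
P2=/dev/sdX2

# DUP: two copies of data and metadata within one filesystem on one device.
echo "mkfs.btrfs -d dup -m dup $DISK"
# RAID1: one copy on each of two partitions of the same physical disk.
echo "mkfs.btrfs -d raid1 -m raid1 $P1 $P2"
```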
On Thu, Jan 5, 2017 at 11:31 AM, Filipe Manana wrote:
> On Thu, Jan 5, 2017 at 2:45 AM, robbieko wrote:
>> Filipe Manana wrote on 2017-01-04 21:09:
>>
>>> On Wed, Jan 4, 2017 at 10:53 AM, robbieko wrote:
From: Robbie Ko
Test that an incremental send operation doesn't work becau
From: Filipe Manana
When we are checking if we need to delay the rename operation for an
inode, we are not checking if a parent inode that exists in the send and
parent snapshots is really the same inode or not; that is, we are not
comparing the generation number of the parent inode in the send and
pa
From: Robbie Ko
When both the parent and send snapshots have a directory inode with the
same number but different generations (therefore they are different
inodes) and both have an entry with the same name, an incremental send
stream will contain an invalid rmdir operation that refers to the
orph
On Thu, Jan 5, 2017 at 8:24 AM, robbieko wrote:
> From: Robbie Ko
>
> Under certain situations, an incremental send operation can
Again, missing some word after the word "can"; without it the phrase
doesn't make any sense.
Tip: don't copy-paste change logs and then modify them for each patch;
it
On Thu, Jan 5, 2017 at 8:24 AM, robbieko wrote:
> From: Robbie Ko
>
> Under certain situations, an incremental send operation can
Missing some word after the word "can".
> a rename operation that will make the receiving end fail when
> attempting to execute it, because the target is exist.
>
>
On Wed, Nov 2, 2016 at 3:52 AM, robbieko wrote:
> Hi Eryu Guan,
>
> Yes, you need to apply
> [PATCH] "Btrfs: incremental send, do not skip generation inconsistency check
> for inode 256."
> and test again; it will fail.
>
> because the current code has a problem, but it just will not happen.
Then it'
From: Filipe Manana
Test that an incremental send operation does not fail when a new inode
replaces an old inode that has the same number but different generation,
and both are direct children of the subvolume/snapshot root.
This is fixed by the following patch for the linux kernel:
"Btrfs: s
From: Filipe Manana
Test that an incremental send operation works when in both snapshots
there are two directory inodes that have the same number but different
generations and have an entry with the same name that corresponds to
different inodes in each snapshot.
The btrfs issue is fixed by the
From: Filipe Manana
Test that an incremental send operation works after moving a directory
into a new parent directory, deleting its previous parent directory and
creating a new inode that has the same inode number as the old parent.
This issue is fixed by the following patch for the linux kerne
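The general shape of the incremental-send scenarios these tests exercise is sketched below. Paths are placeholders and the commands are only printed, not executed:

```shell
# Placeholder mount points for the source and receiving filesystems.
SRC=/mnt/src
DST=/mnt/dst

# Take a read-only baseline snapshot, mutate the subvolume, snapshot again,
# then send only the difference between the two snapshots.
echo "btrfs subvolume snapshot -r $SRC/vol $SRC/snap1"
echo "# ... move a directory, delete its old parent, recreate an inode ..."
echo "btrfs subvolume snapshot -r $SRC/vol $SRC/snap2"
echo "btrfs send -p $SRC/snap1 $SRC/snap2 | btrfs receive $DST"
```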
On Wed, Jan 18, 2017 at 09:42:45PM -0600, Goldwyn Rodrigues wrote:
> >>> +#define BUG_ON(c) ASSERT(!(c))
> >>
> >> The problem with this is that you are killing the value printed as a
> >> part of the trace for BUG_ON(). The main reason why commit
> >> 00e769d04c2c83029d6c71 was written. Please be
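A minimal sketch of the concern (the names here are illustrative, not the btrfs-progs implementation): a BUG_ON defined via ASSERT(!(c)) reports the negated expression, whereas keeping the original condition preserves both its text and its evaluated value in the trace:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical replacement that keeps the original expression text and
 * its evaluated value in the diagnostic, instead of reporting !(c). */
#define BUG_ON(c)                                                       \
    do {                                                                \
        long __v = (long)(c);                                           \
        if (__v) {                                                      \
            fprintf(stderr, "BUG_ON(%s) triggered, value %ld\n",        \
                    #c, __v);                                           \
            abort();                                                    \
        }                                                               \
    } while (0)
```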
On Thu, Jan 5, 2017 at 8:24 AM, robbieko wrote:
> From: Robbie Ko
>
> Under certain situations, an incremental send operation can
Again, missing some word after the word "can".
Copy pasting change logs is not that good
> a rmdir operation that will make the receiving end fail when
> attempt
From: Robbie Ko
Under certain situations, an incremental send operation can fail due to a
premature attempt to create a new top level inode (a direct child of the
subvolume/snapshot root) whose name collides with another inode that was
removed from the send snapshot.
Consider the following examp
On Thu 19-01-17 10:22:36, Jan Kara wrote:
> On Thu 19-01-17 09:39:56, Michal Hocko wrote:
> > On Tue 17-01-17 18:29:25, Jan Kara wrote:
> > > On Tue 17-01-17 17:16:19, Michal Hocko wrote:
> > > > > > But before going to play with that I am really wondering whether we
> > > > > > need
> > > > > > a
On Thu, Jan 19, 2017 at 10:15 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Chris Murphy posted on Wed, 18 Jan 2017 14:30:28 -0700 as excerpted:
>
>> On Wed, Jan 18, 2017 at 2:07 PM, Jon wrote:
>>> So, I had a raid 1 btrfs system setup on my laptop. Recently I upgraded
>>> the drives and wanted to ge
Hello
I have a question. After a power outage my system turned into an
unrecoverable state using btrfs (kernel 4.9).
Since I'm now running --init-extent-tree for 3 days, I'm asking how long
this process normally takes and why it outputs millions of lines like
Backref 1562890240 root 262 owne
On Thu 19-01-17 09:39:56, Michal Hocko wrote:
> On Tue 17-01-17 18:29:25, Jan Kara wrote:
> > On Tue 17-01-17 17:16:19, Michal Hocko wrote:
> > > > > But before going to play with that I am really wondering whether we
> > > > > need
> > > > > all this with no journal at all. AFAIU what Jack told m
On Tue 17-01-17 18:29:25, Jan Kara wrote:
> On Tue 17-01-17 17:16:19, Michal Hocko wrote:
> > > > But before going to play with that I am really wondering whether we need
> > > > all this with no journal at all. AFAIU what Jack told me it is the
> > > > journal lock(s) which is the biggest problem
Hi,
The test flag override only decides whether all tests run in lowmem mode;
it can't decide per test whether to repair.
I have some ideas below:
1. Create a hidden empty file under each test dir which needs to be
repaired. Edit tests/common.local:_skip_spec() to judge whether to repair
by the existence o
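Idea 1 could look roughly like the sketch below. The marker-file name, test layout, and variables are all hypothetical, and the check command is only printed, not executed:

```shell
# Hypothetical per-test layout and device.
TEST_DIR=tests/fsck-tests/001-example
TEST_DEV=/dev/sdX

# A hidden marker file in the test dir opts that one test into repair.
if [ -f "$TEST_DIR/.lowmem_repair" ]; then
    echo "btrfs check --mode=lowmem --repair $TEST_DEV"
else
    echo "btrfs check --mode=lowmem $TEST_DEV"
fi
```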