Hey Qu,
On Thu, 2017-01-12 at 09:25 +0800, Qu Wenruo wrote:
> And since you just deleted a subvolume and unmount it soon
Indeed, I unmounted it pretty quickly afterwards...
I had mounted it (ro) in the meantime, and did a whole
find mountpoint > /dev/null
on it just to see whether going through the
Since lowmem mode can repair inode nbytes errors now, modify this test case
to allow lowmem mode repair.
Signed-off-by: Su Yue
---
v3: add this patch.
---
tests/fsck-tests/016-wrong-inode-nbytes/test.sh | 33 +
1 file changed, 33 insertions(+)
Added 'repair_inode_item', which dispatches functions such as
'repair_inode__nbytes_lowmem' to correct errors, and
'struct inode_item_fix_info' to store correct values and errors.
Signed-off-by: Su Yue
---
v2: reassign err to info.err after repairing in process_one_leaf_v2
Add a function 'repair_inode_isize' to support inode isize repair.
Signed-off-by: Su Yue
---
v2: none
v3: none
---
cmds-check.c | 49 -
1 file changed, 48 insertions(+), 1 deletion(-)
diff --git a/cmds-check.c
On Wed, Jan 11, 2017 at 12:51 PM, Jan Kara wrote:
> On Wed 11-01-17 11:29:28, Miklos Szeredi wrote:
>> I know there's work on this for xfs, but could this be done in generic mm
>> code?
>>
>> What are the obstacles? page->mapping and page->index are the obvious
>> ones.
>
> Yes,
On Mon, Jan 09, 2017 at 03:39:03PM +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> try_release_extent_state reduces the gfp mask to GFP_NOFS if it is
> compatible. This is true for GFP_KERNEL as well. There is no real
> reason to do that though. There is no new lock taken
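As a rough illustration of the mask reduction being discussed — using made-up flag values, since the real bit definitions live in include/linux/gfp.h — reducing to GFP_NOFS just means clearing the FS-reentry bit from a compatible caller mask:

```shell
# Stand-in flag bits for illustration only (NOT the kernel's real values).
GFP_FS_BIT=$(( 0x80 ))
GFP_IO_BIT=$(( 0x40 ))
GFP_KERNEL_MASK=$(( GFP_FS_BIT | GFP_IO_BIT | 0x01 ))  # allows FS and IO reentry
# Reducing to NOFS: clear the FS-reentry bit from a compatible mask.
GFP_NOFS_MASK=$(( GFP_KERNEL_MASK & ~GFP_FS_BIT ))
printf 'nofs mask: 0x%x\n' "$GFP_NOFS_MASK"   # prints: nofs mask: 0x41
```

This is only the bit-arithmetic idea; the patch's actual point is that the reduction is unnecessary when no new locks are taken under the allocation.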
On Tue, Jan 10, 2017 at 08:35:30PM +0200, Nikolay Borisov wrote:
> After following the discussion in [1] I took a look at what's the
> state of VFS-related members being used in core BTRFS code. It turned
> out there are quite a few functions which operate on struct btrfs_inode,
> yet take
On Wed 11-01-17 14:55:50, David Sterba wrote:
[...]
> But otherwise looks ok to me, I'm going to merge the patch. Thanks.
I have only now noticed a typo in the subject. s@etrfs:@btrfs:@
--
Michal Hocko
SUSE Labs
Hugo Mills posted on Tue, 10 Jan 2017 15:47:53 + as excerpted:
> On Tue, Jan 10, 2017 at 10:42:51AM -0500, Austin S. Hemmelgarn wrote:
>> Most of the issue in this case is with the size of the initial chunk.
>> That said, I've got quite a few reasonably sized filesystems (I think
>> the
> On 10 Jan 2017, at 21:07, Vinko Magecic
> wrote:
>
> Hello,
>
> I set up a raid 1 with two btrfs devices and came across some situations in
> my testing that I can't get a straight answer on.
> 1) When replacing a volume, do I still need to `umount /path`
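On the first question, a hedged sketch (device names and mount point below are hypothetical): `btrfs replace` is designed to run against the mounted filesystem, so no umount should be needed for the replace operation itself.

```shell
# Hypothetical devices and mount point -- adjust for your setup.
# 'btrfs replace' runs online, on the mounted filesystem:
btrfs replace start /dev/sdb /dev/sdc /mnt/raid1
# Poll progress until the copy completes:
btrfs replace status /mnt/raid1
```

These commands obviously require a real btrfs mount; see btrfs-replace(8) for the authoritative semantics.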
On Wed 11-01-17 11:29:28, Miklos Szeredi wrote:
> I know there's work on this for xfs, but could this be done in generic mm
> code?
>
> What are the obstacles? page->mapping and page->index are the obvious
> ones.
Yes, these two are the main that come to my mind. Also you'd need to
somehow
I know there's work on this for xfs, but could this be done in generic mm code?
What are the obstacles? page->mapping and page->index are the obvious ones.
If that's too difficult is it maybe enough to share mappings between
files while they are completely identical and clone the mapping when
Austin S. Hemmelgarn posted on Tue, 10 Jan 2017 09:57:52 -0500 as
excerpted:
> I can't personally comment on the code itself right now (I've actually
> never looked at the mkfs code, or any of the stuff that deals with the
> System chunk), but I can make a few general comments on this:
> 1. This
Hi.
On Debian sid:
$ uname -a
Linux heisenberg 4.8.0-2-amd64 #1 SMP Debian 4.8.15-2 (2017-01-04) x86_64
GNU/Linux
$ btrfs version
btrfs-progs v4.7.3
During a:
# btrfs send -p foo bar | btrfs receive baz
Jan 11 20:43:10 heisenberg kernel: [ cut here ]
Jan 11 20:43:10
Looks like there's some sort of xattr and Btrfs interaction happening
here; but as it only happens with some subvolumes/snapshots, not all
(though 100% consistently), maybe the kernel version at the time the
snapshot was taken is a factor? Anyway, git bisect says
# first bad commit:
I solved the problem. After much futzing with mount options and trying
check --repair, I ended up booting into an Antergos live USB, which
runs kernel and tools 4.9, and was able to successfully mount the btrfs
volume there. This time, before shutting down, I paused the balance
operation. (Not sure
On Wed 11-01-17 14:55:50, David Sterba wrote:
> On Mon, Jan 09, 2017 at 03:39:02PM +0100, Michal Hocko wrote:
> > From: Michal Hocko
> >
> > b335b0034e25 ("Btrfs: Avoid using __GFP_HIGHMEM with slab allocator")
> > has reduced the allocation mask in btrfs_releasepage to GFP_NOFS
Hi all,
I am observing periodic crashes with signature below on kernel 4.4.26.
wb is extracted from page (see mm/page-writeback.c, void
account_page_dirtied() ):
inode_attach_wb(inode, page);
wb = inode_to_wb(inode);
We are crashing in
__inc_wb_stat(wb,
Hey.
Linux heisenberg 4.8.0-2-amd64 #1 SMP Debian 4.8.15-2 (2017-01-04)
x86_64 GNU/Linux
btrfs-progs v4.7.3
I've had this happen at least once before, about a year ago:
I was doing backups (incremental via send/receive).
After everything was copied, I unmounted the destination fs, made a
fsck, all
Oops forgot to copy and past the actual fsck output O:-)
# btrfs check /dev/mapper/data-a3 ; echo $?
Checking filesystem on /dev/mapper/data-a3
UUID: 326d292d-f97b-43ca-b1e8-c722d3474719
checking extents
ref mismatch on [37765120 16384] extent item 0, found 1
Backref 37765120 parent 6403 root
At 01/11/2017 01:36 AM, lakshmipath...@giis.co.in wrote:
What about submitting a btrfs-image and using a generic test load?
Okay, how do I share the corrupted btrfs-image? Using GitHub? And also do
you have references for this kind of
setup under btrfs-progs/tests/? So that I can follow its
I would like to use this thread to ask a few questions:
If we have 2 devices dying on us and we run RAID6, this will theoretically
still run (despite our current problems). Now let’s say that we booted up a RAID6
of 10 disks and 2 of them die, but the operator does NOT know the dev IDs of the
disk
I had an 8TB btrfs filesystem setup with my data, and added a 4TB and
3TB drive to it and then ran a raid1 conversion using the command
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs
It was running OK, but then I restarted the computer without pausing
the rebalance (I wasn't aware
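As an aside, a running balance can be paused before a reboot and resumed afterwards; a sketch, assuming the same /mnt/btrfs mount point as in the command above:

```shell
# Assumes the mount point from the original balance command.
btrfs balance status /mnt/btrfs   # see how far the conversion has got
btrfs balance pause /mnt/btrfs    # pause before shutting down
# ...after reboot and remount, resume the conversion:
btrfs balance resume /mnt/btrfs
```

Note that an interrupted balance normally attempts to resume automatically on the next mount; the skip_balance mount option can be used to prevent that while debugging.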