Re: [PULL] Btrfs fixes for 4.10

2017-01-20 Thread Chris Mason

On Fri, Jan 20, 2017 at 03:42:50PM +0100, David Sterba wrote:

Hi,

a few more fixes, please pull. Thanks.


Great, thanks Dave, rolling tests.

-chris
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[GIT PULL] libnvdimm fixes for 4.10-rc5

2017-01-20 Thread Dan Williams
Hi Linus, please pull from:

  git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm libnvdimm-fixes

...to receive:

* A regression fix for the multiple-pmem-namespace-per-region support
added in 4.9. Even if an existing environment is not using that
feature, the act of creating and destroying a single namespace with
the ndctl utility will lead to a proliferation of extra, unwanted
namespace devices.

* A fix for the error code returned from the pmem driver when the
memcpy_mcsafe() routine returns -EFAULT. Btrfs seems to be the only
block I/O consumer that tries to parse the meaning of the error code
when it is non-zero.

Neither of these fixes is critical. The namespace leak is awkward in
that it can cause device naming to change, and it complicates debugging
namespace initialization issues. The error code fix is included out of
caution for other consumers that might be expecting -EIO for block I/O
errors.
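The error-code fix follows a simple pattern: whatever the low-level copy routine reports, the block driver hands -EIO back up the stack. A minimal user-space C sketch of that pattern (the stub names are illustrative, not the actual drivers/nvdimm/pmem.c code):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for a machine-check-safe copy that reports a failed read
 * as -EFAULT, loosely modeled on memcpy_mcsafe() (illustrative only). */
static int copy_mcsafe_stub(void *dst, const void *src, size_t n, int fail)
{
	if (fail)
		return -EFAULT;
	memcpy(dst, src, n);
	return 0;
}

/* Normalize the result for block I/O: any failure becomes -EIO,
 * matching what consumers like btrfs expect for read errors. */
static int read_pmem_stub(void *dst, const void *src, size_t n, int fail)
{
	int rc = copy_mcsafe_stub(dst, src, n, fail);

	return rc ? -EIO : 0;
}
```

With this, a failed read surfaces as -EIO instead of leaking the copy routine's -EFAULT to callers that parse the code.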

---

The following changes since commit a121103c922847ba5010819a3f250f1f7fc84ab8:

  Linux 4.10-rc3 (2017-01-08 14:18:17 -0800)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm libnvdimm-fixes

for you to fetch changes up to 1f19b983a8877f81763fab3e693c6befe212736d:

  libnvdimm, namespace: fix pmem namespace leak, delete when size set to zero (2017-01-13 09:50:33 -0800)


Dan Williams (1):
  libnvdimm, namespace: fix pmem namespace leak, delete when size set to zero

Stefan Hajnoczi (1):
  pmem: return EIO on read_pmem() failure

 drivers/nvdimm/namespace_devs.c | 23 ++-
 drivers/nvdimm/pmem.c   |  4 +++-
 2 files changed, 13 insertions(+), 14 deletions(-)

commit d47d1d27fd6206c18806440f6ebddf51a806be4f
Author: Stefan Hajnoczi 
Date:   Thu Jan 5 10:05:46 2017 +

pmem: return EIO on read_pmem() failure

The read_pmem() function uses memcpy_mcsafe() on x86 where an EFAULT
error code indicates a failed read.  Block I/O should use EIO to
indicate failure.  Other pmem code paths (like bad blocks) already use
EIO so let's be consistent.

This fixes compatibility with consumers like btrfs that try to parse the
specific error code rather than treat all errors the same.

Reviewed-by: Jeff Moyer 
Signed-off-by: Stefan Hajnoczi 
Signed-off-by: Dan Williams 

commit 1f19b983a8877f81763fab3e693c6befe212736d
Author: Dan Williams 
Date:   Mon Jan 9 17:30:49 2017 -0800

libnvdimm, namespace: fix pmem namespace leak, delete when size set to zero

Commit 98a29c39dc68 ("libnvdimm, namespace: allow creation of multiple
pmem-namespaces per region") added support for establishing additional
pmem namespaces beyond the seed device, similar to blk namespaces.
However, it neglected to delete the namespace when its size is set to
zero.

Fixes: 98a29c39dc68 ("libnvdimm, namespace: allow creation of multiple pmem-namespaces per region")
Cc: 
Signed-off-by: Dan Williams 


[PATCH] btrfs-progs: sanitize - Use correct source for memcpy

2017-01-20 Thread Goldwyn Rodrigues
From: Goldwyn Rodrigues 

While performing the memcpy, we were copying from the uninitialized dst
rather than from src->data. Though using eb->len would also be correct,
I used src->len to make it more readable.

Signed-off-by: Goldwyn Rodrigues 
---
 image/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/image/main.c b/image/main.c
index 58dcecb..0158844 100644
--- a/image/main.c
+++ b/image/main.c
@@ -550,7 +550,7 @@ static void sanitize_name(struct metadump_struct *md, u8 *dst,
return;
}
 
-   memcpy(eb->data, dst, eb->len);
+   memcpy(eb->data, src->data, src->len);
 
switch (key->type) {
case BTRFS_DIR_ITEM_KEY:
-- 
2.10.2
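The one-liner above swaps an uninitialized destination for the intended source buffer. As a standalone illustration of the corrected pattern (hypothetical struct and helper, not the btrfs-progs code):

```c
#include <string.h>

/* Hypothetical buffer type standing in for an extent buffer. */
struct ebuf {
	char data[32];
	size_t len;
};

/* Copy the payload from the source buffer using the source's length,
 * mirroring the fixed memcpy(eb->data, src->data, src->len). */
static void copy_payload(struct ebuf *eb, const struct ebuf *src)
{
	memcpy(eb->data, src->data, src->len);
	eb->len = src->len;
}
```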



Re: [PATCH v3 0/6] Btrfs: incremental send, fix several case failures

2017-01-20 Thread David Sterba
On Thu, Jan 05, 2017 at 04:24:54PM +0800, robbieko wrote:
> From: Robbie Ko 
> 
> Patches to fix btrfs incremental send.
> These patches are based on v4.8.0-rc8

Is this a typo or did you really base the patches on 4.8-rc8? At the
moment we're nearing the v4.11 development cycle, so patches should be
based on e.g. a recent 4.10-rc from the master branch. Let me know
privately if you need more information on the development cycle and
workflow.

> V3: Improve the change log

Please take Filipe's advice and comments seriously. Changelogs are an
important part of a patch, particularly for fixes that are not obvious
and that address corner cases. Writing good changelogs takes time to
learn; you can find examples in the log of send.c.


Re: [PATCH] btrfs: raid56: Remove unused variable in lock_stripe_add

2017-01-20 Thread David Sterba
On Mon, Jan 16, 2017 at 10:23:06AM +0800, Qu Wenruo wrote:
> Variable 'walk' in lock_stripe_add() is never used.
> Remove it.
> 
> Signed-off-by: Qu Wenruo 

Added to 4.11 queue, thanks. Changelog and subject edited.


Re: [PATCH] btrfs-progs: lowmem-check: Fix false alert on dropped leaf

2017-01-20 Thread David Sterba
On Wed, Jan 18, 2017 at 01:21:07PM +0800, Qu Wenruo wrote:
> For btrfs-progs test case 021-partially-dropped-snapshot-case, if the
> first leaf is already dropped, btrfs check low-memory mode will report
> false alert:
> 
> checking fs roots
> checksum verify failed on 29917184 found E4E3BDB6 wanted 
> checksum verify failed on 29917184 found E4E3BDB6 wanted 
> checksum verify failed on 29917184 found E4E3BDB6 wanted 
> checksum verify failed on 29917184 found E4E3BDB6 wanted 
> 
> This is caused by the call to check_fs_first_inode(): unlike the rest
> of check_fs_root_v2(), it doesn't check the dropping progress
> sufficiently, which causes the false alert.
> 
> Fix it by checking the dropping progress before searching the slot.
> 
> Signed-off-by: Qu Wenruo 

Applied, thanks.


Re: [PATCH] Btrfs: refactor btrfs_extent_same() slightly

2017-01-20 Thread David Sterba
On Tue, Jan 17, 2017 at 11:37:38PM -0800, Omar Sandoval wrote:
> From: Omar Sandoval 
> 
> This was originally a prep patch for changing the behavior on len=0, but
> we went another direction with that. This still makes the function
> slightly easier to follow.
> 
> Reviewed-by: Qu Wenruo 
> Signed-off-by: Omar Sandoval 
> ---
> Qu thought this would still be a worthwhile cleanup. I'm fine either
> way. Applies to Dave's for-next branch.

Reviewed-by: David Sterba 


Re: [PATCH] Btrfs: constify struct btrfs_{,disk_}key wherever possible

2017-01-20 Thread David Sterba
On Tue, Jan 17, 2017 at 11:24:37PM -0800, Omar Sandoval wrote:
> From: Omar Sandoval 
> 
> In a lot of places, it's unclear when it's safe to reuse a struct
> btrfs_key after it has been passed to a helper function. Constify these
> arguments wherever possible to make it obvious.
> 
> Signed-off-by: Omar Sandoval 
> ---
> This applies to Dave's for-next branch. If it's too intrusive of a
> change, it can wait, but I think it's a nice cleanup.

Applies cleanly on the for 4.11 branch (that now has most of for-next),
thanks.


Re: btrfs check lowmem vs original

2017-01-20 Thread Chris Murphy
For sanitizing the debug tree I tried this backwards method:

# btrfs-image -c5 -t4 -s /dev/mapper/brick1 > btrfsimage_30f4724a.bin
# lvcreate -V 603440807936b -T vg/thintastic -n btr1
# btrfs-image -r btrfsimage_30f4724a.bin /dev/vg/btr1
# btrfs inspect-internal dump-tree /dev/vg/btr1 > btrfsdebugtree_30f4724a.log

But I see that, by default, -r makes modifications to the chunk tree, so
maybe it's not a useful debug tree? Anyway, the two btrfs checks
(original and lowmem) and the debug tree are here:

30f4724a-first.logs.tar 254MiB
https://drive.google.com/open?id=0B_2Asp8DGjJ9QmZOV3pNS212Yk0

There are 4 more file systems, 2 are single device, 2 are two device
(raid1) fs's. I'm not sure how to use this indirect btrfs-image
approach with raid1.


Chris Murphy


Re: [PATCH v2] Btrfs: clean up btrfs_ordered_update_i_size

2017-01-20 Thread David Sterba
On Thu, Jan 12, 2017 at 08:16:17AM -0800, Liu Bo wrote:
> Since we have a good helper entry_end, use it for ordered extent.
> 
> Signed-off-by: Liu Bo 

Reviewed-by: David Sterba 
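For context, an entry_end-style helper computes the exclusive end offset of an extent record, so range comparisons read as entry_end(a) <= b->file_offset rather than open-coded arithmetic. A hedged sketch with illustrative field names (the kernel's actual struct differs):

```c
#include <stdint.h>

/* Illustrative record type, not the kernel's btrfs_ordered_extent. */
struct ordered_extent {
	uint64_t file_offset;	/* start of the range in the file */
	uint64_t len;		/* length of the range in bytes */
};

/* Exclusive end offset of the record: start + length. */
static uint64_t entry_end(const struct ordered_extent *entry)
{
	return entry->file_offset + entry->len;
}
```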


Re: [PATCH v2] Btrfs: fix btrfs_ordered_update_i_size to update disk_i_size properly

2017-01-20 Thread David Sterba
On Thu, Jan 12, 2017 at 08:13:26AM -0800, Liu Bo wrote:
> btrfs_ordered_update_i_size can be called by truncate and endio, but
> only endio takes an ordered_extent, which contains the completed IO.
> 
> While truncating down a file, if there are some in-flight IOs,
> btrfs_ordered_update_i_size in endio will set disk_i_size to
> @orig_offset, which is zero. If truncating down fails somehow, we try
> to recover the in-memory isize with this zero'd disk_i_size.
> 
> Fix it by only updating disk_i_size with @orig_offset when
> btrfs_ordered_update_i_size is not called from endio while truncating
> down, and by waiting for in-flight IOs to complete before recovering
> the in-memory size.
> 
> Besides fixing the above issue, add an assertion for last_size to
> double-check that we truncate down to the desired size.
> 
> Signed-off-by: Liu Bo 

Looks good to me, added to 4.11 queue.


Re: btrfs check lowmem vs original

2017-01-20 Thread Chris Murphy
On Thu, Jan 19, 2017 at 10:45 PM, Qu Wenruo  wrote:

>> Another file system, 15 minutes old with two mounts in its whole
>> lifetime, and only written with kernel 4.10-rc3 has over 30 lines of
>> varying numbers:
>>
>> ERROR: root 257 EXTENT DATA[150134 11317248] prealloc shouln't have
>> datasum
>>
>> That file system should have no preallocated extents (It's a clean
>> installation of Fedora Rawhide, using only rsync)
>
>
> btrfs-debug-tree will help to make sure what is wrong.

6b187fa6.logs.tar.gz 20M
https://drive.google.com/open?id=0B_2Asp8DGjJ9SlRvZ2plNXVmTUU


That's the small recent one, generic content. The others are bigger,
and I should probably sanitize the filenames from debug-tree but can't
find in the archives how to do that. Is btrfs-image useful for this?




> That's why lowmem mode is still not the default option.
>
> The problem of original mode is, if you're checking a TB-level fs with
> only 2 or 4 GiB of RAM, it's quite possible you run out of memory and
> won't be able to check the fs at all, which is more severe than annoying.

Fair enough.



-- 
Chris Murphy


[PULL] Btrfs fixes for 4.10

2017-01-20 Thread David Sterba
Hi,

a few more fixes, please pull. Thanks.


The following changes since commit 0bf70aebf12d8fa0d06967b72ca4b257eb6adf06:

  Merge branch 'tracepoint-updates-4.10' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux into for-linus-4.10 (2017-01-11 06:26:12 -0800)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-chris

for you to fetch changes up to 91298eec05cd8d4e828cf7ee5d4a6334f70cf69a:

  Btrfs: fix truncate down when no_holes feature is enabled (2017-01-19 18:02:22 +0100)


Chandan Rajendra (1):
  Btrfs: Fix deadlock between direct IO and fast fsync

Liu Bo (1):
  Btrfs: fix truncate down when no_holes feature is enabled

Wang Xiaoguang (1):
  btrfs: fix false enospc error when truncating heavily reflinked file

 fs/btrfs/inode.c | 18 +++---
 1 file changed, 15 insertions(+), 3 deletions(-)


Re: [PATCH] btrfs-progs: lowmem-check: Fix wrong extent tree iteration

2017-01-20 Thread Christoph Anton Mitterer
On Fri, 2017-01-20 at 15:58 +0800, Qu Wenruo wrote:
> Nice to hear that, although the -5 error seems to be caught.
> I'll locate the problem and then send the patch.
> 
> Thanks for your testing!

You're welcome... just ping me once I should do another run.

Cheers,
Chris.



Re: btrfs recovery

2017-01-20 Thread Sebastian Gottschall

Am 20.01.2017 um 09:05 schrieb Duncan:

Sebastian Gottschall posted on Thu, 19 Jan 2017 11:06:19 +0100 as
excerpted:


I have a question. after a power outage my system was turning into a
unrecoverable state using btrfs (kernel 4.9)
since im running --init-extent-tree now for 3 days i'm asking how long
this process normally takes

QW has the better direct answer for you, but...

This is just a note to remind you, in general questions like "how long"
can be better answered if we know the size of your filesystem, the mode
(how many devices and what duplication mode for data and metadata) and
something about how you use it -- how many subvolumes and snapshots you
have, whether you have quotas enabled, etc.
Hard to give an answer right now since the fs is still in
--init-extent-tree, so I cannot get any details from it while this
process is running.
It was a standard openSUSE 42.1 installation with btrfs as rootfs. The
size is about 1.8 TB. No soft RAID; it's a hardware RAID6 system using
an Areca controller, running all as a single device.


Normally output from commands like btrfs fi usage can answer most of the
filesystem size and mode stuff, but of course that command requires a
mount, and you're doing an unmounted check ATM.  However, btrfs fi show
should still work and give us basic information like file size and number
of devices, and you can fill in the blanks from there.

0:rescue:~ # btrfs.static fi show
Label: none  uuid: 946b1a04-c321-4a24-bfb4-d6dcfa8b52dc
Total devices 1 FS bytes used 1.15TiB
devid1 size 1.62TiB used 1.37TiB path /dev/sda3



You did mention the kernel version (4.9) however, something that a lot of
reports miss, and you're current, so kudos for that. =:^)

I was reading other reports first, so I know what's expected :-)
Besides, I'm a Linux developer as well, so I know what's most important
to include, and most systems I run are close to up to date.


As to your question, assuming a terabyte scale filesystem, as QW
suggested, a full extent tree rebuild is a big job and could indeed take
awhile (days).

4992 minutes now, so about 3.5 days


 From a practical perspective...

Given the state of btrfs as a still stabilizing and maturing filesystem,
having backups for any data you value more than the time and hassle
necessary to do the backup is even more a given than on a fully stable
filesystem, which means, given the time for an extent tree rebuild on
that size of a filesystem, unless you're doing the rebuild specifically
to get the experience or test the code, as a practical matter it's
probably simply easier to restore from that backup if you valued the data
enough to have one, or simply scrap the filesystem and start over if you
considered the data worth less than the time and hassle of a backup, and
thus didn't have one.
I have a backup for sure for the worst case, it's just not always up to
date, which means I might lose at most 6-7 days of minor work, since I
cannot mirror the whole filesystem every second.
Source code in the repository is safe for sure and nothing will be lost,
but it always takes some time to get the backup back onto the system,
reinstalling the OS, etc. My OS is not very vanilla; it's all a little
bit customized, I'm not sure how I did it last time, and it would take
some time to find the right path back. So it's worth trying this before
going the hard way.





--
Mit freundlichen Grüssen / Regards

Sebastian Gottschall / CTO

NewMedia-NET GmbH - DD-WRT
Firmensitz:  Berliner Ring 101, 64625 Bensheim
Registergericht: Amtsgericht Darmstadt, HRB 25473
Geschäftsführer: Peter Steinhäuser, Christian Scheele
http://www.dd-wrt.com
email: s.gottsch...@dd-wrt.com
Tel.: +496251-582650 / Fax: +496251-5826565



Re: btrfs recovery

2017-01-20 Thread Sebastian Gottschall

Am 20.01.2017 um 02:08 schrieb Qu Wenruo:



At 01/19/2017 06:06 PM, Sebastian Gottschall wrote:

Hello

I have a question: after a power outage my system turned into an
unrecoverable state using btrfs (kernel 4.9).
Since I've been running --init-extent-tree for 3 days now, I'm asking
how long this process normally takes, and why it outputs millions of
lines like


--init-extent-tree will trash the *ENTIRE* current extent tree and
*REBUILD* it from the fs trees.


This can take a long time depending on the size of the fs and on how
many shared extents there are (snapshots and reflinks all count).
It's about 1.8 TB, so not a huge size, but millions of files. It's a
build server.


Such a huge operation should only be used if you're sure that only the
extent tree is corrupted and all other trees are OK.
Since operations like zero-log don't help, and scrub cancels after 5
seconds with an error (can't remember the exact error right now),

I'm sure there is something corrupt in it.


Or you'll just screw up your fs further, especially if the operation
is interrupted.

It has been running for 4 days now, and for sure I won't interrupt it
in this state.




Backref 1562890240 root 262 owner 483059214 offset 0 num_refs 0 not
found in extent tree
Incorrect local backref count on 1562890240 root 262 owner 483059214
offset 0 found 1 wanted 0 back 0x23b0211d0
backpointer mismatch on [1562890240 4096]


This is common: since --init-extent-tree trashes the whole extent tree,
every tree block / data extent will trigger such output



adding new data backref on 1562890240 root 262 owner 483059214 offset 0
found 1
Repaired extent references for 1562890240


But as you see, it repaired the extent tree by adding the
EXTENT_ITEM/METADATA_ITEM entries back into the extent tree, so far it
works.


If you see such output with all the same bytenr, then things have gone
really wrong, maybe a dead loop.
They are all incremental, so it looks okay then. I just don't know
where the end is.


Personally speaking, a normal problem like a mount failure should not
need --init-extent-tree.


In particular, extent-tree corruption is normally not related to mount
failure, but to a sudden remount to RO and a kernel warning.

Initially I was able to mount the fs, but it turned read-only.


Thanks,
Qu



please avoid typical answers like "potentially dangerous operation",
since all repair options are declared as potentially dangerous.


Sebastian








--
Mit freundlichen Grüssen / Regards

Sebastian Gottschall / CTO

NewMedia-NET GmbH - DD-WRT
Firmensitz:  Berliner Ring 101, 64625 Bensheim
Registergericht: Amtsgericht Darmstadt, HRB 25473
Geschäftsführer: Peter Steinhäuser, Christian Scheele
http://www.dd-wrt.com
email: s.gottsch...@dd-wrt.com
Tel.: +496251-582650 / Fax: +496251-5826565



Re: btrfs recovery

2017-01-20 Thread Duncan
Sebastian Gottschall posted on Thu, 19 Jan 2017 11:06:19 +0100 as
excerpted:

> I have a question. after a power outage my system was turning into a
> unrecoverable state using btrfs (kernel 4.9)
> since im running --init-extent-tree now for 3 days i'm asking how long
> this process normally takes

QW has the better direct answer for you, but...

This is just a note to remind you, in general questions like "how long" 
can be better answered if we know the size of your filesystem, the mode 
(how many devices and what duplication mode for data and metadata) and 
something about how you use it -- how many subvolumes and snapshots you 
have, whether you have quotas enabled, etc.

Normally output from commands like btrfs fi usage can answer most of the 
filesystem size and mode stuff, but of course that command requires a 
mount, and you're doing an unmounted check ATM.  However, btrfs fi show 
should still work and give us basic information like file size and number 
of devices, and you can fill in the blanks from there.

You did mention the kernel version (4.9) however, something that a lot of 
reports miss, and you're current, so kudos for that. =:^)

As to your question, assuming a terabyte scale filesystem, as QW 
suggested, a full extent tree rebuild is a big job and could indeed take 
awhile (days).

From a practical perspective...

Given the state of btrfs as a still stabilizing and maturing filesystem, 
having backups for any data you value more than the time and hassle 
necessary to do the backup is even more a given than on a fully stable 
filesystem, which means, given the time for an extent tree rebuild on 
that size of a filesystem, unless you're doing the rebuild specifically 
to get the experience or test the code, as a practical matter it's 
probably simply easier to restore from that backup if you valued the data 
enough to have one, or simply scrap the filesystem and start over if you 
considered the data worth less than the time and hassle of a backup, and 
thus didn't have one.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: [PATCH] btrfs-progs: lowmem-check: Fix wrong extent tree iteration

2017-01-20 Thread Qu Wenruo



At 01/20/2017 01:10 AM, Christoph Anton Mitterer wrote:

Hey Qu.


On Wed, 2017-01-18 at 16:48 +0800, Qu Wenruo wrote:

To Christoph,

Would you please try this patch, to see if it suppresses the block
group warning?

I did another round of fsck in both modes (original/lowmem), first
WITHOUT your patch, then WITH it... both on progs version 4.9... no
further RW mount between these 4 runs:


btrfs-progs v4.9 WITHOUT patch:
***
# btrfs check /dev/nbd0 ; echo $?
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
Checking filesystem on /dev/nbd0
UUID: 326d292d-f97b-43ca-b1e8-c722d3474719
found 7469206884352 bytes used err is 0
total csum bytes: 7281779252
total tree bytes: 10837262336
total fs tree bytes: 2011906048
total extent tree bytes: 1015349248
btree space waste bytes: 922444044
file data blocks allocated: 7458369622016
 referenced 7579485159424
0

# btrfs check --mode=lowmem /dev/nbd0 ; echo $?
checking extents
ERROR: block group[74117545984 1073741824] used 1073741824 but extent items used 0
ERROR: block group[239473786880 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[500393050112 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[581997428736 1073741824] used 1073741824 but extent items used 0
ERROR: block group[626557714432 1073741824] used 1073741824 but extent items used 0
ERROR: block group[668433645568 1073741824] used 1073741824 but extent items used 0
ERROR: block group[948680261632 1073741824] used 1073741824 but extent items used 0
ERROR: block group[982503129088 1073741824] used 1073741824 but extent items used 0
ERROR: block group[1039411445760 1073741824] used 1073741824 but extent items used 0
ERROR: block group[1054443831296 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[1190809042944 1073741824] used 1073741824 but extent items used 0
ERROR: block group[1279392743424 1073741824] used 1073741824 but extent items used 0
ERROR: block group[1481256206336 1073741824] used 1073741824 but extent items used 0
ERROR: block group[1620842643456 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[1914511032320 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[3055361720320 1073741824] used 1073741824 but extent items used 0
ERROR: block group[3216422993920 1073741824] used 1073741824 but extent items used 0
ERROR: block group[3670615785472 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[3801612288000 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[3828455833600 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[4250973241344 1073741824] used 1073741824 but extent items used 0
ERROR: block group[4261710659584 1073741824] used 1073741824 but extent items used 1074266112
ERROR: block group[4392707162112 1073741824] used 1073741824 but extent items used 0
ERROR: block group[4558063403008 1073741824] used 1073741824 but extent items used 0
ERROR: block group[4607455526912 1073741824] used 1073741824 but extent items used 0
ERROR: block group[4635372814336 1073741824] used 1073741824 but extent items used 0
ERROR: block group[4640204652544 1073741824] used 1073741824 but extent items used 0
ERROR: block group[4642352136192 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[4681006841856 1073741824] used 1073741824 but extent items used 0
ERROR: block group[5063795802112 1073741824] used 1073741824 but extent items used 0
ERROR: block group[5171169984512 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[5216267141120 1073741824] used 1073741824 but extent items used 1207959552
ERROR: block group[5290355326976 1073741824] used 1073741824 but extent items used 0
ERROR: block group[5445511020544 1073741824] used 1073741824 but extent items used 1074266112
ERROR: block group[6084387405824 1073741824] used 1073741824 but extent items used 0
ERROR: block group[6104788500480 1073741824] used 1073741824 but extent items used 0
ERROR: block group[6878956355584 1073741824] used 1073741824 but extent items used 0
ERROR: block group[6997067956224 1073741824] used 1073741824 but extent items used 0
ERROR: block group[7702516334592 1073741824] used 1073741824 but extent items used 0
ERROR: block group[8051482427392 1073741824] used 1073741824 but extent items used 1084751872
ERROR: block group[8116980678656 1073741824] used 1073217536 but extent items used 0
ERROR: errors found in extent allocation tree or chunk allocation
checking free space cache
checking fs roots
Checking filesystem on /dev/nbd0
UUID: 326d292d-f97b-43ca-b1e8-c722d3474719
found 7469206884352 bytes used err is -5
total csum bytes: 7281779252
total tree bytes: 10837262336
total fs tree bytes: 2011906048
total extent tree bytes: