Re: python-btrfs v10 preview... detailed usage reporting and a tutorial

2018-10-07 Thread Adam Borowski
On Mon, Oct 08, 2018 at 02:03:44AM +0200, Hans van Kranenburg wrote:
> And yes, when promoting things like the new show_usage example to
> programs that are easily available, users will probably start parsing
> the output of them with sed and awk which is a total abomination and the
> absolute opposite of the purpose of the library. So be it. Let it go. :D
> "The code never bothered me any way".

It's not like some deranged person would parse the output of, say, show_file
in Perl...
 
> The interesting question that remains is where the result should go.
> 
> btrfs-heatmap is a thing of its own now, but it's a bit of the "show
> case" example using the lib, with its own collection of documentation
> and even possibility to script it again.
> 
> Shipping the 'binaries' in the python3-btrfs package wouldn't be the
> right thing, so where should they go? apt-get install btrfs-moar-utils-yolo?

At least in Debian, moving executables between packages is a matter of
versioned Replaces (+Conflicts: old), so if at any point you decide differently
it's not a problem.  So btrfs-moar-utils-yolo should work well.
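For reference, the move Adam describes would look roughly like this in the new package's debian/control stanza (package name from the joke above; the version numbers are hypothetical, and current Debian Policy spells the "Conflicts: old" part as Breaks for moved files):

```text
Package: btrfs-moar-utils-yolo
Architecture: all
Depends: ${misc:Depends}, python3-btrfs
# Hypothetical: the scripts moved out of python3-btrfs before version 11.
Replaces: python3-btrfs (<< 11)
Breaks: python3-btrfs (<< 11)
Description: extra btrfs utilities built on python-btrfs
```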

> Or should btrfs-progs start to use this to accelerate improvement for
> providing a richer collection of useful progs for things that are not on
> essential level (like, you won't need them inside initramfs, so they can
> use python)?

You might want your own package that's agile and btrfs-progs for things
declared to be rock stable (WRT command-line API, not necessarily stability
of code).

Meow!
-- 
⢀⣴⠾⠻⢶⣦⠀ 
⣾⠁⢰⠒⠀⣿⡁ 10 people enter a bar: 1 who understands binary,
⢿⡄⠘⠷⠚⠋⠀ 1 who doesn't, D who prefer to write it as hex,
⠈⠳⣄ and 1 who narrowly avoided an off-by-one error.


Re: python-btrfs v10 preview... detailed usage reporting and a tutorial

2018-10-07 Thread Hans van Kranenburg
Hi,

On 09/24/2018 01:19 AM, Adam Borowski wrote:
> On Sun, Sep 23, 2018 at 11:54:12PM +0200, Hans van Kranenburg wrote:
>> Two examples have been added, which use the new code. I would appreciate
>> extra testing. Please try them and see if the reported numbers make sense:
>>
>> space_calculator.py
>> ---
>> Best to be initially described as a CLI version of the well-known
>> web-based btrfs space calculator by Hugo. ;] Throw a few disk sizes at
>> it, choose data and metadata profile and see how much space you would
>> get to store actual data.
>>
>> See commit message "Add example to calculate usable and wasted space"
>> for example output.
>>
>> show_usage.py
>> -
>> The contents of the old show_usage.py example that simply showed a list
>> of block groups are replaced with a detailed usage report of an existing
>> filesystem.
> 
> I wonder, perhaps at least some of the examples could be elevated to
> commands meant to be run by end-user?  Ie, installing them to /usr/bin/,
> dropping the extension?  They'd probably need less generic names, though.

Some of the examples are very useful, and I keep using them frequently.
That's actually also why, for now, I have just copied examples/ to
/usr/share/doc/python3-btrfs/examples for the Debian package, so that
they're easily available on all systems that I work on.

Currently the examples collection is serving a few purposes. It's my
poor man's testing framework, which covers all functionality of the lib.
It displays all the things that you can do. There's a rich git commit
message history on them, which I plan to transform into documentation
and tutorial stuff later.

So, yes, a bunch of these things are actually quite useful. The new
show_usage and space_calculator are examples of what is possible, and
they start to rise above the level of small debugging thingies.

So what would be candidates to be promoted to 'official' utils?

0) Ah, btrfs-heatmap

Yeah, that's the thing it all started with. I started writing all of the
code to be able to debug why my filesystems were allocating raw disk
space all the time and not reusing the free already allocated space.
But, that one is already done.

https://github.com/knorrie/btrfs-heatmap/

1) Custom btrfs balance

If really needed (and luckily, the need for it is mostly removed after
solving the -o ssd issues) I always use balance_least_used.py instead of
regular btrfs balance. I think it totally makes sense to do the analysis
of which block groups to feed to balance, and in what order, in user space.

I also used another custom script to feed block groups with highly
fragmented free space to balance to try repairing filesystems that had
been using the cluster data extent allocator. That's not in examples,
but when you combine show_free_space_fragmentation with parts of
balance_least_used, you get the idea.

The best example I can think of here is a program that uses the new
usage information to work out how to feed block groups to balance to
actually get a balanced filesystem with a minimal amount of wasted raw
space, and then do exactly that in the quickest way possible while
providing interesting progress information, instead of just brute force
rewriting all of the data and having no idea what's actually happening.
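As a sketch of that analysis step (illustrative only: the BlockGroup record and the threshold here are made up for the example, not the python-btrfs API), ordering block groups the way balance_least_used does could look like:

```python
# Sketch of the block-group ordering idea behind balance_least_used.py:
# rewrite the least-used block groups first, so their extents migrate
# into existing free space and the emptied groups can be removed.
from collections import namedtuple

BlockGroup = namedtuple('BlockGroup', ['vaddr', 'length', 'used'])

def balance_order(block_groups, max_used_pct=90):
    """Return block groups worth feeding to balance, least used first."""
    candidates = [bg for bg in block_groups
                  if bg.used * 100 // bg.length <= max_used_pct]
    return sorted(candidates, key=lambda bg: bg.used / bg.length)

groups = [
    BlockGroup(vaddr=1 << 30, length=1 << 30, used=900 << 20),  # ~88% used
    BlockGroup(vaddr=2 << 30, length=1 << 30, used=100 << 20),  # ~10% used
    BlockGroup(vaddr=3 << 30, length=1 << 30, used=1 << 30),    # full, skip
]
for bg in balance_order(groups):
    print('vaddr %d: %.0f%% used' % (bg.vaddr, 100 * bg.used / bg.length))
```

The real script would then feed each vaddr to the balance ioctl with a vrange filter; the point is just that the ordering decision is plain user-space arithmetic.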

2) Advanced usage reporting

Something like the new show_usage, but hey, when using python with some
batteries included, I guess we can relatively easily do a nice html or
pdf output with pie and bar charts which provide the user with
information about the filesystem. Just having users run that when
they're asking for help on IRC and share the result would be nice. :o)

3) The space calculator

Yup, obviously.
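A minimal sketch of the idea (my simplification, not the actual space_calculator.py code): simulate chunk allocation against per-device free space, always picking the devices with the most room, until allocation fails.

```python
# Greedy chunk-allocation simulation behind a btrfs space calculator.
# Simplified: only single, dup and raid1 profiles, fixed chunk size,
# no metadata reservation.
def usable_space(dev_sizes, profile='raid1', chunk=1 << 30):
    free = list(dev_sizes)
    usable = 0
    while True:
        free.sort(reverse=True)  # allocate from the most-free devices
        if profile == 'single':
            if free[0] < chunk:
                break
            free[0] -= chunk
        elif profile == 'dup':  # two copies on the same device
            if free[0] < 2 * chunk:
                break
            free[0] -= 2 * chunk
        elif profile == 'raid1':  # one copy on each of two devices
            if len(free) < 2 or free[1] < chunk:
                break
            free[0] -= chunk
            free[1] -= chunk
        usable += chunk
    return usable

GiB = 1 << 30
# 4T + 2T + 2T in raid1: everything can be mirrored, 4 TiB usable.
print(usable_space([4096 * GiB, 2048 * GiB, 2048 * GiB], 'raid1') // GiB)
```

The real example also handles the striped and parity profiles and separate metadata profiles; the simulation itself is straightforward user-space arithmetic.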

4) Maybe show_orphan_cleaner_progress

I use that one now and then to get a live view on mass-removal of
subvolumes (backup snapshot expiry), but it's very close to a debug
tool. Or maybe I'm already spoiled and used to it by now, and I don't
realize any more how frustrating it must be to see disk IO and cpu go
all over the place and have no idea what btrfs is doing.

5) So much more...

So... the examples are just basic test coverage. There is so much more
that can be done.

And yes, to be able to write a small thingie that uses the lib, you
already have to know a lot about btrfs. -> That's why I started writing
the tutorial.

And yes, when promoting things like the new show_usage example to
programs that are easily available, users will probably start parsing
the output of them with sed and awk which is a total abomination and the
absolute opposite of the purpose of the library. So be it. Let it go. :D
"The code never bothered me any way".

The interesting question that remains is where the result should go.

btrfs-heatmap is a thing of its own now, but it's a bit of the "show
case" example using the lib, with its own collection of documentation
and even possibility to script it again.

Shipping the 'binaries' in the python3-btrfs package wouldn't be the
right thing, 

Re: [PATCH v2 5/9] generic/102 open code dev_size _scratch_mkfs_sized()

2018-10-07 Thread Eryu Guan
On Wed, Sep 26, 2018 at 12:08:56PM +0800, Anand Jain wrote:
> 
> 
> On 09/25/2018 06:54 PM, Nikolay Borisov wrote:
> > 
> > 
> > On 25.09.2018 07:24, Anand Jain wrote:
> > > Open code helps to grep and find out parameter sent to the
> > > _scratch_mkfs_sized here.
> > > 
> > > Signed-off-by: Anand Jain 
> > 
> > IMO this is noise, you can just as simply do
> > "grep _scratch_mkfs_sized" and then open the file to inspect the actual
> > argument. But it's up to the xfstest maintainers
> 
>  I am ok. It's just a nice cleanup.
> 
> Thanks, Anand

I prefer dropping patch 5/6/7, as I don't think they're that necessary.

BTW, the other patches from this series except patch 3 ("geneirc/077 fix
min size for btrfs") look fine to me; I'm taking them in this week's update.

Thanks,
Eryu

> 
> > > ---
> > >   tests/generic/102 | 3 +--
> > >   1 file changed, 1 insertion(+), 2 deletions(-)
> > > 
> > > diff --git a/tests/generic/102 b/tests/generic/102
> > > index faf940ac5070..aad496a5bc69 100755
> > > --- a/tests/generic/102
> > > +++ b/tests/generic/102
> > > @@ -31,8 +31,7 @@ _require_scratch
> > >   rm -f $seqres.full
> > > -dev_size=$((512 * 1024 * 1024)) # 512MB filesystem
> > > -_scratch_mkfs_sized $dev_size >>$seqres.full 2>&1
> > > +_scratch_mkfs_sized $((512 * 1024 * 1024)) >>$seqres.full 2>&1
> > >   _scratch_mount
> > >   for ((i = 0; i < 10; i++)); do
> > > 


Monitoring btrfs with Prometheus (and soon OpenMonitoring)

2018-10-07 Thread Holger Hoffstätte



The Prometheus statistics collection/aggregation/monitoring/alerting system
[1] is quite popular, easy to use and will probably be the basis for the
upcoming OpenMetrics "standard" [2].

Prometheus collects metrics by polling host-local "exporters" that respond
to http requests; many such exporters exist, from the generic node_exporter
for OS metrics to all sorts of application-/service-specific varieties.

Since btrfs already exposes quite a lot of monitorable and - more
importantly - actionable runtime information in sysfs it only makes sense
to expose these metrics for visualization & alerting. I noodled over the
idea some time ago but got sidetracked, besides not being thrilled at all
by the idea of doing this in golang (which I *really* dislike).

However, exporters can be written in any language as long as they speak
the standard response protocol, so an alternative would be to use one
of the other official exporter clients. These provide language-native
"mini-frameworks" where one only has to fill in the blanks (see [3]
for examples).

Since the issue just came up in the node_exporter bugtracker [4] I
figured I'd ask if anyone here is interested in helping build a proper
standalone btrfs_exporter in C++? :D

..just kidding, I'd probably use python (which I kind of don't really
know either :) and build on Hans' python-btrfs library for anything
not covered by sysfs.
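For a feel of what such an exporter has to produce, here is a sketch of the text exposition format it would serve (assumptions: the metric name, labels and sample numbers are mine, and a real exporter would read actual values from /sys/fs/btrfs/<fsid>/allocation/* instead):

```python
# Render btrfs allocation numbers in the Prometheus text exposition
# format.  A real exporter would serve this over HTTP (client_python's
# start_http_server does that plumbing) from live sysfs values.
def render_metrics(fsid, allocation):
    """allocation: {profile: {field: value}}, e.g. scraped from sysfs."""
    lines = [
        '# HELP btrfs_allocation_bytes Allocation info per block group type.',
        '# TYPE btrfs_allocation_bytes gauge',
    ]
    for profile, fields in sorted(allocation.items()):
        for field, value in sorted(fields.items()):
            lines.append(
                'btrfs_allocation_bytes{fsid="%s",type="%s",field="%s"} %d'
                % (fsid, profile, field, value))
    return '\n'.join(lines) + '\n'

sample = {
    'data': {'total_bytes': 8 << 30, 'bytes_used': 5 << 30},
    'metadata': {'total_bytes': 1 << 30, 'bytes_used': 300 << 20},
}
print(render_metrics('example-fsid', sample), end='')
```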

Anybody interested in helping? Apparently there are also golang libs
for btrfs [5] but I don't know anything about them (if you do, please
comment on the bug), and the idea of adding even more stuff into the
monolithic, already creaky and somewhat bloated node_exporter is not
appealing to me.

Potential problems wrt. btrfs are access to root-only information,
like e.g. the btrfs device stats/errors in the aforementioned bug,
since exporters are really supposed to run unprivileged due to network
exposure. The S.M.A.R.T. exporter [6] solves this with dual-process
contortions; obviously it would be better if all relevant metrics were
accessible directly in sysfs without requiring privileged access, but
forking a tiny privileged process every polling interval is probably
not that bad.

All ideas welcome!

cheers,
Holger

[1] https://www.prometheus.io/
[2] https://openmetrics.io/
[3] https://github.com/prometheus/client_python,
https://github.com/prometheus/client_ruby
[4] https://github.com/prometheus/node_exporter/issues/1100
[5] 
https://github.com/prometheus/node_exporter/issues/1100#issuecomment-427651028
[6] https://github.com/cloudandheat/prometheus_smart_exporter


Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread evan d
Thanks for looking at it for me, appreciate the input.
On Sun, Oct 7, 2018 at 2:25 PM evan d  wrote:
>
> > > I may as well use wipefs to clear crud from both drives, partition and
> > > format them and then use them elsewhere.   -- this more or less
> > > accurately summarise the situation?
> >
> > Unfortunately, yes.
>
>
> I recall the machine these drives were in lost the onboard NIC when
> the desktop switch it was connected to went up in smoke.  Perhaps it
> was at that point the corruption occurred, albeit I seem to recall
> checking the drives at the time.  Such is life.  If they pass extended
> SMART tests they'll go back into use.


Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread evan d
> > I may as well use wipefs to clear crud from both drives, partition and
> > format them and then use them elsewhere.   -- this more or less
> > accurately summarise the situation?
>
> Unfortunately, yes.


I recall the machine these drives were in lost the onboard NIC when
the desktop switch it was connected to went up in smoke.  Perhaps it
was at that point the corruption occurred, albeit I seem to recall
checking the drives at the time.  Such is life.  If they pass extended
SMART tests they'll go back into use.


Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread Qu Wenruo


On 2018/10/7 下午6:39, evan d wrote:
>>> like so?:
>>> grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" /dev/sdc
>>>
>> Yes. And it will be very slow, since you're going to read out the whole
>> disk.
>>
>> But I don't really think you would get a hit, judging from the current
>> results.
> 
> Ok, so it is what it is.  Based on what you're telling me, whilst the
> data may be there and intact, the superblock is irretrievably damaged
> and the data is thus for all intents and purposes lost.

Although the result is the same (data all lost), I don't believe it's
only the super block that is corrupted.
If your data contains some special magic string, like "#!/bin/bash" or
"\x7f\x45\x4c\x46\x02\x01\x01\x00" (ELF header), you could try to grep
for it through the whole disk.

I believe the data (or at least part of the data) is also corrupted in
this case.

> 
> I may as well use wipefs to clear crud from both drives, partition and
> format them and then use them elsewhere.   -- this more or less
> accurately summarise the situation?

Unfortunately, yes.

Thanks,
Qu

> 



signature.asc
Description: OpenPGP digital signature


Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread evan d
> > like so?:
> > grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" /dev/sdc
> >
> Yes. And it will be very slow, since you're going to read out the whole
> disk.
>
> But I don't really think you would get a hit, judging from the current
> results.

Ok, so it is what it is.  Based on what you're telling me, whilst the
data may be there and intact, the superblock is irretrievably damaged
and the data is thus for all intents and purposes lost.

I may as well use wipefs to clear crud from both drives, partition and
format them and then use them elsewhere.   -- this more or less
accurately summarise the situation?


Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread Qu Wenruo


On 2018/10/7 下午4:28, evan d wrote:
 # dd if=/dev/sdb bs=1M of=last_chance.raw count=128 skip=256M
 # grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" last_chance.raw
> 
> grep returns no result on either drive
> 
> If still no hit, you could try just running the grep command on the disk.
> 
> like so?:
> grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" /dev/sdc
> 
Yes. And it will be very slow, since you're going to read out the whole
disk.

But I don't really think you would get a hit, judging from the current
results.

Thanks,
Qu





Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread evan d
> >> # dd if=/dev/sdb bs=1M of=last_chance.raw count=128 skip=256M
> >> # grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" last_chance.raw

grep returns no result on either drive

 If still no hit, you could try just running the grep command on the disk.

like so?:
grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" /dev/sdc


Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread Qu Wenruo


On 2018/10/7 下午4:09, evan d wrote:
>> If the first 128M doesn't hit, I highly suspect something stranger happened.
> 
> Not sure I follow, do you mean if it doesn't hit then it's likely
> something else went wrong?

Yes.

If it's just a simple offset, it should hit.
If it's some simple corruption like bit rot, it shouldn't cause all
super blocks to be corrupted so seriously.

> 
> 
>> I'm considering something like encryption.
>> Maybe the disk is already encrypted by hardware?
> 
> The drives were never encrypted, none of my drives are
> 
> 
>> Windows is just a black box, I have no idea what a Windows could do to a
>> disk.
>>
>> But it doesn't explain why the 2nd super block can't be located, unless
>> Windows wiped more data than the first 128M.
> 
> I'm thinking Windows may have tried to convert them to Dynamic disk on
> detecting them and assuming they're empty.

If that's the case, and sharing with Windows can't be avoided, then next
time please use a partition table (GPT/MBR) rather than raw disks.

> 
>>
>> If the disk is larger than 256G, would you please try to locate the last
>> possible super at 256G?
> 
> They're both 6TB drives
> 
>>
>> # dd if=/dev/sdb bs=1M of=last_chance.raw count=128 skip=256M
>> # grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" last_chance.raw
> 
> dd returns:
> dd: /dev/sdb: cannot skip: Invalid argument

My fault, skip should be 256K not 256M.
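The confusion here is that dd counts skip= in units of bs, not bytes; with bs=1M, the superblock mirror at 256 GiB is 256 Ki blocks in, which is the correction above in numbers:

```python
# dd's skip= is in units of bs.  With bs=1M, reaching the third btrfs
# superblock mirror at 256 GiB needs skip=262144, i.e. "256K" in dd's
# suffix notation, not "256M".
bs = 1 << 20                 # 1 MiB
superblock3 = 256 << 30      # byte offset of the third superblock copy
print(superblock3 // bs)     # prints 262144 (= 256 * 1024)
```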

Thanks,
Qu

> 





Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread evan d
> If the first 128M doesn't hit, I highly suspect something stranger happened.

Not sure I follow, do you mean if it doesn't hit then it's likely
something else went wrong?


> I'm considering something like encryption.
> Maybe the disk is already encrypted by hardware?

The drives were never encrypted, none of my drives are


> Windows is just a black box, I have no idea what a Windows could do to a
> disk.
>
> But it doesn't explain why the 2nd super block can't be located, unless
> Windows wiped more data than the first 128M.

I'm thinking Windows may have tried to convert them to Dynamic disk on
detecting them and assuming they're empty.

>
> If the disk is larger than 256G, would you please try to locate the last
> possible super at 256G?

They're both 6TB drives

>
> # dd if=/dev/sdb bs=1M of=last_chance.raw count=128 skip=256M
> # grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" last_chance.raw

dd returns:
dd: /dev/sdb: cannot skip: Invalid argument


Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread Qu Wenruo


On 2018/10/7 下午2:47, evan d wrote:
>> None of your super blocks has correct magic.
> 
> 
> I take it this applies to both drives?

Yes, both drives have something wrong.

> 
> 
> 
>> This means either your whole disk got corrupted, or something introduced
>> some offset.
>>
>> Please try the following commands to dump more data around super blocks,
>> so we could be able to find the possible offset:
>>
>> # dd if=/dev/sdb of=possible_sb_range.raw bs=1M count=128
>> # grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" possible_sb_range.
>>
>> For a valid btrfs without any offset, the result should look like:
>> 65600:_BHRfS_M
>> 67108928:_BHRfS_M
> 
> # dd if=/dev/sdc of=possible_sb_range.sdc.raw bs=1M count=128
> 128+0 records in
> 128+0 records out
> 134217728 bytes (134 MB, 128 MiB) copied, 0.737479 s, 182 MB/s
> 
> # dd if=/dev/sdb of=possible_sb_range.sdb.raw bs=1M count=128
> 128+0 records in
> 128+0 records out
> 134217728 bytes (134 MB, 128 MiB) copied, 0.726327 s, 185 MB/s
> 
> # grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" possible_sb_range.sdb.raw
> # grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" possible_sb_range.sdc.raw
> 
> Both return nothing.

Then this is not good at all.

The super blocks should be at 64K and 64M.
If the first 128M doesn't hit, I highly suspect something stranger happened.

> 
> Both drives pass S.M.A.R.T. testing so I'd have to think the
> corruption stems from some kind of offset rather than random
> corruption.

Corruption shouldn't happen like this.

And an offset shouldn't be so large as to shift the whole 128M range.

I'm considering something like encryption.
Maybe the disk is already encrypted by hardware?

>  They may have accidentally been inserted into a Windows
> machine (but not partitioned or formatted).  Could this be a likely
> cause?

Windows is just a black box, I have no idea what a Windows could do to a
disk.

But it doesn't explain why the 2nd super block can't be located, unless
Windows wiped more data than the first 128M.

If the disk is larger than 256G, would you please try to locate the last
possible super at 256G?

# dd if=/dev/sdb bs=1M of=last_chance.raw count=128 skip=256M
# grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" last_chance.raw

If still no hit, you could try just running the grep command on the disk.
It would take a long, long time to read all data from the disk.

If still no hit, it's definitely not some simple offset.

Thanks,
Qu
> 





Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread evan d
> None of your super blocks has correct magic.


I take it this applies to both drives?



> This means either your whole disk got corrupted, or something introduced
> some offset.
>
> Please try the following commands to dump more data around super blocks,
> so we could be able to find the possible offset:
>
> # dd if=/dev/sdb of=possible_sb_range.raw bs=1M count=128
> # grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" possible_sb_range.
>
> For a valid btrfs without any offset, the result should look like:
> 65600:_BHRfS_M
> 67108928:_BHRfS_M

# dd if=/dev/sdc of=possible_sb_range.sdc.raw bs=1M count=128
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.737479 s, 182 MB/s

# dd if=/dev/sdb of=possible_sb_range.sdb.raw bs=1M count=128
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.726327 s, 185 MB/s

# grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" possible_sb_range.sdb.raw
# grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" possible_sb_range.sdc.raw

Both return nothing.

Both drives pass S.M.A.R.T. testing so I'd have to think the
corruption stems from some kind of offset rather than random
corruption.  They may have accidentally been inserted into a Windows
machine (but not partitioned or formatted).  Could this be a likely
cause?


Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread Qu Wenruo


On 2018/10/7 下午2:10, evan d wrote:
>> Please try "btrfs ins dump-super -fFa" on these two disks.
>>
>> If it's only the primary superblock corrupted, the backup should be good.
>>
>> If backup is also corrupted, either it has some offset or the whole data
>> is corrupted.
> 
> # btrfs ins dump-super -fFa /dev/sdb
> superblock: bytenr=65536, device=/dev/sdb
> -
> csum_type 0 (crc32c)
> csum_size 4
> csum 0x [DON'T MATCH]
> bytenr 0
> flags 0x0
> magic  [DON'T MATCH]
[snip]
> superblock: bytenr=274877906944, device=/dev/sdb
> -
> csum_type 26294 (INVALID)
> csum_size 32
> csum 0x05401fa3a3e8cd64075ce9fdbb9e60a01d58061a3cff3bc0235d18912ab755a2
> [UNKNOWN CSUM TYPE OR SIZE]
> bytenr 7401042280310172376
> flags 0x5a8a759265673a05
> ( WRITTEN |
>   METADUMP |
>   unknown flag: 0x5a8a759065673a04 )
> magic .~...6.. [DON'T MATCH]
[snip]

None of your super blocks has correct magic.

This means either your whole disk got corrupted, or something introduced
some offset.

Please try the following commands to dump more data around super blocks,
so we could be able to find the possible offset:

# dd if=/dev/sdb of=possible_sb_range.raw bs=1M count=128
# grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" possible_sb_range.

For a valid btrfs without any offset, the result should look like:
65600:_BHRfS_M
67108928:_BHRfS_M

If your result doesn't look like this but still has two similar hits,
then you could calculate the offset, and use dm-linear to remap the disk
and try to recover the fs.
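The offset calculation Qu describes can be sketched like this (a self-contained toy: a fake in-memory image with both magics shifted by 1 MiB stands in for the dd dump). The magic sits 64 bytes into each superblock, so a constant difference between the grep hits and the expected offsets is the shift to undo:

```python
# Find "_BHRfS_M" hits in an image and derive a constant shift, if any,
# against the expected offsets 65536+64 and 67108864+64.
MAGIC = b'_BHRfS_M'
EXPECTED = [65536 + 64, 67108864 + 64]  # magic is 64 bytes into each super

def find_magic(image):
    hits, pos = [], image.find(MAGIC)
    while pos != -1:
        hits.append(pos)
        pos = image.find(MAGIC, pos + 1)
    return hits

def common_shift(hits):
    """Return the single shift explaining all hits, or None."""
    if len(hits) != len(EXPECTED):
        return None
    shifts = {hit - want for hit, want in zip(hits, EXPECTED)}
    return shifts.pop() if len(shifts) == 1 else None

# Toy 96 MiB "disk" with both superblock magics shifted forward by 1 MiB.
image = bytearray(96 << 20)
for off in EXPECTED:
    image[off + (1 << 20): off + (1 << 20) + len(MAGIC)] = MAGIC
hits = find_magic(bytes(image))
print(hits, '-> shift', common_shift(hits))  # [1114176, 68157504] -> shift 1048576
```

With a real shift found this way, the dm-linear table would map the device starting at shift/512 sectors (assuming the shift is a multiple of the sector size).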

Thanks,
Qu





Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread evan d
> Please try "btrfs ins dump-super -fFa" on these two disks.
>
> If it's only the primary superblock corrupted, the backup should be good.
>
> If backup is also corrupted, either it has some offset or the whole data
> is corrupted.

# btrfs ins dump-super -fFa /dev/sdb
superblock: bytenr=65536, device=/dev/sdb
-
csum_type 0 (crc32c)
csum_size 4
csum 0x [DON'T MATCH]
bytenr 0
flags 0x0
magic  [DON'T MATCH]
fsid ----
label
generation 0
root 0
sys_array_size 0
chunk_root_generation 0
root_level 0
chunk_root 0
chunk_root_level 0
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 0
bytes_used 0
sectorsize 0
nodesize 0
leafsize (deprecated) 0
stripesize 0
root_dir 0
num_devices 0
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x0
cache_generation 0
uuid_tree_generation 0
dev_item.uuid ----
dev_item.fsid ---- [match]
dev_item.type 0
dev_item.total_bytes 0
dev_item.bytes_used 0
dev_item.io_align 0
dev_item.io_width 0
dev_item.sector_size 0
dev_item.devid 0
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0
sys_chunk_array[2048]:
backup_roots[4]:

superblock: bytenr=67108864, device=/dev/sdb
-
csum_type 29777 (INVALID)
csum_size 32
csum 0xdfa25558dc616d7f36b287aa0081da91b78ec4910ba082b8d807708dcf608c91
[UNKNOWN CSUM TYPE OR SIZE]
bytenr 18068171547813619898
flags 0xbaca4264c914fdc
( METADUMP |
  METADUMP_V2 |
  unknown flag: 0xbaca4204c914fdc )
magic Y...FE./ [DON'T MATCH]
fsid 870d0bb4-65de-453b-2592-88b2cda7c38b
label 
>.<...@...B%...<.N..S.*.>^.p.x.v..o..<).u..B}./...B..3:8..&}-k...o6..Y..R..'o..G.lG.-...$'S!.h.-...".*.S..xf..@d...W..*..w..V.)..q..|.pgt.L...z.i...(..3Ar...;B.j.]:I.X...=x.(u.<..2U..0
.n..^}...^..Y.a#.X...G..d..
generation 2272432616352171010
root 6354703493070726827
sys_array_size 2955003080
chunk_root_generation 6888771666932949452
root_level 123
chunk_root 1494381076279922822
chunk_root_level 127
log_root 2134514170013133221
log_root_transid 9424889336279151653
log_root_level 153
total_bytes 2681859058010843869
bytes_used 16135710043447639801
sectorsize 1588161000
nodesize 3303803296
leafsize (deprecated) 1829755703
stripesize 606246139
root_dir 1342036912498050847
num_devices 17153780124435501376
compat_flags 0xf49b2c3a96602d1e
compat_ro_flags 0x8bec2f0d81d2ef13
( FREE_SPACE_TREE |
  FREE_SPACE_TREE_VALID |
  unknown flag: 0x8bec2f0d81d2ef10 )
incompat_flags 0x1af3f95d789a706e
( DEFAULT_SUBVOL |
  MIXED_GROUPS |
  COMPRESS_LZO |
  BIG_METADATA |
  EXTENDED_IREF |
  unknown flag: 0x1af3f95d789a7000 )
cache_generation 733155394646473
uuid_tree_generation 6538806993670512709
dev_item.uuid c1494bde-9546-5ef1-5d08-e259707d06d3
dev_item.fsid 584a42f5-4f79-58cc-3f5e-bdd9733b94d3 [DON'T MATCH]
dev_item.type 15668929679594826040
dev_item.total_bytes 1644623768791993611
dev_item.bytes_used 13437937382560806996
dev_item.io_align 449600477
dev_item.io_width 3390996026
dev_item.sector_size 2580293710
dev_item.devid 8727228223548828735
dev_item.dev_group 3143108434
dev_item.seek_speed 99
dev_item.bandwidth 34
dev_item.generation 15553054142077970558
sys_chunk_array[2048]:
ERROR: sys_array_size 2955003080 shouldn't exceed 2048 bytes
backup_roots[4]:
backup 0:
backup_tree_root: 15696464816120474628 gen: 16022257131800929882 level: 86
backup_chunk_root: 301766055038840723 gen: 16810756911197712753 level: 114
backup_extent_root: 6651488794176833875 gen: 10776594467847637718 level: 60
backup_fs_root: 1826104903792017114 gen: 3329223824114931446 level: 159
backup_dev_root: 11721506207622158585 gen: 10455859429120851009 level: 198
backup_csum_root: 5686172936498246011 gen: 5319088827707453088 level: 168
backup_total_bytes: 7660601670332883006
backup_bytes_used: 3723313713264611767
backup_num_devices: 1913069816786984281

backup 1:
backup_tree_root: 9129110729577497674 gen: 17870813394716935947 level: 45
backup_chunk_root: 3169491491968745044 gen: 17348548561480407615 level: 93
backup_extent_root: 5781159137873655776 gen: 3348348558872496210 level: 123
backup_fs_root: 66326293521237128 gen: 16098559310782853786 level: 157
backup_dev_root: 10010186234695580749 gen: 5427645709246451749 level: 214
backup_csum_root: 3481616510852897026 gen: 3557794445033232028 level: 233
backup_total_bytes: 14437518517144737363
backup_bytes_used: 17463569409584801738
backup_num_devices: 3997309709667846939

backup 2:
backup_tree_root: 10726132636215357139 gen: 12411364641008183307 level: 88
backup_chunk_root: 15701704192942804973 gen: 12075216484835399161 level: 115
backup_extent_root: 2480766121519302854 gen: 14058965640461957484 level: 149
backup_fs_root: 9763730962528489871 gen: 7584542795525942005 level: 178
backup_dev_root: 11165845436133270817 gen: 13967062440412348994 level: 236
backup_csum_root: 

Re: Two partitionless BTRFS drives no longer seen as containing BTRFS filesystem

2018-10-07 Thread evan d
> Did you try a btrfs device scan  ?

Tried it, it returns nothing.