Re: btrfs-find-root duration?

2016-12-11 Thread Markus Binsteiner
On Mon, Dec 12, 2016 at 5:12 PM, Chris Murphy  wrote:
> Another idea is btrfs-find-root -a. This is slow for me, it took about
> a minute for less than 1GiB of metadata. But I've got over 50
> candidate tree roots and generations.

Same behaviour with the newer version of btrfs-find-root: it just hangs.
The older one doesn't seem to have that option.
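For reference, a quick way to confirm which btrfs-progs version each
environment (host and chroot) is running:

# btrfs --version

The -a option for btrfs-find-root only appears in the newer releases.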


Re: btrfs-find-root duration?

2016-12-11 Thread Markus Binsteiner
Ok, some news. I chrooted into the old OS-root (Jessie), and lo and
behold, the old-version btrfs-find-root seemed to work:

# btrfs-find-root /dev/mapper/think--big-home
Super think's the tree root is at 138821632, chunk root 21020672
Well block 4194304 seems great, but generation doesn't match, have=2,
want=593818 level 0
Well block 4243456 seems great, but generation doesn't match, have=3,
want=593818 level 0
Well block 29360128 seems great, but generation doesn't match,
have=593817, want=593818 level 1
Well block 29425664 seems great, but generation doesn't match,
have=593817, want=593818 level 0
Well block 29507584 seems great, but generation doesn't match,
have=593817, want=593818 level 0
Well block 29523968 seems great, but generation doesn't match,
have=593817, want=593818 level 0
Found tree root at 138821632 gen 593818 level 0

That's it though. Shouldn't there be many more lines for a filesystem that old?

I've tried btrfs restore for a few of those (using both the old and new
btrfs restore versions); it doesn't crash for the older generations:

# btrfs-find-root /dev/mapper/think--big-home
Super think's the tree root is at 138821632, chunk root 21020672
Well block 4194304 seems great, but generation doesn't match, have=2,
want=593818 level 0
Well block 4243456 seems great, but generation doesn't match, have=3,
want=593818 level 0
Well block 29360128 seems great, but generation doesn't match,
have=593817, want=593818 level 1
Well block 29425664 seems great, but generation doesn't match,
have=593817, want=593818 level 0
Well block 29507584 seems great, but generation doesn't match,
have=593817, want=593818 level 0
Well block 29523968 seems great, but generation doesn't match,
have=593817, want=593818 level 0
Found tree root at 138821632 gen 593818 level 0
root@think:/root# btrfs restore /dev/mapper/think--big-home /tmp -D -v -i -t 4243456
parent transid verify failed on 4243456 wanted 593818 found 3
parent transid verify failed on 4243456 wanted 593818 found 3
Ignoring transid failure
This is a dry-run, no files are going to be restored
Reached the end of the tree searching the directory

But it crashes for 29360128 and 29425664:

# btrfs restore /dev/mapper/think--big-home /tmp -D -v -i -t 29360128
parent transid verify failed on 29360128 wanted 593818 found 593817
parent transid verify failed on 29360128 wanted 593818 found 593817
parent transid verify failed on 29360128 wanted 593818 found 593817
parent transid verify failed on 29360128 wanted 593818 found 593817
Ignoring transid failure
volumes.c:1554: btrfs_chunk_readonly: Assertion `!ce` failed.
btrfs[0x435f1e]
btrfs[0x435f42]
btrfs[0x4384f9]
btrfs[0x42f44a]
btrfs[0x42ab64]
btrfs[0x42aec5]
btrfs[0x42af74]
btrfs[0x41e7d9]
btrfs[0x40b46a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f618e103b45]
btrfs[0x40b497]

and fails for 29507584 and 29523968:

# btrfs restore /dev/mapper/think--big-home /tmp -D -v -i -t 29507584
parent transid verify failed on 29507584 wanted 593818 found 593817
parent transid verify failed on 29507584 wanted 593818 found 593817
parent transid verify failed on 29507584 wanted 593818 found 593817
parent transid verify failed on 29507584 wanted 593818 found 593817
Ignoring transid failure
Couldn't setup extent tree
Couldn't setup device tree
Could not open root, trying backup super
parent transid verify failed on 29507584 wanted 593818 found 593817
parent transid verify failed on 29507584 wanted 593818 found 593817
parent transid verify failed on 29507584 wanted 593818 found 593817
parent transid verify failed on 29507584 wanted 593818 found 593817
Ignoring transid failure
Couldn't setup extent tree
Couldn't setup device tree
Could not open root, trying backup super
No valid Btrfs found on /dev/mapper/think--big-home
Could not open root, trying backup super


> The old chunk root and tree root might come in handy, from backup0,
> which is the generation prior to the one currently set in the super
> blocks. The bytes_used vs dev_item.bytes_used is also interesting,
> huge difference. So it looks like this was umounted very soon after
> you realized what you did.

Yes, did that as soon as I realized something was amiss. To be honest,
I'm not even 100% sure anymore what exactly I did. I just assumed I
had deleted my home directory, because I was deleting a directory a minute
or so before everything fell to pieces and my home directory was gone.
Also, it wouldn't be totally unlike me to have done that :-)


> What do you get for
>
> btrfs restore -l <device>
>
> Maybe just try this, which is the oldest generation we know about
> right now, which is found in backup root 2.
>
> sudo btrfs restore -D -v -t 138788864 /dev/mapper/brick1 .

# btrfs restore -D -v -t 138788864 /dev/mapper/think--big-home  .
checksum verify failed on 138788864 found 2E842E3E wanted 9D4DED61
checksum verify failed on 138788864 found 2E842E3E wanted 9D4DED61
checksum verify failed on 138788864 found 62A944D5 wanted 0C5CE041
checksum verify failed on 138788864 found 

Re: How to get back a deleted sub-volume.

2016-12-11 Thread Chris Murphy
Tomasz - try using 'btrfs-find-root -a <device>'. I totally forgot about
this option. It goes through the extent tree and might have a chance
of finding additional generations that aren't otherwise being found.
You can then plug those tree roots into 'btrfs restore -t <bytenr>'
and do it with the -D and -v options so it's a verbose dry run, and
see if the file listing it spits out is at all useful - if it has any
of the data you're looking for.
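A sketch of that sequence, with a hypothetical bytenr standing in for one
of the candidates find-root reports (device path as in the rest of this
thread):

# btrfs-find-root -a /dev/sda
# btrfs restore -D -v -t <bytenr> /dev/sda /mnt/recovery

With -D nothing is written, so the destination (here a hypothetical
/mnt/recovery) doesn't matter for the dry run.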


Chris Murphy


Re: btrfs-find-root duration?

2016-12-11 Thread Chris Murphy
Another idea is btrfs-find-root -a. This is slow for me, it took about
a minute for less than 1GiB of metadata. But I've got over 50
candidate tree roots and generations.

But still you can try the tree root for the oldest generation in your
full superblock listing, like I described. If that restore dry run is
an empty listing, or partial, then try the btrfs-find-root -a option
to get more candidate generations. For the most part each lower
generation number is 30 seconds older. So if you can think about when
you umounted the file system in relation to when the delete happened,
you can get closer to the most recent generation that still has your
data.
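A minimal sketch of pulling candidate tree roots straight out of the
superblock backups, assuming the dump-s spelling used elsewhere in this
thread:

# btrfs ins dump-s -fa <device> | grep backup_tree_root
# btrfs restore -D -v -t <backup_tree_root> <device> .

Each backup root carries its own gen: value, which is what you compare
against the time of the delete.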


Chris Murphy


Re: btrfs-find-root duration?

2016-12-11 Thread Chris Murphy
I don't know, maybe. This is not a new file system, clearly; it has
half a million+ generations.

backup_roots[4]:
backup 0:
backup_tree_root:   29360128    gen: 593817 level: 1
backup_chunk_root:  20971520    gen: 591139 level: 1
backup_extent_root: 29376512    gen: 593817 level: 1
backup_fs_root:     132562944   gen: 593815 level: 0
backup_dev_root:    29409280    gen: 593817 level: 0
backup_csum_root:   29491200    gen: 593817 level: 0
backup_total_bytes: 254007050240
backup_bytes_used:  43532288
backup_num_devices: 1


backup 1:
backup_tree_root:   138821632   gen: 593818 level: 0
backup_chunk_root:  21020672    gen: 593818 level: 0
backup_extent_root: 138805248   gen: 593818 level: 0
backup_fs_root:     132562944   gen: 593815 level: 0
backup_dev_root:    132841472   gen: 593818 level: 0
backup_csum_root:   138870784   gen: 593818 level: 0
backup_total_bytes: 254007050240
backup_bytes_used:  655360
backup_num_devices: 1


The old chunk root and tree root might come in handy, from backup0,
which is the generation prior to the one currently set in the super
blocks. The bytes_used vs dev_item.bytes_used is also interesting,
huge difference. So it looks like this was umounted very soon after
you realized what you did.

What do you get for

btrfs restore -l <device>

Maybe just try this, which is the oldest generation we know about
right now, which is found in backup root 2.

sudo btrfs restore -D -v -t 138788864 /dev/mapper/brick1 .

That's a dry run, so the path to save stuff to doesn't matter; I'm
just using . for that. But what you'll get is a file listing. If
there's nothing listed, then it's a dead end. But if it lists
everything you want, it's a win. Just point it at a proper destination
and remove -D. However, another possibility is that it's partial -
i.e. it might be a generation from somewhere in between runs of the
cleaner, so some stuff is restored but not all of it.
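For instance, once the dry run lists what you want, the real pass might
look like this (the destination directory here is hypothetical):

# mkdir -p /mnt/rescue
# btrfs restore -v -t 138788864 /dev/mapper/brick1 /mnt/rescue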

Yeah, maybe there's a way to get an older generation and root with an
older copy of btrfs-progs. I think the current one isn't working very
well... on yet another file system I have that is very new (a week old)
but has hours of writes on it:

[chris@f25s ~]$ sudo btrfs-find-root /dev/mapper/brick1
Superblock thinks the generation is 5727
Superblock thinks the level is 1
Found tree root at 426847567872 gen 5727 level 1


That's it. So no hang, but only one tree root, which is just bogus.
Even the superblock has more information than this. I'm used to seeing
dozens or more.


Chris Murphy


Re: btrfs-find-root duration?

2016-12-11 Thread Markus Binsteiner
> This is what I'd expect if the volume has only had a mkfs done and
> then mounted and umounted. No files. What do you get for
>
> btrfs ins dump-s -fa /dev/mapper/think--big-home

(attached)

Also tried btrfs check -b /dev/mapper/think--big-home, but that errored:

# btrfs check -b /dev/mapper/think--big-home
volumes.c:1588: btrfs_chunk_readonly: Assertion `!ce` failed.
btrfs(+0x50940)[0x56351dc8f940]
btrfs(+0x50967)[0x56351dc8f967]
btrfs(btrfs_chunk_readonly+0x5a)[0x56351dc91c93]
btrfs(btrfs_read_block_groups+0x1dc)[0x56351dc8749b]
btrfs(btrfs_setup_all_roots+0x33b)[0x56351dc82adc]
btrfs(+0x43ed8)[0x56351dc82ed8]
btrfs(open_ctree_fs_info+0xd5)[0x56351dc82ffd]
btrfs(cmd_check+0x457)[0x56351dc6c332]
btrfs(main+0x139)[0x56351dc4efab]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7efee8d6d3f1]
btrfs(_start+0x2a)[0x56351dc4efea]

So, you reckon going back to an older version of btrfs-progs might be
worth a shot then?
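For anyone following along, a rough sketch of the chroot route described
earlier in this thread, with assumed mount points - the point is just that
the Jessie btrfs-progs binaries run against the damaged volume:

# mount /dev/mapper/think--big-jessie /mnt
# for d in dev proc sys; do mount --bind /$d /mnt/$d; done
# chroot /mnt btrfs-find-root /dev/mapper/think--big-home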
superblock: bytenr=65536, device=/dev/mapper/think--big-home
-
csum_type   0 (crc32c)
csum_size   4
csum0xa64b343b [match]
bytenr  65536
flags   0x1
( WRITTEN )
magic   _BHRfS_M [match]
fsid7f1ce0ed-5986-43ae-b0dd-727eee19fd08
label   
generation  593818
root138821632
sys_array_size  226
chunk_root_generation   593818
root_level  0
chunk_root  21020672
chunk_root_level0
log_root0
log_root_transid0
log_root_level  0
total_bytes 254007050240
bytes_used  655360
sectorsize  4096
nodesize16384
leafsize16384
stripesize  4096
root_dir6
num_devices 1
compat_flags0x0
compat_ro_flags 0x0
incompat_flags  0x69
( MIXED_BACKREF |
  COMPRESS_LZO |
  BIG_METADATA |
  EXTENDED_IREF )
cache_generation593818
uuid_tree_generation593818
dev_item.uuid   59b96438-be86-4397-a842-1e1c6efd7479
dev_item.fsid   7f1ce0ed-5986-43ae-b0dd-727eee19fd08 [match]
dev_item.type   0
dev_item.total_bytes254007050240
dev_item.bytes_used 182066348032
dev_item.io_align   4096
dev_item.io_width   4096
dev_item.sector_size4096
dev_item.devid  1
dev_item.dev_group  0
dev_item.seek_speed 0
dev_item.bandwidth  0
dev_item.generation 0
sys_chunk_array[2048]:
item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
chunk length 4194304 owner 2 stripe_len 65536
type SYSTEM num_stripes 1
stripe 0 devid 1 offset 0
dev uuid: 59b96438-be86-4397-a842-1e1c6efd7479
item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
chunk length 8388608 owner 2 stripe_len 65536
type SYSTEM|DUP num_stripes 2
stripe 0 devid 1 offset 20971520
dev uuid: 59b96438-be86-4397-a842-1e1c6efd7479
stripe 1 devid 1 offset 29360128
dev uuid: 59b96438-be86-4397-a842-1e1c6efd7479
backup_roots[4]:
backup 0:
backup_tree_root:   29360128    gen: 593817 level: 1
backup_chunk_root:  20971520    gen: 591139 level: 1
backup_extent_root: 29376512    gen: 593817 level: 1
backup_fs_root:     132562944   gen: 593815 level: 0
backup_dev_root:    29409280    gen: 593817 level: 0
backup_csum_root:   29491200    gen: 593817 level: 0
backup_total_bytes: 254007050240
backup_bytes_used:  43532288
backup_num_devices: 1

backup 1:
backup_tree_root:   138821632   gen: 593818 level: 0
backup_chunk_root:  21020672    gen: 593818 level: 0
backup_extent_root: 138805248   gen: 593818 level: 0
backup_fs_root:     132562944   gen: 593815 level: 0
backup_dev_root:    132841472   gen: 593818 level: 0
backup_csum_root:   138870784   gen: 593818 level: 0
backup_total_bytes: 254007050240
backup_bytes_used:  655360
backup_num_devices: 1

backup 2:
backup_tree_root:   138788864   gen: 593815 level: 1
backup_chunk_root:  20971520    gen: 591139 level: 1
backup_extent_root: 132841472   gen: 593815 level: 1
backup_fs_root:  

Re: How to get back a deleted sub-volume.

2016-12-11 Thread Chris Murphy
On Sun, Dec 11, 2016 at 5:56 PM, Tomasz Kusmierz  wrote:
> Chris, for all the time you helped so far I really have to apologize:
> I've led you astray... So, the reason the subvolumes were deleted has
> nothing to do with btrfs itself. I'm using "Rockstor" to ease
> management tasks. This tool / environment / distribution treats a
> singular btrfs FS as a "pool" (something in the line of zfs :/), and when
> one removes a pool from the system it will actually go and delete the
> subvolumes from the FS before unmounting it and removing the reference to it
> from its DB (yes, a bit shit, I know). So I'm not blaming anybody here
> for the disappearing subvolumes; it's just me coming back to believing in
> mankind only to get kicked in the gonads by mankind's stupidity.
>
> ALSO, importing the fs into their "solution" actually just mounts it
> and walks the tree of subvolumes to create all the references in the
> local DB (for Rockstor of course, still nothing to do with btrfs
> functionality). To be able to "import" I had to remove the before-
> mentioned snapshots because the import script was timing out.
>
> So for a single subvolume (physically called "share") I was left with
> no snapshots (removed by me to make the import not time out), and then this
> subvolume was removed when I was trying to remove the fs (pool) from a
> running system.
>
> I've pulled both disks (2-disk raid1 fs) and I'm trying to rescue as
> much data as I can.
>
> The question is: why, suddenly, when I removed the snapshots and
> (someone else removed) the subvolume, is there such a great gap in the
> generations of the FS (over 200 generations missing), and why is the most
> recent generation that can actually be touched by btrfs restore over a
> month old?
>
> How to overcome that?

Well, it depends on how long it was from the time the snapshots were
deleted to the time the file system was unmounted. If it wasn't that
long, it might be possible to find a root from 'btrfs insp-in
dump-s -fa <device>' (or btrfs-show-super with older progs) to see if you
can use any of the backup roots. Once enough time goes by, the
no-longer-used metadata for old generations has its roots deleted, and
all of the blocks used for both metadata and data are subject to being
overwritten. So my expectation is that there's too much time between the
delete and the umount, and there's nothing in the current file system that
points to the old generations.

It might be the metadata and data still exist. It's not entirely a
shot in the dark to find it.

First, try to find the oldest chunk tree you can with btrfs-show-super
-fa (btrfs insp dump-s -fa) and look in the backup roots list for the
chunk tree:

backup_roots[4]:
backup 0:
backup_tree_root:   21037056    gen: 7  level: 0
backup_chunk_root:  147456      gen: 6  level: 0
backup_extent_root: 21053440    gen: 7  level: 0
backup_fs_root:     4194304     gen: 4  level: 0
backup_dev_root:    20987904    gen: 6  level: 0
backup_csum_root:   21069824    gen: 7  level: 0
backup_total_bytes: 268435456000
backup_bytes_used:  393216
backup_num_devices: 1

backup 1:
backup_tree_root:   21086208    gen: 8  level: 0
backup_chunk_root:  147456      gen: 6  level: 0
backup_extent_root: 21102592    gen: 8  level: 0
backup_fs_root:     4194304     gen: 4  level: 0
backup_dev_root:    20987904    gen: 6  level: 0
backup_csum_root:   21118976    gen: 8  level: 0
backup_total_bytes: 268435456000
backup_bytes_used:  393216
backup_num_devices: 1

backup 2:
backup_tree_root:   21069824    gen: 9  level: 0
backup_chunk_root:  147456      gen: 6  level: 0
backup_extent_root: 21053440    gen: 9  level: 0
backup_fs_root:     21004288    gen: 9  level: 0
backup_dev_root:    21135360    gen: 9  level: 0
backup_csum_root:   21037056    gen: 9  level: 0
backup_total_bytes: 268435456000
backup_bytes_used:  487424
backup_num_devices: 1

backup 3:
backup_tree_root:   21151744    gen: 10 level: 0
backup_chunk_root:  147456      gen: 6  level: 0
backup_extent_root: 21168128    gen: 10 level: 0
backup_fs_root:     21004288    gen: 9  level: 0
backup_dev_root:    21135360    gen: 9  level: 0
backup_csum_root:   21184512    gen: 10 level: 0
backup_total_bytes: 268435456000
backup_bytes_used:  487424
backup_num_devices: 1

In this case, all of the chunk roots are the same generation - so it's
no help really.
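When the backup listing is long, a quick filter makes the comparison
easier; a sketch, assuming the dump-s output format shown above:

# btrfs insp dump-s -fa <device> | grep backup_chunk_root

Identical gen: values across all four backups, as here, mean no older
chunk root is reachable this way.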

Second, list the chunk tree using either -t 3 for the current one, or
you can plug in the bytenr for an older backup chunk root if
available.

[root@f25h ~]# btrfs-debug-tree -t 3 /dev/mapper/vg-test2
btrfs-progs v4.8.5
chunk tree
leaf 147456 items 5 free space 15740 generation 
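If one of the backups had pointed at an older chunk root, a hypothetical
variant - assuming the -b option of btrfs-debug-tree, which prints a
single block - would be:

# btrfs-debug-tree -b <backup_chunk_root bytenr> /dev/mapper/vg-test2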

Re: btrfs-find-root duration?

2016-12-11 Thread Chris Murphy
On Sun, Dec 11, 2016 at 5:12 PM, Markus Binsteiner  wrote:
>> You might try 'btrfs check' without repairing, using a recent version
>> of btrfs-progs and see if it finds anything unusual.
>
> Not quite sure what that output means, but btrfs check returns instantly:
>
> $ sudo btrfs check /dev/mapper/think--big-home
> Checking filesystem on /dev/mapper/think--big-home
> UUID: 7f1ce0ed-5986-43ae-b0dd-727eee19fd08
> checking extents
> checking free space cache
> checking fs roots
> checking csums
> checking root refs
> found 655360 bytes used err is 0
> total csum bytes: 0
> total tree bytes: 131072
> total fs tree bytes: 32768
> total extent tree bytes: 16384
> btree space waste bytes: 123528
> file data blocks allocated: 524288
>  referenced 524288

This is what I'd expect if the volume has only had a mkfs done and
then mounted and umounted. No files. What do you get for

btrfs ins dump-s -fa /dev/mapper/think--big-home

>
> When I do the same thing on the root-OS partition that is on the same
> disk, it takes a bit longer to complete, and I'm getting:
>
> $ sudo btrfs check /dev/mapper/think--big-jessie
> Checking filesystem on /dev/mapper/think--big-jessie
> UUID: d82e2746-4164-41fa-b528-5a5838d818c6
> checking extents
> checking free space cache
> checking fs roots
> checking csums
> checking root refs
> found 29295181824 bytes used err is 0
> total csum bytes: 27623652
> total tree bytes: 973848576
> total fs tree bytes: 874446848
> total extent tree bytes: 62554112
> btree space waste bytes: 154131992
> file data blocks allocated: 29202391040
>  referenced 30898016256
>
> So I reckon the tree(s) in my home partition is just empty? Why would
> btrfs-find-root take so long to complete though?

Looks like a bug. I'm not sure when this regression began, but I see
it with btrfs-progs-4.8.5-1.fc26.x86_64 on a new file system with no
files, and also on one volume that has a single file. But it's not hanging
on older file systems.

# btrfs-find-root /dev/mapper/vg-test2
Superblock thinks the generation is 8
Superblock thinks the level is 0

Indefinite hang.



> Also, I've tried to
> run btrfs-find-root on the root-OS partition, but that also didn't
> complete within the 10 minutes I tried (so far).

Yeah, same here, but unlike your case it completes fast for older file
systems with a decent amount of data on them. I'm not sure what the
pattern is that results in the hang. Unfortunately strace is not
revealing, I think - attached anyway.



-- 
Chris Murphy
[root@f25h ~]# strace btrfs-find-root /dev/mapper/vg-test2
execve("/sbin/btrfs-find-root", ["btrfs-find-root", "/dev/mapper/vg-test2"], [/* 31 vars */]) = 0
brk(NULL) = 0x5571d01f7000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f389a106000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=99038, ...}) = 0
mmap(NULL, 99038, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f389a0ed000
close(3) = 0
open("/lib64/libuuid.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\24\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=19528, ...}) = 0
mmap(NULL, 2113552, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f3899cde000
mprotect(0x7f3899ce2000, 2093056, PROT_NONE) = 0
mmap(0x7f3899ee1000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f3899ee1000
mmap(0x7f3899ee2000, 16, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f3899ee2000
close(3) = 0
open("/lib64/libblkid.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\206\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=275520, ...}) = 0
mmap(NULL, 2368448, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f3899a9b000
mprotect(0x7f3899ad8000, 2097152, PROT_NONE) = 0
mmap(0x7f3899cd8000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3d000) = 0x7f3899cd8000
mmap(0x7f3899cdd000, 960, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f3899cdd000
close(3) = 0
open("/lib64/libz.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0` \0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=89520, ...}) = 0
mmap(NULL, 2183272, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f3899885000
mprotect(0x7f389989a000, 2093056, PROT_NONE) = 0
mmap(0x7f3899a99000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x14000) = 0x7f3899a99000
close(3) = 0
open("/lib64/liblzo2.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@%\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, 

Re: How to get back a deleted sub-volume.

2016-12-11 Thread Tomasz Kusmierz
Chris, for all the time you helped so far I really have to apologize:
I've led you astray... So, the reason the subvolumes were deleted has
nothing to do with btrfs itself. I'm using "Rockstor" to ease
management tasks. This tool / environment / distribution treats a
singular btrfs FS as a "pool" (something in the line of zfs :/), and when
one removes a pool from the system it will actually go and delete the
subvolumes from the FS before unmounting it and removing the reference to it
from its DB (yes, a bit shit, I know). So I'm not blaming anybody here
for the disappearing subvolumes; it's just me coming back to believing in
mankind only to get kicked in the gonads by mankind's stupidity.

ALSO, importing the fs into their "solution" actually just mounts it
and walks the tree of subvolumes to create all the references in the
local DB (for Rockstor of course, still nothing to do with btrfs
functionality). To be able to "import" I had to remove the before-
mentioned snapshots because the import script was timing out.

So for a single subvolume (physically called "share") I was left with
no snapshots (removed by me to make the import not time out), and then this
subvolume was removed when I was trying to remove the fs (pool) from a
running system.

I've pulled both disks (2-disk raid1 fs) and I'm trying to rescue as
much data as I can.

The question is: why, suddenly, when I removed the snapshots and
(someone else removed) the subvolume, is there such a great gap in the
generations of the FS (over 200 generations missing), and why is the most
recent generation that can actually be touched by btrfs restore over a
month old?

How to overcome that?



On 11 December 2016 at 19:00, Chris Murphy  wrote:
> On Sun, Dec 11, 2016 at 10:40 AM, Tomasz Kusmierz
>  wrote:
>> Hi,
>>
>> So, I've found myself in a pickle after the following steps:
>> 1. Trying to migrate an array to a different system, it became apparent
>> that importing the array there was not possible because I've
>> had a very large amount of snapshots (every 15 minutes during office
>> hours, amounting to a few K), so I've had to remove snapshots for the main
>> data storage.
>
> True, there is no recursive incremental send.
>
>> 2. While playing with the live array, it became apparent that some bright
>> spark implemented "delete all sub-volumes while removing an array from
>> the system" ... needless to say this behaviour is unexpected, to say the
>> least ... and I wanted to punch somebody in the face.
>
> The technical part of this is vague. I'm guessing you used 'btrfs
> device remove', but it works no differently than lvremove - when a
> device is removed from an array, it wipes the signature from that
> device.  You probably can restore that signature and use that device
> again; depending on what the profile is for metadata and data, it may
> be usable stand-alone.
>
> Proposing assault is probably not the best way to ask for advice
> though. Just a guess.
>
>
>
>
>>
>> Since then I was trying to rescue as much data as I can, luckily I
>> managed to get a lot of data from snapshots for "other than share"
>> volumes (because those were not deleted :/) but the most important
>> volume "share" prove difficult. This subvolume comes out with a lot of
>> errors on readout with "btrfs restore /dev/sda /mnt2/temp2/ -x -m -S
>> -s -i -t".
>>
>> Also for some reason I can't use a lot of root blocks that I find with
>> btrfs-find-root ..
>>
>> To put some detail here:
>> btrfs-find-root -a /dev/sda
>> Superblock thinks the generation is 184540
>> Superblock thinks the level is 1
>> Well block 919363862528(gen: 184540 level: 1) seems good, and it
>> matches superblock
>> Well block 919356325888(gen: 184539 level: 1) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 919343529984(gen: 184538 level: 1) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 920041308160(gen: 184537 level: 1) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 919941955584(gen: 184536 level: 1) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 919670538240(gen: 184535 level: 1) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 920045371392(gen: 184532 level: 1) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 920070209536(gen: 184531 level: 1) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 920117510144(gen: 184530 level: 1) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1 <<< here
>> stuff is gone
>> Well block 920139055104(gen: 184511 level: 0) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 920139022336(gen: 184511 level: 0) seems good, but
>> generation/level doesn't match, want gen: 184540 level: 1
>> Well block 920138989568(gen: 184511 

Re: btrfs-find-root duration?

2016-12-11 Thread Markus Binsteiner
> You might try 'btrfs check' without repairing, using a recent version
> of btrfs-progs and see if it finds anything unusual.

Not quite sure what that output means, but btrfs check returns instantly:

$ sudo btrfs check /dev/mapper/think--big-home
Checking filesystem on /dev/mapper/think--big-home
UUID: 7f1ce0ed-5986-43ae-b0dd-727eee19fd08
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
found 655360 bytes used err is 0
total csum bytes: 0
total tree bytes: 131072
total fs tree bytes: 32768
total extent tree bytes: 16384
btree space waste bytes: 123528
file data blocks allocated: 524288
 referenced 524288

When I do the same thing on the root-OS partition that is on the same
disk, it takes a bit longer to complete, and I'm getting:

$ sudo btrfs check /dev/mapper/think--big-jessie
Checking filesystem on /dev/mapper/think--big-jessie
UUID: d82e2746-4164-41fa-b528-5a5838d818c6
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
found 29295181824 bytes used err is 0
total csum bytes: 27623652
total tree bytes: 973848576
total fs tree bytes: 874446848
total extent tree bytes: 62554112
btree space waste bytes: 154131992
file data blocks allocated: 29202391040
 referenced 30898016256

So I reckon the tree(s) in my home partition is just empty? Why would
btrfs-find-root take so long to complete though? Also, I've tried to
run btrfs-find-root on the root-OS partition, but that also didn't
complete within the 10 minutes I tried (so far).

> Although, are there many snapshots? That would cause the retention of roots.

I can't remember exactly, but it's very possible I didn't use any
snapshots on this machine at all.


Re: btrfs-find-root duration?

2016-12-11 Thread Chris Murphy
On Sun, Dec 11, 2016 at 4:30 PM, Markus Binsteiner  wrote:
>> OK when I do it on a file system with just 14GiB of metadata it's
>> maybe 15 seconds. So a few minutes sounds sorta suspicious to me but,
>> *shrug* I don't have a file system the same size to try it on, maybe
>> it's a memory intensive task and once the system gets low on RAM while
>> traversing the file system it slows down a ton.
>
> Ok, thanks, looks like there is some other issue then as well. The
> process doesn't take up any memory at all, just 100% of one core.
>
> Maybe I'll try to use it with an older version of btrfs-progs, from
> Debian Jessie. Don't think it'll make any difference, but I don't know
> what else to try. At this point I'm more curious than anything else.
> I've got backups for most of my stuff, just a few rogue scripts I'd
> have to re-write. Still, would be nice to get those back.

You might try 'btrfs check' without repairing, using a recent version
of btrfs-progs and see if it finds anything unusual.

Although, are there many snapshots? That would cause the retention of roots.
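If the volume will still mount, a sketch of checking that (the mount
point here is hypothetical):

# btrfs subvolume list -s /mnt

The -s flag restricts the listing to snapshot subvolumes.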


-- 
Chris Murphy


Re: btrfs-find-root duration?

2016-12-11 Thread Markus Binsteiner
> OK when I do it on a file system with just 14GiB of metadata it's
> maybe 15 seconds. So a few minutes sounds sorta suspicious to me but,
> *shrug* I don't have a file system the same size to try it on, maybe
> it's a memory intensive task and once the system gets low on RAM while
> traversing the file system it slows down a ton.

Ok, thanks, looks like there is some other issue then as well. The
process doesn't take up any memory at all, just 100% of one core.

Maybe I'll try to use it with an older version of btrfs-progs, from
Debian Jessie. Don't think it'll make any difference, but I don't know
what else to try. At this point I'm more curious than anything else.
I've got backups for most of my stuff, just a few rogue scripts I'd
have to re-write. Still, would be nice to get those back.


Re: btrfs-find-root duration?

2016-12-11 Thread Chris Murphy
Yes. Command and device only.

>
> I tried that initially, but it ran for a few hours with no output
> besides the initial 'Superblock...'.

OK when I do it on a file system with just 14GiB of metadata it's
maybe 15 seconds. So a few minutes sounds sorta suspicious to me but,
*shrug* I don't have a file system the same size to try it on, maybe
it's a memory intensive task and once the system gets low on RAM while
traversing the file system it slows down a ton.



-- 
Chris Murphy


Re: btrfs-find-root duration?

2016-12-11 Thread Markus Binsteiner
I reckon it took me about 5 minutes to realise what I'd done, then I
unmounted the volume. I don't think I wrote anything in between, but
there were a few applications open at that time, so there might have
been some i/o.

When you say 'by itself', you mean without the '-o 5'?

I tried that initially, but it ran for a few hours with no output
besides the initial 'Superblock...'. I realized I had forgotten to
redirect stdout, which I thought would be a good idea, so I restarted it
with '-o 5' (I found some advice saying that's the thing to do if it was
the root subvolume that was deleted).

Anyway, I've restarted it without '-o 5'; let's see whether it makes any
difference. Is there any indication of how long it should take? Just
roughly - hours, days? I've got about 150GB of data on that partition, I
think. Also, is there supposed to be incremental output, or will it be
one big wall of text once it's finished?

As I said, when I tried before there was no output at all for hours,
which seemed a bit strange.

Thanks for your help!

On Sun, Dec 11, 2016 at 6:47 PM, Chris Murphy  wrote:
> On Sat, Dec 10, 2016 at 5:12 PM, Markus Binsteiner  wrote:
>> It seems I've accidentally deleted all files in my home directory,
>> which sits in its own btrfs partition (lvm on luks). Now I'm trying to
>> find the roots to be able to use btrfs restore later on.
>>
>> btrfs-find-root seems to be taking ages though. I've run it like so:
>>
>> btrfs-find-root /dev/mapper/think--big-home  -o 5 > roots.txt
>
> Uhh, just do btrfs-find-root by itself to get everything it can find.
> And then work backwards from the most recent generation using btrfs
> restore -t using each root bytenr from btrfs-find-root. The more
> recent the generation, the better your luck that it hasn't been
> overwritten yet; but too recent and your data may not exist in that
> root. It really depends how fast you umounted the volume after
> deleting everything.
>
>
>
> --
> Chris Murphy
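A sketch of the work-backwards loop described in the quote above, with
hypothetical bytenrs standing in for btrfs-find-root's candidates:

# for b in <bytenr1> <bytenr2> <bytenr3>; do btrfs restore -D -v -t $b /dev/mapper/think--big-home /tmp; done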


Re: [PATCH] btrfs: fix hole read corruption for compressed inline extents

2016-12-11 Thread Xin Zhou
Hi Zygo,
Since the corruption happens after I/O and checksum verification,
would it be possible to add some bug-catcher code to that code path in
debug builds, to help narrow down the issue?
Thanks,
Xin

Sent: Saturday, December 10, 2016 at 9:16 PM
From: "Zygo Blaxell" 
To: "Roman Mamedov" , "Filipe Manana" 
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH] btrfs: fix hole read corruption for compressed inline extents
Ping?

I know at least two people have read this patch, but it hasn't appeared in
the usual integration branches yet, and I've seen no actionable suggestion
to improve it. I've provided two non-overlapping rationales for it.
Is there something else you are looking for?

This patch is a fix for a simple data corruption bug. It (or some
equivalent fix for the same bug) should be on its way to all stable
kernels starting from 2.6.32.

Thanks

On Mon, Nov 28, 2016 at 05:27:10PM +0500, Roman Mamedov wrote:
> On Mon, 28 Nov 2016 00:03:12 -0500
> Zygo Blaxell  wrote:
>
> > diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> > index 8e3a5a2..b1314d6 100644
> > --- a/fs/btrfs/inode.c
> > +++ b/fs/btrfs/inode.c
> > @@ -6803,6 +6803,12 @@ static noinline int uncompress_inline(struct btrfs_path *path,
> >  	max_size = min_t(unsigned long, PAGE_SIZE, max_size);
> >  	ret = btrfs_decompress(compress_type, tmp, page,
> >  			       extent_offset, inline_size, max_size);
> > +	WARN_ON(max_size > PAGE_SIZE);
> > +	if (max_size < PAGE_SIZE) {
> > +		char *map = kmap(page);
> > +		memset(map + max_size, 0, PAGE_SIZE - max_size);
> > +		kunmap(page);
> > +	}
> >  	kfree(tmp);
> >  	return ret;
> >  }
>
> Wasn't this already posted as:
>
> btrfs: fix silent data corruption while reading compressed inline extents
> https://patchwork.kernel.org/patch/9371971/
>
> but you don't indicate that's a V2 or something, and in fact the patch seems
> exactly the same, just the subject and commit message are entirely different.
> Quite confusing.
>
> --
> With respect,
> Roman


Re: How to get back a deleted sub-volume.

2016-12-11 Thread Chris Murphy
On Sun, Dec 11, 2016 at 10:40 AM, Tomasz Kusmierz
 wrote:
> Hi,
>
> So, I've found myself in a pickle after the following steps:
> 1. Trying to migrate an array to a different system, it became apparent
> that importing the array there was not possible because I've
> had a very large amount of snapshots (every 15 minutes during office
> hours, amounting to a few K), so I've had to remove snapshots for the main
> data storage.

True, there is no recursive incremental send.

> 2. While playing with the live array, it became apparent that some bright
> spark implemented "delete all sub-volumes while removing an array from
> the system" ... needless to say this behaviour is unexpected, to say the
> least ... and I wanted to punch somebody in the face.

The technical part of this is vague. I'm guessing you used 'btrfs
device remove', but it works no differently than lvremove - when a
device is removed from an array, it wipes the signature from that
device.  You probably can restore that signature and use that device
again; depending on what the profile is for metadata and data, it may
be usable stand-alone.

Proposing assault is probably not the best way to ask for advice
though. Just a guess.




>
> Since then I was trying to rescue as much data as I can, luckily I
> managed to get a lot of data from snapshots for "other than share"
> volumes (because those were not deleted :/) but the most important
> volume "share" prove difficult. This subvolume comes out with a lot of
> errors on readout with "btrfs restore /dev/sda /mnt2/temp2/ -x -m -S
> -s -i -t".
>
> Also for some reason I can't use a lot of root blocks that I find with
> btrfs-find-root ..
>
> To put some detail here:
> btrfs-find-root -a /dev/sda
> Superblock thinks the generation is 184540
> Superblock thinks the level is 1
> Well block 919363862528(gen: 184540 level: 1) seems good, and it
> matches superblock
> Well block 919356325888(gen: 184539 level: 1) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 919343529984(gen: 184538 level: 1) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920041308160(gen: 184537 level: 1) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 919941955584(gen: 184536 level: 1) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 919670538240(gen: 184535 level: 1) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920045371392(gen: 184532 level: 1) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920070209536(gen: 184531 level: 1) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920117510144(gen: 184530 level: 1) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1 <<< here
> stuff is gone
> Well block 920139055104(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920139022336(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920138989568(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920138973184(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920137596928(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920137531392(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920137515008(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920135991296(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920135958528(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920135925760(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920135827456(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920135811072(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920133697536(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920133664768(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 92017088(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920133206016(gen: 184511 level: 0) seems good, but
> generation/level doesn't match, want gen: 184540 level: 1
> Well block 920132976640(gen: 184511 level: 0) seems good, but
> 

How to get back a deleted sub-volume.

2016-12-11 Thread Tomasz Kusmierz
Hi,

So, I've found myself in a pickle after the following steps:
1. Trying to migrate an array to a different system, it became apparent
that importing the array there was not possible because I've
had a very large amount of snapshots (every 15 minutes during office
hours, amounting to a few K), so I've had to remove snapshots for the main
data storage.
2. While playing with the live array, it became apparent that some bright
spark implemented "delete all sub-volumes while removing an array from
the system" ... needless to say this behaviour is unexpected, to say the
least ... and I wanted to punch somebody in the face.
3. The backup off-site server that was making backups every 30 minutes
was located in the CEO's house, and his wife decided that it's not
necessary to have it connected.

(laughs can start roughly here)

So I've got an array with all the data there (theoretically COW, right?)
plus a plethora of snapshots (important data was snapped every 15
minutes during office hours to capture all the changes; other
sub-volumes were snapshotted daily).

This occurred roughly on 4-12-2016.

Since then I've been trying to rescue as much data as I can. Luckily I
managed to get a lot of data from snapshots for the "other than share"
volumes (because those were not deleted :/), but the most important
volume "share" proved difficult. This subvolume comes out with a lot of
errors on readout with "btrfs restore /dev/sda /mnt2/temp2/ -x -m -S
-s -i -t".

Also for some reason I can't use a lot of root blocks that I find with
btrfs-find-root ..

To put some detail here:
btrfs-find-root -a /dev/sda
Superblock thinks the generation is 184540
Superblock thinks the level is 1
Well block 919363862528(gen: 184540 level: 1) seems good, and it
matches superblock
Well block 919356325888(gen: 184539 level: 1) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 919343529984(gen: 184538 level: 1) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920041308160(gen: 184537 level: 1) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 919941955584(gen: 184536 level: 1) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 919670538240(gen: 184535 level: 1) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920045371392(gen: 184532 level: 1) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920070209536(gen: 184531 level: 1) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920117510144(gen: 184530 level: 1) seems good, but
generation/level doesn't match, want gen: 184540 level: 1 <<< here
stuff is gone
Well block 920139055104(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920139022336(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920138989568(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920138973184(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920137596928(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920137531392(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920137515008(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920135991296(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920135958528(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920135925760(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920135827456(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920135811072(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920133697536(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920133664768(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 92017088(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920133206016(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920132976640(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920132878336(gen: 184511 level: 0) seems good, but
generation/level doesn't match, want gen: 184540 level: 1
Well block 920132845568(gen: 184511 level: 0) seems good, but
generation/level doesn't match,