Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-19 Thread Josef Bacik

On 12/12/2014 09:37 AM, Tomasz Chmielewski wrote:

FYI, still seeing this with 3.18 (scrub passes fine on this filesystem).

# time btrfs balance start /mnt/lxc2
Segmentation fault



Ok, now I remember why I haven't fixed this yet: the images you gave me 
restore, but then they don't mount because the extent tree is corrupted 
for some reason.  Could you re-image this fs and send it to me, and I 
promise to spend all of my time on the problem until it's fixed.  Thanks,


Josef
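
A metadata image of the kind Josef asks for is usually produced with btrfs-image from btrfs-progs. A minimal sketch, with the device name and output path purely illustrative (-s sanitizes file names if the data is sensitive; flags worth double-checking against your btrfs-progs version):

# dump the filesystem's metadata (no file contents) into a compressed image
btrfs-image -c9 -t4 -s /dev/sdc1 /tmp/sdc1.img

# a developer can later re-create the metadata on a scratch device with
# btrfs-image -r /tmp/sdc1.img /dev/sdX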



Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-19 Thread Tomasz Chmielewski

On 2014-12-19 22:47, Josef Bacik wrote:

On 12/12/2014 09:37 AM, Tomasz Chmielewski wrote:
FYI, still seeing this with 3.18 (scrub passes fine on this 
filesystem).


# time btrfs balance start /mnt/lxc2
Segmentation fault



Ok, now I remember why I haven't fixed this yet: the images you gave me
restore, but then they don't mount because the extent tree is corrupted
for some reason.  Could you re-image this fs and send it to me, and I
promise to spend all of my time on the problem until it's fixed.


(Un)fortunately, one filesystem stopped crashing on balance with some 
kernel update, and the other one I had crashing on balance was fixed 
with btrfsck - so I'm not able to reproduce anymore / produce an image 
which is crashing.


--
Tomasz Chmielewski
http://www.sslrack.com


Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-15 Thread Josef Bacik

On 12/12/2014 09:37 AM, Tomasz Chmielewski wrote:

FYI, still seeing this with 3.18 (scrub passes fine on this filesystem).

# time btrfs balance start /mnt/lxc2
Segmentation fault

real    322m32.153s
user    0m0.000s
sys     16m0.930s




Sorry Tomasz, you are now at the top of the list.  I assume the images 
you sent me before are still good for reproducing this?  Thanks,


Josef



Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-15 Thread Tomasz Chmielewski

On 2014-12-15 21:07, Josef Bacik wrote:

On 12/12/2014 09:37 AM, Tomasz Chmielewski wrote:
FYI, still seeing this with 3.18 (scrub passes fine on this 
filesystem).


# time btrfs balance start /mnt/lxc2
Segmentation fault

real    322m32.153s
user    0m0.000s
sys     16m0.930s




Sorry Tomasz, you are now at the top of the list.  I assume the images
you sent me before are still good for reproducing this?  Thanks,


I sent you two URLs back then; they should still work. One of these 
filesystems did not crash the 3.18.0 kernel anymore (though many files 
were changed / added / removed since I uploaded the images); the other 
still did.



Tomasz



Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-14 Thread Robert White

On 12/13/2014 03:56 PM, Robert White wrote:

...


Dangit... On re-reading I think I was still less than optimally clear. I 
kept using the word "resent" when I should have been using a word like 
"re-written" or "re-stored" (as opposed to "restored"). On re-reading 
I'm not sure what the least confusing word would be.


So here is a contrived example (with seriously simplified assumptions):

Let's say every day rsync coincidentally sends 1GiB and the receiving 
filesystem is otherwise almost quiescent, so as a side effect the 
receiving filesystem monotonically creates one 1GiB data extent per day. A 
snapshot is taken every day after the rsync. (This is all just to 
make the mental picture easier.)


Let's say there is a file, Aardvark, that just happens to be the first file 
considered every time, happens to grow by exactly 1MiB in pure 
append each day, and started out at 1MiB. After ten days Aardvark is 
stored across ten extents. After 100 days it is stored across 100 
extents. Each successive 1MiB is exactly 1023MiB away from its 
predecessor and successor.


Now consider file Badger, the second file. It is 100MiB in size. It is 
also modified each day such that five percent of its total bytes are 
rewritten as exactly five records of exactly 1MiB aligned on 1MiB 
boundaries, all on convenient rsync boundaries. On the first day a 
100MiB chunk lands squarely in the first data extent right next to 
Aardvark. On the second and every successive day 5MiB lands next to 
Aardvark in the next extent. But that 5MiB is not contiguous; it is five 
1MiB holes punched in a completely fair distribution across all the 
active fragments of Badger wherever they lie.


A linear read of Aardvark gets monotonically worse with each rsync. A 
linear read of Badger decays towards being 100 head seeks for every 
linear read.
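
A rough way to watch this kind of decay on a real filesystem is to count extents with filefrag; a minimal sketch, where the two paths stand in for an append-grown file and a rewritten-in-place file:

# count how many extents each file occupies; a steadily growing count on
# the append-only file mirrors the Aardvark pattern described above
filefrag /path/to/aardvark /path/to/badger

# -v lists every extent with its physical offset, which shows how far
# apart the successive 1MiB pieces have landed
filefrag -v /path/to/aardvark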


Now how does rsync work? It does a linear read of each file - all of 
Aardvark, then all of Badger (etc.) - to create the progressive checksum 
stream that it uses to determine if a block needs to be transmitted or not.


Now if we start aging off (and deleting) snapshots, we start realizing 
the holes in the oldest copies of Badger. There is a very high 
probability that the next chunk of Aardvark is going to end up somewhere 
in Badger-of-day-one. Worse still, some parts of Badger are going to end 
up in Badger-of-day-one but nicely out of order.


At this point the model starts to get too complex for my understanding 
(I don't know how BTRFS selects which data extent to put any one chunk 
of data in relative to the rest of the file contents, or whether it tries 
to fill the fullest chunk, the least-full chunk, or does some 
other best-fit for this case, so I have to stop that half of the example 
there.)


Additionally: After (N*log(N))^2 days (where I think N is 5) [because of 
fair randomness] {so just shy of two months?} there is a high 
probability that no _current_ part of Badger is still mapped to data 
extent 1. But it is still impossible for snapshot removal to result in a 
reclaim of data extent 1... Aardvark's first block is there forever.


Now compare this to doing the copy.

A linear write of a file is supposed to be (if I understand what I'm 
reading here) laid out as closely as possible as a linear extent on the 
disk. Not guaranteed, but it's a goal.  This would be more true if the 
application doing the writing called fallocate(). [I don't know if rsync 
does fallocate(), I'm just saying.]
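
One way to check is to look at the rsync build itself. A hedged sketch: the "prealloc" capability listing and the --preallocate option only exist in rsync 3.1.0 and later, so treat both as assumptions to verify against your version:

# newer rsync builds list their optional features, including preallocation
rsync --version

# a cruder check, as mentioned later in this thread: look for the symbol
strings /usr/bin/rsync | grep -i fallocate

# even when supported, preallocation is opt-in rather than automatic:
# rsync -a --preallocate SRC DST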


So now on day one, Aardvark is one 1MiB chunk in Data extent 1, followed 
by all of Badger.


On day two Aardvark is one 2MiB chunk in Data extent 2, followed by all 
of Badger.


(magical adjustments take place in the source data stream so that we are 
still, by incredible coincidence, using up exactly one extent every day. 
[it's like one of those physics problems where we get to ignore 
friction. 8-)])


On every rsync pass, both the growing Aardvark and the active working 
set of Badger are available as linear reads while making the rolling 
checksums.


If Aardvark and/or Badger need to be used for any purpose from one or 
more of the snapshots, they will also benefit from locality and linear 
read optimization.


When we get around to deleting the first snapshot all of the active 
parts of Aardvark and Badger are long gone (and since this is magical 
fairy land, data extent one is reclaimed!).


---

How realistic is this? Well clearly magical fairies were involved in the 
making of this play. But the role of Badger will be played by a database 
tablespace and his friend Aardvark will be played by the associated 
update journal. Meaning that both of those two file behaviors are 
real-world examples (notwithstanding the cartoonish monotonic update 
profile).


And _clearly_ once you start deleting older snapshots the orderly 
picture would fall apart piecewise.


Then again, according to grep, my /usr/bin/rsync contains the string 
"fallocate". Not a guarantee it's being used, but a strong 

Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-13 Thread Tomasz Chmielewski

On 2014-12-12 23:58, Robert White wrote:


I don't have the history to answer this definitively, but I don't
think you have a choice. Nothing else is going to touch that error.

I have not seen any "oh my god, btrfsck just ate my filesystem" errors
since I joined the list -- but I am a relative newcomer.

I know that you, of course, as a contentious and well-traveled system
administrator, already have a current backup since you are doing
storage maintenance... right? 8-)


Who needs backups with btrfs, right? :)

So apparently btrfsck --repair fixed some issues, the fs is still 
mountable and looks fine.


Running balance again, but that will take many days there.

# btrfsck --repair /dev/sdc1
fixing root item for root 8681, current bytenr 5568935395328, current 
gen 70315, current level 2, new bytenr 5569014104064, new gen 70316, new 
level 2

Fixed 1 roots.
checking extents
checking free space cache
checking fs roots
root 696 inode 2765103 errors 400, nbytes wrong
root 696 inode 2831256 errors 400, nbytes wrong
root 9466 inode 2831256 errors 400, nbytes wrong
root 9505 inode 2831256 errors 400, nbytes wrong
root 10139 inode 2831256 errors 400, nbytes wrong
root 10525 inode 2831256 errors 400, nbytes wrong
root 10561 inode 2831256 errors 400, nbytes wrong
root 10633 inode 2765103 errors 400, nbytes wrong
root 10633 inode 2831256 errors 400, nbytes wrong
root 10650 inode 2765103 errors 400, nbytes wrong
root 10650 inode 2831256 errors 400, nbytes wrong
root 10680 inode 2765103 errors 400, nbytes wrong
root 10680 inode 2831256 errors 400, nbytes wrong
root 10681 inode 2765103 errors 400, nbytes wrong
root 10681 inode 2831256 errors 400, nbytes wrong
root 10701 inode 2765103 errors 400, nbytes wrong
root 10701 inode 2831256 errors 400, nbytes wrong
root 10718 inode 2765103 errors 400, nbytes wrong
root 10718 inode 2831256 errors 400, nbytes wrong
root 10735 inode 2765103 errors 400, nbytes wrong
root 10735 inode 2831256 errors 400, nbytes wrong
enabling repair mode
Checking filesystem on /dev/sdc1
UUID: 371af1dc-d88b-4dee-90ba-91fec2bee6c3
cache and super generation don't match, space cache will be invalidated
found 942113871627 bytes used err is 1
total csum bytes: 2445349244
total tree bytes: 28743073792
total fs tree bytes: 22880043008
total extent tree bytes: 2890547200
btree space waste bytes: 5339534781
file data blocks allocated: 2779865800704
 referenced 3446026993664
Btrfs v3.17.3

real    76m27.845s
user    19m1.470s
sys     2m55.690s


--
Tomasz Chmielewski
http://www.sslrack.com



Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-13 Thread Robert White

On 12/13/2014 12:16 AM, Tomasz Chmielewski wrote:

On 2014-12-12 23:58, Robert White wrote:


I don't have the history to answer this definitively, but I don't
think you have a choice. Nothing else is going to touch that error.

I have not seen any "oh my god, btrfsck just ate my filesystem" errors
since I joined the list -- but I am a relative newcomer.

I know that you, of course, as a contentious and well-traveled system
administrator, already have a current backup since you are doing
storage maintenance... right? 8-)


Who needs backups with btrfs, right? :)

So apparently btrfsck --repair fixed some issues, the fs is still
mountable and looks fine.

Running balance again, but that will take many days there.


Might I ask why you are running balance? After a persistent error I'd 
understand going straight to scrub, but balance is usually for 
transformation or to redistribute things after atypical use.


An entire generation of folks have grown used to defragging Windows boxes 
and all, but if you've already got an array that is going to take many 
days to balance, what benefit do you actually expect to receive?



Defrag -- used for "I think I'm getting a lot of unnecessary head seek 
in this application, these files need to be brought into closer order."


Scrub -- used for defensive checking a-la checkdisk: "I suspect that 
after that unexpected power outage something may be a little off", or 
alternately "I think my disks are giving me bitrot, I better check."


Btrfsck -- used for "I suspect structural problems caused by real world 
events like power hits or that one time when the cat knocked over my 
tower case while I was vacuuming all my sql tables." (Often reserved for 
"hey, I'm getting weird messages from the kernel about things in my 
filesystem.")


Balance -- primary -- used for "Well, I used to use this filesystem for a 
small number of large files, but now I am processing a large number of 
small files and I'm running out of metadata even though I've got a lot 
of space." (or vice versa)


Balance -- other -- used for "I just changed the geometry of my 
filesystem by adding or removing a disk and I want to spread out."


Balance -- (conversion/restructuring) -- used for "single is okay, but 
I'd rather raid-0 to spread out my load across these many disks" or 
"gee, I'd like some redundancy now that I have the room."
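
For readers following along, those four operations map onto commands roughly like the ones below; a hedged sketch, with the mount point and device names purely illustrative:

# defrag: rewrite files to reduce fragmentation (here, recursively)
btrfs filesystem defragment -r /mnt/lxc2

# scrub: verify checksums of all data and metadata in the background
btrfs scrub start /mnt/lxc2
btrfs scrub status /mnt/lxc2

# offline check of an unmounted filesystem (btrfsck is the older name)
btrfs check /dev/sdc1

# balance: rewrite chunks through the allocator, optionally converting profiles
btrfs balance start /mnt/lxc2
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/lxc2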




Frequent balancing of a Copy On Write filesystem will tend to make 
things somewhat anti-optimal. You are burping the natural working space 
out of the natural layout.


Since COW implies mandatory movement of data, every time you burp out 
all the slack and pack all the data together you are taking your 
regularly modified files and moving them far, far away from the places 
where frequently modified files are most happy (e.g. the 
only-partly-full data region they were just living in).


Similarly two files that usually get modified at the same time (say a 
databse file and its rollback log) will tend to end up in the same 
active data extent as time goes on, and if balance decides it can clean 
up that extent it will likely give those two files a data-extent 
divorce and force them to the opposite ends of dataland.


COW systems are inherently somewhat chaotic. If you fight that too 
aggressively you will, at best, be wasting the maintenance time.


It may be a decrease in performance measured in very small quanta, but 
so is the expected benefit of most maintenance.



From the wiki:

https://btrfs.wiki.kernel.org/index.php/FAQ#What_does_.22balance.22_do.3F

"btrfs filesystem balance" is an operation which simply takes all of the 
data and metadata on the filesystem, and re-writes it in a different 
place on the disks, passing it through the allocator algorithm on the 
way. It was originally designed for multi-device filesystems, to spread 
data more evenly across the devices (i.e. to "balance" their usage). 
This is particularly useful when adding new devices to a nearly-full 
filesystem.

Due to the way that balance works, it also has some useful side-effects:
- If there is a lot of allocated but unused data or metadata chunks, a 
balance may reclaim some of that allocated space. This is the main 
reason for running a balance on a single-device filesystem.
- On a filesystem with damaged replication (e.g. a RAID-1 FS with a dead 
and removed disk), it will force the FS to rebuild the missing copy of 
the data on one of the currently active devices, restoring the RAID-1 
capability of the filesystem.
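
The chunk-reclaim side effect is what the balance usage filters are usually aimed at; a minimal sketch, assuming a btrfs-progs recent enough to support filters and an illustrative mount point:

# only rewrite data chunks that are at most 5% used, which hands the
# mostly-empty chunks back to the unallocated pool
btrfs balance start -dusage=5 /mnt/lxc2

# the same idea for metadata chunks
btrfs balance start -musage=5 /mnt/lxc2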





Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-13 Thread Tomasz Chmielewski

On 2014-12-13 10:39, Robert White wrote:


Might I ask why you are running balance? After a persistent error I'd
understand going straight to scrub, but balance is usually for
transformation or to redistribute things after atypical use.


There were several reasons for running balance on this system:

1) I was getting "no space left" errors, even though there were hundreds 
of GBs free. Not sure if this still applies to current kernels (3.18 and 
later), but it was certainly a problem in the past (see the sketch after 
point 3 below for a way to check the allocation).


2) The system was regularly freezing; I'd say once a week was the norm. 
Sometimes I was getting btrfs traces logged in syslog.
After a few freezes the fs was getting corrupted to varying degrees. At 
some point it was so bad that it was only possible to use it read-only, 
so I had to get the data off, reformat, copy back... It would start 
crashing again after a few weeks of usage.


My usage case is quite simple:

- skinny extents, extended inode refs
- mount compress-force=zlib
- rsync many remote data sources (-a -H --inplace --partial) + snapshot
- around 500 snapshots in total, from 20 or so subvolumes
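
A minimal sketch of that workflow, with the host name, paths, and snapshot naming purely illustrative:

# mount with forced compression (as described above)
mount -o compress-force=zlib /dev/sdc1 /mnt/lxc2

# pull one remote data source into its subvolume, updating files in place
rsync -a -H --inplace --partial remote1:/data/ /mnt/lxc2/remote1/

# then freeze the result as a read-only snapshot
btrfs subvolume snapshot -r /mnt/lxc2/remote1 /mnt/lxc2/snapshots/remote1-$(date +%Y%m%d)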

Especially rsync's --inplace option combined with many snapshots and 
large fragmentation was deadly for btrfs - I was seeing system freezes 
right when rsyncing a highly fragmented, large file.


Then, running balance on the corrupted filesystem was more of an exercise 
(if scrub passes fine, I would expect balance to pass as well). Some 
BUGs it was causing were fixed in newer kernels, some were not 
(btrfsck was not really usable a few months back).


3) I had varying luck with recovering btrfs after a failed drive (in 
RAID-1). Sometimes it worked as expected; sometimes the fs was getting 
broken so badly I had to rsync data off it and format from scratch 
(mdraid would kick the drive after getting write errors - that's not the 
case with btrfs, and weird things can happen).
Sometimes, running "btrfs device delete missing" (which is balance in 
principle, I think) would take weeks, during which a second drive could 
easily die.
Again, running balance would be more of an exercise there, to see if the 
newer kernel still crashes.
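
Regarding the premature "no space left" symptom in point 1), the usual first step is to compare how much space is allocated to chunks against how much is actually used inside them; a hedged sketch ("btrfs filesystem usage" needs reasonably recent btrfs-progs, "btrfs filesystem df" is older):

# per-profile view: "total" is chunk space already reserved,
# "used" is what the data actually occupies inside those chunks
btrfs filesystem df /mnt/lxc2

# newer progs add a combined view that includes unallocated device space
btrfs filesystem usage /mnt/lxc2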




An entire generation of folks have grown used to defragging Windows
boxes and all, but if you've already got an array that is going to
take many days to balance, what benefit do you actually expect to
receive?


For me - it's a good test to see if btrfs is finally getting stable 
(some cases explained above).




Defrag -- used for "I think I'm getting a lot of unnecessary head seek
in this application, these files need to be brought into closer
order."


Fragmentation was an issue for btrfs, at least a few kernels back (as 
explained above, with rsync's --inplace).
However, I'm not running autodefrag anywhere - not sure how it affects 
snapshots.




Scrub -- used for defensive checking a-la checkdisk: "I suspect that
after that unexpected power outage something may be a little off", or
alternately "I think my disks are giving me bitrot, I better check."


For me, it was passing fine, where balance was crashing the kernel.


Again, my main rationale for running balance is to see if btrfs is 
behaving stably. While I have systems with btrfs which have been running 
fine for months, I also have ones which will crash after 1-2 weeks (once 
the system grows in size / complexity).


So hopefully btrfsck has fixed that fs - once it has been running stably 
for a week or two, I might be brave enough to re-enable btrfs quotas 
(another system freezer, at least a few kernels back).



--
Tomasz Chmielewski
http://www.sslrack.com



Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-13 Thread Robert White

On 12/13/2014 05:53 AM, Tomasz Chmielewski wrote:

My usage case is quite simple:

- skinny extents, extended inode refs

okay


- mount compress-force=zlib

I'd, personally, never force compression. This can increase the size 
of files by five or more percent if it is an inherently incompressible 
file. While it is easy to deliberately create a file that will trick the 
compress-check logic into not compressing something that would benefit 
from compression, that does _not_ happen by chance very often at all.



- rsync many remote data sources (-a -H --inplace --partial) + snapshot


Using --inplace on a Copy On Write filesystem has only one effect: it 
increases fragmentation... a lot. Every new block is going to get 
written to a new area anyway, so if you have enough slack space to keep 
one new copy of the file (space you will probably use up anyway 
in the COW event), laying in the fresh copy in a likely more contiguous 
way will tend to make things cleaner over time.


--inplace is doubly useless with compression as compression is perturbed 
by default if one byte changes in the original file.


The only time --inplace might be helpful is if the file is NOCOW... 
except...




- around 500 snapshots in total, from 20 or so subvolumes


That's a lot of snapshots and subvolumes. Not an impossibly high number, 
but a lot. That needs its own use-case evaluation. But regardless...


Even if you set the NOCOW option on a file to make the --inplace rsync 
work, if that file is snapshotted between the rsync modification events 
it will be in 1COW (COW-once) mode because of the snapshot anyway, and 
you are back to the default anti-optimal conditions.
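
For reference, NOCOW is normally set per file or per directory with chattr; a minimal sketch (note the assumption that the attribute only takes reliable effect on files created inside the directory after the flag is set, not on existing non-empty files):

# new files created in this directory inherit the NOCOW attribute
mkdir /mnt/lxc2/remote1-nocow
chattr +C /mnt/lxc2/remote1-nocow

# verify: the 'C' flag should appear in the attribute listing
lsattr -d /mnt/lxc2/remote1-nocow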




Especially rsync's --inplace option combined with many snapshots and
large fragmentation was deadly for btrfs - I was seeing system freezes
right when rsyncing a highly fragmented, large file.


You are kind of doing all that to yourself. By combining _forced_ 
compression with denying the natural opportunity for the re-write of the 
file to move it to nicely contiguous new locations, and then pinning it 
all in place with multiple snapshots, you've created the worst of all 
possible worlds.


The more you use optional gross-behavior options on some sorts of things, 
the more you are fighting the natural organization of the system. That 
is, every system is designed around a set of core assumptions, and 
behavioral options tend to invalidate the mainline assumptions. Some 
options, like "recursive", are naturally part of those assumptions and 
play into them; other options, particularly things with "force" in the 
name, tend to be "if you really think you must, sure, I'll do what you 
say, but if it turns out bad it's on _your_ head" options. Which options 
are which is a judgment call, but the combination you've chosen is 
definitely working in that bad area.


And keep repeating this to yourself: "balance does not reorganize 
anything, it just moves the existing disorder to a new location." This 
is not a perfect summation, and it's clearly wrong if you are using 
"convert", but it's the correct way to view what's happening while 
asking yourself "should I balance?".




Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-13 Thread Tomasz Chmielewski

On 2014-12-13 21:54, Robert White wrote:

- rsync many remote data sources (-a -H --inplace --partial) + 
snapshot


Using --inplace on a Copy On Write filesystem has only one effect, it
increases fragmentation... a lot...


...if the file was changed.



Every new block is going to get
written to a new area anyway,


Exactly - every new block. But that's true with and without --inplace.
Also - without --inplace, it is every block. In other words, without 
--inplace, the file is likely to be rewritten by rsync to a new one, and 
CoW is lost (more below).




so if you have enough slack space to
keep the one new copy of the new file, which you will probably use up
anyway in the COW event, laying in the fresh copy in a likely more
contiguous way will tend to make things cleaner over time.

--inplace is doubly useless with compression as compression is
perturbed by default if one byte changes in the original file.


No. If you change 1 byte in a 100 MB file, or perhaps a 1 GB file, you 
will likely lose a few kBs of CoW. The whole file is certainly not 
rewritten if you use --inplace. However, it will be wholly rewritten if 
you don't use --inplace.



The only time --inplace might be helpful is if the file is NOCOW... 
except...


No, you're wrong.
By default, rsync creates a new file if it detects any file modification 
- like "touch file".


Consider this experiment:

# create a large file
dd if=/dev/urandom of=bigfile bs=1M count=3000

# copy it with rsync
rsync -a -v --progress bigfile bigfile2

# copy it again - blazing fast, no change
rsync -a -v --progress bigfile bigfile2

# touch the original file
touch bigfile

# try copying again with rsync - notice rsync creates a temp file, like 
.bigfile2.J79ta2

# No change to the file except the timestamp, but good bye your CoW.
rsync -a -v --progress bigfile bigfile2

# Now try the same with --inplace; compare data written to disk with 
iostat -m in both cases.
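
Spelling out that comparison step as a hedged sketch (iostat comes from the sysstat package, and without an interval it prints cumulative per-device totals, which is enough for a before/after comparison):

# cumulative MB written per device before the copy
iostat -m

# the in-place variant: only the changed blocks should hit the disk, so
# CoW sharing with existing snapshots survives
rsync -a -v --progress --inplace bigfile bigfile2

# cumulative MB written per device after the copy
iostat -m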



The same goes for append-only files - even if they are compressed, most 
CoW will be shared. I'd say it will be similar for lightly modified files 
(changed data will be CoW-unshared, some compression overhead will be 
unshared, but the rest will be untouched / shared by CoW between the 
snapshots).





- around 500 snapshots in total, from 20 or so subvolumes


That's a lot of snapshots and subvolumes. Not an impossibly high
number, but a lot. That needs its own use-case evaluation. But
regardless...

Even if you set the NOCOW option on a file to make the --inplace rsync
work, if that file is snapshotted (snapshot?) between the rsync
modification events it will be in 1COW mode because of the snapshot
anyway and you are back to the default anti-optimal conditions.


Again - if the file was changed a lot, it doesn't matter if it's 
--inplace or not. If the file data was not changed, or changed little - 
--inplace will help preserve CoW.




Especially rsync's --inplace option combined with many snapshots and
large fragmentation was deadly for btrfs - I was seeing system freezes
right when rsyncing a highly fragmented, large file.


You are kind of doing all that to yourself.


To clarify "freezes" - I mean kernel bugs being exposed and the machine 
freezing. I think we all agree that whatever userspace is doing in the 
filesystem, it should not result in a kernel BUG / freeze.




Combining _forced_
compression with denying the natural opportunity for the re-write of
the file to move it to nicely contiguous new locations and then
pinning it all in place with multiple snapshots you've created the
worst of all possible worlds.


I disagree. It's quite compact for my data usage. If I needed blazing-fast 
file access, I wouldn't be using a CoW filesystem or snapshots in 
the first place. For data mostly stored and rarely read, it is OK.



(...)


And keep repeating this to yourself :: balance does not reorganize
anything, it just moves the existing disorder to a new location. This
is not a perfect summation, and it's clearly wrong if you are using
convert, but it's the correct way to view what's happening while
asking yourself should I balance?.


I agree - I don't run it unless I need to (or I'm curious to see if it 
would expose some more bugs).
It would be quite a step back for a filesystem to need some periodic 
maintenance like that after all.


Also, I'm of the opinion that balance should not cause the kernel to BUG 
- it should abort, possibly remount the fs read-only, etc. (and suggest 
running btrfsck, if there is enough confidence in that tool), but 
definitely not BUG.



--
Tomasz Chmielewski
http://www.sslrack.com



Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-13 Thread Robert White

On 12/13/2014 01:52 PM, Tomasz Chmielewski wrote:

On 2014-12-13 21:54, Robert White wrote:


- rsync many remote data sources (-a -H --inplace --partial) + snapshot


Using --inplace on a Copy On Write filesystem has only one effect, it
increases fragmentation... a lot...


...if the file was changed.


If the file hasn't changed then it won't be transferred, by 
definition. So the un-changed file is not terribly interesting.


And I did think about the rest of (most of) your points right after 
sending the original email, particularly since I don't know your actual 
use case. But there is no "un-send", which I suddenly realized I wanted 
to do... because I needed to change my answer. Like ten seconds later. 
/sigh.


I'm still strongly against forcing compression.

That said, my knee-jerk reaction to using --inplace is still strong for 
almost all file types.


And it remains almost absolute in your case simply because you are 
finding yourself needing to balance and whatnot.


E.g. the theoretical model of efficient partial copies as you present it 
is fine... up until we get back to your original complaint about what a 
mess it makes.


The ruling precept here is Ben Franklin's "penny wise, pound foolish". 
What you _might_ be saving up-front with --inplace is charging you 
double on the back-end with maintenance.



Every new block is going to get
written to a new area anyway,


Exactly - every new block. But that's true with and without --inplace.
Also - without --inplace, it is every block. In other words, without
--inplace, the file is likely to be rewritten by rsync to a new one, and
CoW is lost (more below).


I don't know the nature of the particular files you are transferring, but 
I do know a lot about rsync and file layout in general for lots of 
different types of files.


(rsync details here, for readers following along: 
http://rsync.samba.org/how-rsync-works.html )


Now I am assuming you took this advice from something like the manual 
page [QUOTE] This [--inplace] option  is useful for transferring large 
files with block-based changes or appended data, and also on systems 
that are disk bound, not network bound.  It can also help keep a 
copy-on-write filesystem snapshot from diverging the entire contents of 
a file that only has minor changes.[/QUOTE] Though maybe not since the 
description goes on to say that --inplace implies --partial so 
specifying both is redundant.


But here's the thing: those files are really rare. Way more rare than 
you might think.  They consist almost entirely of block-based database 
extents (like an Oracle tablespace file), logfiles (such as are found in 
/var/log/messages etc.) and VM disk image files (particularly raw 
images); ISO images that are _only_ modified by adding tracks may fall 
into this category as well.



So we've already skipped the unchanged files...

So, inserting a single byte into, or removing a single byte from, any 
file will cause a re-write from that point on. It will send that file 
from the block boundary containing that byte. Just about anything with a 
header and a history is going to get re-sent almost completely. This 
includes the output from any word processing program you are likely to 
encounter.
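
A quick way to see how much of a file rsync actually re-sends is the --stats summary; a hedged sketch, reusing the bigfile/bigfile2 pair from the experiment quoted above (the "Literal data" and "Matched data" lines are part of rsync's stats output; --no-whole-file is needed because rsync skips the delta algorithm entirely when both paths are local):

# append a little data: "Matched data" should cover nearly the whole file
dd if=/dev/urandom bs=1K count=64 >> bigfile
rsync -a --no-whole-file --inplace --stats bigfile bigfile2

# insert one byte at the front instead: with --inplace, much more of the
# file should now show up as "Literal data"
( dd if=/dev/urandom bs=1 count=1; cat bigfile ) > bigfile.shifted
rsync -a --no-whole-file --inplace --stats bigfile.shifted bigfile2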


Anything with linear compression (such as Open Document Format, which is 
basically a ZIP file) will be resent entirely.


All compiled program binaries will be resent entirely if the program 
changed at all (the headers again, the changes in text segments, the 
changes in layout that a single-byte difference in size causes the ELF 
or DLL formats to juggle significantly).


And I could go on at length, but I'll skip that...

And _then_ the forced compression comes into play.

Rsync is going to impose its default block size to frame changes (see 
--block-size=) and then BTRFS is going to impose its compression frame 
sizes (presuming it is done by block size). If these are not exactly the 
same size, any rsync block that updates will result in one or two extra 
compression blocks being re-written by the tiling overlap effect.
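
One way to see that tiling effect on disk is to inspect the extent layout of a compressed destination file. A hedged sketch: it assumes btrfs stores compressed data in pieces of at most 128KiB (true of the implementation at the time, as far as I know), and the --block-size value and paths are only illustrative:

# list the on-disk extents of a compressed file; with compress-force each
# extent should cover at most 128KiB of file data
filefrag -v /mnt/lxc2/remote1/bigfile

# pinning rsync's block size to the same 128KiB makes its change framing
# line up with the compression pieces instead of straddling them
rsync -a -H --inplace --partial --block-size=131072 remote1:/data/ /mnt/lxc2/remote1/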



so if you have enough slack space to
keep the one new copy of the new file, which you will probably use up
anyway in the COW event, laying in the fresh copy in a likely more
contiguous way will tend to make things cleaner over time.

--inplace is doubly useless with compression as compression is
perturbed by default if one byte changes in the original file.


No. If you change 1 byte in a 100 MB file, or perhaps a 1 GB file, you
will likely lose a few kBs of CoW. The whole file is certainly not
rewritten if you use --inplace. However, it will be wholly rewritten if
you don't use --inplace.



The only time --inplace might be helpful is if the file is NOCOW...
except...


No, you're wrong.
By default, rsync creates a new file if it detects any file modification
- like touch file.

Consider this experiment:

# create a large file
dd if=/dev/urandom of=bigfile bs=1M count=3000

# copy it with rsync
rsync -a -v 

Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-12 Thread Tomasz Chmielewski

FYI, still seeing this with 3.18 (scrub passes fine on this filesystem).

# time btrfs balance start /mnt/lxc2
Segmentation fault

real    322m32.153s
user    0m0.000s
sys     16m0.930s


[20182.461873] BTRFS info (device sdd1): relocating block group 
6915027369984 flags 17

[20194.050641] BTRFS info (device sdd1): found 4819 extents
[20286.243576] BTRFS info (device sdd1): found 4819 extents
[20287.143471] BTRFS info (device sdd1): relocating block group 
6468350771200 flags 17

[20295.756934] BTRFS info (device sdd1): found 3613 extents
[20306.981773] BTRFS (device sdd1): parent transid verify failed on 
5568935395328 wanted 70315 found 102416
[20306.983962] BTRFS (device sdd1): parent transid verify failed on 
5568935395328 wanted 70315 found 102416
[20307.029841] BTRFS (device sdd1): parent transid verify failed on 
5568935395328 wanted 70315 found 102416

[20307.030037] [ cut here ]
[20307.030083] kernel BUG at fs/btrfs/relocation.c:242!
[20307.030130] invalid opcode:  [#1] SMP
[20307.030175] Modules linked in: ipt_MASQUERADE nf_nat_masquerade_ipv4 
iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat 
nf_conntrack ip_tables x_tables cpufreq_ondemand cpufreq_conservative 
cpufreq_powersave cpufreq_stats nfsd auth_rpcgss oid_registry exportfs 
nfs_acl nfs lockd grace fscache sunrpc ipv6 btrfs xor raid6_pq 
zlib_deflate coretemp hwmon loop pcspkr i2c_i801 i2c_core lpc_ich 
mfd_core 8250_fintek battery parport_pc parport tpm_infineon tpm_tis tpm 
ehci_pci ehci_hcd video button acpi_cpufreq ext4 crc16 jbd2 mbcache 
raid1 sg sd_mod r8169 mii ahci libahci libata scsi_mod

[20307.030587] CPU: 3 PID: 4218 Comm: btrfs Not tainted 3.18.0 #1
[20307.030634] Hardware name: System manufacturer System Product 
Name/P8H77-M PRO, BIOS 1101 02/04/2013
[20307.030724] task: 8807f2cac830 ti: 8807e9198000 task.ti: 
8807e9198000
[20307.030811] RIP: 0010:[a02e8240]  [a02e8240] 
relocate_block_group+0x432/0x4de [btrfs]

[20307.030914] RSP: 0018:8807e919bb18  EFLAGS: 00010202
[20307.030960] RAX: 8805f06c40f8 RBX: 8805f06c4000 RCX: 
00018023
[20307.031008] RDX: 8805f06c40d8 RSI: 8805f06c40e8 RDI: 
8807ff403900
[20307.031056] RBP: 8807e919bb88 R08: 0001 R09: 

[20307.031105] R10: 0003 R11: a02e43a6 R12: 
8807e637f090
[20307.031153] R13: 8805f06c4108 R14: fff4 R15: 
8805f06c4020
[20307.031201] FS:  7f1bdb4ba880() GS:88081fac() 
knlGS:

[20307.031289] CS:  0010 DS:  ES:  CR0: 80050033
[20307.031336] CR2: 7f5672e18070 CR3: 0007e99cc000 CR4: 
001407e0

[20307.031384] Stack:
[20307.031426]  ea0016296680 8805f06c40e8 ea0016296380 

[20307.031515]  ea0016296400 00ffea0016296440 a805e22b2a30 
1000
[20307.031604]  8804d86963f0 8805f06c4000  
8807f2d785a8

[20307.031693] Call Trace:
[20307.031743]  [a02e8444] 
btrfs_relocate_block_group+0x158/0x278 [btrfs]
[20307.031838]  [a02c5fd4] 
btrfs_relocate_chunk.isra.70+0x35/0xa5 [btrfs]

[20307.031931]  [a02c75d4] btrfs_balance+0xa66/0xc6b [btrfs]
[20307.031981]  [810bd63a] ? 
__alloc_pages_nodemask+0x137/0x702
[20307.032036]  [a02cd485] btrfs_ioctl_balance+0x220/0x29f 
[btrfs]

[20307.032089]  [a02d2586] btrfs_ioctl+0x1134/0x22f6 [btrfs]
[20307.032138]  [810d5d83] ? handle_mm_fault+0x44d/0xa00
[20307.032186]  [81175862] ? avc_has_perm+0x2e/0xf7
[20307.032234]  [810d889d] ? __vm_enough_memory+0x25/0x13c
[20307.032282]  [8110f05d] do_vfs_ioctl+0x3f2/0x43c
[20307.032329]  [8110f0f5] SyS_ioctl+0x4e/0x7d
[20307.032376]  [81030ab3] ? do_page_fault+0xc/0x11
[20307.032424]  [813b5992] system_call_fastpath+0x12/0x17
[20307.032488] Code: 00 00 00 48 39 83 f8 00 00 00 74 02 0f 0b 4c 39 ab 
08 01 00 00 74 02 0f 0b 48 83 7b 20 00 74 02 0f 0b 83 bb 20 01 00 00 00 
74 02 0f 0b 83 bb 24 01 00 00 00 74 02 0f 0b 48 8b 73 18 48 8b 7b 08
[20307.032660] RIP  [a02e8240] 
relocate_block_group+0x432/0x4de [btrfs]

[20307.032754]  RSP 8807e919bb18
[20307.033068] ---[ end trace 18be77360e49d59d ]---



On 2014-11-25 23:33, Tomasz Chmielewski wrote:

I'm still seeing this when running balance with 3.18-rc6:

[95334.066898] BTRFS info (device sdd1): relocating block group
6468350771200 flags 17
[95344.384279] BTRFS info (device sdd1): found 5371 extents
[95373.555640] BTRFS (device sdd1): parent transid verify failed on
5568935395328 wanted 70315 found 89269
[95373.574208] BTRFS (device sdd1): parent transid verify failed on
5568935395328 wanted 70315 found 89269
[95373.574483] [ cut here ]
[95373.574542] kernel BUG at fs/btrfs/relocation.c:242!
[95373.574601] invalid opcode:  [#1] SMP
[95373.574661] Modules linked in: ipt_MASQUERADE
nf_nat_masquerade_ipv4 

Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-12 Thread Tomasz Chmielewski

On 2014-12-12 22:36, Robert White wrote:


In another thread [that was discussing SMART] you talked about
replacing a drive and then needing to do some patching-up of the
result because of drive failures. Is this the same filesystem where
that happened?


Nope, it was on a different server.

--
Tomasz Chmielewski
http://www.sslrack.com



Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-12 Thread Robert White

On 12/12/2014 01:46 PM, Tomasz Chmielewski wrote:

On 2014-12-12 22:36, Robert White wrote:


In another thread [that was discussing SMART] you talked about
replacing a drive and then needing to do some patching-up of the
result because of drive failures. Is this the same filesystem where
that happened?


Nope, it was on a different server.



okay, so how did the btrfsck turn out?





Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-12 Thread Tomasz Chmielewski

On 2014-12-12 23:34, Robert White wrote:

On 12/12/2014 01:46 PM, Tomasz Chmielewski wrote:

On 2014-12-12 22:36, Robert White wrote:


In another thread [that was discussing SMART] you talked about
replacing a drive and then needing to do some patching-up of the
result because of drive failures. Is this the same filesystem where
that happened?


Nope, it was on a different server.



okay, so how did the btrfsck turn out?


# time btrfsck /dev/sdc1 > /root/btrfsck.log

real    22m0.140s
user    0m3.090s
sys     0m6.120s

root@bkp010 /usr/src/btrfs-progs # echo $?
1

# cat /root/btrfsck.log
root item for root 8681, current bytenr 5568935395328, current gen 
70315, current level 2, new bytenr 5569014104064, new gen 70316, new 
level 2

Found 1 roots with an outdated root item.
Please run a filesystem check with the option --repair to fix them.


Now, I'm a bit afraid to run --repair - as far as I remember, some time 
ago it used to do all sorts of weird things except the actual repair.
Is it better nowadays? I'm using the latest clone from 
git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git



--
Tomasz Chmielewski
http://www.sslrack.com



Re: 3.18.0: kernel BUG at fs/btrfs/relocation.c:242!

2014-12-12 Thread Robert White

On 12/12/2014 02:46 PM, Tomasz Chmielewski wrote:

On 2014-12-12 23:34, Robert White wrote:

On 12/12/2014 01:46 PM, Tomasz Chmielewski wrote:

On 2014-12-12 22:36, Robert White wrote:


In another thread [that was discussing SMART] you talked about
replacing a drive and then needing to do some patching-up of the
result because of drive failures. Is this the same filesystem where
that happened?


Nope, it was on a different server.



okay, so how did the btrfsck turn out?


# time btrfsck /dev/sdc1 /root/btrfsck.log

real    22m0.140s
user    0m3.090s
sys     0m6.120s

root@bkp010 /usr/src/btrfs-progs # echo $?
1

# cat /root/btrfsck.log
root item for root 8681, current bytenr 5568935395328, current gen
70315, current level 2, new bytenr 5569014104064, new gen 70316, new
level 2
Found 1 roots with an outdated root item.
Please run a filesystem check with the option --repair to fix them.


Now, I'm a bit afraid to run --repair - as far as I remember, some time
ago, it used to do all weird things except the actual repair.
Is it better nowadays? I'm using latest clone from
git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git




I don't have the history to answer this definitively, but I don't think 
you have a choice. Nothing else is going to touch that error.


I have not seen any "oh my god, btrfsck just ate my filesystem" errors 
since I joined the list -- but I am a relative newcomer.


I know that you, of course, as a contentious and well-traveled system 
administrator, already have a current backup since you are doing storage 
maintenance... right? 8-)

