no way to
merge them. Further, I'm pretty sure Btrfs still has no check for
this, and will corrupt itself if you mount the volume rw (with all
devices present, i.e. not degraded). I think there are patches for
this (?) but in any case I don't think they've been merged either.
So the bottom line is that the sysadmin has to handhold a Btrfs raid1.
It really can't be used for unattended access.
--
Chris Murphy
corruption visible. And that's not as effective if it's
possible to get such corruption in the course of a crash or power
failure of some kind. It might be useful to ask on the linux-integrity
list.
http://vger.kernel.org/vger-lists.html#linux-integrity
--
Chris Murphy
And this prevents automatic
repair from happening, since it prevents the device from reporting a
discrete read + sector value error, and therefore the problem gets
masked behind link resets.
--
Chris Murphy
On Fri, Jan 25, 2019 at 7:43 PM Dennis Katsonis wrote:
>
> On 1/25/19 4:22 AM, Chris Murphy wrote:
> > On Thu, Jan 24, 2019 at 3:40 AM Dennis K wrote:
> >>
> >> The fact is, this thread is the first time I've seen explicitly written
> >> that paren
been changed after it was received
I'm still really not following where your confusion stems from, and
therefore I'm not sure what needs fixing other than the items I've
already mentioned - which itself at least would have stopped you in
your tracks, to go dig deeper or ask questions before arriving at the
understandably confusing results you were getting.
--
Chris Murphy
> disk? I need to figure out what can be failing before I try another
> recovery.
I think it's specifically storage stack related. I think you'd have
more varied and weird problems if it were memory corruption, but
that's speculation on my part.
I'd honestly simplify the layout and not use bcache at all, only use
Btrfs directly on the whole drives, although I think it's reasonably
simple to use dmcrypt if needed/desired. But it's still better for
troubleshooting to make the storage stack as simple as possible.
Without more debugging information from all the layers, it's hard to
tell which layer to blame without just using the big stick called
process of elimination.
Maybe Qu has some ideas based on the call trace though - I can't parse it.
--
Chris Murphy
On Tue, Jan 22, 2019 at 10:57 AM Andrei Borzenkov wrote:
>
> > 22.01.2019 9:28, Chris Murphy wrote:
> > On Mon, Jan 21, 2019 at 11:00 PM Remi Gauvin wrote:
> >>
> >> On 2019-01-21 11:54 p.m., Chris Murphy wrote:
> >> #
> >>>
> >>> I
On Mon, Jan 21, 2019 at 11:00 PM Remi Gauvin wrote:
>
> On 2019-01-21 11:54 p.m., Chris Murphy wrote:
> #
> >
> > I expect the last command to fail because 1.ro1 is not the parent of
> > 2.ro2. The command completes, and 2.ro2 is on the destination, and at
> >
On Mon, Jan 21, 2019 at 3:23 PM Chris Murphy wrote:
> If has a UUID of 54321, I expect that must have
> Parent UUID of 54321, or the send command should fail.
OK I think the following is a reproducible bug.
# btrfs sub create 1
# btrfs sub create 2
# touch 1/one
# touch 2/two
# btr
On Mon, Jan 21, 2019 at 3:23 PM Chris Murphy wrote:
> I literally never use the 'property set' feature to set ro and unset
> ro because I think it's dangerous.
Also, there is a distinction between 'btrfs snapshot -r' and 'btrfs
property set ro true' -
to account for the user inadvertently trying
to do an incremental send with a subvolume that does not have a
matching parent UUID to the -p specified parent subvolume's UUID. I
thought there was a check for this but I'm virtually certain I've run
into this problem myself with no warning and yes it's an unworkable
result at destination.
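As a sketch of the flow such a check should enforce (subvolume paths here are hypothetical):

```shell
# Bootstrap: full send of a read-only snapshot
btrfs subvolume snapshot -r /mnt/vol /mnt/vol/snap1
btrfs send /mnt/vol/snap1 | btrfs receive /backup

# Incremental: snap2 must share snap1 as its parent (matching Parent UUID),
# otherwise 'btrfs send -p' ought to fail rather than produce a bad result
btrfs subvolume snapshot -r /mnt/vol /mnt/vol/snap2
btrfs send -p /mnt/vol/snap1 /mnt/vol/snap2 | btrfs receive /backup
```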
--
Chris Murphy
Yep. It comes up from time to time, it's discussed in the archives. I
suspect if someone comes up with a btrfs-progs patch to add a warning
note under the block group profiles grid (note 4 perhaps) I suspect
it'd get accepted.
--
Chris Murphy
Depends on perspective, if you don't care where they go but you care
about being able to add a single arbitrary sized drive or two or
three, you can grow a Btrfs raid10 volume. You can't do that with
conventional raid10.
--
Chris Murphy
would eventually complete successfully
and in the proper order. That it didn't, suggests a bug. The problem
is where. Btrfs bug? Some other kernel bug? Hardware, including
firmware, bug?
--
Chris Murphy
specific root addresses using btrfs check -b
working from the top of the list (the highest generation number) and
work down. But for starters just the first two commands above might
reveal a clue.
--
Chris Murphy
't change.
In your case you're better off with raid0'ing the two drives in each
enclosure (whether it's a feature of the enclosure or doing it with
mdadm or LVM). And then using Btrfs raid1 on top of the resulting
virtual block devices. Or do mdadm/LVM raid10, and format it Btrfs. Or
yeah, use ZFS.
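A minimal sketch of that layout, assuming one enclosure holds /dev/sdb+/dev/sdc and the other /dev/sdd+/dev/sde (device names are illustrative only):

```shell
# One mdadm raid0 array per enclosure
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde

# Btrfs raid1 (data and metadata) across the two virtual block devices
mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1
```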
--
Chris Murphy
we tell them OK just start over with a new file
system. It would be better if there's some additional advice to give
them to try and find out what caused the corruption to begin with,
rather than just start over and maybe run into the same problem again.
--
Chris Murphy
On Tue, Jan 15, 2019 at 5:41 PM Chris Murphy wrote:
>
> The relevant error messages are:
>
> unable to find ref byte
> errno=-2 No such entry
>
> Somehow a reference byte has been corrupted and inserted into multiple
> locations in the tree and it's not repairable: i
mean there aren't rare
transient problems.
Chris Murphy
or try to infer a correction for a bad leaf, but it's not
certain how badly damaged that leaf is yet. You can output it
'btrfs insp dump-t -b 44832 '
and remove file names before posting it; this might help a dev sort
out what the problem is.
--
Chris Murphy
> > IO to finish. This seems like a problem in
> > the storage layer, i.e IOs being stuck. Check your dmesg for any error.
> There are no IO errors in dmesg. Also, I never had any problems with
> this disk, SMART reports nothing, and also btrfs dev stats and btrfs
> scrub say everything's ok.
What do you get for:
mount | grep btrfs
btrfs insp dump-s -f /dev/sda8
I ran in this same configuration for a long time, maybe 5 months, and
never ran into this problem. But it was with a much older kernel,
perhaps circa 4.8 era.
--
Chris Murphy
On Thu, Jan 3, 2019 at 6:32 PM Qu Wenruo wrote:
>
>
>
> On 2019/1/4 9:15 AM, Chris Murphy wrote:
> > If you use btrfs-image -ss option, there won't be any sensitive
> > information included. Files are hashed. Some short name files or dirs
> > can't be hashed
worse, is really
expensive. That's a bad user experience that reasonably turns into a
lost user. And in order to fix bugs and make Btrfs better, we need
more and good bug reports, not less.
So... yeah. I'm not sure if there is more tracing or debugging
information that's needed by default in Btrfs, so we can have a better
chance of understanding these corruptions when they occur? Or what. We
can't expect people to leave integrity checking always on, it's too
expensive.
--
Chris Murphy
The Btrfs check is failing.
btrfs-image -c9 -t4 -ss /path/to/fileoutput.image
That is usually around 1/2 the size of file system metadata. It
contains no data and filenames will be hashed.
--
Chris Murphy
is
greater than either 'btrfs fi sh' "devid size" or dev_item.total_bytes
found in the super for that device.
> I should note that every device in this particular Array/Pool is not using
> partitions; I'm using
> btrfs directly on the device. Perhaps this is why btrfs is handling it this
> way? I wonder how it
> behaves on a partitioned FileSystem.
It behaves the same. Whether partitioned or not, Btrfs sees it as a
block device.
--
Chris Murphy
On Thu, Dec 27, 2018 at 1:06 PM Chris Murphy wrote:
>
> On Thu, Dec 27, 2018 at 12:14 AM Duncan <1i5t5.dun...@cox.net> wrote:
> >
> > Chris Murphy posted on Wed, 26 Dec 2018 17:36:19 -0700 as excerpted:
> >
> >
> > > I'm not really following thi
On Thu, Dec 27, 2018 at 12:14 AM Duncan <1i5t5.dun...@cox.net> wrote:
>
> Chris Murphy posted on Wed, 26 Dec 2018 17:36:19 -0700 as excerpted:
>
>
> > I'm not really following this. An fs resize is implied by any device
> > add, remove or replace comma
Understanding free space, using the original tools
4.7.2 Understanding free space, using the new tools
"unallocated" and "allocated" are specific terms that refer to block
groups (chunks). Space that's unallocated has no block groups, space
that's allocated is reserved for
your problem could be discard/TRIM related,
but somehow I kinda doubt it. Usually such bugs show up with entire
block ranges being wiped out when they shouldn't be (either with zeros
or returning corrupt data). In the meantime, drop it for all file
systems. And also check to make sure the SSD has the latest firmware
version.
--
Chris Murphy
mount options? Defaults? Anything custom like discard,
commit=, notreelog? Any non-default mount options themselves would not
be the cause of the problem, but might suggest partial ideas for what
might have happened.
--
Chris Murphy
to see 'btrfs insp dump-s -f ' and see if there's a
log tree. And then also the output from 'btrfs check --mode=lowmem
' which is also read-only, don't use --repair unless a dev
recommends it.
--
Chris Murphy
On Wed, Dec 12, 2018 at 12:26 AM Stephen R. van den Berg wrote:
>
> Chris Murphy wrote:
> >Also, what scheduler are you using? And do you get different results
> >with a different one (better or worse)?
>
> I'm using CFQ, and I don't think I ever tried a dif
Also, what scheduler are you using? And do you get different results
with a different one (better or worse)?
Chris Murphy
the hang.
I see this is btrfs-receive workload, so I wouldn't guess it's
subvolume lock contention unless the contention is happening with a
single shared parent subvolume into which all the receive subvolumes
are going (e.g. subvol id 5). I'm not sure how to alleviate it.
Chris Murphy
to the list,
hopefully a developer will get around to looking at it.
It is safe to try:
mount -o ro,norecovery,usebackuproot /device/ /mnt/
If that works, I suggest updating your backup while it's still
possible in the meantime.
--
Chris Murphy
t to properly ask the drive for SCT ERC status. Simplest way to
know is do 'smartctl -x' on one drive, assuming they're all the same
basic make/model other than size.
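For example, with smartctl (drive name assumed):

```shell
# Report SCT ERC support and current read/write recovery timeouts
smartctl -l scterc /dev/sda

# If supported, cap error recovery at 7 seconds (units are deciseconds)
smartctl -l scterc,70,70 /dev/sda
```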
--
Chris Murphy
out a complete dmesg; but errno=-5 IO failure is
pretty much some kind of hardware problem in my experience. I haven't
seen it be a bug.
--
Chris Murphy
ot run check --repair, until you get some feedback from a
developer.
The thing I'd like to see is
# btrfs rescue super-recover -v /anydevice/
# btrfs insp dump-s -f /anydevice/
First command will tell us if all the supers are the same and valid
across all devices. And the second one, hopefully it's
contributes to lost RAID all the time. And arguably
it leads to unnecessary data loss in even the single device
desktop/laptop use case as well.
Chris Murphy
by how much for each? That's a major
reduction in writes, and suggests it might be possible for further
optimization, to help mitigate the wandering trees impact.
--
Chris Murphy
's LVM pool of two drives
(linear/concat) with XFS, or if you go with Btrfs -dsingle -mraid1
(also basically a concat) doesn't really matter, but I'd get whatever
you can off the drive. I expect avoiding a rebuild in some form or
another is very wishful thinking and not very likely.
The more changes made to the file system, whether repair attempts or
other writes, the lower the chance of recovery.
--
Chris Murphy
the fstrim.timer which by default runs fstrim.service once a
week (which in turn issues fstrim, I think on all mounted volumes.)
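To check whether that timer is active (systemd commands; fstrim.service typically runs something close to the last line):

```shell
systemctl list-timers fstrim.timer   # next/last trigger times
systemctl enable --now fstrim.timer  # turn on weekly trims if not already

fstrim --all --verbose               # trim all mounted, trim-capable filesystems
```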
I am a bit more concerned about the read errors you had that were
being corrected automatically? The corruption suggests a firmware bug
related to trim. I'd check the affected SSD firmware revision and
consider updating it (only after a backup, it's plausible the firmware
update is not guaranteed to be data safe). Does the volume use DUP or
raid1 metadata? I'm not sure how it's correcting for these problems
otherwise.
--
Chris Murphy
e cameras writing out in? It matters if this is a
continuous appending format, or if it's writing them out as individual
JPEG files, one per frame, or whatever. What rate, what size, and any
other concurrent operations, etc.
--
Chris Murphy
container. :-D Avoid container misery by
having a workflow that expects containers to be transient disposable
objects.
--
Chris Murphy
ere is no single remaining drive that contains
all the missing copies, they're distributed. Which means you've got a
very good chance in a 2 drive failure of losing two copies of either
metadata or data or both. While I'm not certain it's 100% not
survivable, the real gotcha is it's possible maybe even likely that
it'll mount and seem to work fine but as soon as it runs into two
missing bg's, it'll face plant.
--
Chris Murphy
Also, since you don't have any snapshots, you could also find this
conventionally:
# du -sh /*
Chris Murphy
o subvolid=5 /mnt
cd /mnt
btrfs fi du -s *
Maybe that will help reveal where it's hiding. It's possible btrfs fi
du does not cross bind mounts. I know the Total column does include
amounts in nested subvolumes.
--
Chris Murphy
mmcblk errors). Mounting with both ro and
nologreplay will ensure no writes are needed, allowing the mount to
succeed. of course any changes that are in the log tree will be
missing so recent transactions may be unrecoverable but so far I've
had good luck recovering from broken SD cards this
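The rescue mount described is, roughly (device path assumed):

```shell
# Read-only, no log replay: nothing is written to the card
mount -o ro,nologreplay /dev/mmcblk0p3 /mnt
```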
s parameters but
> it doesn't change anything nor trying btrf check in single user mode.
>
> Where is my missing 30 GB?
--
Chris Murphy
discard
mount option for most use cases as it too aggressively discards very
recently stale Btrfs metadata and can make recovery from crashes
harder).
There is a trim bug that causes FITRIM to only get applied to
unallocated space on older file systems, that have been balanced such
that block group logical addresses are outside the physical address
space of the device which prevents the free space inside of such block
groups to be passed over for FITRIM. Looks like this will be fixed in
kernel 4.20/5.0
--
Chris Murphy
l, is to boot a current Fedora or Arch live or install media,
mount the Btrfs and try to read the problem files and see if the
problem still happens. I can't even begin to estimate the tens of
thousands of line changes since kernel 4.9.
What profile are you using for this Btrfs? Is this a raid56? What do
you get for 'btrfs fi us ' ?
--
Chris Murphy
your use case with mostly reads, and probably you also don't care
about write performance, you could consider mounting with notreelog.
This will drop the use of the treelog which is used to improve
performance on operations that use fsync. With this option,
transactions calling fsync() fall back to sync() so it's safer but
slower.
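That is, assuming the device and mountpoint:

```shell
# Disable the fsync tree log for this mount
mount -o notreelog /dev/sdb1 /mnt
```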
--
Chris Murphy
n to
and see if there are any kernel errors. You could recursively copy
files from a directory to /dev/null and then check kernel messages for
any errors. So long as metadata is DUP, there is a good chance a bad
copy of metadata can be automatically fixed up with a good copy. If
there's only single copy of metadata, or both copies get corrupt, then
it's difficult. Usually recovery of data is possible, but depending on
what's damaged, repair might not be possible.
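One way to force everything to be read, assuming the mountpoint; any csum failure will show up in the kernel log:

```shell
# Read every file, discarding the data
find /mnt/data -type f -exec cat {} + > /dev/null

# Then check the kernel log for csum or read errors
dmesg | grep -i btrfs
```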
--
Chris Murphy
om seed to sprout, and that the
sprout can be unmounted.
--
Chris Murphy
"bug" (or more of a limitation)
if the guest is using cache=none on the block device?
Anton what virtual machine tech are you using? qemu/kvm managed with
virt-manager? The configuration affects host behavior; but the
negative effect manifests inside the guest as corruption. If I
remember correctly.
--
Chris Murphy
On Tue, Oct 16, 2018 at 2:13 AM, Anand Jain wrote:
>
>
> On 10/14/2018 06:28 AM, Chris Murphy wrote:
>>
>> Is it practical and desirable to make Btrfs based OS installation
>> images reproducible? Or is Btrfs simply too complex and
>> non-deterministic? [1]
On Mon, Oct 15, 2018 at 3:26 PM, Anton Shepelev wrote:
> Chris Murphy to Anton Shepelev:
>
>> > How can I track down the origin of this mount point:
>> >
>> > /dev/sda2 on /home/hana type btrfs
>> > (rw,relatime,space_cache,subvolid=259,subvol=/@/.sna
or Ubuntu and you're using Timeshift?
Maybe it'll show up in the journal if you add boot parameter
'systemd.log_level=debug' and reboot; then use 'journalctl -b | grep
mount' and it should show all logged instances of mount
events: systemd, udisks2, maybe others?
--
Chris Murphy
On Mon, Oct 15, 2018 at 6:29 AM, Austin S. Hemmelgarn
wrote:
> On 2018-10-13 18:28, Chris Murphy wrote:
>> The end result is creating two Btrfs volumes would yield image files
>> with matching hashes.
>
> So in other words, you care about matching the block layout _exactly_.
y it's a "btrfs seed device 2.0" idea. But Btrfs is so
complicated it's maybe too much work, hence the question.
--
Chris Murphy
the use of -T (similar to make_ext4) to set all timestamps to
this value, and configurable uuid's for everything that uses uuids,
and whatever other constraints are necessary.
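Some of this exists already; for instance the filesystem UUID can be pinned at mkfs time, while the -T style timestamp control is, as far as I know, still hypothetical for Btrfs:

```shell
# Fixed fs UUID for reproducibility (device and UUID value are placeholders)
mkfs.btrfs -U 12345678-1234-5678-9abc-123456789abc /dev/sdb1
```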
--
Chris Murphy
On Sat, Oct 13, 2018 at 4:28 PM, Chris Murphy wrote:
> Is it practical and desirable to make Btrfs based OS installation
> images reproducible? Or is Btrfs simply too complex and
> non-deterministic? [1]
>
> The main three problems with Btrfs right now for reproducibility are:
ter
integrity checking.
[1] problems of reproducible system images
https://reproducible-builds.org/docs/system-images/
[2] purpose and motivation for reproducible builds
https://reproducible-builds.org/
[3] who is involved?
https://reproducible-builds.org/who/#Qubes%20OS
--
Chris Murphy
What version of btrfs-progs?
[103780.279436] ---[ end trace 7470f1b607c73b6c ]---
[103780.285841] BTRFS warning (device mmcblk0p3):
cleanup_transaction:1847: Aborting unused transaction(No space left).
[103780.289891] BTRFS info (device mmcblk0p3): delayed_refs has NO entry
--
Chris Murphy
On Wed, Oct 10, 2018 at 10:00 PM, Chris Murphy wrote:
> On Wed, Oct 10, 2018 at 9:07 PM, Larkin Lowrey
> wrote:
>> On 10/10/2018 10:51 PM, Chris Murphy wrote:
>>>
>>> On Wed, Oct 10, 2018 at 8:12 PM, Larkin Lowrey
>>> wrote:
>>>>
On Wed, Oct 10, 2018 at 9:07 PM, Larkin Lowrey
wrote:
> On 10/10/2018 10:51 PM, Chris Murphy wrote:
>>
>> On Wed, Oct 10, 2018 at 8:12 PM, Larkin Lowrey
>> wrote:
>>>
>>> On 10/10/2018 7:55 PM, Hans van Kranenburg wrote:
>>>>
>>>> On
On Wed, Oct 10, 2018 at 8:12 PM, Larkin Lowrey
wrote:
> On 10/10/2018 7:55 PM, Hans van Kranenburg wrote:
>>
>> On 10/10/2018 07:44 PM, Chris Murphy wrote:
>>>
>>>
>>> I'm pretty sure you have to umount, and then clear the space_cache
>>>
' ?
I thought the kernel code will not mount a Btrfs if the first super is
not present or valid (checksum match)?
--
Chris Murphy
problems definitely start before then but as it
is we have nothing really to go on.
--
Chris Murphy
. Hands down.
b. Is it freezing on the rebuild? Or something else?
c. I think the devs would like to see the output from btrfs-progs
v4.17.1, 'btrfs check --mode=lowmem' and see if it finds anything, in
particular something not related to free space cache.
Rebuilding either version of space cache requires successfully reading
(and parsing) the extent tree.
--
Chris Murphy
On Tue, Oct 9, 2018 at 11:25 AM, Andrei Borzenkov wrote:
> 09.10.2018 18:52, Chris Murphy wrote:
>>> In this case is root/big_file and snapshot/big_file still share the same
>>> data?
>>
>> You'll be left with three files. /big_file and root/big_file wil
have shared
extents with /big_file - or deduplicate.
--
Chris Murphy
70.004723586 seconds time elapsed
[chris@flap ~]$
Seems like a lot of activity for just a few transactions, but what
really caught my eye here is the qgroup reporting for a file system
that has never had qgroups enabled. Is it expected?
Chris Murphy
d report a discrete error message which Btrfs can do something
about, rather than do a SATA link reset in which case Btrfs can't do
anything about it).
--
Chris Murphy
Metadata as raid56 shows a lot more problem reports than metadata
raid1, so there's something goofy going on in those cases. I'm not
sure how well understood they are. But other people don't have
problems with it.
It's worth looking through the archives about some things. Btrfs
raid56 isn't exactly perfectly COW, there is read-modify-write code
that means there can be overwrites. I vaguely recall that it's COW in
the logical layer, but the physical writes can end up being RMW or not
for sure COW.
--
Chris Murphy
this question but maybe
> someone of the devs can point me to the right list?
>
> I cannot get kdump to work. The crashkernel is loaded and everything is
> setup for it afaict. I asked a question on this over at stackexchange but no
> answer yet.
> https://unix.stackexchange.com/questions/469838/linux-kdump-does-not-boot-second-kernel-when-kernel-is-crashing
>
> So i did a little digging and added some debug printk() statements to see
> whats going on and it seems that panic() is never called. maybe the second
> stack trace is the reason?
> Screenshot is here: https://t-5.eu/owncloud/index.php/s/OegsikXo4VFLTJN
>
> Could someone please tell me where I can report this problem and get some
> help on this topic?
Try kexec mailing list. They handle kdump.
http://lists.infradead.org/mailman/listinfo/kexec
--
Chris Murphy
Adding fsdevel@, linux-ext4, and btrfs@ (which has a separate subject
on this same issue)
On Wed, Sep 19, 2018 at 7:45 PM, Dave Chinner wrote:
>On Wed, Sep 19, 2018 at 10:23:38AM -0600, Chris Murphy wrote:
>> Fedora 29 has a new feature to test if boot+startup fails, so the
>>
On Mon, Sep 17, 2018 at 9:44 PM, Chris Murphy wrote:
> https://btrfs.wiki.kernel.org/index.php/FAQ#Does_grub_support_btrfs.3F
>
> Does anyone know if this is still a problem on Btrfs if grubenv has
> xattr +C set? In which case it should be possible to overwrite and
> there'
>> going to
>> recompute parity and write to multiple devices? Eek!
>
> Recompute the parity should not be a big deal. Updating all the (b)trees
> would be a too complex goal.
I think it's just asking for trouble. Sometimes the best answer ends
up being no, no and definitely no.
--
Chris Murphy
On Tue, Sep 18, 2018 at 1:01 PM, Andrei Borzenkov wrote:
> 18.09.2018 21:57, Chris Murphy wrote:
>> On Tue, Sep 18, 2018 at 12:16 PM, Andrei Borzenkov
>> wrote:
>>> 18.09.2018 08:37, Chris Murphy пишет:
>>
>>>> The patches aren't upstrea
r users.
So for those distros that support Secure Boot, in practice you're
stuck with the behavior of their prebuilt GRUB binary that goes on the
ESP.
--
Chris Murphy
On Tue, Sep 18, 2018 at 12:16 PM, Andrei Borzenkov wrote:
> 18.09.2018 08:37, Chris Murphy wrote:
>> The patches aren't upstream yet? Will they be?
>>
>
> I do not know. Personally I think much easier is to make grub location
> independent of /boot, allowing
On Tue, Sep 18, 2018 at 11:15 AM, Goffredo Baroncelli
wrote:
> On 18/09/2018 06.21, Chris Murphy wrote:
>> b. The bootloader code, would have to have sophisticated enough Btrfs
>> knowledge to know if the grubenv has been reflinked or snapshot,
>> because even if +C, i
On Mon, Sep 17, 2018 at 11:24 PM, Andrei Borzenkov wrote:
> 18.09.2018 07:21, Chris Murphy wrote:
>> On Mon, Sep 17, 2018 at 9:44 PM, Chris Murphy
>> wrote:
>>> https://btrfs.wiki.kernel.org/index.php/FAQ#Does_grub_support_btrfs.3F
>>>
>>> Does anyone k
On Mon, Sep 17, 2018 at 9:44 PM, Chris Murphy wrote:
> https://btrfs.wiki.kernel.org/index.php/FAQ#Does_grub_support_btrfs.3F
>
> Does anyone know if this is still a problem on Btrfs if grubenv has
> xattr +C set? In which case it should be possible to overwrite and
> there'
ecious for, effectively out of tree
code, to be making modifications to the file system, outside of the
file system.
--
Chris Murphy
kernels, you pick a distro that's doing that
work. And right now it's openSUSE and SUSE that have the most Btrfs
developers supporting 4.9 and 4.14 kernels and Btrfs. Most of those
users are getting distro support, I don't often see SUSE users on
here.
OpenZFS is a different strategy because they're using out of tree
code. So you can run older kernels, and compile the current openzfs
code base against your older kernel. In effect you're using an older
distro kernel, but with new file system code base supported by that
upstream.
--
Chris Murphy
> > ioctl for device
>>
>> That's a bug in older btrfs-progs. It's been fixed, but I'm not sure
>> what version, maybe by 4.14?
>
> Sounds about right -- my version is 4.7.3.
It's not dangerous to use it (maybe --repair is more dangerous but
don't use it without advice first, no matter version). You just don't
get new features and bug fixes. It's also not dangerous to use
something much newer, again if the user space tools are very new and
the kernel is old, you just don't get certain features.
--
Chris Murphy
l, problems are reported by
the kernel. So we need kernel messages, user space messages aren't
enough.
Anyway, good luck with openzfs, cool project.
--
Chris Murphy
> I'm trying:
>
> btrfs subvol create /bkp/backup-subvol
> cp -prv --reflink=always /bkp/backup/* /bkp/backup-subvol/
Yeah that will take a lot of writes that are not necessary, now that
you see backup is a subvolume already. If you want a copy of it, just
snapshot it.
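i.e. something like this, using the paths from the quoted commands:

```shell
# Instant, shares all extents; no data is rewritten
btrfs subvolume snapshot /bkp/backup /bkp/backup-snapshot
```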
--
Chris Murphy
a long time because all the metadata is fully read,
modified (new inodes) and written out.
But either way it should work.
--
Chris Murphy
This is a per
subvolume mount time option, so if you're using the subvol= or
subvolid= mount options, you need to set noatime every time; once per
file system isn't enough.
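For example, in /etc/fstab (UUID is a placeholder):

```
UUID=<fs-uuid>  /      btrfs  subvol=root,noatime  0 0
UUID=<fs-uuid>  /home  btrfs  subvol=home,noatime  0 0
```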
--
Chris Murphy
(resend to all)
On Thu, Sep 13, 2018 at 9:44 AM, Nikolay Borisov wrote:
>
>
> On 13.09.2018 18:30, Chris Murphy wrote:
>> This is the 2nd or 3rd thread containing hanging btrfs send, with
>> kernel 4.18.x. The subject of one is "btrfs send hung in pipe_wait"
>&
The old system is 4.19.0-0.rc3.git2.1, which
translates to git 54eda9df17f3.
Chris Murphy
0:00:00 elapsed,
> 253106 items checked)
> [6/7] checking root refs done with fs roots in lowmem mode, skipping
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 708354711552 bytes used, no error found
> total csum bytes: 689206904
> total tree bytes: 2423865344
> total fs tree bytes: 1542914048
> total extent tree bytes: 129843200
> btree space waste bytes: 299191292
> file data blocks allocated: 31709967417344
> referenced 928531877888
OK good to know.
--
Chris Murphy
an the original. It is slow, however.
--
Chris Murphy
source
(send) devices, which means two different Btrfs volumes.
All I can say is you need to keep changing things up, process of
elimination. Rather tedious. Maybe you could try downloading a Fedora
28 ISO, make a boot stick out of it, and try to reproduce with the
same drives. At least that's an easy way to isolate the OS from the
equation.
--
Chris Murphy
> basically becomes unuseable.
What kernel? Latest stable is 4.18.6. but I want to make sure that's
what you're using, someone else has reported btrfs send problems in
another thread with 4.18.5 that sound similar.
--
Chris Murphy
kernel? Either 4.14 or 4.17 or
both. The send code is mainly in the kernel, whereas the receive code is
mainly in user space tools, for this testing you don't need to
downgrade user space tools. If there's a bug here, I expect it's
kernel.
--
Chris Murphy