Marc Joliet posted on Tue, 15 Jan 2019 23:40:18 +0100 as excerpted:
> On Tuesday, 15 January 2019, 09:33:40 CET, Duncan wrote:
>> Marc Joliet posted on Mon, 14 Jan 2019 12:35:05 +0100 as excerpted:
>> > On Monday, 14 January 2019, 06:49:58 CET, Duncan wrote:
Marc Joliet posted on Mon, 14 Jan 2019 12:35:05 +0100 as excerpted:
> On Monday, 14 January 2019, 06:49:58 CET, Duncan wrote:
> [...]
>> Unless you have a known reason not to[1], running noatime with btrfs
>> instead of the kernel-default relatime is strongly recommended,
because the read access did an atime update and
turned what otherwise wouldn't be a write operation at all into one!
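(For reference, a minimal sketch of what that looks like in /etc/fstab
-- the UUID and mountpoint here are placeholders, not from the thread:)

  # noatime suppresses access-time updates entirely
  UUID=<your-fs-uuid>  /home  btrfs  defaults,noatime  0 0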
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
't know for sure and
didn't have a replaced and as yet unresized-filesystem device to check,
so we haven't actually verified whether it displays correctly or not yet.
Thus the request for the btrfs device usage output, to verify all that
for both your case and the previous similar case.
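(For anyone following along, the query in question is simply this, with
the mountpoint a placeholder:)

  btrfs device usage /mountpoint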
Jesse Emeth posted on Sun, 30 Dec 2018 16:58:12 +0800 as excerpted:
> Hi Duncan
>
> The backup is irrelevant in this case. I have a backup of this
> particular problem.
> I've had BTRFS on my OS system blow up several times.
> There are several snapshots of this within t
going today.
One of them has a 5/Reallocated-Sector-Count raw value of 17, still
100% on the cooked value; the other says 0 raw / 253 cooked. (For many
values including this one, a cooked value of 253 means entirely clean;
with a single "event" it drops to 100%, and it goes from there based on
calculated percentage.)
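(A sketch of how to pull those attributes yourself, assuming
smartmontools is installed and with the device node a placeholder:)

  smartctl -A /dev/sdX | grep -i -e reallocated -e pending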
Duncan posted on Sun, 30 Dec 2018 04:11:20 +0000 as excerpted:
> Adrian Bastholm posted on Sat, 29 Dec 2018 23:22:46 +0100 as excerpted:
>
>> Hello all,
>> Is it possible to undelete files on BTRFS ? I just deleted a bunch of
>> folders and would like to restore them if
**IMMEDIATELY**, because every write reduces your
chance at recovering any of the deleted files.
(More in another reply, but I want to get this sent with the above ASAP.)
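(The usual tool for this sort of recovery attempt is btrfs restore, run
against the unmounted device -- a sketch, with device and destination
paths as placeholders; always restore to a *different* filesystem:)

  btrfs restore -D /dev/sdX /tmp/x      # -D: dry-run, list candidates only
  btrfs restore /dev/sdX /mnt/other-fs/recovered/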
or
even just ordinary unvalidated backups you can compare against, and
you're worried about the possibility of undiscovered corruption due to
the restore, and/or you were using btrfs in part /because/ of its built-
in checksum verification, it could be worth doing that verification run
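(That verification run being a scrub -- a sketch, mountpoint
placeholder:)

  btrfs scrub start -B /mnt   # -B: foreground, print stats on completion
  btrfs scrub status /mnt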
Arguably, if
btrfs filesystem usage is to report it at all, it should be under a
separate (additional) line, presumably device slack, if that's what the
device usage version does with that line.
---
[1] Quote paraphrases a famous US political/legal quote from some years
ago... OT as to the merits
reporting -- maybe it
has that separate line tho I doubt it, but if not does it count it or
not? But that wasn't posted and presumably the query wasn't run while
in the still-unresized state, and I guess it's a bit late now to get it...
devices, which do tend to
have problems, tho some hardware is fine.
with just the tools and documentation available from your emergency boot
media? Untested backup == no backup, or at best, backup still in
process!)
the
good device being used to fill in for and (attempt to, if the bad device
is actively getting worse it might be a losing battle) repair any
metadata damage on the bad device, thus giving you a far better chance of
saving the filesystem as a whole.
Adam Borowski posted on Sun, 04 Nov 2018 20:55:30 +0100 as excerpted:
> On Sun, Nov 04, 2018 at 06:29:06PM +0000, Duncan wrote:
>> So do consider adding noatime to your mount options if you haven't done
>> so already. AFAIK, the only /semi-common/ app that actually uses
>
mount normally, or
will, but then locks up when you try to access something. It's far less
risky than a normal writable mount, and at minimum it provides you the
additional test data of whether it worked or not, plus if it does, a
chance to access the data and make sure your backups are
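(A sketch of that sort of cautious read-only attempt, device and
mountpoint placeholders; usebackuproot asks btrfs to fall back to an
older tree root if the current one is damaged:)

  mount -o ro /dev/sdX /mnt
  mount -o ro,usebackuproot /dev/sdX /mnt   # if the plain ro mount fails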
cally for the btrfs
replace case when it's actually a different device afterward anyway.
Apparently, it doesn't even do /that/ automatically yet. Keep that in
mind if you replace that device.)
and been the force behind the
relatively recent (4.16-ish) changes to the ssd mount option's allocation
strategy. He'd be the one to talk to if you're considering diving into
btrfs' on-disk allocation code, etc.
ack (with writeback of the correct
version) to the other copy if the first copy read fails checksum
verification, with the much better optimized mdraid0 performance. So it
stands to reason that the same recommendation would apply to raid0 --
just do single-mode btrfs on mdraid0, for better performance.
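(A sketch of that layering, device names as placeholders; -m dup keeps
two metadata copies even with single-mode data:)

  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX /dev/sdY
  mkfs.btrfs -d single -m dup /dev/md0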
Perhaps the global reserve size could be bumped up on such
large filesystems, but let's see if the more realistic operations-reserve
calculations can fix things, first, as arguably that shouldn't be
necessary once the calculations aren't so arbitrarily wild.
due to physical device loss if the
disks/ssds themselves went bad, can never be a big deal, because the
maximum value of the data in question is always strictly limited to that
of the point at which having a backup is more important than the time/
trouble/resources you save(d) by not having one.
write
hole to worry about), if it's not cost-prohibitive for the amount of data
you need to store. But for people on a really tight budget or who are
storing double-digit TB of data or more, I can understand why they prefer
raid5, and I do think raid5 is stable enough for data now, as long
h:
>
> /sbin/btrfs
> /usr/bin/btrfs-subvolume-show
> /usr/bin/btrfs-subvolume-list
I did get you wrong (and had even understood the separately named
binaries from an earlier post, too, but forgot).
Thanks. =:^)
but it works.
But in that scheme /bin, /sbin, /usr/bin and /usr/sbin are all the same
dir, so only one executable of a particular name can exist therein.
, you might want to reconsider nocow
on btrfs raid1, since nocow defeats checksumming and thus scrub, which
verifies checksums, simply skips it, and if the two copies get out of
sync for some reason...
space and make it writable gets upstreamed, I really hope
there's a build-time configure option to disable the feature, because IMO
grub doesn't /need/ to save state at that point, and allowing it to do so
is effectively needlessly playing a risky Russian Roulette game with my
storage.
n, as well, so you'd want at least btrfs-progs 4.9 on your rescue
media, for now, and 4.14, coming up, since when the new kernel goes LTS
that'll displace 4.9 and 4.14 will then be the second-back LTS.
es without deleting snapshots, thus avoiding any
of the maintenance-scaling issues that are the big limitation, and have
it work just fine.
OTOH, if your use-case is a bit more conventional, with more
maintenance to have to worry about scaling, capping to 100 snapshots
remains a reasonable recommendation, and if you need quotas as well and
can't afford to disable them even temporarily for a balance, you may find
under 50 snapshots to be your maintenance pain tolerance threshold.
to be the highly technical and case-optimizer crowds, too.
Everyone else will probably just use the defaults and not even be aware
of the tradeoffs they're making by doing so, as is already the case on
mdraid and zfs.
---
[1] As I'm no longer running either mdraid or parity-raid, I'
before I'd really consider it stable enough to recommend, but given the
historically much longer than predicted development and stabilization
times for raid56 already, it could just as easily end up double that, 4-5
years out, too.
But raid56 logging mode for write-hole mitigation is indeed ac
module and thus can't be removed,
the above doesn't work and a reboot is necessary. Thus the need for
those patches you mentioned.
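(For reference, the reload trick referred to is along these lines -- a
sketch assuming btrfs is built as a module and nothing still has it
open:)

  umount /mnt
  modprobe -r btrfs   # fails if btrfs is built in, or still in use
  modprobe btrfs
  btrfs device scan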
tests on that one. I fully understand that tying up the
thing running tests on it for days straight may not be viable.
a snapshot of the working system at the time I took the backup, so
it's not a limited recovery boot at all, it has the same access to tools,
manpages, net, X/plasma, browsers, etc, that my normal system does,
because it /is/ my normal system from whenever I took the backup.
and you're only trying to make booting
the btrfs raid1 rootfs degraded /possible/ for recovery purposes, go
right ahead! That's what btrfs raid1 is for, after all. But if you were
planning on mounting degraded (semi-)routinely, please do reconsider,
because it's just not ready for that.
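(For the recovery case, that's simply -- device and mountpoint
placeholders:)

  mount -o degraded /dev/sdX /mnt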
patterns each
time for a -w, but it might be worthwhile to try it on an ssd you're just
trying to salvage, forcing it to swap out any bad sectors it encounters
in the process.
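(A sketch; -w is the destructive write-mode test, so only on a device
whose contents you've already written off:)

  badblocks -wsv /dev/sdX   # -s: show progress, -v: verbose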
ly btrfs precedent for this in the form of the
executable built as fsck.btrfs, which does nothing (successfully) but
possibly print a message referring people to btrfs check, if run in
interactive mode.
problems running degraded raid1
operationally can bring, tho I never figured out for sure whether btrfs
was smart enough to eventually pick up the other devices, after the scan
before bringing other btrfs online or not, but either way it was a risk I
wasn't willing to take.)
raid1 root still require an
initr*? It'd be /so/ nice to be able to supply the appropriate
rootflags=device=...,device=... and actually have it work so I didn't
need the initr* any longer!
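(The hoped-for kernel command line being something like this sketch,
with placeholder devices:)

  root=/dev/sda2 rootfstype=btrfs rootflags=device=/dev/sda2,device=/dev/sdb2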
> implementation at the VFS layer, with a single kernel interface as
> well as a single user space interface, regardless of the file system.
> Additional file system specific quota features can of course have their
> own tools, but all of this re-invention of the wheel for basic directory
> quotas is
of success, than diving further into
the data recovery morass, with ever more limited chances of success.
Live by that sort of policy from now on, and the results of the next
failure, whether it be hardware, software, or wetware (another fat-
fingering, again, this is coming from someone, me, who has ha
tion to deal with a separate
non-btrfs, ext4 or whatever, and in that case, at least here, I'd
strongly recommend you do just that, avoiding the nocow that I honestly
see as a compromise best left to those that really need it because they
aren't prepared to deal with the hassle
B might wreak.
If they're not SMR then carry on! =:^)
problem for you with it, that I've simply not run into
since whatever killed the filecaps here, because I don't use the
lockscreen.
But if I start using the lockscreen again and it fails, I know one not-so-
intuitive thing to check, now. =:^)
instead of SUID/SGID (on gentoo it'd be iputils' filecaps and
possibly caps USE flags controlling this for ping), and also that btrfs
send/receive did have a recent bugfix related to the extended-attributes
normally used to record filecaps, so the symptoms match the bug and
that's probably it.
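(Easy enough to check and repair -- a sketch; the binary path and exact
capability may differ by distro:)

  getcap /bin/ping                  # expect something like cap_net_raw=ep
  setcap cap_net_raw+ep /bin/ping   # reapply if it came back empty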
firmware
knows it's clean and can use it at its convenience), so the ssd can use
that extra room to do its wear-leveling, and don't do trim/discard at all.
FWIW I actually do both of these here, leaving significant space on the
device unpartitioned, and enabling that systemd fstrim timer job.
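(On a systemd distro that timer is just -- a sketch:)

  systemctl enable --now fstrim.timer
  systemctl list-timers fstrim.timer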
Duncan posted on Wed, 18 Jul 2018 07:20:09 +0000 as excerpted:
>> As implemented in BTRFS, raid1 doesn't have striping.
>
> The argument is that because there's only two copies, on multi-device
> btrfs raid1 with 4+ devices of equal size so chunk allocations tend to
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
> On 07/17/2018 11:12 PM, Duncan wrote:
>> Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
>> excerpted:
>>
>>> On 07/15/2018 04:37 PM, waxhead wrote:
>>
> The key of striping is
> that every 64k, the data are stored on a different disk
As someone else pointed out, md/lvm-raid10 already work like this. What
btrfs calls raid10 is somewhat different, but btrfs raid1 pretty much
works this way except with huge (gig size) chunks.
Andrei Borzenkov posted on Fri, 06 Jul 2018 07:28:48 +0300 as excerpted:
> On 03.07.2018 10:15, Duncan wrote:
>> Andrei Borzenkov posted on Tue, 03 Jul 2018 07:25:14 +0300 as
>> excerpted:
>>
>>> On 02.07.2018 21:35, Austin S. Hemmelgarn wrote:
>>>> them (tri
left plugged
in doesn't seem to matter, even choosing different boot media from the
bios doesn't seem to matter by the time the kernel runs (I'm less sure
about grub).
ry, again,
you'll potentially be left without any old roots for the usebackuproot
mount option to try to fall back to, should it actually be necessary.
big media files being on a different
filesystem that's mostly read-only, so less at risk and needing less
frequent backups. The tiny boot and large updates (distro repo, sources,
ccache) are also separate, and mounted only for boot maintenance or
updates.
they'd do after reboot or a umount/
remount cycle for a file stored in tmpfs. And if they didn't have even a
stable working copy let alone a backup... well, much like that file in
tmpfs, what did they expect? They *really* defined that data as of no
more than trivial value, didn't
k) on bugs and if they're lucky, a day a week on what they were
supposed to be focused on, which is what we were seeing for awhile.
Plus the tools to do the debugging, etc, are far more mature now, another
reason bugs should hopefully take less time now.
Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
excerpted:
> On 2018-06-24 16:22, Goffredo Baroncelli wrote:
>> On 06/23/2018 07:11 AM, Duncan wrote:
>>> waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
>>>
>>>> Accor
becomes stable enough for them to integrate (parts of?) it as
existing demonstrated-stable technology.
The other difference, AFAIK, is that stratis is specifically a
corporation making it a/the main money product, whereas btrfs was always
something the btrfs devs used at their employers (oracle, facebook...).
Gandalf Corvotempesta posted on Wed, 20 Jun 2018 11:15:03 +0200 as
excerpted:
> On Wed, 20 Jun 2018 at 10:34, Duncan <1i5t5.dun...@cox.net>
> wrote:
>> Parity-raid is certainly nice, but mandatory, especially when there's
>> already other parity
rebalancing system on small filesystems being an increase of the
system chunk size from 8 MiB original mkfs.btrfs size to 32 MiB... only a
few KiB used! =:^(
at least one system, regardless of whether
that's administering just their own single personal system, or thousands
of systems across dozens of locations in some large corporation or
government institution.
[3] Raid56 mode reliability implications: For raid56 data, this isn't
/that/ big
form of mirroring and you've
already said you're not doing that in any form, but just in case, because
this is a rather obscure trap people using lvm could find themselves in,
without a clue as to the danger, and the resulting symptoms could be
rather hard to troubleshoot if this pos
indeed the same bug, anything even half modern should have it
fixed.
doesn't mean that we /refuse/ to support 4.4, we still
try, but it's out of primary focus now and in many cases, should you
have problems, the first recommendation is going to be try something
newer and see if the problem goes away or presents differently. Or
as mentioned, check with your distro if i
r repeated writes
between snapshots).
But if you're disabling checksumming anyway, nocow's likely the way to go.
incompat flag and refuse to mount at all on kernels
that don't have the required compression support.
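(You can inspect those flags yourself -- a sketch, device placeholder:)

  btrfs inspect-internal dump-super /dev/sdX | grep -i compat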
>>> and 'rust-btrfs' do the same as duperemove and simply report the error
>>> (as they should).
> --D
>
>> } else if (file->f_path.mnt != dst_file->f_path.mnt) {
>> info->status = -EXDEV;
>>
the in-place-stripe-read-modify-write atomicity
issue. I'll leave the parity-checksumming debate (now that I know it at
least remains debatable) to those more knowledgeable than myself, but in
addition to what I've learned of it, I've definitely learned that I can't
proper
Gandalf Corvotempesta posted on Wed, 02 May 2018 19:25:41 +0000 as
excerpted:
> On 05/02/2018 03:47 AM, Duncan wrote:
>> Meanwhile, have you looked at zfs? Perhaps they have something like
>> that?
>
> Yes, i've looked at ZFS and I'm using it on some servers but
now it's pie-in-the-sky or still new
enough it'd be 5-7 years before it can be used in practice, as well. But
assuming it's a viable project, presumably it would get support if device-
mapper did/has.
The stratis article I saw (apparently part 2 in a series):
https://opensource.com/ar
essable numbers for
sanity checking.
something newer after booting the rescue image, if you
have to.
---
[1] In general: I think one regular btrfs dev works with SuSE, and one
non-dev but well-practiced support list regular is most familiar with
Fedora, tho of course Fedora doesn't tend to be /too/ outdated.
That'd be probably 3 years out to stability at the earliest.
There's a cleaner alternative but it'd be /much/ farther out as it'd
involve a pretty heavy rewrite along with the long testing and bugfix
cycle that implies, so ~10 years out if ever, for that. And there's a
David Sterba posted on Wed, 25 Apr 2018 13:02:34 +0200 as excerpted:
> On Wed, Apr 25, 2018 at 06:31:20AM +0000, Duncan wrote:
>> David Sterba posted on Tue, 24 Apr 2018 13:58:57 +0200 as excerpted:
>>
>> > btrfs-progs version 4.16.1 have been released. This
upgrade
that's specifically billed as a "bugfix release".
(Further support for btrfs being "still stabilizing, not yet fully stable
and mature." But development mode habits need to end /sometime/, if
stability is indeed a goal.)
the filesystem, remains entirely chunk-level unallocated and thus free to
allocate to data or metadata chunks as needed.
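(Those numbers coming from the usage query -- mountpoint placeholder:)

  btrfs filesystem usage /mnt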
Meanwhile, data chunk allocation is 3 GiB total per device, of which 2.24
GiB is used. Again, that's healthy, as data chunks are nominally 1 GiB
so that's probably
600P)
830 model is a few years old, IIRC (I have 850s, and I think I saw 860s
out in something I read probably on this list, but am not sure of it). I
suspect the filesystem was created with an old enough btrfs-tools that
the default nodesize was still 4K, either due to older distro, or simpl
not. =:^(
But it's definitely a tradeoff to consider once you /do/ know it!
Presumably that'll be fixed at some point, but not being a dev nor
knowing how complex the fix might be, I won't venture a guess as to when,
or whether it'd be considered stable-kernel backport material or not
ly the latest
two release series in both normal and LTS are best supported, so with
4.15 out and 4.16 nearing release, that's the latest 4.15 stable release
now, or 4.14, to be 4.16 and 4.15 at 4.16 release, or on the LTS track
the previously mentioned 4.14 and 4.9 series, tho at a year old
kernel configured for only the
hardware and dracut initr* modules I need, and a fatter generic initr*
and kernel modules would likely need more space, but your show output
says it's only using 342 MiB for data, so as I said your 1 GiB for ~500
MiB usable in dup mode should be quite reasonable.
userspace. If there's any doubt, stay a version or
two behind the latest release and watch for reports of problems with the
latest, but certainly, with 4.15 userspace out and no serious reports of
new damage from 4.14 userspace, the latter should now be a reasonably
safe upgrade.
ing a
snapshot and recovering to it when the upgrade went bad would have worked
just as well, having the independent filesystem backup on a different set
of physical devices means I don't have to worry about loss of the
filesystem or physical devices containing it, either! =:^)
or missing device IDs, so the situation, while not ideal,
isn't yet /entirely/ out of hand, either, because a successful guess
based on available information should be possible without /too/ many
attempts.
btrfs at both levels but nocowing the image files on
the host btrfs is a possibility as well, but nocow on btrfs has enough
limits and caveats that I consider it a second-class "really should have
used a different filesystem for this but didn't want to bother setting up
a dedicated one".
inherited by any newly created files/
subdirs within it.
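(A sketch of setting that up -- the path is a placeholder, and the
attribute only takes reliable effect on files created after it's set on
the empty dir:)

  mkdir /srv/vm-images
  chattr +C /srv/vm-images    # new files in here inherit nocow
  lsattr -d /srv/vm-images    # should show the 'C' attribute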
[2] Many apps that preallocate by default have an option to turn
preallocation off.
sing here apparently forced autodefrag to rewrite the
entire file, and a recent "bugfix" changed that so it's more in line with
the normal autodefrag behavior. I rather preferred the old behavior,
especially since I'm on fast ssd and all my large files tend to be write-
once no-r
, to
properly free that space.
---
[1] du: Because its purpose is different. du's primary purpose is
telling you in detail what space files take up, per-file and per-
directory, without particular regard to usage on the filesystem itself.
df's focus, by contrast, is on the filesystem.
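(A sketch of the difference in practice, mountpoint placeholder --
btrfs adds a third view of its own:)

  du -sh /mnt/somedir        # per-file/per-directory accounting
  df -h /mnt                 # whole-filesystem view
  btrfs filesystem df /mnt   # btrfs' per-chunk-type breakdown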
e down
(again, assuming btrfs was using ssd mode), so the lingering effect could
still be creating problems on the 4.14 kernel servers for the moment.
doing raid1.)
If you really wanted you could do the same with -musage for metadata,
except that's not so bad, only 9 gig size, 3 gig used. But you could
free 5 gigs or so, if desired.
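(A sketch of the corresponding command -- mountpoint placeholder, and
the usage threshold is illustrative:)

  btrfs balance start -musage=30 /mnt   # compact metadata chunks <30% used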
That's assuming there's no problem. I see a followup indicating you're
seeing problems
the given numbers were for level 1 and 2, with level 0 not holding
anything, not levels 0 and 1. But that wouldn't jibe with your level 0
example, which I would assume could never happen if it couldn't hold even
a single entry.
only the 4.13 series and early 4.14-rcs and was fixed
by 4.14.0. The bug seemed to trigger most frequently when doing balances
or other major writes to the filesystem, on middle to large sized
filesystems. (My all under quarter-TB each btrfs didn't appear to be
affected.)
real-world reports I've
seen on-list:
12-14 TB individual drives?
While you /did/ say enterprise grade so this probably doesn't apply to
you, it might apply to others that will read this.
Be careful that you're not trying to use the "archive application"
targeted SMR drives.
Duncan posted on Fri, 02 Feb 2018 02:49:52 +0000 as excerpted:
> As CMurphy says, 4.11-ish is starting to be reasonable. But you're on
> the LTS kernel 4.14 series and userspace 4.14 was developed in parallel,
> so btrfs-progs-3.14 would be ideal.
Umm... obviously that should be btrfs-progs-4.14.
of
backups, it makes little sense to spend more than a trivial amount of
time trying to recover data from a messed up filesystem, especially given
that there's no guarantee you'll get it all back undamaged even if you
/do/ spend the time. It's often simpler and takes less time,
Andrei Borzenkov posted on Sun, 28 Jan 2018 11:06:06 +0300 as excerpted:
> On 27.01.2018 18:22, Duncan wrote:
>> Adam Borowski posted on Sat, 27 Jan 2018 14:26:41 +0100 as excerpted:
>>
>>> On Sat, Jan 27, 2018 at 12:06:19PM +0100, Tomasz Pala wrote:
>>>> On
All systemd has to do is leave the mount alone that the kernel has
already done, instead of insisting it knows what's going on better than
the kernel does, and immediately umounting it.
os, but this reminds me that at nearing
six years old the main system's aging too, so I better start thinking of
replacing it again...)
ein posted on Tue, 23 Jan 2018 09:38:13 +0100 as excerpted:
> On 01/22/2018 09:59 AM, Duncan wrote:
>>
>> And to tie up a loose end, xfs has somewhat different design principles
>> and may well not be particularly sensitive to the dirty_* settings,
>> while btrfs,
implement IO priorities.
But I know less about that stuff and it's googlable, should you decide to
try playing with it too. I know what the dirty_* stuff does from
personal experience. =:^)
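(For reference, the knobs in question as one might set them via
sysctl.d -- the values here are purely illustrative, not
recommendations:)

  # /etc/sysctl.d/99-writeback.conf
  vm.dirty_background_ratio = 5   # start background writeback at 5% of RAM
  vm.dirty_ratio = 10             # throttle writers at 10% of RAM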
And to tie up a loose end, xfs has somewhat different design principles
and may well not be particul
from the second copy when available,
consider a layered approach, with btrfs raid1 on top of a pair of mdraid0s
(or dmraid0s, or hardware raid0s).
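(A sketch of that layering with four placeholder devices:)

  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
  mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1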
on
ssds. That would appear to be precisely what you are seeing. =:^) If
that's the case, then arguably the option is misnamed and the ssd_spread
name may well at some point be deprecated in favor of something more
descriptive of its actual function and target devices. Purely my own
speculation.
s and the like to clean up.
And qgroups makes btrfs do much more work to track that as well, so as Qu
says, that'll make snapshot deletion take even longer, and you probably
want it disabled unless you actually need the feature for something
you're doing.
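(Which is simply -- mountpoint placeholder:)

  btrfs quota disable /mnt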