On Thu, 2016-12-08 at 13:41 -0700, Chris Murphy wrote:
> Pretty sure it will not dedupe extents that are referenced in a read
> only subvolume.
Oh... hm... well that would be quite a limitation, because as soon as
one has a snapshot of the full fs (which is probably not so unlikely) it
won't work
Hey.
I just wondered whether out-of-band/"offline" dedup is safe for general
use... https://btrfs.wiki.kernel.org/index.php/Status kinda implies so
(it mentions unspecified performance issues), but this again seems
already outdated (kernel 4.7)...
:-(
My intention was to use it with
On Tue, 2016-11-29 at 08:35 +0100, Adam Borowski wrote:
> I administer no real storage at this time, and got only 16 disks
> (plus a few
> disk-likes) to my name right now. Yet in a ~2 months span I've seen
> three
> cases of silent data corruption
I didn't mean to say we'd have no silent data
On Mon, 2016-11-28 at 16:48 -0500, Zygo Blaxell wrote:
> If a drive's
> embedded controller RAM fails, you get corruption on the majority of
> reads from a single disk, and most writes will be corrupted (even if
> they
> were not before).
Administrating a multi-PiB Tier-2 for the LHC Computing
On Mon, 2016-11-28 at 19:32 +0100, Goffredo Baroncelli wrote:
> I am assuming that a corruption is a quite rare event. So
> occasionally it could happens that a page is corrupted and the system
> corrects it. This shouldn't have an impact on the workloads.
Probably, but it still makes sense to
On Mon, 2016-11-28 at 19:45 +0100, Goffredo Baroncelli wrote:
> I am understanding that the status of RAID5/6 code is so badly
Just some random thought:
If the code for raid56 is really as bad as it's often claimed (I
haven't read it, to be honest) could it perhaps make sense to
consider to
On Mon, 2016-11-28 at 06:53 +0300, Andrei Borzenkov wrote:
> If you allow any write to filesystem before resuming from hibernation
> you risk corrupted filesystem. I strongly believe that "ro" must be
> really read-only
You're aware that "ro" already doesn't mean "no changes to the block
device"
On Sat, 2016-11-26 at 14:12 +0100, Goffredo Baroncelli wrote:
> I cant agree. If the filesystem is mounted read-only this behavior
> may be correct; bur in others cases I don't see any reason to not
> correct wrong data even in the read case. If your ram is unreliable
> you have big problem
On Mon, 2016-11-07 at 15:02 +0100, David Sterba wrote:
> I think adding a whole-file dedup mode to duperemove would be better
> (from user's POV) than writing a whole new tool
What would IMO be really good from a user's POV is if one of the
tools, deemed to be the "best", were added to the
Hey.
FYI:
Just got this call trace during a send/receive (with -p) between two
btrfs on 4.7.0.
Neither btrfs-send nor -receive showed an error, though, and they seem
to have completed successfully (at least a diff of the changes implied that).
Sep 19 20:24:38 heisenberg kernel: BTRFS info (device
On Mon, 2016-09-19 at 16:07 -0400, Chris Mason wrote:
> That's in the blockdev command (blockdev --setro /dev/xxx).
Well, I know that ;-) ... but I bet most end-users don't (just as most
end-users assume mount -r is truly ro)...
At least this is nowadays documented at the mount manpage... so in a
On Mon, 2016-09-19 at 13:18 -0400, Austin S. Hemmelgarn wrote:
> > > - even mounting a fs ro, may cause it to be changed
> >
> > This would go to the UseCases
> My same argument about the UUID issues applies here, just without
> the
> security aspect.
I personally could agree to have that
+1 for all your changes with the following comments in addition...
On Mon, 2016-09-19 at 17:27 +0200, David Sterba wrote:
> That's more like a usecase, thats out of the scope of the tabular
> overview. But we have an existing page UseCases that I'd like to
> transform to a more structured and
On Thu, 2016-09-15 at 14:20 -0400, Austin S. Hemmelgarn wrote:
> 3. Fsck should be needed only for un-mountable filesystems. Ideally,
> we
> should be handling things like Windows does. Preform slightly
> better
> checking when reading data, and if we see an error, flag the
> filesystem
> for
Hey.
As for the stability matrix...
In general:
- I think another column should be added, which tells when and for
which kernel version the feature-status of each row was
revised/updated the last time and especially by whom.
If a core dev makes a statement on a particular feature, this
On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote:
> This is an issue with any filesystem,
Not really... any other filesystem I know of (not sure about ZFS) keeps
working when there are UUID collisions... or at least it won't cause
arbitrary corruptions, which then in the end may even
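The hazard being discussed can be sketched with a toy model (hypothetical names; btrfs's real device scanning lives in the kernel and is more involved): a device registry keyed only by filesystem UUID lets a later-scanned device silently displace the one you meant.

```python
# Toy model (assumption: NOT btrfs's actual code) of why UUID collisions
# are dangerous: device registration keyed only by fsid means the
# last-scanned device wins, whatever it contains.
devices_by_fsid = {}

def scan_device(path: str, fsid: str) -> None:
    # a colliding fsid silently replaces the earlier registration
    devices_by_fsid[fsid] = path

scan_device("/dev/sda1", "f9d6-1234")
scan_device("/dev/sdb1", "f9d6-1234")  # clone/attacker with the same fsid

print(devices_by_fsid["f9d6-1234"])  # -> /dev/sdb1, not the original
```

The point of the sketch: nothing in the lookup key distinguishes the legitimate device from the colliding one.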
On Wed, 2016-09-07 at 07:58 -0400, Austin S. Hemmelgarn wrote:
> if you want proper security you should
> be
> using a real container system
Won't these probably use the same filesystems?
Cheers,
Chris.
smime.p7s
Description: S/MIME cryptographic signature
On Tue, 2016-09-06 at 18:20 +0100, Graham Cobb wrote:
> they know the UUID of the subvolume?
Unfortunately, btrfs seems to be pretty problematic when anyone knows
your UUIDs...
Look for my thread "attacking btrfs filesystems via UUID collisions?"
in the list archives.
From accidental corruptions
On Mon, 2016-09-05 at 09:27 +0200, David Sterba wrote:
> As others replied, it's a false positive. There's a fix on the way,
> once
> it's done I'll release 4.7.2.
Yeah... thanks again for confirming... and sorry that I've missed the
obvious earlier post :-/
Best wishes,
Chris.
On Sun, 2016-09-04 at 05:33 +, Paul Jones wrote:
> The errors are wrong. I nearly ruined my filesystem a few days ago by
> trying to repair similar errors, thankfully all seems ok.
> Check again with btrfs-progs 4.6.1 and see if the errors go away,
> mine did.
> See open bug
Hey.
I just did a btrfs check on my notebooks root fs, with:
$ uname -a
Linux heisenberg 4.7.0-1-amd64 #1 SMP Debian 4.7.2-1 (2016-08-28)
x86_64 GNU/Linux
$ btrfs --version
btrfs-progs v4.7.1
during:
checking extents
it found gazillions of these:
Incorrect local backref count on 1107980288
On Mon, 2016-08-29 at 16:25 +0800, Qu Wenruo wrote:
> Send will generate checksum for each command.
What does "command" mean here? Or, better said, how much data is
secured with one CRC32?
> For send stream, it's CRC32 for the whole command.
And this is verified then on the receiving end?
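For illustration, a minimal sketch of per-command checksumming as described in the quoted reply (assumption: the header layout here is simplified and invented; the real send stream has its own wire format, with the crc field zeroed during computation):

```python
import struct
import zlib

# Minimal sketch (not the real wire format): each send-stream "command"
# carries its own CRC32, so corruption is detected per command rather
# than per whole stream.
def make_command(cmd_type: int, payload: bytes) -> bytes:
    header = struct.pack("<IH", len(payload), cmd_type)
    crc = zlib.crc32(header + payload) & 0xFFFFFFFF
    return header + struct.pack("<I", crc) + payload

def verify_command(blob: bytes) -> bool:
    header, crc_raw, payload = blob[:6], blob[6:10], blob[10:]
    (crc,) = struct.unpack("<I", crc_raw)
    return crc == (zlib.crc32(header + payload) & 0xFFFFFFFF)

cmd = make_command(1, b"mkfile foo")
print(verify_command(cmd))                 # True
print(verify_command(cmd[:-1] + b"\x00"))  # False: single corrupted byte detected
```

So "how much data per CRC32" in this sketch is one command's header plus payload; the receiving side recomputes and compares.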
On Sun, 2016-08-28 at 22:19 +0200, Adam Borowski wrote:
> Transports over which you're likely to send a filesystem stream
> already
> protect against corruption.
Well... in some cases,... but not always... just consider a plain old
netcat...
> It'd still be nice to have something for those which
On Sun, 2016-08-28 at 11:35 -0600, Chris Murphy wrote:
> I don't see evidence of them in the btrfs send file, so I don't think
> csums are in the stream.
Hmm... isn't it kinda unfortunate not to make use of the information
that's already there?
IMO, to the extent this is possible, btrfs should
Hey.
I've often wondered:
When I do a send/receive, does the receiving side use the checksums
from the sending side (either by directly storing them or by comparing
them with calculated checksums and failing if they don't match after
the transfer)?
Cause that would effectively secure any
Hey.
$ uname -a
Linux heisenberg 4.6.0-1-amd64 #1 SMP Debian 4.6.4-1 (2016-07-18) x86_64
GNU/Linux
The following call trace happens, just because of a send/receive to a
read-only mounted btrfs. Isn't that a bit overkill and shouldn't one
rather get just a user space warning/error?
Aug 16
On Mon, 2016-06-27 at 07:35 +0300, Andrei Borzenkov wrote:
> The problem is that current implementation of RAID56 puts exactly CoW
> data at risk. I.e. writing new (copy of) data may suddenly make old
> (copy of) data inaccessible, even though it had been safely committed
> to
> disk and is now in
On Sun, 2016-06-26 at 15:33 -0700, ronnie sahlberg wrote:
> 1, a much more strongly worded warning in the wiki. Make sure there
> are no misunderstandings
> that they really should not use raid56 right now for new filesystems.
I doubt most end users can be assumed to read the wiki...
> 2,
On Sun, 2016-06-05 at 21:07 +, Hugo Mills wrote:
> The problem is that you can't guarantee consistency with
> nodatacow+checksums. If you have nodatacow, then data is overwritten,
> in place. If you do that, then you can't have a fully consistent
> checksum -- there are always race
On Sun, 2016-06-05 at 22:39 +0200, Henk Slager wrote:
> > So the point I'm trying to make:
> > People do probably not care so much whether their VM image/etc. is
> > COWed or not, snapshots/etc. still work with that,... but they may
> > likely care if the integrity feature is lost.
> > So IMHO,
On Sun, 2016-06-05 at 09:51 -0600, Chris Murphy wrote:
> Why is mdadm the reference point for terminology?
I haven't said it is... I just said that mdadm, the original paper, and
WP use it in the common/historic way.
And since all of these were there before btrfs, and in the case of
mdadm/MD "in" the
On Sun, 2016-06-05 at 09:36 -0600, Chris Murphy wrote:
> That's ridiculous. It isn't incorrect to refer to only 2 copies as
> raid1.
No, not if there are only two devices.
But obviously we're talking about how btrfs does RAID1, in which even
with n>2 devices there are only 2 copies - that's
On Sun, 2016-06-05 at 02:41 +0200, Brendan Hide wrote:
> The "questionable reason" is simply the fact that it is, now as well
> as
> at the time the features were added, the closest existing
> terminology
> that best describes what it does. Even now, it would be difficult on
> the
> spot
On Sat, 2016-06-04 at 13:13 -0600, Chris Murphy wrote:
> mdadm supports DDF.
Sure... it also supports IMSM,... so what? Neither of them are the
default for mdadm, nor does it change the used terminology :)
Cheers,
Chris.
On Sat, 2016-06-04 at 11:00 -0600, Chris Murphy wrote:
> SNIA's DDF 2.0 spec Rev 19
> page 18/19 shows 'RAID-1 Simple Mirroring" vs "RAID-1 Multi-
> Mirroring"
And DDF came how many years after the original RAID paper, when
everyone understood RAID1 as it was defined there? 1987 vs. ~2003 or so?
On Sat, 2016-06-04 at 00:22 +0200, Brendan Hide wrote:
> - RAID5/6 seems far from being stable or even usable,... not to
> > talk
> > about higher parity levels, whose earlier posted patches (e.g.
> > http://thread.gmane.org/gmane.linux.kernel/1654735) seem to have
> > been given up.
> I'm
On Fri, 2016-06-03 at 15:50 -0400, Austin S Hemmelgarn wrote:
> There's no point in trying to do higher parity levels if we can't get
> regular parity working correctly. Given the current state of things,
> it might be better to break even and just rewrite the whole parity
> raid thing from
Hey.
Does anyone know whether the write hole issues have been fixed already?
https://btrfs.wiki.kernel.org/index.php/RAID56 still mentions it.
Cheers,
Chris.
On Fri, 2016-06-03 at 13:42 -0500, Mitchell Fossen wrote:
> Thanks for pointing that out, so if I'm thinking correctly, with
> RAID1
> it's just that there is a copy of the data somewhere on some other
> drive.
>
> With RAID10, there's still only 1 other copy, but the entire
> "original"
> disk
On Fri, 2016-06-03 at 13:10 -0500, Mitchell Fossen wrote:
> Is there any caveats between RAID1 on all 6 vs RAID10?
Just to be safe: RAID1 in btrfs does not mean what RAID1 means in any
other RAID terminology.
The former keeps only two copies; the latter means full mirroring
across all devices.
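The difference can be illustrated with a rough capacity sketch (assumption: equal-sized devices; btrfs's real chunk allocator works per chunk and handles unequal devices differently):

```python
# Rough sketch: usable capacity with btrfs "raid1" (exactly 2 copies of
# each chunk, regardless of device count) vs. classic RAID1 (every
# device holds a full mirror). Sizes in GiB; equal devices assumed.
def btrfs_raid1_usable(sizes):
    # every chunk stored exactly twice -> about half the pooled space
    return sum(sizes) // 2

def classic_raid1_usable(sizes):
    # all n devices are full mirrors of one another
    return min(sizes)

six_disks = [4] * 6
print(btrfs_raid1_usable(six_disks))    # 12 GiB usable
print(classic_raid1_usable(six_disks))  # 4 GiB usable
```

With six 4 GiB devices the two interpretations differ by a factor of three, which is why the naming keeps coming up on this list.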
Hey..
Hm... so the overall btrfs state seems to be still pretty worrying,
doesn't it?
- RAID5/6 seems far from being stable or even usable,... not to talk
about higher parity levels, whose earlier posted patches (e.g.
http://thread.gmane.org/gmane.linux.kernel/1654735) seem to have
been
Hey.
I've lost track a bit recently, and the wiki changelog doesn't seem to
contain much about how things went on at the RAID5/6 front... so how're
things going?
Is it already more or less "productively" usable? What's still missing?
I guess there still aren't any administrative tools that e.g.
On Wed, 2016-05-11 at 21:50 +0200, Niccolò Belli wrote:
> What did happen?
Perhaps because defrag unfortunately breaks up any reflinks?
Cheers,
Chris.
On Tue, 2016-01-05 at 11:44 +0100, David Sterba wrote:
> We have a full 32 bit number space, so multiples of power of 2 are
> also
> possible if that makes sense.
Hmm that would make a maximum of 4GiB RAID chunks...
perhaps we should reserve some of the higher bits for a multiplier, in
case 4GiB
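A hypothetical encoding of that idea (the names and the exact bit split are mine, purely for illustration, not anything proposed on-list beyond the quoted sentence): keep 30 bits for the value and use the top 2 bits as a 1024**m unit selector, so sizes beyond 4 GiB stay representable in 32 bits.

```python
# Sketch of a 32-bit chunk-size field with multiplier bits reserved
# (hypothetical layout): top 2 bits select a unit of 1024**m bytes.
VALUE_BITS = 30
VALUE_MASK = (1 << VALUE_BITS) - 1

def encode_chunk_size(size: int) -> int:
    """Pack a chunk size into 32 bits, choosing the smallest usable unit."""
    for m in range(4):
        unit = 1024 ** m
        if size % unit == 0 and size // unit <= VALUE_MASK:
            return (m << VALUE_BITS) | (size // unit)
    raise ValueError("chunk size not representable")

def decode_chunk_size(field: int) -> int:
    m = field >> VALUE_BITS
    return (field & VALUE_MASK) * (1024 ** m)

for size in (64 * 1024, 1 << 32, 16 * 1024 ** 3):  # 64 KiB, 4 GiB, 16 GiB
    assert decode_chunk_size(encode_chunk_size(size)) == size
```

The trade-off is granularity: with m > 0 only multiples of the unit are representable, which matches the "multiples of power of 2" remark quoted above.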
On Tue, 2016-01-05 at 15:34 +, Duncan wrote:
> >What exactly was that bug in 4.1.1 mkfs and how would one notice
> > that
> > one suffers from it?
> > I created a number of personal filesystems that I use
> > "productively" and
> > I'm not 100% sure during which version I've created them... :/
On Sun, 2016-01-03 at 15:00 +, Duncan wrote:
> But now that I think about it, balance does read the chunk in ordered
> to
> rewrite its contents, and that read, like all reads, should normally
> be
> checksum verified
That was my idea :)
> (except of course in the case of nodatasum,
On Sun, 2016-01-03 at 09:37 +0800, Qu Wenruo wrote:
> And since you are making the stripe size configurable, then user is
> responsible for any too large or too small stripe size setting.
That raises the question of which RAID chunk sizes the kernel, and
respectively the userland tools, should allow
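A sketch of the kind of bounds check userland might apply (the limits below are assumptions for illustration, not btrfs's actual constraints):

```python
# Hypothetical validation of a user-supplied RAID chunk size: must be a
# power of two within assumed bounds (bounds are illustrative only).
MIN_CHUNK = 4 * 1024        # 4 KiB, assumed lower bound
MAX_CHUNK = 4 * 1024 ** 3   # 4 GiB, assumed upper bound

def chunk_size_valid(size: int) -> bool:
    return MIN_CHUNK <= size <= MAX_CHUNK and size & (size - 1) == 0

print(chunk_size_valid(64 * 1024))      # True
print(chunk_size_valid(3 * 1024 ** 2))  # False: not a power of two
```

Whatever the real bounds end up being, rejecting pathological values in the tools is cheaper than letting the kernel discover them later.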
On Fri, 2016-01-01 at 08:13 +, Duncan wrote:
> you can also try a read-only scrub
OT: I just wondered, would a balance include everything a scrub
includes (i.e. read+verify all data and rebuild any errors on different
devices / block copies)... of course in addition to also copying all
"good"
On Fri, 2015-12-25 at 08:06 +, Duncan wrote:
> I wasn't personally sure if 4.1 itself was affected or not, but the
> wiki
> says don't use 4.1.1 as it's broken with this bug, with the quick-fix
> in
> 4.1.2, so I /think/ 4.1 itself is fine. A scan with a current btrfs
> check should tell
On Thu, 2015-12-31 at 18:29 +, Filipe Manana wrote:
> As for fixing the (very) rare cases where we end up creating empty
> symlinks, it's not trivial to fix.
Would it be reasonable to have btrfs-check list such broken symlinks?
Cheers,
Chris.
On Tue, 2015-12-29 at 19:06 +0100, David Sterba wrote:
> > Both open of course many questions (how to deal with crashes,
> > etc.)...
> > maybe having a look at how mdadm handles similar problems could be
> > worth.
>
> The crash consistency should remain, other than that we'd have to
> enhance
On Wed, 2015-12-30 at 18:26 +, Duncan wrote:
> That should work. Cat the files to /dev/null and check dmesg. For
> single mode it should check the only copy. For raid1/10 or dup,
> running
> two checks, ensuring one is even-PID while the other is odd-PID,
> should
> work to check both
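The trick quoted above relies on btrfs (in kernels of that era) picking which raid1 mirror to read from by PID parity. A toy model of why one even-PID and one odd-PID reader together cover both copies:

```python
# Toy model of the even/odd-PID trick (assumption: mirror choice really
# is pid % num_copies, as it was in older kernels; not guaranteed today).
def mirror_chosen(pid: int, num_copies: int = 2) -> int:
    return pid % num_copies

# one even-PID reader plus one odd-PID reader hits both copies
readers = (1000, 1001)
covered = {mirror_chosen(pid) for pid in readers}
print(covered == {0, 1})  # True: both raid1 copies get read and checksum-verified
```

As noted later in the thread, this is a workaround users may come to rely on; if the mirror-selection policy changes, the trick silently stops covering both copies.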
On Wed, 2015-12-30 at 22:10 +0800, Qu Wenruo wrote:
> Or, just don't touch it until there is really enough user demand.
I definitely think that there is demand... as I've written previously,
when I did some benchmarking tests (though on MD and HW raid) then
depending on the RAID chunk size, one
On Wed, 2015-12-30 at 21:02 +, Duncan wrote:
> For something like that, it'd pretty much /have/ to be done as COW,
> at
> least at the chunk level, tho the address from the outside may stay
> the
> same. That's what balance already does, after all.
Ah... of course,... it would be basically
On Wed, 2015-12-30 at 18:39 +0100, David Sterba wrote:
> The closest would be to read the files and look for any reported
> errors.
Doesn't that fail for any multi-device setup, in which case btrfs reads
the blocks only from one device and, if that verifies, doesn't check
the other?
Cheers,
On Wed, 2015-12-30 at 20:57 +, Duncan wrote:
> Meanwhile, it's a pretty clever solution, I think. =:^)
Well, the problem with such workaround-solutions is... end-users get
used to them, rely on them, and suddenly they don't work anymore (which
the user probably wouldn't notice, though).
> I was
On Mon, 2015-12-28 at 21:23 -0500, Sanidhya Solanki wrote:
> From Documentation/filesystems/BTRFS.txt:
> Btrfs is under heavy development, and is not suitable for
> any uses other than benchmarking and review.
Well I guess now it's time for Duncan's usual "stable or not" talk
(@Duncan, I think by
On Tue, 2015-12-29 at 16:44 +, Duncan wrote:
> As I normally put it, btrfs is "definitely stabilizING, but not yet
> entirely stable and mature."
[snip snap]
And now... as a song please :D
I'd also take a medieval rhyme ;-)
Cheers,
Chris.
On Mon, 2015-12-28 at 07:24 -0500, Sanidhya Solanki wrote:
> An option to select the RAID Stripe size is made
> available in the BTRFS Filesystem, via an option
> in the BTRFS Config setup
Shouldn't that rather eventually become configurable per filesystem?
Cheers,
Chris.
On Mon, 2015-12-28 at 15:38 -0500, Sanidhya Solanki wrote:
> > Shouldn't that rather eventually become configurable per
> > filesystem?
> Don't know. It was in the BTRFS File todo list, hence the
> BTRFS-specific patch.
Uhm you misunderstood me =)
I meant, configurable per instance of a btrfs
On Mon, 2015-12-28 at 16:43 -0500, Sanidhya Solanki wrote:
> Apologies, it appears I did misunderstand. That should be possible,
> though slightly complicated. I will try to get that done.
It may get even more complicated if reshaping (i.e. conversion from
one chunk size to another) should get
On Mon, 2015-12-28 at 19:03 -0500, Sanidhya Solanki wrote:
> That sounds like an absolutely ghastly idea.
*G* and it probably is ;)
> Lots of potential for
> mistakes and potential data loss. I take up the offer to implement
> such a feature.
> Only question is should it be in-place
On Mon, 2015-12-28 at 18:08 -0600, Zach Fuller wrote:
> I ran "btrfs check --repair" again on the drive, and no "type
> mismatch
> with chunk" errors were returned.
You should always be very conservative in using --repair...
AFAIU, it *may* do more harm than good,... often the best idea is to
On Mon, 2015-12-28 at 20:31 -0500, Sanidhya Solanki wrote:
> What is your experience like about running a production system on
> what
> is essentially a beta product? Crashes?
What do you mean? btrfs? I'm not yet running it in production (there
was a subthread recently, where I've explained a bit
Hey Chris.
On Mon, 2015-12-28 at 19:29 -0700, Chris Murphy wrote:
> Unrelated. When the mkfs is new and nothing is on the fs, I get this:
>
> Free (estimated): 7.44GiB(min: 7.44GiB)
>
> I'm not certain at what point the fs starts to report 16.00EiB but
> it's not immediate.
I
Hey.
Just noted, mine says this:
>Start a scrub on all devices of the filesystem identified by
>or on a single . If a scrub is already running, the new one
>fails.
still not the text you quoted,... but there it is.
Anyway... it still contradicts the main description which implies that
a scrub
On Mon, 2015-12-28 at 02:51 +, Duncan wrote:
> 1) Btrfs very specifically and deliberately uses *lowercase* raidN
> in part to make that distinction, as the btrfs variants are chunk-
> level (and designed so that at some point in the future they can be
> subvolume and/or file level), not
On Sun, 2015-12-27 at 18:23 -0700, Chris Murphy wrote:
> I'd want scrub to immediately fail in a degraded case, because the
> higher workload added by the scrub itself could cause additional
> device failures sooner. And that would negatively impact the ability
> to get the array healthy again
On Mon, 2015-12-28 at 03:30 +, Duncan wrote:
> So how is it not the text I quoted?
Uhm... I just thought you meant that:
> * When you point scrub at a mountpoint, it scrubs all devices
> composing
> that filesystem.
to be the quote which I couldn't find after a quick cross reading...
Sorry
On Mon, 2015-12-28 at 01:58 +, Hugo Mills wrote:
> Isn't this an FAQ already? There is already a patch to rename the
> RAID modes. It's been sitting in the progs patch queue for about 2
> years, because none of the senior devs has acked it yet (since it's a
> big user-visible change).
On Sun, 2015-12-27 at 07:09 +, Duncan wrote:
> raid1 mode
I wonder when that reaches my pain threshold... and I submit a patch
that renames it "notreallyraid1" in all places ;-)
Cheers,
Chris.
On Sun, 2015-12-27 at 17:58 -0700, Chris Murphy wrote:
> I don't see a good use case for scrubbing a degraded array. First
> make
> the array healthy, then scrub.
As I've said, I basically agree... but *if* scrubbing a degraded fs
leads to even more errors (apart from the fact that you may lose
On Sun, 2015-12-27 at 11:29 -0700, Chris Murphy wrote:
> then the scrub request is effectively a
> scrub for a volume with a missing drive which you probably wouldn't
> ever do, you'd first replace the missing device.
While that's probably the normal workflow,... it should still work the
other
On Sun, 2015-12-27 at 07:22 +, Duncan wrote:
> I'd call that NOTABUG. As the btrfs-scrub manpage suggests:
>
> * When you point scrub at a mountpoint, it scrubs all devices
> composing
> that filesystem.
Uhm,.. mine doesn't contain this,... neither do those of the master or
devel branches
On Mon, 2015-12-28 at 02:27 +1100, Jiri Kanicky wrote:
> Thanks for the reply. Looks like I will have o use some newer
> distro.
As was already said... btrfs may even corrupt your filesystem if
colliding UUIDs are "seen".
At least to me it's currently unclear what "seen" exactly means...
On Sun, 2015-12-27 at 04:03 +0100, Christoph Anton Mitterer wrote:
> -WARNING: defragmenting with kernels up to 2.6.37 will unlink COW-ed
Perhaps someone can also check the above.
I was looking through the git history but couldn't find anything wrt
2.6.37...
The commits I've basically searched
Hey.
I've just noted the following behaviour:
# btrfs scrub start -Bdr /
scrub device /dev/sda2 (id 1) done
scrub started at Sun Dec 27 01:59:05 2015 and finished after 00:04:04
total bytes scrubbed: 29.39GiB with 0 errors
scrub device /dev/sdb2 (id 2) done
scrub started
On Thu, 2015-12-24 at 23:41 +, Duncan wrote:
> as the patch is in the userspace 4.3.1 you're running.
I don't think this is the case.
The commit was b08a740d7b797d870cbc3691b1291290d0815998 and AFAICT,
it's not included in any release yet.
4.3.1 was from Nov 16th, the above commit was from
Hey.
There was a patch for that issue.
Simply try that, and if it goes away, there's probably no further
reason to worry.
Cheers,
Chris.
On Mon, 2015-12-21 at 09:28 +, Filipe Manana wrote:
> Hum?
> How is that so? Snapshot-aware defrag was disabled almost 2 years
> ago,
> and that piece of code is used both by a "manual" defrag (ioctl) and
> by automatic defrag.
Thanks for clearing that up.
Could someone then please add an
On Tue, 2015-12-22 at 10:16 +0800, Qu Wenruo wrote:
> Current "recovery" mount option will only try to use backup root.
> However the word "recovery" is too generic and may be confusing for
> some
> users.
I would rather call it something like "restorebackuproot" or
"usebackuproot",... because
edu>
To: Christoph Anton Mitterer <cales...@scientia.net>, 808...@bugs.debian.org
Cc: Debian Bug Tracking System <sub...@bugs.debian.org>
Subject: Re: Bug#808265: e2fsprogs: support btrfs compression in filefrag
Date: Fri, 18 Dec 2015 18:25:21 -0500
On Fri, Dec 18, 2015 at 01:16:14A
On Fri, 2015-12-18 at 09:29 +0800, Qu Wenruo wrote:
> Given that nothing in the documentation implies that the block
> > device itself
> > must remain unchanged on a read-only mount, I don't see any problem
> > which
> > needs fixing. MS_RDONLY rejects user IO; that's all.
>
> And thanks for the
On Thu, 2015-12-17 at 03:25 +, Duncan wrote:
> So it's definitely _not_ something that reiserfsck would do in a
> "normal"
> fsck, only when doing "I'm desperate and don't have backups, go to
> the
> ends of the earth if necessary to recover what you can of my data,
> and
> yes, I
[I'm combining the messages again, since I feel a bit bad, when I write
so many mails to the list ;) ]
But from my side, feel free to split up as much as you want (perhaps
not single characters or so ;) )
On Thu, 2015-12-17 at 04:06 +, Duncan wrote:
> Just to mention here, that I said
On Thu, 2015-12-17 at 20:51 -0600, Eric Sandeen wrote:
> > -r, --read-only
> > Mount the filesystem read-only. A synonym is -o ro.
> >
> > Note that, depending on the filesystem type, state and
> > kernel behavior, the system may still write to the
> > device. For
> >
On Wed, 2015-12-09 at 16:36 +, Duncan wrote:
> But... as I've pointed out in other replies, in many cases including
> this
> specific one (bittorrent), applications have already had to develop
> their
> own integrity management features
Well let's move discussion upon that into the "dear
On Sun, 2015-12-13 at 07:10 +, Duncan wrote:
> > So you basically mean that ro snapshots won't have their atime
> > updated
> > even without noatime?
> > Well I guess that was anyway the recent behaviour of Linux
> > filesystems,
> > and only very old UNIX systems updated the atime even when
On Mon, 2015-12-14 at 10:51 +, Duncan wrote:
> > AFAIU, the one the get's fragmented then is the snapshot, right,
> > and the
> > "original" will stay in place where it was? (Which is of course
> > good,
> > because one probably marked it nodatacow, to avoid that
> > fragmentation
> > problem
On Thu, 2015-12-17 at 01:09 +, Duncan wrote:
> Well, "don't load the journal on mounting" is exactly what the option
> would do. The journal (aka log) of course has a slightly different
> meaning, it's only the fsync log, but loading it is exactly what the
> option would prevent, here.
On Tue, 2015-12-15 at 11:00 -0500, Austin S. Hemmelgarn wrote:
> > Well sure, I think we'de done most of this and have dedicated
> > controllers, at least of a quality that funding allows us ;-)
> > But regardless how much one tunes, and how good the hardware is. If
> > you'd then loose always a
On Wed, 2015-12-16 at 11:10 +, Duncan wrote:
> And noload doesn't have the namespace collision problem norecovery
> does
> on btrfs, so I'd strongly suggest using it, at least as an alias for
> whatever other btrfs-specific name we might choose.
but noload is, AFAIU, not what's desired
On Tue, 2015-12-15 at 08:54 -0500, Austin S. Hemmelgarn wrote:
> Except for one thing: Automobiles actually provide a measurable
> significant benefit to society. What specific benefit does embedding
> the filesystem UUID in the metadata actually provide?
I guess that's quite obvious.
You want
On Tue, 2015-12-15 at 14:18 +, Hugo Mills wrote:
> That one's easy to answer. It deals with a major issue that
> reiserfs had: if you have a filesystem with another filesystem image
> stored on it, reiserfsck could end up deciding that both the metadata
> blocks of the main filesystem *and*
On Tue, 2015-12-15 at 11:03 -0500, Austin S. Hemmelgarn wrote:
> May I propose the alternative option of adding a flag to tell mount
> to
> _only_ use the devices specified in the options?
That's one part of exactly what I propose since a few days :-P
(no one seems to read my mails ;-) )
Plus
On Wed, 2015-12-16 at 09:41 -0500, Chris Mason wrote:
> Hugo is right here. reiserfs had tools that would scan and entire
> block
> device for metadata blocks and try to reconstruct the filesystem
> based
> on what it found.
Creepy... at least when talking about a "normal" fsck... good that
btrfs
On Tue, 2015-12-15 at 14:42 +, Hugo Mills wrote:
> I would suggest trying to migrate to a state where detecting more
> than one device with the same UUID and devid is cause to prevent the
> FS from mounting, unless there's also a "mount_duplicates_yes_i_
>
On Wed, 2015-12-16 at 07:12 -0500, Austin S. Hemmelgarn wrote:
> I kind of agree with Christoph here. I don't think that noload
> should
> be the what we actually use, although I do think having it as an
> alias
> for whatever name we end up using would be a good thing.
No, because people would
On Tue, 2015-12-15 at 09:19 -0500, Austin S. Hemmelgarn wrote:
> Um, no you don't have direct physical access to the hardware with an
> ATM, at least, not unless you are going to take apart the cover and
> anything else in your way (and probably set off internal alarms).
Well access to the
On Wed, 2015-12-16 at 07:57 -0500, Austin S. Hemmelgarn wrote:
> No, because we should ease the transition from other filesystems to
> the
> greatest extent reasonably possible. It should be clearly documented
> as
> an alias for compatibility with ext{3,4}, and that it might go away
> in
>