On Thu, 6 Dec 2018 06:11:46 +
Robert White wrote:
> So it would be dog-slow, but it would be neat if BTRFS had a mount
> option to convert any TRIM command from above into the write of a zero,
> 0xFF, or trash block to the device below if that device doesn't support
> TRIM. Real TRIM
Hello,
To migrate my FS to a different physical disk, I added a new empty device
to the FS, then ran the remove operation on the original one.
Now my FS has only devid 2:
Label: 'p1' uuid: d886c190-b383-45ba-9272-9f00c6a10c50
Total devices 1 FS bytes used 36.63GiB
devid
On Thu, 22 Nov 2018 22:07:25 +0900
Tomasz Chmielewski wrote:
> Spot on!
>
> Removed "discard" from fstab and added "ssd", rebooted - no more
> btrfs-cleaner running.
Recently there has been a bugfix for TRIM in Btrfs:
btrfs: Ensure btrfs_trim_fs can trim the whole fs
On Thu, 15 Nov 2018 11:39:58 -0700
Juan Alberto Cirez wrote:
> Is BTRFS mature enough to be deployed on a production system to underpin
> the storage layer of a 16+ ipcameras-based NVR (or VMS if you prefer)?
What are you looking to gain from using Btrfs on an NVR system? It doesn't
sound like
On Sat, 10 Nov 2018 03:08:01 +0900
Tomasz Chmielewski wrote:
> After upgrading from kernel 4.16.1 to 4.19.1 and a clean restart, the fs
> no longer mounts:
Did you try rebooting back to 4.16.1 to see if it still mounts there?
--
With respect,
Roman
On Tue, 9 Oct 2018 09:52:00 -0600
Chris Murphy wrote:
> You'll be left with three files. /big_file and root/big_file will
> share extents, and snapshot/big_file will have its own extents. You'd
> need to copy with --reflink for snapshot/big_file to have shared
> extents with /big_file - or
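The `cp --reflink` behaviour described above can be sketched as follows (paths are hypothetical): `--reflink=auto` falls back to a plain copy on filesystems without reflink support, while `--reflink=always` fails instead, so you know for certain the extents are shared.

```shell
# Create a file, then clone it; on btrfs the clone shares extents
# with the original instead of duplicating the data.
echo "some data" > /tmp/big_file
cp --reflink=auto /tmp/big_file /tmp/big_file.clone
cmp /tmp/big_file /tmp/big_file.clone && echo "contents identical"
```

On btrfs, `filefrag -v` on both files will show the same physical extent offsets when the clone succeeded.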
On Fri, 14 Sep 2018 19:27:04 +0200
Rafael Jesús Alcántara Pérez wrote:
> BTRFS info (device sdc1): use lzo compression, level 0
> BTRFS warning (device sdc1): 'recovery' is deprecated, use
> 'usebackuproot' instead
> BTRFS info (device sdc1): trying to use backup root at mount time
>
On Fri, 17 Aug 2018 23:17:33 +0200
Martin Steigerwald wrote:
> > Do not consider SSD "compression" as a factor in any of your
> > calculations or planning. Modern controllers do not do it anymore,
> > the last ones that did are SandForce, and that's 2010 era stuff. You
> > can check for yourself
On Fri, 17 Aug 2018 14:28:25 +0200
Martin Steigerwald wrote:
> > First off, keep in mind that the SSD firmware doing compression only
> > really helps with wear-leveling. Doing it in the filesystem will help
> > not only with that, but will also give you more space to work with.
>
> While also
On Tue, 14 Aug 2018 16:41:11 +0300
Dmitrii Tcvetkov wrote:
> If usebackuproot doesn't help then filesystem is beyond repair and you
> should try to refresh your backups with "btrfs restore" and restore from
> them[1].
>
> [1]
>
Hello,
On two machines I have subvolumes where I back up other hosts' root filesystems
via rsync. These subvolumes have the +c attribute set on them.
During the backup, sometimes I get tons of messages like these in dmesg:
[Wed Jul 25 20:58:22 2018] BTRFS error (device dm-8): error inheriting props
On Mon, 2 Jul 2018 08:19:03 -0700
Marc MERLIN wrote:
> I actually have fewer snapshots than this per filesystem, but I backup
> more than 10 filesystems.
> If I used as many snapshots as you recommend, that would already be 230
> snapshots for 10 filesystems :)
(...once again me with my rsync
On Fri, 29 Jun 2018 00:22:10 -0700
Marc MERLIN wrote:
> On Fri, Jun 29, 2018 at 12:09:54PM +0500, Roman Mamedov wrote:
> > On Thu, 28 Jun 2018 23:59:03 -0700
> > Marc MERLIN wrote:
> >
> > > I don't waste a week recreating the many btrfs send/receive relationship
On Thu, 28 Jun 2018 23:59:03 -0700
Marc MERLIN wrote:
> I don't waste a week recreating the many btrfs send/receive relationships.
Consider not using send/receive, and switching to regular rsync instead.
Send/receive is very limiting and cumbersome, not least because of what you
described. And
On Mon, 14 May 2018 11:36:26 +0300
Nikolay Borisov wrote:
> So what made you have these expectations, is it codified somewhere
> (docs/man pages etc)? I'm fine with that semantics IF this is what
> people expect.
"Compression ...does not work for NOCOW files":
On Mon, 14 May 2018 11:10:34 +0300
Nikolay Borisov wrote:
> But if we have mounted the fs with FORCE_COMPRESS shouldn't we disregard
> the inode flags, presumably the admin knows what he is doing?
Please don't. Personally I always assumed chattr +C would prevent both CoW and
On Sat, 10 Mar 2018 16:50:22 +0100
Adam Borowski wrote:
> Since we're on a btrfs mailing list, if you use qemu, you really want
> sparse format:raw instead of qcow2 or preallocated raw. This also works
> great with TRIM.
Agreed, that's why I use RAW. QCOW2 would add a
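A sparse raw image of the kind mentioned above can be created with `truncate` (or `qemu-img create -f raw`); the file name here is hypothetical. The apparent size is the full 20 GiB, but no blocks are allocated until the guest writes, and TRIM from the guest can deallocate them again:

```shell
# Sparse raw image: full apparent size, (almost) zero allocated blocks.
truncate -s 20G /tmp/vm-disk.img
stat -c 'apparent: %s bytes, allocated: %b blocks' /tmp/vm-disk.img
```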
On Sat, 10 Mar 2018 15:19:05 +0100
Christoph Anton Mitterer wrote:
> TRIM/discard... not sure how far this is really a solution.
It is the solution in a great many usage scenarios; I don't know enough about
your particular one, though.
Note you can use it on HDDs too,
On Fri, 12 Jan 2018 17:49:38 + (GMT)
"Konstantin V. Gavrilenko" wrote:
> Hi list,
>
> just wondering whether it is possible to mount two subvolumes with different
> mount options, i.e.
>
> |
> |- /a defaults,compress-force=lza
You can use different
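A per-subvolume /etc/fstab sketch of what the question is after (device and subvolume names are assumptions on my part; note that historically many btrfs mount options, compression among them, were effectively filesystem-wide, so the kernel may end up applying one value to both mounts):

```
UUID=<fs-uuid>  /a  btrfs  defaults,subvol=a,compress-force=lzo  0 0
UUID=<fs-uuid>  /b  btrfs  defaults,subvol=b,noatime             0 0
```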
On Fri, 15 Dec 2017 01:39:03 +0100
Ian Kumlien wrote:
> Hi,
>
> Running a 4.14.3 kernel, this just happened, but there should have
> been another 20 gigs or so available.
>
> The filesystem seems fine after a reboot though
What are your mount options, and can you show
On Sat, 18 Nov 2017 02:08:46 +0100
Hans van Kranenburg wrote:
> It's using send + balance at the same time. There's something that makes
> btrfs explode when you do that.
>
> It's not new in 4.14, I have seen it in 4.7 and 4.9 also, various
> different explosions
On Thu, 16 Nov 2017 16:12:56 -0800
Marc MERLIN wrote:
> On Thu, Nov 16, 2017 at 11:32:33PM +0100, Holger Hoffstätte wrote:
> > Don't pop the champagne just yet, I just read that apparently 4.14 broke
> > bcache for some people [1]. Not sure how much that affects you, but it
On Tue, 14 Nov 2017 15:09:52 +0100
Klaus Agnoletti wrote:
> Hi Roman
>
> I almost understand :-) - however, I need a bit more information:
>
> How do I copy the image file to the 6TB without screwing the existing
> btrfs up when the fs is not mounted? Should I remove it
On Tue, 14 Nov 2017 10:36:22 +0200
Klaus Agnoletti wrote:
> Obviously, I want /dev/sdd emptied and deleted from the raid.
* Unmount the RAID0 FS
* copy the bad drive using `dd_rescue`[1] into a file on the 6TB drive
(noting how much of it is actually unreadable --
On Mon, 13 Nov 2017 22:39:44 -0500
Dave wrote:
> I have my live system on one block device and a backup snapshot of it
> on another block device. I am keeping them in sync with hourly rsync
> transfers.
>
> Here's how this system works in a little more detail:
>
> 1. I
On Tue, 14 Nov 2017 10:14:55 +0300
Marat Khalili wrote:
> Don't keep snapshots under rsync target, place them under ../snapshots
> (if snapper supports this):
> Or, specify them in --exclude and avoid using --delete-excluded.
Both are good suggestions, in my case each system does
On Wed, 1 Nov 2017 11:32:18 +0200
Nikolay Borisov wrote:
> Fallocating a file in btrfs goes through several stages. The one before
> actually
> inserting the fallocated extents is to create a qgroup reservation, covering
> the desired range. To this end there is a loop in
On Wed, 1 Nov 2017 01:00:08 -0400
Dave wrote:
> To reconcile those conflicting goals, the only idea I have come up
> with so far is to use btrfs send-receive to perform incremental
> backups as described here:
> https://btrfs.wiki.kernel.org/index.php/Incremental_Backup
On Thu, 26 Oct 2017 09:40:19 -0600
Cheyenne Wills wrote:
> Briefly when I upgraded a system from 4.0.5 kernel to 4.9.5 (and
> later) I'm seeing a blocked task timeout with heavy IO against a
> multi-lun btrfs filesystem. I've tried a 4.12.12 kernel and am still
>
On Wed, 18 Oct 2017 09:24:01 +0800
Qu Wenruo wrote:
>
>
> On 2017-10-18 04:43, Cameron Kelley wrote:
> > Hey btrfs gurus,
> >
> > I have a 4 disk btrfs filesystem that has suddenly stopped mounting
> > after a recent reboot. The data is in an odd configuration due to
On Tue, 3 Oct 2017 10:54:05 +
Hugo Mills wrote:
>There are other possibilities for missing space, but let's cover
> the obvious ones first.
One more obvious thing would be files that are deleted, but still kept open by
some app (possibly even from network, via NFS or
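Deleted-but-still-open files, as mentioned above, can be spotted on Linux through /proc without any extra tools; a minimal self-contained demonstration (path hypothetical):

```shell
exec 3> /tmp/ghost.$$            # fd 3 holds the file open
echo "still allocated" >&3
rm /tmp/ghost.$$                 # unlinked, but the space is not freed yet
link=$(readlink /proc/$$/fd/3)   # the target now ends in " (deleted)"
echo "$link"
exec 3>&-                        # closing the fd finally releases the space
```

Scanning `/proc/*/fd` for links ending in " (deleted)" (or using `lsof +L1`) finds such files system-wide.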
On Tue, 26 Sep 2017 16:50:00 + (UTC)
Ferry Toth wrote:
> https://www.phoronix.com/scan.php?page=article=linux414-bcache-raid=2
>
> I think it might be idle hopes to think bcache can be used as a ssd cache
> for btrfs to significantly improve performance..
My personal
On Tue, 12 Sep 2017 12:32:14 +0200
Adam Borowski wrote:
> discard in the guest (not supported over ide and virtio, supported over scsi
> and virtio-scsi)
IDE does support discard in QEMU; I use it all the time.
It got broken briefly in QEMU 2.1 [1], but then fixed again.
On Thu, 31 Aug 2017 07:45:55 -0400
"Austin S. Hemmelgarn" wrote:
> If you use dm-cache (what LVM uses), you need to be _VERY_ careful and
> can't use it safely at all with multi-device volumes because it leaves
> the underlying block device exposed.
It locks the
On Thu, 31 Aug 2017 12:43:19 +0200
Marco Lorenzo Crociani wrote:
> Hi,
> this 37T filesystem took some time to mount. It has 47
> subvolumes/snapshots and is mounted with
> noatime,compress=zlib,space_cache. Is it normal, due to its size?
If you could
On Mon, 28 Aug 2017 15:03:47 +0300
Nikolay Borisov wrote:
> when the cleaner thread runs again the snapshot's root item is going to
> be deleted for good and you no longer will see it.
Oh, that's pretty sweet -- it means there's actually a way to reliably wait
for
On Tue, 22 Aug 2017 18:57:25 +0200
Ulli Horlacher <frams...@rus.uni-stuttgart.de> wrote:
> On Tue 2017-08-22 (21:45), Roman Mamedov wrote:
>
> > It is beneficial to not have snapshots in-place. With a local directory of
> > snapshots, issuing things like "find"
On Tue, 22 Aug 2017 17:45:37 +0200
Ulli Horlacher wrote:
> In perl I have now:
>
> $root = $volume;
> while (`btrfs subvolume show "$root" 2>/dev/null` !~ /toplevel subvolume/) {
> $root = dirname($root);
> last if $root eq '/';
> }
>
>
If you are okay with
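It is unclear what the truncated reply was about to suggest; one common alternative to the Perl loop above (an assumption on my part, not necessarily what was proposed) relies on the fact that a btrfs subvolume root always has inode number 256, so plain `stat` can replace parsing `btrfs subvolume show`:

```shell
# Walk upward from $p until a subvolume root (inode 256 on btrfs)
# or the filesystem root is reached. The starting path is arbitrary.
p=/tmp
while [ "$p" != "/" ] && [ "$(stat -c %i "$p")" -ne 256 ]; do
  p=$(dirname "$p")
done
echo "$p"
```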
On Tue, 22 Aug 2017 16:24:51 +0200
Ulli Horlacher wrote:
> On Tue 2017-08-22 (15:44), Peter Becker wrote:
> > Is use: https://github.com/jf647/btrfs-snap
> >
> > 2017-08-22 15:22 GMT+02:00 Ulli Horlacher :
> > > With Netapp/waffle
On Wed, 16 Aug 2017 12:48:42 +0100 (BST)
"Konstantin V. Gavrilenko" wrote:
> I believe the chunk size of 512kb is even worse for performance than the
> default settings on my HW RAID of 256kb.
It might be, but that does not explain the original problem reported at
On Fri, 4 Aug 2017 12:44:44 +0500
Roman Mamedov <r...@romanrm.net> wrote:
> > What is 0x98f94189, is it not a csum of a block of zeroes by any chance?
>
> It does seem to be something of that sort
Actually, I think I know what happened.
I used "dd bs=1M conv=sparse&
On Fri, 4 Aug 2017 12:18:58 +0500
Roman Mamedov <r...@romanrm.net> wrote:
> What I find weird is why the expected csum is the same on all of these.
> Any idea what this might point to as the cause?
>
> What is 0x98f94189, is it not a csum of a block of zeroes by any cha
Hello,
I migrated my home dir to a LUKS dm-crypt device some time ago, and today
during a scheduled backup a few files turned out to be unreadable, with csum
errors from Btrfs in dmesg.
What I find weird is why the expected csum is the same on all of these.
Any idea what this might point to
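Whether 0x98f94189 really is the checksum of an all-zero block is easy to check: btrfs uses CRC-32C (Castagnoli) for data checksums. A minimal bit-by-bit sketch, slow but dependency-free:

```python
def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli), reflected, init/xorout 0xFFFFFFFF --
    the variant btrfs uses for data checksums."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for the algorithm:
assert crc32c(b"123456789") == 0xE3069283
# The csum of a 4 KiB block of zeroes, as btrfs would log it:
print(hex(crc32c(b"\x00" * 4096)))
```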
On Wed, 02 Aug 2017 11:17:04 +0200
Thomas Wurfbaum wrote:
> A restore does not help either:
> mainframe:~ # btrfs restore /dev/sdb1 /mnt
> parent transid verify failed on 29392896 wanted 1486833 found 1486836
> parent transid verify failed on 29392896 wanted 1486833 found
On Tue, 1 Aug 2017 10:14:23 -0600
Liu Bo wrote:
> This aims to fix write hole issue on btrfs raid5/6 setup by adding a
> separate disk as a journal (aka raid5/6 log), so that after unclean
> shutdown we can make sure data and parity are consistent on the raid
> array by
On Sun, 30 Jul 2017 18:14:35 +0200
"marcel.cochem" wrote:
> I am pretty sure that not all data is lost, as I can grep through the
> 100 GB SSD partition. But my question is whether there is a tool to rescue
> all (intact) data and maybe have only a few corrupt files
On Mon, 31 Jul 2017 11:12:01 -0700
Liu Bo wrote:
> Superblock and chunk tree root is OK, looks like the header part of
> the tree root is now all-zero, but I'm unable to think of a btrfs bug
> which can lead to that (if there is, it is a serious enough one)
I see that the
On Fri, 28 Jul 2017 17:40:50 +0100 (BST)
"Konstantin V. Gavrilenko" wrote:
> Hello list,
>
> I am stuck with a problem of btrfs slow performance when using compression.
>
> when the compress-force=lzo mount flag is enabled, the performance drops to
> 30-40 mb/s and
On Mon, 24 Jul 2017 09:46:34 -0400
"Austin S. Hemmelgarn" wrote:
> > I am a little bit confused because the balance command is running since
> > 12 hours and only 3GB of data are touched. This would mean the whole
> > balance process (new disc has 8TB) would run a long,
On Fri, 21 Jul 2017 13:00:56 +0800
Anand Jain wrote:
>
>
> On 07/18/2017 02:30 AM, David Sterba wrote:
> > So it basically looks good, I could not resist and rewrote the changelog
> > and comments. There's one code fix:
> >
> > On Mon, Jul 17, 2017 at 04:52:58PM +0300,
On Tue, 18 Jul 2017 16:57:10 +0500
Roman Mamedov <r...@romanrm.net> wrote:
> if a block written consists of zeroes entirely, instead of writing zeroes to
> the backing storage, converts that into an "unmap" operation
> (FALLOC_FL_PUNCH_HOLE[1]).
BTW I found that it
Hello,
Qemu/KVM has this nice feature in its storage layer, "detect-zeroes=unmap".
Basically the VM host detects if a block written by the guest consists of
zeroes entirely, and instead of writing zeroes to the backing storage,
converts that into an "unmap" operation (FALLOC_FL_PUNCH_HOLE[1]).
I
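The FALLOC_FL_PUNCH_HOLE operation described above can be exercised from userspace directly via the util-linux `fallocate` tool; a sketch with a hypothetical file, assuming a filesystem that supports hole punching (ext4, xfs, btrfs, tmpfs):

```shell
# Fill a 64 KiB file with random data, then punch out the first 4 KiB.
dd if=/dev/urandom of=/tmp/demo.img bs=4096 count=16 2>/dev/null
fallocate --punch-hole --offset 0 --length 4096 /tmp/demo.img
# The punched range now reads back as zeroes; the file size is unchanged:
stat -c %s /tmp/demo.img
```

`du` on the file afterwards shows one 4 KiB block fewer allocated, which is exactly what detect-zeroes=unmap achieves for guest-written zero blocks.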
On Wed, 5 Jul 2017 22:10:35 -0600
Daniel Brady wrote:
> parent transid verify failed
Typically in Btrfs terms this means "you're screwed": fsck will not fix it, and
nobody will know how to fix it or what the cause is either. Time to restore from
backups! Or look into "btrfs
On Thu, 8 Jun 2017 19:57:10 +0200
Hans van Kranenburg wrote:
> There is an improvement with subvolume delete + nossd that is visible
> between 4.7 and 4.9.
I don't remember if I asked before, but did you test on 4.4? The two latest
longterm series are 4.9 and
On Wed, 7 Jun 2017 15:09:02 +0200
Adam Borowski wrote:
> On Wed, Jun 07, 2017 at 01:10:26PM +0300, Timofey Titovets wrote:
> > 2017-06-07 13:05 GMT+03:00 Stefan G. Weichinger :
> > > On 2017-06-07 at 11:37, Timofey Titovets wrote:
> > >
> > >> btrfs scrub
On Sun, 21 May 2017 19:54:05 +0300
Timofey Titovets wrote:
> Sorry, but i know about subpagesize-blocksize patch set, but i don't
> understand where you see conflict?
>
> Can you explain what you mean?
>
> By PAGE_SIZE i mean fs cluster size in my patch set.
This appears
On Fri, 19 May 2017 11:55:27 +0300
Pasi Kärkkäinen wrote:
> > > Try saving your data with "btrfs restore" first
> >
> > First post, he tried that. No luck. Tho that was with 4.4 userspace.
> > It might be worth trying with the 4.11-rc or soon to be released 4.11
> >
On Thu, 18 May 2017 04:09:38 +0200
Łukasz Wróblewski wrote:
> I will try when stable 4.12 comes out.
> Unfortunately I do not have a backup.
> Fortunately, these data are not so critical.
> Some private photos and videos of youth.
> However, I would be very happy if I could get it
On Fri, 12 May 2017 20:36:44 +0200
Kai Krakow wrote:
> My concern is with fail scenarios of some SSDs which die unexpected and
> horribly. I found some reports of older Samsung SSDs which failed
> suddenly and unexpected, and in a way that the drive completely died:
> No
On Thu, 11 May 2017 09:19:28 -0600
Chris Murphy wrote:
> On Thu, May 11, 2017 at 8:56 AM, Marat Khalili wrote:
> > Sorry if question sounds unorthodox, Is there some simple way to read (and
> > backup) all BTRFS metadata from volume?
>
> btrfs-image
Hm, I
On Wed, 10 May 2017 09:48:07 +0200
Martin Steigerwald wrote:
> Yet, when it comes to btrfs check? Its still quite rudimentary if you ask me.
>
Indeed it is. It may or may not be possible to build a perfect fsck, but IMO
for the time being, what's most sorely missing is
On Wed, 10 May 2017 09:02:46 +0200
Stefan Priebe - Profihost AG wrote:
> how to fix bad key ordering?
You should clarify whether the FS in question mounts (read-write? read-only?)
and what the kernel messages are if it does not.
--
With respect,
Roman
On Mon, 8 May 2017 20:05:44 +0200
"Janos Toth F." wrote:
> May be someone more talented will be able to assist you but in my
> experience this kind of damage is fatal in practice (even if you could
> theoretically fix it, it's probably easier to recreate the fs and
>
Hello,
It appears like during some trouble with HDD cables and controllers, I got some
disk corruption.
As a result, after a short period of time my Btrfs went read-only, and now does
not mount anymore.
[Sun May 7 23:08:02 2017] BTRFS error (device dm-8): parent transid verify
failed on
On Tue, 2 May 2017 23:17:11 -0700
Marc MERLIN wrote:
> On Tue, May 02, 2017 at 11:00:08PM -0700, Marc MERLIN wrote:
> > David,
> >
> > I think you maintain btrfs-progs, but I'm not sure if you're in charge
> > of check --repair.
> > Could you comment on the bottom of the
On Fri, 28 Apr 2017 11:13:36 +0200
Christophe de Dinechin wrote:
> Since we memset tmpl, max_size==0. This does not seem consistent with nr = 1.
> In check_extent_refs, we will call:
>
> set_extent_dirty(root->fs_info->excluded_extents,
>                  rec->start,
>
On Thu, 27 Apr 2017 08:52:30 -0500
Gerard Saraber wrote:
> I could just reboot the system and be fine for a week or so, but is
> there any way to diagnose this?
`btrfs fi df` for a start.
Also obligatory questions: do you have a lot of snapshots, and do you use
qgroups?
On Tue, 18 Apr 2017 03:23:13 + (UTC)
Duncan <1i5t5.dun...@cox.net> wrote:
> Without reading the links...
>
> Are you /sure/ it's /all/ ssds currently on the market? Or are you
> thinking narrowly, those actually sold as ssds?
>
> Because all I've read (and I admit I may not actually be
On Mon, 17 Apr 2017 07:53:04 -0400
"Austin S. Hemmelgarn" wrote:
> General info (not BTRFS specific):
> * Based on SMART attributes and other factors, current life expectancy
> for light usage (normal desktop usage) appears to be somewhere around
> 8-12 years depending on
On Sun, 9 Apr 2017 06:38:54 +
Paul Jones wrote:
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org
> [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Hans van Kranenburg
> Sent: Sunday, 9 April 2017 6:19 AM
> To: linux-btrfs
On Mon, 3 Apr 2017 11:30:44 +0300
Marat Khalili wrote:
> You may want to look here: https://www.synology.com/en-global/dsm/Btrfs
> . Somebody forgot to tell Synology, which already supports btrfs in all
> hardware-capable devices. I think Rubicon has been crossed in
>
On Sun, 2 Apr 2017 09:30:46 +0300
Andrei Borzenkov wrote:
> On 02.04.2017 03:59, Duncan wrote:
> >
> > 4) In fact, since an in-place convert is almost certainly going to take
> > more time than a blow-away and restore from backup,
>
> This caught my eyes. Why? In-place
On Mon, 27 Mar 2017 13:32:47 -0600
Chris Murphy wrote:
> How about if qgroups are enabled, then non-root user is prevented from
> creating new subvolumes?
That sounds like: if you turn on your headlights in a car, the in-vehicle air
conditioner randomly stops working.
On Mon, 27 Mar 2017 16:49:47 +0200
Christian Theune wrote:
> Also: the idea of migrating on btrfs also has its downside - the performance
> of “mkdir” and “fsync” is abysmal at the moment. I’m waiting for the current
> shrinking job to finish but this is likely limited to
On Mon, 27 Mar 2017 15:20:37 +0200
Christian Theune wrote:
> (Background info: we’re migrating large volumes from btrfs to xfs and can
> only do this step by step: copying some data, shrinking the btrfs volume,
> extending the xfs volume, rinse repeat. If someone should
On Sat, 25 Mar 2017 23:00:20 -0400
"J. Hart" wrote:
> I have a Btrfs filesystem on a backup server. This filesystem has a
> directory to hold backups for filesystems from remote machines. In this
> directory is a subdirectory for each machine. Under each machine
>
On Fri, 17 Mar 2017 10:27:11 +0100
Lionel Bouton wrote:
> Hi,
>
> On 17/03/2017 at 09:43, Hans van Kranenburg wrote:
> > btrfs-debug-tree -b 3415463870464
>
> Here is what it gives me back :
>
> btrfs-debug-tree -b 3415463870464 /dev/sdb
> btrfs-progs v4.6.1
On Thu, 16 Feb 2017 13:37:53 +0200
Imran Geriskovan wrote:
> What are your experiences for btrfs regarding 4.10 and 4.11 kernels?
> I'm still on 4.8.x. I'd be happy to hear from anyone using 4.1x for
> a very typical single disk setup. Are they reasonably stable/good
On Tue, 14 Feb 2017 10:30:43 -0500
"Austin S. Hemmelgarn" wrote:
> I was just experimenting with snapshots on 4.9.0, and came across some
> unexpected behavior.
>
> The simple explanation is that if you snapshot a subvolume, any files in
> the subvolume that have the
On Tue, 7 Feb 2017 09:13:25 -0500
Peter Zaitsev wrote:
> Hi Hugo,
>
> For the use case I'm looking for I'm interested in having snapshot(s)
> open at all time. Imagine for example snapshot being created every
> hour and several of these snapshots kept at all time providing
On Sun, 5 Feb 2017 22:55:42 +0100
Hans van Kranenburg wrote:
> On 02/05/2017 10:42 PM, Alexander Tomokhov wrote:
> > Is it possible, having two drives to do raid1 for metadata but keep data on
> > a single drive only?
>
> Nope.
>
> Would be a really nice
On Mon, 23 Jan 2017 14:15:55 +0100
Simon Waid wrote:
> I have a btrfs raid5 array that has become unmountable.
That's the third time you've sent this today. Will you keep resending every few
hours until you get a reply? That's not how mailing lists work.
--
With respect,
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
> I was wondering, from a point of view of data safety, if there is any
> difference between using dup or making a raid1 from two partitions in
> the same disk. This is thinking on having some protection
On Thu, 29 Dec 2016 19:27:30 -0500
Rich Gannon wrote:
> I can mount my filesystem with -o degraded, but I can not do btrfs
> replace or btrfs device add as the filesystem is in read-only mode, and
> I can not mount read-write.
You can try my patch which removes that
On Thu, 29 Dec 2016 16:42:09 +0100
Michał Zegan wrote:
> I have odroid c2, processor architecture aarch64, linux kernel from
> master as of today from http://github.com/torvalds/linux.git.
> It seems that the btrfs module cannot be loaded. The only thing that
>
On Wed, 30 Nov 2016 07:50:17 -0500
"Austin S. Hemmelgarn" wrote:
> > *) Read performance is not optimized: all metadata is always read from the
> > first device unless it has failed, data reads are supposedly balanced
> > between
> > devices per PID of the process reading.
On Wed, 30 Nov 2016 00:16:48 +0100
Wilson Meier wrote:
> That said, btrfs shouldn't be used for other than raid1 as every other
> raid level has serious problems or at least doesn't work as the expected
> raid level (in terms of failure recovery).
RAID1 shouldn't be used
On Mon, 28 Nov 2016 00:03:12 -0500
Zygo Blaxell wrote:
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 8e3a5a2..b1314d6 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -6803,6 +6803,12 @@ static noinline int uncompress_inline(struct
>
On Fri, 25 Nov 2016 12:01:37 + (UTC)
Duncan <1i5t5.dun...@cox.net> wrote:
> Obviously this can be a HUGE problem on spinning rust due to its seek times,
> a problem zero-seek-time ssds don't have
They are not strictly zero seek time either. Sure you don't have the issue of
moving the
On Fri, 25 Nov 2016 12:05:57 +0100
Niccolò Belli wrote:
> This is something pretty unbelievable, so I had to repeat it several times
> before finding the courage to actually post it to the mailing list :)
>
> After dozens of data loss I don't trust my btrfs partition
On Wed, 16 Nov 2016 11:55:32 +0100
Martin Steigerwald wrote:
> I do think that the above kernel messages invite such an interpretation,
> though. I took the "BTRFS: open_ctree failed" message as indicative of some
> structural issue with the filesystem.
For the
On Wed, 16 Nov 2016 11:25:00 +0100
Martin Steigerwald wrote:
> merkaba:~> mount -o degraded,clear_cache /dev/satafp1/backup /mnt/zeit
> mount: wrong fs type, bad option, bad superblock on
> /dev/mapper/satafp1-backup,
On Sun, 13 Nov 2016 07:06:30 -0800
Marc MERLIN wrote:
> So first:
> a) find -inum returns some inodes that don't match
> b) but argh, multiple files (very different) have the same inode number, so
> finding
> files by inode number after scrub flagged an inode bad, isn't going
Hello,
Mounting "degraded,rw" should allow for any number of devices missing, as in
many cases the current check seems overly strict and not helpful during what
is already a manual recovery scenario. Let's assume the user applying the
"degraded" option knows best what condition their FS is in and
On Fri, 4 Nov 2016 01:01:13 -0700
Marc MERLIN wrote:
> Basically I have this:
> sde               8:64   0  3.7T  0
> └─sde1            8:65   0  3.7T  0
>   └─md5           9:5    0 14.6T  0
>     └─bcache0   252:0    0
On Thu, 20 Oct 2016 08:09:14 -0400
"Austin S. Hemmelgarn" wrote:
> > So, is it possible to return unlink() early? Or is this a bad idea (and why)?
> I may be completely off about this, but I could have sworn that unlink()
> returns when enough info is on the disk that both:
>
On Tue, 18 Oct 2016 09:39:32 +0800
Qu Wenruo wrote:
> > static const char * const cmd_inspect_inode_resolve_usage[] = {
> > "btrfs inspect-internal inode-resolve [-v] ",
> > "Get file system paths for the given inode",
> > @@ -702,6 +814,8 @@ const struct
On Wed, 12 Oct 2016 15:19:16 -0400
Zygo Blaxell wrote:
> I'm not even sure btrfs does this--I haven't checked precisely what
> it does in dup mode. It could send both copies of metadata to the
> disks with a single barrier to separate both metadata updates from
>
On Tue, 11 Oct 2016 17:58:22 -0600
Chris Murphy wrote:
> But consider the identical scenario with md or LVM raid5, or any
> conventional hardware raid5. A scrub check simply reports a mismatch.
> It's unknown whether data or parity is bad, so the bad data strip is
>
On Mon, 10 Oct 2016 10:44:39 +0100
Martin Dev wrote:
> I work for system verification of SSDs and we've recently come up
> against an issue with BTRFS on Ubuntu 16.04
> This seems to be a recent change
...well, a change in what?
If you really didn't change anything on