On Fri, Sep 21, 2018 at 12:59:31PM +1000, Dave Chinner wrote:
> On Wed, Sep 19, 2018 at 12:12:03AM -0400, Zygo Blaxell wrote:
[...]
> With no DMAPI in the future, people with custom HSM-like interfaces
> based on dmapi are starting to turn to fanotify and friends to
> provide them with
On Mon, Sep 10, 2018 at 07:06:46PM +1000, Dave Chinner wrote:
> On Thu, Sep 06, 2018 at 11:53:06PM -0400, Zygo Blaxell wrote:
> > On Thu, Sep 06, 2018 at 06:38:09PM +1000, Dave Chinner wrote:
> > > On Fri, Aug 31, 2018 at 01:10:45AM -0400, Zygo Blaxell wrote:
> > > >
On Fri, Sep 07, 2018 at 09:27:28AM +0530, Lakshmipathi.G wrote:
> >
> > One question:
> > Why not ioctl_fideduperange?
> > i.e. you lose most of the benefits of that ioctl - atomicity.
> >
> I plan to add fideduperange as an option too. Users can
> choose between fideduperange and ficlonerange.
On Thu, Aug 30, 2018 at 04:27:43PM +1000, Dave Chinner wrote:
> On Thu, Aug 23, 2018 at 08:58:49AM -0400, Zygo Blaxell wrote:
> > On Mon, Aug 20, 2018 at 08:33:49AM -0700, Darrick J. Wong wrote:
> > > On Mon, Aug 20, 2018 at 11:09:32AM +1000, Dave Chinner wrote:
> > >
On Thu, Aug 23, 2018 at 08:58:49AM -0400, Zygo Blaxell wrote:
> On Mon, Aug 20, 2018 at 08:33:49AM -0700, Darrick J. Wong wrote:
> > On Mon, Aug 20, 2018 at 11:09:32AM +1000, Dave Chinner wrote:
> > > - should we just round down the EOF dedupe request to the
> > >
On Thu, Aug 23, 2018 at 01:10:48PM +0800, Qu Wenruo wrote:
> On 2018/8/23 11:11 AM, Zygo Blaxell wrote:
> > This is a repro script for a btrfs bug that causes corrupted data reads
> > when reading a mix of compressed extents and holes. The bug is
> > reproducible on at least kernels v4.1..v4.18.
On Mon, Aug 20, 2018 at 08:33:49AM -0700, Darrick J. Wong wrote:
> On Mon, Aug 20, 2018 at 11:09:32AM +1000, Dave Chinner wrote:
> > - is documenting rejection on request alignment grounds
> > (i.e. EINVAL) in the man page sufficient for app
> > developers to understand what is
This is a repro script for a btrfs bug that causes corrupted data reads
when reading a mix of compressed extents and holes. The bug is
reproducible on at least kernels v4.1..v4.18.
Some more observations and background follow, but first here is the
script and some sample output:
Every month or two I hit a btrfs deadlock like this:
dedup and rsync are both operating on the same file when the filesystem
locked up. The deadlock happens at the moment when rsync renames its
temporary file (the dedup dst file) to replace the old version of the
file (the dedup src file).
> > > this is the outcome (after having resumed it 4 times, two after a
> > > power loss...):
> > >
> > > menion@Menionubuntu:~$ sudo btrfs scrub status /media/storage/das1/
> > > scrub status for 931d40c6-7cd7-46f3-a4bf-61f3a53844bc
> > > scrub resumed a
> > scrub resumed at Sun Aug 12 18:43:31 2018 and finished after 55:06:35
> > total bytes scrubbed: 2.59TiB with 0 errors
> >
> > So, there are 0 errors, but I don't understand why it says 2.59TiB of
> > scrubbed data. Is it possible that this value is also crap, as the
maybe if there are changes to the chunk tree...?).
55 hours for 2600 GB is just under 50GB per hour, which doesn't sound
too unreasonable for btrfs, though it is known to be a bit slow compared
to other raid5 implementations.
> On Sat, Aug 11, 2018 at 17:29 Zygo Blaxell
> wrote:
On Sat, Aug 11, 2018 at 08:27:04AM +0200, erentheti...@mail.de wrote:
> I guess that covers most topics, two last questions:
>
> Will the write hole behave differently on Raid 6 compared to Raid 5 ?
Not really. It changes the probability distribution (you get an extra
chance to recover using a
On Sat, Aug 11, 2018 at 04:18:35AM +0200, erentheti...@mail.de wrote:
> Write hole:
>
>
> > The data will be readable until one of the data blocks becomes
> > inaccessible (bad sector or failed disk). This is because it is only the
> > parity block that is corrupted (old data blocks are still
On Fri, Aug 10, 2018 at 06:55:58PM +0200, erentheti...@mail.de wrote:
> Did I get you right?
> Please correct me if I am wrong:
>
> Scrubbing seems to have been fixed, you only have to run it once.
Yes.
There is one minor bug remaining here: when scrub detects an error
on any disk in a raid5/6
On Fri, Aug 10, 2018 at 03:40:23AM +0200, erentheti...@mail.de wrote:
> I am searching for more information regarding possible bugs related to
> BTRFS Raid 5/6. All sites i could find are incomplete and information
> contradicts itself:
>
> The Wiki Raid 5/6 Page
On Sat, May 26, 2018 at 06:27:57PM -0700, Brad Templeton wrote:
> A few years ago, I encountered an issue (halfway between a bug and a
> problem) with attempting to grow a BTRFS 3 disk Raid 1 which was
> fairly full. The problem was that after replacing (by add/delete) a
> small drive with a
On Mon, May 21, 2018 at 11:38:28AM -0400, Austin S. Hemmelgarn wrote:
> On 2018-05-21 09:42, Timofey Titovets wrote:
> > Mon, 21 May 2018 at 16:16, Austin S. Hemmelgarn :
> > > On 2018-05-19 04:54, Niccolò Belli wrote:
> > > > On Friday, May 18, 2018 at 20:33:53 CEST, Austin S. Hemmelgarn wrote:
>
On Sun, May 13, 2018 at 11:26:39AM -0700, Darrick J. Wong wrote:
> On Sun, May 13, 2018 at 06:21:52PM +0000, Mark Fasheh wrote:
> > On Fri, May 11, 2018 at 05:06:34PM -0700, Darrick J. Wong wrote:
> > > On Fri, May 11, 2018 at 12:26:51PM -0700, Mark Fasheh wrote:
> > > > Right now we return EINVAL
On Mon, Apr 16, 2018 at 09:35:24AM -0500, Jayashree Mohan wrote:
> Hi,
>
> The following seems to be a crash consistency bug on btrfs, where in
> the link count is not persisted even after a fsync on the original
> file.
>
> Consider the following workload :
> creat foo
> link (foo, A/bar)
>
On Wed, Apr 04, 2018 at 11:31:33PM +0200, Goffredo Baroncelli wrote:
> On 04/04/2018 08:01 AM, Zygo Blaxell wrote:
> > On Wed, Apr 04, 2018 at 07:15:54AM +0200, Goffredo Baroncelli wrote:
> >> On 04/04/2018 12:57 AM, Zygo Blaxell wrote:
> [...]
> >> Before you point
On Tue, Apr 03, 2018 at 09:08:01PM -0600, Chris Murphy wrote:
> On Tue, Apr 3, 2018 at 11:03 AM, Goffredo Baroncelli <kreij...@inwind.it>
> wrote:
> > On 04/03/2018 02:31 AM, Zygo Blaxell wrote:
> >> On Mon, Apr 02, 2018 at 06:23:34PM -0400, Zygo Blaxell wrote:
> >
On Wed, Apr 04, 2018 at 07:15:54AM +0200, Goffredo Baroncelli wrote:
> On 04/04/2018 12:57 AM, Zygo Blaxell wrote:
> >> I have to point out that in any case the extent is physically
> >> interrupted at the disk-stripe size. Assuming disk-stripe=64KB, if
> >> you want
On Tue, Apr 03, 2018 at 07:03:06PM +0200, Goffredo Baroncelli wrote:
> On 04/03/2018 02:31 AM, Zygo Blaxell wrote:
> > On Mon, Apr 02, 2018 at 06:23:34PM -0400, Zygo Blaxell wrote:
> >> On Mon, Apr 02, 2018 at 11:49:42AM -0400, Austin S. Hemmelgarn wrote:
> >>>
On Mon, Apr 02, 2018 at 06:23:34PM -0400, Zygo Blaxell wrote:
> On Mon, Apr 02, 2018 at 11:49:42AM -0400, Austin S. Hemmelgarn wrote:
> > On 2018-04-02 11:18, Goffredo Baroncelli wrote:
> > > I thought that a possible solution is to create BG with different
> > number of da
On Mon, Apr 02, 2018 at 11:49:42AM -0400, Austin S. Hemmelgarn wrote:
> On 2018-04-02 11:18, Goffredo Baroncelli wrote:
> > On 04/02/2018 07:45 AM, Zygo Blaxell wrote:
> > [...]
> > > It is possible to combine writes from a single transaction into full
> > >
On Sun, Apr 01, 2018 at 03:11:04PM -0600, Chris Murphy wrote:
> (I hate it when my palm rubs the trackpad and hits send prematurely...)
>
>
> On Sun, Apr 1, 2018 at 2:51 PM, Chris Murphy wrote:
>
> >> Users can run scrub immediately after _every_ unclean shutdown to
>
On Sat, Mar 31, 2018 at 04:34:58PM -0600, Chris Murphy wrote:
> On Sat, Mar 31, 2018 at 12:57 AM, Goffredo Baroncelli
> <kreij...@inwind.it> wrote:
> > On 03/31/2018 07:03 AM, Zygo Blaxell wrote:
> >>>> btrfs has no optimization like mdadm write-intent bitmaps;
On Sat, Mar 31, 2018 at 11:36:50AM +0300, Andrei Borzenkov wrote:
> 31.03.2018 11:16, Goffredo Baroncelli wrote:
> > On 03/31/2018 09:43 AM, Zygo Blaxell wrote:
> >>> The key is that if a data write is interrupted, all the transaction
> >>> is interrupted and a
On Sat, Mar 31, 2018 at 08:57:18AM +0200, Goffredo Baroncelli wrote:
> On 03/31/2018 07:03 AM, Zygo Blaxell wrote:
> >>> btrfs has no optimization like mdadm write-intent bitmaps; recovery
> >>> is always a full-device operation. In theory btrfs could track
> >
On Fri, Mar 30, 2018 at 06:14:52PM +0200, Goffredo Baroncelli wrote:
> On 03/29/2018 11:50 PM, Zygo Blaxell wrote:
> > On Wed, Mar 21, 2018 at 09:02:36PM +0100, Christoph Anton Mitterer wrote:
> >> Hey.
> >>
> >> Some things would IMO be nice to get done/clarifi
On Fri, Mar 30, 2018 at 09:21:00AM +0200, Menion wrote:
> Thanks for the detailed explanation. I think that a summary of this
> should go in the btrfs raid56 wiki status page, because now it is
> completely inconsistent and if a user comes there, he may get the
> impression that the raid56 is
On Wed, Mar 21, 2018 at 09:02:36PM +0100, Christoph Anton Mitterer wrote:
> Hey.
>
> Some things would IMO be nice to get done/clarified (i.e. documented in
> the Wiki and manpages) from users'/admin's POV:
>
> Some basic questions:
I can answer some easy ones:
> - compression+raid?
There
On Mon, Mar 19, 2018 at 04:30:17PM +0900, Misono, Tomohiro wrote:
> This is part of an RFC I sent last December[1], whose aim is to improve
> usability for normal users.
> The remaining work from the RFC includes:
> - Allow "sub delete" for empty subvolume
I don't mean to scope creep on you, but I have a
9 ("Btrfs: added btrfs_find_all_roots()")
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
v2:
Replace WARN_ON with rationale instead of merely deleting it.
Trim irrelevant detail from the backtrace. Add Fixes reference.
Fix subject line (miss
On Mon, Jan 22, 2018 at 11:34:52AM +0800, Lu Fengqi wrote:
> On Sun, Jan 21, 2018 at 02:08:58PM -0500, Zygo Blaxell wrote:
> >This warning appears during execution of the LOGICAL_INO ioctl and
> >appears to be spurious:
> >
> > [ cut here ]
On Mon, Jan 22, 2018 at 09:06:23PM +0800, Lu Fengqi wrote:
> On Mon, Jan 22, 2018 at 02:38:42PM +0200, Nikolay Borisov wrote:
> >
> >
> >On 22.01.2018 14:19, Lu Fengqi wrote:
> >> On 01/22/2018 04:46 PM, Nikolay Borisov wrote:
> >>>
> >>>
> >>> On 22.01.2018 05:34, Lu Fengqi wrote:
>
ens.
On kernel v4.14 this warning occurs 100-1000 times more frequently than
on kernels v4.2..v4.12. In the worst case, one test machine had 59020
warnings in 24 hours on v4.14.14 compared to 55 on v4.12.14.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs/btrfs/backref.c
esired.
There is no functional change in this patch. The new flag is always
false.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs/btrfs/backref.c| 63 ++-
fs/btrfs/backref.h| 8 +++---
fs/btrfs/inode.c
, FILE_EXTENT_SAME).
To minimize surprising userspace behavior, apply this change only to
the LOGICAL_INO_V2 ioctl.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs/btrfs/ioctl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
different). A version parameter and an 'if' statement will suffice.
Now that we have a flags field in logical_ino_args, add a flag
BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET to get the behavior we want,
and pass it down the stack to iterate_inodes_from_logical.
Signed-off-by: Zygo Blaxell <ce
Changelog:
v3-v2:
- Stricter check on reserved[] field - now must be all zero, or
userspace gets EINVAL. This prevents userspace from setting any
of the reserved bits without the kernel providing an unambiguous
interpretation of them, and doesn't require us to
On Thu, Sep 21, 2017 at 12:59:42PM -0700, Darrick J. Wong wrote:
> On Thu, Sep 21, 2017 at 12:10:15AM -0400, Zygo Blaxell wrote:
> > Now that check_extent_in_eb()'s extent offset filter can be turned off,
> > we need a way to do it from userspace.
> >
>
The previous patch series was based on v4.12.14, and this introductory
text was missing.
This patch series fixes some weaknesses in the btrfs LOGICAL_INO ioctl.
Background:
Suppose we have a file with one extent:
root@tester:~# zcat /usr/share/doc/cpio/changelog.gz > /test/a
, FILE_EXTENT_SAME).
To minimize surprising userspace behavior, apply this change only to
the LOGICAL_INO_V2 ioctl.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs/btrfs/ioctl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
and an 'if' statement will suffice.
Now that we have a flags field in logical_ino_args, add a flag
BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET to get the behavior we want,
and pass it down the stack to iterate_inodes_from_logical.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs
esired.
There is no functional change in this patch. The new flag is always
false.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs/btrfs/backref.c| 63 ++-
fs/btrfs/backref.h| 8 +++---
fs/btrfs/inode.c
esired.
There is no functional change in this patch. The new flag is always
false.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs/btrfs/backref.c | 62 --
fs/btrfs/backref.h | 8 ---
fs/btrfs/inode.c | 2 +-
fs/b
and an 'if' statement will suffice.
Now that we have a flags field in logical_ino_args, add a flag
BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET to get the behavior we want,
and pass it down the stack to iterate_inodes_from_logical.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs
, FILE_EXTENT_SAME).
To minimize surprising userspace behavior, apply this change only to
the LOGICAL_INO_V2 ioctl.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs/btrfs/ioctl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
0001740 cdcd cdcd cdcd cdcd 6c63 7400 635f 006d
0001760 5f74 6f43 7400 435f 0053 5f74 7363 7400
0002000 435f 0056 5f74 6164 7400 645f 0062 5f74
(...)
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
Reviewed-by: Liu Bo <bo.li@oracle.com>
---
v4: remove WARN_ON.
On Fri, Mar 10, 2017 at 02:12:54PM -0500, Chris Mason wrote:
>
>
> On 03/10/2017 01:56 PM, Zygo Blaxell wrote:
> >On Fri, Mar 10, 2017 at 11:19:24AM -0500, Chris Mason wrote:
> >>On 03/09/2017 11:41 PM, Zygo Blaxell wrote:
> >>>On Thu, Mar 09, 2017 at
On Fri, Mar 10, 2017 at 11:19:24AM -0500, Chris Mason wrote:
> On 03/09/2017 11:41 PM, Zygo Blaxell wrote:
> >On Thu, Mar 09, 2017 at 10:39:49AM -0500, Chris Mason wrote:
> >>
> >>
> >>On 03/08/2017 09:12 PM, Zygo Blaxell wrote:
> >>>This is a
On Thu, Mar 09, 2017 at 10:39:49AM -0500, Chris Mason wrote:
>
>
> On 03/08/2017 09:12 PM, Zygo Blaxell wrote:
> >This is a story about 4 distinct (and very old) btrfs bugs.
> >
>
> Really great write up.
>
> [ ... ]
>
> >
> >diff --git a/fs/b
On Wed, Mar 08, 2017 at 10:27:33AM +, Filipe Manana wrote:
> On Wed, Mar 8, 2017 at 3:18 AM, Zygo Blaxell
> <zblax...@waya.furryterror.org> wrote:
> > From: Zygo Blaxell <ce3g8...@umail.furryterror.org>
> >
> > This is a story about 4 distinct (and
oleak:
000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0001740 cdcd cdcd cdcd cdcd 6c63 7400 635f 006d
0001760 5f74 6f43 7400 435f 0053 5f74 7363 7400
0002000 435f 0056 5f74 6164 7400 645f 0062 5f74
(...)
Signed-off-by: Zygo Blaxell <ce
From: Zygo Blaxell <ce3g8...@umail.furryterror.org>
This is a story about 4 distinct (and very old) btrfs bugs.
Commit c8b978188c ("Btrfs: Add zlib compression support") added
three data corruption bugs for inline extents (bugs #1-3).
Commit 93c82d5750 ("Btrfs: zero page pa
Ping?
This is still reproducible on 4.9.8.
On Mon, Nov 28, 2016 at 12:03:12AM -0500, Zygo Blaxell wrote:
> Commit c8b978188c ("Btrfs: Add zlib compression support") produces
> data corruption when reading a file with a hole positioned after an
> inline extent. btrfs_get
On Wed, Jan 04, 2017 at 07:58:55AM -0500, Austin S. Hemmelgarn wrote:
> On 2017-01-03 16:35, Peter Becker wrote:
> >As i understand the duperemove source-code right (i work on/ try to
> >improve this code since 5 or 6 weeks on multiple parts), duperemove
> >does hashing and calculation before they
>Thanks,
>Xin
>
> Sent: Saturday, December 10, 2016 at 9:16 PM
>From: "Zygo Blaxell" <ce3g8...@umail.furryterror.org>
>To: "Roman Mamedov" <r...@romanrm.net>, "Filipe Manana"
> <fdman...@gmail.com>
>
for a simple data corruption bug. It (or some
equivalent fix for the same bug) should be on its way to all stable
kernels starting from 2.6.32.
Thanks
On Mon, Nov 28, 2016 at 05:27:10PM +0500, Roman Mamedov wrote:
> On Mon, 28 Nov 2016 00:03:12 -0500
> Zygo Blaxell <ce3g8...@umail.furryt
I got tired of seeing "16.00EiB" whenever btrfs-progs encounters a
negative size value, e.g. during resize:
Unallocated:
/dev/mapper/datamd18 16.00EiB
This version is much more useful:
Unallocated:
/dev/mapper/datamd18 -26.29GiB
Signed-off-by: Zygo Blax
On Sat, Dec 03, 2016 at 10:25:17AM -0800, Omar Sandoval wrote:
> On Sat, Dec 03, 2016 at 01:19:38AM -0500, Zygo Blaxell wrote:
> > I got tired of seeing "16.00EiB" whenever btrfs-progs encounters a
> > negative size value.
> >
> > e.g. during filesyste
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
utils.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/utils.c b/utils.c
index 69b580a..bd2b66e 100644
--- a/utils.c
+++ b/utils.c
@@ -2594,20 +2594,23 @@ static const char* unit_suffix_binary[]
On Tue, Nov 29, 2016 at 02:03:58PM +0800, Qu Wenruo wrote:
> At 11/29/2016 01:51 PM, Chris Murphy wrote:
> >On Mon, Nov 28, 2016 at 5:48 PM, Qu Wenruo wrote:
> >>
> >>
> >>At 11/19/2016 02:15 AM, Goffredo Baroncelli wrote:
> >>>
> >>>Hello,
> >>>
> >>>these are only my
On Tue, Nov 29, 2016 at 01:49:09PM +0800, Qu Wenruo wrote:
> >>>My proposal requires only a modification to the extent allocator.
> >>>The behavior at the block group layer and scrub remains exactly the same.
> >>>We just need to adjust the allocator slightly to take the RAID5 CoW
> >>>constraints
On Tue, Nov 29, 2016 at 12:12:03PM +0800, Qu Wenruo wrote:
>
>
> At 11/29/2016 11:53 AM, Zygo Blaxell wrote:
> >On Tue, Nov 29, 2016 at 08:48:19AM +0800, Qu Wenruo wrote:
> >>At 11/19/2016 02:15 AM, Goffredo Baroncelli wrote:
> >>>Hello,
> >>>
>
On Tue, Nov 29, 2016 at 08:48:19AM +0800, Qu Wenruo wrote:
> At 11/19/2016 02:15 AM, Goffredo Baroncelli wrote:
> >Hello,
> >
> >these are only my thoughts; no code here, but I would like to share it
> >hoping that it could be useful.
> >
> >As reported several times by Zygo (and others), one of
On Tue, Nov 29, 2016 at 02:52:47AM +0100, Christoph Anton Mitterer wrote:
> On Mon, 2016-11-28 at 16:48 -0500, Zygo Blaxell wrote:
> > If a drive's
> > embedded controller RAM fails, you get corruption on the majority of
> > reads from a single disk, and most writes wi
On Mon, Nov 28, 2016 at 07:32:38PM +0100, Goffredo Baroncelli wrote:
> On 2016-11-28 04:37, Christoph Anton Mitterer wrote:
> > I think for safety it's best to repair as early as possible (and thus
> > on read when a damage is detected), as further blocks/devices may fail
> > till eventually a
On Mon, Nov 28, 2016 at 05:27:10PM +0500, Roman Mamedov wrote:
> On Mon, 28 Nov 2016 00:03:12 -0500
> Zygo Blaxell <ce3g8...@umail.furryterror.org> wrote:
>
> > diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> > index 8e3a5a2..b1314d6 100644
> > --- a/fs/btrfs/i
at 4085) and item 64 (beginning
at 4096) with zero.
Signed-off-by: Zygo Blaxell <ce3g8...@umail.furryterror.org>
---
fs/btrfs/inode.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 8e3a5a2..b1314d6 100644
--- a/fs/btrfs/inode.c
+++ b/f
On Sun, Nov 27, 2016 at 12:16:34AM +0100, Goffredo Baroncelli wrote:
> On 2016-11-26 19:54, Zygo Blaxell wrote:
> > On Sat, Nov 26, 2016 at 02:12:56PM +0100, Goffredo Baroncelli wrote:
> >> On 2016-11-25 05:31, Zygo Blaxell wrote:
> [...]
> >>
> >> BTW Btr
On Sat, Nov 26, 2016 at 02:12:56PM +0100, Goffredo Baroncelli wrote:
> On 2016-11-25 05:31, Zygo Blaxell wrote:
> >>> Do you mean, read the corrupted data won't repair it?
> >>>
> >>> IIRC that's the designed behavior.
> >
On Fri, Nov 25, 2016 at 03:40:36PM +1100, Gareth Pye wrote:
> On Fri, Nov 25, 2016 at 3:31 PM, Zygo Blaxell
> <ce3g8...@umail.furryterror.org> wrote:
> >
> > This risk mitigation measure does rely on admins taking a machine in this
> > state down immediately, and als
On Tue, Nov 22, 2016 at 07:02:13PM +0100, Goffredo Baroncelli wrote:
> On 2016-11-22 01:28, Qu Wenruo wrote:
> >
> >
> > At 11/22/2016 02:48 AM, Goffredo Baroncelli wrote:
> >> Hi Qu,
> >>
> >> I tested this succefully for RAID5 when doing a scrub (i.e.: I mount a
> >> corrupted disks, then I
On Wed, Nov 23, 2016 at 05:26:18PM -0800, Darrick J. Wong wrote:
[...]
> Keep in mind that the number of bytes deduped is returned to userspace
> via file_dedupe_range.info[x].bytes_deduped, so a properly functioning
> userspace program actually /can/ detect that its 128MB request got cut
> down
On Fri, Nov 04, 2016 at 03:41:49PM +0100, Saint Germain wrote:
> On Thu, 3 Nov 2016 01:17:07 -0400, Zygo Blaxell
> <ce3g8...@umail.furryterror.org> wrote :
> > [...]
> > The quality of the result therefore depends on the amount of effort
> > put into measuring it.
On Thu, Nov 24, 2016 at 03:00:26PM +0100, Niccolò Belli wrote:
> Hi,
> I use snapper, so I have plenty of snapshots in my btrfs partition and most
> of my data is already deduplicated because of that.
> Since long time ago I run offline defragmentation once (because I didn't
> know extents get
On Tue, Nov 22, 2016 at 06:44:19PM -0800, Darrick J. Wong wrote:
> On Tue, Nov 22, 2016 at 09:02:10PM -0500, Zygo Blaxell wrote:
> > On Thu, Nov 17, 2016 at 04:07:48PM -0800, Omar Sandoval wrote:
> > > 3. Both XFS and Btrfs cap each dedupe operation to 16MB, but the
> >
I made a thing!
Bees ("Best-Effort Extent-Same") is a dedup daemon for btrfs.
Bees is a block-oriented userspace dedup designed to avoid scalability
problems on large filesystems.
Bees is designed to degrade gracefully when underprovisioned with RAM.
Bees does not use more RAM or storage as
On Thu, Nov 24, 2016 at 09:13:28AM +1100, Dave Chinner wrote:
> On Wed, Nov 23, 2016 at 08:55:59AM -0500, Zygo Blaxell wrote:
> > On Wed, Nov 23, 2016 at 03:26:32PM +1100, Dave Chinner wrote:
> > > On Tue, Nov 22, 2016 at 09:02:10PM -0500, Zygo Blaxell wrote:
> > > >
On Wed, Nov 23, 2016 at 03:26:32PM +1100, Dave Chinner wrote:
> On Tue, Nov 22, 2016 at 09:02:10PM -0500, Zygo Blaxell wrote:
> > On Thu, Nov 17, 2016 at 04:07:48PM -0800, Omar Sandoval wrote:
> > > 3. Both XFS and Btrfs cap each dedupe operation to 16MB, but the
> > >
On Thu, Nov 17, 2016 at 04:07:48PM -0800, Omar Sandoval wrote:
> 3. Both XFS and Btrfs cap each dedupe operation to 16MB, but the
>implicit EOF gets around this in the existing XFS implementation. I
>copied this for the Btrfs implementation.
Somewhat tangential to this patch, but on the
On Fri, Nov 18, 2016 at 03:58:06PM -0500, Chris Mason wrote:
>
>
> On 11/16/2016 11:10 AM, David Sterba wrote:
> >On Mon, Nov 14, 2016 at 09:55:34AM +0800, Qu Wenruo wrote:
> >>At 11/12/2016 04:22 AM, Liu Bo wrote:
> >>>On Tue, Oct 11, 2016 at 02:47:42PM +0800, Wang Xiaoguang wrote:
> If we
On Fri, Nov 18, 2016 at 07:15:12PM +0100, Goffredo Baroncelli wrote:
> Hello,
>
> these are only my thoughts; no code here, but I would like to share
> it hoping that it could be useful.
>
> As reported several times by Zygo (and others), one of the problem of
> raid5/6 is the write hole. Today
On Fri, Nov 18, 2016 at 07:09:34PM +0100, Goffredo Baroncelli wrote:
> Hi Zygo
> On 2016-11-18 00:13, Zygo Blaxell wrote:
> > On Tue, Nov 15, 2016 at 10:50:22AM +0800, Qu Wenruo wrote:
> >> Fix the so-called famous RAID5/6 scrub error.
> >>
> >> Thanks Go
On Fri, Nov 18, 2016 at 10:42:23AM +0800, Qu Wenruo wrote:
>
>
> At 11/18/2016 09:56 AM, Hugo Mills wrote:
> >On Fri, Nov 18, 2016 at 09:19:11AM +0800, Qu Wenruo wrote:
> >>
> >>
> >>At 11/18/2016 07:13 AM, Zygo Blaxell wrote:
> >>>On
On Tue, Nov 15, 2016 at 10:50:22AM +0800, Qu Wenruo wrote:
> Fix the so-called famous RAID5/6 scrub error.
>
> Thanks Goffredo Baroncelli for reporting the bug, and make it into our
> sight.
> (Yes, without the Phoronix report on this,
>
On Wed, Nov 16, 2016 at 11:24:33PM +0100, Niccolò Belli wrote:
> On Tuesday, November 15, 2016 at 18:52:01 CET, Zygo Blaxell wrote:
> >Like I said, millions of extents per week...
> >
> >64K is an enormous dedup block size, especially if it comes with a 64K
> >a
On Tue, Nov 15, 2016 at 07:26:53AM -0500, Austin S. Hemmelgarn wrote:
> On 2016-11-14 16:10, Zygo Blaxell wrote:
> >Why is deduplicating thousands of blocks of data crazy? I already
> >deduplicate four orders of magnitude more than that per week.
> You missed the 'tiny' quanti
On Mon, Nov 14, 2016 at 09:07:51PM +0100, James Pharaoh wrote:
> On 14/11/16 20:51, Zygo Blaxell wrote:
> >On Mon, Nov 14, 2016 at 01:39:02PM -0500, Austin S. Hemmelgarn wrote:
> >>On 2016-11-14 13:22, James Pharaoh wrote:
> >>>One thing I am keen to understand
On Mon, Nov 14, 2016 at 02:56:51PM -0500, Austin S. Hemmelgarn wrote:
> On 2016-11-14 14:51, Zygo Blaxell wrote:
> >Deduplicating an extent that might be concurrently modified during the
> >dedup is a reasonable userspace request. In the general case there's
> >no way fo
On Mon, Nov 14, 2016 at 01:39:02PM -0500, Austin S. Hemmelgarn wrote:
> On 2016-11-14 13:22, James Pharaoh wrote:
> >One thing I am keen to understand is if BTRFS will automatically ignore
> >a request to deduplicate a file if it is already deduplicated? Given the
> >performance I see when doing a
On Mon, Nov 14, 2016 at 07:22:59PM +0100, James Pharaoh wrote:
> On 14/11/16 19:07, Zygo Blaxell wrote:
> >There is also a still-unresolved problem where the filesystem CPU usage
> >rises exponentially for some operations depending on the number of shared
> >references to an
On Tue, Nov 08, 2016 at 12:06:01PM +0100, Niccolò Belli wrote:
> Nice, you should probably update the btrfs wiki as well, because there is no
> mention of btrfs-dedupe.
>
> First question, why this name? Don't you plan to support xfs as well?
Does XFS plan to support LOGICAL_INO, INO_PATHS, and
On Mon, Nov 07, 2016 at 07:49:51PM +0100, James Pharaoh wrote:
> Annoyingly I can't find this now, but I definitely remember reading someone,
> apparently someone knowledgable, claim that the latest version of the kernel
> which I was using at the time, still suffered from issues regarding the
>
On Thu, Oct 27, 2016 at 01:30:11PM +0200, Saint Germain wrote:
> Hello,
>
> Following the previous discussion:
> https://www.spinics.net/lists/linux-btrfs/msg19075.html
>
> I would be interested in finding a way to reliably identify reflink /
> CoW files in order to use deduplication programs
On Mon, Oct 17, 2016 at 06:44:14PM +0200, Stefan Malte Schumacher wrote:
> Hello
>
> I would like to monitor my btrfs-filesystem for missing drives. On
> Debian mdadm uses a script in /etc/cron.daily, which calls mdadm and
> sends an email if anything is wrong with the array. I would like to do
>
On Wed, Oct 12, 2016 at 11:35:46AM +0800, Wang Xiaoguang wrote:
> hi,
>
> On 10/11/2016 11:49 PM, Chris Murphy wrote:
> >On Tue, Oct 11, 2016 at 12:47 AM, Wang Xiaoguang
> > wrote:
> >>If we use mount option "-o max_inline=sectorsize", say 4096, indeed
> >>even for a