Re: One disc of 3-disc btrfs-raid5 failed - files only partially readable

2016-02-09 Thread Henk Slager
On Sun, Feb 7, 2016 at 6:28 PM, Benjamin Valentin wrote: > Hi, > > I created a btrfs volume with 3x8TB drives (ST8000AS0002-1NA) in raid5 > configuration. > I copied some TB of data onto it without errors (from eSATA drives, so > rather fast - I mention that because of

Re: BTRFS RAM requirements, RAID 6 stability/write holes and expansion questions

2016-02-09 Thread Chris Murphy
On Fri, Feb 5, 2016 at 12:36 PM, Mackenzie Meyer wrote: > > RAID 6 write holes? I don't even understand the nature of the write hole on Btrfs. If modification is still always COW, then either an fs block, a strip, or whole stripe write happens, I'm not sure where the

Re: [PATCH 13/23] xfs: test fragmentation characteristics of copy-on-write

2016-02-09 Thread Dave Chinner
On Mon, Feb 08, 2016 at 05:13:09PM -0800, Darrick J. Wong wrote: > Perform copy-on-writes at random offsets to stress the CoW allocation > system. Assess the effectiveness of the extent size hint at > combatting fragmentation via unshare, a rewrite, and no-op after the > random writes. > >

Re: [PATCH 17/23] reflink: test CoW across a mixed range of block types with cowextsize set

2016-02-09 Thread Dave Chinner
On Mon, Feb 08, 2016 at 05:13:35PM -0800, Darrick J. Wong wrote: > Signed-off-by: Darrick J. Wong > --- > tests/xfs/215 | 108 ++ > tests/xfs/215.out | 14 + > tests/xfs/218 | 108

Re: [PATCH 06/23] dio unwritten conversion bug tests

2016-02-09 Thread Darrick J. Wong
On Tue, Feb 09, 2016 at 06:37:32PM +1100, Dave Chinner wrote: > On Mon, Feb 08, 2016 at 05:12:23PM -0800, Darrick J. Wong wrote: > > Check that we don't expose old disk contents when a directio write to > > an unwritten extent fails due to IO errors. This primarily affects > > XFS and ext4. > >

Re: [PATCH 19/23] xfs: test rmapbt functionality

2016-02-09 Thread Dave Chinner
On Mon, Feb 08, 2016 at 05:13:48PM -0800, Darrick J. Wong wrote: > Signed-off-by: Darrick J. Wong > --- > common/xfs| 44 ++ > tests/xfs/233 | 78 ++ > tests/xfs/233.out |6 +++ > tests/xfs/234

Re: [PATCH 12/23] xfs/122: support refcount/rmap data structures

2016-02-09 Thread Dave Chinner
On Mon, Feb 08, 2016 at 11:55:06PM -0800, Darrick J. Wong wrote: > On Tue, Feb 09, 2016 at 06:43:30PM +1100, Dave Chinner wrote: > > On Mon, Feb 08, 2016 at 05:13:03PM -0800, Darrick J. Wong wrote: > > > Include the refcount and rmap structures in the golden output. > > > > > > Signed-off-by:

USB memory sticks wear & speed: btrfs vs f2fs?

2016-02-09 Thread Martin
How does btrfs compare to f2fs for use on (128GByte) USB memory sticks? Particularly for wearing out certain storage blocks? Does btrfs heavily use particular storage blocks that will prematurely "wear out"? (That is, could the whole 128GBytes be lost due to one 4kByte block having been

Re: [PATCH 21/23] xfs: aio cow tests

2016-02-09 Thread Dave Chinner
On Mon, Feb 08, 2016 at 05:14:01PM -0800, Darrick J. Wong wrote: .,,, > + > +echo "Check for damage" > +_dmerror_unmount > +_dmerror_cleanup > +_repair_scratch_fs >> "$seqres.full" 2>&1 Are you testing repair here? If so, why doesn't failure matter. If not, why do it? Or is

Re: [PATCH 18/23] xfs: test the automatic cowextsize extent garbage collector

2016-02-09 Thread Dave Chinner
On Mon, Feb 08, 2016 at 05:13:42PM -0800, Darrick J. Wong wrote: > Signed-off-by: Darrick J. Wong > + > +_cleanup() > +{ > +cd / > +echo $old_cow_lifetime > > /proc/sys/fs/xfs/speculative_cow_prealloc_lifetime > +#rm -rf "$tmp".* "$testdir" uncomment. >

Re: [PATCH 10/23] xfs: more reflink tests

2016-02-09 Thread Darrick J. Wong
On Tue, Feb 09, 2016 at 06:36:22PM +1100, Dave Chinner wrote: > On Mon, Feb 08, 2016 at 05:12:50PM -0800, Darrick J. Wong wrote: > > Create a couple of XFS-specific tests -- one to check that growing > > and shrinking the refcount btree works and a second one to check > > what happens when we hit

Re: USB memory sticks wear & speed: btrfs vs f2fs?

2016-02-09 Thread Brendan Hide
On 2/9/2016 1:13 PM, Martin wrote: How does btrfs compare to f2fs for use on (128GByte) USB memory sticks? Particularly for wearing out certain storage blocks? Does btrfs heavily use particular storage blocks that will prematurely "wear out"? (That is, could the whole 128GBytes be lost due to

Re: BTRFS RAM requirements, RAID 6 stability/write holes and expansion questions

2016-02-09 Thread Psalle
On 05/02/16 20:36, Mackenzie Meyer wrote: Hello, I've tried checking around on google but can't find information regarding the RAM requirements of BTRFS and most of the topics on stability seem quite old. To keep my answer short: every time I've tried (offline) deduplication or raid5 pools

Re: USB memory sticks wear & speed: btrfs vs f2fs?

2016-02-09 Thread Austin S. Hemmelgarn
On 2016-02-09 09:08, Brendan Hide wrote: On 2/9/2016 1:13 PM, Martin wrote: How does btrfs compare to f2fs for use on (128GByte) USB memory sticks? Particularly for wearing out certain storage blocks? Does btrfs heavily use particular storage blocks that will prematurely "wear out"? (That

Re: btrfs-progs 4.4 re-balance of RAID6 is very slow / limited to one cpu core?

2016-02-09 Thread Christian Rohmann
On 02/01/2016 09:52 PM, Chris Murphy wrote: >> Would some sort of stracing or profiling of the process help to narrow >> > down where the time is currently spent and why the balancing is only >> > running single-threaded? > This can't be straced. Someone a lot more knowledgeable than I am >

Re: "layout" of a six drive raid10

2016-02-09 Thread Austin S. Hemmelgarn
On 2016-02-09 02:02, Kai Krakow wrote: On Tue, 9 Feb 2016 01:42:40 + (UTC), Duncan <1i5t5.dun...@cox.net> wrote: Tho I'd consider benchmarking or testing, as I'm not sure btrfs raid1 on spinning rust will in practice fully saturate the gigabit Ethernet, particularly as it gets fragmented

Re: Use fast device only for metadata?

2016-02-09 Thread Austin S. Hemmelgarn
On 2016-02-08 16:44, Nikolaus Rath wrote: On Feb 07 2016, Martin Steigerwald wrote: On Sunday, 7 February 2016, 21:07:13 CET, Kai Krakow wrote: On Sun, 07 Feb 2016 11:06:58 -0800, Nikolaus Rath wrote: Hello, I have a large home directory on a

Re: Use fast device only for metadata?

2016-02-09 Thread Nikolaus Rath
On Feb 09 2016, Kai Krakow wrote: > You could even format a bcache superblock "just in case", > and add an SSD later. Without SSD, bcache will just work in passthru > mode. Do the LVM concerns still apply in passthrough mode, or only when there's an actual cache? Thanks,
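As a minimal sketch of the "format now, attach a cache later" idea being discussed (device names are placeholders, not from the thread):

    make-bcache -B /dev/sdb      # backing device only; bcache runs in passthrough mode until a cache is attached
    mkfs.btrfs /dev/bcache0      # btrfs goes on the bcache device from day one
    # later, when an SSD becomes available:
    make-bcache -C /dev/sdc
    bcache-super-show /dev/sdc   # read the cache set's cset.uuid
    echo <cset.uuid> > /sys/block/bcache0/bcache/attach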

Re: Use fast device only for metadata?

2016-02-09 Thread Nikolaus Rath
On Feb 09 2016, Kai Krakow wrote: > I'm myself using bcache+btrfs and it ran bullet proof so far, even > after unintentional resets or power outage. It's important tho to NOT > put any storage layer between bcache and your devices or between btrfs > and your device as there
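The layering Kai recommends amounts to putting bcache directly on the raw devices and btrfs directly on the resulting bcache device; roughly (a sketch, device names assumed):

    make-bcache -C /dev/sdc -B /dev/sdb   # SSD as cache, HDD as backing, attached in one step
    mkfs.btrfs /dev/bcache0               # no intermediate layer between btrfs and /dev/bcache0
    mount /dev/bcache0 /mnt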

[PATCH 1/2] btrfs-progs: copy functionality of btrfs-debug-tree to inspect-internal subcommand

2016-02-09 Thread Alexander Fougner
The long-term plan is to merge the features of standalone tools into the btrfs binary, reducing the number of shipped binaries. Signed-off-by: Alexander Fougner --- Makefile.in | 2 +- btrfs-debug-tree.c | 424
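If the series lands, the standalone tool's functionality becomes a subcommand of the btrfs binary; roughly (a sketch based on the patch titles, exact options may differ):

    btrfs-debug-tree /dev/sdb                   # old standalone binary
    btrfs inspect-internal dump-tree /dev/sdb   # proposed replacement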

[PATCH 2/2] btrfs-progs: update docs for inspect-internal dump-tree

2016-02-09 Thread Alexander Fougner
Signed-off-by: Alexander Fougner --- Documentation/btrfs-debug-tree.asciidoc | 7 +++ Documentation/btrfs-inspect-internal.asciidoc | 26 ++ 2 files changed, 33 insertions(+) diff --git a/Documentation/btrfs-debug-tree.asciidoc

Re: btrfs-progs 4.4 re-balance of RAID6 is very slow / limited to one cpu core?

2016-02-09 Thread Marc MERLIN
On Tue, Feb 09, 2016 at 02:48:14PM +0100, Christian Rohmann wrote: > > > On 02/01/2016 09:52 PM, Chris Murphy wrote: > >> Would some sort of stracing or profiling of the process help to narrow > >> > down where the time is currently spent and why the balancing is only > >> > running

Re: [PATCH 21/23] xfs: aio cow tests

2016-02-09 Thread Darrick J. Wong
On Tue, Feb 09, 2016 at 07:32:15PM +1100, Dave Chinner wrote: > On Mon, Feb 08, 2016 at 05:14:01PM -0800, Darrick J. Wong wrote: > .,,, > > + > > +echo "Check for damage" > > +_dmerror_unmount > > +_dmerror_cleanup > > +_repair_scratch_fs >> "$seqres.full" 2>&1 > > Are you testing repair here? If

Re: USB memory sticks wear & speed: btrfs vs f2fs?

2016-02-09 Thread Kai Krakow
On Tue, 9 Feb 2016 09:59:12 -0500, "Austin S. Hemmelgarn" wrote: > > I haven't found much reference or comparison information online wrt > > wear leveling - mostly performance benchmarks that don't really > > address your request. Personally I will likely never bother with

Re: [PATCH] fstests: btrfs, test for send with clone operations

2016-02-09 Thread Filipe Manana
On Thu, Feb 4, 2016 at 9:21 PM, Dave Chinner wrote: > On Thu, Feb 04, 2016 at 12:11:28AM +, fdman...@kernel.org wrote: >> From: Filipe Manana >> >> Test that an incremental send operation which issues clone operations >> works for files that have a

Re: Use fast device only for metadata?

2016-02-09 Thread Nikolaus Rath
On Feb 09 2016, Kai Krakow wrote: >> If there's no way to put LVM anywhere into the stack that'd be a >> bummer, I very much want to use dm-crypt (and I guess that counts as >> lvm?). > > Wasn't there plans for integrating per-file encryption into btrfs (like > there's

Re: btrfs-progs 4.4 re-balance of RAID6 is very slow / limited to one cpu core?

2016-02-09 Thread Chris Murphy
On Tue, Feb 9, 2016 at 6:48 AM, Christian Rohmann wrote: > > > On 02/01/2016 09:52 PM, Chris Murphy wrote: >>> Would some sort of stracing or profiling of the process help to narrow >>> > down where the time is currently spent and why the balancing is only >>> > running

Re: Use fast device only for metadata?

2016-02-09 Thread Kai Krakow
On Tue, 09 Feb 2016 08:10:15 -0800, Nikolaus Rath wrote: > On Feb 09 2016, Kai Krakow wrote: > > You could even format a bcache superblock "just in case", > > and add an SSD later. Without SSD, bcache will just work in passthru > > mode. > > Do the LVM

Re: Use fast device only for metadata?

2016-02-09 Thread Kai Krakow
On Tue, 09 Feb 2016 08:09:20 -0800, Nikolaus Rath wrote: > On Feb 09 2016, Kai Krakow wrote: > > I'm myself using bcache+btrfs and it ran bullet proof so far, even > > after unintentional resets or power outage. It's important tho to > > NOT put any

Re: Use fast device only for metadata?

2016-02-09 Thread Chris Murphy
On Tue, Feb 9, 2016 at 2:43 PM, Kai Krakow wrote: > Wasn't there plans for integrating per-file encryption into btrfs (like > there's already for ext4)? I think this could pretty well obsolete your > plans - except you prefer full-device encryption.

Re: Use fast device only for metadata?

2016-02-09 Thread Henk Slager
On Tue, Feb 9, 2016 at 8:29 AM, Kai Krakow wrote: > On Mon, 08 Feb 2016 13:44:17 -0800, > Nikolaus Rath wrote: > >> On Feb 07 2016, Martin Steigerwald wrote: >> > On Sunday, 7 February 2016, 21:07:13 CET, Kai Krakow wrote: >> >>

Re: [PATCH 18/23] xfs: test the automatic cowextsize extent garbage collector

2016-02-09 Thread Darrick J. Wong
On Tue, Feb 09, 2016 at 07:15:47PM +1100, Dave Chinner wrote: > On Mon, Feb 08, 2016 at 05:13:42PM -0800, Darrick J. Wong wrote: > > Signed-off-by: Darrick J. Wong > > + > > +_cleanup() > > +{ > > +cd / > > +echo $old_cow_lifetime > > >

Re: [PATCH 19/23] xfs: test rmapbt functionality

2016-02-09 Thread Darrick J. Wong
On Tue, Feb 09, 2016 at 07:26:40PM +1100, Dave Chinner wrote: > On Mon, Feb 08, 2016 at 05:13:48PM -0800, Darrick J. Wong wrote: > > Signed-off-by: Darrick J. Wong > > --- > > common/xfs| 44 ++ > > tests/xfs/233 | 78

Re: [PATCH 17/23] reflink: test CoW across a mixed range of block types with cowextsize set

2016-02-09 Thread Darrick J. Wong
On Tue, Feb 09, 2016 at 07:09:23PM +1100, Dave Chinner wrote: > On Mon, Feb 08, 2016 at 05:13:35PM -0800, Darrick J. Wong wrote: > > Signed-off-by: Darrick J. Wong > > --- > > tests/xfs/215 | 108 ++ > > tests/xfs/215.out |

Re: [PATCH 13/23] xfs: test fragmentation characteristics of copy-on-write

2016-02-09 Thread Darrick J. Wong
On Tue, Feb 09, 2016 at 07:01:44PM +1100, Dave Chinner wrote: > On Mon, Feb 08, 2016 at 05:13:09PM -0800, Darrick J. Wong wrote: > > Perform copy-on-writes at random offsets to stress the CoW allocation > > system. Assess the effectiveness of the extent size hint at > > combatting fragmentation

Re: Use fast device only for metadata?

2016-02-09 Thread Henk Slager
On Tue, Feb 9, 2016 at 11:38 PM, Nikolaus Rath wrote: > On Feb 09 2016, Kai Krakow wrote: >>> If there's no way to put LVM anywhere into the stack that'd be a >>> bummer, I very much want to use dm-crypt (and I guess that counts as >>> lvm?). >> >> Wasn't

Re: Use fast device only for metadata?

2016-02-09 Thread Nikolaus Rath
On Feb 08 2016, Nikolaus Rath wrote: > Otherwise I'll give bcache a shot. I've avoided it so far because of the > need to reformat and because of rumours that it doesn't work well with > LVM or BTRFS. But it sounds as if that's not the case.. I now have the following stack:

Re: btrfs-progs 4.4 re-balance of RAID6 is very slow / limited to one cpu core?

2016-02-09 Thread Chris Murphy
# perf stat -e 'btrfs:*' -a sleep 10 ## This is single device HDD, balance of a root fs was started before these 10 seconds of sampling. There are some differences in the statistics depending on whether there are predominately reads or writes for the balance, so clearly balance does
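To reproduce this kind of sampling (a sketch; assumes perf is installed and the kernel exposes the btrfs tracepoints):

    perf list 'btrfs:*'                  # show which btrfs tracepoints are available
    perf stat -e 'btrfs:*' -a sleep 10   # count all btrfs events system-wide for 10 s
                                         # while the balance runs in the background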

Re: btrfs-progs 4.4 re-balance of RAID6 is very slow / limited to one cpu core?

2016-02-09 Thread Chris Murphy
This could also be interesting. It means canceling the balance in progress; waiting some time; and then cancelling it again to get results to return. # perf stat -B btrfs balance start / ## Again, single device example, balancing at expected performance. http://fpaste.org/320562/55071438/ I
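Since 'btrfs balance start' blocks until the balance finishes, the trick described here is to cancel the balance so that perf stat returns and prints its counters; roughly (a sketch, mount point assumed):

    perf stat -B btrfs balance start /mnt   # terminal 1: run the balance under perf
    btrfs balance cancel /mnt               # terminal 2: cancel after a while so the stats are printed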

Re: How to show current profile?

2016-02-09 Thread Hugo Mills
On Tue, Feb 09, 2016 at 11:36:49PM -0800, Ian Kelling wrote: > I searched the man pages, can't seem to find it. > btrfs-balance can change profiles, but not show > the current profile... seems odd. btrfs fi df /mountpoint Hugo. -- Hugo Mills | Gentlemen! You can't fight
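The suggested command reports the profile per chunk type; illustrative output (not from the thread) for a two-device RAID1 filesystem:

    $ btrfs fi df /mountpoint
    Data, RAID1: total=100.00GiB, used=80.21GiB
    System, RAID1: total=32.00MiB, used=16.00KiB
    Metadata, RAID1: total=2.00GiB, used=1.10GiB
    GlobalReserve, single: total=512.00MiB, used=0.00B

'btrfs filesystem usage /mountpoint' (btrfs-progs 3.18 and later) shows the same profile information alongside per-device allocation.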

RAID5 Unable to remove Failing HD

2016-02-09 Thread Rene Castberg
Hi, This morning I woke up to a failing disk: [230743.953079] BTRFS: bdev /dev/sdc errs: wr 1573, rd 45648, flush 503, corrupt 0, gen 0 [230743.953970] BTRFS: bdev /dev/sdc errs: wr 1573, rd 45649, flush 503, corrupt 0, gen 0 [230744.106443] BTRFS: lost page write due to I/O error on /dev/sdc
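For reference, the two standard ways to get rid of a failing device (a sketch; device names and mount point are assumed, and whether either works on a degraded btrfs raid5 of this era is precisely what the thread is about):

    # replace in place while the old disk still (partially) responds:
    btrfs replace start /dev/sdc /dev/sdX /mnt
    # or add a new device first, then delete the failing one:
    btrfs device add /dev/sdX /mnt
    btrfs device delete /dev/sdc /mnt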

How to show current profile?

2016-02-09 Thread Ian Kelling
I searched the man pages, can't seem to find it. btrfs-balance can change profiles, but not show the current profile... seems odd. -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majord...@vger.kernel.org More majordomo info at