On Tue, Feb 09, 2016 at 11:36:49PM -0800, Ian Kelling wrote:
> I searched the man pages, can't seem to find it.
> btrfs-balance can change profiles, but not show
> the current profile... seems odd.
btrfs fi df /mountpoint
Hugo.
--
Hugo Mills | Gentlemen! You can't fight here!
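For reference, the profile shows up on every line of that command's output. A minimal illustrative sketch (the sizes and profile names below are made up, not taken from this thread):

# btrfs fi df /mountpoint
Data, RAID1: total=100.00GiB, used=80.00GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=2.00GiB, used=1.50GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
## "RAID1" / "single" here are the current profiles per block-group type.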
I searched the man pages, can't seem to find it.
btrfs-balance can change profiles, but not show
the current profile... seems odd.
Hi,
This morning I woke up to a failing disk:
[230743.953079] BTRFS: bdev /dev/sdc errs: wr 1573, rd 45648, flush 503, corrupt 0, gen 0
[230743.953970] BTRFS: bdev /dev/sdc errs: wr 1573, rd 45649, flush 503, corrupt 0, gen 0
[230744.106443] BTRFS: lost page write due to I/O error on /dev/sdc
[23
On Feb 08 2016, Nikolaus Rath wrote:
> Otherwise I'll give bcache a shot. I've avoided it so far because of the
> need to reformat and because of rumours that it doesn't work well with
> LVM or BTRFS. But it sounds as if that's not the case..
I now have the following stack:
btrfs on LUKS on LVM
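For anyone wanting to build a similar stack from scratch, a rough sketch; the volume group, LV and mapper names are hypothetical, not taken from this thread:

# lvcreate -L 500G -n data vg0
# cryptsetup luksFormat /dev/vg0/data
# cryptsetup open /dev/vg0/data data_crypt
# mkfs.btrfs /dev/mapper/data_crypt
# mount /dev/mapper/data_crypt /mnt
## Bottom to top: LVM logical volume -> LUKS (dm-crypt) -> btrfs.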
This could also be interesting. It means cancelling the balance in
progress, restarting it under perf, waiting some time, and then
cancelling it again so that perf returns its results.
# perf stat -B btrfs balance start /
## Again, single device example, balancing at expected performance.
http://fpaste.org/320562/55071438/
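Spelled out, the cancel/restart/cancel sequence described above would look roughly like this, assuming the mount point is / as in the example:

# btrfs balance cancel /
# perf stat -B btrfs balance start /
## let it run for a while, then from another shell:
# btrfs balance cancel /
## perf prints its counter summary once the balance command exits.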
I didn
# perf stat -e 'btrfs:*' -a sleep 10
## This is a single-device HDD; a balance of the root fs was started before
these 10 seconds of sampling. There are some differences in the
statistics depending on whether there are predominantly reads or
writes for the balance, so clearly balance does predominantly
On Tue, Feb 9, 2016 at 11:38 PM, Nikolaus Rath wrote:
> On Feb 09 2016, Kai Krakow wrote:
>>> If there's no way to put LVM anywhere into the stack that'd be a
>>> bummer, I very much want to use dm-crypt (and I guess that counts as
>>> lvm?).
>>
>> Wasn't there plans for integrating per-file encr
On Tue, Feb 09, 2016 at 07:26:40PM +1100, Dave Chinner wrote:
> On Mon, Feb 08, 2016 at 05:13:48PM -0800, Darrick J. Wong wrote:
> > Signed-off-by: Darrick J. Wong
> > ---
> > common/xfs    | 44 ++
> > tests/xfs/233 | 78 ++
> >
On Tue, Feb 09, 2016 at 07:15:47PM +1100, Dave Chinner wrote:
> On Mon, Feb 08, 2016 at 05:13:42PM -0800, Darrick J. Wong wrote:
> > Signed-off-by: Darrick J. Wong
> > +
> > +_cleanup()
> > +{
> > +cd /
> > +echo $old_cow_lifetime > /proc/sys/fs/xfs/speculative_cow_prealloc_lifetime
>
On Tue, Feb 09, 2016 at 07:09:23PM +1100, Dave Chinner wrote:
> On Mon, Feb 08, 2016 at 05:13:35PM -0800, Darrick J. Wong wrote:
> > Signed-off-by: Darrick J. Wong
> > ---
> > tests/xfs/215 | 108 ++
> > tests/xfs/215.out | 14 +
> > tests/xfs/21
On Tue, Feb 09, 2016 at 07:01:44PM +1100, Dave Chinner wrote:
> On Mon, Feb 08, 2016 at 05:13:09PM -0800, Darrick J. Wong wrote:
> > Perform copy-on-writes at random offsets to stress the CoW allocation
> > system. Assess the effectiveness of the extent size hint at
> > combatting fragmentation vi
On Feb 09 2016, Kai Krakow wrote:
>> If there's no way to put LVM anywhere into the stack that'd be a
>> bummer, I very much want to use dm-crypt (and I guess that counts as
>> lvm?).
>
> Wasn't there plans for integrating per-file encryption into btrfs (like
> there's already for ext4)? I think t
On Thu, Feb 4, 2016 at 9:21 PM, Dave Chinner wrote:
> On Thu, Feb 04, 2016 at 12:11:28AM +, fdman...@kernel.org wrote:
>> From: Filipe Manana
>>
>> Test that an incremental send operation which issues clone operations
>> works for files that have a full path containing more than one parent
>>
On Tue, Feb 9, 2016 at 2:43 PM, Kai Krakow wrote:
> Wasn't there plans for integrating per-file encryption into btrfs (like
> there's already for ext4)? I think this could pretty well obsolete your
> plans - except you prefer full-device encryption.
https://btrfs.wiki.kernel.org/index.php/Projec
On Tue, 9 Feb 2016 09:59:12 -0500, "Austin S. Hemmelgarn" wrote:
> > I haven't found much reference or comparison information online wrt
> > wear leveling - mostly performance benchmarks that don't really
> > address your request. Personally I will likely never bother with
> > f2fs unless I some
On Tue, Feb 09, 2016 at 07:32:15PM +1100, Dave Chinner wrote:
> On Mon, Feb 08, 2016 at 05:14:01PM -0800, Darrick J. Wong wrote:
> ....
> > +
> > +echo "Check for damage"
> > +_dmerror_unmount
> > +_dmerror_cleanup
> > +_repair_scratch_fs >> "$seqres.full" 2>&1
>
> Are you testing repair here? If
On Tue, Feb 9, 2016 at 6:48 AM, Christian Rohmann wrote:
>
>
> On 02/01/2016 09:52 PM, Chris Murphy wrote:
>>> Would some sort of stracing or profiling of the process help to narrow
>>> down where the time is currently spent and why the balancing is only
>>> running single-threaded?
>> This ca
On Tue, 09 Feb 2016 08:09:20 -0800, Nikolaus Rath wrote:
> On Feb 09 2016, Kai Krakow wrote:
> > I'm using bcache+btrfs myself and it has run bullet-proof so far, even
> > after unintentional resets or power outages. It's important though to
> > NOT put any storage layer between bcache and your devices
On Tue, 09 Feb 2016 08:10:15 -0800, Nikolaus Rath wrote:
> On Feb 09 2016, Kai Krakow wrote:
> > You could even format a bcache superblock "just in case",
> > and add an SSD later. Without SSD, bcache will just work in passthru
> > mode.
>
> Do the LVM concerns still apply in passthrough mode,
On Fri, Feb 5, 2016 at 12:36 PM, Mackenzie Meyer wrote:
>
> RAID 6 write holes?
I don't even understand the nature of the write hole on Btrfs. If
modification is still always COW, then either an fs block, a strip, or
a whole stripe write happens, so I'm not sure where the hole comes from. It
suggests
On Sun, Feb 7, 2016 at 6:28 PM, Benjamin Valentin wrote:
> Hi,
>
> I created a btrfs volume with 3x8TB drives (ST8000AS0002-1NA) in raid5
> configuration.
> I copied some TB of data onto it without errors (from eSATA drives, so
> rather fast - I mention that because of [1]), then set it up as a
>
On Tue, Feb 9, 2016 at 8:29 AM, Kai Krakow wrote:
> On Mon, 08 Feb 2016 13:44:17 -0800, Nikolaus Rath wrote:
>
>> On Feb 07 2016, Martin Steigerwald wrote:
>> > On Sunday, 7 February 2016, 21:07:13 CET, Kai Krakow wrote:
>> >> On Sun, 07 Feb 2016 11:06:58 -0800, Nikolaus
On Tue, Feb 09, 2016 at 02:48:14PM +0100, Christian Rohmann wrote:
>
>
> On 02/01/2016 09:52 PM, Chris Murphy wrote:
> >> Would some sort of stracing or profiling of the process help to narrow
> >> down where the time is currently spent and why the balancing is only
> >> running single-thread
The long-term plan is to merge the features of standalone tools
into the btrfs binary, reducing the number of shipped binaries.
Signed-off-by: Alexander Fougner
---
Makefile.in | 2 +-
btrfs-debug-tree.c | 424 +---
cmds-inspect-dump-t
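If the series lands as the file names above suggest, the standalone tool becomes a subcommand of the main binary; a hedged before/after sketch (the device is a placeholder):

## old standalone tool:
# btrfs-debug-tree /dev/sdX
## equivalent via the btrfs binary:
# btrfs inspect-internal dump-tree /dev/sdX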
Signed-off-by: Alexander Fougner
---
Documentation/btrfs-debug-tree.asciidoc | 7 +++
Documentation/btrfs-inspect-internal.asciidoc | 26 ++
2 files changed, 33 insertions(+)
diff --git a/Documentation/btrfs-debug-tree.asciidoc b/Documentation/btrfs-debug-tree
On Feb 09 2016, Kai Krakow wrote:
> You could even format a bcache superblock "just in case",
> and add an SSD later. Without SSD, bcache will just work in passthru
> mode.
Do the LVM concerns still apply in passthrough mode, or only when
there's an actual cache?
Thanks,
-Nikolaus
--
GPG encry
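A rough sketch of the "format just in case, add the SSD later" idea with bcache-tools; device names are hypothetical and udev is assumed to register the devices:

# make-bcache -B /dev/sdb1
# mkfs.btrfs /dev/bcache0
## runs in passthrough mode until a cache set is attached; later, with an SSD:
# make-bcache -C /dev/nvme0n1p1
# echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach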
On Feb 09 2016, Kai Krakow wrote:
> I'm using bcache+btrfs myself and it has run bullet-proof so far, even
> after unintentional resets or power outages. It's important though to NOT
> put any storage layer between bcache and your devices or between btrfs
> and your device, as there are reports it becomes
On 2016-02-09 09:08, Brendan Hide wrote:
On 2/9/2016 1:13 PM, Martin wrote:
How does btrfs compare to f2fs for use on (128GByte) USB memory sticks?
Particularly for wearing out certain storage blocks?
Does btrfs heavily use particular storage blocks that will prematurely
"wear out"?
(That is,
On 2/9/2016 1:13 PM, Martin wrote:
How does btrfs compare to f2fs for use on (128GByte) USB memory sticks?
Particularly for wearing out certain storage blocks?
Does btrfs heavily use particular storage blocks that will prematurely
"wear out"?
(That is, could the whole 128GBytes be lost due to
On 05/02/16 20:36, Mackenzie Meyer wrote:
Hello,
I've tried checking around on google but can't find information
regarding the RAM requirements of BTRFS and most of the topics on
stability seem quite old.
To keep my answer short: every time I've tried (offline) deduplication
or raid5 pools
On 02/01/2016 09:52 PM, Chris Murphy wrote:
>> Would some sort of stracing or profiling of the process help to narrow
>> down where the time is currently spent and why the balancing is only
>> running single-threaded?
> This can't be straced. Someone a lot more knowledgeable than I am
> might
On 2016-02-08 16:44, Nikolaus Rath wrote:
On Feb 07 2016, Martin Steigerwald wrote:
On Sunday, 7 February 2016, 21:07:13 CET, Kai Krakow wrote:
On Sun, 07 Feb 2016 11:06:58 -0800, Nikolaus Rath wrote:
Hello,
I have a large home directory on a spinning disk that I regularly
synchronize b
On 2016-02-09 02:02, Kai Krakow wrote:
On Tue, 9 Feb 2016 01:42:40 +0000 (UTC), Duncan <1i5t5.dun...@cox.net> wrote:
Tho I'd consider benchmarking or testing, as I'm not sure btrfs raid1
on spinning rust will in practice fully saturate the gigabit
Ethernet, particularly as it gets fragmented (
How does btrfs compare to f2fs for use on (128GByte) USB memory sticks?
Particularly for wearing out certain storage blocks?
Does btrfs heavily use particular storage blocks that will prematurely
"wear out"?
(That is, could the whole 128GBytes be lost due to one 4kByte block
having been re-writt
On Mon, Feb 08, 2016 at 11:55:06PM -0800, Darrick J. Wong wrote:
> On Tue, Feb 09, 2016 at 06:43:30PM +1100, Dave Chinner wrote:
> > On Mon, Feb 08, 2016 at 05:13:03PM -0800, Darrick J. Wong wrote:
> > > Include the refcount and rmap structures in the golden output.
> > >
> > > Signed-off-by: Darr
On Mon, Feb 08, 2016 at 05:14:01PM -0800, Darrick J. Wong wrote:
....
> +
> +echo "Check for damage"
> +_dmerror_unmount
> +_dmerror_cleanup
> +_repair_scratch_fs >> "$seqres.full" 2>&1
Are you testing repair here? If so, why doesn't failure matter?
If not, why do it? Or is _require_scratch_nochec
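As a sketch only, the check Dave seems to be asking about could look like the line below; _fail is the generic xfstests helper, and whether failure should actually be fatal here is exactly the open question:

_repair_scratch_fs >> "$seqres.full" 2>&1 || _fail "repair of scratch fs failed"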
On Mon, Feb 08, 2016 at 05:13:48PM -0800, Darrick J. Wong wrote:
> Signed-off-by: Darrick J. Wong
> ---
> common/xfs        | 44 ++
> tests/xfs/233     | 78 ++
> tests/xfs/233.out |  6 +++
> tests/xfs/234     | 89
On Tue, Feb 09, 2016 at 06:36:22PM +1100, Dave Chinner wrote:
> On Mon, Feb 08, 2016 at 05:12:50PM -0800, Darrick J. Wong wrote:
> > Create a couple of XFS-specific tests -- one to check that growing
> > and shrinking the refcount btree works and a second one to check
> > what happens when we hit m
On Mon, Feb 08, 2016 at 05:13:42PM -0800, Darrick J. Wong wrote:
> Signed-off-by: Darrick J. Wong
> +
> +_cleanup()
> +{
> +cd /
> +echo $old_cow_lifetime > /proc/sys/fs/xfs/speculative_cow_prealloc_lifetime
> +#rm -rf "$tmp".* "$testdir"
uncomment.
> +echo "CoW and leave leftover
On Mon, Feb 08, 2016 at 05:13:35PM -0800, Darrick J. Wong wrote:
> Signed-off-by: Darrick J. Wong
> ---
> tests/xfs/215 | 108 ++
> tests/xfs/215.out | 14 +
> tests/xfs/218 | 108 ++
> tests/xfs/218.o
On Tue, Feb 09, 2016 at 06:37:32PM +1100, Dave Chinner wrote:
> On Mon, Feb 08, 2016 at 05:12:23PM -0800, Darrick J. Wong wrote:
> > Check that we don't expose old disk contents when a directio write to
> > an unwritten extent fails due to IO errors. This primarily affects
> > XFS and ext4.
> >
>
On Mon, Feb 08, 2016 at 05:13:09PM -0800, Darrick J. Wong wrote:
> Perform copy-on-writes at random offsets to stress the CoW allocation
> system. Assess the effectiveness of the extent size hint at
> combatting fragmentation via unshare, a rewrite, and no-op after the
> random writes.
>
> Signed