Hi Marcel,
On 2015/01/16 4:46, Marcel Ritter wrote:
Hi,
I just started some btrfs stress testing on latest linux kernel 3.19-rc4:
A few hours later, filesystem stopped working - the kernel bug report
can be found below.
The test consists of one massive IO thread (writing 100GB files with dd),
Hi,
On 2015/01/16 10:05, Tomasz Chmielewski wrote:
I just started some btrfs stress testing on latest linux kernel 3.19-rc4:
A few hours later, filesystem stopped working - the kernel bug report
can be found below.
Hi,
your "kernel BUG at fs/btrfs/inode.c:3142!" from 3.19-rc4 corresponds to
On 15/01/15 21:48, David Sterba wrote:
> Chandan, please drop the btrfs_inode_otime helper and resend. Thanks.
Thanks!
Sorry, I've had no further time to look at this; I've been fully committed
with $DAY_JOB and a number of projects with our local community
observatory (if anyone is in/visiting
Reported in Red Hat BZ#1181627: 'btrfs fi show' on an unmounted device will
return 1 even when no error happens.
Introduced by: commit 2513077f
btrfs-progs: fix device missing of btrfs fi show with seed devices
Patch fixing it:
https://patchwork.kernel.org/patch/5626001/
btrfs-progs: Fix wrong return val
Hi, David Sterba
* From: David Sterba [mailto:dste...@suse.cz]
> The cleanups look good in general, some minor nitpicks below.
>
> On Tue, Jan 13, 2015 at 08:34:37PM +0800, Zhaolei wrote:
> > - kfree(bbio);
> > + put_btrfs_bio(bbio);
>
> Please rename it to btrfs_put_bbio, th
I just started some btrfs stress testing on latest linux kernel
3.19-rc4:
A few hours later, filesystem stopped working - the kernel bug report
can be found below.
Hi,
your "kernel BUG at fs/btrfs/inode.c:3142!" from 3.19-rc4 corresponds to
http://marc.info/?l=linux-btrfs&m=141903172106342&w=
Daniel Pocock posted on Thu, 15 Jan 2015 20:54:10 +0100 as excerpted:
> Can anybody comment on how BtrFs (particularly RAID1 mirroring)
> interacts with drives that offer error recovery control (or TLER in WDC
> terms)?
>
> I generally prefer to buy this type of drive for any serious data
> stora
Hello,
I am having trouble with my btrfs setup. An unwanted reset probably
caused the corruption. I can mount the filesystem, but cannot perform
a scrub, as it ends with a GPF.
uname -a
Linux sysresccd 3.14.24-alt441-amd64 #2 SMP Sun Nov 16 08:27:16 UTC
2014 x86_64 AMD Phenom(tm) II X4 965 Processor
David Sterba posted on Thu, 15 Jan 2015 13:05:46 +0100 as excerpted:
> A shell completion would be great of course, it's in the project ideas.
> There's a starting point
> http://www.spinics.net/lists/linux-btrfs/msg15899.html .
FWIW, in case anyone is interested...
What I did here is a bit diff
Hi,
Can anybody comment on how BtrFs (particularly RAID1 mirroring)
interacts with drives that offer error recovery control (or TLER in WDC
terms)?
I generally prefer to buy this type of drive for any serious data
storage purposes
I notice ZFS gets a mention in the Wikipedia article about the
Hi,
I just started some btrfs stress testing on latest linux kernel 3.19-rc4:
A few hours later, filesystem stopped working - the kernel bug report
can be found below.
The test consists of one massive IO thread (writing 100GB files with dd),
and 2 tar instances extracting kernel sources and delet
On Thu, Jan 8, 2015 at 11:53 AM, Lennart Poettering
wrote:
On Thu, 08.01.15 10:56, Zygo Blaxell (ce3g8...@umail.furryterror.org)
wrote:
On Wed, Jan 07, 2015 at 06:43:15PM +0100, Lennart Poettering wrote:
> Heya!
>
> Currently, systemd-journald's disk access patterns (appending to
the
On Thu, Jan 15, 2015 at 12:24:41PM +0100, David Sterba wrote:
> On Wed, Jan 14, 2015 at 02:27:17PM -0800, Zach Brown wrote:
> > On Wed, Jan 14, 2015 at 04:06:02PM -0500, Sandy McArthur Jr wrote:
> > > Sometimes btrfs scrub status reports that it is not running when it still is.
> > >
> > > I think th
When removing a block group, we were deleting it from its space_info's
ro_bgs list using list_del_init(), without any synchronization.
Fix this by doing the list deletion while holding the space_info and
block group spinlocks.
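As a rough sketch of the locking pattern this describes (struct and field
names such as ro_list and ro_bgs are taken from the description and may not
match the final patch exactly):

        /* Sketch only: take both spinlocks around the list removal so
         * concurrent walkers of space_info->ro_bgs never observe a
         * half-unlinked entry. */
        static void remove_from_ro_bgs(struct btrfs_block_group_cache *block_group)
        {
                struct btrfs_space_info *sinfo = block_group->space_info;

                spin_lock(&sinfo->lock);
                spin_lock(&block_group->lock);
                list_del_init(&block_group->ro_list);
                spin_unlock(&block_group->lock);
                spin_unlock(&sinfo->lock);
        }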
This issue was introduced in the 3.19 kernel by the following change:
On 2015-01-15 20:30, David Sterba wrote:
On Thu, Jan 15, 2015 at 09:17:01AM +0800, Fan Chengniang/樊成酿 wrote:
On 2015-01-14 23:46, David Sterba wrote:
On Tue, Jan 13, 2015 at 01:53:39PM +0800, Fan Chengniang wrote:
make btrfs qgroups show human readable sizes
using --human-readable option, example:
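Purely as an illustration of the kind of bytes-to-human-readable conversion
such an option performs (this is not the btrfs-progs implementation; the
helper name and output format here are assumptions):

        #include <stdio.h>

        /* Convert a byte count into a short human-readable string. */
        static void pretty_bytes(unsigned long long bytes, char *buf, size_t len)
        {
                static const char *units[] = { "B", "KiB", "MiB", "GiB", "TiB", "PiB" };
                int i = 0;
                double val = bytes;

                while (val >= 1024 && i < 5) {
                        val /= 1024;
                        i++;
                }
                snprintf(buf, len, "%.2f%s", val, units[i]);
        }

        int main(void)
        {
                char buf[32];

                pretty_bytes(10737418240ULL, buf, sizeof(buf));
                printf("%s\n", buf);    /* prints 10.00GiB */
                return 0;
        }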
Hello,
>
> Could you check how many extents there are with BTRFS and Ext4:
> # filefrag test1
So my findings are odd:
On BTRFS, when I run fio with a single worker thread (the target file is
12GB large, and it's 100% random writes of 4kb blocks), the number of
extents reported by filefrag is around 3.
However wh
The cleanups look good in general, some minor nitpicks below.
On Tue, Jan 13, 2015 at 08:34:37PM +0800, Zhaolei wrote:
> - kfree(bbio);
> + put_btrfs_bio(bbio);
Please rename it to btrfs_put_bbio, this is more consistent with other
*_put_* helpers and 'bbio' distinguishes
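For reference, a put-style helper along the lines being requested might look
roughly like the sketch below; the refs member and the NULL check are
assumptions for illustration, not necessarily the submitted patch:

        static void btrfs_put_bbio(struct btrfs_bio *bbio)
        {
                if (!bbio)
                        return;
                if (atomic_dec_and_test(&bbio->refs))
                        kfree(bbio);
        }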
On Wed, Jan 14, 2015 at 01:21:50PM +0100, Merlijn Wajer wrote:
> Josef, can you verify that this patch restores the backtrace functionality?
Worked for me, patch applied.
> I'm sorry that my previous patch broke the backtrace functionality -- I
> guess that sometimes trivial patches can still be
On Thu, Jan 15, 2015 at 09:17:01AM +0800, Fan Chengniang/樊成酿 wrote:
>
> On 2015-01-14 23:46, David Sterba wrote:
> > On Tue, Jan 13, 2015 at 01:53:39PM +0800, Fan Chengniang wrote:
> >> make btrfs qgroups show human readable sizes
> >> using --human-readable option, example:
> > That's too long to ty
On Thu, Jan 15, 2015 at 09:01:37AM +0800, Qu Wenruo wrote:
> > On Tue, Jan 13, 2015 at 01:53:39PM +0800, Fan Chengniang wrote:
> >> make btrfs qgroups show human readable sizes
> >> using --human-readable option, example:
> > That's too long to type
> It's completely OK to make the option shorter a
On Wed, Jan 14, 2015 at 11:20:29PM +0500, Roman Mamedov wrote:
> On Wed, 14 Jan 2015 16:46:33 +0100
> David Sterba wrote:
>
> > On Tue, Jan 13, 2015 at 01:53:39PM +0800, Fan Chengniang wrote:
> > > make btrfs qgroups show human readable sizes
> > > using --human-readable option, example:
> >
> >
On Wed, Jan 14, 2015 at 11:21:43PM +, Filipe Manana wrote:
> Currently this test fails in 2 situations:
>
> 1) The scratch device supports trim/discard. In this case any modern
>version of mkfs.btrfs outputs a message (to stderr) informing that
>a trim is performed, which the golden ou
On Wed, Jan 14, 2015 at 02:27:17PM -0800, Zach Brown wrote:
> On Wed, Jan 14, 2015 at 04:06:02PM -0500, Sandy McArthur Jr wrote:
> > Sometimes btrfs scrub status reports that it is not running when it still is.
> >
> > I think this is a cosmetic bug. And I believe this is related to the
> > scrub comple
On Fri, Jan 09, 2015 at 05:11:42PM +0100, David Sterba wrote:
> > --- a/fs/btrfs/inode.c
> > +++ b/fs/btrfs/inode.c
> > @@ -5835,6 +5835,11 @@ static struct inode *btrfs_new_inode(struct
> > btrfs_trans_handle *trans,
> > sizeof(*inode_item));
> > fill_inode_it
There is a global list, @fs_uuids, that keeps an @fs_devices object
for each created btrfs. But when a btrfs becomes "empty"
(all devices belonging to it are gone), its @fs_devices remains
in the @fs_uuids list until module exit.
If we keep running mkfs.btrfs on the same device again and again,
all the empty @fs_devices pro
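Concretely, the cleanup being described amounts to walking the global list
and releasing entries whose device list has gone empty. The sketch below is
only a rough illustration; the locking and helpers used (uuid_mutex,
free_fs_devices) are assumptions and may differ from the actual patch:

        static void free_empty_fs_devices(void)
        {
                struct btrfs_fs_devices *fs_devices, *tmp;

                mutex_lock(&uuid_mutex);
                list_for_each_entry_safe(fs_devices, tmp, &fs_uuids, list) {
                        /* No devices left: drop it from @fs_uuids and free it. */
                        if (list_empty(&fs_devices->devices)) {
                                list_del(&fs_devices->list);
                                free_fs_devices(fs_devices);
                        }
                }
                mutex_unlock(&uuid_mutex);
        }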
The following patch:
btrfs: remove empty fs_devices to prevent memory runout
introduces @valid_dev_root, aiming to record @btrfs_device objects that
have corresponding block devices with btrfs.
But if a block device is broken or unplugged, nothing tells
@valid_dev_root to clean up the