Nathan Shearer posted on Mon, 01 Sep 2014 18:14:12 -0600 as excerpted:
I had a multi-drive raid6 setup and failed and removed 2 drives. I tried
to start a scrub and rebalance to recalculate the parity, and something
happened where I could not write to the filesystem. Any programs that
tried to
I will definitely try the latest 3.14.x (never had any problem of this
kind with it). And I'll look into the other possibilities you pointed
out. However what I can tell you right now is this:
- the filesystem was new. I've been bitten by this bug with 3.15 and
3.16, and I kept trying to do the
john terragon posted on Tue, 02 Sep 2014 08:12:36 +0200 as excerpted:
I will definitely try the latest 3.14.x (never had any problem of this
kind with it). And I'll look into the other possibilities you pointed
out. However what I can tell you right now is this:
- the filesystem was new.
On Mon, Sep 01, 2014 at 08:00:03PM +0300, Konstantinos Skarlatos wrote:
On 1/9/2014 7:27 μμ, Marc MERLIN wrote:
On Sat, Aug 30, 2014 at 11:26:52AM -1000, Jean-Denis Girard wrote:
So I commented out the break on line 238 of btrfs-find-root so that it
Thanks for that report.
Can a developer
When the fsync callback (btrfs_sync_file) starts, it first waits for
the writeback of any dirty pages to start and finish without holding
the inode's mutex (to reduce contention). After this it acquires the
inode's mutex and repeats that process via btrfs_wait_ordered_range
only if we're doing a
Further to the old thread: Machine lockup due to btrfs-transaction on
AWS EC2 Ubuntu 14.04:
http://thread.gmane.org/gmane.comp.file-systems.btrfs/37224
Since I have done a nightly rebalance and ensured plenty of
unallocated space, the main 3 btrfs machines have behaved themselves
for almost a
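For reference, a nightly rebalance of this kind is typically done with usage filters so only mostly-empty chunks are rewritten; a sketch of a crontab entry (the mount point, schedule and thresholds are illustrative assumptions, not from the original post):

```shell
# /etc/cron.d/btrfs-balance -- reclaim nearly-empty chunks each night
# so the filesystem keeps plenty of unallocated space.
30 3 * * * root /usr/bin/btrfs balance start -dusage=50 -musage=50 /srv/data
```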
On Wed, Aug 06, 2014 at 05:34:22PM +0800, Qu Wenruo wrote:
When an impatient sysadmin is tired of waiting for a background-running
btrfs scrub/replace and sends SIGKILL to the btrfs process, then unlike
SIGINT/SIGTERM, which can be caught by the user-space program to cancel
the scrub work, the user-space program will
On Wed, Aug 06, 2014 at 09:17:07AM +0800, Qu Wenruo wrote:
The current BTRFS_IOC_DEV_REPLACE ioctl is synchronous; during the ioctl
the program has fallen into the kernel and is unable to handle signals,
so the original signal handler will never be executed until the dev
replace is done. This is very
On Thu, Aug 14, 2014 at 07:40:20PM +0800, Eryu Guan wrote:
[root@hp-dl388eg8-01 btrfs-progs]# btrfs fi show
Label: none uuid: 1aba7da5-ce2b-4af0-a716-db732abc60b2
Total devices 1 FS bytes used 384.00KiB
devid 1 size 15.00GiB used 2.04GiB path
- the very small max readahead size
For things like the readahead size, that's probably something that we
should autotune, based on the time it takes to read N sectors. I.e.,
start N relatively small, such as 128k, and then bump it up based on
how long it takes to do a sequential read of N
While we're doing a full fsync (when the inode has the flag
BTRFS_INODE_NEEDS_FULL_SYNC set) that is ranged too (covers only a
portion of the file), we might have ordered operations that are started
before or while we're logging the inode and that fall outside the fsync
range.
Therefore when a
Fix (at least one user-visible) typo: it's "its", not "it's".
Signed-off-by: Holger Hoffstätte holger.hoffstae...@googlemail.com
---
btrfs-convert.c | 2 +-
cmds-device.c | 2 +-
qgroup-verify.c | 4 ++--
utils.c | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)
diff --git
On Thu, Aug 21, 2014 at 09:04:07PM +0900, Naohiro Aota wrote:
btrfs check is still under heavy development and so there are some
BUGs being hit. btrfs check may be run in a limited environment which
lacks gdb to debug the abort in detail. If we could see a backtrace, it
would be easier to find a
I updated to progs-3.16 and noticed during testing:
root# losetup
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0 0 0 0 0 /tmp/img
root# mkfs.btrfs -f /dev/loop0
Btrfs v3.16
See http://btrfs.wiki.kernel.org for more information.
Performing full device TRIM
On Tue, Sep 02, 2014 at 12:05:33PM +, Holger Hoffstätte wrote:
I updated to progs-3.16 and noticed during testing:
root# losetup
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0 0 0 0 0 /tmp/img
root# mkfs.btrfs -f /dev/loop0
Btrfs v3.16
See
On Sat, Aug 30, 2014 at 02:48:08PM +0200, Thomas Petazzoni wrote:
Here are two patches that we have in the Buildroot embedded Linux
build system against btrfs-progs. The first patch allows disabling
the build and installation of the documentation; the second patch
improves static building and
On Tue, Sep 02, 2014 at 01:32:34PM +0200, David Sterba wrote:
On Thu, Aug 14, 2014 at 07:40:20PM +0800, Eryu Guan wrote:
[root@hp-dl388eg8-01 btrfs-progs]# btrfs fi show
Label: none uuid: 1aba7da5-ce2b-4af0-a716-db732abc60b2
Total devices 1 FS bytes used 384.00KiB
devid
On Mon, Sep 01, 2014 at 02:56:15PM +0800, Miao Xie wrote:
On Fri, 29 Aug 2014 14:31:48 -0400, Chris Mason wrote:
On 07/29/2014 05:24 AM, Miao Xie wrote:
This patch implements the data repair function for when a direct read fails.
The details of the implementation are:
- When we find the data is not
On Tue, 02 Sep 2014 13:13:49 +0100, Hugo Mills wrote:
[snip]
So where does the confusing initial display come from? I'm running this
against a (very patched) 3.14.17, but don't remember ever seeing this
with btrfs-progs-3.14.2.
Your memory is faulty, I'm afraid. It's always done that --
While I'm sure some of those settings were selected with good reason,
maybe there could be a few options (2 or 3) with some basic
intelligence at creation time to pick a saner option:
some checks to see if an option or two might be better suited for the
fs, like the RAID5 stripe size. Leave the
I wholeheartedly agree. Of course, getting something other than CFQ as
the default I/O scheduler is going to be a difficult task. Enough
people upstream are convinced that we all NEED I/O priorities, when most
of what I see people doing with them is bandwidth provisioning, which
can be done much
On Tue 02-09-14 07:31:04, Ted Tso wrote:
- the very small max readahead size
For things like the readahead size, that's probably something that we
should autotune, based on the time it takes to read N sectors. I.e.,
start N relatively small, such as 128k, and then bump it up based on
how
On Tue, Sep 02, 2014 at 04:20:24PM +0200, Jan Kara wrote:
On Tue 02-09-14 07:31:04, Ted Tso wrote:
- the very small max readahead size
For things like the readahead size, that's probably something that we
should autotune, based on the time it takes to read N sectors. I.e.,
start N
Hi,
On Tue, Aug 19, 2014 at 01:10:45PM +0200, David Sterba wrote:
Commits:
6f7ff6d7832c6be13e8c95598884dbc40ad69fb7
ce62003f690dff38d3164a632ec69efa15c32cbf
27b9a8122ff71a8cadfbffb9c4f0694300464f3b
please add the commits to stable queue. Thanks in advance.
--
To
I thought I'd follow-up and give everyone an update, in case anyone
had further interest.
I've rebuilt the RAID10 volume in question with a Samsung 840 Pro for
bcache front device.
It's 5x600GB SAS 15k RPM drives RAID10, with the 512GB SSD bcache.
2014-09-02 11:23:16
root@eanna i
On Sep 2, 2014, at 12:31 PM, G. Richard Bellamy rbell...@pteradigm.com wrote:
I thought I'd follow-up and give everyone an update, in case anyone
had further interest.
I've rebuilt the RAID10 volume in question with a Samsung 840 Pro for
bcache front device.
It's 5x600GB SAS 15k RPM
On 2014-09-02 14:31, G. Richard Bellamy wrote:
I thought I'd follow-up and give everyone an update, in case anyone
had further interest.
I've rebuilt the RAID10 volume in question with a Samsung 840 Pro for
bcache front device.
It's 5x600GB SAS 15k RPM drives RAID10, with the 512GB SSD
Nice... now I get the hung task even with 3.14.17. And I tried with
4K for node and leaf size... same result. And to top it all off, today
I've been bitten by the bug also on my main root fs (which is on two
fast SSDs), although with 3.16.1.
Is it at least safe for the data? I mean, as long as
On 09/02/2014 03:56 PM, john terragon wrote:
Nice... now I get the hung task even with 3.14.17. And I tried with
4K for node and leaf size... same result. And to top it all off, today
I've been bitten by the bug also on my main root fs (which is on two
fast SSDs), although with 3.16.1.
Is
On Aug 22, 2014, at 10:00 PM, Liu Bo bo.li@oracle.com wrote:
The crash is
[ cut here ]
kernel BUG at fs/btrfs/extent_io.c:2124!
invalid opcode: [#1] SMP
...
CPU: 3 PID: 88 Comm: kworker/u8:7 Not tainted 3.17.0-0.rc1.git0.1.fc22.x86_64
#1
Hardware name:
I don't know what to tell you about the ENOSPC code being heavily
involved. At this point I'm using this simple test to see if things
improve:
- freshly created btrfs on dmcrypt,
- rsync some stuff (since the fs is empty I could just use cp, but I
keep the test the same as it was when I had the
On Sep 2, 2014, at 12:02 AM, Duncan 1i5t5.dun...@cox.net wrote:
The only benefit to raid5/raid6 mode at
this time is that assuming it survives without a device loss until the
raid5/6 mode code is complete, you'll get a free upgrade to raid5/6 at
that point, since it has actually been
OK, so I'm using 3.17-rc3, same test on a flash USB drive, no
autodefrag. The situation is even stranger. The rsync is clearly
stuck; it's been trying to write the same file for much more than 120 secs.
However dmesg is clean: no "INFO: task kworker/u16:11:1763 blocked for
more than 120 seconds" or
On Mon, 1 Sep 2014 09:37:54 PM Toralf Förster wrote:
Ah thx, it seems that fix did not make it into -rc3, so -rc4 would be a
better choice for a re-test, no?
It all depends on what Chris Mason sends to Linus, and what Linus chooses to
accept.
Your best bet is to watch the mailing list for
From: Jan-Simon Möller dl...@gmx.de
The use of variable length arrays in structs (VLAIS) in the Linux Kernel code
precludes the use of compilers which don't implement VLAIS (for instance the
Clang compiler). This patch instead allocates the appropriate amount of memory
using a char array.
From: Jan-Simon Möller dl...@gmx.de
The use of variable length arrays in structs (VLAIS) in the Linux Kernel code
precludes the use of compilers which don't implement VLAIS (for instance the
Clang compiler). This patch instead allocates the appropriate amount of memory
using a char array.
From: Jan-Simon Möller dl...@gmx.de
The use of variable length arrays in structs (VLAIS) in the Linux Kernel code
precludes the use of compilers which don't implement VLAIS (for instance the
Clang compiler). This patch instead allocates the appropriate amount of memory
using a char array.
From: Vinícius Tinti viniciusti...@gmail.com
Replaced the use of a Variable Length Array In Struct (VLAIS) with a
C99-compliant equivalent. This is the original VLAIS struct.
struct {
struct shash_desc shash;
char ctx[crypto_shash_descsize(tfm)];
} desc;
This patch instead
From: Vinícius Tinti viniciusti...@gmail.com
Replaced the use of a Variable Length Array In Struct (VLAIS) with a
C99-compliant equivalent. This is the original VLAIS struct.
struct {
struct shash_desc shash;
char ctx[crypto_shash_descsize(apparmor_tfm)];
} desc;
This patch
From: Jan-Simon Möller dl...@gmx.de
The use of variable length arrays in structs (VLAIS) in the Linux Kernel code
precludes the use of compilers which don't implement VLAIS (for instance the
Clang compiler). This patch instead allocates the appropriate amount of memory
using a char array.
From: Behan Webster beh...@converseincode.com
These patches remove the use of Variable Length Arrays In Structs (VLAIS) in
crypto related code. Presented here for comments as a whole (since they all do
the same thing in the same way). Once everyone is happy I will submit them
individually to
Hi Behan,
These patches remove the use of Variable Length Arrays In Structs (VLAIS) in
crypto related code. Presented here for comments as a whole (since they all do
the same thing in the same way). Once everyone is happy I will submit them
individually to their appropriate maintainers.
On 09/02/14 16:01, Marcel Holtmann wrote:
Hi Behan,
These patches remove the use of Variable Length Arrays In Structs (VLAIS) in
crypto related code. Presented here for comments as a whole (since they all do
the same thing in the same way). Once everyone is happy I will submit them
On 09/02/2014 03:32 PM, beh...@converseincode.com wrote:
From: Vinícius Tinti viniciusti...@gmail.com
Replaced the use of a Variable Length Array In Struct (VLAIS) with a
C99-compliant equivalent. This is the original VLAIS struct.
struct {
struct shash_desc shash;
char
On 09/02/14 16:16, John Johansen wrote:
I'm fine with this, do you want me to pull it into my tree for our next push
or do you want this all to go together as a set?
Acked-by: John Johansen john.johan...@canonical.com
I'm more than happy for individual maintainers to pull relevant patches
Thanks @chris @austin. You both bring up interesting questions and points.
@chris: atlas-data.qcow2 isn't running any software or logging at this
time, I isolated my D:\ drive on that file via clonezilla and
virt-resize.
Microsoft DiskPart version 6.1.7601
Copyright (C) 1999-2008 Microsoft
On Mon, 1 Sep 2014 18:22:22 -0700 Christoph Hellwig h...@infradead.org wrote:
On Tue, Sep 02, 2014 at 10:08:22AM +1000, Dave Chinner wrote:
Pretty obvious difference: avgrq-sz. btrfs is doing 512k IOs; ext4
and XFS are doing 128k IOs because that's the default block
device
Original Message
Subject: Re: [PATCH] btrfs: cancel scrub/replace if the user space
process receive SIGKILL.
From: David Sterba dste...@suse.cz
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: 2014-09-02 19:05
On Wed, Aug 06, 2014 at 05:34:22PM +0800, Qu Wenruo wrote:
When
Original Message
Subject: Re: [PATCH] btrfs-progs: make 'btrfs replace' signal-handling
work.
From: David Sterba dste...@suse.cz
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: 2014-09-02 19:25
On Wed, Aug 06, 2014 at 09:17:07AM +0800, Qu Wenruo wrote:
Current
Rsync finished. FWIW in the end it reported an average speed of about
900K/sec. Without autodefrag there have been no messages about hung
kworkers even though rsync seemingly keeps getting hung for several
minutes throughout the whole execution.
Thanks
John
On Tue, Sep 2, 2014 at 10:48 PM,
On Sep 2, 2014, at 12:40 AM, Duncan 1i5t5.dun...@cox.net wrote:
Mkfs.btrfs used to default to 4 KiB node/leaf sizes; nowadays it defaults
to 16 KiB, as that's far better for most usage. I wonder if USB sticks
are an exception…
USB sticks > 1 GB get 16KB nodesize also. At <= 1 GB, mixed-bg
From: Liu Bo liub.li...@gmail.com
btrfs/012 is a case to verify the btrfs-convert feature: it first converts
an ext4 to btrfs and does something, then rolls back to ext4.
So at the end we have an ext4 on the scratch device, but setting _require_scratch
will force a btrfsck on an ext4 fs because $FSTYP
On Tue, Sep 02, 2014 at 05:20:29AM +, Duncan wrote:
suspect your firmware is SERIOUSLY out of space and shuffling, as that'll
slow the balance down too, and again after), try running fstrim on the
device. It may or may not work on that device, but if it does and the
firmware /was/ out
Hugo Mills posted on Tue, 02 Sep 2014 13:13:49 +0100 as excerpted:
On Tue, Sep 02, 2014 at 12:05:33PM +, Holger Hoffstätte wrote:
I updated to progs-3.16 and noticed during testing:
root# mkfs.btrfs -f /dev/loop0
All fine until here...
root# btrfs filesystem df /tmp/btrfs
Data, single:
Chris Murphy posted on Tue, 02 Sep 2014 20:44:06 -0600 as excerpted:
On Sep 2, 2014, at 12:40 AM, Duncan 1i5t5.dun...@cox.net wrote:
Mkfs.btrfs used to default to 4 KiB node/leaf sizes; nowadays it
defaults to 16 KiB, as that's far better for most usage. I wonder if
USB sticks are an