Chris Murphy posted on Wed, 03 May 2017 16:18:34 -0600 as excerpted:
> On Wed, May 3, 2017 at 2:28 PM, Alexandru Guzu
> wrote:
>
>> In a VirtualBox VM, I converted an EXT4 fs to BTRFS that is now running
>> on Ubuntu 16.04 (Kernel 4.4.0-72). I was able to use the system for
>> several weeks. I ev
On 03.05.2017 21:43, Chris Murphy wrote:
> If I understand the bug report correctly, the user specifies mounting
> by label, which systemd then converts into /dev/dm-0 (because it's
> a Btrfs volume on two LUKS devices).
>
No, that's not the problem.
The actual reason for report is that systemd sh
qemu-kvm (Fedora 26 pre-beta guest and host)
systemd-233-3.fc26.x86_64
kernel-4.11.0-0.rc8.git0.1.fc26.x86_64
The guest's installed OS uses ext4, with boot parameters rd.udev.debug
and systemd.log_level=debug so we can see the entirety of Btrfs device
discovery and module loading. Using virsh I can hot p
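A hedged sketch of that setup; the domain name, image path, and grubby invocation are illustrative only, and it assumes the truncated sentence refers to hot-plugging a disk:

  # add the debug boot parameters mentioned above (illustrative grubby call)
  grubby --update-kernel=ALL --args="rd.udev.debug systemd.log_level=debug"

  # hot-plug a second disk into the running guest; the domain name and
  # image path are made up for illustration
  virsh attach-disk f26-guest /var/lib/libvirt/images/extra.qcow2 vdb --live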
On Wed, May 3, 2017 at 2:28 PM, Alexandru Guzu wrote:
> In a VirtualBox VM, I converted an EXT4 fs to BTRFS that is now running
> on Ubuntu 16.04 (Kernel 4.4.0-72). I was able to use the system for
> several weeks. I even did kernel updates, compression, deduplication
> without issues.
Which vers
Chris Murphy posted on Wed, 03 May 2017 12:43:36 -0600 as excerpted:
> If I understand the bug report correctly, the user specifies mounting by
> label, which systemd then converts into /dev/dm-0 (because it's a
> Btrfs volume on two LUKS devices).
>
> Why not convert the fstab mount by label re
On 05/03/2017 11:31 PM, Austin S. Hemmelgarn wrote:
On 2017-05-03 09:34, Anand Jain wrote:
As the two patches below are about managing the failed disk,
I have separated them from the spare disk and auto replace
support patch set which was sent earlier here [1].
[1] https://lwn.net/Article
On Wed, May 03, 2017 at 11:32:26AM +0500, Roman Mamedov wrote:
> > Actually, another thought:
> > Is there or should there be a way to repair around the bit that cannot
> > be repaired?
> > Separately, or not, can I locate which bits are causing the repair to
> > fail and maybe get a pointer to the
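One way to at least locate the files touched by an unrepairable block is btrfs inspect-internal; a rough sketch, where the logical address and mount point are placeholders and the address would normally come from scrub or dmesg errors:

  # scrub and note the logical addresses of uncorrectable errors
  btrfs scrub start -B /mnt
  dmesg | grep -i 'unable to fixup'

  # map a logical address from those messages back to file paths
  # (123456789 and /mnt are placeholders)
  btrfs inspect-internal logical-resolve 123456789 /mnt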
Hi all,
In a VirtualBox VM, I converted an EXT4 fs to BTRFS that is now running
on Ubuntu 16.04 (Kernel 4.4.0-72). I was able to use the system for
several weeks. I even did kernel updates, compression, deduplication
without issues.
Since today, a little while after booting (usually when I start
o
On 2017-05-03 14:12, Andrei Borzenkov wrote:
On 03.05.2017 14:26, Austin S. Hemmelgarn wrote:
On 2017-05-02 15:50, Goffredo Baroncelli wrote:
On 2017-05-02 20:49, Adam Borowski wrote:
It could be some daemon that waits for btrfs to become complete. Do we
have something?
Such a daemon would also
If I understand the bug report correctly, the user specifies mounting
by label, which systemd then converts into /dev/dm-0 (because it's
a Btrfs volume on two LUKS devices).
Why not convert the fstab mount-by-label request into a
/dev/disk/by-uuid/ path, and then have systemd call mount with -u,--uuid mo
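A minimal sketch of the two fstab forms under discussion; the label, UUID, and mount point are made up for illustration:

  # current form: mount by label
  LABEL=data  /mnt/data  btrfs  defaults  0  0

  # suggested form: mount by filesystem UUID
  # (the filesystem UUID is shared by both member devices)
  UUID=01234567-89ab-cdef-0123-456789abcdef  /mnt/data  btrfs  defaults  0  0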
On 03.05.2017 14:26, Austin S. Hemmelgarn wrote:
> On 2017-05-02 15:50, Goffredo Baroncelli wrote:
>> On 2017-05-02 20:49, Adam Borowski wrote:
It could be some daemon that waits for btrfs to become complete. Do we
have something?
>>> Such a daemon would also have to read the chunk tree.
>>
On Wed, May 3, 2017 at 8:17 AM, Christophe de Dinechin
wrote:
>> Check the qcow2 files with filefrag and see how many extents they
>> have. I'll bet they're massively fragmented.
>
> Indeed:
>
> fedora25.qcow2: 28358 extents found
> mac_hdd.qcow2: 79493 extents found
> ubuntu14.04-64.qcow2: 35069
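The usual mitigations for heavily CoW-fragmented VM images apply here; a sketch, where the file names follow the ones quoted above and the 32M target size is an arbitrary choice:

  # check how fragmented an image is
  filefrag fedora25.qcow2

  # one-off defragmentation of an existing image
  btrfs filesystem defragment -t 32M fedora25.qcow2

  # for new images: disable copy-on-write on the containing directory;
  # this only affects files created in it afterwards
  mkdir -p nocow-images && chattr +C nocow-images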
On 2017-05-02 22:15, Kai Krakow wrote:
>> For example, it would be possible to implement a sane check that
>> prevents mounting a btrfs filesystem if two devices expose the same
>> UUID...
> Ideally, the btrfs wouldn't even appear in /dev until it was assembled
> by udev. But apparently that's not
> file is ~160MB. This sounds much better. So please write a test, using
> something like
>
> truncate -s3T image
> mkfs.ext4 image
> mount && write some data
> convert && rollback
>
> Thanks. Later we might need to add some mkfs.ext4 option coverage.
Sure, will send the above test script in upc
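A minimal sketch of such a test, assuming a sparse file plus loop mount and leaving out error handling and cleanup:

  truncate -s 3T image
  mkfs.ext4 -F image
  mkdir -p mnt && mount -o loop image mnt
  dd if=/dev/urandom of=mnt/data bs=1M count=100   # write some data
  umount mnt
  btrfs-convert image        # ext4 -> btrfs
  btrfs-convert -r image     # roll back to ext4
  fsck.ext4 -fn image        # verify the rolled-back filesystem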
On 2017-05-03 09:34, Anand Jain wrote:
As the two patches below are about managing the failed disk,
I have separated them from the spare disk and auto replace
support patch set which was sent earlier here [1].
[1] https://lwn.net/Articles/684195/
V7 changes are very limited in this individu
On Wed, May 03, 2017 at 04:42:40PM +0800, Qu Wenruo wrote:
> When reading out a name from an inode_ref, a corrupted name_len can
> lead to reads beyond the boundary of the item or even the extent buffer.
>
> This happens when checking the fuzzed image /tmp/bko-161811.raw, for both
> lowmem mode and ori
On 05/03/2017 04:36 AM, Jan Kara wrote:
On Tue 02-05-17 09:28:13, Davidlohr Bueso wrote:
Commit b685d3d65ac7 "block: treat REQ_FUA and REQ_PREFLUSH as
synchronous" removed REQ_SYNC flag from WRITE_FUA implementation.
Since REQ_FUA and REQ_FLUSH flags are stripped from submitted IO
when the dis
On Wed, May 03, 2017 at 04:42:39PM +0800, Qu Wenruo wrote:
> When reading out a name from an inode_ref, a corrupted name_len can
> lead to reads beyond the boundary of the item or even the extent buffer.
>
> This happens when checking the fuzzed image /tmp/bko-161811.raw, for both
> lowmem mode and ori
On Wed, May 03, 2017 at 09:50:14AM +0800, Su Yue wrote:
> While iterating over backrefs in repair_inode_backrefs, there are several
> situations to repair one backref according to backref->found_dir_item and
> backref->found_dir_index.
> Two of these branches may free the backref, but next judgments w
On 2017-05-03 10:17, Christophe de Dinechin wrote:
On 29 Apr 2017, at 21:13, Chris Murphy wrote:
On Sat, Apr 29, 2017 at 2:46 AM, Christophe de Dinechin
wrote:
On 28 Apr 2017, at 22:09, Chris Murphy wrote:
On Fri, Apr 28, 2017 at 3:10 AM, Christophe de Dinechin
wrote:
QEMU qcow2. Ho
> On 2 May 2017, at 02:17, Qu Wenruo wrote:
>
>
>
> At 04/28/2017 04:47 PM, Christophe de Dinechin wrote:
>>> On 28 Apr 2017, at 02:45, Qu Wenruo wrote:
>>>
>>>
>>>
>>> At 04/26/2017 01:50 AM, Christophe de Dinechin wrote:
Hi,
I've been trying to run btrfs as my primary work file
> On 29 Apr 2017, at 21:13, Chris Murphy wrote:
>
> On Sat, Apr 29, 2017 at 2:46 AM, Christophe de Dinechin
> wrote:
>>
>>> On 28 Apr 2017, at 22:09, Chris Murphy wrote:
>>>
>>> On Fri, Apr 28, 2017 at 3:10 AM, Christophe de Dinechin
>>> wrote:
>>>
QEMU qcow2. Host is BTRFS. Gue
David,
Can you please comment on this? I don't think I got
any comments on this.
(And that's the same for the other two patches sent earlier
about the failed disk).
Thanks, Anand
Forwarded Message
Subject: Re: [PATCH] btrfs: Introduce device pool sysfs attributes
Date: Tue, 8 Nov
As the two patches below are about managing the failed disk,
I have separated them from the spare disk and auto replace
support patch set which was sent earlier here [1].
[1] https://lwn.net/Articles/684195/
V7 changes are very limited in these individual patches, but add
the mount option deg
From: Anand Jain
Write and Flush errors are considered critical errors,
upon which the device will be brought offline and marked as
failed. Write and Flush errors are identified using device
error statistics, which are monitored by a kthread,
btrfs_health.
Signed-off-by: Anand Jain
---
V7:
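The per-device error counters this relies on can already be inspected from userspace; for example (the mount point is a placeholder):

  # show write/read/flush/corruption/generation error counters per device
  btrfs device stats /mnt

  # reset the counters, e.g. after replacing a device
  btrfs device stats -z /mnt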
From: Anand Jain
This patch provides helper functions to force a device to the offline
or failed state. We need these device states for the following reasons:
1) a. it can be reported that a device has failed when it does
b. close the device when it goes offline so that the block layer can
clean up
2)
> I have a btrfs filesystem mounted at /btrfs_vol/ Every N
> minutes, I run bedup for deduplication of data in /btrfs_vol
> Inside /btrfs_vol, I have several subvolumes (consider this as
> home directories of several users) I have set individual
> qgroup limits for each of these subvolumes. [ ... ]
On Wed, May 03, 2017 at 05:11:04PM +0530, Shyam Prasad N wrote:
> Hi,
>
> This email is actually several questions clubbed as one...
>
> I have a btrfs filesystem mounted at /btrfs_vol/
> Every N minutes, I run bedup for deduplication of data in /btrfs_vol
> Inside /btrfs_vol, I have several subv
Hi,
This email is actually several questions clubbed as one...
I have a btrfs filesystem mounted at /btrfs_vol/
Every N minutes, I run bedup for deduplication of data in /btrfs_vol
Inside /btrfs_vol, I have several subvolumes (consider this as home
directories of several users)
I have set individ
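A rough sketch of that setup; the subvolume path and limit are placeholders:

  # enable quota support on the filesystem
  btrfs quota enable /btrfs_vol

  # limit the qgroup associated with one user's subvolume
  btrfs qgroup limit 10G /btrfs_vol/user1

  # inspect usage against the limits
  btrfs qgroup show -re /btrfs_vol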
On 2017-05-02 16:15, Kai Krakow wrote:
Am Tue, 2 May 2017 21:50:19 +0200
schrieb Goffredo Baroncelli :
On 2017-05-02 20:49, Adam Borowski wrote:
It could be some daemon that waits for btrfs to become complete.
Do we have something?
Such a daemon would also have to read the chunk tree.
I don
On 2017-05-02 15:50, Goffredo Baroncelli wrote:
On 2017-05-02 20:49, Adam Borowski wrote:
It could be some daemon that waits for btrfs to become complete. Do we
have something?
Such a daemon would also have to read the chunk tree.
I don't think that a daemon is necessary. As proof of concept
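For reference, the existing pieces a daemon (or udev) can build on to tell whether a multi-device filesystem is complete; the device node below is a placeholder:

  # register detected btrfs devices with the kernel module
  btrfs device scan

  # exits 0 only once all devices of the filesystem that /dev/sdb1
  # belongs to are known; udev's 64-btrfs.rules uses the equivalent
  # "btrfs ready" builtin to hold back mounting until then
  btrfs device ready /dev/sdb1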
On Fri, Apr 28, 2017 at 11:25:52AM -0600, Liu Bo wrote:
> This case tests whether dio read can repair the bad copy if we have
> a good copy.
>
> Commit 2dabb3248453 ("Btrfs: Direct I/O read: Work on sectorsized blocks")
> introduced the regression.
>
> The upstream fix is
> Btrfs: fix inval
When reading out a name from an inode_ref, a corrupted name_len can
lead to reads beyond the boundary of the item or even the extent buffer.
This happens when checking the fuzzed image /tmp/bko-161811.raw, for both
lowmem mode and original mode.
ERROR: root 5 INODE REF[256 256] doesn't have related DI
When reading out a name from an inode_ref, a corrupted name_len can
lead to reads beyond the boundary of the item or even the extent buffer.
This happens when checking the fuzzed image /tmp/bko-161811.raw, for both
lowmem mode and original mode.
Below is the example from lowmem mode.
ERROR: root 5 IN
On Tue 02-05-17 09:28:13, Davidlohr Bueso wrote:
> Commit b685d3d65ac7 "block: treat REQ_FUA and REQ_PREFLUSH as
> synchronous" removed REQ_SYNC flag from WRITE_FUA implementation.
> Since REQ_FUA and REQ_FLUSH flags are stripped from submitted IO
> when the disk doesn't have volatile write cache a