On Mon, Feb 24, 2014 at 10:36:52PM -0800, Marc MERLIN wrote:
I got this during a btrfs send:
BTRFS error (device dm-2): did not find backref in send_root. inode=22672,
offset=524288, disk_byte=1490517954560 found extent=1490517954560
I'll try a scrub when I've finished my backup, but is there
The original btrfs code will not detect any missing device, since there is no
notification mechanism for the fs layer to learn about a missing device from
the block layer.
However, we don't really need to notify the fs layer upon device removal;
probing in the dev_info/rm_dev ioctls is good enough, since they are the only
two ioctls
Add userspace support for kernel missing dev detection from the dev_info
ioctl.
Now 'btrfs fi show' will auto-detect the output format of the dev_info ioctl
and use kernel missing dev detection if supported.
Also, userspace missing dev detection is used as a fallback method and,
when used, an info message
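For illustration, a rough sketch of how such an auto-detecting caller could look. BTRFS_IOC_DEV_INFO and struct btrfs_ioctl_dev_info_args are the real ioctl and struct; the kernel-side flag handling is only described in a comment because its exact layout is whatever the patch defines, and the stat()-based probe is just one way userspace fallback detection could be done:

#include <string.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/btrfs.h>

static int device_is_missing(int fs_fd, __u64 devid)
{
	struct btrfs_ioctl_dev_info_args di;
	struct stat st;

	memset(&di, 0, sizeof(di));
	di.devid = devid;

	if (ioctl(fs_fd, BTRFS_IOC_DEV_INFO, &di) < 0)
		return -1;

	/*
	 * On a kernel with the patch, the reply would carry a flags word
	 * (taken from the padding) with a "flags supported" bit and a
	 * "missing" bit; we would return that answer directly here.
	 */

	/* Fallback for old kernels: probe the path the kernel reported. */
	return stat((char *)di.path, &st) != 0;
}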
Add a flags member to btrfs_ioctl_dev_info_args to report missing btrfs
devices.
The new member is added in the original padding area, so the ioctl API is
not affected, but user headers need to be updated.
Cc: Anand Jain anand.j...@oracle.com
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
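As a sketch of what the header change described above might look like: the unchanged fields below mirror the current struct, while the flags field name, its position in the padding, and the bit definitions are assumptions, not the actual patch:

struct btrfs_ioctl_dev_info_args {
	__u64 devid;				/* in/out */
	__u8 uuid[BTRFS_UUID_SIZE];		/* in/out */
	__u64 bytes_used;			/* out */
	__u64 total_bytes;			/* out */
	__u64 flags;				/* out, new: taken from the padding */
	__u64 unused[378];			/* pad to 4k (was 379) */
	__u8 path[BTRFS_DEVICE_PATH_NAME_MAX];	/* out */
};

/* hypothetical bit names, not the actual patch */
#define BTRFS_DEV_INFO_FLAGS_VALID	(1ULL << 63)	/* kernel fills flags */
#define BTRFS_DEV_INFO_DEV_MISSING	(1ULL << 0)	/* device is missing */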
Old btrfs can't find a missing btrfs device, since there is no
mechanism for the block layer to inform the fs layer.
But we can use a workaround that only checks the status (via
request_queue->queue_flags) of every device in a btrfs
filesystem when calling the dev_info/rm_dev ioctls, since other ioctls
do not
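Roughly, the probing described above could look like the following kernel-side sketch; btrfs_check_dev_missing() is a hypothetical helper name, and the actual patch may structure this differently:

#include <linux/blkdev.h>
#include "volumes.h"

/* Hypothetical helper, called from the dev_info/rm_dev ioctl paths. */
static bool btrfs_check_dev_missing(struct btrfs_device *device)
{
	struct request_queue *q;

	if (!device->bdev)
		return true;		/* never opened or already gone */

	q = bdev_get_queue(device->bdev);
	if (!q || blk_queue_dying(q))
		return true;		/* block layer has marked it dying */

	return false;
}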
Follow the kernel header changes to add the new member of
btrfs_ioctl_dev_info_args.
This change uses a special bit to keep backward compatibility, so even
on old kernels it will not screw anything up.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
ioctl.h | 5 -
1 file changed, 4
On 05/06/2014 08:10 AM, David Brown wrote:
On Mon, Feb 24, 2014 at 10:36:52PM -0800, Marc MERLIN wrote:
I got this during a btrfs send:
BTRFS error (device dm-2): did not find backref in send_root.
inode=22672, offset=524288, disk_byte=1490517954560 found
extent=1490517954560
I'll try a
just one last question:
why do you use --align-payload=1024? (or 8192)
The cryptsetup man page says that the default for the payload alignment is 2048
(512-byte sectors). So it's already aligned by default to 4K-byte
physical sectors (if that was your concern). Am I missing something?
John
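For what it's worth, the arithmetic behind that question is easy to check; this small test program (purely illustrative) converts an --align-payload value in 512-byte sectors to bytes and tests 4K alignment:

#include <stdio.h>

int main(void)
{
	const unsigned long aligns[] = { 1024, 2048, 8192 };	/* 512-byte sectors */

	for (unsigned int i = 0; i < sizeof(aligns) / sizeof(aligns[0]); i++) {
		unsigned long bytes = aligns[i] * 512;
		printf("--align-payload=%lu -> %lu KiB, 4K-aligned: %s\n",
		       aligns[i], bytes / 1024,
		       bytes % 4096 == 0 ? "yes" : "no");
	}
	return 0;
}

All three values come out 4K-aligned (512 KiB, 1 MiB and 4 MiB respectively), so the larger values only matter if you are targeting bigger boundaries such as SSD erase blocks.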
On Mon, May 5,
This patch adds a regression test to verify that btrfs cannot
reuse an inode id until the transaction has been committed, which was
addressed by the following kernel patch:
Btrfs: fix inode cache vs tree log
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
---
tests/btrfs/049 | 109
Dear list,
I am running btrfs on Arch Linux ARM (Linux 3.14.2, Btrfs v3.14.1). I can run
scrub w/o errors, but I never get stats from scrub status
What I get is
btrfs scrub status /pools/dataPool
scrub status for b5f082e2-2ce0-4f91-b54b-c2d26185a635
no stats available
Hello all!
I would like to use btrfs (or anything else actually) to maximize raid0
performance. Basically I have a relatively constant stream of data that
simply has to be written out to disk. So my question is: how does the
block allocator decide which device to write to, and can this decision be
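My understanding is that for striped profiles the allocator picks the devices with the most unallocated space when it allocates each chunk. A toy sketch of that idea (a simplified illustration, not the kernel's allocator):

#include <stdio.h>
#include <stdlib.h>

struct dev {
	const char *name;
	unsigned long long free_bytes;	/* unallocated space */
};

/* Sort devices by unallocated space, largest first. */
static int cmp_free_desc(const void *a, const void *b)
{
	const struct dev *da = a, *db = b;

	if (db->free_bytes > da->free_bytes)
		return 1;
	if (db->free_bytes < da->free_bytes)
		return -1;
	return 0;
}

int main(void)
{
	struct dev devs[] = {
		{ "sda", 400ULL << 30 },
		{ "sdb", 100ULL << 30 },
		{ "sdc", 250ULL << 30 },
	};
	int ndevs = sizeof(devs) / sizeof(devs[0]);

	qsort(devs, ndevs, sizeof(devs[0]), cmp_free_desc);

	/* A raid0 chunk is then striped across the devices that have space,
	 * preferring the emptiest ones. */
	printf("raid0 chunk striped across:");
	for (int i = 0; i < ndevs; i++)
		if (devs[i].free_bytes > 0)
			printf(" %s", devs[i].name);
	printf("\n");
	return 0;
}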
On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
Hello all!
I would like to use btrfs (or anything else actually) to maximize raid0
performance. Basically I have a relatively constant stream of data that
simply has to be written out to disk. So my question is: how does the
On Tue, May 06, 2014 at 11:52:58AM +0200, Wolfgang Mader wrote:
Dear list,
I am running btrfs on Arch Linux ARM (Linux 3.14.2, Btrfs v3.14.1). I can run
scrub w/o errors, but I never get stats from scrub status
What I get is
btrfs scrub status /pools/dataPool
scrub status for
On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
On 06.05.2014 12:59, Hugo Mills wrote:
On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
Hello all!
I would like to use btrfs (or anything else actually) to maximize raid0
performance. Basically I have a
On 06.05.2014 13:19, Hugo Mills wrote:
On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
On 06.05.2014 12:59, Hugo Mills wrote:
On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
Hello all!
I would like to use btrfs (or anything else actually) to maximize
On Tue, May 06, 2014 at 01:26:44PM +0200, Hendrik Siedelmann wrote:
On 06.05.2014 13:19, Hugo Mills wrote:
On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
On 06.05.2014 12:59, Hugo Mills wrote:
On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
Hello all!
On 06.05.2014 13:46, Hugo Mills wrote:
On Tue, May 06, 2014 at 01:26:44PM +0200, Hendrik Siedelmann wrote:
On 06.05.2014 13:19, Hugo Mills wrote:
On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
On 06.05.2014 12:59, Hugo Mills wrote:
On Tue, May 06, 2014 at 12:41:38PM
On Mon, May 05, 2014 at 07:07:29PM +0200, Brendan Hide wrote:
In the case above, because the filesystem is only 55% full, I can
ask balance to rewrite all chunks that are less than 55% full:
legolas:~# btrfs balance start -dusage=50 /mnt/btrfs_pool1
-dusage=50 will balance all chunks that
Hi, Marc. Inline below. :)
On 2014/05/06 02:19 PM, Marc MERLIN wrote:
On Mon, May 05, 2014 at 07:07:29PM +0200, Brendan Hide wrote:
In the case above, because the filesystem is only 55% full, I can
ask balance to rewrite all chunks that are less than 55% full:
legolas:~# btrfs balance start
On Tue, May 06, 2014 at 06:30:31PM +0200, Brendan Hide wrote:
Hi, Marc. Inline below. :)
On 2014/05/06 02:19 PM, Marc MERLIN wrote:
On Mon, May 05, 2014 at 07:07:29PM +0200, Brendan Hide wrote:
In the case above, because the filesystem is only 55% full, I can
ask balance to rewrite all
Hi,
instead of extending the BTRFS_IOCTL_DEV_INFO ioctl, why not add a field
under /sys/fs/btrfs/UUID/? Something like /sys/fs/btrfs/UUID/missing_device
BR
G.Baroncelli
On 05/06/2014 08:33 AM, Qu Wenruo wrote:
The original btrfs code will not detect any missing device, since there is
no
Hi
I tried with a newer version of btrfs, but am still getting the same error.
checking extents
checking free space cache
checking fs roots
root 5 inode 5769204 errors 2001, no inode item, link count wrong
unresolved ref dir 5783881 index 3 namelen 38 name
Brendan Hide posted on Sun, 04 May 2014 09:54:38 +0200 as excerpted:
From the man page section on -c:
You must not specify clone sources unless you guarantee that these
snapshots are exactly in the same state on both sides, the sender and
the receiver. It is allowed to omit the '-p
Hugo Mills posted on Sun, 04 May 2014 19:31:55 +0100 as excerpted:
My proposal was simply a description mechanism, not an
implementation. The description is N-copies, M-device-stripe,
P-parity-devices (NcMsPp), and (more or less comfortably) covers at
minimum all of the current and
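To make the notation concrete, here is a toy formatter for the NcMsPp description; the profile-to-NcMsPp mappings in main() are my reading of the proposal (with example stripe widths), not an official table:

#include <stdio.h>

struct replication {
	int copies;	/* N in Nc */
	int stripes;	/* M in Ms, 0 = not striped */
	int parity;	/* P in Pp */
};

static void print_ncmspp(const char *name, struct replication r)
{
	printf("%-8s -> %dc", name, r.copies);
	if (r.stripes)
		printf("%ds", r.stripes);
	if (r.parity)
		printf("%dp", r.parity);
	printf("\n");
}

int main(void)
{
	/* Stripe widths below are just example device counts. */
	print_ncmspp("single", (struct replication){ 1, 0, 0 });
	print_ncmspp("raid1",  (struct replication){ 2, 0, 0 });
	print_ncmspp("raid0",  (struct replication){ 1, 2, 0 });
	print_ncmspp("raid10", (struct replication){ 2, 2, 0 });
	print_ncmspp("raid5",  (struct replication){ 1, 2, 1 });
	print_ncmspp("raid6",  (struct replication){ 1, 2, 2 });
	return 0;
}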
Brendan Hide posted on Mon, 05 May 2014 08:55:55 +0200 as excerpted:
You are 100% right, though. The scale is very small. By negligible, I mean
the penalty is at most a few CPU cycles. When compared to the wait time on
a spindle, it really doesn't matter much.
The analogy I've used before is that of
Marc MERLIN posted on Sat, 03 May 2014 17:47:32 -0700 as excerpted:
Is there any functional difference between
mount -o subvol=usr /dev/sda1 /usr
and
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr
?
Brendan answered the primary aspect of this well so I won't
Marc MERLIN posted on Sat, 03 May 2014 17:52:57 -0700 as excerpted:
(more questions I'm asking myself while writing my talk slides)
I know Suse uses btrfs to roll back filesystem changes.
So I understand how you can take a snapshot before making a change, but
not how you revert to that
Marc MERLIN posted on Sun, 04 May 2014 22:06:17 -0700 as excerpted:
That's true, but in this case I barely see the point of -m single vs -m
raid0. It sounds like they both stripe data anyway, maybe not at the
same level, but if both are striped, then they're almost the same in my
book :)
Marc MERLIN posted on Sun, 04 May 2014 22:04:59 -0700 as excerpted:
On Mon, May 05, 2014 at 01:36:39AM +0100, Hugo Mills wrote:
I'm guessing it involves reflink copies of files from the snapshot
back to the original, and then restarting affected services. That's
about the only other thing
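For reference, a "reflink copy" here boils down to the clone ioctl; a minimal sketch (both files must be on the same btrfs filesystem, and BTRFS_IOC_CLONE is assumed to come from linux/btrfs.h):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
		return 1;
	}

	int src = open(argv[1], O_RDONLY);
	int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (src < 0 || dst < 0) {
		perror("open");
		return 1;
	}

	/* Share src's extents with dst instead of copying the data. */
	if (ioctl(dst, BTRFS_IOC_CLONE, src) < 0) {
		perror("BTRFS_IOC_CLONE");
		return 1;
	}

	close(src);
	close(dst);
	return 0;
}

In practice 'cp --reflink' does the same thing, so a rollback script would not need custom code for this.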
Marc MERLIN posted on Sun, 04 May 2014 18:27:19 -0700 as excerpted:
On Sun, May 04, 2014 at 09:44:41AM +0200, Brendan Hide wrote:
Ah, I see the man page now. This is because SSDs can remap blocks
internally so duplicate blocks could end up in the same erase block
which negates the benefits of
N-copies, M-device-stripe, P-parity-devices (NcMsPp)
At the risk of being the terminology nut, who doesn't even like SNIA's chosen
terminology because it's confusing, I suggest a concerted effort to either use
SNIA's terms anyway, or push back and ask them to make changes before
propagating
On 05/05/2014 11:17 PM, Hugo Mills wrote:
[...]
Does this all make sense? Are there any other options or features
that we might consider for chunk allocation at this point?
The kind of chunk (DATA, METADATA, MIXED) and the subvolume (when/if this
possibility will come)
As how write
Marc MERLIN posted on Sun, 04 May 2014 18:27:19 -0700 as excerpted:
The original reason why I was asking myself this question and trying to
figure out how much better -m raid1 -d raid0 was over -m raid0 -d raid0
I think the summary is that in the first case, you're going to be
able to
Marc MERLIN posted on Sun, 04 May 2014 22:50:29 -0700 as excerpted:
In the second FS:
Label: btrfs_pool1 uuid: [...]
Total devices 1 FS bytes used 442.17GiB
devid 1 size 865.01GiB used 751.04GiB path [...]
The difference is huge between 'Total used' and 'devid used'.
Is
Brendan Hide posted on Mon, 05 May 2014 23:47:17 +0200 as excerpted:
At the moment, we have two chunk allocation strategies: dup and
spread (for want of a better word; not to be confused with the
ssd_spread mount option, which is a whole different kettle of borscht).
The dup allocation
Hendrik Siedelmann posted on Tue, 06 May 2014 12:41:38 +0200 as excerpted:
I would like to use btrfs (or anything else actually) to maximize raid0
performance. Basically I have a relatively constant stream of data that
simply has to be written out to disk.
If flexible parallelization is all
Brendan Hide posted on Tue, 06 May 2014 18:30:31 +0200 as excerpted:
So in my case when I hit that case, I had to use dusage=0 to recover.
Anything above that just didn't work.
I suspect that when using more than zero, the first chunk it wanted to balance
wasn't empty - and it had nowhere to put
On May 6, 2014, at 4:41 AM, Hendrik Siedelmann
hendrik.siedelm...@googlemail.com wrote:
Hello all!
I would like to use btrfs (or anything else actually) to maximize raid0
performance. Basically I have a relatively constant stream of data that
simply has to be written out to disk.
I
On 06.05.2014 23:49, Chris Murphy wrote:
On May 6, 2014, at 4:41 AM, Hendrik Siedelmann
hendrik.siedelm...@googlemail.com wrote:
Hello all!
I would like to use btrfs (or anything else actually) to maximize
raid0 performance. Basically I have a relatively constant stream of
data that simply
Hello,
I've been having a number of issues with processes hanging in btrfs
when using 3.14 kernels. This seems pretty new, as it has been working
fine before. I also rebuilt the filesystem and am still seeing
hangs.
The filesystem is running on dmcrypt which is running on lvm2 which is
running
Original Message
Subject: Re: [RFC PATCH 0/2] Kernel space btrfs missing device detection.
From: Goffredo Baroncelli kreij...@libero.it
To: Qu Wenruo quwen...@cn.fujitsu.com, linux-btrfs@vger.kernel.org
Date: 2014-05-07 02:10
Hi,
instead of extending the BTRFS_IOCTL_DEV_INFO
On Tue, May 06, 2014 at 08:49:04PM -0300, Kenny MacDermid wrote:
Hello,
I've been having a number of issues with processes hanging in
btrfs when using 3.14 kernels. This seems pretty new, as it has been working
fine before. I also rebuilt the filesystem and am still seeing
hangs.
The
How could BTRFS and a database fight about data recovery?
BTRFS offers similar guarantees about data durability etc. to other journalled
filesystems, and only differs by having checksums, so that while a snapshot might
have half the data that was written by an app, you at least know that the half