It's now gone back to a pattern from a full week ago:
(gdb) bt
#0 0x0042d576 in read_extent_buffer ()
#1 0x0041ee79 in btrfs_check_node ()
#2 0x00420211 in check_block ()
#3 0x00420813 in btrfs_search_slot ()
#4 0x00427bb4 in btrfs_read_block_groups ()
On 19/11/13 19:24, deadhorseconsulting wrote:
> Interesting, this confirms what I was observing.
> Given the wording in man pages for "-m" and "-d" which states "Specify
> how the metadata or data must be spanned across the devices
> specified."
> I took "devices specified" to literally mean the de
On 19/11/13 23:16, Duncan wrote:
> So we have:
>
> 1) raid1 is exactly two copies of data, paired devices.
>
> 2) raid0 is a stripe exactly two devices wide (reinforced by the fact
> that reading a stripe takes only two devices), so again paired devices.
Which is fine for some occasions and a very good star
On Tue, Nov 19, 2013 at 07:56:35PM -0800, Kees Cook wrote:
> Hi!
>
> Which tree is 'devel-snb'? I don't see that on the kernel.org trees.
It's my local merge branch, based on the latest upstream release.
Let's CC the btrfs developers for this warning. :)
Thanks,
Fengguang
> On Tue, Nov 19, 201
Dear list members,
While I was defragging my file system, the following warning showed up in the
dmesg:
[ 6323.296521] ------------[ cut here ]------------
[ 6323.296551] WARNING: CPU: 5 PID: 13598 at
/home/abuild/rpmbuild/BUILD/kernel-
desktop-3.12.0/linux-3.12/fs/btrfs/backref.c:934 find_pare
On Tue, Nov 19, 2013 at 4:54 PM, Chris Murphy
wrote:
> If anything, I'd like to see two implementations of RAID 6 dual
> parity. The existing implementation in the md driver and btrfs could
> remain the default, but users could opt into Cauchy matrix based dual
> parity which would then enable the
We hit a forever loop when doing balance relocation. The reason
is that we first reserve 4M (node size is 16k), and within the
transaction we try to add an extra reservation for snapshot roots.
This returns -EAGAIN if there is already a thread flushing space to
reserve space. We will do this again and
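A minimal user-space model of the loop being described (names and control
flow are illustrative stand-ins, not the actual btrfs relocation code):

/* relocation_loop.c - toy model of the livelock, not btrfs code */
#include <stdio.h>
#include <errno.h>

static int flushing = 1;  /* stands in for "a thread is flushing space" */

/* stands in for the extra snapshot-root reservation made inside the
 * transaction; it fails with -EAGAIN as long as a flusher is running */
static int reserve_extra(void)
{
    return flushing ? -EAGAIN : 0;
}

int main(void)
{
    int attempts = 0;

    for (;;) {
        /* 1) reserve 4M up front (node size 16k) -- assumed fine
         * 2) inside the transaction, try the extra reservation */
        if (reserve_extra() != -EAGAIN)
            break;
        /* nothing ever clears `flushing`, so restarting the
         * transaction just repeats the same failure: a livelock.
         * Capped here so the demo terminates. */
        if (++attempts >= 5) {
            printf("-EAGAIN %d times in a row: forever loop\n", attempts);
            return 1;
        }
    }
    return 0;
}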
On Nov 19, 2013, at 3:51 PM, Drew wrote:
> I'm not going to claim any expert status on this discussion (the
> theory makes my head spin) but I will say I agree with Andrea as far
> as preferring his implementation for triple parity and beyond.
>
> PSHUFB has been around on the Intel platform since
Hugo Mills posted on Tue, 19 Nov 2013 09:06:02 +0000 as excerpted:
> This will happen with RAID-10. The allocator will write stripes as wide
> as it can: in this case, the first stripes will run across all 8
> devices, until the SSDs are full, and then will write across the
> remaining 4 devices.
I'm not going to claim any expert status on this discussion (the
theory makes my head spin) but I will say I agree with Andrea as far
as preferring his implementation for triple parity and beyond.
PSHUFB has been around on the Intel platform since the Core 2 introduced
it as part of SSSE3 back in Q1 20
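For readers who don't know the instruction: PSHUFB performs 16 parallel
byte-table lookups, which is what makes nibble-table GF(2^8) multiplication
fast. A self-contained sketch of the trick (my illustration, using the
RAID-6 polynomial 0x11d and a fixed multiply-by-2 constant; not code from
any of the implementations discussed here):

/* gf2_pshufb.c - build with: gcc -mssse3 -O2 gf2_pshufb.c */
#include <tmmintrin.h>   /* SSSE3: _mm_shuffle_epi8 == PSHUFB */
#include <stdint.h>
#include <stdio.h>

/* Scalar multiply-by-2 in GF(2^8), RAID-6 polynomial 0x11d. */
static uint8_t gf_mul2(uint8_t x)
{
    return (uint8_t)((x << 1) ^ ((x & 0x80) ? 0x1d : 0));
}

int main(void)
{
    uint8_t tlo[16], thi[16], in[16], out[16];
    int i;

    /* GF multiplication distributes over XOR, so mul2(hi ^ lo) =
     * mul2(hi) ^ mul2(lo): one 16-entry table per nibble suffices. */
    for (i = 0; i < 16; i++) {
        tlo[i] = gf_mul2((uint8_t)i);        /* low-nibble products  */
        thi[i] = gf_mul2((uint8_t)(i << 4)); /* high-nibble products */
    }
    for (i = 0; i < 16; i++)
        in[i] = (uint8_t)(i * 17 + 3);       /* arbitrary test bytes */

    __m128i x   = _mm_loadu_si128((const __m128i *)in);
    __m128i m0f = _mm_set1_epi8(0x0f);
    __m128i lo  = _mm_and_si128(x, m0f);
    __m128i hi  = _mm_and_si128(_mm_srli_epi16(x, 4), m0f);
    /* two PSHUFBs plus one XOR multiply all 16 bytes by 2 at once */
    __m128i p   = _mm_xor_si128(
        _mm_shuffle_epi8(_mm_loadu_si128((const __m128i *)tlo), lo),
        _mm_shuffle_epi8(_mm_loadu_si128((const __m128i *)thi), hi));
    _mm_storeu_si128((__m128i *)out, p);

    for (i = 0; i < 16; i++)                 /* verify against scalar */
        if (out[i] != gf_mul2(in[i])) {
            printf("mismatch at byte %d\n", i);
            return 1;
        }
    printf("PSHUFB GF(2^8) multiply matches the scalar reference\n");
    return 0;
}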
The inode eviction can be very slow, because during eviction we
tell the VFS to truncate all of the inode's pages. This results
in calls to btrfs_invalidatepage(), which in turn calls
lock_extent_bits() and clear_extent_bit(). These calls result in
too many merges and splits of extent_state
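A toy model of where the split half of that churn comes from (not btrfs
code): clearing state on a subrange strictly inside a larger record forces
that record to be split in two, so every small invalidate inside a big
extent_state allocates and relinks records:

/* split_demo.c - toy model of extent-state splitting, not btrfs code */
#include <stdint.h>
#include <stdio.h>

struct range { uint64_t start, end; };   /* inclusive byte range */

/* Clearing state on [cs, ce] strictly inside record r leaves two
 * remainder records; touching an edge merely shrinks the record. */
static int clear_middle(struct range r, uint64_t cs, uint64_t ce,
                        struct range out[2])
{
    if (cs <= r.start || ce >= r.end)
        return 0;   /* edge case: shrink in place, no split */
    out[0] = (struct range){ r.start, cs - 1 };
    out[1] = (struct range){ ce + 1, r.end };
    return 2;
}

int main(void)
{
    struct range out[2];
    int i, n;

    /* invalidate one 4k page in the middle of a 1M state record */
    n = clear_middle((struct range){ 0, (1 << 20) - 1 }, 4096, 8191, out);
    for (i = 0; i < n; i++)
        printf("remainder: [%llu, %llu]\n",
               (unsigned long long)out[i].start,
               (unsigned long long)out[i].end);
    return 0;
}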
deadhorseconsulting posted on Tue, 19 Nov 2013 13:24:01 -0600 as
excerpted:
> Interesting, this confirms what I was observing.
> Given the wording in man pages for "-m" and "-d" which states "Specify
> how the metadata or data must be spanned across the devices specified."
> I took "devices speci
On 11/19/2013 12:28 PM, Andrea Mazzoleni wrote:
Hi Peter,
Yes, 251 disks for 6 parity.
To build an NxM Cauchy matrix you need to pick N+M distinct values
in GF(2^8), and we have only 2^8 == 256 available.
This means that every row we add for an extra parity level forces us
to remove one of
Interesting, this confirms what I was observing.
Given the wording in man pages for "-m" and "-d" which states "Specify
how the metadata or data must be spanned across the devices
specified."
I took "devices specified" to literally mean the devices specified
after the according switch.
- DHC
On
On Mon, Nov 18, 2013 at 11:08:59PM +0100, Andrea Mazzoleni wrote:
> Hi,
>
> I want to report that I recently implemented support for an
> arbitrary number of parities, which could also be useful for Linux
> RAID and Btrfs, both currently limited to double parity.
>
> In short, to generate the parity
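The excerpt is cut off here, but the construction being described is
presumably the standard Reed-Solomon form, with each parity block a
GF(2^8) linear combination of the data blocks:

\[
  p_i \;=\; \bigoplus_{j=1}^{M} \alpha_{ij}\, d_j ,
  \qquad i = 1, \dots, N ,
\]

where the d_j are the data blocks, the products and the XOR-sum are taken
in GF(2^8), and the coefficients \alpha_{ij} form the Cauchy matrix
discussed elsewhere in the thread; any N erasures are then recovered by
inverting the corresponding N x N submatrix.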
Hi Filipe
On 2013-11-19 17:06, Filipe David Manana wrote:
> On Wed, Nov 13, 2013 at 6:59 PM, Goffredo Baroncelli
> wrote:
>> Hi Filipe,
>>
>> my comments below
>> On 2013-11-13 02:21, Filipe David Borba Manana wrote:
>>> This change adds infrastructure to allow for generic properties for
>>> inod
Hi David,
Just to say that I know your good past work, and it helped me a lot.
Thanks for that!
Unfortunately the Cauchy matrix is not compatible with a triple-parity
implementation using power coefficients. They are different and
incompatible approaches.
I partially agree with your considerations, and
Hi Peter,
Yes, 251 disks for 6 parity.
To build an NxM Cauchy matrix you need to pick N+M distinct values
in GF(2^8), and we have only 2^8 == 256 available.
This means that every row we add for an extra parity level forces us
to remove one of the disk columns.
Note that, in truth, I use an Ex
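The count works out if the truncated last sentence refers to an extended
Cauchy matrix (my reading; a plain Cauchy matrix would give 250):

\[
  N + M \le 2^8 = 256
  \quad\Rightarrow\quad M \le 250 \text{ for } N = 6 \text{ (plain Cauchy)},
\]
\[
  N + M \le 2^8 + 1 = 257
  \quad\Rightarrow\quad M \le 257 - 6 = 251 \text{ (extended Cauchy)},
\]

which matches the "251 disks for 6 parity" figure above.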
On 11/19/13, 11:07 AM, Filipe David Borba Manana wrote:
> From kmemleak:
>
> hex dump (first 32 bytes):
> 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> backtrace:
> [] kmemleak_alloc+0x26/0x50
If the ordered extent's last byte was 1 less than our region's
start byte, we would unnecessarily wait for the completion of
that ordered extent, because it doesn't intersect our target
range.
Signed-off-by: Filipe David Borba Manana
---
fs/btrfs/file.c | 2 +-
1 file changed, 1 insertion(+),
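A toy restatement of the boundary condition being fixed (illustrative
names, not the actual fs/btrfs/file.c code):

/* overlap_demo.c - restatement of the boundary test, illustrative only */
#include <stdint.h>
#include <stdio.h>

/* An ordered extent covering [off, off + len - 1] intersects a wait
 * range starting at `start` only if its last byte reaches start or
 * beyond. The bug: the old test also treated last byte == start - 1
 * (an adjacent, non-overlapping extent) as intersecting, and waited. */
static int intersects(uint64_t off, uint64_t len, uint64_t start)
{
    return off + len > start;
}

int main(void)
{
    /* extent [0, 4095] vs. range starting at 4096: adjacent -> 0 */
    printf("adjacent: %d\n", intersects(0, 4096, 4096));
    /* extent [0, 4096] vs. range starting at 4096: overlaps -> 1 */
    printf("overlap:  %d\n", intersects(0, 4097, 4096));
    return 0;
}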
This change adds infrastructure to allow for generic properties for
inodes. Properties are name/value pairs that can be associated with
inodes for different purposes. They're stored as xattrs with the
prefix "btrfs."
Properties can be inherited - this means when a directory inode has
inheritable p
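A quick sketch of how such a property looks through the ordinary xattr
syscalls; "btrfs.compression" is used as a plausible property name under
the stated "btrfs." prefix, so treat the name as illustrative:

/* prop_demo.c - set and read back a property via the xattr interface */
#include <sys/xattr.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    char value[64];
    ssize_t n;

    /* properties live under the "btrfs." xattr prefix */
    if (setxattr(path, "btrfs.compression", "lzo", 3, 0) != 0)
        perror("setxattr");

    n = getxattr(path, "btrfs.compression", value, sizeof(value) - 1);
    if (n < 0) {
        perror("getxattr");
        return 1;
    }
    value[n] = '\0';
    printf("btrfs.compression = %s\n", value);
    return 0;
}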
From kmemleak:
hex dump (first 32 bytes):
03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
backtrace:
[] kmemleak_alloc+0x26/0x50
[] kmem_cache_alloc+0x114/0x200
[] sysfs_new_dirent+0x51/0x1
On Wed, Nov 13, 2013 at 6:59 PM, Goffredo Baroncelli wrote:
> Hi Filipe,
>
> my comments below
> On 2013-11-13 02:21, Filipe David Borba Manana wrote:
>> This change adds infrastructure to allow for generic properties for
>> inodes. Properties are name/value pairs that can be associated with
>> in
Quoting har...@redhat.com (2013-11-19 05:36:05)
> From: Harald Hoyer
>
[ create new vfsmounts with different states ]
Thanks for resending Harald. I'll give this a shot and see if I can
find any problems with it.
-chris
We don't need to crash hard here; it's just reading a sysfs file. The
values considered in the switch are from a fixed set, so the default
case should not happen at all.
Signed-off-by: David Sterba
---
fs/btrfs/sysfs.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/sysf
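A sketch of the pattern the patch moves to, with made-up names and values
(the real hunk is the fs/btrfs/sysfs.c diff above): return a harmless
fallback string from the "impossible" default case instead of crashing:

#include <stdio.h>

/* illustrative only -- not the actual fs/btrfs/sysfs.c code */
static const char *feature_state_name(int state)
{
    switch (state) {
    case 0:  return "disabled";
    case 1:  return "enabled";
    case 2:  return "changeable";
    default: return "unknown";   /* previously a hard crash */
    }
}

int main(void)
{
    printf("%s\n", feature_state_name(1));   /* enabled */
    printf("%s\n", feature_state_name(7));   /* unknown, no crash */
    return 0;
}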
On Tue, Nov 19, 2013 at 1:00 PM, Pedro Fonseca wrote:
> Hi,
>
> In another test, I've encountered a few situations that triggered a warning
> message in "record_one_backref()". I'm not sure if it's serious but it is
> probably related to the concurrent defragment operations executed during the
> t
Hi,
In another test, I've encountered a few situations that triggered a warning message in "record_one_backref()". I'm not sure if it's serious but it is probably related to
the concurrent defragment operations executed during the test.
Warning dump:
[ 147.558178] ------------[ cut here ]------------
On 11/18/2013 11:53 AM, David Sterba wrote:
> On Sat, Sep 14, 2013 at 01:26:22PM +0200, Harald Hoyer wrote:
>>> Any comments?
>> Not even a "no, we don't want that" ?
>
> Please resend.
>
> david
>
done
From: Harald Hoyer
Given the following /etc/fstab entries:
/dev/sda3 /mnt/foo btrfs subvol=foo,ro 0 0
/dev/sda3 /mnt/bar btrfs subvol=bar,rw 0 0
you can't issue:
$ mount /mnt/foo
$ mount /mnt/bar
You would have to do:
$ mount /mnt/foo
$ mount -o remount,rw /mnt/foo
$ mount --bind -o remount,
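The truncated last command corresponds to a bind mount followed by a
per-mount-point read-only remount. The same workaround expressed through
mount(2), as a sketch reusing the /mnt/foo path from the example (needs
root; error handling kept minimal):

/* bind_ro.c - per-mount-point read-only via bind + remount */
#include <sys/mount.h>
#include <stdio.h>

int main(void)
{
    /* equivalent of: mount --bind /mnt/foo /mnt/foo */
    if (mount("/mnt/foo", "/mnt/foo", NULL, MS_BIND, NULL) != 0)
        perror("bind");

    /* make just this mount point read-only, leaving other mounts
     * of the same filesystem writable */
    if (mount(NULL, "/mnt/foo", NULL,
              MS_BIND | MS_REMOUNT | MS_RDONLY, NULL) != 0)
        perror("remount ro");
    return 0;
}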
On 19/11/13 00:25, H. Peter Anvin wrote:
> On 11/18/2013 02:35 PM, Andrea Mazzoleni wrote:
>> Hi Peter,
>>
>> The Cauchy matrix has the mathematical property that it and all of its
>> submatrices are non-singular. So, we are sure that we can always
>> solve the equations to recover the data disk
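For reference, the determinant identity behind that non-singularity claim
(standard linear algebra, not quoted from the thread): a Cauchy matrix has
entries a_{ij} = 1/(x_i + y_j) with the x_i distinct, the y_j distinct,
and every x_i + y_j nonzero, and

\[
  \det\!\left(\frac{1}{x_i + y_j}\right)_{i,j=1}^{n}
  = \frac{\prod_{i<j} (x_j - x_i)(y_j - y_i)}{\prod_{i,j} (x_i + y_j)}
  \;\neq\; 0 ,
\]

so the matrix is invertible; every square submatrix is again a Cauchy
matrix, hence also invertible. In GF(2^8), addition and subtraction are
both XOR.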
Hi Liu,
Sorry, somehow I missed your email.
Let me know if you need additional information. Here's the result of "objdump -d -S"
near "run_clustered_refs+0x877":
c12a854f:  83 c4 0c    add    $0xc,%esp
c12a8552:  eb 3f       jmp    c12a8593
* a node might live in
On Mon, Nov 18, 2013 at 11:12:03PM -0600, deadhorseconsulting wrote:
> In theory (going by the man page and available documentation, it's not
> 100% clear): does the following command actually work as advertised,
> specifying that metadata should be placed and kept only on the
> "devices" specified