On 04/19/2013 02:17 PM, Tejun Heo wrote:
> On Thu, Apr 18, 2013 at 10:57:54PM -0700, Tejun Heo wrote:
>> No wonder this thing crashes. Chris, can't the original bio carry
>> bbio in bi_private and let end_bio_extent_readpage() free the bbio
>> instead of abusing bi_bdev like this?
>
> BTW, I thin
On Thu, Apr 18, 2013 at 10:57:54PM -0700, Tejun Heo wrote:
> No wonder this thing crashes. Chris, can't the original bio carry
> bbio in bi_private and let end_bio_extent_readpage() free the bbio
> instead of abusing bi_bdev like this?
BTW, I think it's a bit too late to fix this properly from bt
(cc'ing btrfs people)
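A minimal sketch of what Tejun is suggesting, assuming the 3.9-era two-argument bi_end_io signature; the context struct and helper names below are invented for illustration and are not the actual btrfs code:

#include <linux/bio.h>
#include <linux/slab.h>
#include "volumes.h"	/* struct btrfs_bio */

/* Hypothetical container hung off bi_private so the read completion can
 * reach (and free) the bbio without overloading bi_bdev. */
struct read_completion_ctx {
	struct btrfs_bio *bbio;		/* mapping info, freed at end_io */
	void *orig_private;		/* whatever bi_private carried before */
};

static void end_bio_extent_readpage_sketch(struct bio *bio, int err)
{
	struct read_completion_ctx *ctx = bio->bi_private;

	/* ... usual per-page read completion work; e.g. the mirror number
	 * can be taken from ctx->bbio instead of from a fake bi_bdev ... */

	kfree(ctx->bbio);	/* bbio is released here */
	kfree(ctx);
	bio_put(bio);
}

Keeping bi_bdev pointing at a real block_device also keeps the block-layer tracepoints safe, which is where the ftrace_raw_event_block_bio_complete oops below comes from.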
On Fri, Apr 19, 2013 at 11:33:20AM +0800, Wanlong Gao wrote:
> RIP: 0010:[] []
> ftrace_raw_event_block_bio_complete+0x73/0xf0
...
> [] bio_endio+0x80/0x90
> [] btrfs_end_bio+0xf6/0x190 [btrfs]
> [] bio_endio+0x3d/0x90
> [] req_bio_endio+0xa3/0xe0
Ugh
In fs/btrfs/
Martin wrote:
> Or perhaps include the same Ceph code routines into btrfs?...
That's actually what I was thinking. The CRUSH code is already
pretty well factored out - it lives in net/ceph/crush/ in the kernel source
tree, and is treated as part of 'libceph' (which is used by both th
fget() returns NULL on error, so we should check the return value for NULL.
Signed-off-by: Tsutomu Itoh
---
fs/btrfs/send.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 96a826a..f892e0e 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -4
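The hunks themselves are cut off above; the following is only a sketch of the general pattern the changelog describes (the helper name and the -EBADF return are illustrative, not necessarily what send.c uses):

#include <linux/file.h>
#include <linux/fs.h>

/* Illustrative only: fail cleanly when fget() cannot resolve the fd,
 * instead of dereferencing a NULL struct file later on. */
static int grab_send_file_sketch(int fd, struct file **out)
{
	struct file *filp;

	filp = fget(fd);
	if (!filp)		/* fget() returns NULL for an invalid fd */
		return -EBADF;

	*out = filp;		/* the caller must fput(filp) when finished */
	return 0;
}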
On 18/04/13 20:48, Alex Elsayed wrote:
> Hugo Mills wrote:
>
>> On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
>>> Dear Devs,
>
>>> Note that esata shows just the disks as individual physical disks, 4 per
>>> disk pack. Can physical disks be grouped together to force the RAID data
>>> to
On 18/04/13 20:44, Hugo Mills wrote:
> On Thu, Apr 18, 2013 at 05:29:10PM +0100, Martin wrote:
>> On 18/04/13 15:06, Hugo Mills wrote:
>>> On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
I have a number of esata disk packs holding 4 physical disks
each where
On Wed, Apr 17, 2013 at 07:50:09PM -0600, Matt Pursley wrote:
> Hey All,
>
> Here are the results of making and reading back a 13GB file on
> "mdraid6 + ext4", "mdraid6 + btrfs", and "btrfsraid6 + btrfs".
>
> Seems to show that:
> 1) "mdraid6 + ext4" can do ~1100 MB/s for these sequential reads
Hugo Mills wrote:
> On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
>> Dear Devs,
>> Note that esata shows just the disks as individual physical disks, 4 per
>> disk pack. Can physical disks be grouped together to force the RAID data
>> to be mirrored across all the nominated groups?
>
>
On Thu, Apr 18, 2013 at 05:29:10PM +0100, Martin wrote:
> On 18/04/13 15:06, Hugo Mills wrote:
> > On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
> >> Dear Devs,
> >>
> >> I have a number of esata disk packs holding 4 physical disks each
> >> where I wish to use the disk packs aggregated
1) Right now scrub_stripe() is looping in some unnecessary cases (see the toy
sketch below):
* when the found extent item's objectid is already out of the dev extent's range,
but we haven't finished scanning all the range within the dev extent
* when all the items have been processed, but we haven't finished scanning all the
range
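As a standalone toy of the two exit conditions (plain C, nothing btrfs-specific; scrub_stripe() itself walks the extent tree, this only shows the loop-termination shape the changelog describes):

#include <stdio.h>

/* "Extent items" sorted by start offset, scanned against one dev extent
 * range [dev_start, dev_end).  Toy model only. */
struct item {
	unsigned long long start;
	unsigned long long len;
};

static void scan(const struct item *items, int nr,
		 unsigned long long dev_start, unsigned long long dev_end)
{
	int i;

	for (i = 0; i < nr; i++) {
		if (items[i].start + items[i].len <= dev_start)
			continue;	/* entirely before the dev extent */
		if (items[i].start >= dev_end)
			break;		/* already past the dev extent: stop,
					 * don't keep walking the rest */
		printf("scrub item at %llu, len %llu\n",
		       items[i].start, items[i].len);
	}
	/* Falling out with i == nr is the second case: all items processed,
	 * so there is nothing left worth scanning in the remaining range. */
}

int main(void)
{
	const struct item items[] = { { 0, 64 }, { 64, 64 }, { 1024, 64 } };

	scan(items, 3, 0, 512);
	return 0;
}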
On Thu, Apr 11, 2013 at 06:22:08PM +0200, Stefan Behrens wrote:
> +static char *all_field_items[] = {
> + [BTRFS_LIST_OBJECTID] = "rootid",
> + [BTRFS_LIST_GENERATION] = "gen",
> + [BTRFS_LIST_CGENERATION] = "cgen",
> + [BTRFS_LIST_OGENERATION] = "oge
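The table above is cut off, but its shape is clear: field IDs mapped to user-visible names. Below is a standalone sketch (not the actual btrfs-progs patch; the names are invented) of how such a table can drive a comma-separated --fields=... argument:

#include <stdio.h>
#include <string.h>

/* Cut-down field table and parser, in the spirit of the patch above. */
enum { FIELD_ROOTID, FIELD_GEN, FIELD_CGEN, NR_FIELDS };

static const char *field_names[NR_FIELDS] = {
	[FIELD_ROOTID]	= "rootid",
	[FIELD_GEN]	= "gen",
	[FIELD_CGEN]	= "cgen",
};

/* Mark the requested fields; return -1 on an unknown name. */
static int parse_fields(char *arg, int *selected)
{
	char *tok;
	int i, found;

	for (tok = strtok(arg, ","); tok; tok = strtok(NULL, ",")) {
		found = 0;
		for (i = 0; i < NR_FIELDS; i++) {
			if (!strcmp(tok, field_names[i])) {
				selected[i] = 1;
				found = 1;
				break;
			}
		}
		if (!found) {
			fprintf(stderr, "unknown field: %s\n", tok);
			return -1;
		}
	}
	return 0;
}

int main(void)
{
	char arg[] = "rootid,gen";
	int selected[NR_FIELDS] = { 0 };

	if (parse_fields(arg, selected) == 0)
		printf("rootid:%d gen:%d cgen:%d\n",
		       selected[FIELD_ROOTID], selected[FIELD_GEN],
		       selected[FIELD_CGEN]);
	return 0;
}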
On 18/04/13 15:06, Hugo Mills wrote:
> On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
>> Dear Devs,
>>
>> I have a number of esata disk packs holding 4 physical disks each
>> where I wish to use the disk packs aggregated for 16TB and up to
>> 64TB backups...
>>
>> Can btrfs...?
>>
>> 1:
On Fri, Apr 12, 2013 at 09:44:53AM +0200, Stefan Behrens wrote:
> On Fri, 12 Apr 2013 08:58:27 +0800, Wang Shilong wrote:
> >> "btrfs subvolume list" gets a new option "--fields=..." which allows
> >> to specify which pieces of information about subvolumes shall be
> >> printed. This is necessary b
Apart from the dates, this sounds highly plausible :-)
If the hashing is done before the compression and the compression is
done for isolated blocks, then this could even work!
Any takers? ;-)
For a performance enhancement, keep a hash tree in memory for the "n"
most recently used/seen blocks?
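A standalone toy of the "n most recently seen blocks" idea (a throwaway FNV-1a hash and a tiny ring buffer instead of a real hash tree; an actual dedup scheme would use a strong hash, verify block contents on a hit, and index far more blocks):

#include <stdio.h>
#include <stdint.h>

#define RECENT	8	/* remember hashes of the N most recently seen blocks */

/* Throwaway FNV-1a for illustration only. */
static uint64_t hash_block(const void *data, size_t len)
{
	const unsigned char *p = data;
	uint64_t h = 0xcbf29ce484222325ULL;
	size_t i;

	for (i = 0; i < len; i++) {
		h ^= p[i];
		h *= 0x100000001b3ULL;
	}
	return h;
}

static uint64_t recent[RECENT];
static int next_slot;

/* Return 1 if an identically-hashed block was seen among the last RECENT
 * blocks (a dedup candidate), 0 otherwise; remember new hashes. */
static int seen_recently(uint64_t h)
{
	int i;

	for (i = 0; i < RECENT; i++)
		if (recent[i] == h)
			return 1;
	recent[next_slot] = h;
	next_slot = (next_slot + 1) % RECENT;
	return 0;
}

int main(void)
{
	char a[4096] = "hello", b[4096] = "world";
	int r1, r2, r3;

	r1 = seen_recently(hash_block(a, sizeof(a)));	/* 0: new block */
	r2 = seen_recently(hash_block(b, sizeof(b)));	/* 0: new block */
	r3 = seen_recently(hash_block(a, sizeof(a)));	/* 1: duplicate */
	printf("%d %d %d\n", r1, r2, r3);
	return 0;
}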
On Thu, Apr 18, 2013 at 04:42:18PM +0200, David Sterba wrote:
> xfstests loop has hit this after a day, failing test was 276.
Sorry, it's test 273.
Hi,
xfstests loop has hit this after a day; the failing test was 276. The sources are
the btrfs-next/linus-base branch. I had already hit this some time ago with
3.9.0-rc4-default+.
[64394.422743] BUG: unable to handle kernel NULL pointer dereference at
0078
[64394.426716] IP: [] btrfs_search_slot+0
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
> Dear Devs,
>
> I have a number of esata disk packs holding 4 physical disks each where
> I wish to use the disk packs aggregated for 16TB and up to 64TB backups...
>
> Can btrfs...?
>
> 1:
>
> Mirror data such that there is a copy of dat
Dear Devs,
I have a number of esata disk packs holding 4 physical disks each where
I wish to use the disk packs aggregated for 16TB and up to 64TB backups...
Can btrfs...?
1:
Mirror data such that there is a copy of data on each *disk pack* ?
Note that esata shows just the disks as individual
If one of the copies of the superblock is zeroed, that does not
confirm that btrfs isn't there on that disk. When we have more
than one copy of the superblock, we should rather let the for loop
continue to check the other copies (sketched below).
The following test case and results justify the fix:
mkfs.bt
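The test case above is cut off; below is only a sketch of the loop shape the changelog describes (btrfs_sb_offset(), BTRFS_SUPER_MIRROR_MAX, BTRFS_SUPER_INFO_SIZE and the setget accessor are the real btrfs names, but the function itself and its error handling are simplified for illustration):

/* Assumes btrfs-progs style headers (kerncompat.h, ctree.h, disk-io.h)
 * for u64, struct btrfs_super_block and the helpers used below, plus
 * unistd.h and string.h. */
static int any_valid_super_copy(int fd, char *buf)
{
	struct btrfs_super_block *sb;
	u64 bytenr;
	int i;

	for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) {
		bytenr = btrfs_sb_offset(i);
		if (pread(fd, buf, BTRFS_SUPER_INFO_SIZE, bytenr) !=
		    BTRFS_SUPER_INFO_SIZE)
			continue;	/* short read: try the next copy */

		sb = (struct btrfs_super_block *)buf;
		if (btrfs_super_bytenr(sb) != bytenr ||
		    memcmp(&sb->magic, "_BHRfS_M", 8))
			continue;	/* zeroed or bogus copy: keep checking
					 * the remaining copies */

		return 1;	/* at least one valid copy: btrfs is here */
	}
	return 0;	/* every copy checked, none valid */
}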
Variable 'p' is no longer used, so remove it.
Signed-off-by: Tsutomu Itoh
---
fs/btrfs/send.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index ed897dc..96a826a 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -3479,7 +3479,6 @@ static int __pr