Hey All,
Here are the results of making and reading back a 13GB file on
"mdraid6 + ext4", "mdraid6 + btrfs", and "btrfsraid6 + btrfs".
Seems to show that:
1) "mdraid6 + ext4" can do ~1100 MB/s for these sequential reads with
either one or two files at once.
2) "btrfsraid6 + btrfs" can do ~1100 MB/s
On Mon, Apr 15, 2013 at 06:26:38PM +0800, Wang Shilong wrote:
> If out of memory happens, we should return -ENOMEM directly to the caller
> rather than continue the work.
Reviewed-by: David Sterba
On Thu, Apr 11, 2013 at 06:30:16PM +0800, Miao Xie wrote:
> In order to avoid this problem, we introduce a lock named super_lock into
> the btrfs_fs_info structure. If we want to update incompat/compat flags
> of the super block, we must hold it.
>
> + /*
> > + * Used to protect the incompat/compat flags
On Wed, Apr 17, 2013 at 10:45:18PM +0200, Vincent wrote:
> Thanks for your review; here is a resend with your 'Reviewed-by'.
No need to do that; Reviewed-by and the other tags are picked up by
maintainers, see
http://git.kernel.org/cgit/linux/kernel/git/josef/btrfs-next.git/commit/?id=086d51b3022700b
This fixes the following errors:
fs/btrfs/reada.c: In function ‘btrfs_reada_wait’:
fs/btrfs/reada.c:958:42: error: invalid operands to binary < (have ‘atomic_t’
and ‘int’)
fs/btrfs/reada.c:961:41: error: invalid operands to binary < (have ‘atomic_t’
and ‘int’)
Signed-off-by: Vincent Stehl
On Tue, Apr 16, 2013 at 11:55 PM, Sander wrote:
> Matt Pursley wrote (ao):
>> I have an LSI HBA card (LSI SAS 9207-8i) with 12 7200rpm SAS drives
>> attached. When it's formated with mdraid6+ext4 I get about 1200MB/s
>> for multiple streaming random reads with iozone. With btrfs in
>> 3.9.0-rc4
On Wed, Apr 17, 2013 at 10:19:09AM +0800, Anand Jain wrote:
>
>
> On 04/16/2013 07:57 PM, David Sterba wrote:
> >On Fri, Apr 12, 2013 at 03:55:06PM +0800, Anand Jain wrote:
> >>If one of the copies of the superblock is zero, that does not
> >>confirm that btrfs isn't on that disk. When
> >
On Wed, Apr 17, 2013 at 12:19:16PM -0400, Josef Bacik wrote:
> The locking order for stuff is
>
> __sb_start_write
> ordered_mutex
>
> but with sync() we don't do __sb_start_write for some strange reason, which
> means that our iput in wait_ordered_extents could start a transaction which
> does
The locking order for stuff is
__sb_start_write
ordered_mutex
but with sync() we don't do __sb_start_write for some strange reason, which
means that our iput in wait_ordered_extents could start a transaction which does
the __sb_start_write while we're holding the ordered_mutex. Fix this by using
On Tue, April 16, 2013 at 14:22 (+0200), Wang Shilong wrote:
>
> Hello Jan, more comments below..
>
> [...snip..]
>
>>
>> +
>> +static long btrfs_ioctl_quota_rescan_status(struct file *file, void __user
>> *arg)
>> +{
>> +struct btrfs_root *root = BTRFS_I(fdentry(file)->d_inode)->root;
>>
From: Wang Shilong
Since all the quota configurations are loaded in memory, we can do the
ioctl checks before operating on the disk. It is safe to do so because
qgroup_ioctl_lock is held outside.
Without these extra checks, it should still be ok to make a user change
for a quota operation
Dave reported a BUG_ON() that happened in end_page_writeback() after an abort.
This happened because we unconditionally call end_page_writeback() in the endio
case, which is right. However, when we abort the transaction we will call
end_page_writeback() on any writeback pages we find, which is wrong
Hello Josef,
It really took me the whole day to track such a strange regression down!
In fact, I should test every patch carefully, even a cleanup patch….
Sorry for the inconvenience.
Wang
> From: Wang Shilong
>
> ulist_add() may return -ENOMEM, fix missing check about
> return va
From: Wang Shilong
ulist_add() may return -ENOMEM, fix missing check about
return value.
Signed-off-by: Wang Shilong
---
Changelog v1->v2:
ulist_add() may return 1, and this is ok. For this case,
btrfs_qgroup_reserve() should return 0; otherwise, I get
a regression when
For created snapshots, the full root_item is copied from the source
root and afterwards selectively modified. The current code forgets
to clear the field received_uuid. The only problem is that it is
confusing when you look at it with 'btrfs subv list', since for
writable snapshots, the contents of