Just small cleanups for the incoming btrfs_alloc_chunk() rework, which is
designed to allow btrfs_alloc_chunk() to allocate a meta chunk even when
the current meta chunks are already full.
The cleanups are quite small; most of them just remove unnecessary
parameters, and make some function
btrfs_reserve_extent() uses int @data to determine whether we're allocating
a data extent, while reusing the parameter later to pass the profile
(data/meta/sys).
This is a little confusing, so this patch follows the kernel and uses
bool @is_data to replace it.
And in btrfs_reserve_extent(), use
The function is not used by anyone outside of volumes.c, so make it
static.
Signed-off-by: Qu Wenruo
---
volumes.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/volumes.c b/volumes.c
index ce3a540578fd..0e50e1d5833e 100644
--- a/volumes.c
+++ b/volumes.c
The @chunk_objectid parameter of btrfs_make_block_group() is always fixed to
BTRFS_FIRST_FREE_OBJECTID, so there is no need to pass it explicitly.
Signed-off-by: Qu Wenruo
---
cmds-check.c | 4 ++--
convert/main.c | 4 +---
ctree.h | 5 ++---
extent-tree.c |
Remove the @trans parameter from find_free_dev_extent_start() and its
callers.
The function itself does a read-only tree search and makes no use of the
transaction.
Signed-off-by: Qu Wenruo
---
volumes.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git
@chunk_tree and @chunk_objectid of a device extent are fixed to
BTRFS_CHUNK_TREE_OBJECTID and BTRFS_FIRST_CHUNK_TREE_OBJECTID
respectively.
There is no need to pass them as parameters explicitly.
Signed-off-by: Qu Wenruo
---
volumes.c | 18 +++---
1 file changed, 7
On 2018-01-03 09:12, Dmitry Katsubo wrote:
> Dear btrfs team,
>
> I am sending a kernel crash report which I observed recently during btrfs
> scrub.
> It looks like scrub itself has completed without errors.
>
> # btrfs scrub status /home
> scrub status for
On Tue, Jan 02, 2018 at 11:13:06AM -0500, Josef Bacik wrote:
> On Wed, Dec 20, 2017 at 03:30:55PM +0100, Jan Kara wrote:
> > On Wed 20-12-17 08:35:05, Dave Chinner wrote:
> > > On Tue, Dec 19, 2017 at 01:07:09PM +0100, Jan Kara wrote:
> > > > On Wed 13-12-17 09:20:04, Dave Chinner wrote:
> > > > >
Dear btrfs team,
I am sending a kernel crash report which I observed recently during btrfs scrub.
It looks like scrub itself has completed without errors.
# btrfs scrub status /home
scrub status for 83a3cb60-3334-4d11-9fdf-70b8e8703167
scrub started at Mon Jan 1 06:52:01 2018 and
On 2018-01-03 09:55, robbieko wrote:
> Hi Qu,
>
> Do you have a patch to reduce meta rsv ?
Not exactly, only for qgroup.
[PATCH v2 10/10] btrfs: qgroup: Use independent and accurate per inode
qgroup rsv
But that could provide enough clues to implement a smaller meta rsv.
My current safe
Hi Qu,
Do you have a patch to reduce meta rsv ?
Hi Peter Grandi,
1. All files have been initialized with dd; no need to change any
metadata.
2. My test has a total volume size of 190G, 128G used, 60G available, but
only 60 MB of dirty pages.
According to the meta rsv rules, 1GB free space up
On Fri, Dec 15, 2017 at 02:03:33PM -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox
>
> This is a simple rename, except that xa_ail becomes ail_head.
>
> Signed-off-by: Matthew Wilcox
That was an eyeful,
Reviewed-by: Darrick J. Wong
tree: https://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-next.git
kill-btree-inode
head: 225f09a5848138ede157eff4aa27bfa70d354fcb
commit: fb6cad3454e3172599129cff50d5234a7abc551c [30/32] Btrfs: kill the
btree_inode
reproduce:
# apt-get install sparse
git checkout
On Tue, Jan 02, 2018 at 10:01:55AM -0800, Darrick J. Wong wrote:
> On Tue, Dec 26, 2017 at 07:58:15PM -0800, Matthew Wilcox wrote:
> > spin_lock_irqsave(&mapping->pages, flags);
> > __delete_from_page_cache(page, NULL);
> > spin_unlock_irqrestore(&mapping->pages, flags);
> >
> > More details
The raid6 corruption is this:
suppose that all disks can be read without problems, but the content
that was read out doesn't match its checksum; currently for raid6
btrfs retries at most twice,
- the 1st retry is to rebuild with all other stripes, which eventually
becomes a raid5 xor rebuild,
- if
There is a scenario that can end up with the rebuild process failing to
return good content, i.e.
suppose that all disks can be read without problems, but the content
that was read out doesn't match its checksum; currently for raid6
btrfs retries at most twice,
- the 1st retry is to rebuild with
This reproduces a scrub bug where scrub is unable to repair raid6
corruption as expected.
The kernel side fixes are
Btrfs: make raid6 rebuild retry more
Btrfs: fix scrub to repair raid6 corruption
Signed-off-by: Liu Bo
---
tests/btrfs/158 | 114
2018-01-02 21:31 GMT+03:00 Liu Bo :
> On Sat, Dec 30, 2017 at 11:32:04PM +0300, Timofey Titovets wrote:
>> Currently the btrfs raid1/10 balancer balances requests to mirrors
>> based on pid % num of mirrors.
>>
>> Make the logic understand:
>> - if one of the underlying devices is non
On Wed, Dec 20, 2017 at 11:59:20PM +0300, Timofey Titovets wrote:
> How reproduce:
> touch test_file
> chattr +C test_file
> dd if=/dev/zero of=test_file bs=1M count=1
> btrfs fi def -vrczlib test_file
> filefrag -v test_file
>
> test_file
> Filesystem type is: 9123683e
> File size of test_file
On Sat, Dec 30, 2017 at 11:32:04PM +0300, Timofey Titovets wrote:
> Currently the btrfs raid1/10 balancer balances requests to mirrors
> based on pid % num of mirrors.
>
> Make the logic understand:
> - if one of the underlying devices is non-rotational
> - Queue length of the underlying devices
>
> By default
On Sat, Dec 23, 2017 at 09:39:00AM +0200, Nikolay Borisov wrote:
>
>
> On 22.12.2017 21:07, Liu Bo wrote:
> > On Fri, Dec 22, 2017 at 10:56:31AM +0200, Nikolay Borisov wrote:
> >>
> >>
> >> On 22.12.2017 00:42, Liu Bo wrote:
> >>> This is adding a tracepoint 'btrfs_handle_em_exist' to help debug
On Tue, 2018-01-02 at 17:50 +0100, Jan Kara wrote:
> On Fri 22-12-17 07:05:53, Jeff Layton wrote:
> > From: Jeff Layton
> >
> > We only really need to update i_version if someone has queried for it
> > since we last incremented it. By doing that, we can avoid having to
> >
On Tue, 2018-01-02 at 17:20 +, David Howells wrote:
> Jeff Layton wrote:
>
> > Note that AFS has quite a different definition for this counter. AFS
> > only increments it on changes to the data, not for the metadata.
>
> This also applies to AFS directories: create,
On Tue, Dec 26, 2017 at 07:58:15PM -0800, Matthew Wilcox wrote:
> On Tue, Dec 26, 2017 at 07:43:40PM -0800, Matthew Wilcox wrote:
> > Also add the xa_lock() and xa_unlock() family of wrappers to make it
> > easier to use the lock. If we could rely on -fplan9-extensions in
> > the
On Mon, Dec 18, 2017 at 05:46:59PM +0800, Qu Wenruo wrote:
>
>
> On 2017-12-18 17:08, Anand Jain wrote:
> > Update btrfs_check_rw_degradable() to check against the given
> > device if it's lost.
> >
> > We can use this function to know if the volume is going to be
> > in degraded mode OR failed
On Thu, Dec 28, 2017 at 09:18:07PM +0100, David Disseldorp wrote:
> On Sun, 24 Dec 2017 13:31:40 +0100, Ceriel Jacobs wrote:
>
> > Saving:
> > 1. ± 0.4 seconds of boot time (10% of boot until root)
> > 2. ± 150k of RAM
> > 3. ± 75k of disk space
>
> Thanks for bringing this up - I'm also
Recent patches reworking the mount path left some unused parameters. We
pass a vfsmount to mount_subvol; the flags and data (i.e. mount options)
have already been applied and we will not need them.
Signed-off-by: David Sterba
---
fs/btrfs/super.c | 6 ++
1 file changed, 2
Jeff Layton wrote:
> Note that AFS has quite a different definition for this counter. AFS
> only increments it on changes to the data, not for the metadata.
This also applies to AFS directories: create, mkdir, unlink, rmdir, link,
symlink, rename, and mountpoint
On Thu, Dec 14, 2017 at 05:28:00PM +0900, Misono, Tomohiro wrote:
> Long ago, commit edf24abe51493 ("btrfs: sanity mount option parsing and
> early mount code") split the btrfs_parse_options() into two parts
> (btrfs_parse_early_options() and btrfs_parse_options()). As a result,
>
On Fri 22-12-17 18:54:57, Jeff Layton wrote:
> On Sat, 2017-12-23 at 10:14 +1100, NeilBrown wrote:
> > > +#include
> > > +
> > > +/*
> > > + * The change attribute (i_version) is mandated by NFSv4 and is mostly
> > > for
> > > + * knfsd, but is also used for other purposes (e.g. IMA). The
On Fri 22-12-17 07:05:56, Jeff Layton wrote:
> From: Jeff Layton
>
> Since i_version is mostly treated as an opaque value, we can exploit that
> fact to avoid incrementing it when no one is watching. With that change,
> we can avoid incrementing the counter on writes, unless
On Fri 22-12-17 07:05:53, Jeff Layton wrote:
> From: Jeff Layton
>
> We only really need to update i_version if someone has queried for it
> since we last incremented it. By doing that, we can avoid having to
> update the inode if the times haven't changed.
>
> If the times
On Fri, Dec 22, 2017 at 04:23:01PM -0700, Liu Bo wrote:
> In fact nobody is waiting on @wait's waitqueue, so it can be safely
> removed.
>
> Signed-off-by: Liu Bo
Reviewed-by: David Sterba
On Tue, Dec 26, 2017 at 01:57:44PM +0800, Qu Wenruo wrote:
> Btrfs qgroup also supports limiting the usage of specified qgroups.
>
> It's possible to enable qgroup but not enable limits.
> (Most users won't use qgroup limits due to various problems)
>
> So add a new test group 'limit' for btrfs,
On Fri, Dec 15, 2017 at 12:06:18PM +0200, Nikolay Borisov wrote:
> Commit 0e8c36a9fd81 ("Btrfs: fix lots of orphan inodes when the space
> is not enough") changed the way transaction reservation is made in
> btrfs_evict_inode and as a result this function became unused. This has
> been the status
On Wed, Dec 20, 2017 at 03:30:55PM +0100, Jan Kara wrote:
> On Wed 20-12-17 08:35:05, Dave Chinner wrote:
> > On Tue, Dec 19, 2017 at 01:07:09PM +0100, Jan Kara wrote:
> > > On Wed 13-12-17 09:20:04, Dave Chinner wrote:
> > > > On Tue, Dec 12, 2017 at 01:05:35PM -0500, Josef Bacik wrote:
> > > > >
On Fri, Dec 22, 2017 at 12:55:08AM +, Colin King wrote:
> From: Colin Ian King
>
> Label 'retry' is not used, remove it. Cleans up a clang build
> warning:
>
> warning: label ‘retry’ defined but not used [-Wunused-label]
>
> Fixes: b283738ab0ad ("Revert "btrfs:
On Wed, Dec 13, 2017 at 10:25:36AM +0200, Nikolay Borisov wrote:
> Hello,
>
> Here is a series which cleans the btrfs source code of all the redundant
> bio_get/set calls. After it's applied there is only a single bio_get
> invocation
> left in __alloc_device for the flush_bio (and that is
On 01/02/2018 01:45 PM, Marat Khalili wrote:
>> I think the 1-3TB Seagate drives are garbage.
>
> There are known problems with ST3000DM001, but first of all you should not
> put PC-oriented disks in RAID, they are not designed for it on multiple
> levels (vibration tolerance, error
> When testing Btrfs with fio 4k random write,
That's an exceptionally narrowly defined workload. Also it is
narrower than that, because it must be without 'fsync' after
each write, or else there would be no accumulation of dirty
blocks in memory at all.
> I found that volume with smaller free
> I think the 1-3TB Seagate drives are garbage.
There are known problems with ST3000DM001, but first of all you should not put
PC-oriented disks in RAID, they are not designed for it on multiple levels
(vibration tolerance, error reporting...) There are similar horror stories
about people
> -----Original Message-----
> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
> ow...@vger.kernel.org] On Behalf Of ein
> Sent: Tuesday, 2 January 2018 9:03 PM
> To: swest...@gmail.com; Kai Krakow
> Cc: linux-btrfs@vger.kernel.org
> Subject: Re: A Big Thank
On 01/01/2018 08:44 PM, Stirling Westrup wrote:
> On Mon, Jan 1, 2018 at 7:15 AM, Kai Krakow wrote:
>> On Mon, 01 Jan 2018 18:13:10 +0800, Qu Wenruo wrote:
>>
>>> On 2018-01-01 08:48, Stirling Westrup wrote:
>>>> 1) I had a 2T drive die with exactly 3 hard-sector
On 2018-01-02 15:51, robbieko wrote:
> Hi All,
>
> When testing Btrfs with fio 4k random write, I found that volume with
> smaller free space available has lower performance.
>
> It seems that the smaller the free space of the volume is, the smaller
> the amount of dirty pages the filesystem can have.
>