We used to have two chunk allocators, btrfs_alloc_chunk() and
btrfs_alloc_data_chunk(). The former is the more generic one, while the
latter is only used in mkfs and convert to allocate a SINGLE data chunk.
Although btrfs_alloc_data_chunk() has some special hacks to cooperate
with convert, it's quit
The kernel uses a delayed chunk allocation scheme for metadata chunks:
KERNEL:
btrfs_alloc_chunk()
|- __btrfs_alloc_chunk(): Only allocate chunk mapping
|- btrfs_make_block_group(): Add corresponding bg to fs_info->new_bgs
Then at transaction commit time, it finishes the remaining work:
btr
Signed-off-by: Qu Wenruo
---
check/main.c | 22 --
volumes.h    | 22 ++
2 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/check/main.c b/check/main.c
index c051a862eb35..96607f6817af 100644
--- a/check/main.c
+++ b/check/main.c
@@ -7638,28 +
Before this patch, chunk allocation is split into 2 parts:
1) Chunk allocation
Handled by btrfs_alloc_chunk(), which will insert chunk and device
extent items.
2) Block group allocation
Handled by btrfs_make_block_group(), which will insert block group
item and update space info.
How
Used by the later btrfs_alloc_chunk() rework.
Signed-off-by: Qu Wenruo
---
Makefile | 3 +-
kernel-lib/sort.c | 104 ++
kernel-lib/sort.h | 16 +
3 files changed, 122 insertions(+), 1 deletion(-)
create mode 100644 kernel-lib/s
This patchset can be fetched from github:
https://github.com/adam900710/btrfs-progs/tree/libbtrfs_prepare
This patchset unifies a large part of the chunk allocator (free device
extent search) between kernel and btrfs-progs.
It also reuses kernel function structures like btrfs_finish_chunk_alloc()
and btrf
As preparation for creating libbtrfs, which shares code between kernel and
btrfs-progs, this patch mainly unifies the search for free device extents.
The main modifications are:
1) Search for free device extent
Use the kernel method: sort the devices by their max hole size, and
use that sor
Same as kernel declaration.
Signed-off-by: Qu Wenruo
---
utils.c | 2 +-
volumes.c | 6 +++---
volumes.h | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/utils.c b/utils.c
index e9cb3a82fda6..eff5fb64cfd5 100644
--- a/utils.c
+++ b/utils.c
@@ -216,7 +216,7 @@ int btrfs_ad
Just as the kernel's find_free_dev_extent(), allow it to return the maximum
hole size, for us to build the device list for the later chunk allocator rework.
Signed-off-by: Qu Wenruo
---
volumes.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/volumes.c b/volumes.c
index b47ff1f392b5..f40
As part of the effort to unify code and behavior between btrfs-progs and
kernel, copy the btrfs_raid_array from kernel to btrfs-progs.
This way we can later use btrfs_raid_array[] to get the needed raid info
instead of open-coding if-else branches.
Signed-off-by: Qu Wenruo
---
ctree.h | 12
Strangely, we have a level check in btrfs_print_tree() while we don't have
the same check in read_node_slot().
That is to say, for the following corruption, btrfs_search_slot() or
btrfs_next_leaf() can return an invalid leaf:
Parent eb:
node XX level 1
^^^
Child sh
Signed-off-by: Qu Wenruo
---
volumes.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/volumes.c b/volumes.c
index edad367b593c..677d085de96c 100644
--- a/volumes.c
+++ b/volumes.c
@@ -826,7 +826,7 @@ error:
return ret;
}
-#define BTRFS_MAX_DEVS(r) ((BTRFS_LEA
On 2018-02-09 15:23, Ralph Gauges wrote:
> Hi Qu,
>
> I applied the patch to the sources of v4.15 and ran it in gdb. This is
> the result.
>
> (gdb) set args check /dev/sdf1
> (gdb) run
> Starting program: /home/gauges/Applications/bin/btrfs check /dev/sdf1
> [Thread debugging using libthread_
On Mon, Feb 05, 2018 at 04:15:02PM -0700, Liu Bo wrote:
> Btrfs tries its best to tolerate write errors, but kind of silently
> (except some messages in kernel log).
>
> For raid1 and raid10, this is usually not a problem because there is a
> copy as backup, while for parity based raid setup, i.e.
Hi,
> -----Original Message-----
> From: David Sterba [mailto:dste...@suse.cz]
> Sent: Friday, February 09, 2018 2:02 AM
> To: Gu, Jinxiang/顾 金香
> Cc: linux-btrfs@vger.kernel.org; dste...@suse.cz
> Subject: Re: [PATCH v5 0/3] Add support for export testsuits
>
> On Thu, Feb 08, 2018 at 02:34:17P
On 2018-02-09 03:07, Liu Bo wrote:
> On Thu, Feb 08, 2018 at 07:52:05PM +0100, Goffredo Baroncelli wrote:
>> On 02/06/2018 12:15 AM, Liu Bo wrote:
>> [...]
>>> One way to mitigate the data loss pain is to expose 'bad chunks',
>>> i.e. degraded chunks, to users, so that they can use 'btrfs balanc
I've installed openSUSE Tumbleweed on a VM and checked how the disk is
partitioned with btrfs, what fstab looks like, how snapper works and also
what the differences to my system are. I've decided to leave it like
it is for now and next time use the guide from the link provided by
@Andrei.
Thanks!
R
On Thu, Feb 08, 2018 at 07:52:05PM +0100, Goffredo Baroncelli wrote:
> On 02/06/2018 12:15 AM, Liu Bo wrote:
> [...]
> > One way to mitigate the data loss pain is to expose 'bad chunks',
> > i.e. degraded chunks, to users, so that they can use 'btrfs balance'
> > to relocate the whole chunk and get
On Thu, Feb 08, 2018 at 06:25:17PM +0200, Nikolay Borisov wrote:
> list_first_entry is essentially a wrapper over container_of. The latter
> can never return null even if it's working on inconsistent list since it
> will either crash or return some offset in the wrong struct.
> Additionally, for th
On 02/06/2018 12:15 AM, Liu Bo wrote:
[...]
> One way to mitigate the data loss pain is to expose 'bad chunks',
> i.e. degraded chunks, to users, so that they can use 'btrfs balance'
> to relocate the whole chunk and get the full raid6 protection again
> (if the relocation works).
[...]
[...]
> +
On Thu, Feb 08, 2018 at 02:34:17PM +0800, Gu Jinxiang wrote:
> Achieved:
> 1. export testsuite by:
> $ make testsuite
> files list in testsuites-list will be added into tarball
> btrfs-progs-tests.tar.gz.
>
> 2. after decompress btrfs-progs-tests.tar.gz, run test by:
> $ TEST=`MASK` ./tests/mkf
list_first_entry is essentially a wrapper over container_of. The latter
can never return null even if it's working on an inconsistent list, since it
will either crash or return some offset in the wrong struct.
Additionally, for the dirty_bgs list the iteration is done under
dirty_bgs_lock which ensures
The reason why io_bgs can be modified without holding any lock is
non-obvious. Document it and reference that documentation from the
respective call sites.
Signed-off-by: Nikolay Borisov
---
fs/btrfs/disk-io.c | 4
fs/btrfs/extent-tree.c | 7 ++-
fs/btrfs/transaction.h | 15 +++
On Thu, Feb 8, 2018 at 5:32 AM, Andrei Borzenkov wrote:
> 08.02.2018 06:03, Chris Murphy wrote:
>> On Wed, Feb 7, 2018 at 6:26 PM, Nick Gilmour wrote:
>>> Hi all,
>>>
>>> I have successfully restored a snapshot of root but now when I try to
>
> How exactly was it done?
>
>>> make a new snapshot I
On Thu, Feb 08, 2018 at 01:08:56PM +0800, Gu Jinxiang wrote:
> Since tests/cli-tests/002-balance-full-no-filters/test.sh need
> the mkfs.btrfs for prerequisite.
> So add the dependency in Makefile.
>
> Signed-off-by: Gu Jinxiang
1-3 applied, thanks.
From: Colin Ian King
The function btrfs_test_extent_map requires a void argument list to be
ANSI C compliant, so that it matches the prototype in fs/btrfs/tests/btrfs-tests.h.
Cleans up sparse warning:
fs/btrfs/tests/extent-map-tests.c:346:27: warning: non-ANSI function
declaration of function 'btrfs_te
From 361d37a7d36978020dfb4c11ec1f4800937ccb68 Mon Sep 17 00:00:00 2001
From: Tetsuo Handa
Date: Thu, 8 Feb 2018 10:35:35 +0900
Subject: [PATCH v2] lockdep: Fix fs_reclaim warning.
Dave Jones reported fs_reclaim lockdep warnings.
WARNING: possible
> How much RAM on the machine and how much swap available? This looks like a
> lot of dirty data has accumulated, and then also there's swapping happening.
> Both swap out and swap in.
The machine has 16GB RAM and 40GB swap on an SSD. It's not doing much
besides being my personal file archive, so th
On 02/06/2018 07:15 AM, Liu Bo wrote:
Btrfs tries its best to tolerate write errors, but kind of silently
(except some messages in kernel log).
For raid1 and raid10, this is usually not a problem because there is a
copy as backup, while for parity based raid setup, i.e. raid5 and
raid6, the pr