This is a consolidation of z3fold optimizations and fixes done so far, revised
after comments from Dan [1].
The coming patches are to be applied on top of the following commit:
commit 07cfe852286d5e314f8cd19781444e12a2b6cdf3
Author: zhong jiang
Date: Tue Dec 20 11:53:40 2016 +1100
On Thu, Dec 22, 2016 at 10:55 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Sun, Dec 18, 2016 at 3:15 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> On Tue, Nov 29, 2016 at 11:39 PM, Andrew Morton
>> <a...@linux-foundation.org> wrote:
>>> On Tue, 29 Nov 2016 17:33:19 -0500 Dan Streetman wrote:
>>>
>>>> On Sat
On Tue, Nov 29, 2016 at 11:39 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Tue, 29 Nov 2016 17:33:19 -0500 Dan Streetman <ddstr...@ieee.org> wrote:
>
>> On Sat, Nov 26, 2016 at 2:15 PM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> > Here come 2 patches with z3fold fixes for chunks counting and locking. As
>> > commi
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 108
1 file changed, 57 insertions(+), 51 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 729a2da..8dcf35e 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -52,6 +52,7 @@ enum buddy
On Fri, Nov 25, 2016 at 7:25 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> If a z3fold page couldn't be compacted, we don't want it to be
>> used for next object allocation in the first place.
>
> why? !compacted can only mean 1) already
lru entry).
[1] https://lkml.org/lkml/2016/11/25/628
[2] http://www.spinics.net/lists/linux-mm/msg117227.html
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index efbcfcc..729a2da 100644
big").
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 161
1 file changed, 87 insertions(+), 74 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 7ad70fa..efbcfcc 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
es that weren't compacted") and
applied the coming 2 instead.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
[1] https://lkml.org/lkml/2016/11/25/595
On Fri, Nov 25, 2016 at 7:33 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Fri, Nov 25, 2016 at 11:25 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> On Fri, Nov 25, 2016 at 4:59 PM, Dan Streetman <ddstr...@ieee.org> wrote:
>>> On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool wrote:
>>>> Currently the whole kernel bui
On Fri, Nov 25, 2016 at 10:17 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Fri, Nov 25, 2016 at 9:43 AM, Dan Streetman <ddstr...@ieee.org> wrote:
>> On Thu, Nov 3, 2016 at 5:04 PM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>>> z3fold_compact_page() currently only handles the situation when
>>> there's a single middle c
On Fri, Nov 25, 2016 at 4:59 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> Currently the whole kernel build will be stopped if the size of
>> struct z3fold_header is greater than the size of one chunk, which
>> is 64 bytes by de
On Fri, Nov 25, 2016 at 9:41 AM, Arnd Bergmann <a...@arndb.de> wrote:
> On Friday, November 25, 2016 8:38:25 AM CET Vitaly Wool wrote:
>> >> diff --git a/mm/z3fold.c b/mm/z3fold.c
>> >> index e282ba073e77..66ac7a7dc934 100644
>> >> --- a/mm/z3fold.c
>> >> +++
Hi Joe,
On Thu, Nov 24, 2016 at 6:08 PM, Joe Perches wrote:
> On Thu, 2016-11-24 at 17:31 +0100, Arnd Bergmann wrote:
>> Printing a size_t requires the %zd format rather than %d:
>>
>> mm/z3fold.c: In function ‘init_z3fold’:
>> include/linux/kern_levels.h:4:18: error: format ‘%d’ expects argument of
>> type ‘int’, but argument 2 has type ‘long unsigned int’ [-Werror=format=]
>
> Fixes: 50a50d2676c4 ("z3fold: don't fail kernel build if z3fold_header is too
> big")
> Signed-off-by: Arnd Bergmann <a...@arndb.de>
Acked-by: Vitaly Wool <vitalyw...@gmail.com>
And thanks :)
~vitaly
> ---
> mm/z3fold.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
5-7% improvement in randrw fio tests and
about 10% improvement in fio sequential read/write.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 22 +-
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index ffd9353..e282ba0 100644
--- a/mm/z3fold.c
by z3fold_alloc(), bail out from
z3fold_free() early
Changes from v3 [3]:
- spinlock changed to raw spinlock to avoid BUILD_BUG_ON trigger
[1] https://lkml.org/lkml/2016/11/5/59
[2] https://lkml.org/lkml/2016/11/8/400
[3] https://lkml.org/lkml/2016/11/9/146
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm
.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 7ad70fa..ffd9353 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -870,10 +870,15 @@ MODULE_ALIAS("zpool-z3fold");
static int __init i
Coming is the patchset with the per-page spinlock as the main
modification, and two smaller dependent patches, one of which
removes build error when the z3fold header size exceeds the
size of a chunk, and the other puts non-compacted pages to the
end of the unbuddied list.
Signed-off-by: Vitaly
On Tue, Nov 15, 2016 at 1:33 AM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Fri, 11 Nov 2016 14:02:07 +0100 Vitaly Wool <vitalyw...@gmail.com> wrote:
>
>> If a z3fold page couldn't be compacted, we don't want it to be
>> used for next object allocation in the first place. It makes more
>> sense to add it to
5-7% improvement in randrw fio tests and
about 10% improvement in fio sequential read/write.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 5fe2652..eb8f9a0 100644
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index cd3713d..5fe2652 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -866,10 +866,15 @@ MODULE_ALIAS("zpool-z3fold");
static int __init init_z
by z3fold_alloc(), bail out from
z3fold_free() early
Changes from v3 [3]:
- spinlock changed to raw spinlock to avoid BUILD_BUG_ON trigger
[1] https://lkml.org/lkml/2016/11/5/59
[2] https://lkml.org/lkml/2016/11/8/400
[3] https://lkml.org/lkml/2016/11/9/146
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c
to spinlocks
- no read/write locks, just per-page spinlock
Changes from v2 [2]:
- if a page is taken off its list by z3fold_alloc(), bail out from
z3fold_free() early
[1] https://lkml.org/lkml/2016/11/5/59
[2] https://lkml.org/lkml/2016/11/8/400
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 137
to spinlocks
- no read/write locks, just per-page spinlock
[1] https://lkml.org/lkml/2016/11/5/59
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 123 +---
1 file changed, 85 insertions(+), 38 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
On Sun, Nov 6, 2016 at 12:38 AM, Andi Kleen <a...@firstfloor.org> wrote:
> Vitaly Wool <vitalyw...@gmail.com> writes:
>
>> Most of z3fold operations are in-page, such as modifying z3fold
>> page header or moving z3fold objects within a page. Taking
>> per-pool spinlock to protect per-page objects is therefore
>
one directly to the
z3fold header makes the latter quite big on some systems so that
it won't fit in a single chunk.
This patch implements a custom per-page read/write locking mechanism
which is lightweight enough to fit into the z3fold header.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 148
On Thu, Nov 3, 2016 at 11:17 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Thu, 3 Nov 2016 22:24:07 +0100 Vitaly Wool <vitalyw...@gmail.com> wrote:
>
>> On Thu, Nov 3, 2016 at 10:14 PM, Andrew Morton
>> <a...@linux-foundation.org> wrote:
>> > On Thu, 3 Nov 2016 22:00:58 +0100 Vitaly Wool wrote:
>> >
On Thu, Nov 3, 2016 at 10:16 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Thu, 3 Nov 2016 22:04:28 +0100 Vitaly Wool <vitalyw...@gmail.com> wrote:
>
>> z3fold_compact_page() currently only handles the situation when
>> there's a single middle chunk within the z3fold page. However it
>> may be worth it to
On Thu, Nov 3, 2016 at 10:14 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Thu, 3 Nov 2016 22:00:58 +0100 Vitaly Wool <vitalyw...@gmail.com> wrote:
>
>> This patch converts pages_nr per-pool counter to atomic64_t.
>
> Which is slower.
>
> Presumably there is a reason for making this change. This reas
, using BIG_CHUNK_GAP define as
a threshold for middle chunk to be worth moving.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 60 +++-
1 file changed, 47 insertions(+), 13 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 4d02280
This patch converts pages_nr per-pool counter to atomic64_t.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 26 +++---
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 8f9e89c..4d02280 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
On Tue, Nov 1, 2016 at 9:16 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Thu, Oct 27, 2016 at 7:12 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> Mapping/unmapping goes with no actual modifications so it makes
>> sense to only take a read lock in map/unmap functions.
>>
>> This change gives up to 10%
On Tue, Nov 1, 2016 at 9:03 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Thu, Oct 27, 2016 at 7:08 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> This patch converts pages_nr per-pool counter to atomic64_t.
>> It also introduces a new counter, unbuddied_nr,
Linus's tree.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 166 ++--
1 file changed, 140 insertions(+), 26 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 014d84f..cc26ff5 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -27,6 +27,7
This patch converts pages_nr per-pool counter to atomic64_t.
It also introduces a new counter, unbuddied_nr, which is
atomic64_t, too, to track the number of unbuddied (compactable)
z3fold pages.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 33 +
1 file changed
=280249KB/s, maxb=281130KB/s,
mint=839218msec, maxt=841856msec
Run status group 1 (all jobs):
READ: io=2700.0GB, aggrb=5210.7MB/s, minb=444640KB/s, maxb=447791KB/s,
mint=526874msec, maxt=530607msec
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 44
.
The patchset thus implements in-page compaction worker for
z3fold, preceded by some code optimizations and preparations
which, again, deserved to be separate patches.
Changes compared to v2:
- more accurate accounting of unbuddied_nr, per Dan's
comments
- various cleanups.
Signed-off-by: Vitaly
On Thu, Oct 20, 2016 at 10:15 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Wed, Oct 19, 2016 at 12:35 PM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> The per-pool z3fold spinlock should generally be taken only when
>> a non-atomic pool variable is modified. There's no need to take it
>> to map/unmap an object.
On Thu, Oct 20, 2016 at 10:17 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Wed, Oct 19, 2016 at 12:35 PM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> This patch converts pages_nr per-pool counter to atomic64_t.
>> It also introduces a new counter, unbuddied_nr, which is
>> atomic64_t, too, to track th
Linus's tree.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 166 ++--
1 file changed, 140 insertions(+), 26 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 014d84f..cc26ff5 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -27,6 +27,7
Mapping/unmapping goes with no actual modifications so it makes
sense to only take a read lock in map/unmap functions.
This change gives up to 15% performance gain in fio tests.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 44 +++-
1 file changed, 23
This patch converts pages_nr per-pool counter to atomic64_t.
It also introduces a new counter, unbuddied_nr, which is
atomic64_t, too, to track the number of unbuddied (compactable)
z3fold pages.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 33 +
1 file changed
on x86_64 with gcc
6.0) and non-obvious performance benefits
- instead, per-pool spinlock is substituted with rwlock.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
[1] https://lkml.org/lkml/2016/10/15/31
Linus's tree.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 159 ++--
1 file changed, 133 insertions(+), 26 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 329bc26..580a732 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -27,6 +27,7
.
The patchset thus implements in-page compaction worker for
z3fold, preceded by some code optimizations and preparations
which, again, deserved to be separate patches.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
[1] https://lkml.org/lkml/2016/10/15/31
The per-pool z3fold spinlock should generally be taken only when
a non-atomic pool variable is modified. There's no need to take it
to map/unmap an object. This patch introduces a per-page lock that
will be used instead to protect per-page variables in map/unmap
functions.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
This patch converts pages_nr per-pool counter to atomic64_t.
It also introduces a new counter, unbuddied_nr, which is
atomic64_t, too, to track the number of unbuddied (compactable)
z3fold pages.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 33 +
1 file changed
On Tue, Oct 18, 2016 at 7:35 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Tue, Oct 18, 2016 at 12:26 PM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> On 18 Oct 2016 at 18:29, "Dan Streetman" <ddstr...@ieee.org> wrote:
>>
>>
>>>
>>> On Tue, Oct 18, 2016 at 10:51 AM, Vitaly Wool
of buddies.
>>
>> The patch limits first_num to the actual range of possible buddy
>> indexes, which is more reasonable and obvious, with no functional change.
>>
>> Suggested-by: Dan Streetman <ddstr...@ieee.org>
>> Signed-off-by: zhong jiang <zhongji...@huawei.com>
>
> Acked-by: Dan Streetman <ddstr...@ieee.org>
Acked-by: Vitaly Wool <vitalyw...@gmail.com>
>> ---
>> mm/z3fold.c | 10 +++---
>> 1 file changed, 7 insertions(+),
On Tue, Oct 18, 2016 at 4:27 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Mon, Oct 17, 2016 at 10:45 PM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> Hi Dan,
>>
>> On Tue, Oct 18, 2016 at 4:06 AM, Dan Streetman <ddstr...@ieee.org> wrote:
>>> On Sat, Oct 15, 2016 at 8:05 AM, Vitaly Wool wrote:
>>>>