LGTM for the whole series
Reviewed-by: Timofey Titovets
Tue, 14 May 2019 at 16:17, Oleksandr Natalenko :
>
> Document respective sysfs knob.
>
> Signed-off-by: Oleksandr Natalenko
> ---
> Documentation/admin-guide/mm/ksm.rst | 11 +++
> 1 file changed, 11 insertion
LGTM
Reviewed-by: Timofey Titovets
Tue, 14 May 2019 at 16:22, Aaron Tomlin :
>
> On Tue 2019-05-14 15:16 +0200, Oleksandr Natalenko wrote:
> > Present a new sysfs knob to mark task's anonymous memory as mergeable.
> >
> > To force merging task's VMAs, its
Mon, 13 May 2019 at 14:33, Oleksandr Natalenko :
>
> Hi.
>
> On Mon, May 13, 2019 at 01:38:43PM +0300, Kirill Tkhai wrote:
> > On 10.05.2019 10:21, Oleksandr Natalenko wrote:
> > > By default, KSM works only on memory that is marked by madvise(). And the
> > > only way to get around that is to eit
Tue, 13 Nov 2018 at 20:59, Pavel Tatashin :
>
> On 18-11-13 15:23:50, Oleksandr Natalenko wrote:
> > Hi.
> >
> > > Yep. However, so far, it requires an application to explicitly opt in
> > > to this behavior, so it's not all that bad. Your patch would remove
> > > the requirement for application
Gentle ping
2018-01-03 6:09 GMT+03:00 Timofey Titovets :
> 1. Pick up Sioh Lee's crc32 patch, after some long conversation
> 2. Merge it with my work on xxhash
> 3. Add autoselect code to choose the fastest hash helper.
>
> The base idea is the same: replace jhash2 with something faster.
deletions(-)
> delete mode 100644 fs/btrfs/hash.c
> delete mode 100644 fs/btrfs/hash.h
>
> --
> 2.7.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at
Hi,
2 months ago
I started a topic about replacing jhash with xxhash.
This is another topic, about replacing the in-memory hashing with xxhash,
or at least shedding some light on that.
I used a simple printk() in jhash/jhash2 to get the actual input sizes;
at least on x86_64 systems, most of the inputs are:
16
v6:
- Nothing, whole patchset version bump
Signed-off-by: Timofey Titovets
---
include/linux/xxhash.h | 23 +++
1 file changed, 23 insertions(+)
diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
index 9e1f42cb57e9..52b073fea17f 100644
--- a/include/linux/xxhash.h
+++ b/incl
of == 64 on x86_64,
which makes them cache friendly. As we don't suffer from hash collisions,
change the hash type from unsigned long back to u32.
- Fix kbuild robot warning, make all local functions static
Signed-off-by: Timofey Titovets
Signed-off-by: leesioh
CC: Andrea Arcangeli
C
other problems exists.
No other problems exist.
> thanks.
>
> -sioh lee-
>
In sum, we can show that changing the hash is useful and a good performance
improvement in general,
with good potential for hardware acceleration on the CPU.
Let's wait for advice from the mm folks,
If that ok, and that do next if ne
JFYI performance on more fast/modern CPU:
Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
[ 172.651044] ksm: crc32c hash() 22633 MB/s
[ 172.776060] ksm: xxhash hash() 10920 MB/s
[ 172.776066] ksm: choice crc32c as hash function
to first call of fastcall()
- Don't allocate a page for hash testing; use the arch zero pages for that
Signed-off-by: Timofey Titovets
Signed-off-by: leesioh
CC: Andrea Arcangeli
CC: linux...@kvack.org
CC: k...@vger.kernel.org
---
mm/Kconf
v5:
- Nothing, whole patchset version bump
Signed-off-by: Timofey Titovets
---
include/linux/xxhash.h | 23 +++
1 file changed, 23 insertions(+)
diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
index 9e1f42cb57e9..52b073fea17f 100644
--- a/include/linux/xxhash.h
+++ b/incl
*FACEPALM*,
Sorry, I just forgot about the numbering of the old jhash2 -> xxhash conversion.
Also picked up a patch for xxhash - an arch-dependent xxhash() function that
will use the fastest algo for the current arch.
So the next will be v5, as this should have been v4.
Thanks.
2017-12-29 12:52 GMT+03:00 Timofey Titovets :
>
mode"
Two possible values:
- normal [default] - ksm uses only madvise
- always [new] - ksm will search VMAs over all processes' memory and
add them to the dedup list
Signed-off-by: Timofey Titovets
---
Documentation/vm/ksm.txt | 3 +
mm/ksm.c
to speed-test and auto-choose the fastest hash function
Signed-off-by: Timofey Titovets
Signed-off-by: leesioh
CC: Andrea Arcangeli
CC: linux...@kvack.org
CC: k...@vger.kernel.org
---
mm/Kconfig | 4 ++
mm/ksm.c | 133 -
2 files
fit on machines with "big pages (64k)".
Thanks!
2017-10-02 15:58 GMT+03:00 Timofey Titovets :
> Currently, while searching/inserting in the RB tree,
> memcmp is used for comparing out-of-tree pages with in-tree pages.
>
> But on each compare step, the memcmp for the pages starts at
> zero offset
Reviewed-by: Timofey Titovets
2017-11-15 6:19 GMT+03:00 Kyeongdon Kim :
> The current ksm uses memcmp to insert into and search the 'rb_tree'.
> This causes very expensive computation.
> In order to reduce the time of this operation,
> we have added a checksum to tr
)
> * node in the stable tree and add both rmap_items.
> */
> lock_page(kpage);
> - stable_node = stable_tree_insert(kpage);
> + stable_node = stable_tree_insert(kpage, checksum);
> if (stable_node) {
> stable_tree_append(tree_rmap_item,
> stable_node,
>false);
> --
> 2.6.2
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majord...@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: em...@kvack.org
Thanks,
anyway, in general the idea looks good.
Reviewed-by: Timofey Titovets
--
Have a nice day,
Timofey.
> +static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
> +{
> + unsigned int pos;
> + unsigned long *page;
> +
> + page = (unsigned long *)ptr;
> + for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
> + if (page[pos] != page[0])
> +
2017-10-18 15:34 GMT+03:00 Matthew Wilcox :
> On Wed, Oct 18, 2017 at 10:48:32AM +, Srividya Desireddy wrote:
>> +static void zswap_fill_page(void *ptr, unsigned long value)
>> +{
>> + unsigned int pos;
>> + unsigned long *page;
>> +
>> + page = (unsigned long *)ptr;
>> + if (va
just RFC, i.e. does that type of optimization make sense?
Thanks.
Changes:
v1 -> v2:
Add: configurable max_offset_error
Move logic to memcmpe()
Signed-off-by: Timofey Titovets
---
mm/ksm.c |
last start offset where there was no diff in the page content.
The offset is aligned to 1024; that's a sort of magic value.
For that value I get about the same performance in the bad case (where the
offset is useless) for memcmp_pages() with the offset and without.
Signed-off-by: Timofey Titovets
---
mm/ksm.c | 32
xxh32() - fast on both 32/64-bit platforms
xxh64() - fast only on 64-bit platform
Create xxhash(), which will pick the fastest version
at compile time.
As the result depends on the CPU word size,
the main purpose of that is in-memory hashing.
Signed-off-by: Timofey Titovets
Acked-by: Andi Kleen
Acked-by
hash.c -> xxhash.h
- replace xxhash_t with 'unsigned long'
- update kerneldoc above xxhash()
Timofey Titovets (2):
xxHash: create arch dependent 32/64-bit xxhash()
KSM: Replace jhash2 with xxhash
include/linux/xxhash.h | 23 +++
mm/Kconfig
: sleep_millisecs = 20 - default
jhash2: ~4.7%
xxhash64: ~3.3%
- 11 / 18 ~= 0.6 -> Profit: ~40%
- 3.3/4.7 ~= 0.7 -> Profit: ~30%
Signed-off-by: Timofey Titovets
Acked-by: Andi Kleen
Acked-by: Christian Borntraeger
Cc: Linux-kernel
Cc: Linux-kvm
---
mm/Kconfig | 1 +
mm
2017-09-25 17:59 GMT+03:00 Matthew Wilcox :
> On Fri, Sep 22, 2017 at 02:18:17AM +0300, Timofey Titovets wrote:
>> diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
>> index 9e1f42cb57e9..195a0ae10e9b 100644
>> --- a/include/linux/xxhash.h
>> +++ b/include
: Timofey Titovets
Acked-by: Andi Kleen
Cc: Linux-kernel
---
include/linux/xxhash.h | 24
lib/xxhash.c | 10 ++
2 files changed, 34 insertions(+)
diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
index 9e1f42cb57e9..195a0ae10e9b 100644
--- a
: sleep_millisecs = 20 - default
jhash2: ~4.7%
xxhash64: ~3.3%
- 11 / 18 ~= 0.6 -> Profit: ~40%
- 3.3/4.7 ~= 0.7 -> Profit: ~30%
Signed-off-by: Timofey Titovets
Acked-by: Andi Kleen
---
mm/Kconfig | 1 +
mm/ksm.c | 14 +++---
2 files changed, 8 insertions(+), 7 del
hes
Timofey Titovets (2):
xxHash: create arch dependent 32/64-bit xxhash()
KSM: Replace jhash2 with xxhash
include/linux/xxhash.h | 24
lib/xxhash.c | 10 ++
mm/Kconfig | 1 +
mm/ksm.c | 14 +++---
4 files changed,
: Timofey Titovets
Acked-by: Andi Kleen
Cc: Linux-kernel
---
include/linux/xxhash.h | 24
lib/xxhash.c | 10 ++
2 files changed, 34 insertions(+)
diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
index 9e1f42cb57e9..195a0ae10e9b 100644
--- a
: sleep_millisecs = 20 - default
jhash2: ~4.7%
xxhash64: ~3.3%
- 11 / 18 ~= 0.6 -> Profit: ~40%
- 3.3/4.7 ~= 0.7 -> Profit: ~30%
Signed-off-by: Timofey Titovets
Acked-by: Andi Kleen
---
mm/Kconfig | 1 +
mm/ksm.c | 14 +++---
2 files changed, 8 insertions(+), 7 del
worlds,
create an arch-dependent xxhash() function that will use
the fastest algo for the current arch.
This is the first patch.
Performance info and the ksm update can be found in the second patch.
Changelog:
v1 -> v2:
- Move xxhash() to xxhash.h/c and separate patches
Timofey Titovets (2):
xxHash: create a
Sorry Markus, but the main problem with your patches is described on this page:
https://btrfs.wiki.kernel.org/index.php/Developer%27s_FAQ#How_not_to_start
I.e. it's cool that you try to help as you can, but not that way, thanks.
2017-08-21 16:27 GMT+03:00 SF Markus Elfring :
>> That's will work,
>
> Tha
Not needed, and you missed several similar places (L573 & L895) in
that file with explicit initialisation.
Reviewed-by: Timofey Titovets
2017-08-20 23:20 GMT+03:00 SF Markus Elfring :
> From: Markus Elfring
> Date: Sun, 20 Aug 2017 22:02:54 +0200
>
> The variable &quo
That will work, but it doesn't improve anything.
Reviewed-by: Timofey Titovets
2017-08-20 23:18 GMT+03:00 SF Markus Elfring :
> From: Markus Elfring
> Date: Sun, 20 Aug 2017 21:36:31 +0200
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> C
You use that doc [1], so it's okay, because that style is safer.
But I don't think such cleanups are really useful right now,
because they don't improve anything.
Reviewed-by: Timofey Titovets
[1] -
https://www.kernel.org/doc/html/v4.12/process/coding-s
Hi Nick Terrell,
If I understood everything correctly,
zstd can compress (decompress) data in a way compatible with gzip (zlib).
Is that also true for the in-kernel library?
If so, does it make sense to directly replace zlib with
zstd (configured to work like zlib) in place (for example for btrfs
zlib
It allows the user to control whether a new VMA is marked VM_MERGEABLE or not.
Create a new sysfs interface /sys/kernel/mm/ksm/mark_new_vma
1 - enabled - mark newly allocated VMAs as VM_MERGEABLE and add them to the ksm queue
0 - disable it
Signed-off-by: Timofey Titovets
---
include/linux/ksm.h | 10 +-
mm/ksm.c
Implement two functions:
ksm_vm_flags_mod() - if the existing flags are supported by ksm, mark the VMA as
VM_MERGEABLE
ksm_vma_add_new() - if the VMA is marked VM_MERGEABLE, add it to the ksm page queue
Signed-off-by: Timofey Titovets
---
include/linux/ksm.h | 31 +++
mm/mmap.c
Allows controlling the mark_new_vma default value.
Allows ksm to work on early-allocated VMAs.
Signed-off-by: Timofey Titovets
---
mm/Kconfig | 7 +++
1 file changed, 7 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 1d1ae6b..90f40a6 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -340,6
Mb (deduped)/(used)
v2:
Added Kconfig to control the default value of mark_new_vma
Added sysfs interface to control mark_new_vma
Split into several patches
v3:
Documentation for ksm changed to clarify the new cha
Timofey Titovets (4):
KSM: Add auto flag new VMA as
Signed-off-by: Timofey Titovets
---
Documentation/vm/ksm.txt | 7 +++
1 file changed, 7 insertions(+)
diff --git a/Documentation/vm/ksm.txt b/Documentation/vm/ksm.txt
index f34a8ee..880fdbf 100644
--- a/Documentation/vm/ksm.txt
+++ b/Documentation/vm/ksm.txt
@@ -24,6 +24,8 @@ KSM only
Implement two functions:
ksm_vm_flags_mod() - if the existing flags are supported by ksm, mark the VMA as
VM_MERGEABLE
ksm_vma_add_new() - if the VMA is marked VM_MERGEABLE, add it to the ksm page queue
Signed-off-by: Timofey Titovets
---
include/linux/ksm.h | 31 +++
mm/mmap.c
Allows controlling the mark_new_vma default value.
Allows ksm to work on early-allocated VMAs.
Signed-off-by: Timofey Titovets
---
mm/Kconfig | 7 +++
1 file changed, 7 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 1d1ae6b..90f40a6 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -340,6
apply it and enable ksm:
echo 1 | sudo tee /sys/kernel/mm/ksm/run
This shows how much memory is saved:
echo $[$(cat /sys/kernel/mm/ksm/pages_shared)*$(getconf PAGE_SIZE)/1024 ]KB
On my system I save ~1% of memory: 26 Mb/2100 Mb (deduped)/(used)
Timofey Titovets (3):
KSM: Add auto flag new VMA as
It allows the user to control the process of marking new VMAs as VM_MERGEABLE.
Create a new sysfs interface /sys/kernel/mm/ksm/mark_new_vma
1 - enabled - mark newly allocated VMAs as VM_MERGEABLE and add them to the ksm queue
0 - disable it
Signed-off-by: Timofey Titovets
---
include/linux/ksm.h | 10
to the ksm internal tree
If you see broken patch lines, I have also attached the patch.
From db8ad0877146a69e1e5d5ab98825cefcf44a95bb Mon Sep 17 00:00:00 2001
From: Timofey Titovets
Date: Sat, 8 Nov 2014 03:02:52 +0300
Subject: [PATCH] KSM: Add auto flag new VMA as VM_MERGEABLE
Signed-off-by: Timofey Titovet
2014-10-30 6:19 GMT+03:00 Matt :
> Hi Timofey,
> Hi List,
> don't forget to consider PKSM - it's supposed to be an improvement
> over UKSM & KSM:
>
> http://www.phoronix.com/scan.php?page=news_item&px=MTM0OTQ
> https://code.google.com/p/pksm/
>
> Kind Regards
>
> Matt
I may be mistaken; as far as I know, UK
ect?
UKSM code is licensed under the GPL, and I think we are free to port
and adopt the code (while crediting the author).
Please correct me if I'm mistaken or missing something.
This is just a stream of my thoughts %_%
---
> On Sat, Oct 25, 2014 at 09:32:01PM -0700, Andrew Morton wrote:
>> On Sat,
Good time of day, people.
I tried to find the 'mm' subsystem-specific people and lists, but the
linux-mm list looks dead and the mail archive looks deprecated.
If I should send this message to another list or CC more people, let me know.
If these questions have already been asked (I can't find earlier activity), feel
fr
2014-08-24 8:41 GMT+03:00 Brian Norris :
> It looks like this intended to be 64-bit arithmetic, but it's actually
> performed as 32-bit. Fix that. (Note that 'increment' was being
> initialized twice, so this patch removes one of those.)
>
> Caught by Coverity Scan (CID 1201422).
>
> Signed-off-by:
Linux is a big cake, where everybody cooks their own piece, and as a
result we have a very powerful and diverse system.
I love Linux and I'm just attempting to make it better.
This is my story =)
> On Mon, Jul 21, 2014 at 01:46:14PM +0300, Timofey Titovets wrote:
>> From: Timofey Titovets
>>
From: Timofey Titovets
This adds support for auto-allocating new zram devices on demand, like
loop devices.
This works by the following rules:
- Pre-create the zram devices specified by num_device
- if all devices are already in use -> add a new free device
From v1 -> v2:
Delete u
On 07/17/2014 06:19 PM, Timofey Titovets wrote:
On 07/17/2014 05:17 PM, Jerome Marchand wrote:
Looks like it:
$ cat conctest.sh
#! /bin/sh
modprobe zram
while true; do
for i in `seq 1 10`; do
echo 10485760 > /sys/block/zram0/disksize&
echo 1 > /sys/block/z
On 07/17/2014 06:26 PM, Jerome Marchand wrote:
On 07/17/2014 05:01 PM, Timofey Titovets wrote:
On 07/17/2014 05:04 PM, Sergey Senozhatsky wrote:
On (07/17/14 15:27), Timofey Titovets wrote:
This add supporting of autochange count of zram
On 07/17/2014 05:17 PM, Jerome Marchand wrote:
Looks like it:
$ cat conctest.sh
#! /bin/sh
modprobe zram
while true; do
for i in `seq 1 10`; do
echo 10485760 > /sys/block/zram0/disksize&
echo 1 > /sys/block/zram0/reset&
done
done
$ sudo ./conctest.sh
[ 51.535387]
On 07/17/2014 05:04 PM, Sergey Senozhatsky wrote:
On (07/17/14 15:27), Timofey Titovets wrote:
This add supporting of autochange count of zram devices on demand, like loop
devices;
This working by following rules:
- Always save minimum devices count specified by num_device (can be
From: Timofey Titovets
This adds support for auto-changing the count of zram devices on demand, like
loop devices.
This works by the following rules:
- Always keep the minimum device count specified by num_device (can be
specified while loading the kernel module)
- if the last device is already in use
Thanks for the comments, I will fix what you pointed out and split the patches:
one for code and one for Documentation.
After that, I will resend the patch set.
On 07/17/2014 01:09 PM, Jerome Marchand wrote:
On 07/17/2014 11:32 AM, Timofey Titovets wrote:
From: Timofey