From: Joonsoo Kim
PageHighMem() is used for two different cases. One is to check if there
is a direct mapping for this page or not. The other is to check the
zone of this page, that is, whether it is the highmem type zone or not.
Until now, both cases have been exactly the same thing. So,
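For context, a minimal sketch of the two meanings and why they currently coincide; the split helpers below are purely illustrative names, not kernel API (the existing definition lives in include/linux/page-flags.h):

	/* today, both questions collapse into one zone test, roughly: */
	#define PageHighMem(__p) is_highmem_idx(page_zonenum(__p))

	/* hypothetical split of the two meanings (illustrative names only): */
	static inline bool page_lacks_direct_mapping(struct page *page)
	{
		return is_highmem_idx(page_zonenum(page));	/* "no lowmem mapping?" */
	}

	static inline bool page_in_highmem_zone(struct page *page)
	{
		return is_highmem_idx(page_zonenum(page));	/* "highmem-type zone?" */
	}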
From: Joonsoo Kim
This reverts the following commits that change CMA design in MM.
Revert "ARM: CMA: avoid double mapping to the CMA area if CONFIG_HIGHMEM=y"
This reverts commit 3d2054ad8c2d5100b68b0c0405f89fd90bf4107b.
Revert "mm/cma: remove ALLOC_CMA"
This reverts commit
From: Joonsoo Kim
Currently, we use the zone index of preferred_zone, which represents
the best matching zone for allocation, as classzone_idx. This has a problem
on NUMA systems with ZONE_MOVABLE.
On a NUMA system, each node can have different populated
zones. For example, node 0
From: Joonsoo Kim
ZONE_MOVABLE only has movable pages so we don't need to keep enough
freepages to avoid or deal with fragmentation. So, don't count it.
This changes min_free_kbytes and thus min_watermark greatly
if ZONE_MOVABLE is used. It will let the user use more memory.
o System
22GB
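For context, a rough sketch of the idea in the style of __setup_per_zone_wmarks(); this is a simplified assumption about the change, not the literal patch:

	unsigned long lowmem_pages = 0;
	struct zone *zone;

	for_each_zone(zone) {
		/* movable-only zones need no reserve against fragmentation */
		if (zone_idx(zone) == ZONE_MOVABLE)
			continue;
		if (!is_highmem(zone))
			lowmem_pages += zone->managed_pages;
	}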
From: Joonsoo Kim
The CMA area is now managed by the separate zone, ZONE_MOVABLE,
to fix many MM-related problems. In this implementation, if
CONFIG_HIGHMEM=y, then ZONE_MOVABLE is considered HIGHMEM and
the memory of the CMA area is also considered HIGHMEM.
That means that they are
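The behaviour described above follows from the existing zone helper in include/linux/mmzone.h, which treats ZONE_MOVABLE as highmem whenever the movable zone is carved out of highmem (shown slightly simplified):

	static inline int is_highmem_idx(enum zone_type idx)
	{
	#ifdef CONFIG_HIGHMEM
		return (idx == ZONE_HIGHMEM ||
			(idx == ZONE_MOVABLE && movable_zone == ZONE_HIGHMEM));
	#else
		return 0;
	#endif
	}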
From: Joonsoo Kim
0. History
This patchset is the follow-up of the discussion about the
"Introduce ZONE_CMA (v7)" [1]. Please reference it if more information
is needed.
1. What does this patch do?
This patch changes the management way for the memory of the CMA area
in the MM subsystem.
From: Joonsoo Kim
Now, all reserved pages for the CMA region belong to ZONE_MOVABLE
and it only serves requests with GFP_HIGHMEM && GFP_MOVABLE.
Therefore, we don't need to maintain ALLOC_CMA at all.
Reviewed-by: Aneesh Kumar K.V
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
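For context, the kind of check that becomes unnecessary, roughly as it reads in __zone_watermark_ok() before the change (context sketch, not the full diff):

	#ifdef CONFIG_CMA
		/* If allocation can't use CMA areas don't use free CMA pages */
		if (!(alloc_flags & ALLOC_CMA))
			free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
	#endif

With every CMA page living in ZONE_MOVABLE, only GFP_HIGHMEM && GFP_MOVABLE requests can reach CMA memory, so the flag and the conditional subtraction can simply be dropped.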
From: Joonsoo Kim
v2
o the previous failure in linux-next turned out not to be a problem of
this patchset. It was caused by a wrong assumption in a specific
architecture.
lkml.kernel.org/r/20171114173719.ga28...@atomide.com
o add missing cache flush to the patch "ARM: CMA: avoid double
From: Joonsoo Kim
Hello,
This patchset introduces a new tool, valid access checker.
Vchecker is a dynamic memory error detector. It provides a new debug feature
that can find an unintended access to a valid area. A valid area here means
the memory which is allocated and allowed to be accessed
From: Joonsoo Kim
To prepare per-object memory for vchecker, we need to change the layout
of the object at kmem_cache initialization. Add such code in
vchecker_cache_create(), which is called at kmem_cache initialization.
And this memory should be initialized when an object is populated. Do it
From: Joonsoo Kim
They will be used by the vchecker in the following patch.
Make them non-static and add declarations in header files.
Signed-off-by: Joonsoo Kim
---
include/linux/kasan.h | 1 +
mm/kasan/kasan.c | 2 +-
mm/kasan/kasan.h | 2 ++
mm/kasan/report.c | 2 +-
4 files
From: Joonsoo Kim
Vchecker is a dynamic memory error detector. It provides a new debug
feature that can find an unintended access to a valid area. A valid
area here means the memory which is allocated and allowed to be
accessed by the memory owner, and an unintended access means the
read/write that is
From: Joonsoo Kim
There is a use case that checks whether a stack trace is new or not during
a specific period. Since the stackdepot library doesn't support removal of
stack traces, it's impossible to know this. Since removal of stack traces
is not easy in the design of the stackdepot library, we need another
From: Joonsoo Kim
Since there are different callpaths even within the vchecker, a static skip
value doesn't always exclude vchecker's own stacktrace. Fix it by
checking the stacktrace dynamically.
v2: skip two levels of the stack by default, it's safe!
Signed-off-by: Joonsoo Kim
---
mm/kasan/vchecker.c |
From: Namhyung Kim
Since we're looking for the cause of broken data, it'd be desirable not to
miss any suspects. It doesn't use GFP_ATOMIC since that includes __GFP_HIGH,
which is reserved for the system to make forward progress.
It also adds a WARN_ON whenever it fails to allocate pages even with
__GFP_ATOMIC.
From: Joonsoo Kim
Getting a full callstack is a heavy job, so it's sometimes better to
reduce this overhead by limiting the callstack depth. So, this patch
makes the callstack depth configurable through a debugfs interface.
Signed-off-by: Joonsoo Kim
---
mm/kasan/vchecker.c | 81
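An illustrative-only sketch of exposing such a knob through debugfs; the file name, variable, and root dentry below are assumptions, not the patch's actual code:

	#include <linux/debugfs.h>
	#include <linux/init.h>

	static struct dentry *vchecker_debugfs_root;	/* assumed to exist already */
	static u32 vchecker_callstack_depth = 16;	/* assumed default */

	static int __init vchecker_depth_debugfs_init(void)
	{
		debugfs_create_u32("callstack_depth", 0600,
				   vchecker_debugfs_root, &vchecker_callstack_depth);
		return 0;
	}
	late_initcall(vchecker_depth_debugfs_init);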
From: Joonsoo Kim
kmalloc() is used everywhere in the kernel and it doesn't distinguish
its callers, since that doesn't help much to manage the memory efficiently.
However, there is a difference from the debugging point of view. A bug usually
happens on the objects allocated by a specific allocation
From: Joonsoo Kim
Vchecker requires the kmalloc caller address to support validation of a
*specific* kmalloc user. Therefore, this patch passes the slab allocation
caller address to the vchecker hook. This caller address will be used in the
following patch.
Signed-off-by: Joonsoo Kim
---
From: Joonsoo Kim
It's not easy to understand what can be done with the vchecker.
This sample explains it and helps in understanding the vchecker.
Signed-off-by: Joonsoo Kim
---
lib/Kconfig.kasan | 9
lib/Makefile| 1 +
lib/vchecker_test.c | 117
From: Namhyung Kim
The is_new argument is to check whether the given stack trace was
already in the stack depot or newly added. It'll be used by vchecker
callstack in the next patch.
Also add a WARN_ONCE if the stack depot fails to allocate a stack slab for
some reason. This is unusual as it
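A hedged sketch of the interface change being described; the exact prototype and the caller shown here are assumptions and may differ from the patch:

	/* stackdepot API with an extra out-parameter (assumed shape): */
	depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
					      gfp_t flags, bool *is_new);

	/* illustrative caller in the callstack checker: */
	static void record_callstack(struct stack_trace *trace)
	{
		bool is_new = false;
		depot_stack_handle_t handle;

		handle = depot_save_stack(trace, GFP_NOWAIT, &is_new);
		if (handle && is_new)
			pr_info("vchecker: new callstack recorded\n");
	}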
From: Joonsoo Kim
There is no reason not to support the inline KASAN build. Support it.
Note that the vchecker_check() function is now placed in the kasan report
function to support the inline build, because gcc generates the inline check
code and then directly jumps to the kasan report function when a poisoned value
From: Namhyung Kim
By default, the callstack checker only collects callchains. When a user
writes 'on' to the callstack file in debugfs, it checks and reports new
callstacks. Write 'off' to disable it again.
# cd /sys/kernel/debug/vchecker
# echo 0 8 > anon_vma/callstack
# echo 1 >
From: Namhyung Kim
The callstack checker is to find invalid code paths accessing a
certain field in an object. Currently it only saves all stack traces at
the given offset. Reporting will be added in the next patch.
The below example checks callstack of anon_vma:
# cd
From: Joonsoo Kim
This is the main document for vchecker users.
Signed-off-by: Joonsoo Kim
---
Documentation/dev-tools/vchecker.rst | 200 +++
1 file changed, 200 insertions(+)
create mode 100644 Documentation/dev-tools/vchecker.rst
diff --git
From: Joonsoo Kim
The purpose of the value checker is to find the invalid user writing an
invalid value at the moment the value is written. However, there is
not enough infrastructure, so we cannot easily detect this case
in time.
However, in the following way, we can emulate a similar effect.
1.
From: Joonsoo Kim
Since the stack depot library doesn't support a removal operation,
after removing and re-adding a callstack cb, the callstack checker cannot
correctly judge whether a callstack is new or not for the current cb
if the same callstack happened for the previous cb.
This problem can be fixed by
From: Joonsoo Kim
Mark/unmark the shadow of the objects that are allocated before the
vchecker is enabled/disabled. It is necessary to fully debug the system.
Since there is no way to synchronize against slab object free,
we cannot synchronously mark/unmark the shadow of the allocated objects.
From: Joonsoo Kim
slub uses a higher order allocation than it actually needs. In this case,
we don't want to do direct reclaim to make such a high order page since
it causes a big latency for the user. Instead, we would like to fall back
to the lower order allocation that it actually needs.
However, we
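For context, this fallback already exists in slub; a simplified sketch of the relevant part of allocate_slab() in mm/slub.c (details trimmed):

	struct page *page;
	struct kmem_cache_order_objects oo = s->oo;
	gfp_t alloc_gfp;

	/* try the preferred high order, but never pay direct reclaim for it */
	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
	if (oo_order(oo) > oo_order(s->min))
		alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~__GFP_DIRECT_RECLAIM;

	page = alloc_slab_page(s, alloc_gfp, node, oo);
	if (unlikely(!page)) {
		/* fall back to the minimum order that is actually needed */
		oo = s->min;
		page = alloc_slab_page(s, flags, node, oo);
	}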
From: Joonsoo Kim
A high-order atomic allocation is difficult to succeed since we cannot
reclaim anything in this context. So, we reserve a pageblock for
this kind of request.
In slub, we try to allocate a higher-order page than it actually
needs in order to get the best performance. If
From: Joonsoo Kim
A free page on ZONE_HIGHMEM doesn't work for kernel memory, so it's not that
important to reserve. When ZONE_MOVABLE is used, this problem would
theoretically decrease the usable memory for GFP_HIGHUSER_MOVABLE
allocation requests, which are mainly used for page cache and anon
From: Joonsoo Kim
page_zone_id() is a specialized function to compare the zones of pages
that are within the same section range. If the sections of the pages are
different, page_zone_id() can differ even if their zone is the same.
This wrong usage doesn't cause any actual problem since
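For illustration, the difference between the two comparisons (the helper names are only for this example):

	/* safe for arbitrary pages: compares the zone itself */
	static inline bool pages_in_same_zone(struct page *a, struct page *b)
	{
		return page_zone(a) == page_zone(b);
	}

	/* only meaningful for pages within the same section range */
	static inline bool pages_in_same_zone_id(struct page *a, struct page *b)
	{
		return page_zone_id(a) == page_zone_id(b);
	}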
From: Joonsoo Kim
This patchset is the follow-up of the discussion about the
"Introduce ZONE_CMA (v7)" [1]. Please reference it if more information
is needed.
In this patchset, the memory of the CMA area is managed by using
the ZONE_MOVABLE. Since there is another type of the memory in this
From: Joonsoo Kim
The original shadow memory is only used for specific types
of access. We can distinguish them and can allocate the actual shadow memory
on demand to reduce memory consumption.
There is a problem with this on-demand shadow memory. After setting up a
new mapping, we need to
From: Joonsoo Kim
This patch enables x86 to use per-page shadow memory.
Most of the initialization code for per-page shadow memory is
copied from the code for the original shadow memory.
There are two things that aren't trivial.
1. per-page shadow memory for global variables is initialized
as the
From: Joonsoo Kim
Enable on-demand shadow mapping on x86.
x86 uses separate per-cpu kernel stacks for interrupt/exception context.
We need to populate shadow memory for them before they are used.
And there are two possible problems due to stale TLB entries when using
on-demand shadow mapping
From: Joonsoo Kim
The majority of accesses in the kernel are accesses to slab objects.
In the current implementation, we check two types of shadow memory
in this case and it causes a performance regression.
kernel build (2048 MB QEMU)
Base vs per-page
219 sec vs 238 sec
Although the current per-page shadow
From: Joonsoo Kim
Now, we have the per-page shadow. The purpose of the per-page shadow is
to check pages that are only used/checked at page-size granularity.
File cache pages/anonymous pages are in this category. The other
category is memory used at byte-size granularity. Global variables,
From: Joonsoo Kim
1. What is per-page shadow memory
This patch introduces infrastructure to support per-page shadow memory.
Per-page shadow memory is the same as the original shadow memory except for
the granularity. One byte of it shows the shadow value for one page.
The purpose of introducing this
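A hedged sketch of the granularity difference, by analogy with the existing kasan_mem_to_shadow(); the per-page names below are assumptions, not the patchset's actual symbols:

	/* existing byte-granularity shadow: 1 shadow byte per 8 bytes of memory */
	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;
	}

	/* assumed per-page shadow: 1 shadow byte per page */
	static inline u8 *kasan_mem_to_pshadow(const void *addr)
	{
		return (u8 *)((unsigned long)addr >> PAGE_SHIFT)
			+ KASAN_PSHADOW_OFFSET;		/* hypothetical constant */
	}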
From: Joonsoo Kim
On-demand alloc/map of the shadow memory isn't sufficient to save
memory, since the shadow memory would eventually be populated
for the whole memory range on a long running system. This patch
implements dynamic shadow memory unmap/free to solve this problem.
Since shadow memory is
From: Joonsoo Kim
It doesn't handle an unaligned end address, so the last pte might not
be initialized. Fix it.
Note that this shadow memory can be used by others, so map
an actual page in this case.
Signed-off-by: Joonsoo Kim
---
mm/kasan/kasan_init.c | 8
1 file changed, 8 insertions(+)
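A hedged sketch of the fix being described, based on the existing zero_pte_populate() loop in mm/kasan/kasan_init.c; the trailing-page handling is an assumption about the patch and early_alloc() is an assumed helper:

	while (addr + PAGE_SIZE <= end) {
		set_pte_at(&init_mm, addr, pte, zero_pte);
		addr += PAGE_SIZE;
		pte = pte_offset_kernel(pmd, addr);
	}

	if (addr < end) {
		/* unaligned tail: others may use this shadow, so map a real page */
		void *p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);	/* assumed helper */

		set_pte_at(&init_mm, addr, pte,
			   pfn_pte(PFN_DOWN(__pa(p)), PAGE_KERNEL));
	}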
From: Joonsoo Kim
In the following patch, per-page shadow memory will be introduced and
some ranges will be checked by the per-page shadow and the others by the
original shadow. To indicate the range type, the per-page shadow will be
mapped to a page that is filled with a special shadow value,
From: Joonsoo Kim
Fetching the next shadow value speculatively has pros and cons.
If the shadow bytes are zero, we can exit the check with a single branch.
However, it could cause an unaligned access. And, if the next shadow value
isn't zero, we need to do an additional check. The next shadow value can be
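The trade-off refers to the existing check pattern; roughly how a 2-byte check reads the next shadow byte speculatively today (after the pattern of memory_is_poisoned_2() in mm/kasan/kasan.c, slightly abridged):

	static __always_inline bool memory_is_poisoned_2(unsigned long addr)
	{
		u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);

		if (unlikely(*shadow_addr)) {		/* possibly unaligned u16 load */
			if (memory_is_poisoned_1(addr + 1))
				return true;
			/* the first byte only matters if the access crosses a granule */
			if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
				return false;
			return unlikely(*(u8 *)shadow_addr);
		}
		return false;
	}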
From: Joonsoo Kim
Hello, all.
This is an attempt to reduce the memory consumption of KASAN. Please see the
following description for more information.
1. What is per-page shadow memory
This patch introduces infrastructure to support per-page shadow memory.
Per-page shadow memory is the same
From: Joonsoo Kim
There is a missing optimization in zero_p4d_populate() that can save
some memory when mapping the zero shadow. Implement it like the others.
Signed-off-by: Joonsoo Kim
---
mm/kasan/kasan_init.c | 12
1 file changed, 12 insertions(+)
diff --git a/mm/kasan/kasan_init.c
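A hedged sketch of the missing optimization, mirroring what zero_pud_populate() already does one level down; the exact hunk may differ from the patch:

	if (IS_ALIGNED(addr, P4D_SIZE) && end - addr >= P4D_SIZE) {
		pud_t *pud;
		pmd_t *pmd;

		/* map the whole p4d range with the shared zero page tables */
		p4d_populate(&init_mm, p4d, kasan_zero_pud);
		pud = pud_offset(p4d, addr);
		pud_populate(&init_mm, pud, kasan_zero_pmd);
		pmd = pmd_offset(pud, addr);
		pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
		continue;
	}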
From: Joonsoo Kim
The benefit of deduplication depends on the workload, so it's not
preferable to always enable it. Therefore, make it optional via Kconfig
and a device param. The default is 'off'. This option will be beneficial
for users who use zram as a blockdev and store
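An illustrative-only sketch of the kind of per-device knob being described; the attribute name and the zram->use_dedup field are assumptions, not the patch's actual code:

	static ssize_t use_dedup_store(struct device *dev,
				       struct device_attribute *attr,
				       const char *buf, size_t len)
	{
		struct zram *zram = dev_to_zram(dev);
		bool val;

		if (kstrtobool(buf, &val))
			return -EINVAL;

		zram->use_dedup = val;		/* assumed field, default false */
		return len;
	}
	static DEVICE_ATTR_WO(use_dedup);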