nr_swap_pages = -1
Signed-off-by: Zhaoyang Huang
---
change in v2: fix unpaired spin_lock bug
---
---
mm/swapfile.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index cf63b5f..1212f17 100644
--- a/mm/swapfile.c
+++ b/mm/
It is show_swap_cache_info() which races with get_swap_xxx
On Thu, Dec 3, 2020 at 7:36 PM Zhaoyang Huang wrote:
>
> The scenario on which "Free swap -4kB" happens in my system, which is caused
> by
> get_swap_page_of_type or get_swap_pages racing with show_mem. R
The "Free swap -4kB" report on my system is caused by get_swap_page_of_type
or get_swap_pages racing with show_mem. Remove the race
here.
Signed-off-by: Zhaoyang Huang
---
mm/swapfile.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff
Memory reclaim can run for several seconds on a memory-constrained system, which
is deemed a heavy memstall. Make the memstall accounting more precise by
bailing out when cond_resched
Signed-off-by: Zhaoyang Huang
---
mm/vmscan.c | 23 ---
1 file changed, 16 insertions
On Fri, Aug 21, 2020 at 7:57 PM Matthew Wilcox wrote:
>
> On Fri, Aug 21, 2020 at 05:31:52PM +0800, Zhaoyang Huang wrote:
> > This patch has been verified on an Android system and reduces by 15% the
> > UNINTERRUPTIBLE_SLEEP_BLOCKIO that used to be caused by wrong
> > ra-&
On Fri, Aug 21, 2020 at 5:24 PM Zhaoyang Huang wrote:
>
> Some systems (like Android) turbo-read during startup by expanding the
> readahead window and then setting it back to normal (usually 128KB). However, some
> files in the system process context will stay open since
above two cases.
Signed-off-by: Zhaoyang Huang
---
change from v2:
fix checkpatch error
---
---
include/linux/fs.h | 17 +
mm/fadvise.c | 4 +++-
mm/filemap.c | 19 +--
mm/readahead.c | 37 +
4 files chang
On Fri, Aug 14, 2020 at 5:03 PM Zhaoyang Huang wrote:
>
> Some systems (like Android) turbo-read during startup by expanding the
> readahead window and then setting it back to normal (usually 128KB). However, some
> files in the system process context will stay open since
On Sat, Aug 15, 2020 at 12:15 PM Andrew Morton
wrote:
>
> On Fri, 14 Aug 2020 13:10:34 -0700 Andrew Morton
> wrote:
>
> > On Fri, 14 Aug 2020 17:03:44 +0800 Zhaoyang Huang
> > wrote:
> >
> > > Some system(like android) will turbo read during startup v
above two cases.
Signed-off-by: Zhaoyang Huang
---
include/linux/fs.h | 17 +
mm/fadvise.c | 4 +++-
mm/filemap.c | 19 +--
mm/readahead.c | 38 ++
4 files changed, 67 insertions(+), 11 deletions(-)
diff --git a/i
On Fri, Aug 14, 2020 at 10:33 AM Andrew Morton
wrote:
>
> On Fri, 14 Aug 2020 10:20:11 +0800 Zhaoyang Huang
> wrote:
>
> > On Fri, Aug 14, 2020 at 10:07 AM Matthew Wilcox wrote:
> > >
> > > On Fri, Aug 14, 2020 at 02:43:55AM +0100, Matthew Wilcox wrote:
&
On Fri, Aug 14, 2020 at 10:20 AM Zhaoyang Huang wrote:
>
> On Fri, Aug 14, 2020 at 10:07 AM Matthew Wilcox wrote:
> >
> > On Fri, Aug 14, 2020 at 02:43:55AM +0100, Matthew Wilcox wrote:
> > > On Fri, Aug 14, 2020 at 09:30:11AM +0800, Zhaoyang Huang wrote:
> > >
On Fri, Aug 14, 2020 at 10:07 AM Matthew Wilcox wrote:
>
> On Fri, Aug 14, 2020 at 02:43:55AM +0100, Matthew Wilcox wrote:
> > On Fri, Aug 14, 2020 at 09:30:11AM +0800, Zhaoyang Huang wrote:
> > > file->f_ra->ra_pages will remain the initialized value since it opend,
&
file->f_ra->ra_pages keeps the value it was initialized with when the file was
opened, which may NOT equal bdi->ra_pages, as the latter can be updated later (e.g.
echo xxx > /sys/block/dm/queue/read_ahead_kb). So sync ra->ra_pages to the
updated value on synchronous read.
Signed-off-by: Zhaoyang
file->f_ra->ra_pages keeps the value it was initialized with when the file was
opened, which may NOT equal bdi->ra_pages, as the latter can be updated later (e.g.
echo xxx > /sys/block/dm/queue/read_ahead_kb). So have readahead use
bdi->ra_pages.
Signed-off-by: Zhaoyang Huang
---
m
)
(vfs_open) from [] (path_openat+0x5fc/0x169c)
(path_openat) from [] (do_filp_open+0x94/0xf8)
(do_filp_open) from [] (do_sys_open+0x168/0x26c)
(do_sys_open) from [] (SyS_openat+0x34/0x38)
(SyS_openat) from [] (ret_fast_syscall+0x0/0x28)
Signed-off-by: Zhaoyang Huang
---
kernel/trace/trace.c | 4
On Thu, Jul 30, 2020 at 9:58 PM Steven Rostedt wrote:
>
> On Thu, 30 Jul 2020 19:04:12 +0800
> Zhaoyang Huang wrote:
>
> > High-order memory allocations within trace could introduce OOM; use kvmalloc
> > instead.
> >
> > Please find below the ca
/0x70)
(vfs_open) from [] (path_openat+0x5fc/0x169c)
(path_openat) from [] (do_filp_open+0x94/0xf8)
(do_filp_open) from [] (do_sys_open+0x168/0x26c)
(do_sys_open) from [] (SyS_openat+0x34/0x38)
(SyS_openat) from [] (ret_fast_syscall+0x0/0x28)
Signed-off-by: Zhaoyang Huang
---
changes since v1: change
/0x70)
(vfs_open) from [] (path_openat+0x5fc/0x169c)
(path_openat) from [] (do_filp_open+0x94/0xf8)
(do_filp_open) from [] (do_sys_open+0x168/0x26c)
(do_sys_open) from [] (SyS_openat+0x34/0x38)
(SyS_openat) from [] (ret_fast_syscall+0x0/0x28)
Signed-off-by: Zhaoyang Huang
---
kernel/trace/trace.c
[] (ret_fast_syscall+0x0/0x28)
Signed-off-by: Zhaoyang Huang
---
kernel/trace/trace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ca1ee65..d4eb7ea 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -6891,7 +6891,7 @@ static
From: Zhaoyang Huang
pfn_valid can be wrong when parsing an invalid pfn whose physical address
exceeds BITS_PER_LONG, as the MSB is trimmed when shifted.
The issue originally arose from the call stack below, which corresponds to
an access of /proc/kpageflags from userspace with an invalid
From: Zhaoyang Huang
pfn_valid can be wrong when parsing an invalid pfn whose physical address
exceeds BITS_PER_LONG, as the MSB is trimmed when shifted.
Signed-off-by: Zhaoyang Huang
---
v2: use __pfn_to_phys/__phys_to_pfn instead of max_pfn as the criteria
---
arch/arm/mm/init.c | 5 +
1
From: Zhaoyang Huang
pfn_valid can be wrong when parsing an invalid pfn whose physical address
exceeds BITS_PER_LONG, as the MSB is trimmed when shifted.
Signed-off-by: Zhaoyang Huang
---
arch/arm/mm/init.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/arm/mm/init.c b/arch/arm
On Sun, Aug 18, 2019 at 2:32 AM Russell King - ARM Linux admin
wrote:
>
> On Sat, Aug 17, 2019 at 11:00:13AM +0800, Zhaoyang Huang wrote:
> > From: Zhaoyang Huang
> >
> > pfn_valid can be wrong while the MSB of physical address be trimed as pfn
> > larger than the
On Sat, Aug 17, 2019 at 5:00 PM Mike Rapoport wrote:
>
> On Sat, Aug 17, 2019 at 11:00:13AM +0800, Zhaoyang Huang wrote:
> > From: Zhaoyang Huang
> >
> > pfn_valid can be wrong while the MSB of physical address be trimed as pfn
> > larger than the max_pfn.
>
&g
From: Zhaoyang Huang
pfn_valid can be wrong when the MSB of the physical address is trimmed for a
pfn larger than max_pfn.
Signed-off-by: Zhaoyang Huang
---
arch/arm/mm/init.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index
On Mon, May 6, 2019 at 10:57 PM Johannes Weiner wrote:
>
> On Sun, Apr 28, 2019 at 03:44:34PM +0800, Zhaoyang Huang wrote:
> > From: Zhaoyang Huang
> >
> > this patch introduce timestamp into workingset's entry and judge if the
> > page is
> > active
From: Zhaoyang Huang
This patch introduces a timestamp into the workingset entry and judges whether a
page is active or inactive via active_file/refault_ratio instead of refault distance.
The original idea comes from the logs we got via trace_printk in this
patch, where we can find about 1/5 o
, so keep working with up to
* double the initial memory by using totalram_pages as-is.
*/
- timestamp_bits = BITS_PER_LONG - EVICTION_SHIFT;
+ timestamp_bits = BITS_PER_LONG - EVICTION_SHIFT
+ - EVICTION_SECS_POS_SHIFT - EVICTION_SECS_SHRINK_SHIFT;
+
max_order = fls_long(totalram_pages() - 1);
i
g refault distance but very short refault time.
On Wed, Apr 17, 2019 at 7:46 PM Michal Hocko wrote:
>
> On Wed 17-04-19 19:36:21, Zhaoyang Huang wrote:
> > sorry for the confusion. What I mean is the basic idea doesn't change
> > as replacing the refault criteria from re
sense for starting a new context.
On Wed, Apr 17, 2019 at 7:06 PM Michal Hocko wrote:
>
> On Wed 17-04-19 18:55:15, Zhaoyang Huang wrote:
> > fix one mailbox and update for some information
> >
> > Comparing to
> > http://lkml.kernel.org/r/1554348617-12897-1-git
or my feedback.
On Wed, Apr 17, 2019 at 3:59 PM Zhaoyang Huang wrote:
>
> add Johannes and answer his previous question.
>
> @Johannes Weiner
> Yes. I do agree with you about the original thought of sacrificing
> long distance access pages when huge memory demands arise. The p
, that is, some pages have a long refault_distance
while having a very short access time in between. I think the latter
should be taken into consideration as part of the final
decision on whether the page should be active/inactive.
On Wed, Apr 17, 2019 at 3:48 PM Zhaoyang Huang wrote:
>
>
From: Zhaoyang Huang
This patch introduces a timestamp into the workingset entry and judges whether a page
is active or inactive via active_file/refault_ratio instead of refault distance.
The original idea comes from the logs we got via trace_printk in this
patch, where we can find about 1/5 o
resend it via the right mailling list and rewrite the comments by ZY.
On Thu, Apr 4, 2019 at 3:15 PM Michal Hocko wrote:
>
> [Fixup email for Pavel and add Johannes]
>
> On Thu 04-04-19 11:30:17, Zhaoyang Huang wrote:
> > From: Zhaoyang Huang
> >
> > In previous
On Fri, Apr 5, 2019 at 12:39 AM Johannes Weiner wrote:
>
> On Thu, Apr 04, 2019 at 11:30:17AM +0800, Zhaoyang Huang wrote:
> > From: Zhaoyang Huang
> >
> > In previous implementation, the number of refault pages is used
> > for judging the refault period of each
From: Zhaoyang Huang
In the previous implementation, the number of refault pages is used
for judging the refault period of each page, which is not precise, as
eviction of other files heavily affects the current cache.
We introduce a timestamp into the workingset entry and a refault rat
From: Zhaoyang Huang
In the previous implementation, the number of refault pages is used
for judging the refault period of each page, which is not precise.
We introduce a timestamp into the workingset entry to measure
the file page's activity.
The patch is tested on an Android sys
On Wed, Mar 20, 2019 at 9:10 AM David Rientjes wrote:
>
> On Thu, 14 Mar 2019, Zhaoyang Huang wrote:
>
> > From: Zhaoyang Huang
> >
> > Two action for this patch:
> > 1. set a batch size for system heap's shrinker, which can have it buffer
> > reasonab
From: Zhaoyang Huang
Two actions in this patch:
1. Set a batch size for the system heap's shrinker, so it can buffer a
reasonable number of page blocks in the pool for future allocation.
2. Reverse the order sequence when freeing page blocks; the purpose is also
to have the system heap keep more big bloc
From: Zhaoyang Huang
There is no caller or page information etc. for areas
created by vm_map_ram whose page count > VMAP_MAX_ALLOC.
Add them in this commit.
Signed-off-by: Zhaoyang Huang
---
mm/vmalloc.c | 30 --
1 file changed,
From: Zhaoyang Huang
In some cases, the instruction "bl foo1" will be the last one of
foo2[1], which causes the lr to be the first instruction of the adjacent
foo3[2]. Hence, the backtrace shows the weird result below[3].
The patch fixes it by subtracting 4 from t
On Fri, Aug 3, 2018 at 2:18 PM Michal Hocko wrote:
>
> On Fri 03-08-18 14:11:26, Zhaoyang Huang wrote:
> > On Fri, Aug 3, 2018 at 1:48 PM Zhaoyang Huang
> > wrote:
> > >
> > > for the soft_limit reclaim has more directivity than global reclaim,
> > >
On Fri, Aug 3, 2018 at 1:48 PM Zhaoyang Huang wrote:
>
> Since the soft_limit reclaim has more directivity than global reclaim, we
> have the current memcg skipped to avoid potential page thrashing.
>
The patch is tested on our Android system with 2GB RAM. The case
mainly focuses o
Since the soft_limit reclaim has more directivity than global reclaim, we
have the current memcg skipped to avoid potential page thrashing.
Signed-off-by: Zhaoyang Huang
---
mm/memcontrol.c | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm
On Tue, Jul 31, 2018 at 7:19 PM Michal Hocko wrote:
>
> On Tue 31-07-18 19:09:28, Zhaoyang Huang wrote:
> > This patch tries to let direct reclaim finish earlier than it used
> > to. The problem comes from observing that direct reclaim
> > took a long time
barriers to judge whether it has reclaimed
enough memory, using the same criteria as in shrink_lruvec:
1. for each memcg softlimit reclaim.
2. before starting the global reclaim in shrink_zone.
Signed-off-by: Zhaoyang Huang
---
include/linux/memcontrol.h | 3 ++-
mm/memcontrol.c| 3 +++
mm
On Wed, Apr 11, 2018 at 2:39 AM, Joel Fernandes wrote:
> Hi Steve,
>
> On Tue, Apr 10, 2018 at 11:00 AM, Steven Rostedt wrote:
>> On Tue, 10 Apr 2018 09:45:54 -0700
>> Joel Fernandes wrote:
>>
>>> > diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
>>> > index a0233edc0718..
On Tue, Apr 10, 2018 at 5:32 PM, Zhaoyang Huang wrote:
> On Tue, Apr 10, 2018 at 5:01 PM, Michal Hocko wrote:
>> On Tue 10-04-18 16:38:32, Zhaoyang Huang wrote:
>>> On Tue, Apr 10, 2018 at 4:12 PM, Michal Hocko wrote:
>>> > On Tue 10-04-18 16:04:40, Zhaoyang Huan
On Tue, Apr 10, 2018 at 5:01 PM, Michal Hocko wrote:
> On Tue 10-04-18 16:38:32, Zhaoyang Huang wrote:
>> On Tue, Apr 10, 2018 at 4:12 PM, Michal Hocko wrote:
>> > On Tue 10-04-18 16:04:40, Zhaoyang Huang wrote:
>> >> On Tue, Apr 10, 2018 at 3:49 PM, Michal Hocko
On Tue, Apr 10, 2018 at 4:12 PM, Michal Hocko wrote:
> On Tue 10-04-18 16:04:40, Zhaoyang Huang wrote:
>> On Tue, Apr 10, 2018 at 3:49 PM, Michal Hocko wrote:
>> > On Tue 10-04-18 14:39:35, Zhaoyang Huang wrote:
>> >> On Tue, Apr 10, 2018 a
On Tue, Apr 10, 2018 at 3:49 PM, Michal Hocko wrote:
> On Tue 10-04-18 14:39:35, Zhaoyang Huang wrote:
>> On Tue, Apr 10, 2018 at 2:14 PM, Michal Hocko wrote:
>> > On Tue 10-04-18 11:41:44, Zhaoyang Huang wrote:
>> >> On Tue, Apr 10, 2018 at 11:12 AM, Steven Rostedt
On Tue, Apr 10, 2018 at 2:14 PM, Michal Hocko wrote:
> On Tue 10-04-18 11:41:44, Zhaoyang Huang wrote:
>> On Tue, Apr 10, 2018 at 11:12 AM, Steven Rostedt wrote:
>> > On Tue, 10 Apr 2018 10:32:36 +0800
>> > Zhaoyang Huang wrote:
>> >
>> >> For be
On Tue, Apr 10, 2018 at 11:12 AM, Steven Rostedt wrote:
> On Tue, 10 Apr 2018 10:32:36 +0800
> Zhaoyang Huang wrote:
>
>> For the scenario below, process A has no intention to exhaust the
>> memory, but is likely to be selected by the OOM killer because we set
>> OOM_CORE_A
On Tue, Apr 10, 2018 at 8:32 AM, Zhaoyang Huang wrote:
> On Mon, Apr 9, 2018 at 9:49 PM, Steven Rostedt wrote:
>> On Mon, 9 Apr 2018 08:56:01 +0800
>> Zhaoyang Huang wrote:
>>
>>> >>
>>> >> if (oom_task_origin
On Mon, Apr 9, 2018 at 9:49 PM, Steven Rostedt wrote:
> On Mon, 9 Apr 2018 08:56:01 +0800
> Zhaoyang Huang wrote:
>
>> >>
>> >> if (oom_task_origin(task)) {
>> >> points = ULONG_MAX;
>> >>
On Sun, Apr 8, 2018 at 8:47 PM, Steven Rostedt wrote:
> [ Removing kernel-patch-test, because of annoying "moderator" messages ]
>
> On Sun, 8 Apr 2018 13:54:59 +0800
> Zhaoyang Huang wrote:
>
>> On Sun, Apr 8, 2018 at 11:48 AM, Steven Rostedt wrote:
>>
On Sun, Apr 8, 2018 at 11:48 AM, Steven Rostedt wrote:
> On Sun, 8 Apr 2018 10:16:23 +0800
> Zhaoyang Huang wrote:
>
>> Don't choose the process with adj == OOM_SCORE_ADJ_MIN which
>> over-allocating pages for ring buffers.
>
> Why?
>
> -- Steve
because
Don't choose the process with adj == OOM_SCORE_ADJ_MIN that is
over-allocating pages for ring buffers.
Signed-off-by: Zhaoyang Huang
---
kernel/trace/ring_buffer.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
On Fri, Apr 6, 2018 at 7:36 AM, Joel Fernandes wrote:
> Hi Steve,
>
> On Thu, Apr 5, 2018 at 12:57 PM, Joel Fernandes wrote:
>> On Thu, Apr 5, 2018 at 6:43 AM, Steven Rostedt wrote:
>>> On Wed, 4 Apr 2018 16:59:18 -0700
>>> Joel Fernandes wrote:
>>>
Happy to try anything else, BTW when the
On Wed, Apr 4, 2018 at 2:23 PM, Michal Hocko wrote:
> On Wed 04-04-18 10:58:39, Zhaoyang Huang wrote:
>> On Tue, Apr 3, 2018 at 9:56 PM, Michal Hocko wrote:
>> > On Tue 03-04-18 09:32:45, Steven Rostedt wrote:
>> >> On Tue, 3 Apr 2018 14:35:14 +
On Tue, Apr 3, 2018 at 9:56 PM, Michal Hocko wrote:
> On Tue 03-04-18 09:32:45, Steven Rostedt wrote:
>> On Tue, 3 Apr 2018 14:35:14 +0200
>> Michal Hocko wrote:
> [...]
>> > Being clever is OK if it doesn't add a tricky code. And relying on
>> > si_mem_available is definitely tricky and obscure.
On Sat, Mar 31, 2018 at 5:42 AM, Steven Rostedt wrote:
> On Fri, 30 Mar 2018 17:30:31 -0400
> Steven Rostedt wrote:
>
>> I'll take a look at si_mem_available() that Joel suggested and see if
>> we can make that work.
>
> Wow, this appears to work great! Joel and Zhaoyang, can you test this?
>
> -
On Fri, Mar 30, 2018 at 12:05 AM, Steven Rostedt wrote:
> On Thu, 29 Mar 2018 18:41:44 +0800
> Zhaoyang Huang wrote:
>
>> It is reported that some user apps echo a huge
>> number to "/sys/kernel/debug/tracing/buffer_size_kb" regardless
>> of t
t to avoid the consequent allocation.
Signed-off-by: Zhaoyang Huang
---
kernel/trace/trace.c | 39 ++-
1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2d0ffcc..a4a4237 100644
--- a/kernel/trace/tra
On Fri, Mar 23, 2018 at 4:38 PM, Michal Hocko wrote:
> On Fri 23-03-18 15:57:32, Zhaoyang Huang wrote:
>> For the 'ALLOC_HARDER' type of page allocation, there is an express
>> highway through the whole process which leads the allocation to reach __rmqueue_xxx
>> more easily th
ype, which may cause the watermark
check to fail even though there may be enough HighAtomic, Unmovable and
Reclaimable pages in the zone. So add 'alloc_harder' here to
count CMA pages in and clear the obstacles on the way to the final.
Signed-off-by: Zhaoyang Huang
---
mm/page_alloc.c | 7 +++
to record the one just in front of
the cached_hole_size, which can help avoid walking the rb tree and
the list and make T = 0;
Signed-off-by: Zhaoyang Huang
---
mm/vmalloc.c | 23 +--
1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc
ee_vmap_cache, for
it only takes effect when free_vmap_cache misses and will be reestablished later.
Signed-off-by: Zhaoyang Huang
---
mm/vmalloc.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8698c1c..f58f445 100644
--- a/mm/vmalloc.c
+++ b/mm
update the comment below as 's/by one driver's allocating/because
one driver has allocated/'; sorry
for the confusion
On Thu, Jul 20, 2017 at 9:15 AM, Zhaoyang Huang wrote:
> On Thu, Jul 20, 2017 at 4:50 AM, Andrew Morton
> wrote:
>> On Wed, 19 Jul 2017 18
On Thu, Jul 20, 2017 at 4:50 AM, Andrew Morton
wrote:
> On Wed, 19 Jul 2017 18:44:03 +0800 Zhaoyang Huang
> wrote:
>
>> /proc/vmallocinfo will not show the area allocated by vm_map_ram, which
>> will make confusion when debug. Add vm_struct for them and show them in
&
/proc/vmallocinfo does not show areas allocated by vm_map_ram, which
causes confusion when debugging. Add a vm_struct for them and show them in
proc.
Signed-off-by: Zhaoyang Huang
---
mm/vmalloc.c | 27 ++-
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git a/mm
On Mon, Jul 17, 2017 at 4:29 PM, Michal Hocko wrote:
> On Mon 17-07-17 15:27:31, Zhaoyang Huang wrote:
>> From: Zhaoyang Huang
>>
>> There is no need to find the very beginning of the area within
>> alloc_vmap_area; this can be done by judging each node during th
                      tmp_next
                     /
                  tmp
                 /
              ...                                   (1)
             /
        first (current approach)

vmap_area_list->...->first->...->tmp->tmp_next      (2)
Signed-off-by: Zhaoyang Huang
---
mm/vmalloc.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/mm
From: Zhaoyang Huang
There is no need to find the very beginning of the area within
alloc_vmap_area; this can be decided by judging each node during the walk.
In the current approach, the worst case is that the starting node found
for searching the 'vmap_area_list' is close to t
There is no need to find the very beginning of the area within
alloc_vmap_area; this can be decided by judging each node during the walk.
Signed-off-by: Zhaoyang Huang
---
mm/vmalloc.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/mm/vmalloc.c b/mm
From: Zhaoyang Huang
There is no need to find the very beginning of the area within
alloc_vmap_area; this can be decided by judging each node during the walk.
Signed-off-by: Zhaoyang Huang
---
mm/vmalloc.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
[timeline: 1.IDLE_START  2.CPU_PM_ENTER  3.now  4.select idle state ... next_event (sleep_length)]
Signed
R
[timeline: 3.now  4.select idle state ... next_event (sleep_length)]
Signed-off-by: Zhaoyang Huang
---
kernel/sched/idle.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/idl
On 25 June 2016 at 09:09, Rafael J. Wysocki wrote:
> On Fri, Jun 17, 2016 at 11:13 AM, Zhaoyang Huang
> wrote:
>> In previous version, cpu_pm_enter is invoked
>
> By whom? Not by the core surely?
>
>> after the governor select the state, which cause the executing tim
On 23 June 2016 at 16:35, Thomas Gleixner wrote:
> On Thu, 23 Jun 2016, Zhaoyang Huang wrote:
>> On 23 June 2016 at 16:18, Thomas Gleixner wrote:
>> > On Thu, 23 Jun 2016, Zhaoyang Huang wrote:
>> >> On 23 June 2016 at 15:01, Thomas Gleixner wrote:
>>
On 23 June 2016 at 16:18, Thomas Gleixner wrote:
> On Thu, 23 Jun 2016, Zhaoyang Huang wrote:
>> On 23 June 2016 at 15:01, Thomas Gleixner wrote:
>> Thomas, I agree with you, I have discussed the modification with the
>> call back owner. However, I wonder if we can make
On 23 June 2016 at 15:01, Thomas Gleixner wrote:
> On Wed, 22 Jun 2016, Zhaoyang Huang wrote:
>> On 20 June 2016 at 09:14, Zhaoyang Huang wrote:
>> > On 17 June 2016 at 19:50, Thomas Gleixner wrote:
>> >> On Fri, 17 Jun 2016, Zhaoyang Huang wrote:
>>
On 20 June 2016 at 09:14, Zhaoyang Huang wrote:
> On 17 June 2016 at 19:50, Thomas Gleixner wrote:
>> On Fri, 17 Jun 2016, Zhaoyang Huang wrote:
>>> On 17 June 2016 at 17:27, Thomas Gleixner wrote:
>>> > On Fri, 17 Jun 2016, Zhaoyang Huang wrote:
>&g
On 17 June 2016 at 19:50, Thomas Gleixner wrote:
> On Fri, 17 Jun 2016, Zhaoyang Huang wrote:
>> On 17 June 2016 at 17:27, Thomas Gleixner wrote:
>> > On Fri, 17 Jun 2016, Zhaoyang Huang wrote:
>> >> There should be a gap between tick_nohz_idle_enter and
>>
On 17 June 2016 at 17:27, Thomas Gleixner wrote:
> On Fri, 17 Jun 2016, Zhaoyang Huang wrote:
>> There should be a gap between tick_nohz_idle_enter and
>> tick_nohz_get_sleep_length when idle, which will cause the
>> sleep_length is not very precised. Change it in this pa
There is a gap between tick_nohz_idle_enter and
tick_nohz_get_sleep_length when idle, which makes the
sleep_length not very precise. Change it in this patch.
Signed-off-by: Zhaoyang Huang
---
kernel/time/tick-sched.c |5 +
1 file changed, 5 insertions(+)
diff --git a
In the previous version, cpu_pm_enter is invoked after the governor
selects the state, which causes the execution time of cpu_pm_enter
to be included in the idle time. Move it before the state selection.
Signed-off-by: Zhaoyang Huang
---
kernel/sched/idle.c | 18 --
1 file changed
From: Zhaoyang Huang
There is a gap between tick_nohz_idle_enter and
tick_nohz_get_sleep_length when idle, which makes the
sleep_length not very precise. Change it in this patch.
Signed-off-by: Zhaoyang Huang
---
kernel/time/tick-sched.c |5 +
1 file changed, 5
From: Zhaoyang Huang
In the previous version, cpu_pm_enter is invoked after the governor
selects the state, which causes the execution time of cpu_pm_enter
to be included in the idle time. Move it before the state selection.
Signed-off-by: Zhaoyang Huang
---
kernel/sched/idle.c | 18
In the previous version, cpu_pm_enter is invoked after the governor
selects the state, which causes the execution time of cpu_pm_enter
to be included in the idle time. Move it before the state selection.
Signed-off-by: Zhaoyang Huang
---
kernel/sched/idle.c | 18 --
1 file changed
On 22 January 2016 at 03:32, Pavel Machek wrote:
>
>> - goto repeat;
>> +
>> + /*check expires firstly for auto suspend mode,
>> + *if not, just go ahead to the async
>> + */
>
> English, coding style.
>
On 21 January 2016 at 18:51, Mark Rutland wrote:
> On Thu, Jan 21, 2016 at 04:48:57PM +0800, Zhaoyang Huang wrote:
>> Hi Mark,
>
> Hi,
>
>> Do you have any suggestion on how to sync the GIC operation from
>> kernel and psci parallelly? Thanks!
>
> I'm not
Hi Mark,
Do you have any suggestions on how to synchronize the GIC operations from the
kernel and PSCI in parallel? Thanks!
On 12 January 2016 at 19:51, Mark Rutland wrote:
> On Tue, Jan 12, 2016 at 09:38:20AM +, Lorenzo Pieralisi wrote:
>> On Tue, Jan 12, 2016 at 10:17:42AM +0800, Zhaoyang Hu
[state diagram: _rpm_suspend_wait -> _rpm_suspend_call -> _rpm_suspend_fail or _rpm_suspend_success -> E]
On 28 October 2015 at 01:40, Marc Titinger wrote:
> From: Marc Titinger
>
> This patch allows cluster-level idle-states to being soaked in as generic
> domain power states, in order for the domain governor to chose the most
> efficient power state compatible with the device constraints. Similarly
On 6 October 2015 at 22:27, Marc Titinger wrote:
> From: Marc Titinger
>
> Cpuidle now handles c-states and power-states differently. c-states do not
> decrement
> the reference count for the CPUs in the cluster, while power-states i.e.
> cluster level states like 'CLUSTER_SLEEP_0' in the case