On Thu, 2017-05-18 at 20:20 +0100, Roman Gushchin wrote:
> On Fri, May 19, 2017 at 04:37:27AM +1000, Balbir Singh wrote:
> > On Fri, May 19, 2017 at 3:30 AM, Michal Hocko <mho...@kernel.org> wrote:
> > > On Thu 18-05-17 17:28:04, Roman Gushchin wrote:
> > > > Traditionally, the OOM killer
On Fri, May 19, 2017 at 3:30 AM, Michal Hocko wrote:
> On Thu 18-05-17 17:28:04, Roman Gushchin wrote:
>> Traditionally, the OOM killer is operating on a process level.
>> Under oom conditions, it finds a process with the highest oom score
>> and kills it.
>>
>> This behavior doesn't suit well
On Sat, May 13, 2017 at 2:42 AM, Johannes Weiner <han...@cmpxchg.org> wrote:
> On Fri, May 12, 2017 at 12:25:22PM +1000, Balbir Singh wrote:
>> On Thu, 2017-05-11 at 20:16 +0100, Roman Gushchin wrote:
>> > The meaning of each value is the same as for global counters,
>> > available using /
On Thu, 2017-05-11 at 20:16 +0100, Roman Gushchin wrote:
> Track the following reclaim counters for every memory cgroup:
> PGREFILL, PGSCAN, PGSTEAL, PGACTIVATE, PGDEACTIVATE, PGLAZYFREE and
> PGLAZYFREED.
>
> These values are exposed using the memory.stats interface of cgroup v2.
The changelog
index 53885512b8d3..6c0132c7212f 100644
> --- a/arch/powerpc/include/asm/module.h
> +++ b/arch/powerpc/include/asm/module.h
> @@ -14,6 +14,10 @@
> #include
>
>
> +#ifdef CC_USING_MPROFILE_KERNEL
> +#define MODULE_ARCH_VERMAGIC "mprofile-kernel"
> +#endif
> +
> #ifndef __powerpc64__
> /*
> * Thanks to Paul M for explaining this.
> --
Makes sense.
Acked-by: Balbir Singh <bsinghar...@gmail.com>
On Mon, 2017-05-08 at 12:42 +0200, Laurent Dufour wrote:
> Sorry Balbir,
>
> You pointed this out since the beginning but I missed your comment.
> My mistake.
>
No worries, as long as the right thing gets in.
Balbir Singh
0 so we have
> to
> + * uncharge it manually from its memcg.
> + */
> + mem_cgroup_uncharge(p);
> +
Yep, that is the right fix.
https://lkml.org/lkml/2017/4/26/133
Reviewed-by: Balbir Singh <bsinghar...@gmail.com>
On Wed, 2017-04-26 at 03:13 +, Naoya Horiguchi wrote:
> On Wed, Apr 26, 2017 at 12:10:15PM +1000, Balbir Singh wrote:
> > On Tue, 2017-04-25 at 16:27 +0200, Laurent Dufour wrote:
> > > The commit b023f46813cd ("memory-hotplug: skip HWPoisoned page when
>
On Wed, 2017-04-26 at 04:46 +, Naoya Horiguchi wrote:
> On Wed, Apr 26, 2017 at 01:45:00PM +1000, Balbir Singh wrote:
> > > > > static int delete_from_lru_cache(struct page *p)
> > > > > {
> > > > > + if (memcg_kmem_enabled())
ge freeing code.
> So I think that this change is to keep the consistent charging for such a
> case.
I agree we should uncharge, but looking at the API name, it seems to be
meant for kmem pages; why are we not using mem_cgroup_uncharge()? Am I
missing something?
Balbir Singh.
pfn_to_page(start_pfn + i);
> + if (PageHWPoison(page)) {
> + ClearPageReserved(page);
Why do we clear page reserved? Also, if the page is marked PageHWPoison, it
was never offlined to begin with? Or do you expect this to be set on newly
hotplugged memory? Also, don't we need to skip the entire pageblock?
Balbir Singh.
> + if (memcg_kmem_enabled())
> + memcg_kmem_uncharge(p, 0);
> +
The changelog is not quite clear: are we uncharging a page in the swap
cache/page cache using memcg_kmem_uncharge()?
Balbir Singh.
index 27f7210e7fab..22bd22eb25cb 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -529,6 +529,9 @@ static const char * const action_page_types[] = {
> */
> static int delete_from_lru_cache(struct page *p)
> {
> + if (memcg_kmem_enabled())
> +
On Wed, 2017-04-19 at 15:06 +0800, Huang, Ying wrote:
> From: Huang Ying
>
> In this patch, splitting huge page is delayed from almost the first
> step of swapping out to after allocating the swap space for the
> THP (Transparent Huge Page) and adding the THP into the swap cache.
> This will
>>
>> Yes. It was derived from TASK_SIZE :
>>
>> http://lxr.free-electrons.com/source/arch/powerpc/include/asm/processor.h#L105
>>
>
> That is getting updated to 128TB by default and conditionally to 512TB
>
Since this is compile time, we should probably keep the scope to 128TB
for now and see if
regs->gpr[r1] with the updated ea that
is written down to the GPR1(r1) which will be what we restore when we return
from the exception.
The conversion of lwz to ld indeed looks correct.
Balbir Singh.
: PPC: Book3S HV: Migrate pinned pages out of CMA")
> Cc: sta...@vger.kernel.org # v4.9+
> Signed-off-by: Alexey Kardashevskiy <a...@ozlabs.ru>
> Acked-by: Balbir Singh <bsinghar...@gmail.com>
> ---
>
> Changes:
> v2:
> * instead of moving PageCompound() to the beginning, this just drops
> PageHuge() and
simplify all callers.
>
> This patch shouldn't have any visible effect
>
> Signed-off-by: Michal Hocko <mho...@suse.com>
> ---
This makes sense.
Acked-by: Balbir Singh <bsinghar...@gmail.com>
ust want to make sure i didn't denatured
> it :)
>
> Also as side note, v20 fix build issue by restricting HMM to x86-64
> which is safer than pretending this can be use on any random arch
> as build failures i am getting clearly shows that thing i assumed to
> be true on all arch aren't.
In that case, could you please document what an arch needs to do to enable
HMM? What are the dependencies and requirements?
Balbir Singh.
On Fri, 2017-04-07 at 12:26 -0400, Jerome Glisse wrote:
> On Thu, Apr 06, 2017 at 10:02:55PM -0400, Jerome Glisse wrote:
> > On Fri, Apr 07, 2017 at 11:37:34AM +1000, Balbir Singh wrote:
> > > On Wed, 2017-04-05 at 16:40 -0400, Jérôme Glisse wrote:
> > > >
I think user space will need to adapt; for example, using
malloc on a coherent device will not work, and user space will need to
have a driver-supported way of accessing coherent memory.
> Nonetheless we need to make progress on this as they are hardware
> right around the corner and it would be a shame if we could not
> leverage such hardware with linux.
>
>
I agree 100%.
Balbir Singh.
On Wed, 2017-04-05 at 16:40 -0400, Jérôme Glisse wrote:
> This introduce a simple struct and associated helpers for device driver
> to use when hotpluging un-addressable device memory as ZONE_DEVICE. It
> will find a unuse physical address range and trigger memory hotplug for
> it which allocates
On 17/03/17 14:42, Balbir Singh wrote:
>>> Or make the HMM Kconfig feature 64BIT only by making it depend on 64BIT?
>>>
>>
>> Yes, that was my first reaction too, but these particular routines are
>> aspiring to be generic routines--in fact, you hav
>> Or make the HMM Kconfig feature 64BIT only by making it depend on 64BIT?
>>
>
> Yes, that was my first reaction too, but these particular routines are
> aspiring to be generic routines--in fact, you have had an influence there,
> because these might possibly help with NUMA migrations. :)
>
> ...obviously, there is not enough room for these flags, in a 32-bit pfn.
>
> So, given the current HMM design, I think we are going to have to provide a
> 32-bit version of these routines (migrate_pfn_to_page, and related) that is
> a no-op, right?
Or make the HMM Kconfig feature 64BIT only by making it depend on 64BIT?
Balbir Singh
>> can be allocated through special allocator). It differs from numa migration
>> by working on a range of virtual address and thus by doing migration in
>> chunk that can be large enough to use DMA engine or special copy offloading
>> engine.
>
>
> Reviewed-by: Reza Arbab <ar...@linux.vnet.ibm.com>
> Tested-by: Reza Arbab <ar...@linux.vnet.ibm.com>
>
Acked-by: Balbir Singh <bsinghar...@gmail.com>
On Thu, Mar 9, 2017 at 12:28 PM, Huang, Ying <ying.hu...@intel.com> wrote:
> Balbir Singh <bsinghar...@gmail.com> writes:
>
>> On Wed, 2017-03-08 at 15:26 +0800, Huang, Ying wrote:
>>> From: Huang Ying <ying.hu...@intel.com>
>>>
>>> This patch make it possible to charge or uncharge a set of continuous
>>> swap
cgroup operations for the THP swap too.
A quick look at the patches makes it look sane. I wonder if it would
make sense to track THP swapout separately as well
(from a memory.stat perspective).
Balbir Singh
On Tue, 2017-03-07 at 10:12 -0600, Josh Poimboeuf wrote:
> On Tue, Mar 07, 2017 at 05:50:55PM +1100, Balbir Singh wrote:
> > On Mon, 2017-02-13 at 19:42 -0600, Josh Poimboeuf wrote:
> > > For live patching and possibly other use cases, a stack trace is only
> > >
...@redhat.com>
> ---
Could you comment on why we need a reliable trace for live patching? Are
we in any way reliant on the stack trace to patch something broken?
Thanks,
Balbir Singh.
s the kernel.
>
> The bit is included in the _TIF_USER_WORK_MASK macro so that
> do_notify_resume() and klp_update_patch_state() get called when the bit
> is set.
>
> Signed-off-by: Josh Poimboeuf
> Reviewed-by: Petr Mladek
> Reviewed-by: Miroslav Benes <mbe...@suse.cz>
> Reviewed-by: Kamalesh Babulal <kamal...@linux.vnet.ibm.com>
> ---
Reviewed-by: Balbir Singh <bsinghar...@gmail.com>
On Wed, Mar 1, 2017 at 8:55 PM, Mel Gorman <mgor...@suse.de> wrote:
> On Wed, Mar 01, 2017 at 01:42:40PM +1100, Balbir Singh wrote:
>> >>>The idea of this patchset was to introduce
>> >>>the concept of memory that is not necessarily system memory, but is
>> >>
with HMM?
5. Why can't we use cpusets?
Would that be a fair set of concerns to address?
@Anshuman/@Srikar/@Aneesh anything else you'd like to add in terms
of concerns/issues? I think it will also make a good discussion thread
for those attending LSF/MM (I am not there) on this topic.
Balbir Singh.
On 03/02/17 20:17, Vlastimil Babka wrote:
> Hi,
>
> this mail tries to summarize the problems with current cpusets implementation
> wrt memory restrictions, especially when used together with mempolicies.
> The issues were initially discovered when working on the series fixing recent
>
up_init(), but in that
> path mem_cgroup_soft_limit_reclaim() is called which assumes that
> these data are allocated.
>
> As mem_cgroup_soft_limit_reclaim() is best effort, it should return
> when these data are not yet allocated.
>
> This patch also fixes potential null pointer access in
> mem_cgroup_remove_from_trees() and mem_cgroup_update_tree().
>
> Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
> ---
Acked-by: Balbir Singh <bsinghar...@gmail.com>
it_initialize();
What happens if this fails? Do we disable this interface?
It's a good idea, but I wonder if we can deal with certain
memory cgroups not supporting soft limits due to memory
shortage at the time of using them.
> memcg->soft_limit = nr_pages;
> ret = 0;
> break;
Balbir Singh.
per node data in mem_cgroup_init(), but in that
> path mem_cgroup_soft_limit_reclaim() is called which assumes that
> these data are allocated.
>
> As mem_cgroup_soft_limit_reclaim() is best effort, it should return
> when these data are not yet allocated.
>
> Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
> ---
Looks good to me, but we might need to audit other parts. We could have some
checks to see if the memcg is ready for reclaim.
Balbir Singh.
On 22/02/17 16:55, Anshuman Khandual wrote:
> On 02/22/2017 10:34 AM, Balbir Singh wrote:
>> On Fri, Feb 17, 2017 at 04:54:47PM +0530, Anshuman Khandual wrote:
>>> This patch series is base on the work posted by Zi Yan back in
>>> November 2016 (https://l
On Wed, Feb 22, 2017 at 7:16 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Wed, 22 Feb 2017 18:19:15 +1100 Balbir Singh <bsinghar...@gmail.com> wrote:
>
>> On Fri, Jan 27, 2017 at 05:52:07PM -0500, Jérôme Glisse wrote:
>> > Cliff note: HMM offers 2 things (each standing on its own). First
>> >
discussed the features with other company and i am confident
> it can be use on other, yet, unrelease hardware.
>
> Please condiser applying for 4.11
>
Andrew, do we expect to get this in 4.11/4.12? Just curious.
Balbir Singh.
932.00 msecs 0.197448 GBs **
> Moved 195 huge pages in 54.00 msecs 7.064254 GBs ***
> Moved 195 huge pages in 86.00 msecs 4.435694 GBs ***
>
Could you also comment on the CPU utilization impact of these
patches?
Balbir Singh.
ounter to track shmem pages during charging and uncharging.
>
> Reported-by: Chris Down <cd...@fb.com>
> Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
> ---
Makes sense.
Acked-by: Balbir Singh <bsinghar...@gmail.com>
On Fri, 2017-02-17 at 09:33 +0000, Mel Gorman wrote:
> On Fri, Feb 17, 2017 at 09:14:44AM +1100, Balbir Singh wrote:
> >
> >
> > On 16/02/17 05:20, Mel Gorman wrote:
> > > On Wed, Feb 15, 2017 at 05:37:22PM +0530, Anshuman Khandual wrote:
> > > >
> Do not litter the core
> with is_cdm_whatever checks.
>
The idea is to have these nodes as ZONE_MOVABLE, and those are isolated from
early mem allocations. Any new feature requires checks, but one could consider
consolidating those checks.
Balbir Singh.
oint
Balbir Singh.
nt on X86_64, do we need to touch all
architectures? I guess we could selectively enable things as we enable
ZONE_DEVICE for other architectures?
Balbir Singh.
all the stop states can be found here:
> https://lists.ozlabs.org/pipermail/skiboot/2016-September/004869.html
>
> [Optimize the number of instructions before entering STOP with
> ESL=EC=0, validate the PSSCR values provided by the firimware
> maintains the invariants required as per the ISA suggested by Balbir
> Singh]
>
> Signed-off-by: Gautham R. Shenoy
> ---
Acked-by: Balbir Singh
e.
>
> Add an inline helper function to populate the powernv_states[] table
> for a given idle state. Invoke this for populating the "Nap",
> "Fastsleep" and the stop states in powernv_add_idle_states.
>
> Signed-off-by: Gautham R. Shenoy <e...@linux.vnet.ibm.com>
> --
Acked-by: Balbir Singh <bsinghar...@gmail.com>
On Tue, Jan 10, 2017 at 02:37:01PM +0530, Gautham R. Shenoy wrote:
> From: "Gautham R. Shenoy"
>
> Balbir pointed out that in idle_book3s.S and powernv/idle.c some
> functions and variables had power9 in their names while some others
> had arch300.
>
I would prefer power9 to arch300.
Balbir Singh.
T for the
> no-return variant and reuses the name IDLE_STATE_ENTER_SEQ
> for a variant that allows resuming operation at the instruction next
> to the idle-instruction.
>
> Signed-off-by: Gautham R. Shenoy <e...@linux.vnet.ibm.com>
> ---
> No changes from v4
>
Acked-by: Balbir Singh <bsinghar...@gmail.com>
), page);
This can be VM_BUG_ON_PAGE(1, page); hopefully the compiler does the right
thing here. I suspect this should be a BUG_ON, independent of CONFIG_DEBUG_VM.
> + page_ref_inc(page);
> + return;
> + }
> +
Balbir Singh.