apd")
> Signed-off-by: Pavel Tatashin
> Reviewed-by: Steven Sistare
> Reviewed-by: Daniel Jordan
> Reviewed-by: Bob Picco
Considering that some HW might behave strangely and this would be rather
hard to debug, I would be tempted to mark this for stable. It should also
be merged
ter_page_bootmem_info();
> +
> /* Register memory areas for /proc/kcore */
> kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
>PAGE_SIZE, KCORE_OTHER);
> --
> 2.14.0
--
Michal Hocko
SUSE Labs
;
> static void __init trim_low_memory_range(void)
> {
> - memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
> + unsigned long min_pfn = find_min_pfn_with_active_regions();
> + phys_addr_t base = min_pfn << PAGE_SHIFT;
> +
> + memblock_reserve(base, ALIGN(reserve_low, PAGE_SIZE));
> }
>
> /*
> --
> 2.14.0
| 31 +---
> mm/sparse-vmemmap.c | 10 ++-
> mm/sparse.c | 6 +-
> 14 files changed, 356 insertions(+), 88 deletions(-)
>
> --
> 2.14.0
On Thu 27-07-17 21:50:35, Aneesh Kumar K.V wrote:
>
>
> On 07/27/2017 06:31 PM, Michal Hocko wrote:
> >On Thu 27-07-17 11:48:26, Aneesh Kumar K.V wrote:
> >>For ppc64, we want to call this function when we are not running as guest.
> >
> >What does this
On Thu 27-07-17 21:27:37, Aneesh Kumar K.V wrote:
>
>
> On 07/27/2017 06:27 PM, Michal Hocko wrote:
> >On Thu 27-07-17 14:07:56, Aneesh Kumar K.V wrote:
> >>Instead of marking the pmd ready for split, invalidate the pmd. This should
> >>take care of powerpc req
On Thu 27-07-17 21:18:35, Aneesh Kumar K.V wrote:
>
>
> On 07/27/2017 06:24 PM, Michal Hocko wrote:
> >EMISSING_CHANGELOG
> >
> >besides that no user actually uses the return value. Please fold this
> >into the patch which uses the new functionality.
>
>
; return 0;
>
> found:
> --
> 2.13.3
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majord...@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: em...@kvack.org
e the pmd_trans_huge
> - * and pmd_trans_splitting must remain set at all times on the pmd
> - * until the split is complete for this pmd), then we flush the SMP TLB
> - * and finally we write the non-huge version of the pmd entry with
> - * pmd_populate.
> - */
> - old = pmdp_invalidate(vma, haddr, pmd);
> -
> - /*
> - * Transfer dirty bit using value returned by pmd_invalidate() to be
> - * sure we don't race with CPU that can set the bit under us.
> - */
> - if (pmd_dirty(old))
> - SetPageDirty(page);
> -
> pmd_populate(mm, pmd, pgtable);
>
> if (freeze) {
> --
> 2.13.3
>
t; {
> --
> 2.13.3
>
*/
> serialize_against_pte_lookup(vma->vm_mm);
> + return __pmd(old_pmd);
> }
>
> static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
> --
> 2.13.3
>
On Thu 13-07-17 08:53:52, Benjamin Herrenschmidt wrote:
> On Wed, 2017-07-12 at 09:23 +0200, Michal Hocko wrote:
> >
> > >
> > > Ideally the MMU looks at the PTE for keys, in order to enforce
> > > protection. This is the case with x86 and is the case with
On Wed 12-07-17 09:23:37, Michal Hocko wrote:
> On Tue 11-07-17 12:32:57, Ram Pai wrote:
[...]
> > Ideally the MMU looks at the PTE for keys, in order to enforce
> > protection. This is the case with x86 and is the case with power9 Radix
> > page table. Hence the keys have
On Tue 11-07-17 12:32:57, Ram Pai wrote:
> On Tue, Jul 11, 2017 at 04:52:46PM +0200, Michal Hocko wrote:
> > On Wed 05-07-17 14:21:37, Ram Pai wrote:
> > > Memory protection keys enable applications to protect its
> > > address space from inadvertent access or c
do
you need to store anything in the pte? My understanding of PKEYs is that
the setup and teardown should be very cheap and so no page tables have
to be updated. Or do I just misunderstand what you wrote here?
PARC by removing memset() from memblock, and
> using 8 stx in __init_single_page(). It appears we never miss L1 in
> __init_single_page() after the initial 8 stx.
OK, that is good to hear and it actually matches my understanding that
writes to a single cacheline shouldn't add a noticeable overhead.
Thanks!
it would be slower but would that be
measurable? I am sorry to be so persistent here but I would be much
happier if this didn't depend on the deferred initialization. If this is
absolutely a no-go then I can live with that of course.
memory traffic for exclusive cache line and
struct pages are very likely to be exclusive at that stage.
If that doesn't fly then so be it but I have to confess I still do not
understand why that is not the case.
On Mon 15-05-17 16:44:26, Pasha Tatashin wrote:
> On 05/15/2017 03:38 PM, Michal Hocko wrote:
> >I do not think this is the right approach. Your measurements just show
> >that sparc could have a more optimized memset for small sizes. If you
> >keep the same memset
re too worried then make it opt-in and make
it depend on ARCH_WANT_PER_PAGE_INIT and make it enabled for x86 and
sparc after memset optimization.
On Wed 10-05-17 11:19:43, David S. Miller wrote:
> From: Michal Hocko
> Date: Wed, 10 May 2017 16:57:26 +0200
>
> > Have you measured that? I do not think it would be super hard to
> > measure. I would be quite surprised if this added much if anything at
> > all as t
nce count and other struct members. Almost nobody should be
looking at our page at this time and stealing the cache line. On the
other hand a large memcpy will basically wipe everything away from the
cpu cache. Or am I missing something?
Correct, when memory is hotplugged, to gain the benefit of this fix, and
> also not to regress by actually double zeroing "struct pages" we should not
> zero it out. However, I do not really have means to test it.
It should be pretty easy to test with kvm, but I can help with testing
on the real HW as well.
Thanks!
unsigned long align,
unsigned long goal)
{
- return memblock_virt_alloc_try_nid(size, align, goal,
+ return memblock_virt_alloc_core(size, align, goal,
BOOTMEM_ALLOC_ACCESSIBLE, node);
}
On Fri 24-02-17 17:40:25, Vitaly Kuznetsov wrote:
> Michal Hocko writes:
>
> > On Fri 24-02-17 17:09:13, Vitaly Kuznetsov wrote:
[...]
> >> While this will most probably work for me I still disagree with the
> >> concept of 'one size fits all' here and
On Fri 24-02-17 17:09:13, Vitaly Kuznetsov wrote:
> Michal Hocko writes:
>
> > On Fri 24-02-17 16:05:18, Vitaly Kuznetsov wrote:
> >> Michal Hocko writes:
> >>
> >> > On Fri 24-02-17 15:10:29, Vitaly Kuznetsov wrote:
> > [...]
> >>
On Fri 24-02-17 16:05:18, Vitaly Kuznetsov wrote:
> Michal Hocko writes:
>
> > On Fri 24-02-17 15:10:29, Vitaly Kuznetsov wrote:
[...]
> >> Just did a quick (and probably dirty) test, increasing guest memory from
> >> 4G to 8G (32 x 128mb blocks) require 68Mb
On Fri 24-02-17 15:10:29, Vitaly Kuznetsov wrote:
> Michal Hocko writes:
>
> > On Thu 23-02-17 19:14:27, Vitaly Kuznetsov wrote:
[...]
> >> Virtual guests under stress were getting into OOM easily and the OOM
> >> killer was even killing the udev process tr
On Thu 23-02-17 19:14:27, Vitaly Kuznetsov wrote:
> Michal Hocko writes:
>
> > On Thu 23-02-17 17:36:38, Vitaly Kuznetsov wrote:
> >> Michal Hocko writes:
> > [...]
> >> > Is a grow from 256M -> 128GB really something that happens in real life?
>
On Thu 23-02-17 17:36:38, Vitaly Kuznetsov wrote:
> Michal Hocko writes:
[...]
> > Is a grow from 256M -> 128GB really something that happens in real life?
> > Don't get me wrong but to me this sounds quite exaggerated. Hotmem add
> > which is an operation which
On Thu 23-02-17 16:49:06, Vitaly Kuznetsov wrote:
> Michal Hocko writes:
>
> > On Thu 23-02-17 14:31:24, Vitaly Kuznetsov wrote:
> >> Michal Hocko writes:
> >>
> >> > On Wed 22-02-17 10:32:34, Vitaly Kuznetsov wrote:
> >> > [...]
> >
On Thu 23-02-17 14:31:24, Vitaly Kuznetsov wrote:
> Michal Hocko writes:
>
> > On Wed 22-02-17 10:32:34, Vitaly Kuznetsov wrote:
> > [...]
> >> > There is a workaround in that a user could online the memory or have
> >> > a udev rule to online th
d/udev folks
> continuously refused to add this udev rule to udev calling it stupid as
> it actually is an unconditional and redundant ping-pong between kernel
> and udev.
This is a policy and as such it doesn't belong in the kernel. The whole
auto-enable in the kernel is just plain wrong IMHO and we shouldn't have
merged it.
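For reference, the udev rule being argued about is commonly written along these lines (this exact form is an assumption; distributions have shipped slight variants of it):

```
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
```

Every hotplug uevent bounces out to userspace only so that udev can immediately write "online" back into sysfs, which is precisely the redundant kernel/udev ping-pong being criticized here.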
On Thu 24-11-16 00:05:12, Balbir Singh wrote:
>
>
> On 23/11/16 20:28, Michal Hocko wrote:
[...]
> > I am more worried about synchronization with the hotplug which tends to
> > be a PITA in places where we were simply safe by definition until now. We
> > do not have a
On Wed 23-11-16 19:37:16, Balbir Singh wrote:
>
>
> On 23/11/16 19:07, Michal Hocko wrote:
> > On Wed 23-11-16 18:50:42, Balbir Singh wrote:
> >>
> >>
> >> On 23/11/16 18:25, Michal Hocko wrote:
> >>> On Wed 23-11-16 15:36:51, Balbir Sing
On Wed 23-11-16 18:50:42, Balbir Singh wrote:
>
>
> On 23/11/16 18:25, Michal Hocko wrote:
> > On Wed 23-11-16 15:36:51, Balbir Singh wrote:
> >> In the absence of hotplug we use extra memory proportional to
> >> (possible_nodes - online_nodes) * number_of_c
rew Morton
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: Vladimir Davydov
>
> I've tested this patches under a VM with two nodes and movable
> nodes enabled. I've offlined nodes and checked that the system
> and cgroups with tasks deep in the hierarchy continue to
specific data structures.
Thanks!
On Wed 19-10-16 10:23:55, Dave Hansen wrote:
> On 10/19/2016 10:01 AM, Michal Hocko wrote:
> > The question I had earlier was whether this has to be an explicit FOLL
> > flag used by g-u-p users or we can just use it internally when mm !=
> > current->mm
>
> The rea
On Wed 19-10-16 09:49:43, Dave Hansen wrote:
> On 10/19/2016 02:07 AM, Michal Hocko wrote:
> > On Wed 19-10-16 09:58:15, Lorenzo Stoakes wrote:
> >> On Tue, Oct 18, 2016 at 05:30:50PM +0200, Michal Hocko wrote:
> >>> I am wondering whether we can go further. E.g. it i
On Wed 19-10-16 10:06:46, Lorenzo Stoakes wrote:
> On Wed, Oct 19, 2016 at 10:52:05AM +0200, Michal Hocko wrote:
> > yes this is the desirable and expected behavior.
> >
> > > wonder if this is desirable behaviour or whether this ought to be limited
> > > to
>
On Wed 19-10-16 09:58:15, Lorenzo Stoakes wrote:
> On Tue, Oct 18, 2016 at 05:30:50PM +0200, Michal Hocko wrote:
> > I am wondering whether we can go further. E.g. it is not really clear to
> > me whether we need an explicit FOLL_REMOTE when we can in fact check
> > mm !=
On Wed 19-10-16 09:40:45, Lorenzo Stoakes wrote:
> On Wed, Oct 19, 2016 at 10:13:52AM +0200, Michal Hocko wrote:
> > On Wed 19-10-16 09:59:03, Jan Kara wrote:
> > > On Thu 13-10-16 01:20:18, Lorenzo Stoakes wrote:
> > > > This patch removes the write parameter
_FORCE for access_remote_vm? I mean FOLL_FORCE is a really
non-trivial thing. It doesn't obey vma permissions so we should really
minimize its usage. Do all of those users really need FOLL_FORCE?
Anyway I would rather see the flag explicit and used at more places than
hidden behind a helper function.
L_FORCE users was always a nightmare
and the flag behavior is really subtle so we should better be explicit
about it. I haven't gone through each patch separately but rather
applied the whole series and checked the resulting diff. This all seems
OK to me and feel free to add
Acked-by: Micha
On Mon 03-10-16 14:47:16, Michal Hocko wrote:
> [Sorry I have only now noticed this email]
>
> On Thu 04-08-16 16:44:10, Paul Mackerras wrote:
[...]
> > [1.717648] Call Trace:
> > [1.717687] [c00ff0707b80] [c0270d08]
> > refresh_zone_stat_thresh
continue;
> for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
> int v;
>
> @@ -748,6 +758,8 @@ void cpu_vm_stats_fold(int cpu)
> for_each_online_pgdat(pgdat) {
> struct per_cpu_nodestat *p;
>
> + if (!pgdat->per_cpu_nodestats)
> + continue;
> p = per_cpu_ptr(pgdat->per_cpu_nodestats, cpu);
>
> for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
>
e
subtle and it is not entirely clear when to use which one. I agree that
the reclaim path is the most critical one so the patch seems OK to me.
At least from a quick glance it should help with the reported issue so
feel free to add
Acked-by: Michal Hocko
I expect we might want to turn
m hashes.
So I think that this is just a fallout from how fadump is hackish and
tricky. Reserving a large portion or even the majority of memory from the
kernel just sounds like a minefield. This patchset is dealing with one particular
problem. Fair enough, it seems like the easiest way to go and something
that would
ace fcc50906d9164c56 ]---
> *15:34:57* [ 862.550562]
> *15:35:18* [ 883.551577] INFO: rcu_sched self-detected stall on CPU
> *15:35:18* [ 883.551578] INFO: rcu_sched self-detected stall on CPU
> *15:35:18* [ 883.551588] 2-...: (5249 ticks this GP)
> idle=cc5/141/0 s
On Thu 19-05-16 11:07:09, Arnd Bergmann wrote:
[...]
> > 6 mm/page_alloc.c:3651:6: warning: 'compact_result' may be used
> > uninitialized in this function [-Wmaybe-uninitialized]
>
> I'm surprised this one is still there, I sent a patch but Michal Hocko
_HUGE_MASK flag sounds more appropriate than the other one
> in the context. Hence change it back.
Yes, mixing SHM_HUGE_MASK with MAP_HUGE_SHIFT is not only misleading,
it might bite us later should either of the two change.
>
> Signed-off-by: Anshuman Khandual
Acked-by: Michal Hocko
>
On Wed 06-04-16 15:39:17, Aneesh Kumar K.V wrote:
> Michal Hocko writes:
>
> > [ text/plain ]
> > On Tue 05-04-16 12:05:47, Sukadev Bhattiprolu wrote:
> > [...]
> >> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> >> ind
details look for comment in __pte_alloc().
> + */
> + smp_wmb();
> +
what is the pairing memory barrier?
> spin_lock(&mm->page_table_lock);
> #ifdef CONFIG_PPC_FSL_BOOK3E
> /*
> --
> 1.8.3.1
ave more consistently. Feel free to add to all patches
Acked-by: Michal Hocko
On a side note. I have received patches with broken threading - the
follow up patches are not in the single thread under this cover email.
I thought this was the default behavior of git send-email but maybe your
(older
On Fri 28-08-15 16:31:30, Michal Hocko wrote:
> On Wed 26-08-15 14:24:23, Eric B Munson wrote:
> > The previous patch introduced a flag that specified pages in a VMA
> > should be placed on the unevictable LRU, but they should not be made
> > present when the area is created.
x-mips.org
> Cc: linux-par...@vger.kernel.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: sparcli...@vger.kernel.org
> Cc: linux-xte...@linux-xtensa.org
> Cc: linux-a...@vger.kernel.org
> Cc: linux-...@vger.kernel.org
> Cc: linux...@kvack.org
I haven't checked the arch speci
On Thu 20-08-15 16:14:34, Michal Hocko wrote:
> On Thu 20-08-15 13:43:21, Vlastimil Babka wrote:
> > Perform the same debug checks in alloc_pages_node() as are done in
> > __alloc_pages_node(), by making the former function a wrapper of the latter
> > one.
> >
_mask));
> + return __alloc_pages_node(nid, gfp_mask, order);
> }
>
> #ifdef CONFIG_NUMA
> --
> 2.5.0
>
V
> Acked-by: Christoph Lameter
> Cc: Pekka Enberg
> Cc: Joonsoo Kim
> Cc: Naoya Horiguchi
> Cc: Tony Luck
> Cc: Fenghua Yu
> Cc: Arnd Bergmann
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Acked-by: Michael Ellerman
> Cc: Gleb Natapov
> Cc: Paolo
te making it useful.
Looks good to me
Acked-by: Michal Hocko
> Signed-off-by: Eric B Munson
> Acked-by: Vlastimil Babka
> Cc: Michal Hocko
> Cc: Vlastimil Babka
> Cc: Heiko Carstens
> Cc: Geert Uytterhoeven
> Cc: Catalin Marinas
> Cc: Stephen Rothwell
> Cc: Guen
On Tue 28-07-15 09:49:42, Eric B Munson wrote:
> On Tue, 28 Jul 2015, Michal Hocko wrote:
>
> > [I am sorry but I didn't get to this sooner.]
> >
> > On Mon 27-07-15 10:54:09, Eric B Munson wrote:
> > > Now that VM_LOCKONFAULT is a modifier to V
T]),
munlockall(MCL_CURRENT) resp. munlockall(MCL_CURRENT|MCL_FUTURE) but
other combinations sound weird to me.
Anyway munlock with flags opens new doors of trickiness.
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
On Tue 23-06-15 14:45:17, Vlastimil Babka wrote:
> On 06/22/2015 04:18 PM, Eric B Munson wrote:
> >On Mon, 22 Jun 2015, Michal Hocko wrote:
> >
> >>On Fri 19-06-15 12:43:33, Eric B Munson wrote:
[...]
> >>>My thought on detecting was that someone might want to
On Mon 22-06-15 10:18:06, Eric B Munson wrote:
> On Mon, 22 Jun 2015, Michal Hocko wrote:
>
> > On Fri 19-06-15 12:43:33, Eric B Munson wrote:
[...]
> > > Are you objecting to the addition of the VMA flag VM_LOCKONFAULT, or the
> > > new MAP_LOCKONFAULT flag (or
On Fri 19-06-15 12:43:33, Eric B Munson wrote:
> On Fri, 19 Jun 2015, Michal Hocko wrote:
>
> > On Thu 18-06-15 16:30:48, Eric B Munson wrote:
> > > On Thu, 18 Jun 2015, Michal Hocko wrote:
> > [...]
> > > > Wouldn't it be much more reasonable and strai
On Thu 18-06-15 16:30:48, Eric B Munson wrote:
> On Thu, 18 Jun 2015, Michal Hocko wrote:
[...]
> > Wouldn't it be much more reasonable and straightforward to have
> > MAP_FAULTPOPULATE as a counterpart for MAP_POPULATE which would
> > explicitly disallow any form of
> for the entire mapping lock as if MAP_LOCKED was used.
>
> Signed-off-by: Eric B Munson
> Cc: Michal Hocko
> Cc: linux-al...@vger.kernel.org
> Cc: linux-ker...@vger.kernel.org
> Cc: linux-m...@linux-mips.org
> Cc: linux-par...@vger.kernel.org
> Cc: linuxppc-dev@
, but still significantly
> more work required than the LOCKONFAULT case.
>
> Startup average:0.047
> Processing average: 86.431
Have you played with batching? Has it helped? Anyway it is to be
expected that the overhead will be higher than a single mmap call. The
question is whether you can live with it because adding a new semantic
to mlock sounds trickier and MAP_LOCKED is tricky enough already...
as that way but you can mitigate that to a certain extent by mapping
larger-than-PAGE_SIZE chunks in the fault handler. Would that work in
your usecase?
On Fri 03-04-15 10:43:57, Nishanth Aravamudan wrote:
> On 31.03.2015 [11:48:29 +0200], Michal Hocko wrote:
[...]
> > I would expect kswapd would be looping endlessly because the zone
> > wouldn't be balanced obviously. But I would be wrong... because
> > pg
FAICS (build_zonelists doesn't ignore those
zones right?) and so the kswapd would be woken up easily. So it looks
like a mess.
There are possibly other places which rely on populated_zone or
for_each_populated_zone without checking reclaimability. Are those
working as expected?
That being
. I'll add this
> > to my series :)
>
> In looking at the code, I am wondering about the following:
>
> init_zone_allows_reclaim() is called for each nid from
> free_area_init_node(). Which means that calculate_node_totalpages for
> other "later" nids and
On Wed 19-02-14 00:20:21, David Rientjes wrote:
> On Wed, 19 Feb 2014, Michal Hocko wrote:
>
> > > I strongly suspect that the patch is correct since powerpc node distances
> > > are different than the architectures you're talking about and get doubled
> &g
ntly interested in is based on 3.0
kernel where zone_reclaim_mode is set in build_zonelists which relies on
find_next_best_node which iterates only N_HIGH_MEMORY nodes which should
have non-zero present pages.
[...]
On Tue 18-02-14 15:34:05, Nishanth Aravamudan wrote:
> Hi Michal,
>
> On 18.02.2014 [10:06:58 +0100], Michal Hocko wrote:
> > Hi,
> > I have just noticed that ppc has RECLAIM_DISTANCE reduced to 10 set by
> > 56608209d34b (powerpc/numa: Set a smaller value for RECLAIM_
On Tue 18-02-14 14:27:11, David Rientjes wrote:
> On Tue, 18 Feb 2014, Michal Hocko wrote:
>
> > Hi,
> > I have just noticed that ppc has RECLAIM_DISTANCE reduced to 10 set by
> > 56608209d34b (powerpc/numa: Set a smaller value for RECLAIM_DISTANCE to
> > enable zon
will send a revert I would like to understand what
led to the patch in the first place. I do not see why would PPC use only
LOCAL_DISTANCE and REMOTE_DISTANCE distances and in fact machines I have
seen use different values.
Anton, could you comment please?
On Mon 28-01-13 09:33:49, Tang Chen wrote:
> On 01/25/2013 09:17 PM, Michal Hocko wrote:
> >On Wed 23-01-13 06:29:31, Simon Jeons wrote:
> >>On Tue, 2013-01-22 at 19:42 +0800, Tang Chen wrote:
> >>>Here are some bug fix patches for physical memory hot-remove. All t
ored into a git tree.
http://git.kernel.org/?p=linux/kernel/git/mhocko/mm.git;a=summary
It contains only Memory management patches on top of the last major
release (since-.X.Y branch).
'm not so sure...
My understanding is that doing that in arch code is more appropriate
because it makes the generic code less complicated. But I do not have
any strong opinion on that. Looking at other ARCH_ENABLE_MEMORY_HOTPLUG
and others suggests that we should be consistent with that.
mented functions suggested by Michal.
I think that both patches should be merged into one and put to Andrew's
queue as
memory-hotplug-implement-register_page_bootmem_info_section-of-sparse-vmemmap-fix.patch
rather than a separate patch.
t I am thinking about that more, maybe it would
be cleaner to put the select into arch/x86/Kconfig and do it
same as ARCH_ENABLE_MEMORY_{HOTPLUG,HOTREMOVE} (and name it
ARCH_HAVE_BOOTMEM_INFO_NODE).
; unsigned int fault)
> @@ -879,7 +865,14 @@ mm_fault_error(struct pt_regs *regs, unsigned long
> error_code,
> return 1;
> }
>
> - out_of_memory(regs, error_code, address);
> +