On 01.02.21 14:23, David Hildenbrand wrote:
On 01.02.21 12:56, Yafang Shao wrote:
Currently %pGp only shows the names of page flags, rather than
the full information including section, node, zone, last cpupid and
KASAN tag. It is not easy to parse this information manually
because
On 01.02.21 15:30, Mike Rapoport wrote:
On Mon, Feb 01, 2021 at 10:32:44AM +0100, David Hildenbrand wrote:
On 30.01.21 23:10, Mike Rapoport wrote:
From: Mike Rapoport
The physical memory on an x86 system starts at address 0, but this is not
always reflected in the e820 map. For example, the BIOS
On 28.01.21 23:29, Oscar Salvador wrote:
On Wed, Jan 27, 2021 at 11:36:15AM +0100, David Hildenbrand wrote:
Extending on that, I just discovered that only x86-64, ppc64, and arm64
really support hugepage migration.
Maybe one approach with the "magic switch" really would be to disabl
What's your opinion about this? Should we take this approach?
I think trying to solve all the issues that could happen as the result of
not being able to dissolve a hugetlb page has made this extremely complex.
I know this is something we need to address/solve. We do not want to add
more unexpe
so removing start_pfn from init_memory_block()
- Added Acks
David Hildenbrand (2):
drivers/base/memory: don't store phys_device in memory blocks
Documentation: sysfs/memory: clarify some memory block device
properties
.../ABI/testing/sysfs-devices-memory | 58 ---
b
Cc: Ilya Dryomov
Cc: Vaibhav Jain
Cc: Tom Rix
Cc: Geert Uytterhoeven
Cc: linux-...@vger.kernel.org
Signed-off-by: David Hildenbrand
---
.../ABI/testing/sysfs-devices-memory | 5 ++--
.../admin-guide/mm/memory-hotplug.rst | 4 +--
drivers/base/memory.c
that the interface is legacy. Also
update documentation of the "state" and "valid_zones" properties.
Acked-by: Michal Hocko
Cc: Andrew Morton
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Oscar Salvador
Cc: Jonathan Corbet
Cc: David Hildenbrand
Cc: Greg Kroah-Hartman
Cc:
map_memory(addr);
}
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
IMHO, we should rip out that code here and enforce page alignment in
vmemmap_populate()/vmemmap_free().
Am I missing something?
Thanks David for bringing this up, I must say I was not aware that this
topic was ever discussed.
Yeah, last time I raised it was in
https://lkml.kernel.org/r/20200
On 02.02.21 09:46, Zhiyuan Dai wrote:
Signed-off-by: Zhiyuan Dai
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 18f6ee3..35db386 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3990,7 +3990,7 @@ void unmap_hugepage_range
On 02.02.21 11:12, yanfei...@windriver.com wrote:
From: Yanfei Xu
A gigantic page is a compound page and its order is greater than 1.
Thus it is always available for hpage_pincount. Let's remove this
meaningless if statement.
Signed-off-by: Yanfei Xu
---
mm/hugetlb.c | 4 +---
1 file changed, 1 i
);
map = rcu_dereference_protected(pn->shrinker_map, true);
- if (map)
- kvfree(map);
+ kvfree(map);
rcu_assign_pointer(pn->shrinker_map, NULL);
}
}
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
ULL;
}
spin_unlock_irq(&cache->free_lock);
- if (slots)
- kvfree(slots);
+ kvfree(slots);
}
}
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 02.02.21 13:35, Will Deacon wrote:
On Tue, Feb 02, 2021 at 12:32:15PM +, Will Deacon wrote:
On Tue, Feb 02, 2021 at 09:41:53AM +0530, Anshuman Khandual wrote:
pfn_valid() validates a pfn, but basically it checks for a valid struct page
backing that pfn. It should always return positiv
On 02.02.21 13:51, Will Deacon wrote:
On Tue, Feb 02, 2021 at 01:39:29PM +0100, David Hildenbrand wrote:
On 02.02.21 13:35, Will Deacon wrote:
On Tue, Feb 02, 2021 at 12:32:15PM +, Will Deacon wrote:
On Tue, Feb 02, 2021 at 09:41:53AM +0530, Anshuman Khandual wrote:
pfn_valid() validates
On 02.02.21 13:48, Mike Rapoport wrote:
On Tue, Feb 02, 2021 at 10:35:05AM +0100, Michal Hocko wrote:
On Mon 01-02-21 08:56:19, James Bottomley wrote:
I have also proposed potential ways out of this. Either the pool is not
fixed-sized and you make it regular unevictable memory (if direct map
@@ -962,7 +962,6 @@ remove_pte_table(pte_t *pte_start, unsigned long addr,
unsigned long end,
{
unsigned long next, pages = 0;
pte_t *pte;
- void *page_addr;
phys_addr_t phys_addr;
pte = pte_start + pte_index(addr);
@@ -983,42 +982,19 @@ remove_pte_table(pte
@@ -1088,10 +1150,10 @@ remove_pud_table(pud_t *pud_start, unsigned long addr,
unsigned long end,
pages++;
} else {
/* If here, we are freeing vmemmap pages. */
- memset((void *)a
On 02.02.21 15:22, Michal Hocko wrote:
On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
[...]
I think secretmem behaves much more like longterm GUP right now
("unmigratable", "lifetime controlled by user space", "cannot go on
CMA/ZONE_MOVABLE"). I'd eit
On 29.01.21 09:51, Michal Hocko wrote:
On Fri 29-01-21 09:21:28, Mike Rapoport wrote:
On Thu, Jan 28, 2021 at 02:01:06PM +0100, Michal Hocko wrote:
On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
And hugetlb pools may also be depleted by anybody by calling
mmap(MAP_HUGETLB) and there is no any
On 02.02.21 15:32, Michal Hocko wrote:
On Tue 02-02-21 15:26:20, David Hildenbrand wrote:
On 02.02.21 15:22, Michal Hocko wrote:
On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
[...]
I think secretmem behaves much more like longterm GUP right now
("unmigratable", "lifetim
On 04.02.21 14:43, Oscar Salvador wrote:
We never get to allocate 1GB pages when mapping the vmemmap range.
Drop the dead code both for the aligned and unaligned cases and leave
only the direct map handling.
Signed-off-by: Oscar Salvador
Suggested-by: David Hildenbrand
---
arch/x86/mm
ist traversal cycles.
I think this is correct. We will have addr >= vm_end for any VMA, so
there are no applicable VMAs.
Reviewed-by: David Hildenbrand
Signed-off-by: Miaohe Lin
---
mm/mlock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mlock.c b/mm/mlock.c
in
omic_set(compound_pincount_ptr(page), 0);
+ atomic_set(compound_pincount_ptr(page), 0);
}
/*
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 04.02.21 08:01, Anshuman Khandual wrote:
MAX_ORDER which invariably depends on FORCE_MAX_ZONEORDER can be a variable
for a given page size, depending on whether TRANSPARENT_HUGEPAGE is enabled
or not. In certain page size and THP combinations HUGETLB_PAGE_ORDER can be
greater than MAX_ORDER, m
memchr_inv() thinks that those unused parts are still in
use.
Fix this by marking the unused parts with PAGE_UNUSED, so memchr_inv()
will do the right thing and will let us free the PMD when the last user
of it is gone.
This patch is based on a similar patch by David Hildenbrand:
https://lore.kernel.org/linux
(&hstates[idx]));
+ nr_pages = round_down(nr_pages, pages_per_huge_page(&hstates[idx]));
switch (MEMFILE_ATTR(of_cft(of)->private)) {
case RES_RSVD_LIMIT:
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
> Am 08.02.2021 um 22:13 schrieb Mike Rapoport :
>
> On Mon, Feb 08, 2021 at 10:27:18AM +0100, David Hildenbrand wrote:
>> On 08.02.21 09:49, Mike Rapoport wrote:
>>
>> Some questions (and request to document the answers) as we now allow to have
>> unmovable a
On 09.02.21 09:59, Michal Hocko wrote:
On Mon 08-02-21 22:38:03, David Hildenbrand wrote:
Am 08.02.2021 um 22:13 schrieb Mike Rapoport :
On Mon, Feb 08, 2021 at 10:27:18AM +0100, David Hildenbrand wrote:
On 08.02.21 09:49, Mike Rapoport wrote:
Some questions (and request to document the
A lot of unevictable memory is a concern regardless of CMA/ZONE_MOVABLE.
As I've said, it is quite easy to land in a similar situation even with
tmpfs/MAP_ANON|MAP_SHARED on a swapless system. Neither of the two is
really uncommon. It would be even worse if those were allowed to
consume both
On 09.02.21 11:23, David Hildenbrand wrote:
A lot of unevictable memory is a concern regardless of CMA/ZONE_MOVABLE.
As I've said, it is quite easy to land in a similar situation even with
tmpfs/MAP_ANON|MAP_SHARED on a swapless system. Neither of the two is
really uncommon. It would be
ka
Signed-off-by: Yafang Shao
Acked-by: Vlastimil Babka
Reviewed-by: Miaohe Lin
Cc: David Hildenbrand
Cc: Matthew Wilcox
---
mm/slub.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 87ff086e68a4..2514c37ab4e4 100644
--- a/mm/slub.c
++
after this change, the output is,
[ 8846.517809] INFO: Slab 0xf42a2c60 objects=33 used=3
fp=0x60d32ca8 flags=0x17c0010200(slab|head)
Signed-off-by: Yafang Shao
Reviewed-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
Acked-by: David Rientjes
Acked-by: Christoph La
On 09.02.21 11:56, Yafang Shao wrote:
Currently %pGp only shows the names of page flags, rather than
the full information including section, node, zone, last cpupid and
KASAN tag. It is not easy to parse this information manually
because there are so many flavors. Let's interpret them in
__func__, count, ret);
+ pr_err("%s: %s: alloc failed, req-size: %zu pages, ret: %d\n",
+ __func__, cma->name, count, ret);
cma_debug_show_areas(cma);
}
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 09.02.21 14:25, Michal Hocko wrote:
On Tue 09-02-21 11:23:35, David Hildenbrand wrote:
[...]
I am constantly trying to fight for making more stuff MOVABLE instead of
going into the other direction (e.g., because it's easier to implement,
which feels like the wrong direction).
Maybe I a
On 09.02.21 18:50, Minchan Kim wrote:
__alloc_contig_migrate_range already has an lru_add_drain_all call
via migrate_prep. It's necessary to move LRU target pages onto the
LRU list so that they can be isolated. However, the lru_add_drain_all
call after __alloc_contig_migrate_range is pointless.
This pat
On 24.02.21 15:25, David Hildenbrand wrote:
+ tmp_end = min_t(unsigned long, end, vma->vm_end);
+ pages = populate_vma_page_range(vma, start, tmp_end, &locked);
+ if (!locked) {
+ mmap_read_lock(mm);
+
ffc0 | -256GB | ffc7 | 32 GB | kasan
+ ffcefee0 | -196GB | ffcefeff |2 MB | fixmap
+ ffceff00 | -196GB | ffce | 16 MB | PCI io
+ ffcf |
On 25.02.21 14:38, Arnd Bergmann wrote:
From: Arnd Bergmann
The inlining logic in clang-13 is rewritten to often not inline
some functions that were inlined by all earlier compilers.
In case of the memblock interfaces, this exposed a harmless bug
of a missing __init annotation:
WARNING: modpo
On 25.02.21 15:06, Arnd Bergmann wrote:
On Thu, Feb 25, 2021 at 2:47 PM David Hildenbrand wrote:
On 25.02.21 14:38, Arnd Bergmann wrote:
From: Arnd Bergmann
The inlining logic in clang-13 is rewritten to often not inline
some functions that were inlined by all earlier compilers.
In case
On 24.02.21 16:39, Mike Rapoport wrote:
From: Mike Rapoport
There could be struct pages that are not backed by actual physical memory.
This can happen when the actual memory bank is not a multiple of
SECTION_SIZE or when an architecture does not register memory holes
reserved by the firmware as
On 25.02.21 17:31, George Kennedy wrote:
: rsdp_address=bfbfa014
[ 0.066612] ACPI: RSDP 0xBFBFA014 24 (v02 BOCHS )
[ 0.067759] ACPI: XSDT 0xBFBF90E8 4C (v01 BOCHS BXPCFACP
0001 0113)
[ 0.069470] ACPI: FACP 0xBFBF5000 74 (v01 BOCHS BXPCFACP
On 25.02.21 18:06, Mike Rapoport wrote:
On Thu, Feb 25, 2021 at 04:59:06PM +0100, David Hildenbrand wrote:
On 24.02.21 16:39, Mike Rapoport wrote:
From: Mike Rapoport
There could be struct pages that are not backed by actual physical memory.
This can happen when the actual memory bank is not
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9bcc460e8bfe..95695483a622 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -42,7 +42,31 @@
#include "internal.h"
#include "shuffle.h"
-static bool memmap_on_memory_enabled;
+/*
+ * memory_hotplug.memmap_on_memory pa
On 09.02.21 14:38, Oscar Salvador wrote:
When struct page's size is not a multiple of PMD, these do not get
fully populated when adding sections, hence two sections will
intersect the same PMD. This goes against the vmemmap-per-device
premise, so reject it if that is the case.
Signed-off-by: O
On 09.02.21 14:38, Oscar Salvador wrote:
Many places expect us to pass a pageblock aligned range.
E.g: memmap_init_zone() needs a pageblock aligned range in order
to set the proper migrate type for it.
online_pages() needs to operate on a pageblock aligned range for
isolation purposes.
Make sur
On 09.02.21 14:38, Oscar Salvador wrote:
Enable x86_64 platform to use the MHP_MEMMAP_ON_MEMORY feature.
Signed-off-by: Oscar Salvador
---
arch/x86/Kconfig | 4
1 file changed, 4 insertions(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 72663de8b04c..81046b7adb10 100644
---
On 09.02.21 14:38, Oscar Salvador wrote:
Enable arm64 platform to use the MHP_MEMMAP_ON_MEMORY feature.
Signed-off-by: Oscar Salvador
---
arch/arm64/Kconfig | 4
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 87fd02a7a62f..d4fb29779cd4 100644
On 09.02.21 14:38, Oscar Salvador wrote:
Physical memory hotadd has to allocate a memmap (struct page array) for
the newly added memory section. Currently, alloc_pages_node() is used
for those allocations.
This has some disadvantages:
a) existing memory is consumed for that purpose
(eg
rn inject_fault(vcpu, rc_src, src, 0);
The rc_dest and rc_src handling towards the end is a little confusing,
but I have no real suggestion to make it easier to digest.
Only some suggestions to make the code a bit nicer to read. Apart from
that LGTM.
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
> Am 25.02.2021 um 22:43 schrieb Mike Kravetz :
>
> On 2/10/21 12:23 AM, David Hildenbrand wrote:
>>> On 08.02.21 11:38, Oscar Salvador wrote:
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -952,6 +952,17 @@ isolate_migratepages_block(s
> Am 26.02.2021 um 09:38 schrieb Michal Hocko :
>
> On Fri 26-02-21 09:35:10, Michal Hocko wrote:
>>> On Mon 22-02-21 14:51:36, Oscar Salvador wrote:
>>> alloc_contig_range will fail if it ever sees a HugeTLB page within the
>>> range we are trying to allocate, even when that page is free and c
a
---
Reviewed-by: David Hildenbrand
Thanks Mike!
--
Thanks,
David / dhildenb
On 20.05.20 07:25, teawater wrote:
> Hi David,
>
> Thanks for your work.
> I tried this version with cloud-hypervisor master. It worked very well.
>
> Best,
> Hui
Hi Hui,
thanks for testing!
Cheers!
--
Thanks,
David / dhildenb
On 29.07.20 12:47, Baoquan He wrote:
> On 07/28/20 at 04:07pm, David Hildenbrand wrote:
>> On 28.07.20 15:48, Baoquan He wrote:
>>> On 06/30/20 at 04:26pm, David Hildenbrand wrote:
>>>> Let's move the split comment regarding bootmem allocations and memory
>&
On 29.07.20 15:00, Mike Rapoport wrote:
> On Wed, Jul 29, 2020 at 11:35:20AM +0200, David Hildenbrand wrote:
>> On 29.07.20 11:31, Mike Rapoport wrote:
>>> Hi Justin,
>>>
>>> On Wed, Jul 29, 2020 at 08:27:58AM +, Justin He wrote:
>>>> Hi David
&g
On 29.07.20 15:24, Baoquan He wrote:
> On 06/30/20 at 04:26pm, David Hildenbrand wrote:
>> Inside has_unmovable_pages(), we have a comment describing how unmovable
>> data could end up in ZONE_MOVABLE - via "movable_core". Also, besides
>
On 29.07.20 16:18, Michael S. Tsirkin wrote:
> On Tue, Jul 28, 2020 at 03:31:43PM -0700, Andrew Morton wrote:
>> On Wed, 29 Jul 2020 08:20:53 +1000 Stephen Rothwell
>> wrote:
>>
>>> Hi Andrew,
>>>
>>> On Tue, 28 Jul 2020 14:55:53 -0700 Andrew Morton
>>> wrote:
config CONTIG_ALLOC
On 29.07.20 16:38, David Hildenbrand wrote:
> On 29.07.20 16:18, Michael S. Tsirkin wrote:
>> On Tue, Jul 28, 2020 at 03:31:43PM -0700, Andrew Morton wrote:
>>> On Wed, 29 Jul 2020 08:20:53 +1000 Stephen Rothwell
>>> wrote:
>>>
>>>> Hi Andrew,
>
On 29.07.20 19:31, Mike Kravetz wrote:
> On 6/30/20 7:26 AM, David Hildenbrand wrote:
>> Right now, if we have two isolations racing, we might trigger the
>> WARN_ON_ONCE() and to dump_page(NULL), dereferencing NULL. Let's just
>> return directly.
>
> Just
> Am 29.07.2020 um 20:36 schrieb Mike Kravetz :
>
> On 7/29/20 11:08 AM, David Hildenbrand wrote:
>> I have no clue what you mean with "reintroducing this abandoning of
>> pageblocks". All this patch is changing is not doing the dump_page() -
>> or am I
ed-by: Baoquan He
Reviewed-by: Pankaj Gupta
Acked-by: Mike Kravetz
Fixes: 4a55c0474a92 ("mm/hotplug: silence a lockdep splat with printk()")
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Cc: Qian Cai
Signed-off-by: David Hildenbrand
---
mm/page_isolation.c | 9 +
Baoquan He
Signed-off-by: David Hildenbrand
---
include/linux/mmzone.h | 34 ++
1 file changed, 34 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f6f884970511d..b8c49b2aff684 100644
--- a/include/linux/mmzone.h
+++ b/include/linu
ges within ZONE_MOVABLE
for a longer time).
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Mike Kravetz
Cc: Pankaj Gupta
Cc: Baoquan He
Signed-off-by: David Hildenbrand
---
drivers/virtio/virtio_mem.c | 47 +++--
1 file changed, 8
N_ON_ONCE() in set_migratetype_isolate()"
-- Keep curly braces on "else" case
- Replace "[PATCH v1 5/6] mm/page_alloc: restrict ZONE_MOVABLE optimization
in has_unmovable_pages() to memory offlining"
by "mm: document semantics of ZONE_MOVABLE"
-- Bra
Let's move the split comment regarding bootmem allocations and memory
holes, especially in the context of ZONE_MOVABLE, to the PageReserved()
check.
Reviewed-by: Baoquan He
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Cc: Mike Kravetz
Cc: Pankaj Gupta
Signed-off-by:
htlb_alloc_mask()).
Reviewed-by: Baoquan He
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Cc: Mike Kravetz
Cc: Pankaj Gupta
Signed-off-by: David Hildenbrand
---
mm/page_isolation.c | 15 ++-
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/mm/page_i
Let's clean it up a bit, simplifying error handling and getting rid of
the label.
Reviewed-by: Baoquan He
Reviewed-by: Pankaj Gupta
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Cc: Mike Kravetz
Signed-off-by: David Hildenbrand
---
mm/page_isolation.c | 17 +++
resource up again.
Signed-off-by: David Hildenbrand
---
drivers/virtio/virtio_mem.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index f26f5f64ae822..2396a8d67875e 100644
--- a/drivers/virtio/virtio_mem.c
llini
Cc: Roger Pau Monné
Cc: Julien Grall
Signed-off-by: David Hildenbrand
---
drivers/xen/balloon.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 77c57568e5d7f..644ae2e3798e2 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/x
c: Dan Williams
Cc: Jason Gunthorpe
Cc: Kees Cook
Cc: Ard Biesheuvel
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
include/linux/ioport.h | 4 ++--
kernel/resource.c | 49 --
mm/memory_hotplug.c| 22 +--
3 files changed
nné
Cc: Julien Grall
Signed-off-by: David Hildenbrand
---
include/linux/ioport.h | 3 +++
kernel/resource.c | 56 ++
2 files changed, 59 insertions(+)
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 52a91f5fa1a36..743b87fe2205b 10064
EABLE and extending our
add_memory*() interfaces with a flag, specifying that merging after adding
succeeded is acceptable. I'd like to avoid that complexity and code churn
for now.
David Hildenbrand (5):
kernel/resource: make release_mem_region_adjustable() never fail
kerne
ephen Hemminger
Cc: Wei Liu
Signed-off-by: David Hildenbrand
---
drivers/hv/hv_balloon.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index 32e3bc0aa665a..0745f7cc1727b 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_b
On 31.07.20 11:18, David Hildenbrand wrote:
Grml, forgot to add cc: list for this patch, ccing the right people.
> virtio-mem adds memory in memory block granularity, to be able to
> remove it in the same granularity again later, and to grow slowly on
> demand. This, however, results i
On 11.08.20 04:21, Andrew Morton wrote:
> On Mon, 10 Aug 2020 09:56:32 +0200 David Hildenbrand wrote:
>
>> On 04.08.20 21:41, David Hildenbrand wrote:
>>> @Andrew can we give this a churn and consider it for v5.9 in case there
>>> are no more comments?
>>
On 10.08.20 18:10, Charan Teja Reddy wrote:
> The following race is observed with repeated online, offline and a
> delay between two successive onlines of memory blocks in the movable zone.
>
> P1                                     P2
>
> Online the first memory block in
> the movable zone
tf(m, ", %ld", zone->lowmem_reserve[i]);
> seq_putc(m, ')');
>
> + /* If unpopulated, no other information is useful */
> + if (!populated_zone(zone)) {
> + seq_putc(m, '\n');
> + return;
> + }
> +
On 11.08.20 15:11, Charan Teja Kalla wrote:
> Thanks David for the comments.
>
> On 8/11/2020 1:59 PM, David Hildenbrand wrote:
>> On 10.08.20 18:10, Charan Teja Reddy wrote:
>>> The following race is observed with the repeated online, offline and a
>>> delay
On 11.08.20 14:58, Charan Teja Reddy wrote:
> The following race is observed with repeated online, offline and a
> delay between two successive onlines of memory blocks in the movable zone.
>
> P1                                     P2
>
> Online the first memory block in
> the movable zone
On 11.08.20 11:44, Roger Pau Monne wrote:
> This is in preparation for the logic behind MEMORY_DEVICE_DEVDAX also
> being used by non DAX devices.
>
> No functional change intended.
>
> Signed-off-by: Roger Pau Monné
> ---
> Cc: Dan Williams
> Cc: Vishal Verma
> Cc: Dave Jiang
> Cc: Andrew Mo
On 12.08.20 11:46, Charan Teja Kalla wrote:
>
> Thanks David for the inputs.
>
> On 8/12/2020 2:35 AM, David Hildenbrand wrote:
>> On 11.08.20 14:58, Charan Teja Reddy wrote:
>>> The following race is observed with the repeated online, offline and a
>>> de
[...]
> Well, no v5.8-rc8 to line this up for v5.9, so next best is early
> integration into -mm before other collisions develop.
>
> Chatted with Justin offline and it currently appears that the missing
> numa information is the fault of the platform firmware to populate all
> the necessary NUMA
On 03.08.20 08:10, pullip@samsung.com wrote:
> From: Cho KyongHo
>
> LPDDR5 introduces rank switch delay. If three successive DRAM accesses
> happen and the first and the second ones access one rank and the last
> access happens on the other rank, the latency of the last access will
> be lon
> build_all_zonelists(NULL);
> - else
> - zone_pcp_update(zone);
> + zone_pcp_update(zone);
>
> init_per_zone_wmark_min();
>
>
Does, in general, look sane to me.
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 03.08.20 15:28, Charan Teja Kalla wrote:
> Thanks David for the comments.
>
> On 8/3/2020 1:35 PM, David Hildenbrand wrote:
>> On 02.08.20 14:54, Charan Teja Reddy wrote:
>>> When onlining a first memory block in a zone, pcp lists are not updated
>>> thus
On 05.08.20 23:49, Wei Yang wrote:
> On Fri, Jul 03, 2020 at 11:18:28AM +0800, Wei Yang wrote:
>> There are two code path which invoke __populate_section_memmap()
>>
>> * sparse_init_nid()
>> * sparse_add_section()
>>
>> For both cases, we are sure the memory range is sub-section aligned.
>>
>> *
On 06.08.20 15:35, Vlastimil Babka wrote:
> On 7/30/20 11:34 AM, David Hildenbrand wrote:
>> Let's clean it up a bit, simplifying error handling and getting rid of
>> the label.
>
> Nit: the label was already removed by patch 1/6?
>
Ack, leftover from reshuffling
On 07.08.20 06:32, Andrew Morton wrote:
> On Fri, 3 Jul 2020 18:28:23 +0530 Srikar Dronamraju
> wrote:
>
>>> The memory hotplug changes that somehow because you can hotremove numa
>>> nodes and therefore make the nodemask sparse but that is not a common
>>> case. I am not sure what would happen
> Am 08.08.2020 um 13:39 schrieb kernel test robot :
>
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
> master
> head: 449dc8c97089a6e09fb2dac4d92b1b7ac0eb7c1e
> commit: 5f1f79bbc9e26fa9412fa9522f957bb8f030c442 virtio-mem: Paravirtualized
> memory hotplug
> dat
On 07.08.20 09:08, Pekka Enberg wrote:
> Hi Cho and David,
>
> On Mon, Aug 3, 2020 at 10:57 AM David Hildenbrand wrote:
>>
>> On 03.08.20 08:10, pullip@samsung.com wrote:
>>> From: Cho KyongHo
>>>
>>> LPDDR5 introduces rank switch delay. If th
On 10.08.20 04:24, Rong Chen wrote:
>
>
> On 8/8/20 8:44 PM, David Hildenbrand wrote:
>>
>>> Am 08.08.2020 um 13:39 schrieb kernel test robot :
>>>
>>> tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
>>> master
&
On 04.08.20 21:41, David Hildenbrand wrote:
> @Andrew can we give this a churn and consider it for v5.9 in case there
> are no more comments?
@Andrew, Ping, so I assume we'll target v5.10?
>
> Patch #1-#4,#6 have RBss or ACKs, patch #5 is virtio-mem stuff maintained
> by
irtio_cread(vm->vdev, struct virtio_mem_config, region_size,
> - &vm->region_size);
> + virtio_cread_le(vm->vdev, struct virtio_mem_config, addr, &vm->addr);
> + virtio_cread_le(vm->vdev, struct virtio_mem_config, region_size,
> + &vm->region_size);
>
> /*
>* We always hotplug memory in memory block granularity. This way,
>
Acked-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 18.08.20 08:58, Michal Hocko wrote:
> On Tue 18-08-20 11:58:49, Anshuman Khandual wrote:
>>
>>
>> On 08/18/2020 11:35 AM, Michal Hocko wrote:
>>> On Tue 18-08-20 09:52:02, Anshuman Khandual wrote:
Currently a debug message is printed describing the reason for memory range
offline failu
On 18.08.20 05:05, Wei Yang wrote:
> On Mon, Aug 17, 2020 at 07:07:04PM +0200, David Hildenbrand wrote:
>> On 17.08.20 18:05, Alexander Duyck wrote:
>>>
>>>
>>> On 8/17/2020 2:35 AM, David Hildenbrand wrote:
>>>> On 17.08.20 10:48,
On 12.08.20 08:01, Srikar Dronamraju wrote:
> Hi Andrew, Michal, David
>
> * Andrew Morton [2020-08-06 21:32:11]:
>
>> On Fri, 3 Jul 2020 18:28:23 +0530 Srikar Dronamraju
>> wrote:
>>
The memory hotplug changes that somehow because you can hotremove numa
nodes and therefore make the
", not "pte_alloc_pne". Let's fix that.
"
Reviewed-by: David Hildenbrand
> Signed-off-by: Yanfei Xu
> ---
> mm/memory.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index c3a83f4ca851..9cc
On 18.08.20 10:04, Hyesoo Yu wrote:
> This patch adds support for a chunk heap that allows for buffers
> that are made up of a list of fixed size chunks taken from a CMA.
> Chunk sizes are configured when the heaps are created.
>
> Signed-off-by: Hyesoo Yu
> ---
> drivers/dma-buf/heaps/Kconfig