On Tue, Dec 08, 2020 at 09:59:19PM -0800, John Hubbard wrote:
> On 12/8/20 9:28 AM, Joao Martins wrote:
> > Add a new flag for struct dev_pagemap which designates that a a pagemap
>
> a a
>
> > is described as a set of compound pages or in other words, that how
> > pages are grouped together in
On 12/8/20 9:28 AM, Joao Martins wrote:
Replace vmem_altmap with an vmem_context argument. That let us
express how the vmemmap is gonna be initialized e.g. passing
flags and a page size for reusing pages upon initializing the
vmemmap.
How about this instead:
Replace the vmem_altmap argument
On 12/8/20 9:28 AM, Joao Martins wrote:
Add a new flag for struct dev_pagemap which designates that a a pagemap
a a
is described as a set of compound pages or in other words, that how
pages are grouped together in the page tables are reflected in how we
describe struct pages. This means that
On 12/8/20 9:29 AM, Joao Martins wrote:
Similar to follow_hugetlb_page(), add a follow_devmap_page() which, rather
than calling follow_page() per 4K page in a PMD/PUD, does so for the
entire PMD: we lock the pmd/pud, get all the pages, and unlock.
While doing so, we only change the refcount once
On 12/8/20 9:29 AM, Joao Martins wrote:
Take advantage of the newly added unpin_user_pages() batched
refcount update by calculating a page array from an SGL
(same size as the one used in ib_mem_get()) and calling
unpin_user_pages() with that.
unpin_user_pages() will check on consecutive pages
On 12/8/20 11:34 AM, Jason Gunthorpe wrote:
On Tue, Dec 08, 2020 at 05:28:59PM +, Joao Martins wrote:
Rather than decrementing the ref count one by one, we
walk the page array and check which pages belong to the same
compound_head. Later on we decrement the calculated number
of references in a
On Tue, Dec 8, 2020 at 8:17 PM Aneesh Kumar K.V wrote:
>
> On 12/8/20 3:30 AM, Dan Williams wrote:
> > On Mon, Oct 5, 2020 at 6:01 PM Santosh Sivaraj wrote:
> >
>
> ...
>
> >> +static int ndtest_blk_do_io(struct nd_blk_region *ndbr, resource_size_t dpa,
> >> +		void *iobuf,
On 12/8/20 9:28 AM, Joao Martins wrote:
Much like hugetlbfs or THPs, we treat device pagemaps with
compound pages the same way GUP handles other compound pages.
Rather than incrementing the refcount for every 4K page, we record
all subpages and increment by the @refs amount *once*.
Performance measured
On 12/8/20 3:30 AM, Dan Williams wrote:
On Mon, Oct 5, 2020 at 6:01 PM Santosh Sivaraj wrote:
...
+static int ndtest_blk_do_io(struct nd_blk_region *ndbr, resource_size_t dpa,
+		void *iobuf, u64 len, int rw)
+{
+	struct ndtest_dimm *dimm = ndbr->blk_provider_data;
+
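For context, a blk_do_io() callback of this kind usually just copies between the
caller's buffer and whatever backs the test DIMM at @dpa. A hedged sketch of such
a body (the ->backing member is hypothetical, not a field from the patch under review):

/* Hedged sketch of a typical completion; ->backing is a made-up member
 * standing in for whatever buffer the test DIMM is backed by.
 */
static int ndtest_blk_do_io_sketch(struct nd_blk_region *ndbr,
				   resource_size_t dpa, void *iobuf,
				   u64 len, int rw)
{
	struct ndtest_dimm *dimm = ndbr->blk_provider_data;

	if (rw == READ)
		memcpy(iobuf, dimm->backing + dpa, len);
	else
		memcpy(dimm->backing + dpa, iobuf, len);

	return 0;
}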
Dan Williams writes:
> On Sun, Nov 8, 2020 at 4:21 AM Santosh Sivaraj wrote:
>>
>> Don't fail if the nfit module is missing; this will happen on
>> platforms that don't have ACPI support. Add attributes to the
>> PAPR dimm family that are independent of platforms, like the
>> test dimms.
>>
>>
Hi,
I actually just ran into the NULL deref issue that is fixed here.
But I have a question for the experts:
what might cause libndctl to run into a NULL deref like the one below?
Program terminated with signal 11, Segmentation fault.
#0 ndctl_pfn_get_bus (pfn=pfn@entry=0x0) at libndctl.c:5540
5540
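Not an answer to what produced the NULL here, but for illustration: libndctl
accessors such as ndctl_namespace_get_pfn() can legitimately return NULL when no
pfn instance exists, so callers need a guard before handing the object on:

/* Illustrative caller-side guard only; which accessor actually produced
 * the NULL pfn in the crash above is exactly the open question.
 */
#include <ndctl/libndctl.h>

static struct ndctl_bus *pfn_to_bus_checked(struct ndctl_namespace *ndns)
{
	struct ndctl_pfn *pfn = ndctl_namespace_get_pfn(ndns);

	return pfn ? ndctl_pfn_get_bus(pfn) : NULL;
}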
On 12/8/20 5:28 PM, Joao Martins wrote:
> Introduce a new flag, MEMHP_REUSE_VMEMMAP, which signals that
> struct pages are onlined with a given alignment, and should reuse the
> tail pages' vmemmap areas. In that circumstance we reuse the PFN backing
> only the tail pages' subsections, while
Introduce a new flag, MEMHP_REUSE_VMEMMAP, which signals that
struct pages are onlined with a given alignment, and should reuse the
tail pages' vmemmap areas. In that circumstance we reuse the PFN backing
only the tail pages' subsections, while letting the head page PFN remain
different. This
(above) on parity
between device-dax and hugetlbfs.
Some of the patches are a little fresh/WIP (especially patches 3 and 9) and we are
still running tests. Hence the RFC, asking for comments and general direction
of the work before continuing.
Patches apply on top of linux-next tag next-2020
Add a new flag for struct dev_pagemap which designates that a pagemap
is described as a set of compound pages or, in other words, that the way
pages are grouped together in the page tables is reflected in how we
describe struct pages. This means that rather than initializing
individual struct pages,
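A minimal sketch of what the driver-side opt-in described above could look like;
PGMAP_COMPOUND is the flag this series introduces, while carrying the geometry in
an align field of struct dev_pagemap is an assumption of the sketch:

/* Hedged sketch: a device-dax/pmem driver opting into compound pages.
 * The align member is assumed here, not quoted from the series.
 */
static void pgmap_setup_compound_sketch(struct dev_pagemap *pgmap)
{
	pgmap->flags |= PGMAP_COMPOUND;	/* describe struct pages as compound */
	pgmap->align = PMD_SIZE;	/* group struct pages at 2M granularity */
}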
Similar to follow_hugetlb_page(), add a follow_devmap_page() which, rather
than calling follow_page() per 4K page in a PMD/PUD, does so for the
entire PMD: we lock the pmd/pud, get all the pages, and unlock.
While doing so, we only change the refcount once when PGMAP_COMPOUND is
passed in.
This lets
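A rough sketch of the idea above (names are illustrative, not the patch): with the
pmd lock held, record every small page of the huge mapping and take all references
with a single update on the head page.

/* Illustrative only: gather a PMD's worth of pages and take the
 * references in one atomic update instead of once per 4K page.
 */
static long devmap_pmd_get_pages_sketch(pmd_t *pmdp, struct page **pages,
					long nr)
{
	struct page *head = pmd_page(*pmdp);
	long i;

	for (i = 0; i < nr; i++)
		pages[i] = head + i;

	page_ref_add(head, nr);	/* one refcount update for the whole PMD */
	return nr;
}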
Take advantage of the newly added unpin_user_pages() batched
refcount update by calculating a page array from an SGL
(same size as the one used in ib_mem_get()) and calling
unpin_user_pages() with that.
unpin_user_pages() will check on consecutive pages that belong
to the same compound page set and
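A simplified sketch of the release path being described; total_pages stands in for
the count computed at pin time, and a real implementation would need a fallback when
the temporary array cannot be allocated:

/* Simplified sketch: rebuild a page array from the umem SGL and drop
 * every pin with a single unpin_user_pages_dirty_lock() call.
 */
static void umem_unpin_sgl_sketch(struct sg_table *sgt,
				  unsigned long total_pages, bool dirty)
{
	struct scatterlist *sg;
	struct page **pages;
	unsigned long npages = 0;
	int i;

	pages = kvmalloc_array(total_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return;	/* real code would fall back to per-page unpins */

	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		unsigned int j, n = sg->length >> PAGE_SHIFT;

		for (j = 0; j < n; j++)
			pages[npages++] = sg_page(sg) + j;
	}

	unpin_user_pages_dirty_lock(pages, npages, dirty);
	kvfree(pages);
}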
Rather than decrementing the ref count one by one, we
walk the page array and check which pages belong to the same
compound_head. Later on we decrement the calculated number
of references in a single write to the head page.
Signed-off-by: Joao Martins
---
mm/gup.c | 41
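The batching can be pictured roughly like this (a sketch of the approach, not the
patch; real code also has to cope with the refcount reaching zero):

/* Sketch: find runs of pages sharing a compound_head() and drop all of
 * their references with one write to the head page.
 */
static void put_page_runs_sketch(struct page **pages, unsigned long npages)
{
	unsigned long i = 0;

	while (i < npages) {
		struct page *head = compound_head(pages[i]);
		unsigned long refs = 1;

		while (i + refs < npages &&
		       compound_head(pages[i + refs]) == head)
			refs++;

		page_ref_sub(head, refs);	/* one atomic op per run */
		i += refs;
	}
}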
Much like hugetlbfs or THPs, we treat device pagemaps with
compound pages the same way GUP handles other compound pages.
Rather than incrementing the refcount for every 4K page, we record
all subpages and increment by the @refs amount *once*.
Performance measured by gup_benchmark improves considerably
dax devices are created with a fixed @align (huge page size), which
is also enforced at mmap() of the device. Faults consequently
happen at the @align specified at creation time as well, and that
does not change throughout the dax device's lifetime.
An MCE poisons a whole dax huge page, as
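The invariant can be expressed as a simple check at mmap() time; this is only a
sketch of the rule, device-dax enforces it in its own vma checks:

/* Sketch of the @align invariant enforced at mmap() time: both ends of
 * the mapping must sit on the device's creation-time alignment.
 */
static bool dax_vma_aligned_sketch(struct vm_area_struct *vma,
				   unsigned long align)
{
	return IS_ALIGNED(vma->vm_start, align) &&
	       IS_ALIGNED(vma->vm_end, align);
}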
When PGMAP_COMPOUND is set, all pages are onlined at a given huge page
alignment and described with compound pages, as opposed to one
struct page per 4K page.
To minimize struct page overhead, and given the use of compound pages, we
take advantage of the fact that most tail pages look the same: we online the
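Conceptually, each @align-sized chunk gets its struct pages tied together much as
prep_compound_page() does for the buddy allocator, rather than every 4K struct page
being initialized on its own. A sketch of that concept (not the series' code):

/* Conceptual sketch: build one head page plus identical-looking tails
 * out of an @align-sized chunk of the device's memmap.
 */
static void memmap_init_compound_sketch(struct page *head, unsigned int order)
{
	unsigned long i, nr = 1UL << order;

	__SetPageHead(head);
	for (i = 1; i < nr; i++)
		set_compound_head(head + i, head);	/* tails all point at head */
	set_compound_order(head, order);
}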
Replace the vmem_altmap argument with a vmem_context argument. That lets us
express how the vmemmap is going to be initialized, e.g. by passing
flags and a page size for reusing pages when initializing the
vmemmap.
Signed-off-by: Joao Martins
---
include/linux/memory_hotplug.h | 6 +-
include/linux/mm.h
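Going only by the description above, the context could take a shape along these
lines; the altmap member is the existing structure, the rest are assumptions of the
sketch:

/* Illustrative layout only; field names beyond altmap are guesses based
 * on the changelog, not the series' actual definition.
 */
struct vmem_context_sketch {
	struct vmem_altmap *altmap;	/* what these paths used to take */
	unsigned long flags;		/* e.g. "reuse tail pages" behaviour */
	unsigned long align;		/* page size to group the vmemmap by */
};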
Introduce a new flag, MEMHP_REUSE_VMEMMAP, which signals that
struct pages are onlined with a given alignment, and should reuse the
tail pages' vmemmap areas. In that circumstance we reuse the PFN backing
only the tail pages' subsections, while letting the head page PFN remain
different. This
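The reuse can be pictured as remapping the tail portion of the vmemmap onto one
already-populated page, so that only the head keeps a distinct PFN. A sketch under
that assumption (the series does this at subsection granularity):

/* Sketch: point every tail vmemmap PTE at the same backing page so the
 * tail struct pages share a PFN while the head keeps its own.
 */
static void vmemmap_remap_tails_sketch(unsigned long addr, pte_t *ptep,
				       unsigned long nr, struct page *reuse)
{
	unsigned long i;

	for (i = 0; i < nr; i++, ptep++, addr += PAGE_SIZE)
		set_pte_at(&init_mm, addr, ptep, mk_pte(reuse, PAGE_KERNEL));
}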
On Tue, Dec 08, 2020 at 04:55:54PM +0100, Thomas Gleixner wrote:
> Ira,
>
> On Mon, Dec 07 2020 at 14:14, Ira Weiny wrote:
> > Is there any chance of this landing before the kmap stuff gets sorted out?
>
> I have marked this as needs an update because the change log of 5/10
> sucks.
Ira,
On Mon, Dec 07 2020 at 14:14, Ira Weiny wrote:
> Is there any chance of this landing before the kmap stuff gets sorted out?
I have marked this as needs an update because the change log of 5/10
sucks. https://lore.kernel.org/r/87lff1xcmv@nanos.tec.linutronix.de
> It would be nice to
On Mon, Dec 07, 2020 at 04:54:21PM -0800, Dan Williams wrote:
> [ add perf maintainers ]
>
> On Sun, Nov 8, 2020 at 1:16 PM Vaibhav Jain wrote:
> >
> > Implement support for exposing generic nvdimm statistics via a newly
> > introduced dimm command, ND_CMD_GET_STAT, that can be handled by nvdimm
> >