On 03/16/2018 07:36 PM, John Hubbard wrote:
> On 03/16/2018 12:14 PM, jgli...@redhat.com wrote:
>> From: Ralph Campbell
>>
>
>
>
>> +static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
>> +{
>> +struct hmm *hmm = mm->hmm;
>
change
> or any other operations that allow you to get the memory value through
> them.
>
> Signed-off-by: Jérôme Glisse
> Cc: Evgeny Baskakov
> Cc: Ralph Campbell
> Cc: Mark Hairgrove
> Cc: John Hubbard
> ---
> include/linux/hmm.h |
On 03/16/2018 12:14 PM, jgli...@redhat.com wrote:
> From: Jérôme Glisse
>
Hi Jerome,
I failed to find any problems in this patch, so:
Reviewed-by: John Hubbard
There are a couple of recommended documentation typo fixes listed
below, which are very minor, but as long as I'm here
mirror_register(), to handle that. Especially considering that right now,
hmm_mirror_register() will return success in this case--so there is no
indication that anything is wrong.
Maybe hmm_mirror_register() could return an error (and not add to the
mirror list) in such a situation. How's that sound?
thanks,
--
John Hubbard
NVIDIA
@vger.kernel.org
> Cc: Evgeny Baskakov
> Cc: Ralph Campbell
> Cc: Mark Hairgrove
> Cc: John Hubbard
> ---
> mm/hmm.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 6088fa6ed137..64d9e7dae712 100644
> --- a/mm/h
eaders.
>
> That doesn't seem to warrant a -stable backport? The developer of such
> a driver will simply fix the headers?
Right. For this patch, I would strongly request a -stable backport. It's
really going to cause problems if anyone tries to use -stable with HMM,
without
t there that would
be exposed to this, when it only requires a small patch to avoid it.
On the other hand, it's also reasonable to claim that this is part of the
evolving HMM feature, and as such, this new feature does not belong in
stable. I'm not sure which argument carries more weight here.
thanks,
--
John Hubbard
NVIDIA
>
> Cheers,
> Jérôme
>
new routines (hmm_vma_handle_pte, and others)
That way, reviewers can see more easily that things are correct.
> Signed-off-by: Jérôme Glisse
> Cc: Evgeny Baskakov
> Cc: Ralph Campbell
> Cc: Mark Hairgrove
> Cc: John Hubbard
> ---
> include/linux/hmm
hat hard to hit it: just a good directed stress
test involving multiple threads that are doing early process termination
while also doing lots of migrations and page faults, should suffice.
It is probably best to add this patch to stable, for that reason.
thanks,
--
John Hubbard
NVIDIA
On 12/18/2017 11:15 AM, Michael Kerrisk (man-pages) wrote:
> On 12/12/2017 01:23 AM, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> -- Expand the documentation to discuss the hazards in
>> enough detail to allow avoiding them.
>>
>>
On 12/13/2017 06:52 PM, Jann Horn wrote:
> On Wed, Dec 13, 2017 at 10:31 AM, Michal Hocko wrote:
>> From: John Hubbard
[...]
>> +.IP
>> +Furthermore, this option is extremely hazardous (when used on its own), because
>> +it forcibly removes pre-existi
On 12/13/2017 06:52 PM, Jann Horn wrote:
> On Wed, Dec 13, 2017 at 10:31 AM, Michal Hocko wrote:
>> From: John Hubbard
>>
>> -- Expand the documentation to discuss the hazards in
>> enough detail to allow avoiding them.
>>
>> -- M
From: John Hubbard
-- Expand the documentation to discuss the hazards in
enough detail to allow avoiding them.
-- Mention the upcoming MAP_FIXED_SAFE flag.
-- Enhance the alignment requirement slightly.
CC: Michael Ellerman
CC: Jann Horn
CC: Matthew Wilcox
CC: Michal
On 12/10/2017 02:31 AM, Michal Hocko wrote:
> On Tue 05-12-17 19:14:34, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> Previously, MAP_FIXED was "discouraged", due to portability
>> issues with the fixed address. In fact, there are other, mo
Don't interpret addr as a hint: place the mapping at exactly that
address. addr must be suitably aligned: for most architectures a
multiple of page size is sufficient; however, some architectures
may impose additional restrictions.
...which is basical
On 12/06/2017 04:19 PM, Kees Cook wrote:
> On Wed, Dec 6, 2017 at 1:08 AM, Michal Hocko wrote:
>> On Wed 06-12-17 08:33:37, Rasmus Villemoes wrote:
>>> On 2017-12-06 05:50, Michael Ellerman wrote:
Michal Hocko writes:
> On Wed 29-11-17 14:25:36, Kees Cook wrote:
> It is safe in
better.
Or maybe you're thinking that since the SHMLBA cannot be put in the man
pages, we could instead provide MapAlignment as sort of a different
way to document the requirement?
--
thanks,
John Hubbard
NVIDIA
On 12/05/2017 11:35 PM, Florian Weimer wrote:
> On 12/06/2017 08:33 AM, John Hubbard wrote:
>> In that case, maybe:
>>
>> MAP_EXACT
>>
>> ? ...because that's the characteristic behavior.
>
> Is that true? mmap still silently rounding up the length
_MAP_NOT_A_HINT set.
>
> I'm not set on MAP_REQUIRED. I came up with some awful names
> (MAP_TODDLER, MAP_TANTRUM, MAP_ULTIMATUM, MAP_BOSS, MAP_PROGRAM_MANAGER,
> etc). But I think we should drop FIXED from the middle of the name.
>
In that case, maybe:
MAP_EXACT
? ...because that's the characteristic behavior. It doesn't clobber, but
you don't need to say that in the name, now that we're not including
_FIXED_ in the middle.
thanks,
John Hubbard
NVIDIA
From: John Hubbard
Previously, MAP_FIXED was "discouraged", due to portability
issues with the fixed address. In fact, there are other, more
serious issues. Also, alignment requirements were a bit vague.
So:
-- Expand the documentation to discuss the hazards in
enough detai
On 12/04/2017 11:08 PM, Michal Hocko wrote:
> On Mon 04-12-17 18:52:27, John Hubbard wrote:
>> On 12/04/2017 03:31 AM, Mike Rapoport wrote:
>>> On Sun, Dec 03, 2017 at 06:14:11PM -0800, john.hubb...@gmail.com wrote:
>>>> From: John Hubbard
>>>>
>> [
On 12/04/2017 11:05 PM, Michal Hocko wrote:
> On Mon 04-12-17 18:14:18, John Hubbard wrote:
>> On 12/04/2017 02:55 AM, Cyril Hrubis wrote:
>>> Hi!
>>> I know that we are not touching the rest of the existing description for
>>> MAP_FIXED however the second s
From: John Hubbard
Previously, MAP_FIXED was "discouraged", due to portability
issues with the fixed address. In fact, there are other, more
serious issues. Also, in some limited cases, this option can
be used safely.
Expand the documentation to discuss both the hazards, and how
On 12/04/2017 03:31 AM, Mike Rapoport wrote:
> On Sun, Dec 03, 2017 at 06:14:11PM -0800, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
[...]
>> +.IP
>> +Given the above limitations, one of the very few ways to use this option
>> +safely is: mmap() a reg
addr must be a multiple of SHMLBA (), which in turn is either
the system page size (on many architectures) or a multiple of the system
page size (on some architectures)."
What do you think?
thanks,
John Hubbard
NVIDIA
> Which should at least hint the reader that this is architecture specific.
>
From: John Hubbard
Previously, MAP_FIXED was "discouraged", due to portability
issues with the fixed address. In fact, there are other, more
serious issues. Also, in some limited cases, this option can
be used safely.
Expand the documentation to discuss both the hazards, and how
ling thread).
Newer kernels (Linux 4.16 and later) have a MAP_FIXED_SAFE option that
avoids the corruption problem; if available, MAP_FIXED_SAFE should be
preferred over MAP_FIXED.
thanks,
John Hubbard
NVIDIA
change in response to
> virtually any library call. This is because almost any library call may be
> implemented by using dlopen(3) to load another shared library, which will be
> mapped into the process's address space. The PAM libraries are an excellent
> example, as well as more obvious examples like brk(2), malloc(3) and even
> pthread_create(3)."
>
> What do you think?
>
I'm working on some updated wording to capture these points. I'm even slower
at writing than I am at coding, so there will be a somewhat-brief pause here...
:)
thanks,
John Hubbard
NVIDIA
From: John Hubbard
MAP_FIXED has been widely used for a very long time, yet the man
page still claims that "the use of this option is discouraged".
The documentation assumes that "less portable" == "must be discouraged".
Instead of discouraging something tha
AP_FIXED_SAFE flag
>
> 4.16+ kernels offer a new MAP_FIXED_SAFE flag which allows the caller to
> atomically probe for a given address range.
>
> [wording heavily updated by John Hubbard ]
> Signed-off-by: Michal Hocko
> ---
> man2/mmap.2 | 22 ++
> 1 fil
hat different kernels and C libraries may set up quite
+different mapping ranges.
...because that advice is just wrong (it presumes that "less portable" ==
"must be discouraged").
Should I send out a separate patch for that, or is it better to glom it
together with this one?
thanks,
John Hubbard
NVIDIA
On 11/28/2017 12:12 AM, Michal Hocko wrote:
> On Mon 27-11-17 15:26:27, John Hubbard wrote:
> [...]
>> Let me add a belated report, then: we ran into this limit while implementing
>> an early version of Unified Memory[1], back in 2013. The implementation
>> at the time d
ound that. (And later, the design was *completely* changed to use a separate
tracking system altogether).
The existing limit seems rather too low, at least from my perspective. Maybe
it would be better if expressed as a function of RAM size?
[1] https://devblogs.nvidia.com/parallelforall/unified-memory-in-cuda-6/
This is a way to automatically (via page faulting) migrate memory
between CPUs and devices (GPUs, here). This is before HMM, of course.
thanks,
John Hubbard
On 11/20/2017 01:05 AM, Michal Hocko wrote:
> On Fri 17-11-17 00:45:49, John Hubbard wrote:
>> On 11/16/2017 04:14 AM, Michal Hocko wrote:
>>> [Ups, managed to screw the subject - fix it]
>>>
>>> On Thu 16-11-17 11:18:58, Michal Hocko wrote:
>>>> Hi,
de
that surrounds HMM (speaking loosely there--it's really any user space
code that manages a unified memory address space, across devices)
often ends up using MAP_FIXED, but MAP_FIXED crams several features
into one flag: an exact address, an "atomic" switch to the new mapping,
and
n at this point is a nice way to solve the problem. :)
For the naming and implementation, I see a couple of things that might improve
it slightly:
a) Change MAP_FIXED_SAFE to MAP_NO_CLOBBER (as per Kees' idea), but keep the
new flag independent, by omitting the above two lines. Instead of forcing
MAP_FIXED
From: John Hubbard
Hi everyone,
I really don't know for sure which fix is going to be preferred--the
following patch, or just an obvious one-line fix that changes
DECLARE_ACPI_FWNODE_OPS() so that it invokes EXPORT_SYMBOL, instead of
EXPORT_SYMBOL_GPL. I explained the reasoning in PATC
From: John Hubbard
Due to commit db3e50f3234b ("device property: Get rid of struct
fwnode_handle type field"), ACPI_HANDLE() inadvertently became
a GPL-only call. The call path that led to that was:
ACPI_HANDLE()
ACPI_COMPANION()
to_acpi_device_node()
is_acpi_d
On 07/06/2017 02:52 PM, Ross Zwisler wrote:
[...]
> diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
> index b1aacfc..31e3f20 100644
> --- a/drivers/acpi/Makefile
> +++ b/drivers/acpi/Makefile
> @@ -72,6 +72,7 @@ obj-$(CONFIG_ACPI_PROCESSOR) += processor.o
> obj-$(CONFIG_ACPI)
On 07/06/2017 02:52 PM, Ross Zwisler wrote:
[...]
>
> The naming collision between Jerome's "Heterogeneous Memory Management
> (HMM)" and this "Heterogeneous Memory (HMEM)" series is unfortunate, but I
> was trying to stick with the word "Heterogeneous" because of the naming of
> the ACPI 6.2 Hete
On 06/29/2017 07:25 PM, Mikulas Patocka wrote:
> The __vmalloc function has a parameter gfp_mask with the allocation flags,
> however it doesn't fully respect the GFP_NOIO and GFP_NOFS flags. The
> pages are allocated with the specified gfp flags, but the pagetables are
> always allocated with GFP_
ooks like your patch
was not rejected, but I can't tell if (!rejected == accepted) there. :)
We'll continue testing, but I expect at this point that anything we find
can be patched up after HMM finally gets merged.
thanks,
John Hubbard
NVIDIA
>
> Everything else is the same. Bel
document is already good enough. This is based on not seeing any "I am
having trouble understanding HMM" complaints.
If that's not the case, please speak up. Otherwise, I'm assuming that all is well in the
HMM Documentation department.
thanks,
--
John Hubbard
NVIDIA
and* an
addr argument.
3. ...and it doesn't add anything that the driver can't trivially do itself.
So, let's just remove it. Less is more this time. :)
thanks,
--
John Hubbard
NVIDIA
On 06/14/2017 07:09 PM, Jerome Glisse wrote:
On Wed, Jun 14, 2017 at 04:10:32PM -0700, John Hubbard wrote:
On 06/14/2017 01:11 PM, Jérôme Glisse wrote:
[...]
Hi Jerome,
There are still some problems with using this configuration. First and
foremost, it is still possible (and likely, given
On 06/14/2017 01:11 PM, Jérôme Glisse wrote:
This just simplifies kconfig and allows HMM and DEVICE_PUBLIC to be
selected for ppc64 once ZONE_DEVICE is allowed on ppc64 (different
patchset).
Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: Balbir Singh
Cc: Aneesh Kumar
Cc: Paul E
dent Kconfig choice. It's complicating the Kconfig choices,
and adding problems. However, if DEVICE_PRIVATE must be kept, then something like this also fixes my
HMM tests:
From: John Hubbard
Date: Thu, 8 Jun 2017 20:13:13 -0700
Subject: [PATCH] hmm: select CONFIG_DEVICE_PRIVATE with HM
On 05/17/2017 01:09 AM, Michal Hocko wrote:
From: Michal Hocko
While converting drm_[cm]alloc* helpers to kvmalloc* variants Chris
Wilson has wondered why we want to try kmalloc before vmalloc fallback
even for larger allocations requests. Let's clarify that one larger
physically contiguous blo
ong start, unsigned long end,
bool direct)
void __ref vmemmap_free(unsigned long start, unsigned long end)
{
remove_pagetable(start, end, false);
+ sync_global_pgds(start, end - 1);
This does fix the HMM crash that I was seeing in hmm-next.
thanks,
John Hubbard
NVIDIA
}
#ifdef CON
On Tue, 25 Apr 2017, Christoph Hellwig wrote:
> Hi John,
>
> please fix your quoting of the previous mails, thanks!
Shoot, sorry about any quoting issues. I'm sufficiently new to conversing
on these lists that I'm not even sure which mistake I made.
>
>
> What ACPI defines does not matter at
update to fix it, but there will be a window of time with some breakage there.)
[1] http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf , section
6.5.6, page 397
thanks,
--
John Hubbard
NVIDIA
Thanks,
- Haiyang
in order to work. And it's also true that we might
want to take a different approach than HMM, to support that kind of
device: for example, making it a NUMA node has been debated here, recently.
But even so, I think the potential for the "unaddressable" memory
actually becoming "addressable" someday, is a good argument for using a
different name.
thanks,
--
John Hubbard
NVIDIA
On 03/24/2017 09:52 AM, Tim Chen wrote:
On Fri, 2017-03-24 at 06:56 -0700, Dave Hansen wrote:
On 03/24/2017 12:33 AM, John Hubbard wrote:
There might be some additional information you are using to come up with
that conclusion, that is not obvious to me. Any thoughts there? These
calls use
[...]
Hi Ying,
I'm a little surprised to see vmalloc calls replaced with
kmalloc-then-vmalloc calls, because that actually makes fragmentation
worse (contrary to the above claim). That's because you will consume
contiguous memory (even though you don't need it to be contiguous),
whereas before,
On 03/23/2017 09:52 PM, Huang, Ying wrote:
John Hubbard writes:
On 03/23/2017 07:41 PM, Huang, Ying wrote:
David Rientjes writes:
On Mon, 20 Mar 2017, Huang, Ying wrote:
From: Huang Ying
Now vzalloc() is used in swap code to allocate various data
structures, such as swap cache, swap
On 03/23/2017 07:41 PM, Huang, Ying wrote:
David Rientjes writes:
On Mon, 20 Mar 2017, Huang, Ying wrote:
From: Huang Ying
Now vzalloc() is used in swap code to allocate various data
structures, such as swap cache, swap slots cache, cluster info, etc.
Because the size may be too large on s
rrent draft, so brace yourself before saying yes... :)
thanks
John Hubbard
NVIDIA
Signed-off-by: Jérôme Glisse
---
Documentation/vm/hmm.txt | 362 +++
1 file changed, 362 insertions(+)
create mode 100644 Documentation/vm/hmm.txt
diff --git a/D
the
MIGRATE_PFN_* defines? The 1ULL is what determines the type of the resulting number,
so it's one more tiny piece of type correctness that is good to have.
The rest of this fix looks good, and the above is not technically necessary (the
code that uses it will force its own type anyway),
On 03/16/2017 05:45 PM, Balbir Singh wrote:
On Fri, Mar 17, 2017 at 11:22 AM, John Hubbard wrote:
On 03/16/2017 04:05 PM, Andrew Morton wrote:
On Thu, 16 Mar 2017 12:05:26 -0400 Jérôme Glisse
wrote:
+static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
+{
+ if (!(mpfn
32-bit pfn.
So, given the current HMM design, I think we are going to have to provide a 32-bit version of these
routines (migrate_pfn_to_page, and related) that is a no-op, right?
thanks
John Hubbard
NVIDIA
On 03/14/2017 06:33 AM, Anshuman Khandual wrote:
On 03/08/2017 04:37 PM, John Hubbard wrote:
[...]
There was a discussion, on an earlier version of this patchset, in which
someone pointed out that a slight over-allocation on a device that has
much more memory than the CPU has, could use up
On 03/08/2017 10:37 PM, Minchan Kim wrote:
>[...]
I think it's a matter of taste.
if (try_to_unmap(xxx))
something
else
something
It's perfectly understandable to me. IOW, if try_to_unmap returns true,
it means it did unmap successfully. Otherw
need to set an error in the
mapping when this fails, so I just added this to make it clear for any
new callers in the future.
Yes, somehow, even in this tiny patchset, I missed those two new comment lines.
arghh. :)
Well, everything looks great, then.
thanks,
John Hubbard
NVIDIA
On 03/08/2017 02:12 AM, Greg Kroah-Hartman wrote:
On Wed, Mar 08, 2017 at 01:59:33AM -0800, John Hubbard wrote:
On 03/08/2017 01:48 AM, Greg Kroah-Hartman wrote:
On Wed, Mar 08, 2017 at 01:25:48AM -0800, john.hubb...@gmail.com wrote:
From: John Hubbard
Hi,
Say, I'm 99% sure that thi
On 03/08/2017 01:48 AM, Greg Kroah-Hartman wrote:
On Wed, Mar 08, 2017 at 01:25:48AM -0800, john.hubb...@gmail.com wrote:
From: John Hubbard
Hi,
Say, I'm 99% sure that this was just an oversight, so
I'm sticking my neck out here and floating a patch to
Put Things Back. I
lly, concisely addressed each one, somewhere, (maybe in a cover letter).
Because otherwise, it's too easy for earlier, important problems to be forgotten.
And reviewers don't want to have to repeat themselves, of course.
thanks
John Hubbard
NVIDIA
* CDM node's zones are part of
On 03/08/2017 01:50 AM, Greg Kroah-Hartman wrote:
On Wed, Mar 08, 2017 at 01:25:49AM -0800, john.hubb...@gmail.com wrote:
From: John Hubbard
Originally, kref_get and kref_put were available as
standard routines that even non-GPL device drivers
could use.
As I stated in my response to the
From: John Hubbard
Originally, kref_get and kref_put were available as
standard routines that even non-GPL device drivers
could use. However, as an unintended side effect of
the recent kref_*() upgrade[1], these calls are now
effectively GPL, because they get routed to the
new refcount_inc() and
From: John Hubbard
Hi,
Say, I'm 99% sure that this was just an oversight, so
I'm sticking my neck out here and floating a patch to
Put Things Back. I'm hoping that there is not some
firm reason to GPL-protect the basic kref_get and
kref_put routines, because when designing some
r
*/
if (page_mapped(page)) {
- switch (ret = try_to_unmap(page,
- ttu_flags | TTU_BATCH_FLUSH)) {
- case SWAP_FAIL:
Again: the SWAP_FAIL makes it crystal clear which case we're in.
I also wonder if UNMAP_FAIL or TTU_RESULT_FAIL is a better name?
thanks,
John Hubbard
NVIDIA
t got interrupted, maybe?
The code changes look perfect, though. And although I'm not a fs guy, it seems
pretty clear that with all the callers passing in 1 all this time, nobody is likely
to complain about this simplification.
thanks,
John Hubbard
NVIDIA
No existing caller uses this on normal
o again: yes, both systems are providing a sort of coherent memory. HMM provides software based
coherence, while NUMA assumes hardware-based memory coherence as a prerequisite.
I hope that helps, and doesn't just further muddy the waters?
--
John Hubbard
NVIDIA
Thanks,
-Bob
,
I'll get them to report back as well. I think John Hubbard has been
testing iterations as well. CC'ing other interested people as well
Balbir
Yes, Evgeny Baskakov and I have been testing each of the posted versions. We are using both
migration and mirroring, and have a small set of
Hi Anshuman,
I'd question the need to avoid kernel allocations in device memory.
Maybe we should simply allow these pages to *potentially* participate in
everything that N_MEMORY pages do: huge pages, kernel allocations, for
example.
No, allowing kernel allocations on CDM has two problems.
On 02/10/2017 02:06 AM, Anshuman Khandual wrote:
There are certain devices like specialized accelerator, GPU cards, network
cards, FPGA cards etc which might contain onboard memory which is coherent
along with the existing system RAM while being accessed either from the CPU
or from the device. Th
On 01/30/2017 05:57 PM, Dave Hansen wrote:
On 01/30/2017 05:36 PM, Anshuman Khandual wrote:
Let's say we had a CDM node with 100x more RAM than the rest of the
system and it was just as fast as the rest of the RAM. Would we still
want it isolated like this? Or would we want a different policy?
On 01/27/2017 02:52 PM, Jérôme Glisse wrote:
Cliff note: HMM offers 2 things (each standing on its own). First
it allows to use device memory transparently inside any process
without any modifications to process program code. Second it allows
to mirror process address space on a device.
Change s
On 01/22/2017 05:14 PM, zhong jiang wrote:
On 2017/1/22 20:58, zhongjiang wrote:
From: zhong jiang
Recently, I found that ioremap_page_range has been abused. Improper
address mapping is an issue; it will result in a crash. So, remove
the symbol. It can be replaced by ioremap_cache or
fer way to
achieve the mapping.
Therefore, stop EXPORT-ing ioremap_page_range.
---
I may get some heat for this if another out-of-tree driver needs that symbol, but if no one else
pops up and shrieks, you can add:
Reviewed-by: John Hubbard
thanks,
john h
Signed-off-by: zhong jiang
On 01/19/2017 01:56 AM, Michal Hocko wrote:
On Thu 19-01-17 01:09:35, John Hubbard wrote:
[...]
So that leaves us with maybe this for documentation?
* Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL should not be passed in.
* Passing in __GFP_REPEAT is supported, and will cause the
On 01/19/2017 12:45 AM, Michal Hocko wrote:
On Thu 19-01-17 00:37:08, John Hubbard wrote:
On 01/18/2017 12:21 AM, Michal Hocko wrote:
On Tue 17-01-17 21:59:13, John Hubbard wrote:
[...]
* Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL should not be passed in.
* Passing in
On 01/18/2017 12:21 AM, Michal Hocko wrote:
On Tue 17-01-17 21:59:13, John Hubbard wrote:
On 01/16/2017 11:51 PM, Michal Hocko wrote:
On Mon 16-01-17 13:57:43, John Hubbard wrote:
On 01/16/2017 01:48 PM, Michal Hocko wrote:
On Mon 16-01-17 13:15:08, John Hubbard wrote:
On 01/16/2017
On 01/16/2017 11:51 PM, Michal Hocko wrote:
On Mon 16-01-17 13:57:43, John Hubbard wrote:
On 01/16/2017 01:48 PM, Michal Hocko wrote:
On Mon 16-01-17 13:15:08, John Hubbard wrote:
On 01/16/2017 11:40 AM, Michal Hocko wrote:
On Mon 16-01-17 11:09:37, John Hubbard wrote:
On 01/16/2017
On 01/16/2017 01:48 PM, Michal Hocko wrote:
On Mon 16-01-17 13:15:08, John Hubbard wrote:
On 01/16/2017 11:40 AM, Michal Hocko wrote:
On Mon 16-01-17 11:09:37, John Hubbard wrote:
On 01/16/2017 12:47 AM, Michal Hocko wrote:
On Sun 15-01-17 20:34:13, John Hubbard wrote:
[...]
Is that
On 01/16/2017 11:40 AM, Michal Hocko wrote:
On Mon 16-01-17 11:09:37, John Hubbard wrote:
On 01/16/2017 12:47 AM, Michal Hocko wrote:
On Sun 15-01-17 20:34:13, John Hubbard wrote:
[...]
Is that "Reclaim modifiers" line still true, or is it a leftover from an
earlier approach? I
On 01/16/2017 12:47 AM, Michal Hocko wrote:
On Sun 15-01-17 20:34:13, John Hubbard wrote:
On 01/12/2017 07:37 AM, Michal Hocko wrote:
[...]
diff --git a/mm/util.c b/mm/util.c
index 3cb2164f4099..7e0c240b5760 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -324,6 +324,48 @@ unsigned long vm_mmap
On 01/12/2017 07:37 AM, Michal Hocko wrote:
From: Michal Hocko
Using kmalloc with the vmalloc fallback for larger allocations is a
common pattern in the kernel code. Yet we do not have any common helper
for that and so users have invented their own helpers. Some of them are
really creative wh
-GPU cases, are all working.
We do think we've found a bug in a corner case that involves invalid GPU
memory (of course, it's always possible that the bug is on our side),
which Jerome is investigating now. If you spot the bug by inspection,
you'll get some major told-you-so points. :)
;
> Changed since v3:
> - Get rid of HMM_ISDIRTY and rely on write protect instead.
> - Adapt to HMM page table changes
>
> Signed-off-by: Jérôme Glisse
> Signed-off-by: Sherry Cheung
> Signed-off-by: Subhash Gutti
> Signed-off-by: Mark Hairgrove
> Signed-off-by:
On Wed, 3 Jun 2015, Jerome Glisse wrote:
> On Tue, Jun 02, 2015 at 02:32:01AM -0700, John Hubbard wrote:
> > On Thu, 21 May 2015, j.gli...@gmail.com wrote:
> >
> > > From: Jérôme Glisse
> > >
> > > The mmu_notifier_invalidate_range_start() an
On Wed, 3 Jun 2015, Jerome Glisse wrote:
> On Mon, Jun 01, 2015 at 04:10:46PM -0700, John Hubbard wrote:
> > On Mon, 1 Jun 2015, Jerome Glisse wrote:
> > > On Fri, May 29, 2015 at 08:43:59PM -0700, John Hubbard wrote:
> > > > On Thu, 21 May 2015, j.gli...@gmail.com
On Thu, 21 May 2015, j.gli...@gmail.com wrote:
> From: Jérôme Glisse
>
> Listener of mm event might not have easy way to get the struct page
> behind and address invalidated with mmu_notifier_invalidate_page()
s/behind and address/behind an address/
> function as this happens after the cpu pag
On Thu, 21 May 2015, j.gli...@gmail.com wrote:
> From: Jérôme Glisse
>
> The mmu_notifier_invalidate_range_start() and
> mmu_notifier_invalidate_range_end()
> can be considered as forming an "atomic" section for the cpu page table update
> point of view. Between this two function the cpu page t
On Mon, 1 Jun 2015, Jerome Glisse wrote:
> On Fri, May 29, 2015 at 08:43:59PM -0700, John Hubbard wrote:
> > On Thu, 21 May 2015, j.gli...@gmail.com wrote:
> >
> > > From: Jérôme Glisse
> > >
> > > The event information will be useful for new user of
;ll take a look at the corresponding HMM_ISDIRTY, too.
> + MMU_MIGRATE,
> + MMU_MPROT,
The MMU_MPROT also looks questionable. Short answer: probably better to
read the protection, and pass either MMU_WRITE_PROTECT, MMU_READ_WRITE
(that's a new item, of course), or MMU_UNMAP.
ssing about names. Therefore, you'll
see a bunch of small and large naming recommendations coming from me, for
the various patches here.
thanks,
John Hubbard
>
>
> Why doing this ?
>
> Mirroring a process address space is mandatory with OpenCL 2.0 and
> with other GPU c
On Fri, 27 Jun 2014, Jérôme Glisse wrote:
> From: Jérôme Glisse
>
> The event information will be useful for new users of the mmu_notifier API.
> The event argument differentiates between a vma disappearing, a page
> being write protected or simply a page being unmapped. This allows new
> users to take
&nr_unqueued_dirty, &nr_congested,
> &nr_writeback, &nr_immediate,
> false);
> --
> 1.9.0
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majord...@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: em...@kvack.org
>
Other than that, looks good.
Reviewed-by: John Hubbard
thanks,
John H.
+ return 0;
> }
>
> struct page *ksm_might_need_to_copy(struct page *page,
> @@ -2305,11 +2312,20 @@ static struct attribute_group ksm_attr_group = {
> };
> #endif /* CONFIG_SYSFS */
>
> +static struct notifier_block ksm_mmput_nb = {
> + .notifier_call = ksm_exit,
> + .priority = 2,
> +};
> +
> static int __init ksm_init(void)
> {
> struct task_struct *ksm_thread;
> int err;
>
> + err = mmput_register_notifier(&ksm_mmput_nb);
> + if (err)
> + return err;
> +
In order to be perfectly consistent with this routine's existing code, you
would want to write:
if (err)
goto out;
...but it does the same thing as your code. It's just a consistency thing.
> err = ksm_slab_init();
> if (err)
> goto out;
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 61aec93..b684a21 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2775,6 +2775,9 @@ void exit_mmap(struct mm_struct *mm)
> struct vm_area_struct *vma;
> unsigned long nr_accounted = 0;
>
> + /* Important to call this first. */
> + khugepaged_exit(mm);
> +
> /* mm's last user has gone, and its about to be pulled down */
> mmu_notifier_release(mm);
>
> --
> 1.9.0
>
Above points are extremely minor, so:
Reviewed-by: John Hubbard
thanks,
John H.
& TTU_MUNLOCK))
> - mmu_notifier_invalidate_page(mm, address, event);
> + mmu_notifier_invalidate_page(mm, vma, address, event);
> out:
> return ret;
>
> @@ -1325,7 +1325,8 @@ static int try_to_unmap_cluster(unsigned long cursor,
> unsigned int *mapcount,
>
> mmun_start = address;
> mmun_end = end;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end, event);
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> + mmun_end, event);
>
> /*
>* If we can acquire the mmap_sem for read, and vma is VM_LOCKED,
> @@ -1390,7 +1391,7 @@ static int try_to_unmap_cluster(unsigned long cursor,
> unsigned int *mapcount,
> (*mapcount)--;
> }
> pte_unmap_unlock(pte - 1, ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end, event);
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start, mmun_end, event);
> if (locked_vma)
> up_read(&vma->vm_mm->mmap_sem);
> return ret;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 6e1992f..c4b7bf9 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -262,6 +262,7 @@ static inline struct kvm *mmu_notifier_to_kvm(struct
> mmu_notifier *mn)
>
> static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
>struct mm_struct *mm,
> + struct vm_area_struct *vma,
>unsigned long address,
>enum mmu_event event)
> {
> @@ -318,6 +319,7 @@ static void kvm_mmu_notifier_change_pte(struct
> mmu_notifier *mn,
>
> static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> @@ -345,6 +347,7 @@ static void
> kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>
> static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> --
> 1.9.0
>
Other than the refinements suggested above, I can't seem to find anything
wrong with this patch, so:
Reviewed-by: John Hubbard
thanks,
John H.