ow.
But either way, this improvement is nice to have, so:
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
What: /sys/kernel/mm/cma//alloc_pages_success
Date: Feb 2021
On 3/30/21 8:56 PM, John Hubbard wrote:
On 3/30/21 3:56 PM, Alistair Popple wrote:
...
+1 for renaming "munlock*" items to "mlock*", where applicable. good grief.
At least the situation was weird enough to prompt further investigation :)
Renaming to mlock* doesn
as there is only one caller of try_to_munlock.
- Alistair
No objections here. :)
thanks,
--
John Hubbard
NVIDIA
unlock*" items to "mlock*", where applicable. good grief.
Although, it seems reasonable to tack such renaming patches onto the tail end
of this series. But whatever works.
thanks,
--
John Hubbard
NVIDIA
5fa35057a062ac98c3e8138b013ce4ce351c ("s390/gmap: improve THP
splitting"),
July 29, 2020, removes the above use of FOLL_SPLIT.
And "git grep", just to be sure, shows it is not there in today's linux.git. So
I guess the
https://github.com/0day-ci/linux repo needs a better way to stay in sync?
thanks,
--
John Hubbard
NVIDIA
t grep -nw FOLL_SPLIT
Documentation/vm/transhuge.rst:57:follow_page, the FOLL_SPLIT bit can be
specified as a parameter to
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8ba434287387..3568836841f9 100644
--- a/include/
t
log.
Maybe that's why it looked like a change to me. I do think it's worth
mentioning.
thanks,
--
John Hubbard
NVIDIA
fix! I'm not saying that it should be a separate patch, but it
does seem worth loudly mentioning in the commit log, yes?
return res;
}
+ write_unlock(&resource_lock);
+ free_resource(res);
+
return ERR_PTR(-ERANGE);
}
thanks,
--
John Hubbard
NVIDIA
On 3/24/21 3:11 PM, Dmitry Osipenko wrote:
25.03.2021 01:01, John Hubbard wrote:
On 3/24/21 2:31 PM, Dmitry Osipenko wrote:
...
+#include
+
+struct cma_kobject {
+ struct cma *cma;
+ struct kobject kobj;
If you'll place the kobj as the first member of the struct, then
contain
once such case.
thanks,
--
John Hubbard
NVIDIA
10547.4134370-1-minc...@kernel.org/
Reported-by: Dmitry Osipenko
Tested-by: Dmitry Osipenko
Suggested-by: Dmitry Osipenko
Suggested-by: John Hubbard
Suggested-by: Matthew Wilcox
Signed-off-by: Minchan Kim
---
I believe it's worth having a separate patch rather than replacing the
original patch.
(I don't imagine
it could grow up in cma_sysfs in future), I don't think it would
be a problem. If we really want to make it more clear, maybe?
cma->cma_kobj->kobj
It would be consistent with other variables in cma_sysfs_init.
OK, that's at least better than it was.
thanks,
--
John Hubbard
NVIDIA
On 3/23/21 10:44 PM, Minchan Kim wrote:
On Tue, Mar 23, 2021 at 09:47:27PM -0700, John Hubbard wrote:
On 3/23/21 8:27 PM, Minchan Kim wrote:
...
+static int __init cma_sysfs_init(void)
+{
+ unsigned int i;
+
+ cma_kobj_root = kobject_create_and_add("cma", mm_kobj);
anything allocated on
previous iterations of the loop.
thanks,
--
John Hubbard
NVIDIA
return err;
Hopefully this little bit of logic could also go into the cleanup
routine.
+ }
+ }
+
+ return 0;
+}
+subsys_initcall(cma_sysfs_init);
thanks,
--
John Hubbard
NVIDIA
deference issue. Fix this by only calling
As far as I can tell, it's not possible to actually cause that null
failure with the existing kernel code paths. *Might* be worth mentioning
that here (unless I'm wrong), but either way, looks good, so:
Reviewed-by: John Hubbard
thanks,
--
anged, 17 insertions(+), 3 deletions(-)
Seems reasonable, and the diffs look good.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 18e75974d4e3..21d7c7f72f1c 100644
--- a/include/linux/vm_event_item.h
the multiple items per line is a weak idea at best, even
though it's used here already. Each item is important and needs to be
visually compared to its output item later. So one per line might
have helped avoid mismatches, and I think we should change to that to
enco
ut we can forcefully break this whenever we feel like by revoking
the page, moving it, and then reinstating the gpu pte again and let it
continue.
Oh yes, that's true.
If that's not possible then what we need here instead is an mlock() type of
thing, I think.
No need for that, then.
have failed to find any logic errors, so:
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
+{
+ struct page *next, *page;
+ unsigned int nr = 1;
+
+ if (i >= npages)
+ return;
+
+ next = *list + i;
+ page = compound_head(next);
+
cma/SENSOR/cma_alloc_pages_[attempts|fails]
/sys/kernel/mm/cma/BLUETOOTH/cma_alloc_pages_[attempts|fails]
Signed-off-by: Minchan Kim
---
Looks good.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
From v2 -
https://lore.kernel.org/linux-mm/20210208180142.2765456-1-minc...@
ish
to have a kobject that you never free represent this object also is not
normal :)
OK, thanks for taking the time to explain that, much appreciated!
thanks,
--
John Hubbard
NVIDIA
I remain convinced that the above is not "improper"; it's a reasonable
step, given the limitations of the current sysfs design. I just wanted to say
that out loud, as my proposal sinks to the bottom of the trench here. haha :)
thanks,
--
John Hubbard
NVIDIA
asily write programs that do a long series of atomic
operations.
Such a program would be a little weird, but it's hard to rule out.
- long term pin: the page cannot be moved, all migration must fail.
Also this will have an impact on COW behaviour for fork (but not sure
where those patches ar
s a
bad example.
I think we should just use a static kobject, with a cautionary comment to
would-be copy-pasters, that explains the design constraints above. That way,
we still get a nice, less-code implementation, a safe design, and it only
really costs us a single carefully written comment.
thanks,
--
John Hubbard
NVIDIA
On 2/8/21 10:27 PM, John Hubbard wrote:
On 2/8/21 10:13 PM, Greg KH wrote:
On Mon, Feb 08, 2021 at 05:57:17PM -0800, John Hubbard wrote:
On 2/8/21 3:36 PM, Minchan Kim wrote:
...
char name[CMA_MAX_NAME];
+#ifdef CONFIG_CMA_SYSFS
+ struct cma_stat *stat;
This should not be a
On 2/8/21 10:13 PM, Greg KH wrote:
On Mon, Feb 08, 2021 at 05:57:17PM -0800, John Hubbard wrote:
On 2/8/21 3:36 PM, Minchan Kim wrote:
...
char name[CMA_MAX_NAME];
+#ifdef CONFIG_CMA_SYSFS
+ struct cma_stat *stat;
This should not be a pointer. By making it a pointer, you
On 2/8/21 9:18 PM, John Hubbard wrote:
On 2/8/21 8:19 PM, Minchan Kim wrote:
On Mon, Feb 08, 2021 at 05:57:17PM -0800, John Hubbard wrote:
On 2/8/21 3:36 PM, Minchan Kim wrote:
...
char name[CMA_MAX_NAME];
+#ifdef CONFIG_CMA_SYSFS
+ struct cma_stat *stat;
This should not be a
On 2/8/21 8:19 PM, Minchan Kim wrote:
On Mon, Feb 08, 2021 at 05:57:17PM -0800, John Hubbard wrote:
On 2/8/21 3:36 PM, Minchan Kim wrote:
...
char name[CMA_MAX_NAME];
+#ifdef CONFIG_CMA_SYSFS
+ struct cma_stat *stat;
This should not be a pointer. By making it a pointer, you
anted kobj methods
to be used *if* you are dealing with kobjects. That's a narrower point.
I can't imagine that he would have insisted on having additional
allocations just so that kobj freeing methods could be used. :)
thanks,
--
John Hubbard
NVIDIA
cma->stat = &cma_stats[i];
+ spin_lock_init(&cma->stat->lock);
+ if (kobject_init_and_add(&cma->stat->kobj, &cma_ktype,
+ cma_kobj, "%s", cma->name)) {
+ kobject_put(&cma->stat->kobj);
+ goto out;
+ }
+ } while (++i < cma_area_count)
This clearly is not going to compile! Don't forget to build and test the
patches.
+
+ return 0;
+out:
+ while (--i >= 0) {
+ cma = &cma_areas[i];
+ kobject_put(&cma->stat->kobj);
+ }
+
+ kfree(cma_stats);
+ kobject_put(cma_kobj);
+
+ return -ENOMEM;
+}
+subsys_initcall(cma_sysfs_init);
thanks,
--
John Hubbard
NVIDIA
l config required)
Any feedback on point (6) specifically ?
Well, /proc these days is for process-specific items. And CMA areas are
system-wide. So that's an argument against it. However...if there is any
process-specific CMA allocation info to report, then /proc is just the
right place for i
On 2/5/21 1:28 PM, Minchan Kim wrote:
On Fri, Feb 05, 2021 at 12:25:52PM -0800, John Hubbard wrote:
On 2/5/21 8:15 AM, Minchan Kim wrote:
...
OK. But...what *is* your goal, and why is this useless (that's what
orthogonal really means here) for your goal?
As I mentioned, the goal is to mo
re if the
problem is caused by pinning/fragmentation or by over-utilization.
I agree. That seems about right, now that we've established that
cma areas are a must-have.
thanks,
--
John Hubbard
NVIDIA
ee it would be also useful but I'd like to enable it under
CONFIG_CMA_SYSFS_ALLOC_RANGE as separate patchset.
I will stop harassing you very soon, just want to bottom out on
understanding the real goals first. :)
thanks,
--
John Hubbard
NVIDIA
_vmtests.sh
So I guess this is OK, given that I see "run_vmtests" in both -next
and main.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
easier to verify that it is correct.
However, given that the patch is correct and works as-is, the above is really
just
an optional idea, so please feel free to add:
Reviewed-by: John Hubbard
Thanks!
Hopefully I can retain that if the snippet above is preferred?
Joao
Yes. Still l
On 2/4/21 10:24 PM, Minchan Kim wrote:
On Thu, Feb 04, 2021 at 09:49:54PM -0800, John Hubbard wrote:
On 2/4/21 9:17 PM, Minchan Kim wrote:
...
# cat vmstat | grep -i cma
nr_free_cma 261718
# cat meminfo | grep -i cma
CmaTotal:1048576 kB
CmaFree: 1046872 kB
OK, given that CMA
18
# cat meminfo | grep -i cma
CmaTotal:1048576 kB
CmaFree: 1046872 kB
OK, given that CMA is already in those two locations, maybe we should put
this information in one or both of those, yes?
thanks,
--
John Hubbard
NVIDIA
put_compound_head(head, ntails, FOLL_PIN);
+ }
+}
+EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
+
/**
* unpin_user_pages() - release an array of gup-pinned pages.
* @pages: array of pages to be marked dirty and released.
Didn't spot any actual problems with how this works.
thanks,
--
John Hubbard
NVIDIA
to take care of *ntails.
However, given that the patch is correct and works as-is, the above is really
just
an optional idea, so please feel free to add:
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
+ page = compound_head(*list);
+
+ for (nr = 1; nr < npages;
On 2/4/21 7:30 PM, Darrick J. Wong wrote:
On Thu, Feb 04, 2021 at 07:18:14PM -0800, John Hubbard wrote:
Delete the unused "log" variable in xfs_log_cover().
Fixes: 303591a0a9473 ("xfs: cover the log during log quiesce")
Cc: Brian Foster
Cc: Christoph Hellwig
Cc: Darrick
Delete the unused "log" variable in xfs_log_cover().
Fixes: 303591a0a9473 ("xfs: cover the log during log quiesce")
Cc: Brian Foster
Cc: Christoph Hellwig
Cc: Darrick J. Wong
Cc: Allison Henderson
Signed-off-by: John Hubbard
---
Hi,
I just ran into this on today's l
On 2/4/21 5:44 PM, Minchan Kim wrote:
On Thu, Feb 04, 2021 at 04:24:20PM -0800, John Hubbard wrote:
On 2/4/21 4:12 PM, Minchan Kim wrote:
...
Then, how to know how often CMA API failed?
Why would you even need to know that, *in addition* to knowing specific
page allocation numbers that
On 2/4/21 4:25 PM, John Hubbard wrote:
On 2/4/21 3:45 PM, Suren Baghdasaryan wrote:
...
2) The overall CMA allocation attempts/failures (first two items above) seem
an odd pair of things to track. Maybe that is what was easy to track, but I'd
vote for just omitting them.
Then, how to kno
utting in a couple of items into /proc/vmstat, as I
just mentioned in my other response, and calling it good.
thanks,
--
John Hubbard
NVIDIA
that we're talking about. It seems to fit right in there, yes?
thanks,
--
John Hubbard
NVIDIA
On 2/4/21 11:53 AM, Jason Gunthorpe wrote:
On Wed, Feb 03, 2021 at 03:00:01PM -0800, John Hubbard wrote:
+static inline void compound_next(unsigned long i, unsigned long npages,
+struct page **list, struct page **head,
+unsigned
On 2/4/21 12:07 PM, Minchan Kim wrote:
On Thu, Feb 04, 2021 at 12:50:58AM -0800, John Hubbard wrote:
On 2/3/21 7:50 AM, Minchan Kim wrote:
Since CMA is getting used more widely, it's more important to
keep monitoring CMA statistics for system health since it's
directly relat
se(struct cma *cma, const struct page *pages, unsigned
int count);
extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
+
A single additional blank line seems to be the only change to this file. :)
thanks,
--
John Hubbard
NVIDIA
es.
Actually, the for_each_sg() code and its behavior with sg->length and
sg_page(sg)
confuses me because I'm new to it, and I don't quite understand how this works.
Especially with SG_CHAIN. I'm assuming that you've monitored /proc/vmstat for
nr_foll_pin* ?
sg_free_table(&umem->sg_head);
}
thanks,
--
John Hubbard
NVIDIA
. Perhaps we should rename it to something like:
unpin_user_compound_page_dirty_lock()
?
thanks,
--
John Hubbard
NVIDIA
1)
return 1;
return min_t(unsigned int, (head + compound_nr(head) - page), npages);
thanks,
--
John Hubbard
NVIDIA
+
for (ntails = 1; ntails < npages; ntails++) {
if (compound_head(pages[ntails]) != head)
break;
@@ -22
(and the related one below) finally done!
Everything looks correct here.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
+ struct page *head;
+ unsigned int ntails;
if (!make_dirty) {
unpin_user_pages(pages, npages);
return;
}
); \
+i < npages; i += ntails, \
+compound_next(i, npages, list, &head, &ntails))
+
/**
* unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned
pages
* @pages: array of pages to be maybe marked dirty, and definitely released.
thanks,
--
John Hubbard
NVIDIA
Definitely "reusable" seems better to me, and especially anything *other*
than "reserved" is a good idea, IMHO.
thanks,
--
John Hubbard
NVIDIA
On 1/24/21 3:18 PM, John Hubbard wrote:
On 1/21/21 7:37 PM, Pavel Tatashin wrote:
When pages are pinned they can be faulted in userland and migrated, and
they can be faulted right in kernel without migration.
In either case, the pinned pages must end-up being pinnable (not movable).
Add a new
y* the new -z
option.
I'll poke around the rest of the patchset and see if that is expected
and normal, but either way the test code itself looks correct and seems
to be passing my set of "run a bunch of different gup_test options" here,
so feel free to add:
Reviewed-by: John
test flag"
That is just a minor documentation point, so either way, feel free to add:
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
With the fix, dump works like this:
root@virtme:/# gup_test -c
page #0, starting from user virt addr: 0x7f8acb9e4000
page:
t it's impossible :)
I proposed this exact idea a few days ago [1]. It's remarkable that we both
picked nearly identical values for the layout! :)
But as the responses show, security problems prevent pursuing that approach.
[1] https://lore.kernel.org/r/45806a5a-65c2-67ce-fc92-dc8c2144d...@nvidia.com
thanks,
--
John Hubbard
NVIDIA
pin_user_pages() calls). We already have all the
unpin_user_pages() calls in place, and so it's "merely" a matter of
adding a flag to 74 call sites.
The real question is whether we can get away with supporting only a very
low count of FOLL_PIN and FOLL_GET pages. Can we?
thanks,
--
John Hubbard
NVIDIA
;t add constraints
to the RDMA cases, but still does what we need here.
thanks,
--
John Hubbard
NVIDIA
On 1/7/21 2:00 PM, Linus Torvalds wrote:
On Thu, Jan 7, 2021 at 1:53 PM John Hubbard wrote:
Now, I do agree that from a QoI standpoint, it would be really lovely
if we actually enforced it. I'm not entirely sure we can, but maybe it
would be reasonable to use that
mm->ha
we end up with a clear (-ish) difference between
pages that can be waited for, and pages that should not be waited for in the
kernel.
I hope this helps, but if it's too much of a side-track, please disregard.
thanks,
--
John Hubbard
NVIDIA
's leave FOLL_TOUCH "pure": it's just a gup flag.
And then, add an option (maybe -z, after all) that says, "skip faulting pages
in from user space". That's a lot clearer! And you can tell it's better,
because we don't have to write a chunk of documentation explaining the odd
quirks. Ha, it feels better!
What do you think? (Again, if you want me to send over some diffs that put this
all together, let me know. I'm kind of embarrassed at all the typing required
here.)
thanks,
--
John Hubbard
NVIDIA
MADV_NOHUGEPAGE);
- for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE)
- p[0] = 0;
+ if (touch) {
+ gup.flags |= FOLL_TOUCH;
+ } else {
+ for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE)
+ p[0] = 0
On 12/18/20 1:06 AM, John Hubbard wrote:
Add a new test_flags field, to allow raw gup_flags to work.
I think .test_control_flags field would be a good name, to make it very
clear that it's not destined for gup_flags. Just .test_flags is not quite
as clear a distinction from .gup_flag
-93,6 +97,9 @@ int main(int argc, char **argv)
case 'w':
write = 1;
break;
+ case 'W':
+ write = 0;
+ break;
case 'f':
file = optarg;
break;
thanks,
--
John Hubbard
NVIDIA
g_end = contig_start + readahead_batch_length(rac);
+ u64 contig_end = contig_start + readahead_batch_length(rac) - 1;
Yes, confirmed on my end, too.
thanks,
--
John Hubbard
NVIDIA
I'm sending
it out early.
thanks,
--
John Hubbard
NVIDIA
anyway! Just these:
bool vma_is_secretmem(struct vm_area_struct *vma);
bool page_is_secretmem(struct page *page);
bool secretmem_active(void);
...or am I just Doing It Wrong? :)
thanks,
--
John Hubbard
NVIDIA
better to let the callers retry? Obviously that
would be a separate patch and I'm not sure it's safe to make that change,
but the current loop seems buried maybe too far down.
Thoughts, anyone?
thanks,
--
John Hubbard
NVIDIA
ags;
So, keeping "current" in the function name makes its intent misleading.
OK, I see. That sounds OK then.
thanks,
--
John Hubbard
NVIDIA
point it's a relief to finally
see the nested ifdefs get removed.
One naming nit/idea below, but this looks fine as is, so:
Reviewed-by: John Hubbard
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 0f8d1583fa8e..00bab23d1ee5 100644
--- a/include/linux/migrate.h
+++ b/in
flags, which right now happen to only cover CMA flags, so the original name
seems
accurate, right?
thanks,
John Hubbard
NVIDIA
{
#ifdef CONFIG_CMA
- unsigned int pflags = current->flags;
-
- if (!(pflags & PF_MEMALLOC_NOMOVABLE) &&
- gfp_migrate
d any lingering rename candidates
after this series is applied. And it's a good rename.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
---
include/linux/sched.h| 2 +-
include/linux/sched/mm.h | 21 +
mm/gup.c | 4 ++--
mm
-#endif /* CONFIG_FS_DAX || CONFIG_CMA */
static bool is_valid_gup_flags(unsigned int gup_flags)
{
At last some simplification here, yea!
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
;
struct migration_target_control mtc = {
.nid = NUMA_NO_NODE,
- .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+ .gfp_mask = GFP_USER | __GFP_NOWARN,
};
check_again:
Reviewed-by: John Hubbard
...while I was here, I noticed this
)
+{
+ return false;
+}
+#endif
#ifdef CONFIG_CMA
static long check_and_migrate_cma_pages(struct mm_struct *mm,
Looks obviously correct, and the follow-up simplication is very nice.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
like to get an
opinion from the community on an appropriate path forward for this
problem. If what I described sounds reasonable, or if there are other
ideas on how to address the problem that I am seeing.
I'm also in favor with avoiding (3) for now and maybe forever, depending on
how it goes. Good
On 11/18/20 10:17 AM, Dan Williams wrote:
On Wed, Nov 18, 2020 at 5:51 AM Jan Kara wrote:
On Mon 16-11-20 19:35:31, John Hubbard wrote:
On 11/16/20 6:48 PM, kernel test robot wrote:
Greeting,
FYI, we noticed a -45.0% regression of phoronix-test-suite.npb.FT.A.total_mop_s
due to commit
ct pin counts for huge pages")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
...but that commit happened in April, 2020. Surely if this were a serious issue
we
would have some other indication...is this worth following up on?? I'm inclined
to
ignore it, hon
VERSION < 11
# error Sorry, your version of Clang is too old - please use 10.0.1 or newer.
#endif
+#endif
/* Compiler specific definitions for Clang compiler */
thanks,
--
John Hubbard
NVIDIA
On 11/8/20 12:37 AM, Barry Song wrote:
Without DEBUG_FS, all the code in gup_test becomes meaningless. For sure
kernel provides debugfs stub while DEBUG_FS is disabled, but the point
here is that GUP_TEST can do nothing without DEBUG_FS.
Cc: John Hubbard
Cc: Ralph Campbell
Cc: Randy Dunlap
On 11/8/20 7:41 PM, Souptick Joarder wrote:
On Sat, Nov 7, 2020 at 2:27 PM John Hubbard wrote:
On 11/7/20 12:24 AM, Souptick Joarder wrote:
Fixed typo s/Poiner/Pointer
Fixes: 5b636857fee6 ("TOMOYO: Allow using argv[]/envp[] of execve() as
conditions.")
Signed-off-by: Souptick J
nd it quite difficult, having options appear and disappear on me,
in this system. If they all had this "comment" behavior by default, to show up
as a placeholder, I think it would be a better user experience.
thanks,
--
John Hubbard
NVIDIA
On 11/7/20 8:12 PM, Tetsuo Handa wrote:
On 2020/11/08 11:17, John Hubbard wrote:
Excuse me, but Documentation/core-api/pin_user_pages.rst says
"CASE 5: Pinning in order to _write_ to the data within the page"
while tomoyo_dump_page() is for "_read_ the data within the page&
abled"
depends on !GUP_TEST && !DEBUG_FS
Sweet--I just applied that here, and it does exactly what I wanted: puts a nice
clear
message on the "make menuconfig" screen. No more hidden item. Brilliant!
Let's go with that, shall we?
thanks,
--
John Hubbard
NVIDIA
On 11/7/20 7:14 PM, John Hubbard wrote:
On 11/7/20 6:58 PM, Song Bao Hua (Barry Song) wrote:
On 11/7/20 2:20 PM, Randy Dunlap wrote:
On 11/7/20 11:16 AM, John Hubbard wrote:
On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote:
From: John Hubbard [mailto:jhubb...@nvidia.com]
...
But if you
On 11/7/20 6:58 PM, Song Bao Hua (Barry Song) wrote:
On 11/7/20 2:20 PM, Randy Dunlap wrote:
On 11/7/20 11:16 AM, John Hubbard wrote:
On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote:
From: John Hubbard [mailto:jhubb...@nvidia.com]
...
But if you really disagree, then I'd go with,
On 11/7/20 5:13 PM, Tetsuo Handa wrote:
On 2020/11/08 4:17, John Hubbard wrote:
On 11/7/20 1:04 AM, John Hubbard wrote:
On 11/7/20 12:24 AM, Souptick Joarder wrote:
In 2019, we introduced pin_user_pages*() and now we are converting
get_user_pages*() to the new API as appropriate. [1] &
On 11/7/20 4:24 PM, Randy Dunlap wrote:
On 11/7/20 4:03 PM, John Hubbard wrote:
On 11/7/20 2:20 PM, Randy Dunlap wrote:
On 11/7/20 11:16 AM, John Hubbard wrote:
On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote:
...
OK, thanks, I see how you get that list now.
JFTR, those are not 42
On 11/7/20 2:20 PM, Randy Dunlap wrote:
On 11/7/20 11:16 AM, John Hubbard wrote:
On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote:
-Original Message-
From: John Hubbard [mailto:jhubb...@nvidia.com]
...
config GUP_BENCHMARK
bool "Enable infrastructur
On 11/7/20 1:04 AM, John Hubbard wrote:
On 11/7/20 12:24 AM, Souptick Joarder wrote:
In 2019, we introduced pin_user_pages*() and now we are converting
get_user_pages*() to the new API as appropriate. [1] & [2] could
be referred for more information. This is case 5 as per document [1].
On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote:
-Original Message-
From: John Hubbard [mailto:jhubb...@nvidia.com]
...
config GUP_BENCHMARK
bool "Enable infrastructure for get_user_pages() and related calls
benchmarking"
+ depends on DEBUG_FS
I thi
, to keep us on the straight and narrow, just in case
I'm misunderstanding something.
[1] https://lore.kernel.org/r/e78fb7af-627b-ce80-275e-51f97f1f3...@nvidia.com
thanks,
--
John Hubbard
NVIDIA
[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages":
On 11/7/20 12:24 AM, Souptick Joarder wrote:
Fixed typo s/Poiner/Pointer
Fixes: 5b636857fee6 ("TOMOYO: Allow using argv[]/envp[] of execve() as
conditions.")
Signed-off-by: Souptick Joarder
Cc: John Hubbard
---
security/tomoyo/domain.c | 2 +-
1 file changed, 1 insertion(+),
On 11/4/20 2:05 AM, Barry Song wrote:
Without DEBUG_FS, all the code in gup_benchmark becomes meaningless.
For sure kernel provides debugfs stub while DEBUG_FS is disabled, but
the point here is that GUP_BENCHMARK can do nothing without DEBUG_FS.
Cc: John Hubbard
Cc: Ralph Campbell
Inspired
ly, and I'm not
seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
just me missing something again.
It means ioremap can't create an IO page PUD, it has to be broken up.
Does ioremap even create anything larger than PTEs?
From my reading, yes. See ioremap_try_hug
age, get_futex_key requires a
* get_user_pages_fast_only implementation that can pin pages. Thus it's still
* useful to have gup_huge_pmd even if we can't operate on ptes.
*/
thanks,
--
John Hubbard
NVIDIA