(), pks_mkrdwr(), and pks_key_free(). Add 2 new macros:
PAGE_KERNEL_PKEY(key) and _PAGE_PKEY(pkey).
Update the protection key documentation to cover pkeys on supervisor
pages.
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
Signed-off-by: Fenghua Yu
---
Changes from RFC V3
Per
From: Ira Weiny
Define a helper, update_pkey_val(), which will be used to support both
Protection Key User (PKU) and the new Protection Key for Supervisor
(PKS) in subsequent patches.
Co-developed-by: Peter Zijlstra
Signed-off-by: Peter Zijlstra
Signed-off-by: Ira Weiny
---
Changes from RFC
irqentry_state_t to carry lockdep state.
Signed-off-by: Thomas Gleixner
Signed-off-by: Ira Weiny
---
arch/x86/entry/common.c | 34 ---
arch/x86/include/asm/idtentry.h | 3 ---
arch/x86/kernel/cpu/mce/core.c | 6 +++---
arch/x86/kernel/nmi.c | 6 +++---
arch
and then enable PKS when configured and
indicated by the CPU instance. While not strictly necessary in this
patch, ARCH_HAS_SUPERVISOR_PKEYS separates this functionality through
the patch series so it is introduced here.
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
Signed-off-by: Fenghua Yu
From: Ira Weiny
The core PKS functionality provides an interface for kernel users to
reserve keys for their domains, set up the page tables with those keys, and
control access to those domains when needed.
Define test code which exercises the core functionality of PKS via a
debugfs entry. Basic
From: Ira Weiny
Changes from RFC V3[3]
Rebase to TIP master
Update test error output
Standardize on 'irq_state' for state variables
From Dave Hansen
Update commit messages
Add/clean up comments
Add X86_FEATURE_PKS
From: Ira Weiny
In preparation for adding PKS information to struct irqentry_state_t
change all call sites and usages to pass the struct by reference
instead of by value.
While we are editing the call sites it is a good time to standardize on
the name 'irq_state'.
Signed-off-by: Ira Weiny
From: Ira Weiny
The PKRS MSR is not managed by XSAVE. It is preserved through a context
switch but this support leaves exception handling code open to memory
accesses during exceptions.
Two possible places for preserving this state were considered:
irqentry_state_t or pt_regs.[1] pt_regs
From: Ira Weiny
Protection Keys User (PKU) and Protection Keys Supervisor (PKS) work in
similar fashions and can share common defines. Specifically, PKS and PKU
each have:
1. A single control register
2. The same number of keys
3. The same number of bits in the register
On Thu, Oct 22, 2020 at 11:19:43AM -0700, Ralph Campbell wrote:
>
> On 10/22/20 8:41 AM, Ira Weiny wrote:
> > On Thu, Oct 22, 2020 at 11:37:53AM +0530, Aneesh Kumar K.V wrote:
> > > commit 6f42193fd86e ("memremap: don't use a separate devm action for
> > >
c1fd80] [c003a430] system_call_exception+0x120/0x270
> [c00025c1fe20] [c000c540] system_call_common+0xf0/0x27c
>
> Cc: Christoph Hellwig
> Cc: Dan Williams
> Cc: Sachin Sant
> Cc: linux-nvdimm@lists.01.org
> Cc: Ira Weiny
> Cc: Jason Gunthorpe
> Sig
U Lesser General Public License for
> - * more details.
> - */
> +// Copyright (c) 2014-2017, Intel Corporation. All rights reserved.
> +// SPDX-License-Identifier: LGPL-2.1
And I'm not sure why some of the copyrights are extended to 2020 while others
are not. I would think they would all be?
kasan_check_read(data, len);
> + check_object_size(data, len, true);
> +
> + while ((seg = next_segment(len, offset)) != 0) {
> > + npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
> + if (npages != 1)
> + return -EFAULT;
>
On Mon, Oct 19, 2020 at 11:12:44PM +0200, Thomas Gleixner wrote:
> On Mon, Oct 19 2020 at 13:26, Ira Weiny wrote:
> > On Mon, Oct 19, 2020 at 11:32:50AM +0200, Thomas Gleixner wrote:
> > Sorry, let me clarify. After this patch we have.
> >
> > typedef union ir
On Mon, Oct 19, 2020 at 11:32:50AM +0200, Thomas Gleixner wrote:
> On Sun, Oct 18 2020 at 22:37, Ira Weiny wrote:
> > On Fri, Oct 16, 2020 at 02:55:21PM +0200, Thomas Gleixner wrote:
> >> Subject: x86/entry: Move nmi entry/exit into common code
> >> From: Thomas Gleix
On Mon, Oct 19, 2020 at 11:37:14AM +0200, Peter Zijlstra wrote:
> On Fri, Oct 16, 2020 at 10:14:10PM -0700, Ira Weiny wrote:
> > > so it either needs to
> > > explicitly do so, or have an assertion that preemption is indeed
> > > disabled.
> >
> > H
xit_to_user_mode(struct p
> #ifndef irqentry_state
> typedef struct irqentry_state {
> 	bool	exit_rcu;
> +	bool	lockdep;
> } irqentry_state_t;
Building on what Peter said do you agree this should be made into a union?
It may not be strictly necessary in this patch but I think it would reflect the
mutual exclusivity better and could be changed easy enough in the follow on
patch which adds the pkrs state.
Ira
les preemption itself. Wrapping it in yet
> another layer is useless.
I was thinking the update to saved_pkrs needed this protection as well and that
was to be included in the preemption disable. But that too is incorrect.
I've removed this preemption disable.
Thanks,
Ira
whitespace damage; space followed by tabstop)
Fixed thanks.
>
> > + */
> > +void write_pkrs(u32 new_pkrs)
> > +{
> > + u32 *pkrs;
> > +
> > + if (!static_cpu_has(X86_FEATURE_PKS))
> > + return;
> > +
> > + pkrs = get_cpu_ptr(&pkrs_cache);
> > + if (*pkrs != new_pkrs) {
> > + *pkrs = new_pkrs;
> > + wrmsrl(MSR_IA32_PKRS, new_pkrs);
> > + }
> > + put_cpu_ptr(pkrs);
> > +}
>
> looks familiar that... :-)
Added you as a co-developer if that is ok?
Ira
undamentally be preempt disabled,
I agree, the update to the percpu cache value and MSR can not be torn.
> so it either needs to
> explicitly do so, or have an assertion that preemption is indeed
> disabled.
However, I don't think I understand clearly. Doesn't [get|put]_cpu_ptr()
handle the preempt_disable() for us? Is it not sufficient to rely on that?
Dave's comment seems to be the opposite where we need to eliminate preempt
disable before calling write_pkrs().
FWIW I think I'm mistaken in my response to Dave regarding the
preempt_disable() in pks_update_protection().
Ira
and the new Protection Key for Supervisor
> > (PKS) in subsequent patches.
> >
> > Co-developed-by: Ira Weiny
> > Signed-off-by: Ira Weiny
> > Signed-off-by: Fenghua Yu
> > ---
> > arch/x86/include/asm/pkeys.h | 2 ++
> > arch/x86/kernel/fpu/
On Tue, Oct 13, 2020 at 08:36:43PM +0100, Matthew Wilcox wrote:
> On Tue, Oct 13, 2020 at 11:44:29AM -0700, Dan Williams wrote:
> > On Fri, Oct 9, 2020 at 12:52 PM wrote:
> > >
> > > From: Ira Weiny
> > >
> > > The kmap() calls in this F
I don't like that "highpage" in the name and
> highmem.h as location - these make perfect sense regardless of highmem;
> they are normal memory operations with page + offset used instead of
> a pointer...
I was thinking along those lines as well especially because of the direction
this patch set takes kmap().
Thanks for pointing these out to me.  How about I lift them to a common header?
But if not highmem.h where?
Ira
be a lot easier if you add
> helpers for the above code sequence and its counterpart that copies
> to a potential hughmem page first, as that hides the implementation
> details from most users.
Matthew Wilcox and Al Viro have suggested similar ideas.
https://lore.kernel.org/lkml/20201013205012.gi2046...@iweiny-desk2.sc.intel.com/
Ira
uot;);
> > + return -1;
> > + }
> > +
>
> Will this return code make anybody mad? Should we have a nicer return
> code for when this is running on non-PKS hardware?
I'm not sure it will matter much but I think it is better to report the m
On Wed, Oct 14, 2020 at 09:06:44PM -0700, Dave Hansen wrote:
> On 10/14/20 8:46 PM, Ira Weiny wrote:
> > On Tue, Oct 13, 2020 at 11:52:32AM -0700, Dave Hansen wrote:
> >> On 10/9/20 12:42 PM, ira.we...@intel.com wrote:
> >>> @@ -341,6 +341,9 @@ noinstr void irqen
_PF_PK);
> > + if (!IS_ENABLED(CONFIG_ARCH_HAS_SUPERVISOR_PKEYS) ||
> > + !cpu_feature_enabled(X86_FEATURE_PKS))
> > + WARN_ON_ONCE(hw_error_code & X86_PF_PK);
>
> Yeah, please stick X86_FEATURE_PKS in disabled-features so you can use
> cpu_feature_enabled(X86_FEATURE_PKS) by itself here..
done.
thanks,
Ira
___
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-le...@lists.01.org
better name.
>
> Even if we did:
>
> irq_save_set_pkrs(state, INIT_VAL);
>
> It would probably compile down to the same thing, but be *really*
> obvious what's going on.
I suppose that is true. But I think it is odd having a parameter which is the
same for every call
> > +
> > + /* for debugging key exhaustion */
> > + pks_key_users[nr] = pkey_user;
> > +
> > + return nr;
> > +}
> > +EXPORT_SYMBOL_GPL(pks_key_alloc);
> > +
> > +/*
> > + * pks_key_free - Free a previously allocated PKS key
> > + *
> > + * @pkey: Key to be freed
> > + */
> > +void pks_key_free(int pkey)
> > +{
> > + if (!cpu_feature_enabled(X86_FEATURE_PKS))
> > + return;
> > +
> > + if (pkey >= PKS_NUM_KEYS || pkey <= PKS_KERN_DEFAULT_KEY)
> > + return;
>
> This seems worthy of a WARN_ON_ONCE() at least. It's essentially
> corrupt data coming into a kernel API.
Ok, Done,
Ira
On Tue, Oct 13, 2020 at 11:31:45AM -0700, Dave Hansen wrote:
> On 10/9/20 12:42 PM, ira.we...@intel.com wrote:
> > From: Ira Weiny
> >
> > The PKRS MSR is defined as a per-logical-processor register. This
> > isolates memory access by logical CPU. Unfortunately,
On Tue, Oct 13, 2020 at 11:23:08AM -0700, Dave Hansen wrote:
> On 10/9/20 12:42 PM, ira.we...@intel.com wrote:
> > +/*
> > + * PKS is independent of PKU and either or both may be supported on a CPU.
> > + * Configure PKS if the cpu supports the feature.
> > + */
>
> Let's at least be consistent
R_WD_BIT << pkey_shift;
>
> I still think this deserves two lines of comments:
>
> /* Mask out old bit values */
>
> /* Or in new values */
Sure, done.
Ira
with PKS added behind kmap we have 3 levels
of mapping we want.
global kmap (can span threads and sleep)
'thread' kmap (can sleep but not span threads)
'atomic' kmap (can't sleep nor span threads [by definition])
As Matthew said perhaps 'global kmaps' may be best changed to vmaps? I just
don't
th the _same_ resulting semantics; a thread local mapping which is
preemptable.[2] Therefore they each focus on changing different call sites.
While this patch set is huge I think it serves a valuable purpose to identify a
large number of call sites which are candidates for this new semantic.
I
On Mon, Oct 12, 2020 at 09:02:54PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 12, 2020 at 12:53:54PM -0700, Ira Weiny wrote:
> > On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> > > On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > >
's add a comment:
>
> /*
> * Generate an Access-Disable mask for the given pkey. Several of these
> * can be OR'd together to generate pkey register values.
> */
Fair enough. done.
>
> Once that's in place, along with the updated changelog:
>
> Reviewed-by: Dave Hansen
Thanks,
Ira