ddr, 0)
--
Kirill A. Shutemov
able]
> unsigned long trampoline_start;
> ^
>
> Signed-off-by: Zhenzhong Duan
> Cc: Kirill A. Shutemov
> Cc: Peter Zijlstra
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
Acked-by: Kirill A. Shutemov
Have no idea why I do
On Sun, Jul 14, 2019 at 06:16:49PM +, Randy Dunlap wrote:
> On 5/8/19 7:44 AM, Kirill A. Shutemov wrote:
> > From: Alison Schofield
> >
> > Provide an overview of MKTME on Intel Platforms.
> >
> > Signed-off-by: Alison Schofield
>
evel before
introducing a new API? How does changing swappiness affect your workloads? What
is the swappiness value in your setup?
--
Kirill A. Shutemov
On Mon, Jun 24, 2019 at 05:12:40PM -0700, Song Liu wrote:
> Please share your comments and suggestions on this.
Looks like a great first step toward THP in the page cache. Thanks!
Acked-by: Kirill A. Shutemov
THP allocation in the fault path and write support are the next goals.
--
Kirill A. Shutemov
Commit-ID: 432c833218dd0f75e7b56bd5e8658b72073158d2
Gitweb: https://git.kernel.org/tip/432c833218dd0f75e7b56bd5e8658b72073158d2
Author: Kirill A. Shutemov
AuthorDate: Mon, 24 Jun 2019 15:31:50 +0300
Committer: Thomas Gleixner
CommitDate: Wed, 26 Jun 2019 07:25:09 +0200
x86/mm: Handle
Commit-ID: c1887159eb48ba40e775584cfb2a443962cf1a05
Gitweb: https://git.kernel.org/tip/c1887159eb48ba40e775584cfb2a443962cf1a05
Author: Kirill A. Shutemov
AuthorDate: Thu, 20 Jun 2019 14:24:22 +0300
Committer: Thomas Gleixner
CommitDate: Wed, 26 Jun 2019 07:25:09 +0200
x86/boot/64
Commit-ID: 81c7ed296dcd02bc0b4488246d040e03e633737a
Gitweb: https://git.kernel.org/tip/81c7ed296dcd02bc0b4488246d040e03e633737a
Author: Kirill A. Shutemov
AuthorDate: Thu, 20 Jun 2019 14:23:45 +0300
Committer: Thomas Gleixner
CommitDate: Wed, 26 Jun 2019 07:25:09 +0200
x86/boot/64
On Tue, Jun 25, 2019 at 09:04:39PM +0200, Thomas Gleixner wrote:
> On Thu, 20 Jun 2019, Kirill A. Shutemov wrote:
> > @@ -190,18 +190,18 @@ unsigned long __head __startup_64(unsigned long
> > physaddr,
> > pgd[i + 0] = (pgdval_t)p4d + pgtable_flags;
>
On Tue, Jun 25, 2019 at 12:33:03PM +, Song Liu wrote:
>
>
> > On Jun 25, 2019, at 3:34 AM, Kirill A. Shutemov
> > wrote:
> >
> > On Mon, Jun 24, 2019 at 02:25:42PM +, Song Liu wrote:
> >>
> >>
> >>> On Jun 24, 2019, at 6:
On Mon, Jun 24, 2019 at 02:25:42PM +, Song Liu wrote:
>
>
> > On Jun 24, 2019, at 6:19 AM, Kirill A. Shutemov
> > wrote:
> >
> > On Sat, Jun 22, 2019 at 10:48:28PM -0700, Song Liu wrote:
> >> khugepaged needs exclusive mmap_sem to access page tab
On Mon, Jun 24, 2019 at 09:54:05AM -0700, Yang Shi wrote:
>
>
> On 6/13/19 10:13 AM, Yang Shi wrote:
> >
> >
> > On 6/13/19 4:39 AM, Kirill A. Shutemov wrote:
> > > On Thu, Jun 13, 2019 at 05:56:47AM +0800, Yang Shi wrote:
> > > > The later pa
On Mon, Jun 24, 2019 at 03:04:21PM +, Song Liu wrote:
>
>
> > On Jun 24, 2019, at 7:54 AM, Kirill A. Shutemov
> > wrote:
> >
> > On Mon, Jun 24, 2019 at 02:42:13PM +, Song Liu wrote:
> >>
> >>
> >>> On Jun 24, 2019, at 7:
On Mon, Jun 24, 2019 at 02:42:13PM +, Song Liu wrote:
>
>
> > On Jun 24, 2019, at 7:27 AM, Kirill A. Shutemov
> > wrote:
> >
> > On Mon, Jun 24, 2019 at 02:01:05PM +, Song Liu wrote:
> >>>> @@ -1392,6 +1403,23 @@ s
ly(page == NULL)) {
> >> + result = SCAN_FAIL;
> >> + goto xa_unlocked;
> >> + }
> >> + } else if (!PageUptodate(page)) {
> >
> > Maybe we should try wait_on_page_locked() here before give up?
>
> Are you referring to the "if (!PageUptodate(page))" case?
Yes.
--
Kirill A. Shutemov
>flags)) {
Who said it's the only PMD range that's subject to collapse? The bit has
to be per-PMD, not per-mapping.
I believe we can store the bit in the struct page of the PTE page table, clearing
it if we've mapped anything that doesn't belong there from the fault path.
And in general this calls for a more substantial re-design of khugepaged:
we might want to split it into two different kernel threads. One works on
collapsing small pages into a compound page and the other changes the virtual
address space to map the page with a PMD.
Even if only the first step is successful, it's still useful: the new
mapping of the file will get the huge page, even if the old one is still
PTE-mapped.
--
Kirill A. Shutemov
of
> + * THP for now.
> + */
> +static inline void release_file_thp(struct file *file)
> +{
> +#ifdef CONFIG_READ_ONLY_THP_FOR_FS
Please use IS_ENABLED() where possible.
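Something along these lines (an untested sketch; the helper body is elided
since the rest of the function isn't visible here):

static inline void release_file_thp(struct file *file)
{
	if (!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
		return;

	/* ... rest of the THP release logic from the patch ... */
}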
--
Kirill A. Shutemov
ck_page(mapping, index);
> + if (unlikely(page == NULL)) {
> + result = SCAN_FAIL;
> + goto xa_unlocked;
> + }
> + } else if (!PageUptodate(page)) {
Maybe we should try wait_on_page_locked() here before giving up? (Rough sketch
below the quoted hunk.)
> + VM_BUG_ON(is_shmem);
> + result = SCAN_FAIL;
> + goto xa_locked;
> + } else if (!is_shmem && PageDirty(page)) {
> + result = SCAN_FAIL;
> + goto xa_locked;
> } else if (trylock_page(page)) {
> get_page(page);
> xas_unlock_irq();
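For the !PageUptodate() case flagged above, something along these lines might
be worth trying before failing the collapse (a very rough sketch only; the
xarray entry would still need to be re-validated after re-taking the lock):

	} else if (!PageUptodate(page)) {
		VM_BUG_ON(is_shmem);
		xas_unlock_irq(&xas);
		wait_on_page_locked(page);
		xas_lock_irq(&xas);
		if (!PageUptodate(page)) {
			result = SCAN_FAIL;
			goto xa_locked;
		}
	}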
--
Kirill A. Shutemov
On Fri, Jun 21, 2019 at 06:04:14PM +, Song Liu wrote:
>
>
> > On Jun 21, 2019, at 9:30 AM, Song Liu wrote:
> >
> >
> >
> >> On Jun 21, 2019, at 6:45 AM, Song Liu wrote:
> >>
> >>
> >>
> >>> On Jun 21, 2
ork correctly with
phys-virt mismatch.
Signed-off-by: Kirill A. Shutemov
Reported-and-tested-by: Kyle Pelton
Fixes: b569c1843498 ("x86/mm/KASLR: Reduce randomization granularity for
5-level paging to 1GB")
Cc: Baoquan He
Signed-off-by: Kirill A. Shutemov
---
arch/x86/mm/init_64.c | 24
On Mon, Jun 24, 2019 at 06:07:42PM +0800, Baoquan He wrote:
> On 06/21/19 at 01:54pm, Kirill A. Shutemov wrote:
> > > The code block as below is to zero p4d entries which are not coverred by
> > > the current memory range, and if haven't been mapped already. It's
> > &g
On Fri, Jun 21, 2019 at 01:10:54PM +, Song Liu wrote:
>
>
> > On Jun 21, 2019, at 6:07 AM, Kirill A. Shutemov
> > wrote:
> >
> > On Thu, Jun 20, 2019 at 01:53:48PM -0700, Song Liu wrote:
> >> In previous patch, an application could put part of i
On Fri, Jun 21, 2019 at 01:17:05PM +, Song Liu wrote:
>
>
> > On Jun 21, 2019, at 5:48 AM, Kirill A. Shutemov
> > wrote:
> >
> > On Thu, Jun 13, 2019 at 10:57:47AM -0700, Song Liu wrote:
> >> After all uprobes are removed from the huge page
(inode, 0);
> +#endif
> +}
> +
> /*
> * Handle the last step of open()
> */
> @@ -3418,7 +3434,11 @@ static int do_last(struct nameidata *nd,
> goto out;
> opened:
> error = ima_file_check(file, op->acc_mode);
> - if (!error && will_truncate)
> + if (error)
> + goto out;
> +
> + release_file_thp(file);
What protects against re-filling the file with THP in parallel?
--
Kirill A. Shutemov
epaged functionality. We need to fix
khugepaged to handle SCAN_PAGE_COMPOUND and probably refactor the code to
be able to call for collapse of a particular range if we have all locks
taken (as we do in the uprobe case).
--
Kirill A. Shutemov
d the failure should be propagated to the caller.
--
Kirill A. Shutemov
On Fri, Jun 21, 2019 at 05:02:49PM +0800, Baoquan He wrote:
> Hi Kirill,
>
> On 06/20/19 at 02:22pm, Kirill A. Shutemov wrote:
> > Kyle has reported that kernel crashes sometimes when it boots in
> > 5-level paging mode with KASLR enabled:
>
> This is a great finding,
On Thu, Jun 20, 2019 at 02:42:55PM +, Dave Hansen wrote:
> On 6/20/19 4:22 AM, Kirill A. Shutemov wrote:
> > The commit relaxes KASLR alignment requirements and it can lead to
> > mismatch bentween 'i' and 'p4d_index(vaddr)' inside the loop in
> > phys_p4d_init(). The
trigger the issue, but Clang emits an R_X86_64_32S which
leads to an invalid memory access and a system reboot.
Signed-off-by: Kirill A. Shutemov
Fixes: 187e91fe5e91 ("x86/boot/64/clang: Use fixup_pointer() to access
'next_early_pgt'")
Cc: Alexander Potapenko
---
arch/x86/kernel/head64.c | 2
across 512G and, for 5-level paging, 64T boundary.
Signed-off-by: Kirill A. Shutemov
Fixes: c88d71508e36 ("x86/boot/64: Rewrite startup_64() in C")
---
arch/x86/kernel/head64.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kernel/head64.c
e the loop in
phys_p4d_init(). The mismatch in turn leads to clearing the wrong p4d
entry and eventually to the oops.
The fix is to make phys_p4d_init() walk the virtual address space, not
the physical one.
Signed-off-by: Kirill A. Shutemov
Reported-and-tested-by: Kyle Pelton
Fixes: b569c1843498 ("x86/m
r without access to the right key is able to
prevent a legitimate user from accessing the file. The attacker just needs read
access to the encrypted file to prevent any legitimate user from accessing it.
The problem applies to ioctl() too.
To make sense of it we must have a way to distinguish the right key from a
wrong one. I don't see an obvious solution with the current hardware design.
--
Kirill A. Shutemov
On Mon, Jun 17, 2019 at 04:51:58PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 17, 2019 at 05:43:28PM +0300, Kirill A. Shutemov wrote:
> > On Mon, Jun 17, 2019 at 11:27:55AM +0200, Peter Zijlstra wrote:
>
> > > > > And yet I don't see anything in pageattr.c.
> >
On Mon, Jun 17, 2019 at 11:27:55AM +0200, Peter Zijlstra wrote:
> On Sat, Jun 15, 2019 at 01:43:09AM +0300, Kirill A. Shutemov wrote:
> > On Fri, Jun 14, 2019 at 11:51:32AM +0200, Peter Zijlstra wrote:
> > > On Wed, May 08, 2019 at 05:43:38PM +0300, Kirill A. Shutemov wrote:
&g
y 08, 2019 at 05:44:09PM +0300, Kirill A. Shutemov wrote:
> > > > > From: Kai Huang
> > > > >
> > > > > KVM needs those variables to get/set memory encryption mask.
> > > > >
> > > > > Signed-off-by: Kai Huang
> >
On Fri, Jun 14, 2019 at 01:12:59PM +0200, Peter Zijlstra wrote:
> On Wed, May 08, 2019 at 05:43:40PM +0300, Kirill A. Shutemov wrote:
> > page_keyid() is inline funcation that uses lookup_page_ext(). KVM is
> > going to use page_keyid() and since KVM can be built as a module
>
On Fri, Jun 14, 2019 at 11:51:32AM +0200, Peter Zijlstra wrote:
> On Wed, May 08, 2019 at 05:43:38PM +0300, Kirill A. Shutemov wrote:
> > For MKTME we use per-KeyID direct mappings. This allows kernel to have
> > access to encrypted memory.
> >
> > sync_direct_mapp
On Fri, Jun 14, 2019 at 03:43:35PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 14, 2019 at 04:28:36PM +0300, Kirill A. Shutemov wrote:
> > On Fri, Jun 14, 2019 at 01:04:58PM +0200, Peter Zijlstra wrote:
> > > On Fri, Jun 14, 2019 at 11:34:09AM +0200, Peter Zijlstra wrote:
>
On Fri, Jun 14, 2019 at 01:04:58PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 14, 2019 at 11:34:09AM +0200, Peter Zijlstra wrote:
> > On Wed, May 08, 2019 at 05:43:33PM +0300, Kirill A. Shutemov wrote:
> >
> > > + lookup_page_ext(page)->keyid = keyid;
>
&
On Fri, Jun 14, 2019 at 11:34:09AM +0200, Peter Zijlstra wrote:
> On Wed, May 08, 2019 at 05:43:33PM +0300, Kirill A. Shutemov wrote:
>
> > +/* Prepare page to be used for encryption. Called from page allocator. */
> > +void __prep_encrypted_page(struct page *page, int orde
On Fri, Jun 14, 2019 at 11:15:14AM +0200, Peter Zijlstra wrote:
> On Wed, May 08, 2019 at 05:43:29PM +0300, Kirill A. Shutemov wrote:
> > + * Cast PAGE_MASK to a signed type so that it is sign-extended if
> > + * virtual addresses are 32-bits but physical addresses are larger
>
considered it? Are you sure it will not break
anything?
--
Kirill A. Shutemov
r, pte, mk_pte(p, prot));
> + page_add_file_rmap(p, false);
> + }
> +
> + spin_unlock(ptl);
> + unlock_page(page);
> + add_mm_counter(mm, mm_counter_file(page), HPAGE_PMD_NR);
> + ret = 0;
> + }
> return ret ? ERR_PTR(ret) :
> follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> }
> --
> 2.17.1
>
--
Kirill A. Shutemov
On Thu, Jun 13, 2019 at 03:03:01PM +, Song Liu wrote:
>
>
> > On Jun 13, 2019, at 7:16 AM, Kirill A. Shutemov
> > wrote:
> >
> > On Thu, Jun 13, 2019 at 01:57:30PM +, Song Liu wrote:
> >>> And I'm not convinced that it belongs here at all. Use
but it is fine: they will
be populated on the next access to them.
--
Kirill A. Shutemov
On Tue, Jun 11, 2019 at 10:07:54PM -0700, Yang Shi wrote:
>
>
> On 6/11/19 7:52 PM, Kirill A. Shutemov wrote:
> > On Fri, Jun 07, 2019 at 02:07:39PM +0800, Yang Shi wrote:
> > > Currently shrinker is just allocated and can work when memcg kmem is
> > > enabled.
On Tue, Jun 11, 2019 at 10:06:36PM -0700, Yang Shi wrote:
>
>
> On 6/11/19 7:47 PM, Kirill A. Shutemov wrote:
> > On Fri, Jun 07, 2019 at 02:07:37PM +0800, Yang Shi wrote:
> > > + /*
> > > + * The THP may be not on LRU at this point, e.g. the old
ks bisectability. It has to be done before making the
shrinker memcg-aware, doesn't it?
--
Kirill A. Shutemov
moving
of the destructor is required?
--
Kirill A. Shutemov
pages? This is just broken.
> +
> + pmd = pmd_offset(pud, addr);
> + if (sz != PMD_SIZE && pmd_none(*pmd))
> + return NULL;
> + /* hugepage or swap? */
> + if (pmd_huge(*pmd) || !pmd_present(*pmd))
> + return pmd;
> +
> + return NULL;
> +}
> +
--
Kirill A. Shutemov
nd you can map it with
a PMD page? I believe you don't have such a guarantee.
--
Kirill A. Shutemov
ry),
> + head + i, 0);
> }
> }
>
> The locking is definitely wrong.
Does it help with the problem, or is it just a possible lead?
--
Kirill A. Shutemov
ill eventually have to be
> rewritten to stop invoking page faults without the mmap_sem for
> reading. So the long term plan is still to drop all
> mmget_still_valid().
>
> Cc:
> Fixes: ba76149f47d8 ("thp: khugepaged")
> Reported-by: Michal Hocko
> Acked-by: Michal Hocko
> Signed-off-by: Andrea Arcangeli
Acked-by: Kirill A. Shutemov
--
Kirill A. Shutemov
On Mon, Jun 03, 2019 at 05:56:32PM +0300, Kirill Tkhai wrote:
> On 03.06.2019 17:38, Kirill Tkhai wrote:
> > On 22.05.2019 18:22, Kirill A. Shutemov wrote:
> >> On Mon, May 20, 2019 at 05:00:01PM +0300, Kirill Tkhai wrote:
> >>> This patchset adds a new syscall, which
On Thu, May 30, 2019 at 05:26:38PM +, Song Liu wrote:
>
>
> > On May 30, 2019, at 5:20 AM, Kirill A. Shutemov
> > wrote:
> >
> > On Wed, May 29, 2019 at 02:20:49PM -0700, Song Liu wrote:
> >> After all uprobes are removed from the huge page
hugepaged.
We need to teach khugepaged to deal with PTE-mapped compound pages.
And uprobes should only kick khugepaged for a VMA. Maybe synchronously.
--
Kirill A. Shutemov
mm is pointing to THP.
Maybe it would be cleaner to have FOLL_SPLIT_PMD, which would strip the
trans_huge PMD if any and then set the pte using get_locked_pte()?
This way you won't need any changes in the split_huge_pmd() path. Clearing
the PMD will be fine.
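I.e., roughly, in the PMD-follow path (a sketch of the intent only;
FOLL_SPLIT_PMD is the flag being proposed here, it doesn't exist yet, and the
plumbing below reuses follow_page_pte()/pte_alloc() just for illustration):

	if (flags & FOLL_SPLIT_PMD) {
		/* tear down only the huge PMD mapping; the compound
		 * page itself stays intact */
		split_huge_pmd(vma, pmd, address);
		if (pte_alloc(mm, pmd))
			return ERR_PTR(-ENOMEM);
		/* now take the normal PTE path for this address */
		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
	}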
--
Kirill A. Shutemov
On Tue, May 28, 2019 at 08:44:24PM +0800, Yang Shi wrote:
> @@ -81,6 +79,7 @@ struct shrinker {
> /* Flags */
> #define SHRINKER_NUMA_AWARE (1 << 0)
> #define SHRINKER_MEMCG_AWARE (1 << 1)
> +#define SHRINKER_NONSLAB (1 << 3)
Why 3?
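Bit 2 still looks free, so presumably:

#define SHRINKER_NONSLAB	(1 << 2)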
--
Kirill A. Shutemov
ave a helper that would return a
pointer to the struct which is right for the page: from pgdat or from
memcg, depending on the situation?
This way we will be able to kill most of the code duplication, right?
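Something like this, assuming the per-memcg and per-node queues end up wrapped
in the same struct (all names below are illustrative, not necessarily what the
patch will use):

static struct deferred_split *get_deferred_split_queue(struct page *page)
{
	struct mem_cgroup *memcg = compound_head(page)->mem_cgroup;
	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));

	if (memcg)
		return &memcg->deferred_split_queue;
	return &pgdat->deferred_split_queue;
}

Then every user just operates on whatever queue the helper hands back.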
--
Kirill A. Shutemov
ge), PAGE_SIZE) == 0) {
Does it work for highmem?
--
Kirill A. Shutemov
On Thu, May 30, 2019 at 02:10:15PM +0300, Kirill A. Shutemov wrote:
> On Wed, May 29, 2019 at 02:20:46PM -0700, Song Liu wrote:
> > @@ -2133,10 +2133,15 @@ static void __split_huge_pmd_locked(struct
> > vm_area_struct *vma, pmd_t *pmd,
> > VM_BUG_ON_VMA(vma->vm_end
> /*
>* We are going to unmap this huge page. So
Nope. This is going to leak a page table on architectures where
arch_needs_pgtable_deposit() is true.
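One way to handle it, mirroring what the non-anonymous path in
__split_huge_pmd_locked() already does (sketch):

	if (arch_needs_pgtable_deposit())
		zap_deposited_table(mm, pmd);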
--
Kirill A. Shutemov
On Wed, May 29, 2019 at 10:21:25AM +0300, Mike Rapoport wrote:
> Shouldn't it be EXPORT_SYMBOL?
We don't have callers outside core-mm at the moment.
I'll add kerneldoc in the next submission.
--
Kirill A. Shutemov
On Tue, May 28, 2019 at 12:15:16PM +0300, Kirill Tkhai wrote:
> On 28.05.2019 02:30, Kirill A. Shutemov wrote:
> > On Fri, May 24, 2019 at 05:00:32PM +0300, Kirill Tkhai wrote:
> >> On 24.05.2019 14:52, Kirill A. Shutemov wrote:
> >>> On Fri, May 24, 2019 at 01:45:
On Fri, May 24, 2019 at 05:00:32PM +0300, Kirill Tkhai wrote:
> On 24.05.2019 14:52, Kirill A. Shutemov wrote:
> > On Fri, May 24, 2019 at 01:45:50PM +0300, Kirill Tkhai wrote:
> >> On 22.05.2019 18:22, Kirill A. Shutemov wrote:
> >>> On Mon, May 20, 2019 at 05:00:
On Fri, May 24, 2019 at 01:45:50PM +0300, Kirill Tkhai wrote:
> On 22.05.2019 18:22, Kirill A. Shutemov wrote:
> > On Mon, May 20, 2019 at 05:00:01PM +0300, Kirill Tkhai wrote:
> >> This patchset adds a new syscall, which makes possible
> >> to clone a VMA from a
will not break
something in the area.
--
Kirill A. Shutemov
On Mon, May 20, 2019 at 05:00:12PM +0300, Kirill Tkhai wrote:
> This prepares the function to copy a vma between
> two processes. Two new arguments are introduced.
This kind of change requires a lot more explanation in the commit message,
describing all possible corner cases.
For instance, I would
On Thu, May 09, 2019 at 05:54:56AM -0400, Justin Piszcz wrote:
> Hello,
>
> Kernel: 5.1 (self-compiled, no modules)
> Arch: x86_64
> Distro: Debian Testing
>
> Issue: I was performing a dump of ext3 and ext4 filesystems and then
> restoring them to a separate volume (testing)-- afterwards I
On Thu, May 09, 2019 at 09:05:31AM -0700, Larry Bassel wrote:
> This patchset implements sharing of page table entries pointing
> to 2MiB pages (PMDs) for FS/DAX on x86.
-EPARSE.
How do you share entries? Entries do not take any space; the page tables that
contain these entries do.
Have you
On Thu, May 09, 2019 at 09:05:33AM -0700, Larry Bassel wrote:
> This is based on (but somewhat different from) what hugetlbfs
> does to share/unshare page tables.
>
> Signed-off-by: Larry Bassel
> ---
> include/linux/hugetlb.h | 4 ++
> mm/huge_memory.c| 32 ++
>
.org
> >
> > Fixes: ?
>
> Not sure which commit validated 5-level.
>
> Hi Kirill,
>
> Is this commit OK?
>
> Fiexes: eedb92abb9bb ("x86/mm: Make virtual memory layout dynamic for
> CONFIG_X86_5LEVEL=y")
Yep.
--
Kirill A. Shutemov
On Fri, May 10, 2019 at 06:07:11PM +, Dave Hansen wrote:
> On 5/8/19 7:43 AM, Kirill A. Shutemov wrote:
> > KeyID indicates what key to use to encrypt and decrypt page's content.
> > Depending on the implementation a cipher text may be tied to physical
> > address
On Wed, May 08, 2019 at 08:52:25PM +, Jacob Pan wrote:
> On Wed, 8 May 2019 09:58:30 -0700
> Christoph Hellwig wrote:
>
> > On Wed, May 08, 2019 at 05:44:12PM +0300, Kirill A. Shutemov wrote:
> > > +EXPORT_SYMBOL_GPL(__mem_encrypt_dma_set);
> > > +
> >
D-0, regardless of the VMA's KeyID.
Introduce helpers that create a page table entry for the zero page.
The generic implementation will be overridden by architecture-specific
code that takes care of using the correct KeyID.
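For the generic side this could be as simple as (a sketch; the helper name is
illustrative, not necessarily what the patch uses):

#ifndef mk_zero_pte
static inline pte_t mk_zero_pte(unsigned long addr, pgprot_t prot)
{
	/* without MKTME there is nothing special about the zero page */
	return pte_mkspecial(mk_pte(ZERO_PAGE(addr), prot));
}
#endif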
Signed-off-by: Kirill A. Shutemov
---
fs/dax.c | 3 +--
include/
VMAs with different KeyIDs do not mix together. Only VMAs with the same
KeyID are compatible.
Signed-off-by: Kirill A. Shutemov
---
fs/userfaultfd.c | 7 ---
include/linux/mm.h | 9 -
mm/madvise.c | 2 +-
mm/mempolicy.c | 3 ++-
mm/mlock.c | 2 +-
mm/mmap.c
Zero pages are never encrypted. Keep KeyID-0 for them.
Signed-off-by: Kirill A. Shutemov
---
arch/x86/include/asm/pgtable.h | 19 +++
1 file changed, 19 insertions(+)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 50b3e2d963c9..59c3dd50b8d5
.
To make it work, the kernel only allows merging pages with the same KeyID.
The approach guarantees that the merged page can be read by all users.
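I.e., roughly, in the merge path (illustrative only):

	/*
	 * Pages encrypted with different KeyIDs can never be identical
	 * from every user's point of view, so refuse to merge them.
	 */
	if (page_keyid(page) != page_keyid(kpage))
		return -EFAULT;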
Signed-off-by: Kirill A. Shutemov
---
include/linux/mm.h | 7 +++
mm/ksm.c | 17 +
2 files changed, 24 insertions(+)
diff
. With this
change we don't need to cover alloc_hugepage_vma() separately.
The change makes a typo in Alpha's implementation of
__alloc_zeroed_user_highpage() visible. Fix it too.
Signed-off-by: Kirill A. Shutemov
---
arch/alpha/include/asm/page.h | 2 +-
include/linux/gfp.h | 6 ++
2 files changed
that deals with encrypted pages has to call
prep_encrypted_page() too. See compaction_alloc() for instance.
Signed-off-by: Kirill A. Shutemov
---
include/linux/gfp.h | 45 -
include/linux/migrate.h | 14 +---
mm/compaction.c | 3 +++
mm
to early_init_intel().
Signed-off-by: Kirill A. Shutemov
---
arch/x86/kernel/cpu/intel.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index e271264e238a..4c9fadb57a13 100644
--- a/arch/x86/kernel/cpu/intel.c
by KVM which can be built as a module. We need
to export mktme_enabled_key to be able to inline page_keyid().
Signed-off-by: Kirill A. Shutemov
---
arch/x86/include/asm/mktme.h | 28
arch/x86/include/asm/page.h | 1 +
arch/x86/mm/mktme.c | 21
mktme_nr_keyids holds the number of KeyIDs available for MKTME,
excluding KeyID zero, which is used by TME. MKTME KeyIDs start from 1.
mktme_keyid_shift holds the shift of KeyID within physical address.
mktme_keyid_mask holds the mask to extract KeyID from physical address.
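For illustration, extracting the KeyID from a physical address with these
variables would look like (the helper name is mine):

static inline int keyid_from_paddr(phys_addr_t paddr)
{
	return (paddr & mktme_keyid_mask) >> mktme_keyid_shift;
}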
Signed-off-by: Kirill
().
Signed-off-by: Kirill A. Shutemov
---
mm/khugepaged.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 449044378782..96326a7e9d61 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1055,6 +1055,16 @@ static void collapse_huge_page(struct
page_keyid() is an inline function that uses lookup_page_ext(). KVM is
going to use page_keyid(), and since KVM can be built as a module,
lookup_page_ext() has to be exported.
Signed-off-by: Kirill A. Shutemov
---
mm/page_ext.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/page_ext.c b
Rename the option to CONFIG_MEMORY_PHYSICAL_PADDING. It will be used
not only for KASLR.
Signed-off-by: Kirill A. Shutemov
---
arch/x86/Kconfig| 2 +-
arch/x86/mm/kaslr.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index
be a problem in 4-level paging mode. If the
system has more physical memory than we can handle with MKTME, the
feature allows MKTME to fail, but the system still boots successfully.
Signed-off-by: Kirill A. Shutemov
---
arch/x86/include/asm/mktme.h | 5 +
arch/x86/kernel/cpu/intel.c | 5 +
arch/x86/mm
-by: Kirill A. Shutemov
---
security/keys/mktme_keys.c | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/security/keys/mktme_keys.c b/security/keys/mktme_keys.c
index a7ca32865a1c..9fdf482ea3e6 100644
--- a/security/keys/mktme_keys.c
+++ b/security/keys/mktme_keys.c
and map it to the Userspace Key.
During destroy, MKTME will return the hardware KeyID to the pool of
available keys.
Signed-off-by: Alison Schofield
Signed-off-by: Kirill A. Shutemov
---
security/keys/mktme_keys.c | 24
1 file changed, 24 insertions(+)
diff --git a/security
Per-KeyID direct mappings require changes to how we find the right
virtual address for a page and to virt-to-phys address translations.
The page_to_virt() definition overrides the default macro provided by
.
Signed-off-by: Kirill A. Shutemov
---
arch/x86/include/asm/page.h| 3 +++
arch/x86
-off-by: Kirill A. Shutemov
---
security/keys/mktme_keys.c | 39 --
1 file changed, 37 insertions(+), 2 deletions(-)
diff --git a/security/keys/mktme_keys.c b/security/keys/mktme_keys.c
index 14bc4e600978..a7ca32865a1c 100644
--- a/security/keys/mktme_keys.c
if this check fails.
Signed-off-by: Alison Schofield
Signed-off-by: Kirill A. Shutemov
---
mm/mprotect.c | 24
1 file changed, 24 insertions(+)
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 38d766b5cc20..53bd41f99a67 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -346,6
+ N * direct_mapping_size.
The size of the direct mapping is calculated during KASLR setup. If KASLR is
disabled, it happens during MKTME initialization.
With MKTME, the size of the direct mapping has to be a power of 2. This makes
the implementation of __pa() efficient.
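To illustrate the point (this is just the arithmetic, not the actual macro):
with the per-KeyID mappings laid out at PAGE_OFFSET + KeyID * direct_mapping_size,
a power-of-2 size lets __pa() reduce a virtual address from any of those
mappings with a mask instead of a division:

	/* offset within whichever per-KeyID mapping 'vaddr' falls into */
	paddr = (vaddr - PAGE_OFFSET) & (direct_mapping_size - 1);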
Signed-off-by: Kirill A. Shutemov
---
Documentation
until MKTME is enabled.
Signed-off-by: Kirill A. Shutemov
---
arch/x86/include/asm/mktme.h | 6 +
arch/x86/mm/init_64.c| 10 +
arch/x86/mm/mktme.c | 441 +++
3 files changed, 457 insertions(+)
diff --git a/arch/x86/include/asm/mktme.h b/arch/x86
From: Alison Schofield
The MKTME key type uses capabilities to restrict the allocation
of keys to privileged users. CAP_SYS_RESOURCE is required, but
the broader capability of CAP_SYS_ADMIN is accepted.
Signed-off-by: Alison Schofield
Signed-off-by: Kirill A. Shutemov
---
security/keys
to store user type key payloads.
Add 'mktme_bitmap_user_type' to recall when USER type keys are in
use. If no USER type keys are currently in use, new memory
may be brought online, despite the absence of 'mktme_storekeys'.
Signed-off-by: Alison Schofield
Signed-off-by: Kirill A. Shutemov
the intermediary references are get/put. The intermediaries in this
case are the encrypted VMAs.
Align the percpu_ref_init and percpu_ref_kill with the key service
instantiate and destroy methods respectively.
Signed-off-by: Alison Schofield
Signed-off-by: Kirill A. Shutemov
---
security/keys
parsing functions can be used when
_HMA objects are evaluated at runtime. The _HMA object provides
a completely new HMAT, overriding the existing table. The table
parsing functions will come in handy for those events.
Signed-off-by: Alison Schofield
Signed-off-by: Kirill A. Shutemov
---
drivers
ped from vm_page_prot on the first pgprot_modify().
Define PTE_PFN_MASK_MAX similarly to PTE_PFN_MASK, but based on
__PHYSICAL_MASK_SHIFT. This way we include the whole range of bits
architecturally available for the PFN without referencing the physical_mask and
mktme_keyid_mask variables.
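Roughly (a sketch based on the description above; the actual definition in the
patch may differ):

#define PTE_PFN_MASK_MAX \
	(((signed long)PAGE_MASK) & ((1ULL << __PHYSICAL_MASK_SHIFT) - 1))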
Signed-off-by: Kir
,
or an unprogrammable memory controller may be removed from the
platform.
Signed-off-by: Alison Schofield
Signed-off-by: Kirill A. Shutemov
---
security/keys/mktme_keys.c | 39 ++
1 file changed, 31 insertions(+), 8 deletions(-)
diff --git a/security/keys/mktme_keys.c b