Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Fri, Dec 02, 2022 at 02:49:09PM +0800, Chao Peng wrote:
> On Thu, Dec 01, 2022 at 06:16:46PM -0800, Vishal Annapurve wrote:
> > On Tue, Oct 25, 2022 at 8:18 AM Chao Peng wrote:
> > ...
> > > +}
> > > +
> > > +SYSCALL_DEFINE1(memfd_restricted, unsigned int, flags)
> > > +{
> >
> > Looking at the underlying shmem implementation, there seems to be no
> > way to enable transparent huge pages specifically for restricted memfd
> > files.
> >
> > Michael discussed earlier about tweaking
> > /sys/kernel/mm/transparent_hugepage/shmem_enabled setting to allow
> > hugepages to be used while backing restricted memfd. Such a change
> > will affect the rest of the shmem usecases as well. Even setting the
> > shmem_enabled policy to "advise" wouldn't help unless file based
> > advise for hugepage allocation is implemented.
>
> Had a look at fadvise() and it looks like it does not support HUGEPAGE
> for any filesystem yet.

Yes, I think fadvise() is the right direction here. The problem is
similar to NUMA policy, where existing APIs are focused around virtual
memory addresses. We need to extend the ABI to take fd+offset as input
instead.

--
Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Thu, Dec 01, 2022 at 06:16:46PM -0800, Vishal Annapurve wrote:
> On Tue, Oct 25, 2022 at 8:18 AM Chao Peng wrote:
> > ...
> > +}
> > +
> > +SYSCALL_DEFINE1(memfd_restricted, unsigned int, flags)
> > +{
>
> Looking at the underlying shmem implementation, there seems to be no
> way to enable transparent huge pages specifically for restricted memfd
> files.
>
> Michael discussed earlier about tweaking
> /sys/kernel/mm/transparent_hugepage/shmem_enabled setting to allow
> hugepages to be used while backing restricted memfd. Such a change
> will affect the rest of the shmem usecases as well. Even setting the
> shmem_enabled policy to "advise" wouldn't help unless file based
> advise for hugepage allocation is implemented.

Had a look at fadvise() and it looks like it does not support HUGEPAGE
for any filesystem yet.

> Does it make sense to provide a flag here to allow creating restricted
> memfds backed possibly by huge pages to give more granular control?

We do have an unused 'flags' argument that can be extended for such
usage, but I would let Kirill take a further look; perhaps this needs
more discussion.
Chao

> > +	struct file *file, *restricted_file;
> > +	int fd, err;
> > +
> > +	if (flags)
> > +		return -EINVAL;
> > +
> > +	fd = get_unused_fd_flags(0);
> > +	if (fd < 0)
> > +		return fd;
> > +
> > +	file = shmem_file_setup("memfd:restrictedmem", 0, VM_NORESERVE);
> > +	if (IS_ERR(file)) {
> > +		err = PTR_ERR(file);
> > +		goto err_fd;
> > +	}
> > +	file->f_mode |= FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE;
> > +	file->f_flags |= O_LARGEFILE;
> > +
> > +	restricted_file = restrictedmem_file_create(file);
> > +	if (IS_ERR(restricted_file)) {
> > +		err = PTR_ERR(restricted_file);
> > +		fput(file);
> > +		goto err_fd;
> > +	}
> > +
> > +	fd_install(fd, restricted_file);
> > +	return fd;
> > +err_fd:
> > +	put_unused_fd(fd);
> > +	return err;
> > +}
> > +
> > +void restrictedmem_register_notifier(struct file *file,
> > +				     struct restrictedmem_notifier *notifier)
> > +{
> > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > +
> > +	mutex_lock(&data->lock);
> > +	list_add(&notifier->list, &data->notifiers);
> > +	mutex_unlock(&data->lock);
> > +}
> > +EXPORT_SYMBOL_GPL(restrictedmem_register_notifier);
> > +
> > +void restrictedmem_unregister_notifier(struct file *file,
> > +				       struct restrictedmem_notifier *notifier)
> > +{
> > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > +
> > +	mutex_lock(&data->lock);
> > +	list_del(&notifier->list);
> > +	mutex_unlock(&data->lock);
> > +}
> > +EXPORT_SYMBOL_GPL(restrictedmem_unregister_notifier);
> > +
> > +int restrictedmem_get_page(struct file *file, pgoff_t offset,
> > +			   struct page **pagep, int *order)
> > +{
> > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > +	struct file *memfd = data->memfd;
> > +	struct page *page;
> > +	int ret;
> > +
> > +	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
> > +	if (ret)
> > +		return ret;
> > +
> > +	*pagep = page;
> > +	if (order)
> > +		*order = thp_order(compound_head(page));
> > +
> > +	SetPageUptodate(page);
> > +	unlock_page(page);
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(restrictedmem_get_page);
> > --
> > 2.25.1
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Oct 25, 2022 at 8:18 AM Chao Peng wrote:
> > From: "Kirill A. Shutemov"
> >
> > Introduce 'memfd_restricted' system call with the ability to create
> > memory areas that are restricted from userspace access through ordinary
> > MMU operations (e.g. read/write/mmap). The memory content is expected to
> > be used through a new in-kernel interface by a third kernel module.
> >
> > memfd_restricted() is useful for scenarios where a file descriptor (fd)
> > can be used as an interface into mm but want to restrict userspace's
> > ability on the fd. Initially it is designed to provide protections for
> > KVM encrypted guest memory.
> >
> > Normally KVM uses memfd memory via mmapping the memfd into KVM userspace
> > (e.g. QEMU) and then using the mmapped virtual address to setup the
> > mapping in the KVM secondary page table (e.g. EPT). With confidential
> > computing technologies like Intel TDX, the memfd memory may be encrypted
> > with special key for special software domain (e.g. KVM guest) and is not
> > expected to be directly accessed by userspace. Precisely, userspace
> > access to such encrypted memory may lead to host crash so should be
> > prevented.
> >
> > memfd_restricted() provides semantics required for KVM guest encrypted
> > memory support that a fd created with memfd_restricted() is going to be
> > used as the source of guest memory in confidential computing environment
> > and KVM can directly interact with core-mm without the need to expose
> > the memory content into KVM userspace.
> >
> > KVM userspace is still in charge of the lifecycle of the fd. It should
> > pass the created fd to KVM. KVM uses the new restrictedmem_get_page() to
> > obtain the physical memory page and then uses it to populate the KVM
> > secondary page table entries.
> >
> > The userspace restricted memfd can be fallocate-ed or hole-punched
> > from userspace. When these operations happen, KVM can get notified
> > through restrictedmem_notifier, it then gets chance to remove any
> > mapped entries of the range in the secondary page tables.
> >
> > memfd_restricted() itself is implemented as a shim layer on top of real
> > memory file systems (currently tmpfs). Pages in restrictedmem are marked
> > as unmovable and unevictable, this is required for current confidential
> > usage. But in future this might be changed.
> >
> > By default memfd_restricted() prevents userspace read, write and mmap.
> > By defining new bit in the 'flags', it can be extended to support other
> > restricted semantics in the future.
> >
> > The system call is currently wired up for x86 arch.
> >
> > Signed-off-by: Kirill A. Shutemov
> > Signed-off-by: Chao Peng
> > ---
> >  arch/x86/entry/syscalls/syscall_32.tbl |   1 +
> >  arch/x86/entry/syscalls/syscall_64.tbl |   1 +
> >  include/linux/restrictedmem.h          |  62 ++
> >  include/linux/syscalls.h               |   1 +
> >  include/uapi/asm-generic/unistd.h      |   5 +-
> >  include/uapi/linux/magic.h             |   1 +
> >  kernel/sys_ni.c                        |   3 +
> >  mm/Kconfig                             |   4 +
> >  mm/Makefile                            |   1 +
> >  mm/restrictedmem.c                     | 250 +
> >  10 files changed, 328 insertions(+), 1 deletion(-)
> >  create mode 100644 include/linux/restrictedmem.h
> >  create mode 100644 mm/restrictedmem.c
> >
> > diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
> > index 320480a8db4f..dc70ba90247e 100644
> > --- a/arch/x86/entry/syscalls/syscall_32.tbl
> > +++ b/arch/x86/entry/syscalls/syscall_32.tbl
> > @@ -455,3 +455,4 @@
> >  448	i386	process_mrelease	sys_process_mrelease
> >  449	i386	futex_waitv		sys_futex_waitv
> >  450	i386	set_mempolicy_home_node	sys_set_mempolicy_home_node
> > +451	i386	memfd_restricted	sys_memfd_restricted
> > diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
> > index c84d12608cd2..06516abc8318 100644
> > --- a/arch/x86/entry/syscalls/syscall_64.tbl
> > +++ b/arch/x86/entry/syscalls/syscall_64.tbl
> > @@ -372,6 +372,7 @@
> >  448	common	process_mrelease	sys_process_mrelease
> >  449	common	futex_waitv		sys_futex_waitv
> >  450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
> > +451	common	memfd_restricted	sys_memfd_restricted
> >
> >  #
> >  # Due to a historical design error, certain syscalls are numbered differently
> > diff --git a/include/linux/restrictedmem.h b/include/linux/restrictedmem.h
> > new file mode 100644
> > index 000000000000..9c37c3ea3180
> > --- /dev/null
> > +++ b/include/linux/restrictedmem.h
> > @@ -0,0 +1,62 @@
> > +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> > +#ifndef _LINUX_RESTRICTEDMEM_H
> > +
> > +#include
> > +#include
> > +#include
> > +
> > +struct restrictedmem_notifier;
> > +
> > +struct restrictedmem_notifier_ops {
> > +	void (*invalidate_start)(struct restrictedmem_notifier *notifier,
> > +				 pgoff_t start, pgoff_t end);
> > +	void (*invalidate_end)(struct restr
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Wed, Nov 30, 2022 at 05:39:31PM +0800, Chao Peng wrote:
> On Tue, Nov 29, 2022 at 01:18:15PM -0600, Michael Roth wrote:
> > On Tue, Nov 29, 2022 at 01:06:58PM -0600, Michael Roth wrote:
> > > On Tue, Nov 29, 2022 at 10:06:15PM +0800, Chao Peng wrote:
> > > > On Mon, Nov 28, 2022 at 06:37:25PM -0600, Michael Roth wrote:
> > > > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > > > ...
> > > > > > +static long restrictedmem_fallocate(struct file *file, int mode,
> > > > > > +				    loff_t offset, loff_t len)
> > > > > > +{
> > > > > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > > > > +	struct file *memfd = data->memfd;
> > > > > > +	int ret;
> > > > > > +
> > > > > > +	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > > > > +		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > > > > +			return -EINVAL;
> > > > > > +	}
> > > > > > +
> > > > > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, true);
> > > > >
> > > > > The KVM restrictedmem ops seem to expect pgoff_t, but here we pass
> > > > > loff_t. For SNP we've made this strange as part of the following patch
> > > > > and it seems to produce the expected behavior:
> > > >
> > > > That's correct. Thanks.
> > > >
> > > > > https://github.com/mdroth/linux/commit/d669c7d3003ff7a7a47e73e8c3b4eeadbd2c4eb6
> > > > >
> > > > > > +	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > > > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, false);
> > > > > > +	return ret;
> > > > > > +}
> > > > > > +
> > > > > > +int restrictedmem_get_page(struct file *file, pgoff_t offset,
> > > > > > +			   struct page **pagep, int *order)
> > > > > > +{
> > > > > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > > > > +	struct file *memfd = data->memfd;
> > > > > > +	struct page *page;
> > > > > > +	int ret;
> > > > > > +
> > > > > > +	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
> > > > >
> > > > > This will result in KVM allocating pages that userspace hasn't necessarily
> > > > > fallocate()'d. In the case of SNP we need to get the PFN so we can clean
> > > > > up the RMP entries when restrictedmem invalidations are issued for a GFN
> > > > > range.
> > > >
> > > > Yes fallocate() is unnecessary unless someone wants to reserve some
> > > > space (e.g. for determination or performance purpose), this matches its
> > > > semantics perfectly at:
> > > > https://www.man7.org/linux/man-pages/man2/fallocate.2.html
> > > >
> > > > > If the guest supports lazy-acceptance however, these pages may not have
> > > > > been faulted in yet, and if the VMM defers actually fallocate()'ing space
> > > > > until the guest actually tries to issue a shared->private for that GFN
> > > > > (to support lazy-pinning), then there may never be a need to allocate
> > > > > pages for these backends.
> > > > >
> > > > > However, the restrictedmem invalidations are for GFN ranges so there's
> > > > > no way to know in advance whether it's been allocated yet or not. The
> > > > > xarray is one option but currently it defaults to 'private' so that
> > > > > doesn't help us here. It might if we introduced an 'uninitialized' state
> > > > > or something along that line instead of just the binary
> > > > > 'shared'/'private' though...
> > > >
> > > > How about if we change the default to 'shared' as we discussed at
> > > > https://lore.kernel.org/all/Y35gI0L8GMt9+OkK@google.com/?
> > >
> > > Need to look at this a bit more, but I think that could work as well.
> >
> > > > > But for now we add
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 29, 2022 at 01:18:15PM -0600, Michael Roth wrote:
> On Tue, Nov 29, 2022 at 01:06:58PM -0600, Michael Roth wrote:
> > On Tue, Nov 29, 2022 at 10:06:15PM +0800, Chao Peng wrote:
> > > On Mon, Nov 28, 2022 at 06:37:25PM -0600, Michael Roth wrote:
> > > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > > ...
> > > > > +static long restrictedmem_fallocate(struct file *file, int mode,
> > > > > +				    loff_t offset, loff_t len)
> > > > > +{
> > > > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > > > +	struct file *memfd = data->memfd;
> > > > > +	int ret;
> > > > > +
> > > > > +	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > > > +		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > > > +			return -EINVAL;
> > > > > +	}
> > > > > +
> > > > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, true);
> > > >
> > > > The KVM restrictedmem ops seem to expect pgoff_t, but here we pass
> > > > loff_t. For SNP we've made this strange as part of the following patch
> > > > and it seems to produce the expected behavior:
> > >
> > > That's correct. Thanks.
> > >
> > > > https://github.com/mdroth/linux/commit/d669c7d3003ff7a7a47e73e8c3b4eeadbd2c4eb6
> > > >
> > > > > +	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, false);
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +int restrictedmem_get_page(struct file *file, pgoff_t offset,
> > > > > +			   struct page **pagep, int *order)
> > > > > +{
> > > > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > > > +	struct file *memfd = data->memfd;
> > > > > +	struct page *page;
> > > > > +	int ret;
> > > > > +
> > > > > +	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
> > > >
> > > > This will result in KVM allocating pages that userspace hasn't necessarily
> > > > fallocate()'d. In the case of SNP we need to get the PFN so we can clean
> > > > up the RMP entries when restrictedmem invalidations are issued for a GFN
> > > > range.
> > >
> > > Yes fallocate() is unnecessary unless someone wants to reserve some
> > > space (e.g. for determination or performance purpose), this matches its
> > > semantics perfectly at:
> > > https://www.man7.org/linux/man-pages/man2/fallocate.2.html
> > >
> > > > If the guest supports lazy-acceptance however, these pages may not have
> > > > been faulted in yet, and if the VMM defers actually fallocate()'ing space
> > > > until the guest actually tries to issue a shared->private for that GFN
> > > > (to support lazy-pinning), then there may never be a need to allocate
> > > > pages for these backends.
> > > >
> > > > However, the restrictedmem invalidations are for GFN ranges so there's
> > > > no way to know in advance whether it's been allocated yet or not. The
> > > > xarray is one option but currently it defaults to 'private' so that
> > > > doesn't help us here. It might if we introduced an 'uninitialized' state
> > > > or something along that line instead of just the binary
> > > > 'shared'/'private' though...
> > >
> > > How about if we change the default to 'shared' as we discussed at
> > > https://lore.kernel.org/all/Y35gI0L8GMt9+OkK@google.com/?
> >
> > Need to look at this a bit more, but I think that could work as well.
>
> > > > But for now we added a restrictedmem_get_page_noalloc() that uses
> > > > SGP_NONE instead of SGP_WRITE to avoid accidentally allocating a bunch
> > > > of memory as part of guest shutdown, and a
> > > > kvm_restrictedmem_get_pfn_noalloc() variant to go along wi
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 29, 2022 at 01:06:58PM -0600, Michael Roth wrote:
> On Tue, Nov 29, 2022 at 10:06:15PM +0800, Chao Peng wrote:
> > On Mon, Nov 28, 2022 at 06:37:25PM -0600, Michael Roth wrote:
> > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > ...
> > > > +static long restrictedmem_fallocate(struct file *file, int mode,
> > > > +				    loff_t offset, loff_t len)
> > > > +{
> > > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > > +	struct file *memfd = data->memfd;
> > > > +	int ret;
> > > > +
> > > > +	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > > +		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > > +			return -EINVAL;
> > > > +	}
> > > > +
> > > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, true);
> > >
> > > The KVM restrictedmem ops seem to expect pgoff_t, but here we pass
> > > loff_t. For SNP we've made this strange as part of the following patch
> > > and it seems to produce the expected behavior:
> >
> > That's correct. Thanks.
> >
> > > https://github.com/mdroth/linux/commit/d669c7d3003ff7a7a47e73e8c3b4eeadbd2c4eb6
> > >
> > > > +	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, false);
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +int restrictedmem_get_page(struct file *file, pgoff_t offset,
> > > > +			   struct page **pagep, int *order)
> > > > +{
> > > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > > +	struct file *memfd = data->memfd;
> > > > +	struct page *page;
> > > > +	int ret;
> > > > +
> > > > +	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
> > >
> > > This will result in KVM allocating pages that userspace hasn't necessarily
> > > fallocate()'d. In the case of SNP we need to get the PFN so we can clean
> > > up the RMP entries when restrictedmem invalidations are issued for a GFN
> > > range.
> >
> > Yes fallocate() is unnecessary unless someone wants to reserve some
> > space (e.g. for determination or performance purpose), this matches its
> > semantics perfectly at:
> > https://www.man7.org/linux/man-pages/man2/fallocate.2.html
> >
> > > If the guest supports lazy-acceptance however, these pages may not have
> > > been faulted in yet, and if the VMM defers actually fallocate()'ing space
> > > until the guest actually tries to issue a shared->private for that GFN
> > > (to support lazy-pinning), then there may never be a need to allocate
> > > pages for these backends.
> > >
> > > However, the restrictedmem invalidations are for GFN ranges so there's
> > > no way to know in advance whether it's been allocated yet or not. The
> > > xarray is one option but currently it defaults to 'private' so that
> > > doesn't help us here. It might if we introduced an 'uninitialized' state
> > > or something along that line instead of just the binary
> > > 'shared'/'private' though...
> >
> > How about if we change the default to 'shared' as we discussed at
> > https://lore.kernel.org/all/Y35gI0L8GMt9+OkK@google.com/?
>
> Need to look at this a bit more, but I think that could work as well.

> > > But for now we added a restrictedmem_get_page_noalloc() that uses
> > > SGP_NONE instead of SGP_WRITE to avoid accidentally allocating a bunch
> > > of memory as part of guest shutdown, and a
> > > kvm_restrictedmem_get_pfn_noalloc() variant to go along with that. But
> > > maybe a boolean param is better? Or maybe SGP_NOALLOC is the better
> > > default, and we just propagate an error to userspace if they didn't
> > > fallocate() in advance?
> >
> > Thi
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 29, 2022 at 10:06:15PM +0800, Chao Peng wrote:
> On Mon, Nov 28, 2022 at 06:37:25PM -0600, Michael Roth wrote:
> > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> ...
> > > +static long restrictedmem_fallocate(struct file *file, int mode,
> > > +				    loff_t offset, loff_t len)
> > > +{
> > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > +	struct file *memfd = data->memfd;
> > > +	int ret;
> > > +
> > > +	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > +		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > +			return -EINVAL;
> > > +	}
> > > +
> > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, true);
> >
> > The KVM restrictedmem ops seem to expect pgoff_t, but here we pass
> > loff_t. For SNP we've made this strange as part of the following patch
> > and it seems to produce the expected behavior:
>
> That's correct. Thanks.
>
> > https://github.com/mdroth/linux/commit/d669c7d3003ff7a7a47e73e8c3b4eeadbd2c4eb6
> >
> > > +	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, false);
> > > +	return ret;
> > > +}
> > > +
> > > +int restrictedmem_get_page(struct file *file, pgoff_t offset,
> > > +			   struct page **pagep, int *order)
> > > +{
> > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > +	struct file *memfd = data->memfd;
> > > +	struct page *page;
> > > +	int ret;
> > > +
> > > +	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
> >
> > This will result in KVM allocating pages that userspace hasn't necessarily
> > fallocate()'d. In the case of SNP we need to get the PFN so we can clean
> > up the RMP entries when restrictedmem invalidations are issued for a GFN
> > range.
>
> Yes fallocate() is unnecessary unless someone wants to reserve some
> space (e.g. for determination or performance purpose), this matches its
> semantics perfectly at:
> https://www.man7.org/linux/man-pages/man2/fallocate.2.html
>
> > If the guest supports lazy-acceptance however, these pages may not have
> > been faulted in yet, and if the VMM defers actually fallocate()'ing space
> > until the guest actually tries to issue a shared->private for that GFN
> > (to support lazy-pinning), then there may never be a need to allocate
> > pages for these backends.
> >
> > However, the restrictedmem invalidations are for GFN ranges so there's
> > no way to know in advance whether it's been allocated yet or not. The
> > xarray is one option but currently it defaults to 'private' so that
> > doesn't help us here. It might if we introduced an 'uninitialized' state
> > or something along that line instead of just the binary
> > 'shared'/'private' though...
>
> How about if we change the default to 'shared' as we discussed at
> https://lore.kernel.org/all/Y35gI0L8GMt9+OkK@google.com/?

Need to look at this a bit more, but I think that could work as well.

> > But for now we added a restrictedmem_get_page_noalloc() that uses
> > SGP_NONE instead of SGP_WRITE to avoid accidentally allocating a bunch
> > of memory as part of guest shutdown, and a
> > kvm_restrictedmem_get_pfn_noalloc() variant to go along with that. But
> > maybe a boolean param is better? Or maybe SGP_NOALLOC is the better
> > default, and we just propagate an error to userspace if they didn't
> > fallocate() in advance?
>
> This (making fallocate() a hard requirement) not only complicates the
> userspace but also forces the lazy-faulting going through a long path of
> exiting to userspace. Unless we don't have other options I would not go
> this way.

Unless I'm missing something, it's already the case that userspace is
responsible for handling all the shared->private transitions in response
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Nov 28, 2022 at 4:37 PM Michael Roth wrote:
>
> On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > From: "Kirill A. Shutemov"
> >
> > Introduce 'memfd_restricted' system call with the ability to create
> > memory areas that are restricted from userspace access through ordinary
> > MMU operations (e.g. read/write/mmap). The memory content is expected to
> > be used through a new in-kernel interface by a third kernel module.
> >
> > memfd_restricted() is useful for scenarios where a file descriptor (fd)
> > can be used as an interface into mm but want to restrict userspace's
> > ability on the fd. Initially it is designed to provide protections for
> > KVM encrypted guest memory.
> >
> > Normally KVM uses memfd memory via mmapping the memfd into KVM userspace
> > (e.g. QEMU) and then using the mmapped virtual address to setup the
> > mapping in the KVM secondary page table (e.g. EPT). With confidential
> > computing technologies like Intel TDX, the memfd memory may be encrypted
> > with special key for special software domain (e.g. KVM guest) and is not
> > expected to be directly accessed by userspace. Precisely, userspace
> > access to such encrypted memory may lead to host crash so should be
> > prevented.
> >
> > memfd_restricted() provides semantics required for KVM guest encrypted
> > memory support that a fd created with memfd_restricted() is going to be
> > used as the source of guest memory in confidential computing environment
> > and KVM can directly interact with core-mm without the need to expose
> > the memory content into KVM userspace.
> >
> > KVM userspace is still in charge of the lifecycle of the fd. It should
> > pass the created fd to KVM. KVM uses the new restrictedmem_get_page() to
> > obtain the physical memory page and then uses it to populate the KVM
> > secondary page table entries.
> >
> > The userspace restricted memfd can be fallocate-ed or hole-punched
> > from userspace. When these operations happen, KVM can get notified
> > through restrictedmem_notifier, it then gets chance to remove any
> > mapped entries of the range in the secondary page tables.
> >
> > memfd_restricted() itself is implemented as a shim layer on top of real
> > memory file systems (currently tmpfs). Pages in restrictedmem are marked
> > as unmovable and unevictable, this is required for current confidential
> > usage. But in future this might be changed.
> >
> > By default memfd_restricted() prevents userspace read, write and mmap.
> > By defining new bit in the 'flags', it can be extended to support other
> > restricted semantics in the future.
> >
> > The system call is currently wired up for x86 arch.
> >
> > Signed-off-by: Kirill A. Shutemov
> > Signed-off-by: Chao Peng
> > ---
> >  arch/x86/entry/syscalls/syscall_32.tbl |   1 +
> >  arch/x86/entry/syscalls/syscall_64.tbl |   1 +
> >  include/linux/restrictedmem.h          |  62 ++
> >  include/linux/syscalls.h               |   1 +
> >  include/uapi/asm-generic/unistd.h      |   5 +-
> >  include/uapi/linux/magic.h             |   1 +
> >  kernel/sys_ni.c                        |   3 +
> >  mm/Kconfig                             |   4 +
> >  mm/Makefile                            |   1 +
> >  mm/restrictedmem.c                     | 250 +
> >  10 files changed, 328 insertions(+), 1 deletion(-)
> >  create mode 100644 include/linux/restrictedmem.h
> >  create mode 100644 mm/restrictedmem.c
> >
> > diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
> > index 320480a8db4f..dc70ba90247e 100644
> > --- a/arch/x86/entry/syscalls/syscall_32.tbl
> > +++ b/arch/x86/entry/syscalls/syscall_32.tbl
> > @@ -455,3 +455,4 @@
> >  448	i386	process_mrelease	sys_process_mrelease
> >  449	i386	futex_waitv		sys_futex_waitv
> >  450	i386	set_mempolicy_home_node	sys_set_mempolicy_home_node
> > +451	i386	memfd_restricted	sys_memfd_restricted
> > diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
> > index c84d12608cd2..06516abc8318 100644
> > --- a/arch/x86/entry/syscalls/syscall_64.tbl
> > +++ b/arch/x86/entry/syscalls/syscall_64.tbl
> > @@ -372,6 +372,7 @@
> >  448	common	process_mrelease	sys_process_mrelease
> >  449	common	futex_waitv		sys_futex_waitv
> >  450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
> > +451	common	memfd_restricted	sys_memfd_restricted
> >
> >  #
> >  # Due to a historical design error, certain syscalls are numbered differently
> > diff --git a/include/linux/restrictedmem.h b/include/linux/restrictedmem.h
> > new file mode 100644
> > index 000000000000..9c37c3ea3180
> > --- /dev/null
> > +++ b/include/linux/restrictedmem.h
> > @@ -0,0 +1,62 @@
> > +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> > +#ifndef _LINUX_RESTRICTEDMEM_H
> > +
> > +#include
> > +#include
> > +#include
> > +
> > +st
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Nov 28, 2022 at 06:37:25PM -0600, Michael Roth wrote: > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: ... > > +static long restrictedmem_fallocate(struct file *file, int mode, > > + loff_t offset, loff_t len) > > +{ > > + struct restrictedmem_data *data = file->f_mapping->private_data; > > + struct file *memfd = data->memfd; > > + int ret; > > + > > + if (mode & FALLOC_FL_PUNCH_HOLE) { > > + if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len)) > > + return -EINVAL; > > + } > > + > > + restrictedmem_notifier_invalidate(data, offset, offset + len, true); > > The KVM restrictedmem ops seem to expect pgoff_t, but here we pass > loff_t. For SNP we've made this change as part of the following patch > and it seems to produce the expected behavior: That's correct. Thanks. > > > https://github.com/mdroth/linux/commit/d669c7d3003ff7a7a47e73e8c3b4eeadbd2c4eb6 > > > + ret = memfd->f_op->fallocate(memfd, mode, offset, len); > > + restrictedmem_notifier_invalidate(data, offset, offset + len, false); > > + return ret; > > +} > > + > > > > > +int restrictedmem_get_page(struct file *file, pgoff_t offset, > > + struct page **pagep, int *order) > > +{ > > + struct restrictedmem_data *data = file->f_mapping->private_data; > > + struct file *memfd = data->memfd; > > + struct page *page; > > + int ret; > > + > > + ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE); > > This will result in KVM allocating pages that userspace hasn't necessarily > fallocate()'d. In the case of SNP we need to get the PFN so we can clean > up the RMP entries when restrictedmem invalidations are issued for a GFN > range. Yes fallocate() is unnecessary unless someone wants to reserve some space (e.g. 
for deterministic allocation or performance reasons), which matches fallocate()'s documented semantics: https://www.man7.org/linux/man-pages/man2/fallocate.2.html > > If the guest supports lazy-acceptance however, these pages may not have > been faulted in yet, and if the VMM defers actually fallocate()'ing space > until the guest actually tries to issue a shared->private for that GFN > (to support lazy-pinning), then there may never be a need to allocate > pages for these backends. > > However, the restrictedmem invalidations are for GFN ranges so there's > no way to know in advance whether it's been allocated yet or not. The > xarray is one option but currently it defaults to 'private' so that > doesn't help us here. It might if we introduced an 'uninitialized' state > or something along that line instead of just the binary > 'shared'/'private' though... How about if we change the default to 'shared' as we discussed at https://lore.kernel.org/all/y35gi0l8gmt9+...@google.com/? > > But for now we added a restrictedmem_get_page_noalloc() that uses > SGP_NONE instead of SGP_WRITE to avoid accidentally allocating a bunch > of memory as part of guest shutdown, and a > kvm_restrictedmem_get_pfn_noalloc() variant to go along with that. But > maybe a boolean param is better? Or maybe SGP_NOALLOC is the better > default, and we just propagate an error to userspace if they didn't > fallocate() in advance? This (making fallocate() a hard requirement) not only complicates userspace but also forces lazy-faulting through the long path of exiting to userspace. Unless we have no other options I would not go this way. Chao > > -Mike > > > + if (ret) > > + return ret; > > + > > + *pagep = page; > > + if (order) > > + *order = thp_order(compound_head(page)); > > + > > + SetPageUptodate(page); > > + unlock_page(page); > > + > > + return 0; > > +} > > +EXPORT_SYMBOL_GPL(restrictedmem_get_page); > > -- > > 2.25.1 > >
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 29, 2022 at 12:39:06PM +0100, David Hildenbrand wrote: > On 29.11.22 12:21, Kirill A. Shutemov wrote: > > On Mon, Nov 28, 2022 at 06:06:32PM -0600, Michael Roth wrote: > > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: > > > > From: "Kirill A. Shutemov" > > > > > > > > > > > > > > > > > +static struct file *restrictedmem_file_create(struct file *memfd) > > > > +{ > > > > + struct restrictedmem_data *data; > > > > + struct address_space *mapping; > > > > + struct inode *inode; > > > > + struct file *file; > > > > + > > > > + data = kzalloc(sizeof(*data), GFP_KERNEL); > > > > + if (!data) > > > > + return ERR_PTR(-ENOMEM); > > > > + > > > > + data->memfd = memfd; > > > > + mutex_init(&data->lock); > > > > + INIT_LIST_HEAD(&data->notifiers); > > > > + > > > > + inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb); > > > > + if (IS_ERR(inode)) { > > > > + kfree(data); > > > > + return ERR_CAST(inode); > > > > + } > > > > + > > > > + inode->i_mode |= S_IFREG; > > > > + inode->i_op = &restrictedmem_iops; > > > > + inode->i_mapping->private_data = data; > > > > + > > > > + file = alloc_file_pseudo(inode, restrictedmem_mnt, > > > > +"restrictedmem", O_RDWR, > > > > +&restrictedmem_fops); > > > > + if (IS_ERR(file)) { > > > > + iput(inode); > > > > + kfree(data); > > > > + return ERR_CAST(file); > > > > + } > > > > + > > > > + file->f_flags |= O_LARGEFILE; > > > > + > > > > + mapping = memfd->f_mapping; > > > > + mapping_set_unevictable(mapping); > > > > + mapping_set_gfp_mask(mapping, > > > > +mapping_gfp_mask(mapping) & > > > > ~__GFP_MOVABLE); > > > > > > Is this supposed to prevent migration of pages being used for > > > restrictedmem/shmem backend? > > > > Yes, my bad. I expected it to prevent migration, but it is not true. > > Maybe add a comment that these pages are not movable and we don't want to > place them into movable pageblocks (including CMA and ZONE_MOVABLE). That's > the primary purpose of the GFP mask here. Yes I can do that. 
Chao > > -- > Thanks, > > David / dhildenb
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 29, 2022 at 02:21:39PM +0300, Kirill A. Shutemov wrote: > On Mon, Nov 28, 2022 at 06:06:32PM -0600, Michael Roth wrote: > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: > > > From: "Kirill A. Shutemov" > > > > > > > > > > > > +static struct file *restrictedmem_file_create(struct file *memfd) > > > +{ > > > + struct restrictedmem_data *data; > > > + struct address_space *mapping; > > > + struct inode *inode; > > > + struct file *file; > > > + > > > + data = kzalloc(sizeof(*data), GFP_KERNEL); > > > + if (!data) > > > + return ERR_PTR(-ENOMEM); > > > + > > > + data->memfd = memfd; > > > + mutex_init(&data->lock); > > > + INIT_LIST_HEAD(&data->notifiers); > > > + > > > + inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb); > > > + if (IS_ERR(inode)) { > > > + kfree(data); > > > + return ERR_CAST(inode); > > > + } > > > + > > > + inode->i_mode |= S_IFREG; > > > + inode->i_op = &restrictedmem_iops; > > > + inode->i_mapping->private_data = data; > > > + > > > + file = alloc_file_pseudo(inode, restrictedmem_mnt, > > > + "restrictedmem", O_RDWR, > > > + &restrictedmem_fops); > > > + if (IS_ERR(file)) { > > > + iput(inode); > > > + kfree(data); > > > + return ERR_CAST(file); > > > + } > > > + > > > + file->f_flags |= O_LARGEFILE; > > > + > > > + mapping = memfd->f_mapping; > > > + mapping_set_unevictable(mapping); > > > + mapping_set_gfp_mask(mapping, > > > + mapping_gfp_mask(mapping) & ~__GFP_MOVABLE); > > > > Is this supposed to prevent migration of pages being used for > > restrictedmem/shmem backend? > > Yes, my bad. I expected it to prevent migration, but it is not true. > > Looks like we need to bump refcount in restrictedmem_get_page() and reduce > it back when KVM is no longer use it. The restrictedmem_get_page() has taken a reference, but later KVM put_page() after populating the secondary page table entry through kvm_release_pfn_clean(). One option would let the user feature(e.g. 
TDX/SEV) take its own get_page()/put_page() reference while populating the secondary page table entry; AFAICS this requirement also comes from those features. Chao > > Chao, could you adjust it? > > -- > Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On 29.11.22 12:21, Kirill A. Shutemov wrote: On Mon, Nov 28, 2022 at 06:06:32PM -0600, Michael Roth wrote: On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: From: "Kirill A. Shutemov" +static struct file *restrictedmem_file_create(struct file *memfd) +{ + struct restrictedmem_data *data; + struct address_space *mapping; + struct inode *inode; + struct file *file; + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return ERR_PTR(-ENOMEM); + + data->memfd = memfd; + mutex_init(&data->lock); + INIT_LIST_HEAD(&data->notifiers); + + inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb); + if (IS_ERR(inode)) { + kfree(data); + return ERR_CAST(inode); + } + + inode->i_mode |= S_IFREG; + inode->i_op = &restrictedmem_iops; + inode->i_mapping->private_data = data; + + file = alloc_file_pseudo(inode, restrictedmem_mnt, +"restrictedmem", O_RDWR, +&restrictedmem_fops); + if (IS_ERR(file)) { + iput(inode); + kfree(data); + return ERR_CAST(file); + } + + file->f_flags |= O_LARGEFILE; + + mapping = memfd->f_mapping; + mapping_set_unevictable(mapping); + mapping_set_gfp_mask(mapping, +mapping_gfp_mask(mapping) & ~__GFP_MOVABLE); Is this supposed to prevent migration of pages being used for restrictedmem/shmem backend? Yes, my bad. I expected it to prevent migration, but it is not true. Maybe add a comment that these pages are not movable and we don't want to place them into movable pageblocks (including CMA and ZONE_MOVABLE). That's the primary purpose of the GFP mask here. -- Thanks, David / dhildenb
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Nov 28, 2022 at 06:06:32PM -0600, Michael Roth wrote: > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: > > From: "Kirill A. Shutemov" > > > > > > > +static struct file *restrictedmem_file_create(struct file *memfd) > > +{ > > + struct restrictedmem_data *data; > > + struct address_space *mapping; > > + struct inode *inode; > > + struct file *file; > > + > > + data = kzalloc(sizeof(*data), GFP_KERNEL); > > + if (!data) > > + return ERR_PTR(-ENOMEM); > > + > > + data->memfd = memfd; > > + mutex_init(&data->lock); > > + INIT_LIST_HEAD(&data->notifiers); > > + > > + inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb); > > + if (IS_ERR(inode)) { > > + kfree(data); > > + return ERR_CAST(inode); > > + } > > + > > + inode->i_mode |= S_IFREG; > > + inode->i_op = &restrictedmem_iops; > > + inode->i_mapping->private_data = data; > > + > > + file = alloc_file_pseudo(inode, restrictedmem_mnt, > > +"restrictedmem", O_RDWR, > > +&restrictedmem_fops); > > + if (IS_ERR(file)) { > > + iput(inode); > > + kfree(data); > > + return ERR_CAST(file); > > + } > > + > > + file->f_flags |= O_LARGEFILE; > > + > > + mapping = memfd->f_mapping; > > + mapping_set_unevictable(mapping); > > + mapping_set_gfp_mask(mapping, > > +mapping_gfp_mask(mapping) & ~__GFP_MOVABLE); > > Is this supposed to prevent migration of pages being used for > restrictedmem/shmem backend? Yes, my bad. I expected it to prevent migration, but it is not true. Looks like we need to bump refcount in restrictedmem_get_page() and reduce it back when KVM is no longer use it. Chao, could you adjust it? -- Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: > From: "Kirill A. Shutemov" > > Introduce 'memfd_restricted' system call with the ability to create > memory areas that are restricted from userspace access through ordinary > MMU operations (e.g. read/write/mmap). The memory content is expected to > be used through a new in-kernel interface by a third kernel module. > > memfd_restricted() is useful for scenarios where a file descriptor(fd) > can be used as an interface into mm but want to restrict userspace's > ability on the fd. Initially it is designed to provide protections for > KVM encrypted guest memory. > > Normally KVM uses memfd memory via mmapping the memfd into KVM userspace > (e.g. QEMU) and then using the mmaped virtual address to setup the > mapping in the KVM secondary page table (e.g. EPT). With confidential > computing technologies like Intel TDX, the memfd memory may be encrypted > with special key for special software domain (e.g. KVM guest) and is not > expected to be directly accessed by userspace. Precisely, userspace > access to such encrypted memory may lead to host crash so should be > prevented. > > memfd_restricted() provides semantics required for KVM guest encrypted > memory support that a fd created with memfd_restricted() is going to be > used as the source of guest memory in confidential computing environment > and KVM can directly interact with core-mm without the need to expose > the memoy content into KVM userspace. > > KVM userspace is still in charge of the lifecycle of the fd. It should > pass the created fd to KVM. KVM uses the new restrictedmem_get_page() to > obtain the physical memory page and then uses it to populate the KVM > secondary page table entries. > > The userspace restricted memfd can be fallocate-ed or hole-punched > from userspace. 
When these operations happen, KVM can get notified > through restrictedmem_notifier, it then gets chance to remove any > mapped entries of the range in the secondary page tables. > > memfd_restricted() itself is implemented as a shim layer on top of real > memory file systems (currently tmpfs). Pages in restrictedmem are marked > as unmovable and unevictable, this is required for current confidential > usage. But in future this might be changed. > > By default memfd_restricted() prevents userspace read, write and mmap. > By defining new bit in the 'flags', it can be extended to support other > restricted semantics in the future. > > The system call is currently wired up for x86 arch. > > Signed-off-by: Kirill A. Shutemov > Signed-off-by: Chao Peng > --- > arch/x86/entry/syscalls/syscall_32.tbl | 1 + > arch/x86/entry/syscalls/syscall_64.tbl | 1 + > include/linux/restrictedmem.h | 62 ++ > include/linux/syscalls.h | 1 + > include/uapi/asm-generic/unistd.h | 5 +- > include/uapi/linux/magic.h | 1 + > kernel/sys_ni.c| 3 + > mm/Kconfig | 4 + > mm/Makefile| 1 + > mm/restrictedmem.c | 250 + > 10 files changed, 328 insertions(+), 1 deletion(-) > create mode 100644 include/linux/restrictedmem.h > create mode 100644 mm/restrictedmem.c > > diff --git a/arch/x86/entry/syscalls/syscall_32.tbl > b/arch/x86/entry/syscalls/syscall_32.tbl > index 320480a8db4f..dc70ba90247e 100644 > --- a/arch/x86/entry/syscalls/syscall_32.tbl > +++ b/arch/x86/entry/syscalls/syscall_32.tbl > @@ -455,3 +455,4 @@ > 448 i386process_mreleasesys_process_mrelease > 449 i386futex_waitv sys_futex_waitv > 450 i386set_mempolicy_home_node sys_set_mempolicy_home_node > +451 i386memfd_restrictedsys_memfd_restricted > diff --git a/arch/x86/entry/syscalls/syscall_64.tbl > b/arch/x86/entry/syscalls/syscall_64.tbl > index c84d12608cd2..06516abc8318 100644 > --- a/arch/x86/entry/syscalls/syscall_64.tbl > +++ b/arch/x86/entry/syscalls/syscall_64.tbl > @@ -372,6 +372,7 @@ > 448 common process_mreleasesys_process_mrelease 
> 449 common futex_waitv sys_futex_waitv > 450 common set_mempolicy_home_node sys_set_mempolicy_home_node > +451 common memfd_restrictedsys_memfd_restricted > > # > # Due to a historical design error, certain syscalls are numbered differently > diff --git a/include/linux/restrictedmem.h b/include/linux/restrictedmem.h > new file mode 100644 > index ..9c37c3ea3180 > --- /dev/null > +++ b/include/linux/restrictedmem.h > @@ -0,0 +1,62 @@ > +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ > +#ifndef _LINUX_RESTRICTEDMEM_H > + > +#include > +#include > +#include > + > +struct restrictedmem_notifier; > + > +struct restrictedmem_notifier_ops { > + void (*invalidate_start)(struct restrictedmem_notifier *notifier, > + pgoff_t start, pgoff_t end); > + void (*invalidate_end)(struct restric
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: > From: "Kirill A. Shutemov" > > +static struct file *restrictedmem_file_create(struct file *memfd) > +{ > + struct restrictedmem_data *data; > + struct address_space *mapping; > + struct inode *inode; > + struct file *file; > + > + data = kzalloc(sizeof(*data), GFP_KERNEL); > + if (!data) > + return ERR_PTR(-ENOMEM); > + > + data->memfd = memfd; > + mutex_init(&data->lock); > + INIT_LIST_HEAD(&data->notifiers); > + > + inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb); > + if (IS_ERR(inode)) { > + kfree(data); > + return ERR_CAST(inode); > + } > + > + inode->i_mode |= S_IFREG; > + inode->i_op = &restrictedmem_iops; > + inode->i_mapping->private_data = data; > + > + file = alloc_file_pseudo(inode, restrictedmem_mnt, > + "restrictedmem", O_RDWR, > + &restrictedmem_fops); > + if (IS_ERR(file)) { > + iput(inode); > + kfree(data); > + return ERR_CAST(file); > + } > + > + file->f_flags |= O_LARGEFILE; > + > + mapping = memfd->f_mapping; > + mapping_set_unevictable(mapping); > + mapping_set_gfp_mask(mapping, > + mapping_gfp_mask(mapping) & ~__GFP_MOVABLE); Is this supposed to prevent migration of pages being used for restrictedmem/shmem backend? In my case I've been testing SNP support based on UPM v9, and for large guests (128GB+), if I force 2M THPs via: echo always >/sys/kernel/mm/transparent_hugepage/shmem_enabled it will in some cases trigger the below trace, which suggests that kcompactd is trying to call migrate_folio() on a PFN that was/is still allocated for guest private memory (and so has been removed from directmap as part of shared->private conversion via REG_REGION kvm ioctl, leading to the crash). This trace seems to occur during early OVMF boot while the guest is in the middle of pre-accepting on private memory (no lazy accept in this case). Is this expected behavior? What else needs to be done to ensure migrations aren't attempted in this case? Thanks! 
-Mike # Host logs with debug info for crash during SNP boot ... [ 904.373632] kvm_restricted_mem_get_pfn: GFN: 0x1caced1, PFN: 0x156b7f, page: ea0006b197b0, ref_count: 2 [ 904.373634] kvm_restricted_mem_get_pfn: GFN: 0x1caced2, PFN: 0x156840, page: ea0006b09400, ref_count: 2 [ 904.373637] kvm_restricted_mem_get_pfn: GFN: 0x1caced3, PFN: 0x156841, page: ea0006b09450, ref_count: 2 [ 904.373639] kvm_restricted_mem_get_pfn: GFN: 0x1caced4, PFN: 0x156842, page: ea0006b094a0, ref_count: 2 [ 904.373641] kvm_restricted_mem_get_pfn: GFN: 0x1caced5, PFN: 0x156843, page: ea0006b094f0, ref_count: 2 [ 904.373645] kvm_restricted_mem_get_pfn: GFN: 0x1caced6, PFN: 0x156844, page: ea0006b09540, ref_count: 2 [ 904.373647] kvm_restricted_mem_get_pfn: GFN: 0x1caced7, PFN: 0x156845, page: ea0006b09590, ref_count: 2 [ 904.373649] kvm_restricted_mem_get_pfn: GFN: 0x1caced8, PFN: 0x156846, page: ea0006b095e0, ref_count: 2 [ 904.373652] kvm_restricted_mem_get_pfn: GFN: 0x1caced9, PFN: 0x156847, page: ea0006b09630, ref_count: 2 [ 904.373654] kvm_restricted_mem_get_pfn: GFN: 0x1caceda, PFN: 0x156848, page: ea0006b09680, ref_count: 2 [ 904.373656] kvm_restricted_mem_get_pfn: GFN: 0x1cacedb, PFN: 0x156849, page: ea0006b096d0, ref_count: 2 [ 904.373661] kvm_restricted_mem_get_pfn: GFN: 0x1cacedc, PFN: 0x15684a, page: ea0006b09720, ref_count: 2 [ 904.373663] kvm_restricted_mem_get_pfn: GFN: 0x1cacedd, PFN: 0x15684b, page: ea0006b09770, ref_count: 2 # PFN 0x15684c is allocated for guest private memory, will have been removed from directmap as part of RMP requirements [ 904.373665] kvm_restricted_mem_get_pfn: GFN: 0x1cacede, PFN: 0x15684c, page: ea0006b097c0, ref_count: 2 ... 
# kcompactd crashes trying to copy PFN 0x15684c to a new folio, crashes trying to access PFN via directmap [ 904.470135] Migrating restricted page, SRC pfn: 0x15684c, folio_ref_count: 2, folio_order: 0 [ 904.470154] BUG: unable to handle page fault for address: 88815684c000 [ 904.470314] kvm_restricted_mem_get_pfn: GFN: 0x1cafe00, PFN: 0x19f6d0, page: ea00081d2100, ref_count: 2 [ 904.477828] #PF: supervisor read access in kernel mode [ 904.477831] #PF: error_code(0x) - not-present page [ 904.477833] PGD 6601067 P4D 6601067 PUD 1569ad063 PMD 1569af063 PTE 800ea97b3060 [ 904.508806] Oops: [#1] SMP NOPTI [ 904.512892] CPU: 52 PID: 1563 Comm: kcompactd0 Tainted: GE 6.0.0-rc7-hsnp-v7pfdv9d+ #10 [ 904.523473] Hardware name: AMD Corporation ETHANOL_X/ETHANOL_X, BIOS RXM1006B 08/20/2021 [ 904.532499] RIP: 0010:copy_page+0x7/0x10 [ 904.536877] Code: 00 66 90 48 89 f8 48 89 d1 f3 a4 31 c0 c3 cc cc cc cc 48 89 c8 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc 66 9
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Nov 14, 2022 at 04:16:32PM -0600, Michael Roth wrote: > On Mon, Nov 14, 2022 at 06:28:43PM +0300, Kirill A. Shutemov wrote: > > On Mon, Nov 14, 2022 at 03:02:37PM +0100, Vlastimil Babka wrote: > > > On 11/1/22 16:19, Michael Roth wrote: > > > > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: > > > >> > > > > >> > 1) restoring kernel directmap: > > > >> > > > > >> > Currently SNP (and I believe TDX) need to either split or > > > >> > remove kernel > > > >> > direct mappings for restricted PFNs, since there is no > > > >> > guarantee that > > > >> > other PFNs within a 2MB range won't be used for non-restricted > > > >> > (which will cause an RMP #PF in the case of SNP since the 2MB > > > >> > mapping overlaps with guest-owned pages) > > > >> > > > >> Has the splitting and restoring been a well-discussed direction? I'm > > > >> just curious whether there is other options to solve this issue. > > > > > > > > For SNP it's been discussed for quite some time, and either splitting or > > > > removing private entries from directmap are the well-discussed way I'm > > > > aware of to avoid RMP violations due to some other kernel process using > > > > a 2MB mapping to access shared memory if there are private pages that > > > > happen to be within that range. > > > > > > > > In both cases the issue of how to restore directmap as 2M becomes a > > > > problem. > > > > > > > > I was also under the impression TDX had similar requirements. If so, > > > > do you know what the plan is for handling this for TDX? > > > > > > > > There are also 2 potential alternatives I'm aware of, but these haven't > > > > been discussed in much detail AFAIK: > > > > > > > > a) Ensure confidential guests are backed by 2MB pages. 
shmem has a way > > > > to > > > >request 2MB THP pages, but I'm not sure how reliably we can guarantee > > > >that enough THPs are available, so if we went that route we'd > > > > probably > > > >be better off requiring the use of hugetlbfs as the backing store. > > > > But > > > >obviously that's a bit limiting and it would be nice to have the > > > > option > > > >of using normal pages as well. One nice thing with invalidation > > > >scheme proposed here is that this would "Just Work" if implement > > > >hugetlbfs support, so an admin that doesn't want any directmap > > > >splitting has this option available, otherwise it's done as a > > > >best-effort. > > > > > > > > b) Implement general support for restoring directmap as 2M even when > > > >subpages might be in use by other kernel threads. This would be the > > > >most flexible approach since it requires no special handling during > > > >invalidations, but I think it's only possible if all the CPA > > > >attributes for the 2M range are the same at the time the mapping is > > > >restored/unsplit, so some potential locking issues there and still > > > >chance for splitting directmap over time. > > > > > > I've been hoping that > > > > > > c) using a mechanism such as [1] [2] where the goal is to group together > > > these small allocations that need to increase directmap granularity so > > > maximum number of large mappings are preserved. > > > > As I mentioned in the other thread the restricted memfd can be backed by > > secretmem instead of plain memfd. It already handles directmap with care. > > It looks like it would handle direct unmapping/cleanup nicely, but it > seems to lack fallocate(PUNCH_HOLE) support which we'd probably want to > avoid additional memory requirements. I think once we added that we'd > still end up needing some sort of handling for the invalidations. > > Also, I know Chao has been considering hugetlbfs support, I assume by > leveraging the support that already exists in shmem. 
Ideally SNP would > be able to make use of that support as well, but relying on a separate > backend seems likely to result in more complications getting there > later. > > > > But I don't think it has to be part of initial restricted memfd > > implementation. It is SEV-specific requirement and AMD folks can extend > > implementation as needed later. > > Admittedly the suggested changes to the invalidation mechanism made a > lot more sense to me when I was under the impression that TDX would have > similar requirements and we might end up with a common hook. Since that > doesn't actually seem to be the case, it makes sense to try to do it as > a platform-specific hook for SNP. > > I think, given a memslot, a GFN range, and kvm_restricted_mem_get_pfn(), > we should be able to get the same information needed to figure out whether > the range is backed by huge pages or not. I'll see how that works out > instead. Sounds like a viable solution, the only catch being that kvm_restricted_mem_get_pfn() only lets you check a single page, not a range. But you can still call it many times I think. The invalidation callback will still be needed, it give
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Nov 14, 2022 at 06:28:43PM +0300, Kirill A. Shutemov wrote: > On Mon, Nov 14, 2022 at 03:02:37PM +0100, Vlastimil Babka wrote: > > On 11/1/22 16:19, Michael Roth wrote: > > > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: > > >> > > > >> > 1) restoring kernel directmap: > > >> > > > >> > Currently SNP (and I believe TDX) need to either split or remove > > >> > kernel > > >> > direct mappings for restricted PFNs, since there is no guarantee > > >> > that > > >> > other PFNs within a 2MB range won't be used for non-restricted > > >> > (which will cause an RMP #PF in the case of SNP since the 2MB > > >> > mapping overlaps with guest-owned pages) > > >> > > >> Has the splitting and restoring been a well-discussed direction? I'm > > >> just curious whether there is other options to solve this issue. > > > > > > For SNP it's been discussed for quite some time, and either splitting or > > > removing private entries from directmap are the well-discussed way I'm > > > aware of to avoid RMP violations due to some other kernel process using > > > a 2MB mapping to access shared memory if there are private pages that > > > happen to be within that range. > > > > > > In both cases the issue of how to restore directmap as 2M becomes a > > > problem. > > > > > > I was also under the impression TDX had similar requirements. If so, > > > do you know what the plan is for handling this for TDX? > > > > > > There are also 2 potential alternatives I'm aware of, but these haven't > > > been discussed in much detail AFAIK: > > > > > > a) Ensure confidential guests are backed by 2MB pages. shmem has a way to > > >request 2MB THP pages, but I'm not sure how reliably we can guarantee > > >that enough THPs are available, so if we went that route we'd probably > > >be better off requiring the use of hugetlbfs as the backing store. But > > >obviously that's a bit limiting and it would be nice to have the option > > >of using normal pages as well. 
One nice thing with invalidation > > >scheme proposed here is that this would "Just Work" if implement > > >hugetlbfs support, so an admin that doesn't want any directmap > > >splitting has this option available, otherwise it's done as a > > >best-effort. > > > > > > b) Implement general support for restoring directmap as 2M even when > > >subpages might be in use by other kernel threads. This would be the > > >most flexible approach since it requires no special handling during > > >invalidations, but I think it's only possible if all the CPA > > >attributes for the 2M range are the same at the time the mapping is > > >restored/unsplit, so some potential locking issues there and still > > >chance for splitting directmap over time. > > > > I've been hoping that > > > > c) using a mechanism such as [1] [2] where the goal is to group together > > these small allocations that need to increase directmap granularity so > > maximum number of large mappings are preserved. > > As I mentioned in the other thread the restricted memfd can be backed by > secretmem instead of plain memfd. It already handles directmap with care. It looks like it would handle direct unmapping/cleanup nicely, but it seems to lack fallocate(PUNCH_HOLE) support which we'd probably want to avoid additional memory requirements. I think once we added that we'd still end up needing some sort of handling for the invalidations. Also, I know Chao has been considering hugetlbfs support, I assume by leveraging the support that already exists in shmem. Ideally SNP would be able to make use of that support as well, but relying on a separate backend seems likely to result in more complications getting there later. > > But I don't think it has to be part of initial restricted memfd > implementation. It is SEV-specific requirement and AMD folks can extend > implementation as needed later. 
Admittedly the suggested changes to the invalidation mechanism made a lot more sense to me when I was under the impression that TDX would have similar requirements and we might end up with a common hook. Since that doesn't actually seem to be the case, it makes sense to try to do it as a platform-specific hook for SNP. I think, given a memslot, a GFN range, and kvm_restricted_mem_get_pfn(), we should be able to get the same information needed to figure out whether the range is backed by huge pages or not. I'll see how that works out instead. Thanks, Mike > > -- > Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On 11/1/22 16:19, Michael Roth wrote: > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: >> > >> > 1) restoring kernel directmap: >> > >> > Currently SNP (and I believe TDX) need to either split or remove >> > kernel >> > direct mappings for restricted PFNs, since there is no guarantee that >> > other PFNs within a 2MB range won't be used for non-restricted >> > (which will cause an RMP #PF in the case of SNP since the 2MB >> > mapping overlaps with guest-owned pages) >> >> Has the splitting and restoring been a well-discussed direction? I'm >> just curious whether there is other options to solve this issue. > > For SNP it's been discussed for quite some time, and either splitting or > removing private entries from directmap are the well-discussed way I'm > aware of to avoid RMP violations due to some other kernel process using > a 2MB mapping to access shared memory if there are private pages that > happen to be within that range. > > In both cases the issue of how to restore directmap as 2M becomes a > problem. > > I was also under the impression TDX had similar requirements. If so, > do you know what the plan is for handling this for TDX? > > There are also 2 potential alternatives I'm aware of, but these haven't > been discussed in much detail AFAIK: > > a) Ensure confidential guests are backed by 2MB pages. shmem has a way to >request 2MB THP pages, but I'm not sure how reliably we can guarantee >that enough THPs are available, so if we went that route we'd probably >be better off requiring the use of hugetlbfs as the backing store. But >obviously that's a bit limiting and it would be nice to have the option >of using normal pages as well. One nice thing with invalidation >scheme proposed here is that this would "Just Work" if implement >hugetlbfs support, so an admin that doesn't want any directmap >splitting has this option available, otherwise it's done as a >best-effort. 
> > b) Implement general support for restoring directmap as 2M even when >subpages might be in use by other kernel threads. This would be the >most flexible approach since it requires no special handling during >invalidations, but I think it's only possible if all the CPA >attributes for the 2M range are the same at the time the mapping is >restored/unsplit, so some potential locking issues there and still >chance for splitting directmap over time. I've been hoping that c) using a mechanism such as [1] [2] where the goal is to group together these small allocations that need to increase directmap granularity so maximum number of large mappings are preserved. But I guess that means knowing at allocation time that this will happen. So I've been wondering how this would be possible to employ in the SNP/UPM case? I guess it depends on how we expect the private/shared conversions to happen in practice, and I don't know the details. I can imagine the following complications: - a memfd_restricted region is created such that it's 2MB large/aligned, i.e. like case a) above, we can allocate it normally. Now, what if a 4k page in the middle is to be temporarily converted to shared for some communication between host and guest (can such a thing happen?). With the punch hole approach, I wonder if we end up fragmenting directmap unnecessarily? IIUC the now shared page will become backed by some other page (as the memslot supports both private and shared pages simultaneously). But does it make sense to really split the direct mapping (and e.g. the shmem page?) We could leave the whole 2MB unmapped without splitting if we didn't free the private 4k subpage. - a restricted region is created that's below 2MB. If something like [1] is merged, it could be used for the backing pages to limit directmap fragmentation. But then in case it's eventually fallocated to become larger and gain one or more 2MB aligned ranges, the result is suboptimal.
Unless in that case we migrate the existing pages to a THP-backed shmem, kinda like khugepaged collapses hugepages. But that would have to be coordinated with the guest, maybe not even possible? [1] https://lore.kernel.org/all/20220127085608.306306-1-r...@kernel.org/ [2] https://lwn.net/Articles/894557/ >> >> > >> > Previously we were able to restore 2MB mappings to some degree >> > since both shared/restricted pages were all pinned, so anything >> > backed by a THP (or hugetlb page once that is implemented) at guest >> > teardown could be restored as 2MB direct mapping. >> > >> > Invalidation seems like the most logical time to have this happen, >> >> Currently invalidation only happens at user-initiated fallocate(). It >> does not cover the VM teardown case where the restoring might also be >> expected to be handled. > > Right, I forgot to add that in my proposed changes I added invalidations > for any still-allocated private pages present when the restricted memfd > noti
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Nov 14, 2022 at 03:02:37PM +0100, Vlastimil Babka wrote: > On 11/1/22 16:19, Michael Roth wrote: > > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: > >> > > >> > 1) restoring kernel directmap: > >> > > >> > Currently SNP (and I believe TDX) need to either split or remove > >> > kernel > >> > direct mappings for restricted PFNs, since there is no guarantee > >> > that > >> > other PFNs within a 2MB range won't be used for non-restricted > >> > (which will cause an RMP #PF in the case of SNP since the 2MB > >> > mapping overlaps with guest-owned pages) > >> > >> Has the splitting and restoring been a well-discussed direction? I'm > >> just curious whether there is other options to solve this issue. > > > > For SNP it's been discussed for quite some time, and either splitting or > > removing private entries from directmap are the well-discussed way I'm > > aware of to avoid RMP violations due to some other kernel process using > > a 2MB mapping to access shared memory if there are private pages that > > happen to be within that range. > > > > In both cases the issue of how to restore directmap as 2M becomes a > > problem. > > > > I was also under the impression TDX had similar requirements. If so, > > do you know what the plan is for handling this for TDX? > > > > There are also 2 potential alternatives I'm aware of, but these haven't > > been discussed in much detail AFAIK: > > > > a) Ensure confidential guests are backed by 2MB pages. shmem has a way to > >request 2MB THP pages, but I'm not sure how reliably we can guarantee > >that enough THPs are available, so if we went that route we'd probably > >be better off requiring the use of hugetlbfs as the backing store. But > >obviously that's a bit limiting and it would be nice to have the option > >of using normal pages as well. 
One nice thing with invalidation > >scheme proposed here is that this would "Just Work" if implement > >hugetlbfs support, so an admin that doesn't want any directmap > >splitting has this option available, otherwise it's done as a > >best-effort. > > > > b) Implement general support for restoring directmap as 2M even when > >subpages might be in use by other kernel threads. This would be the > >most flexible approach since it requires no special handling during > >invalidations, but I think it's only possible if all the CPA > >attributes for the 2M range are the same at the time the mapping is > >restored/unsplit, so some potential locking issues there and still > >chance for splitting directmap over time. > > I've been hoping that > > c) using a mechanism such as [1] [2] where the goal is to group together > these small allocations that need to increase directmap granularity so > maximum number of large mappings are preserved. As I mentioned in the other thread, the restricted memfd can be backed by secretmem instead of plain memfd. It already handles directmap with care. But I don't think it has to be part of the initial restricted memfd implementation. It is an SEV-specific requirement and AMD folks can extend the implementation as needed later. -- Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Nov 14, 2022 at 03:02:37PM +0100, Vlastimil Babka wrote: > On 11/1/22 16:19, Michael Roth wrote: > > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: > >> > > >> > 1) restoring kernel directmap: > >> > > >> > Currently SNP (and I believe TDX) need to either split or remove > >> > kernel > >> > direct mappings for restricted PFNs, since there is no guarantee > >> > that > >> > other PFNs within a 2MB range won't be used for non-restricted > >> > (which will cause an RMP #PF in the case of SNP since the 2MB > >> > mapping overlaps with guest-owned pages) > >> > >> Has the splitting and restoring been a well-discussed direction? I'm > >> just curious whether there is other options to solve this issue. > > > > For SNP it's been discussed for quite some time, and either splitting or > > removing private entries from directmap are the well-discussed way I'm > > aware of to avoid RMP violations due to some other kernel process using > > a 2MB mapping to access shared memory if there are private pages that > > happen to be within that range. > > > > In both cases the issue of how to restore directmap as 2M becomes a > > problem. > > > > I was also under the impression TDX had similar requirements. If so, > > do you know what the plan is for handling this for TDX? > > > > There are also 2 potential alternatives I'm aware of, but these haven't > > been discussed in much detail AFAIK: > > > > a) Ensure confidential guests are backed by 2MB pages. shmem has a way to > >request 2MB THP pages, but I'm not sure how reliably we can guarantee > >that enough THPs are available, so if we went that route we'd probably > >be better off requiring the use of hugetlbfs as the backing store. But > >obviously that's a bit limiting and it would be nice to have the option > >of using normal pages as well. 
One nice thing with invalidation > >scheme proposed here is that this would "Just Work" if implement > >hugetlbfs support, so an admin that doesn't want any directmap > >splitting has this option available, otherwise it's done as a > >best-effort. > > > > b) Implement general support for restoring directmap as 2M even when > >subpages might be in use by other kernel threads. This would be the > >most flexible approach since it requires no special handling during > >invalidations, but I think it's only possible if all the CPA > >attributes for the 2M range are the same at the time the mapping is > >restored/unsplit, so some potential locking issues there and still > >chance for splitting directmap over time. > > I've been hoping that > > c) using a mechanism such as [1] [2] where the goal is to group together > these small allocations that need to increase directmap granularity so > maximum number of large mappings are preserved. But I guess that means Thanks for the references. I wasn't aware there was work in this area, this opens up some possibilities on how to approach this. > maximum number of large mappings are preserved. But I guess that means > knowing at allocation time that this will happen. So I've been wondering how > this would be possible to employ in the SNP/UPM case? I guess it depends on > how we expect the private/shared conversions to happen in practice, and I > don't know the details. I can imagine the following complications: > > - a memfd_restricted region is created such that it's 2MB large/aligned, > i.e. like case a) above, we can allocate it normally. Now, what if a 4k page > in the middle is to be temporarily converted to shared for some > communication between host and guest (can such thing happen?). With the > punch hole approach, I wonder if we end up fragmenting directmap > unnecessarily? 
> IIUC the now shared page will become backed by some other Yes, we end up fragmenting in cases where a guest converts a sub-page to a shared page because the fallocate(PUNCH_HOLE) gets forwarded through to shmem which will then split it. At that point the subpage might get used elsewhere so we no longer have the ability to restore as 2M after invalidation/shutdown. We could potentially just intercept those fallocate()'s and only issue the invalidation once all the subpages have been PUNCH_HOLE'd. We'd still need to ensure KVM MMU invalidations happen immediately though, but since we rely on a KVM ioctl to do the conversion in advance, we can rely on the KVM MMU invalidation that happens at that point and simply make fallocate(PUNCH_HOLE) fail if someone attempts it on a page that hasn't been converted to shared yet. Otherwise we could end up splitting a good chunk of pages, depending on how the guest allocates shared pages, but I'm slightly less concerned about that seeing as there are some general solutions to directmap fragmentation being considered. I need to think more about how these hooks would tie into that though. And since we'd only really be able to avoid unrecoverable splits if the restrictedmem is
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Wed, Nov 02, 2022 at 05:07:00PM -0500, Michael Roth wrote: > On Thu, Nov 03, 2022 at 12:14:04AM +0300, Kirill A. Shutemov wrote: > > On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote: > > > > > > In v8 there was some discussion about potentially passing the page/folio > > > and order as part of the invalidation callback, I ended up needing > > > something similar for SEV-SNP, and think it might make sense for other > > > platforms. This main reasoning is: > > > > > > 1) restoring kernel directmap: > > > > > > Currently SNP (and I believe TDX) need to either split or remove > > > kernel > > > direct mappings for restricted PFNs, since there is no guarantee that > > > other PFNs within a 2MB range won't be used for non-restricted > > > (which will cause an RMP #PF in the case of SNP since the 2MB > > > mapping overlaps with guest-owned pages) > > > > That's news to me. Where the restriction for SNP comes from? > > Sorry, missed your first question. > > For SNP at least, the restriction is documented in APM Volume 2, Section > 15.36.10, First row of Table 15-36 (preceding paragraph has more > context). I forgot to mention this is only pertaining to writes by the > host to 2MB pages that contain guest-owned subpages, for reads it's > not an issue, but I think the implementation requirements end up being > the same either way: > > https://www.amd.com/system/files/TechDocs/24593.pdf Looks like you wanted restricted memfd to be backed by secretmem rather than a normal memfd. It would help preserve the directmap. -- Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Thu, Nov 03, 2022 at 12:14:04AM +0300, Kirill A. Shutemov wrote: > On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote: > > > > In v8 there was some discussion about potentially passing the page/folio > > and order as part of the invalidation callback, I ended up needing > > something similar for SEV-SNP, and think it might make sense for other > > platforms. This main reasoning is: > > > > 1) restoring kernel directmap: > > > > Currently SNP (and I believe TDX) need to either split or remove kernel > > direct mappings for restricted PFNs, since there is no guarantee that > > other PFNs within a 2MB range won't be used for non-restricted > > (which will cause an RMP #PF in the case of SNP since the 2MB > > mapping overlaps with guest-owned pages) > > That's news to me. Where the restriction for SNP comes from? Sorry, missed your first question. For SNP at least, the restriction is documented in APM Volume 2, Section 15.36.10, First row of Table 15-36 (preceding paragraph has more context). I forgot to mention this is only pertaining to writes by the host to 2MB pages that contain guest-owned subpages, for reads it's not an issue, but I think the implementation requirements end up being the same either way: https://www.amd.com/system/files/TechDocs/24593.pdf -Mike > That's news to me. Where the restriction for SNP comes from? There's no > such limitation on TDX side AFAIK? > > Could you point me to relevant documentation if there's any? > > -- > Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Thu, Nov 03, 2022 at 12:14:04AM +0300, Kirill A. Shutemov wrote: > On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote: > > > > In v8 there was some discussion about potentially passing the page/folio > > and order as part of the invalidation callback, I ended up needing > > something similar for SEV-SNP, and think it might make sense for other > > platforms. This main reasoning is: > > > > 1) restoring kernel directmap: > > > > Currently SNP (and I believe TDX) need to either split or remove kernel > > direct mappings for restricted PFNs, since there is no guarantee that > > other PFNs within a 2MB range won't be used for non-restricted > > (which will cause an RMP #PF in the case of SNP since the 2MB > > mapping overlaps with guest-owned pages) > > That's news to me. Where the restriction for SNP comes from? There's no > such limitation on TDX side AFAIK? > > Could you point me to relevant documentation if there's any? I could be mistaken, I haven't looked into the specific documentation and was going off of this discussion from a ways back: https://lore.kernel.org/all/ywb8wg6ravbs1...@google.com/ Sean, is my read of that correct? Do you happen to know where there's some documentation on that for the TDX side? Thanks, Mike > > -- > Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Wed, Nov 02, 2022 at 10:53:25PM +0800, Chao Peng wrote: > On Tue, Nov 01, 2022 at 02:30:58PM -0500, Michael Roth wrote: > > On Tue, Nov 01, 2022 at 10:19:44AM -0500, Michael Roth wrote: > > > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: > > > > On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote: > > > > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: > > > > > > > > > > > > > > 3) Potentially useful for hugetlbfs support: > > > > > > > > > > One issue with hugetlbfs is that we don't support splitting the > > > > > hugepage in such cases, which was a big obstacle prior to UPM. > > > > > Now > > > > > however, we may have the option of doing "lazy" invalidations > > > > > where > > > > > fallocate(PUNCH_HOLE, ...) won't free a shmem-allocate page > > > > > unless > > > > > all the subpages within the 2M range are either hole-punched, or > > > > > the > > > > > guest is shut down, so in that way we never have to split it. > > > > > Sean > > > > > was pondering something similar in another thread: > > > > > > > > > > https://lore.kernel.org/linux-mm/YyGLXXkFCmxBfu5U@google.com/ > > > > > > > > > > Issuing invalidations with folio-granularity ties in fairly well > > > > > with this sort of approach if we end up going that route.
> > > > > > > > There is semantics difference between the current one and the proposed > > > > one: The invalidation range is exactly what userspace passed down to the > > > > kernel (being fallocated) while the proposed one will be subset of that > > > > (if userspace-provided addr/size is not aligned to power of two), I'm > > > > not quite confident this difference has no side effect. > > > > > > In theory userspace should not be allocating/hole-punching restricted > > > pages for GPA ranges that are already mapped as private in the xarray, > > > and KVM could potentially fail such requests (though it doesn't currently). > > > > > > But if we somehow enforced that, then we could rely on > > > KVM_MEMORY_ENCRYPT_REG_REGION to handle all the MMU invalidation stuff, > > > which would free up the restricted fd invalidation callbacks to be used > > > purely to handle doing things like RMP/directmap fixups prior to returning > > > restricted pages back to the host. So that was sort of my thinking why the > > > new semantics would still cover all the necessary cases. > > Sorry, this explanation is if we rely on userspace to fallocate() on 2MB > > boundaries, and ignore any non-aligned requests in the kernel. But > > that's not how I actually ended up implementing things, so I'm not sure > > why I answered that way... > > > > In my implementation we actually do issue invalidations for fallocate() > > even for non-2M-aligned GPA/offset ranges.
For instance (assuming > > restricted FD offset 0 corresponds to GPA 0), an fallocate() on GPA > > range 0x1000-0x402000 would result in the following invalidations being > > issued if everything was backed by a 2MB page: > > > > invalidate GPA: 0x001000-0x200000, Page: pfn_to_page(I), order:9 > > invalidate GPA: 0x200000-0x400000, Page: pfn_to_page(J), order:9 > > invalidate GPA: 0x400000-0x402000, Page: pfn_to_page(K), order:9 > > Only on seeing this do I understand what you are actually going to propose ;) > > So the memory range (start/end) will still be there and cover exactly > what it should from userspace's point of view; the page+order (or just > folio) is really just a _hint_ for the invalidation callbacks. Looks > ugly though. Yes that's accurate: callbacks still need to handle partial ranges, so it's more of a hint/optimization for cases where callbacks can benefit from knowing the entire backing hugepage is being invalidated/freed. > > In v9 we use an invalidate_start/invalidate_end pair to solve a race contention > issue (https://lore.kernel.org/kvm/Y1LOe4JvnTbFNs4u@google.com/). > To work with this, I believe we only need to pass this hint info for > invalidate_start() since at invalidate_end() time, the page has > already been discarded. Ok, yah, that's the approach I'm looking at for v9: pass the page/order for invalidate_start, but keep invalidate_end as-is. > > Another wor
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote: > > In v8 there was some discussion about potentially passing the page/folio > and order as part of the invalidation callback, I ended up needing > something similar for SEV-SNP, and think it might make sense for other > platforms. This main reasoning is: > > 1) restoring kernel directmap: > > Currently SNP (and I believe TDX) need to either split or remove kernel > direct mappings for restricted PFNs, since there is no guarantee that > other PFNs within a 2MB range won't be used for non-restricted > (which will cause an RMP #PF in the case of SNP since the 2MB > mapping overlaps with guest-owned pages) That's news to me. Where the restriction for SNP comes from? There's no such limitation on TDX side AFAIK? Could you point me to relevant documentation if there's any? -- Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 01, 2022 at 02:30:58PM -0500, Michael Roth wrote: > On Tue, Nov 01, 2022 at 10:19:44AM -0500, Michael Roth wrote: > > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: > > > On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote: > > > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: > > > > > > > > > > > 3) Potentially useful for hugetlbfs support: > > > > > > > > One issue with hugetlbfs is that we don't support splitting the > > > > hugepage in such cases, which was a big obstacle prior to UPM. Now > > > > however, we may have the option of doing "lazy" invalidations where > > > > fallocate(PUNCH_HOLE, ...) won't free a shmem-allocate page unless > > > > all the subpages within the 2M range are either hole-punched, or > > > > the > > > > guest is shut down, so in that way we never have to split it. Sean > > > > was pondering something similar in another thread: > > > > > > > > https://lore.kernel.org/linux-mm/YyGLXXkFCmxBfu5U@google.com/ > > > > > > > > Issuing invalidations with folio-granularity ties in fairly well > > > > with this sort of approach if we end up going that route. > > > > > > There is semantics difference between the current one and the proposed > > > one: The invalidation range is exactly what userspace passed down to the > > > kernel (being fallocated) while the proposed one will be subset of that > > > (if userspace-provided addr/size is not aligned to power of two), I'm > > > not quite confident this difference has no side effect.
> > > > In theory userspace should not be allocating/hole-punching restricted > > pages for GPA ranges that are already mapped as private in the xarray, > > and KVM could potentially fail such requests (though it doesn't currently). > > > > But if we somehow enforced that, then we could rely on > > KVM_MEMORY_ENCRYPT_REG_REGION to handle all the MMU invalidation stuff, > > which would free up the restricted fd invalidation callbacks to be used > > purely to handle doing things like RMP/directmap fixups prior to returning > > restricted pages back to the host. So that was sort of my thinking why the > > new semantics would still cover all the necessary cases. > > Sorry, this explanation is if we rely on userspace to fallocate() on 2MB > boundaries, and ignore any non-aligned requests in the kernel. But > that's not how I actually ended up implementing things, so I'm not sure > why I answered that way... > > In my implementation we actually do issue invalidations for fallocate() > even for non-2M-aligned GPA/offset ranges. For instance (assuming > restricted FD offset 0 corresponds to GPA 0), an fallocate() on GPA > range 0x1000-0x402000 would result in the following invalidations being > issued if everything was backed by a 2MB page: > > invalidate GPA: 0x001000-0x200000, Page: pfn_to_page(I), order:9 > invalidate GPA: 0x200000-0x400000, Page: pfn_to_page(J), order:9 > invalidate GPA: 0x400000-0x402000, Page: pfn_to_page(K), order:9 Only on seeing this do I understand what you are actually going to propose ;) So the memory range (start/end) will still be there and cover exactly what it should from userspace's point of view; the page+order (or just folio) is really just a _hint_ for the invalidation callbacks. Looks ugly though. In v9 we use an invalidate_start/invalidate_end pair to solve a race contention issue (https://lore.kernel.org/kvm/Y1LOe4JvnTbFNs4u@google.com/).
To work with this, I believe we only need to pass this hint info for invalidate_start() since at invalidate_end() time, the page has already been discarded. Another thing worth mentioning is that invalidate_start/end is not just invoked for hole punching, but also for allocation (e.g. default fallocate), while for allocation we can get the page only at invalidate_end() time. But AFAICS, the invalidate() called for fallocate(allocation) is because previously we relied on the existence of memory in the backing store to tell whether a page is private, and we needed to notify KVM that the page is being converted from shared to private; but that is not true for the current code, and fallocate() is also not mandatory since KVM can call restrictedmem_get_page() to allocate dynamically, so I think we can remove the invalidation path for fallocate(allocation). > > So you still cover the same range, but the arch/platform callbacks can > then, as a best effort, do things like restore 2M directmap if they see > that the backing page is 2MB+ and the GPA range covers the entire range. > If the GPA doesn't cover the whole range, or the backing page is > o
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 01, 2022 at 10:19:44AM -0500, Michael Roth wrote: > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: > > On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote: > > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote: > > > > > > > > 3) Potentially useful for hugetlbfs support: > > > > > > One issue with hugetlbfs is that we don't support splitting the > > > hugepage in such cases, which was a big obstacle prior to UPM. Now > > > however, we may have the option of doing "lazy" invalidations where > > > fallocate(PUNCH_HOLE, ...) won't free a shmem-allocate page unless > > > all the subpages within the 2M range are either hole-punched, or the > > > guest is shut down, so in that way we never have to split it. Sean > > > was pondering something similar in another thread: > > > > > > https://lore.kernel.org/linux-mm/YyGLXXkFCmxBfu5U@google.com/ > > > > > > Issuing invalidations with folio-granularity ties in fairly well > > > with this sort of approach if we end up going that route. > > > > There is semantics difference between the current one and the proposed > > one: The invalidation range is exactly what userspace passed down to the > > kernel (being fallocated) while the proposed one will be subset of that > > (if userspace-provided addr/size is not aligned to power of two), I'm > > not quite confident this difference has no side effect. > > In theory userspace should not be allocating/hole-punching restricted > pages for GPA ranges that are already mapped as private in the xarray, > and KVM could potentially fail such requests (though it doesn't currently).
> > But if we somehow enforced that, then we could rely on > KVM_MEMORY_ENCRYPT_REG_REGION to handle all the MMU invalidation stuff, > which would free up the restricted fd invalidation callbacks to be used > purely to handle doing things like RMP/directmap fixups prior to returning > restricted pages back to the host. So that was sort of my thinking why the > new semantics would still cover all the necessary cases. Sorry, this explanation is if we rely on userspace to fallocate() on 2MB boundaries, and ignore any non-aligned requests in the kernel. But that's not how I actually ended up implementing things, so I'm not sure why I answered that way... In my implementation we actually do issue invalidations for fallocate() even for non-2M-aligned GPA/offset ranges. For instance (assuming restricted FD offset 0 corresponds to GPA 0), an fallocate() on GPA range 0x1000-0x402000 would result in the following invalidations being issued if everything was backed by a 2MB page: invalidate GPA: 0x001000-0x200000, Page: pfn_to_page(I), order:9 invalidate GPA: 0x200000-0x400000, Page: pfn_to_page(J), order:9 invalidate GPA: 0x400000-0x402000, Page: pfn_to_page(K), order:9 So you still cover the same range, but the arch/platform callbacks can then, as a best effort, do things like restore 2M directmap if they see that the backing page is 2MB+ and the GPA range covers the entire range. If the GPA doesn't cover the whole range, or the backing page is order:0, then in that case we are still forced to leave the directmap split. But with that in place we can then improve on that by allowing for the use of hugetlbfs. We'd still be somewhat reliant on userspace to issue fallocate()'s on 2M-aligned boundaries to some degree (guest teardown invalidations could be issued as 2M-aligned, which would be the bulk of the pages in most cases, but for discarding pages after private->shared conversion we could still get fragmentation).
This could maybe be addressed by keeping track of those partial/non-2M-aligned fallocate() requests and then issuing them as a batched 2M invalidation once all the subpages have been fallocate(HOLE_PUNCH)'d. We'd need to enforce that fallocate(PUNCH_HOLE) is preceded by KVM_MEMORY_ENCRYPT_UNREG_REGION to make sure MMU invalidations happen though. Not sure on these potential follow-ups, but they all at least seem compatible with the proposed invalidation scheme. -Mike > > -Mike > > > > > > > > > I need to rework things for v9, and we'll probably want to use struct > > > folio instead of struct page now, but as a proof-of-concept of sorts this > > > is what I'd added on top of v8 of your patchset to implement 1) and 2): > > > > > > https://github.com/mdroth/linux/commit/127e5ea477c7bd5e4107fd44a04b9dc9e9b1af8b
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote: > On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote: > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > > From: "Kirill A. Shutemov"
> > >
> > > +struct restrictedmem_data {
> > > +        struct mutex lock;
> > > +        struct file *memfd;
> > > +        struct list_head notifiers;
> > > +};
> > > +
> > > +static void restrictedmem_notifier_invalidate(struct restrictedmem_data *data,
> > > +                        pgoff_t start, pgoff_t end, bool notify_start)
> > > +{
> > > +        struct restrictedmem_notifier *notifier;
> > > +
> > > +        mutex_lock(&data->lock);
> > > +        list_for_each_entry(notifier, &data->notifiers, list) {
> > > +                if (notify_start)
> > > +                        notifier->ops->invalidate_start(notifier, start, end);
> > > +                else
> > > +                        notifier->ops->invalidate_end(notifier, start, end);
> > > +        }
> > > +        mutex_unlock(&data->lock);
> > > +}
> > > +
> > > +static int restrictedmem_release(struct inode *inode, struct file *file)
> > > +{
> > > +        struct restrictedmem_data *data = inode->i_mapping->private_data;
> > > +
> > > +        fput(data->memfd);
> > > +        kfree(data);
> > > +        return 0;
> > > +}
> > > +
> > > +static long restrictedmem_fallocate(struct file *file, int mode,
> > > +                        loff_t offset, loff_t len)
> > > +{
> > > +        struct restrictedmem_data *data = file->f_mapping->private_data;
> > > +        struct file *memfd = data->memfd;
> > > +        int ret;
> > > +
> > > +        if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > +                if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > +                        return -EINVAL;
> > > +        }
> > > +
> > > +        restrictedmem_notifier_invalidate(data, offset, offset + len, true);
> > > +        ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > +        restrictedmem_notifier_invalidate(data, offset, offset + len, false);
> > > +        return ret;
> > > +}
> > In v8 there was some discussion about potentially passing the page/folio > > and order as part of the invalidation callback, I ended up needing > > something similar for SEV-SNP, and
think it might make sense for other > > platforms. This main reasoning is: > > In that context what we talked on is the inaccessible_get_pfn(), I was > not aware there is need for invalidation callback as well. Right, your understanding is correct. I think Sean had only mentioned in passing that it was something we could potentially do, and in the cases I was looking at it ended up being useful. I only mentioned it so I don't seem like I'm too far out in the weeds here :) > > > > > 1) restoring kernel directmap: > > > > Currently SNP (and I believe TDX) need to either split or remove kernel > > direct mappings for restricted PFNs, since there is no guarantee that > > other PFNs within a 2MB range won't be used for non-restricted > > (which will cause an RMP #PF in the case of SNP since the 2MB > > mapping overlaps with guest-owned pages) > > Has the splitting and restoring been a well-discussed direction? I'm > just curious whether there is other options to solve this issue. For SNP it's been discussed for quite some time, and either splitting or removing private entries from directmap are the well-discussed way I'm aware of to avoid RMP violations due to some other kernel process using a 2MB mapping to access shared memory if there are private pages that happen to be within that range. In both cases the issue of how to restore directmap as 2M becomes a problem. I was also under the impression TDX had similar requirements. If so, do you know what the plan is for handling this for TDX? There are also 2 potential alternatives I'm aware of, but these haven't been discussed in much detail AFAIK: a) Ensure confidential guests are backed by 2MB pages. shmem has a way to request 2MB THP pages, but I'm not sure how reliably we can guarantee that enough THPs are available, so if we went that route we'd probably be better off requiring the use of hugetlbfs as the backing store. 
But obviously that's a bit limiting and it would be nice to have the option of using normal pages as well. One nice thing with invalidation scheme proposed here is that this would "Just Work" if implement hugetlbfs support, so an admin that doesn't want any directmap splitting has this option available, otherwise it's done as a best-effort. b) Implement general support for restoring directmap as 2M even when subpages might be in use by other kernel threads. This would be the most flexible approach since it requires no special handling during invalidations, but I think it's only possible if all the CPA attributes for the 2M range are the same at the time the mapping is restored/unsplit, so some potential locking issues there and still chance for splitting directmap over time. > > > > > Previously we were able to restore 2MB mappings to some degree > > since both sh
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote:
> On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > From: "Kirill A. Shutemov"
> >
> > [...]
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> From: "Kirill A. Shutemov"
>
> [...]
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Wed, Oct 26, 2022 at 10:31:45AM -0700, Isaku Yamahata wrote:
> On Tue, Oct 25, 2022 at 11:13:37PM +0800,
> Chao Peng wrote:
>
> > +int restrictedmem_get_page(struct file *file, pgoff_t offset,
> > +			   struct page **pagep, int *order)
> > +{
> > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > +	struct file *memfd = data->memfd;
> > +	struct page *page;
> > +	int ret;
> > +
> > +	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
>
> shmem_getpage() was removed.
> https://lkml.kernel.org/r/20220902194653.1739778-34-wi...@infradead.org

Thanks for pointing this out. My current base (kvm/queue) has not
included this change yet, so it still uses shmem_getpage().

Chao

>
> I needed the following fix to compile.
>
> thanks,
>
> diff --git a/mm/restrictedmem.c b/mm/restrictedmem.c
> index e5bf8907e0f8..4694dd5609d6 100644
> --- a/mm/restrictedmem.c
> +++ b/mm/restrictedmem.c
> @@ -231,13 +231,15 @@ int restrictedmem_get_page(struct file *file, pgoff_t offset,
>  {
>  	struct restrictedmem_data *data = file->f_mapping->private_data;
>  	struct file *memfd = data->memfd;
> +	struct folio *folio = NULL;
>  	struct page *page;
>  	int ret;
>
> -	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
> +	ret = shmem_get_folio(file_inode(memfd), offset, &folio, SGP_WRITE);
>  	if (ret)
>  		return ret;
>
> +	page = folio_file_page(folio, offset);
>  	*pagep = page;
>  	if (order)
>  		*order = thp_order(compound_head(page));
> --
> Isaku Yamahata
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
Hi,

On Tue, Oct 25, 2022 at 4:18 PM Chao Peng wrote:
>
> From: "Kirill A. Shutemov"
>
> [...]
>
> Signed-off-by: Kirill A. Shutemov
> Signed-off-by: Chao Peng
> ---

Reviewed-by: Fuad Tabba

And I'm working on porting to arm64 and testing V9.

Cheers,
/fuad
Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> +int restrictedmem_get_page(struct file *file, pgoff_t offset,
> +			   struct page **pagep, int *order)
> +{
> +	struct restrictedmem_data *data = file->f_mapping->private_data;
> +	struct file *memfd = data->memfd;
> +	struct page *page;
> +	int ret;
> +
> +	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);

shmem_getpage() was removed.
https://lkml.kernel.org/r/20220902194653.1739778-34-wi...@infradead.org

I needed the following fix to compile.

thanks,

diff --git a/mm/restrictedmem.c b/mm/restrictedmem.c
index e5bf8907e0f8..4694dd5609d6 100644
--- a/mm/restrictedmem.c
+++ b/mm/restrictedmem.c
@@ -231,13 +231,15 @@ int restrictedmem_get_page(struct file *file, pgoff_t offset,
 {
 	struct restrictedmem_data *data = file->f_mapping->private_data;
 	struct file *memfd = data->memfd;
+	struct folio *folio = NULL;
 	struct page *page;
 	int ret;

-	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
+	ret = shmem_get_folio(file_inode(memfd), offset, &folio, SGP_WRITE);
 	if (ret)
 		return ret;

+	page = folio_file_page(folio, offset);
 	*pagep = page;
 	if (order)
 		*order = thp_order(compound_head(page));
--
Isaku Yamahata
[PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
From: "Kirill A. Shutemov"

Introduce the 'memfd_restricted' system call with the ability to create
memory areas that are restricted from userspace access through ordinary
MMU operations (e.g. read/write/mmap). The memory content is expected to
be used through a new in-kernel interface by a third kernel module.

memfd_restricted() is useful for scenarios where a file descriptor (fd)
can be used as an interface into mm, but userspace's ability on the fd
needs to be restricted. Initially it is designed to provide protections
for KVM encrypted guest memory.

Normally KVM uses memfd memory by mmapping the memfd into KVM userspace
(e.g. QEMU) and then using the mmapped virtual address to set up the
mapping in the KVM secondary page table (e.g. EPT). With confidential
computing technologies like Intel TDX, the memfd memory may be encrypted
with a special key for a special software domain (e.g. a KVM guest) and
is not expected to be directly accessed by userspace. Precisely,
userspace access to such encrypted memory may lead to a host crash, so
it should be prevented.

memfd_restricted() provides the semantics required for KVM guest
encrypted memory support: an fd created with memfd_restricted() is going
to be used as the source of guest memory in a confidential computing
environment, and KVM can directly interact with core-mm without the need
to expose the memory content to KVM userspace.

KVM userspace is still in charge of the lifecycle of the fd. It should
pass the created fd to KVM. KVM uses the new restrictedmem_get_page() to
obtain the physical memory page and then uses it to populate the KVM
secondary page table entries.

The userspace restricted memfd can be fallocate-ed or hole-punched from
userspace. When these operations happen, KVM can get notified through
restrictedmem_notifier; it then gets a chance to remove any mapped
entries of the range in the secondary page tables.

memfd_restricted() itself is implemented as a shim layer on top of real
memory file systems (currently tmpfs). Pages in restrictedmem are marked
as unmovable and unevictable; this is required for the current
confidential usage, but in the future this might be changed.

By default memfd_restricted() prevents userspace read, write and mmap.
By defining new bits in the 'flags', it can be extended to support other
restricted semantics in the future.

The system call is currently wired up for the x86 arch.

Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
---
 arch/x86/entry/syscalls/syscall_32.tbl |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl |   1 +
 include/linux/restrictedmem.h          |  62 ++
 include/linux/syscalls.h               |   1 +
 include/uapi/asm-generic/unistd.h      |   5 +-
 include/uapi/linux/magic.h             |   1 +
 kernel/sys_ni.c                        |   3 +
 mm/Kconfig                             |   4 +
 mm/Makefile                            |   1 +
 mm/restrictedmem.c                     | 250 +
 10 files changed, 328 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/restrictedmem.h
 create mode 100644 mm/restrictedmem.c

diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 320480a8db4f..dc70ba90247e 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -455,3 +455,4 @@
 448	i386	process_mrelease	sys_process_mrelease
 449	i386	futex_waitv		sys_futex_waitv
 450	i386	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	i386	memfd_restricted	sys_memfd_restricted
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..06516abc8318 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	memfd_restricted	sys_memfd_restricted

 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/restrictedmem.h b/include/linux/restrictedmem.h
new file mode 100644
index ..9c37c3ea3180
--- /dev/null
+++ b/include/linux/restrictedmem.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_RESTRICTEDMEM_H
+
+#include
+#include
+#include
+
+struct restrictedmem_notifier;
+
+struct restrictedmem_notifier_ops {
+	void (*invalidate_start)(struct restrictedmem_notifier *notifier,
+				 pgoff_t start, pgoff_t end);
+	void (*invalidate_end)(struct restrictedmem_notifier *notifier,
+			       pgoff_t start, pgoff_t end);
+};
+
+struct restrictedmem_notifier {
+	struct list_head list;
+	const struct restrictedmem_notifier_ops *ops;
+};
+
+#ifdef CONFIG_RESTRICTEDMEM
+
+vo