On Sat, Feb 14, 2015 at 12:20 AM, Tamas K Lengyel <tamas.leng...@zentific.com> wrote:
> On Fri, Feb 13, 2015 at 10:23 PM, Andrew Cooper
> <andrew.coop...@citrix.com> wrote:
>> On 13/02/15 16:33, Tamas K Lengyel wrote:
>>> The memop handler function for paging/sharing responsible for calling XSM
>>> doesn't really have anything to do with vm_event, thus in this patch we
>>> relocate it into mem_paging_memop and mem_sharing_memop. This has already
>>> been the approach in mem_access_memop, so in this patch we just make it
>>> consistent.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas.leng...@zentific.com>
>>> ---
>>>  xen/arch/x86/mm/mem_paging.c      | 36 ++++++++++++++-----
>>>  xen/arch/x86/mm/mem_sharing.c     | 76 ++++++++++++++++++++++++++-------------
>>>  xen/arch/x86/x86_64/compat/mm.c   | 28 +++------------
>>>  xen/arch/x86/x86_64/mm.c          | 26 +++-----------
>>>  xen/common/vm_event.c             | 43 ----------------------
>>>  xen/include/asm-x86/mem_paging.h  |  7 +++-
>>>  xen/include/asm-x86/mem_sharing.h |  4 +--
>>>  xen/include/xen/vm_event.h        |  1 -
>>>  8 files changed, 97 insertions(+), 124 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
>>> index e63d8c1..4aee6b7 100644
>>> --- a/xen/arch/x86/mm/mem_paging.c
>>> +++ b/xen/arch/x86/mm/mem_paging.c
>>> @@ -21,28 +21,45 @@
>>>   */
>>>
>>>
>>> +#include <xen/guest_access.h>
>>>  #include <asm/p2m.h>
>>> -#include <xen/vm_event.h>
>>> +#include <xsm/xsm.h>
>>
>> Order of includes.
>>
>>>
>>> -
>>> -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>>> +int mem_paging_memop(unsigned long cmd,
>>
>> I don't believe cmd is a useful parameter to pass. You know that its
>> value is XENMEM_paging_op by virtue of being in this function.
>>
>>> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
>>>  {
>>> -    int rc = -ENODEV;
>>> +    int rc;
>>> +    xen_mem_paging_op_t mpo;
>>> +    struct domain *d;
>>> +
>>> +    rc = -EFAULT;
>>> +    if ( copy_from_guest(&mpo, arg, 1) )
>>> +        return rc;
>>> +
>>> +    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
>>> +    if ( rc )
>>> +        return rc;
>>> +
>>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
>>> +    if ( rc )
>>> +        return rc;
>>> +
>>> +    rc = -ENODEV;
>>>      if ( unlikely(!d->vm_event->paging.ring_page) )
>>>          return rc;
>>>
>>> -    switch( mpo->op )
>>> +    switch( mpo.op )
>>>      {
>>>      case XENMEM_paging_op_nominate:
>>> -        rc = p2m_mem_paging_nominate(d, mpo->gfn);
>>> +        rc = p2m_mem_paging_nominate(d, mpo.gfn);
>>>          break;
>>>
>>>      case XENMEM_paging_op_evict:
>>> -        rc = p2m_mem_paging_evict(d, mpo->gfn);
>>> +        rc = p2m_mem_paging_evict(d, mpo.gfn);
>>>          break;
>>>
>>>      case XENMEM_paging_op_prep:
>>> -        rc = p2m_mem_paging_prep(d, mpo->gfn, mpo->buffer);
>>> +        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
>>>          break;
>>>
>>>      default:
>>> @@ -50,6 +67,9 @@ int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>>>          break;
>>>      }
>>>
>>> +    if ( !rc && __copy_to_guest(arg, &mpo, 1) )
>>> +        rc = -EFAULT;
>>
>> Do any of the paging ops need to be copied back? Nothing appears to
>> write into mpo in this function. (The original code looks to be overly
>> pessimistic).
>
> I'm not sure if any of these actually copy back - I just tried to keep
> as much of the existing flow intact as possible while sanitizing the
> vm_event side of things. That is not to say mem_sharing and mem_paging
> couldn't use more scrutiny and optimizations, it's just a bit out of
> the scope of what this series is about.
>
>>
>>> +
>>>      return rc;
>>>  }
>>>
>>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
>>> index 4959407..612ed89 100644
>>> --- a/xen/arch/x86/mm/mem_sharing.c
>>> +++ b/xen/arch/x86/mm/mem_sharing.c
>>> @@ -28,6 +28,7 @@
>>>  #include <xen/grant_table.h>
>>>  #include <xen/sched.h>
>>>  #include <xen/rcupdate.h>
>>> +#include <xen/guest_access.h>
>>>  #include <xen/vm_event.h>
>>>  #include <asm/page.h>
>>>  #include <asm/string.h>
>>> @@ -1293,30 +1294,52 @@ int relinquish_shared_pages(struct domain *d)
>>>      return rc;
>>>  }
>>>
>>> -int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>>> +int mem_sharing_memop(unsigned long cmd,
>>> +                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>>>  {
>>> -    int rc = 0;
>>> +    int rc;
>>> +    xen_mem_sharing_op_t mso;
>>> +    struct domain *d;
>>> +
>>> +    rc = -EFAULT;
>>> +    if ( copy_from_guest(&mso, arg, 1) )
>>> +        return rc;
>>> +
>>> +    if ( mso.op == XENMEM_sharing_op_audit )
>>> +        return mem_sharing_audit();
>>> +
>>> +    rc = rcu_lock_live_remote_domain_by_id(mso.domain, &d);
>>> +    if ( rc )
>>> +        return rc;
>>>
>>>      /* Only HAP is supported */
>>>      if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
>>>          return -ENODEV;
>>>
>>> -    switch(mec->op)
>>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
>>> +    if ( rc )
>>> +        return rc;
>>> +
>>> +    rc = -ENODEV;
>>> +    if ( unlikely(!d->vm_event->share.ring_page) )
>>> +        return rc;
>>> +
>>> +    switch(mso.op)
>>
>> Style ( spaces )
>
> Ack.
>
>>
>>> @@ -1465,6 +1488,9 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>>>          break;
>>>      }
>>>
>>> +    if ( !rc && __copy_to_guest(arg, &mso, 1) )
>>> +        return -EFAULT;
>>
>> Only certain subops need to copy information back. It is common to have
>> a function-level bool_t copyback which relevant subops sets.
>
> If it's not a critical fix I would rather just keep it as it was in
> this series. It could be part of another cleanup series for mem_paging
> and mem_sharing.
>
> Thanks,
> Tamas
Nevermind, it's quite obvious that only XENMEM_paging_op_prep copies
stuff into the buffer so it's a no-brainer. I'll add the fix.

Thanks,
Tamas

>
>>
>> ~Andrew
>>
>>> +
>>>      return rc;
>>>  }
>>
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
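
For reference, the function-level copyback flag Andrew describes would look
roughly like the sketch below. This is an illustration only, not code from
the series: the xen_mem_example_op_t type, the XENMEM_example_op_* subops and
the do_query()/do_poke() helpers are invented for the example; only the real
Xen guest-access primitives (copy_from_guest, __copy_to_guest,
XEN_GUEST_HANDLE_PARAM) are assumed.

    /*
     * Illustrative sketch: a single function-level flag, set only by the
     * subops that write into the caller-visible structure, gates the one
     * copy back to guest memory at the end.
     */
    int example_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_example_op_t) arg)
    {
        xen_mem_example_op_t meo;
        bool_t copyback = 0;     /* set by subops that modify meo */
        int rc;

        if ( copy_from_guest(&meo, arg, 1) )
            return -EFAULT;

        switch ( meo.op )
        {
        case XENMEM_example_op_query:    /* returns data to the caller */
            rc = do_query(meo.gfn, &meo.result);
            copyback = 1;
            break;

        case XENMEM_example_op_poke:     /* no output, nothing to copy back */
            rc = do_poke(meo.gfn);
            break;

        default:
            rc = -ENOSYS;
            break;
        }

        if ( !rc && copyback && __copy_to_guest(arg, &meo, 1) )
            rc = -EFAULT;

        return rc;
    }

With this shape the subops that produce no output (like the paging nominate
and evict cases discussed above) skip the guest write entirely, while the ones
that do fill in the structure opt in by setting the flag.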