On 10/08/18 11:37, Paul Durrant wrote:
> When emulating a rep I/O operation it is possible that the ioreq will
> describe a single operation that spans multiple GFNs. This is fine as long
> as all those GFNs fall within an MMIO region covered by a single device
> model, but unfortunately the higher levels of the emulation code do not
> guarantee that. This is something that should almost certainly be fixed,
> but in the meantime this patch makes sure that MMIO is truncated at GFN
> boundaries and hence the appropriate device model is re-evaluated for each
> target GFN.
>
> Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
> ---
> Cc: Jan Beulich <jbeul...@suse.com>
> Cc: Andrew Cooper <andrew.coop...@citrix.com>
> ---
>  xen/arch/x86/hvm/emulate.c | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 8385c62145..d6a81ec4d1 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -184,8 +184,23 @@ static int hvmemul_do_io(
>          hvmtrace_io_assist(&p);
>      }
>
> -    vio->io_req = p;
> +    /*
> +     * Make sure that we truncate rep MMIO at any GFN boundary. This is
> +     * necessary to ensure that the correct device model is targetted
> +     * or that we correctly handle a rep op spanning MMIO and RAM.
> +     */
> +    if ( unlikely(p.count > 1) && p.type == IOREQ_TYPE_COPY )
> +    {
> +        unsigned long off = p.addr & ~PAGE_MASK;
>
> +        p.count = min_t(unsigned long,
> +                        p.count,
> +                        p.df ?
> +                        (off + p.size) / p.size :
> +                        (PAGE_SIZE - off) / p.size);
(p.df ? (off + p.size) : (PAGE_SIZE - off)) / p.size ?

> +    }
> +
> +    vio->io_req = p;

You'll get a cleaner patch if you retain the newline between these two.
Both can be fixed up on commit.

From a functional point of view, this looks fine.

~Andrew
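
[Editorial note, not part of the original thread: below is a minimal
standalone sketch of the clamp under discussion. It assumes x86's 4k pages;
PAGE_SHIFT, PAGE_MASK, MIN() and clamp_rep_count() are local stand-ins for
illustration only (MIN() replaces Xen's min_t()), and it folds in Andrew's
suggested single-division form.]

/* Standalone illustration only: PAGE_* and MIN() are local stand-ins. */
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))
#define MIN(a, b)  ((a) < (b) ? (a) : (b))

/*
 * Clamp a rep count so that 'count' elements of 'size' bytes, starting at
 * 'addr' and moving downwards when 'df' (the direction flag) is set, never
 * cross a page (and hence GFN) boundary.
 */
static unsigned long clamp_rep_count(unsigned long addr, unsigned long size,
                                     int df, unsigned long count)
{
    unsigned long off = addr & ~PAGE_MASK;

    /* Andrew's suggested form: one division instead of two. */
    return MIN(count, (df ? (off + size) : (PAGE_SIZE - off)) / size);
}

int main(void)
{
    /* Forward copy of 4-byte elements, 8 bytes short of a page end: 2 fit. */
    printf("%lu\n", clamp_rep_count(0xf0000ff8UL, 4, 0, 16));

    /* Backward copy of 4-byte elements starting 8 bytes into a page: 3 fit. */
    printf("%lu\n", clamp_rep_count(0xf0000008UL, 4, 1, 16));

    return 0;
}

Worked by hand: the forward case gives (PAGE_SIZE - 0xff8) / 4 = 2 and the
backward case gives (0x8 + 4) / 4 = 3, matching what the two-division form in
the patch would produce.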