>>> On 21.03.17 at 03:52, <yu.c.zh...@linux.intel.com> wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -949,6 +949,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>
>      spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
>
> +    if ( rc == 0 && flags == 0 )
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        if ( read_atomic(&p2m->ioreq.entry_count) )
> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> +    }
If you do this after dropping the lock, don't you risk a race with
another server mapping the type to itself?

> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>                  e.ipat = ipat;
>                  if ( e.recalc && p2m_is_changeable(e.sa_p2mt) )
>                  {
> +                    if ( e.sa_p2mt == p2m_ioreq_server )
> +                    {
> +                        p2m->ioreq.entry_count--;
> +                        ASSERT(p2m->ioreq.entry_count >= 0);

If you did the ASSERT() first (using > 0), you wouldn't need the type to
be a signed one, doubling the valid value range (even if right now the
full 64 bits can't be used anyway, it would be one less thing to worry
about once we get 6-level page tables).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel