On 02/18/16 21:14, Konrad Rzeszutek Wilk wrote:
> > > > QEMU would always use MFNs above the guest's normal RAM and I/O holes for
> > > > vNVDIMM. It would attempt to search in that space for a contiguous range
> > > > that is large enough for that vNVDIMM device. Is the guest able to
> > > > punch holes in such GFN space?
> > > 
> > > See XENMAPSPACE_* and their uses.
> > > 
> > 
> > I think we can add the following restrictions to prevent uses of
> > XENMAPSPACE_* from punching holes in the GFNs of vNVDIMM:
> > 
> > (1) For XENMAPSPACE_shared_info and _grant_table, never map idx in them
> >     to GFNs occupied by vNVDIMM.
> 
> OK, that sounds correct.
> >     
> > (2) For XENMAPSPACE_gmfn, _gmfn_range and _gmfn_foreign,
> >    (a) never map idx in them to GFNs occupied by vNVDIMM, and
> >    (b) never map idx corresponding to GFNs occupied by vNVDIMM
> 
> Would that mean that guest xen-blkback or xen-netback wouldn't
> be able to fetch data from those GFNs? As in, what if the HVM guest
> that has the NVDIMM also serves as a device domain - that is, it
> has xen-blkback running to service other guests?
> 

I'm not familiar with xen-blkback and xen-netback, so the following
statements may be wrong.

In my understanding, xen-blkback/-netback in a device domain maps pages
from other domains into its own domain and copies data between those
pages and the vNVDIMM. The access to the vNVDIMM is performed by the
NVDIMM driver in the device domain. In which step of this procedure
would xen-blkback/-netback need to map into the GFNs of the vNVDIMM?
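
To make my (possibly wrong) understanding concrete, below is a rough,
untested sketch of how I imagine a backend running in the device domain
would service a write request. It is loosely modelled on the Linux
grant-table API; frontend_domid, gref and pmem_vaddr are placeholders
for values that would come from the frontend ring and from the device
domain's own NVDIMM driver. The point is that the backend only
grant-maps the frontend's pages and copies through the NVDIMM driver's
mapping, so it never needs to map into the vNVDIMM GFNs itself:

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/page.h>
#include <xen/grant_table.h>

/* Rough sketch only, not actual xen-blkback code. */
static int copy_granted_page_to_pmem(domid_t frontend_domid, grant_ref_t gref,
                                     void *pmem_vaddr /* from NVDIMM driver */)
{
    struct gnttab_map_grant_ref map_op;
    struct gnttab_unmap_grant_ref unmap_op;
    struct page *page;
    unsigned long vaddr;
    int ret;

    /* A page of the backend domain provides the GFN that the granted
     * (frontend) page gets mapped into. */
    ret = gnttab_alloc_pages(1, &page);
    if (ret)
        return ret;

    vaddr = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
    gnttab_set_map_op(&map_op, vaddr, GNTMAP_host_map, gref, frontend_domid);

    ret = gnttab_map_refs(&map_op, NULL, &page, 1);
    if (ret)
        goto out_free;
    if (map_op.status != GNTST_okay) {
        ret = -EINVAL;
        goto out_free;
    }

    /*
     * Copy into the kernel mapping of the NVDIMM that the device
     * domain's own pmem driver set up; a real backend would go through
     * the block layer / DAX path and take care of persistence.  Nothing
     * here touches the vNVDIMM GFNs via XENMAPSPACE_*.
     */
    memcpy(pmem_vaddr, (void *)vaddr, PAGE_SIZE);

    gnttab_set_unmap_op(&unmap_op, vaddr, GNTMAP_host_map, map_op.handle);
    gnttab_unmap_refs(&unmap_op, NULL, &page, 1);

out_free:
    gnttab_free_pages(1, &page);
    return ret;
}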

Thanks,
Haozhong
