On 03/16/16 09:23, Jan Beulich wrote:
> >>> On 16.03.16 at 15:55, <haozhong.zh...@intel.com> wrote:
> > On 03/16/16 08:23, Jan Beulich wrote:
> >> >>> On 16.03.16 at 14:55, <haozhong.zh...@intel.com> wrote:
> >> > On 03/16/16 07:16, Jan Beulich wrote:
> >> >> Which reminds me: When considering a file on NVDIMM, how
> >> >> are you making sure the mapping of the file to disk (i.e.
> >> >> memory) blocks doesn't change while the guest has access
> >> >> to it, e.g. due to some defragmentation going on?
> >> > 
> >> > The current Linux kernel 4.5 has experimental "raw device DAX
> >> > support" (enabled by removing "depends on BROKEN" from "config
> >> > BLK_DEV_DAX"), which can guarantee a consistent mapping. The driver
> >> > developers plan to drop the BROKEN dependency in Linux kernel 4.6.
> >> 
> >> But there you talk about full devices, whereas my question was
> >> for files.
> >>
> > 
> > The raw device DAX support is for files on NVDIMM.
> 
> Okay, I can only trust you here. I thought FS_DAX was the file-level
> thing.
> 
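
As an aside, for anyone wanting to try it: the kernel-side switch is a
one-line Kconfig edit. A rough sketch only, with the rest of the entry
abbreviated (the exact text in the 4.5 tree may differ):

     config BLK_DEV_DAX
             ...                    (rest of the entry unchanged)
    -        depends on BROKEN      (drop this line to enable the support)
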
> >> >> And
> >> >> talking of fragmentation - how do you mean to track guest
> >> >> permissions for an unbounded number of address ranges?
> >> >>
> >> > 
> >> > In this case the range structs in iomem_caps for NVDIMMs may consume a
> >> > lot of memory, so I think they are another candidate that should be put
> >> > in the reserved area on NVDIMM. If we only allow granting access
> >> > permissions to NVDIMM page by page (rather than byte by byte), the
> >> > worst-case number of range structs per NVDIMM is still bounded.
> >> 
> >> Of course the permission granularity is going to be pages, not
> >> bytes (or else we couldn't allow the pages to be mapped into
> >> guest address space). And the limit on the per-domain range
> >> sets isn't going to be allowed to be bumped significantly, at
> >> least not for any of the existing ones (or else you'd have to
> >> prove such bumping can't be abused).
> > 
> > What is that limit? The total number of range structs in the per-domain
> > range sets? I must have missed something when looking through 'case
> > XEN_DOMCTL_iomem_permission' in do_domctl(): I didn't find such a
> > limit, unless it means alloc_range() will fail when there are lots of
> > range structs.
> 
> Oh, I'm sorry, that was a different set of range sets I was
> thinking about. But note that excessive creation of ranges
> through XEN_DOMCTL_iomem_permission is not a security issue
> just because of XSA-77, i.e. we'd still not knowingly allow a
> severe increase here.
>
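
For reference, the path I was looking at is roughly the following
(paraphrased from memory of xen/common/domctl.c, xen/include/xen/iocap.h
and xen/common/rangeset.c; simplified and possibly inexact in detail):

    /* xen/common/domctl.c (simplified) */
    case XEN_DOMCTL_iomem_permission:
    {
        unsigned long mfn     = op->u.iomem_permission.first_mfn;
        unsigned long nr_mfns = op->u.iomem_permission.nr_mfns;

        if ( op->u.iomem_permission.allow_access )
            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
        else
            ret = iomem_deny_access(d, mfn, mfn + nr_mfns - 1);
        ...
    }

    /* xen/include/xen/iocap.h */
    #define iomem_permit_access(d, s, e) \
        rangeset_add_range((d)->iomem_caps, s, e)

so every discontiguous range granted to a domain ends up as one xmalloc'ed
struct range on d->iomem_caps, and I didn't see an explicit per-domain cap
on how many of those can accumulate (other than alloc_range() eventually
failing).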

I hadn't noticed that multiple domains can all hold access permission
to the same iomem range, i.e. there can be multiple range structs for
a single iomem range. If the range structs for an NVDIMM are put on
the NVDIMM itself, there could still be a huge number of them in the
worst case (maximum number of domains * number of NVDIMM pages).

A workaround is to allow a range of NVDIMM pages to be accessible to
only a single domain at a time: whenever we grant a domain access to a
range of NVDIMM pages, we also revoke that permission from the current
grantee (if any). In this way, at most 'number of NVDIMM pages' range
structs need to be kept on the NVDIMM in the worst case, as sketched
below.
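
A minimal sketch of that rule (nvdimm_grant_access() and
nvdimm_current_grantee() are made-up names for illustration, not existing
Xen interfaces; only iomem_permit_access()/iomem_deny_access() exist
today):

    /* Hypothetical: keep at most one grantee per NVDIMM page range. */
    static int nvdimm_grant_access(struct domain *d,
                                   unsigned long smfn, unsigned long emfn)
    {
        /* Made-up lookup of the domain currently holding the permission. */
        struct domain *curr = nvdimm_current_grantee(smfn, emfn);
        int rc;

        if ( curr != NULL && curr != d )
        {
            /* Revoke first, so each NVDIMM page has at most one range
             * struct (for its single grantee) at any point in time. */
            rc = iomem_deny_access(curr, smfn, emfn);
            if ( rc )
                return rc;
        }

        return iomem_permit_access(d, smfn, emfn);
    }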

> >> Putting such control
> >> structures on NVDIMM is a nice idea, but following our isolation
> >> model for normal memory, any such memory used by Xen
> >> would then need to be (made) inaccessible to Dom0.
> > 
> > I'm not clear on how this is done. By marking those pages as not
> > present in Dom0's page tables? Is there any example I can follow?
> 
> That's the problem - so far we had no need to do so since Dom0
> was only ever allowed access to memory Xen didn't use for itself
> or knows it wants to share. Whereas now you want such a
> resource controlled first by Dom0, and only then handed to Xen.
> So yes, Dom0 would need to zap any mappings of these pages
> (and Xen would need to verify that, which would come mostly
> without new code as long as struct page_info gets properly
> used for all this memory) before Xen could use it. Much like
> ballooning out a normal RAM page.
> 

Thanks, I'll look into the ballooning approach.
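
For reference, below is a simplified sketch of what the Dom0 side of
"ballooning out" a set of frames looks like, loosely modelled on the Linux
balloon driver (drivers/xen/balloon.c); error handling and the
NVDIMM-specific parts are omitted, and how exactly this carries over to
NVDIMM pages is only my assumption at this point:

    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    /* Hand a list of frames back to Xen, as the balloon driver does when
     * shrinking the domain.  Dom0 must have zapped its own mappings of
     * these frames first, so Xen can verify (via struct page_info) that
     * nobody still references them before taking them over. */
    static long release_frames_to_xen(xen_pfn_t *frame_list,
                                      unsigned long nr_frames)
    {
        struct xen_memory_reservation reservation = {
            .extent_order = 0,          /* 4k frames */
            .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(reservation.extent_start, frame_list);
        reservation.nr_extents = nr_frames;

        /* Returns the number of extents actually released. */
        return HYPERVISOR_memory_op(XENMEM_decrease_reservation,
                                    &reservation);
    }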

Haozhong
