Hi,

> > static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> > {
> > [ ... ]
> >          * When there is no need to deal with noncoherent DMA (e.g., no VT-d
> >          * or VT-d has snoop control), guest CD/MTRR/PAT are all ignored.  The
> >          * EPT memory type is set to WB.  The effective memory type is forced
> >          * WB.
> >          *
> >          * Otherwise, we trust guest.  Guest CD/MTRR/PAT are all honored.  The
> >          * EPT memory type is used to emulate guest CD/MTRR.
> > [ ... ]
> > 
> >> Something must be special about Min's assigned device.
> > 
> > Yep.  I think the magic word is "snoop control".  When pci-assigning a
> > *real* pci device, VT-d (aka the iommu) handles cache coherency that
> > way.  When assigning an mdev device this is not the case.
> > 
> > mdev is a virtual pci device emulated by the kernel.  This can be purely
> > virtual (see samples/vfio-mdev/mtty.c in the linux kernel, which can be
> > used to reproduce this).  More typical is hardware-assisted device
> > partitioning, used for some Intel and NVIDIA GPUs.  It is roughly
> > comparable to SR-IOV, but not implemented completely in hardware; the
> > kernel has some device-specific support code instead.
> 
> Very interesting, thanks! ... But, given that mdev is emulated in the
> kernel: isn't that *all the more reason* for treating the guest memory
> as writeback-cacheable?

For a 100% emulated device this would indeed make sense.  When making
some GPU resources available to VMs (including giving the GPU DMA access
to guest memory), not so much.  The latter is the case with the
Intel/NVIDIA GPU mdev devices.

take care,
  Gerd

-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#114333): https://edk2.groups.io/g/devel/message/114333
Mute This Topic: https://groups.io/mt/100367559/21656
Group Owner: devel+ow...@edk2.groups.io
Unsubscribe: https://edk2.groups.io/g/devel/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-