Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

2019-04-17 Thread Gerd Hoffmann
On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> Hi,
> 
> I am still new to virgl, and missed the last round of discussion about
> resource_create_v2.
> 
> From the discussion below, semantically resource_create_v2 creates a host
> resource object _without_ any storage; memory_create creates a host memory
> object which provides the storage.  Is that correct?

Right now all resource_create_* variants create a resource object with
host storage.  memory_create creates guest storage, and
resource_attach_memory binds things together.  Then you have to transfer
the data.

Hmm, maybe we need a flag indicating that host storage is not needed,
for resources where we want to establish some kind of shared mapping later
on.
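
For illustration, a rough sketch of what the split (plus such a flag) could
look like on the wire; the struct names, fields, and the flag are assumptions
made up for this sketch, not the proposed v2 protocol:

struct virtio_gpu_resource_create_v2 {
    struct virtio_gpu_ctrl_hdr hdr;
    __le32 resource_id;     /* host resource object */
    __le32 format;
    __le32 width;
    __le32 height;
    __le32 flags;           /* could carry a hypothetical "no host storage"
                               bit for resources that only get a shared
                               mapping established later on */
};

struct virtio_gpu_memory_create {
    struct virtio_gpu_ctrl_hdr hdr;
    __le32 memory_id;       /* guest-allocated backing storage */
    __le32 nr_entries;      /* scatter-gather entries follow */
};

struct virtio_gpu_resource_attach_memory {
    struct virtio_gpu_ctrl_hdr hdr;
    __le32 resource_id;
    __le32 memory_id;
    __le64 offset;
};

The guest would then issue a transfer command to populate the host storage,
matching the flow described above.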

> Do we expect these new commands to be supported by OpenGL, which does not
> separate resources and memories?

Well, for OpenGL you need a 1:1 relationship between memory region and
resource.

> > Yes, even though it is not clear yet how we are going to handle
> > host-allocated buffers in the vhost-user case ...
> 
> This might be another dumb question, but is this only an issue for
> vhost-user(-gpu) case?  What mechanisms are used to map host dma-buf into
> the guest address space?

qemu can change the address space, that includes mmap()ing stuff there.
An external vhost-user process can't do this, it can only read the
address space layout, and read/write from/to guest memory.
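
As a rough illustration of the qemu side (a sketch only, glossing over error
handling; the region/BAR plumbing, parameter names, and the region name are
assumptions, not existing virtio-gpu code):

#include "exec/memory.h"        /* QEMU memory API */
#include <sys/mman.h>

/* Sketch: expose a host dma-buf to the guest by mmap()ing it and inserting
 * the mapping into a guest-visible memory region. */
static void map_hostmem_sketch(MemoryRegion *bar, MemoryRegion *region,
                               Object *owner, int dmabuf_fd,
                               uint64_t size, hwaddr offset)
{
    void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                     dmabuf_fd, 0);

    memory_region_init_ram_ptr(region, owner, "virtio-gpu-hostmem",
                               size, ptr);
    memory_region_add_subregion(bar, offset, region);
}

A vhost-user backend has no equivalent hook, which is why host-allocated
buffers remain the open question there.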

> But one needs to create the resource first to know which memory types can
> be attached to it.  I think the metadata needs to be returned with
> resource_create_v2.

There is a resource_info reply for that.

> That should be good enough.  But by returning alignments, we can minimize
> the gaps when attaching multiple resources, especially when the resources
> are only used by GPU.

We can add alignments to the resource_info reply.
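
A sketch of what such a reply could carry (layout and field names are
assumptions for illustration only):

struct virtio_gpu_resp_resource_info {
    struct virtio_gpu_ctrl_hdr hdr;
    __le32 memory_type_mask;   /* memory types that can be attached */
    __le32 alignment;          /* required placement alignment, in bytes */
    __le64 size;               /* host-side storage size */
};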

cheers,
  Gerd



Re: [RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted

2019-04-17 Thread Thiago Jung Bauermann


Michael S. Tsirkin  writes:

> On Thu, Mar 21, 2019 at 09:05:04PM -0300, Thiago Jung Bauermann wrote:
>>
>> Michael S. Tsirkin  writes:
>>
>> > On Wed, Mar 20, 2019 at 01:13:41PM -0300, Thiago Jung Bauermann wrote:
>> >> From what I understand of the ACCESS_PLATFORM definition, the host will
>> >> only ever try to access memory addresses that are supplied to it by the
>> >> guest, so all of the secure guest memory that the host cares about is
>> >> accessible:
>> >>
>> >> If this feature bit is set to 0, then the device has same access to
>> >> memory addresses supplied to it as the driver has. In particular,
>> >> the device will always use physical addresses matching addresses
>> >> used by the driver (typically meaning physical addresses used by the
>> >> CPU) and not translated further, and can access any address supplied
>> >> to it by the driver. When clear, this overrides any
>> >> platform-specific description of whether device access is limited or
>> >> translated in any way, e.g. whether an IOMMU may be present.
>> >>
>> >> All of the above is true for POWER guests, whether they are secure
>> >> guests or not.
>> >>
>> >> Or are you saying that a virtio device may want to access memory
>> >> addresses that weren't supplied to it by the driver?
>> >
>> > Your logic would apply to IOMMUs as well.  For your mode, there are
>> > specific encrypted memory regions that the driver has access to but the
>> > device does not. That seems to violate the constraint.
>>
>> Right, if there's a pre-configured 1:1 mapping in the IOMMU such that
>> the device can ignore the IOMMU for all practical purposes I would
>> indeed say that the logic would apply to IOMMUs as well. :-)
>>
>> I guess I'm still struggling with the purpose of signalling to the
>> driver that the host may not have access to memory addresses that it
>> will never try to access.
>
> For example, one of the benefits is to signal to the host that the driver
> does not expect the ability to access all memory. If it does, the host can
> fail initialization gracefully.

But why would the ability to access all memory be necessary or even
useful? When would the host access memory that the driver didn't tell it
to access?

>> >> >> > But the name "sev_active" makes me scared because at least AMD guys who
>> >> >> > were doing the sensible thing and setting ACCESS_PLATFORM
>> >> >>
>> >> >> My understanding is that the AMD guest platform knows in advance that
>> >> >> its guest will run in secure mode and hence sets the flag at the time of
>> >> >> VM instantiation. Unfortunately we don't have that luxury on our platforms.
>> >> >
>> >> > Well you do have that luxury. It looks like there are existing
>> >> > guests that already acknowledge ACCESS_PLATFORM and you are not happy
>> >> > with how slow that path is. So you are trying to optimize for
>> >> > them by clearing ACCESS_PLATFORM, and then you have lost the ability
>> >> > to invoke the DMA API.
>> >> >
>> >> > For example if there was another flag just like ACCESS_PLATFORM
>> >> > just not yet used by anyone, you would be all fine using that right?
>> >>
>> >> Yes, a new flag sounds like a great idea. What about the definition
>> >> below?
>> >>
>> >> VIRTIO_F_ACCESS_PLATFORM_NO_IOMMU This feature has the same meaning as
>> >> VIRTIO_F_ACCESS_PLATFORM both when set and when not set, with the
>> >> exception that the IOMMU is explicitly defined to be off or bypassed
>> >> when accessing memory addresses supplied to the device by the
>> >> driver. This flag should be set by the guest if offered, but to
>> >> allow for backward compatibility, device implementations allow it
>> >> to be left unset by the guest. It is an error to set both this flag
>> >> and VIRTIO_F_ACCESS_PLATFORM.
>> >
>> > It looks kind of narrow but it's an option.
>>
>> Great!
>>
>> > I wonder how we'll define what's an iommu though.
>>
>> Hm, it didn't occur to me it could be an issue. I'll try.

I rephrased it in terms of address translation. What do you think of
this version? The flag name is slightly different too:


VIRTIO_F_ACCESS_PLATFORM_NO_TRANSLATION This feature has the same
meaning as VIRTIO_F_ACCESS_PLATFORM both when set and when not set,
with the exception that address translation is guaranteed to be
unnecessary when accessing memory addresses supplied to the device
by the driver. That is, the device will always use physical
addresses matching addresses used by the driver (typically meaning
physical addresses used by the CPU) and not translated further. This
flag should be set by the guest if offered, but to allow for
backward compatibility, device implementations allow it to be
left unset by the guest. It is an error to set both this flag and
VIRTIO_F_ACCESS_PLATFORM.
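
As a driver-side illustration of how such a flag might be consumed (a minimal
sketch following the definition above; the feature-bit number, the
guest_memory_is_encrypted() helper, and the policy shown are placeholders,
not the RFC patch):

/* Placeholder bit number for the proposed flag; nothing has been allocated. */
#define VIRTIO_F_ACCESS_PLATFORM_NO_TRANSLATION  63

static bool vring_use_dma_api_sketch(const struct virtio_device *vdev)
{
    /* ACCESS_PLATFORM (kernel name: VIRTIO_F_IOMMU_PLATFORM): device access
       may be limited or translated, so the DMA API is required. */
    if (virtio_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM))
        return true;

    /* Proposed flag: access may still be limited (e.g. secure or encrypted
       guest memory that needs bounce buffering), but the addresses handed
       to the device are guaranteed not to be translated further. */
    if (virtio_has_feature(vdev, VIRTIO_F_ACCESS_PLATFORM_NO_TRANSLATION))
        return guest_memory_is_encrypted();   /* hypothetical helper */

    return false;
}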

>> > Another idea is maybe something like virtio-iommu?
>>
>> You mean, have legacy guests use virtio-iommu to request an IOMMU
>> bypass? If so, it's an 

Re: [RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted

2019-04-17 Thread Thiago Jung Bauermann


David Gibson  writes:

> On Sat, Mar 23, 2019 at 05:01:35PM -0400, Michael S. Tsirkin wrote:
>> On Thu, Mar 21, 2019 at 09:05:04PM -0300, Thiago Jung Bauermann wrote:
>> > Michael S. Tsirkin  writes:
> [snip]
>> > >> > Is there any justification for doing that beyond someone putting
>> > >> > out slow code in the past?
>> > >>
>> > >> The definition of the ACCESS_PLATFORM flag is generic and captures the
>> > >> notion of memory access restrictions for the device. Unfortunately, on
>> > >> powerpc pSeries guests it also implies that the IOMMU is turned on
>> > >
>> > > IIUC that's really because on pSeries the IOMMU is *always* turned on.
>> > > The platform has no way to say what you want it to say,
>> > > which is to bypass the IOMMU for a specific device.
>> >
>> > Yes, that's correct. pSeries guests running on KVM are in a gray area
>> > where theoretically they use an IOMMU but in practice KVM ignores it.
>> > It's unfortunate but it's the reality on the ground today. :-/
>
> Um.. I'm not sure what you mean by this.  As far as I'm concerned
> there is always a guest-visible (paravirtualized) IOMMU, and that will
> be backed onto the host IOMMU when necessary.

There is, but vhost will ignore it and directly map the guest memory
when ACCESS_PLATFORM (the flag previously known as IOMMU_PLATFORM) isn't
set. From QEMU's hw/virtio/vhost.c:

static int vhost_dev_has_iommu(struct vhost_dev *dev)
{
    VirtIODevice *vdev = dev->vdev;

    return virtio_host_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM);
}

static void *vhost_memory_map(struct vhost_dev *dev, hwaddr addr,
                              hwaddr *plen, int is_write)
{
    if (!vhost_dev_has_iommu(dev)) {
        return cpu_physical_memory_map(addr, plen, is_write);
    } else {
        return (void *)(uintptr_t)addr;
    }
}

--
Thiago Jung Bauermann
IBM Linux Technology Center



Re: [Qemu-devel] [PATCH v5 1/6] libnvdimm: nd_region flush callback support

2019-04-17 Thread Pankaj Gupta


Hello,

Thank you for the suggestions on this.

> 
> > On Thu 11-04-19 07:51:48, Dan Williams wrote:
> >> On Tue, Apr 9, 2019 at 9:09 PM Pankaj Gupta  wrote:
> >> > +   } else {
> >> > +           if (nd_region->flush(nd_region))
> >> > +                   rc = -EIO;
> >> 
> >> Given the common case wants to be fast and synchronous I think we
> >> should try to avoid retpoline overhead by default. So something like
> >> this:
> >> 
> >> if (nd_region->flush == generic_nvdimm_flush)
> >>         rc = generic_nvdimm_flush(...);
> >
> > I'd either add a comment about avoiding retpoline overhead here or just
> > make ->flush == NULL mean generic_nvdimm_flush(). Just so that people don't
> > get confused by the code.
> 
> Isn't this premature optimization?  I really don't like adding things
> like this without some numbers to show it's worth it.

Can we add the optimization after this version is merged?

Best regards,
Pankaj 

> 
> -Jeff
> 
> 
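
For reference, a minimal sketch of the dispatch being discussed, assuming the
nd_region->flush() callback and generic_nvdimm_flush() as quoted above
(illustrative only, not the code from the patch series):

int nvdimm_flush_sketch(struct nd_region *nd_region)
{
    int rc = 0;

    /* Compare against the generic helper so the common case is a direct
       call, avoiding retpoline overhead on the indirect call.  The
       alternative suggested above is to let ->flush == NULL mean the
       generic path, which avoids the explicit comparison. */
    if (nd_region->flush == generic_nvdimm_flush)
        rc = generic_nvdimm_flush(nd_region);
    else if (nd_region->flush(nd_region))
        rc = -EIO;

    return rc;
}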