Re: [PATCH v2 resubmit] virtio-balloon: Disable free page reporting if page poison reporting is not enabled

2020-05-15 Thread Alexander Duyck
Just following up. It has been a week since I submitted this. I was
hoping we could get it in for 5.7, since it affects free page
reporting, which will be introduced with that kernel release.

Thanks.

- Alex

On Fri, May 8, 2020 at 10:40 AM Alexander Duyck wrote:
>
> From: Alexander Duyck 
>
> We should disable free page reporting if page poisoning is enabled but we
> cannot report it via the balloon interface. This avoids the possibility of
> corrupting guest memory. Normally the page poisoning feature should always
> be present when free page reporting is enabled on the hypervisor; however,
> this lets us correctly handle the case of a misconfigured virtio-balloon
> device.
>
> Fixes: 5d757c8d518d ("virtio-balloon: add support for providing free page reports to host")
> Acked-by: David Hildenbrand 
> Signed-off-by: Alexander Duyck 
> ---
>
> Changes since v1:
> Originally this patch also modified free page hinting; that has been removed.
> Updated the patch title and description.
> Added a comment explaining the reasoning for disabling free page reporting.
>
> Resubmitting v2 w/ Ack from David Hildenbrand.
>
>  drivers/virtio/virtio_balloon.c |    9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 51086a5afdd4..1f157d2f4952 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -1107,11 +1107,18 @@ static int virtballoon_restore(struct virtio_device *vdev)
>
>  static int virtballoon_validate(struct virtio_device *vdev)
>  {
> -        /* Tell the host whether we care about poisoned pages. */
> +        /*
> +         * Inform the hypervisor that our pages are poisoned or
> +         * initialized. If we cannot do that then we should disable
> +         * page reporting as it could potentially change the contents
> +         * of our free pages.
> +         */
>          if (!want_init_on_free() &&
>              (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY) ||
>               !page_poisoning_enabled()))
>                  __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
> +        else if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON))
> +                __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_REPORTING);
>
>          __virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
>          return 0;
>
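
To spell out the negotiation the new hunk implements, here is a standalone
sketch (illustrative only; the two booleans are simplified stand-ins for the
kernel's want_init_on_free()/page_poisoning_enabled() checks and for
virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON)):

#include <stdbool.h>

/* Illustrative sketch, not kernel code. */
static bool keep_free_page_reporting(bool guest_writes_free_pages,
                                     bool host_acked_page_poison)
{
        /* The guest never writes to free pages: reporting is always safe. */
        if (!guest_writes_free_pages)
                return true;
        /*
         * The guest poisons/initializes its free pages: reporting is only
         * safe if the host acked VIRTIO_BALLOON_F_PAGE_POISON and thus
         * knows it must preserve the expected page contents.
         */
        return host_acked_page_poison;
}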


[PATCH] vhost: missing __user tags

2020-05-15 Thread Michael S. Tsirkin
sparse warns about converting void * to void __user *. This is not new,
but it only got noticed now that vhost is built on more systems.
It is just a matter of __user tags missing in a couple of places,
so fix them up.

Fixes: f88949138058 ("vhost: introduce O(1) vq metadata cache")
Reported-by: kbuild test robot 
Signed-off-by: Michael S. Tsirkin 
---
 drivers/vhost/vhost.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d450e16c5c25..21a59b598ed8 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -730,7 +730,7 @@ static inline void __user *vhost_vq_meta_fetch(struct vhost_virtqueue *vq,
         if (!map)
                 return NULL;

-        return (void *)(uintptr_t)(map->addr + addr - map->start);
+        return (void __user *)(uintptr_t)(map->addr + addr - map->start);
 }
 
 /* Can we switch to this memory table? */
@@ -869,7 +869,7 @@ static void __user *__vhost_get_user_slow(struct vhost_virtqueue *vq,
  * not happen in this case.
  */
 static inline void __user *__vhost_get_user(struct vhost_virtqueue *vq,
-                                            void *addr, unsigned int size,
+                                            void __user *addr, unsigned int size,
                                             int type)
 {
         void __user *uaddr = vhost_vq_meta_fetch(vq,
-- 
MST
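
For context, __user is a no-op for the compiler and exists purely for sparse's
static checking. A simplified version of the definition (close to what
include/linux/compiler_types.h had at the time, shown here for illustration):

#ifdef __CHECKER__
# define __user __attribute__((noderef, address_space(1)))
#else
# define __user
#endif

Under sparse, pointers in address space 1 cannot be mixed with plain kernel
pointers or dereferenced directly, which is exactly the property the two
hunks above restore.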



Re: [PATCH v3 1/4] dma-buf: add support for virtio exported objects

2020-05-15 Thread Daniel Vetter
On Fri, May 15, 2020 at 02:07:06PM +0900, David Stevens wrote:
> On Thu, May 14, 2020 at 9:30 PM Daniel Vetter  wrote:
> > On Thu, May 14, 2020 at 05:19:40PM +0900, David Stevens wrote:
> > > Sorry for the duplicate reply, didn't notice this until now.
> > >
> > > > Just storing
> > > > the uuid should be doable (assuming this doesn't change during the
> > > > lifetime of the buffer), so no need for a callback.
> > >
> > > Directly storing the uuid doesn't work that well because of
> > > synchronization issues. The uuid needs to be shared between multiple
> > > virtio devices with independent command streams, so to prevent races
> > > between importing and exporting, the exporting driver can't share the
> > > uuid with other drivers until it knows that the device has finished
> > > registering the uuid. That requires a round trip to the device and
> > > back. Using a callback allows the latency of that round-trip
> > > registration to be hidden.
> >
> > Uh, that means you actually do something and there's locking involved.
> > Makes stuff more complicated; invariant attributes are a lot easier
> > generally. Wouldn't just always registering that uuid work, blocking
> > when you're exporting?
> 
> Registering the id at creation and blocking in gem export is feasible,
> but it doesn't work well for systems with a centralized buffer
> allocator that doesn't support batch allocations (e.g. gralloc). In
> such a system, the round trip latency would almost certainly be
> included in the buffer allocation time. At least on the system I'm
> working on, I suspect that would add 10s of milliseconds of startup
> latency to video pipelines (although I haven't benchmarked the
> difference). Doing the blocking as late as possible means most or all
> of the latency can be hidden behind other pipeline setup work.
> 
> In terms of complexity, I think the synchronization would be basically
> the same in either approach, just in different locations. All it would
> do is alleviate the need for a callback to fetch the UUID.

Hm ok. I guess if we go with the older patch, where this all is a lot more
just code in virtio, doing an extra function to allocate the uuid sounds
fine. Then synchronization is entirely up to the virtio subsystem and not
a dma-buf problem (and hence not mine). You can use dma_resv_lock or so,
but no need to. But with callbacks potentially going both ways things
always get a bit interesting wrt locking - this is what makes peer2peer
dma-buf so painful right now. Hence I'd like to avoid that if possible, at
least at the dma-buf level. virtio code I don't mind what you do there :-)

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
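
To make the latency-hiding argument concrete, here is a minimal sketch of the
callback approach under discussion (hypothetical names and structure, not the
actual patch):

#include <linux/completion.h>
#include <linux/dma-buf.h>
#include <linux/uuid.h>

/*
 * Hypothetical sketch: the exporter starts uuid registration at buffer
 * creation and only waits when somebody actually asks for the uuid, so the
 * device round trip overlaps with the rest of the pipeline setup.
 */
struct virtio_exported_object {
        uuid_t uuid;
        struct completion uuid_registered; /* completed on device response */
};

static int virtio_exported_object_get_uuid(struct dma_buf *dmabuf,
                                           uuid_t *uuid)
{
        struct virtio_exported_object *obj = dmabuf->priv;

        /* Blocks only if we race with the in-flight registration. */
        wait_for_completion(&obj->uuid_registered);
        *uuid = obj->uuid;
        return 0;
}

With this shape the synchronization lives entirely in the virtio code, as
suggested above; dma-buf itself never takes part in the locking.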


[PATCH v4 16/15] virtio-mem: Don't rely on implicit compiler padding for requests

2020-05-15 Thread David Hildenbrand
The compiler will add padding after the last member; make that explicit.
The size of a request is then always 24 bytes and the size of a response
always 10 bytes. Add compile-time checks.

Cc: "Michael S. Tsirkin" 
Cc: Pankaj Gupta 
Cc: teawater 
Signed-off-by: David Hildenbrand 
---

Something I noticed while working on the spec (which proves that writing a
virtio-spec makes sense :) ).

---
 drivers/virtio/virtio_mem.c | 3 +++
 include/uapi/linux/virtio_mem.h | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 9e523db3bee1..f658fe9149be 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -1770,6 +1770,9 @@ static int virtio_mem_probe(struct virtio_device *vdev)
struct virtio_mem *vm;
int rc = -EINVAL;
 
+   BUILD_BUG_ON(sizeof(struct virtio_mem_req) != 24);
+   BUILD_BUG_ON(sizeof(struct virtio_mem_resp) != 10);
+
vdev->priv = vm = kzalloc(sizeof(*vm), GFP_KERNEL);
if (!vm)
return -ENOMEM;
diff --git a/include/uapi/linux/virtio_mem.h b/include/uapi/linux/virtio_mem.h
index e0a9dc7397c3..a455c488a995 100644
--- a/include/uapi/linux/virtio_mem.h
+++ b/include/uapi/linux/virtio_mem.h
@@ -103,16 +103,19 @@
 struct virtio_mem_req_plug {
__virtio64 addr;
__virtio16 nb_blocks;
+   __virtio16 padding[3];
 };
 
 struct virtio_mem_req_unplug {
__virtio64 addr;
__virtio16 nb_blocks;
+   __virtio16 padding[3];
 };
 
 struct virtio_mem_req_state {
__virtio64 addr;
__virtio16 nb_blocks;
+   __virtio16 padding[3];
 };
 
 struct virtio_mem_req {
-- 
2.25.4
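
For background, the implicit padding exists because a struct whose largest
member is 8 bytes wide gets padded to a multiple of 8. A standalone
illustration with plain fixed-width types (not the uapi header itself):

#include <assert.h>
#include <stdint.h>

struct req_implicit {
        uint64_t addr;
        uint16_t nb_blocks;
        /* the compiler silently appends 6 bytes of tail padding here */
};

struct req_explicit {
        uint64_t addr;
        uint16_t nb_blocks;
        uint16_t padding[3]; /* the same 6 bytes, spelled out */
};

static_assert(sizeof(struct req_implicit) == 16, "padded to 8-byte multiple");
static_assert(sizeof(struct req_explicit) == 16, "same layout, now explicit");

Spelling the padding out pins the wire format down independently of compiler
behavior, and the BUILD_BUG_ON()s in the patch catch any future change to
the ABI size at compile time.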



Re: [PATCH v5 25/38] drm: virtio: fix common struct sg_table related issues

2020-05-15 Thread Gerd Hoffmann
On Wed, May 13, 2020 at 03:32:32PM +0200, Marek Szyprowski wrote:
> The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
> returns the number of entries created in the DMA address space.
> However, the subsequent dma_sync_sg_for_{device,cpu}() and dma_unmap_sg()
> calls must be made with the original number of entries passed to
> dma_map_sg().
> 
> struct sg_table is a common structure used for describing a non-contiguous
> memory buffer, used commonly in the DRM and graphics subsystems. It
> consists of a scatterlist with memory pages and DMA addresses (sgl entry),
> as well as the number of scatterlist entries: CPU pages (orig_nents entry)
> and DMA mapped pages (nents entry).
> 
> It turned out that it was a common mistake to misuse nents and orig_nents
> entries, calling DMA-mapping functions with a wrong number of entries or
> ignoring the number of mapped entries returned by the dma_map_sg()
> function.
> 
> To avoid such issues, let's use the common dma-mapping wrappers operating
> directly on the struct sg_table objects and use scatterlist page
> iterators where possible. This almost always hides references to the
> nents and orig_nents entries, making the code robust, easier to follow,
> and copy/paste safe.

Looks all sane.

Acked-by: Gerd Hoffmann 

take care,
  Gerd
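
For reference, the class of bug the wrappers eliminate looks roughly like
this (illustrative sketch, not code from the series):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int map_for_device(struct device *dev, struct sg_table *sgt)
{
        /*
         * dma_map_sg() may coalesce entries, so the number of DMA segments
         * it returns can be smaller than the number of CPU pages.
         */
        sgt->nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents,
                                DMA_TO_DEVICE);
        if (sgt->nents <= 0)
                return -ENOMEM;
        return 0;
}

static void unmap_for_device(struct device *dev, struct sg_table *sgt)
{
        /* unmap (and sync) must use the ORIGINAL count, not the mapped one */
        dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
}

The dma_map_sgtable() family of wrappers used by the series takes the whole
struct sg_table and picks the correct field internally, so this mistake can
no longer be typed.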
