[PATCH linux-next v2] vdpa: Fix memory leak in error unwinding path

2021-02-16 Thread Parav Pandit
When the device get command fails to find the device or the mdev, it
skips freeing the skb in the error unwind path, leaking it.
Fix it by freeing the skb in the error unwind path.
While at it, make the error unwind path clearer to avoid such errors.

Fixes: a12a2f694ce8 ("vdpa: Enable user to query vdpa device info")
Signed-off-by: Parav Pandit 
---
changelog:
v1->v2:
 - Addressed Stefano's comment to make error unwind code more clear
---
 drivers/vdpa/vdpa.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 3d997b389345..da67f07e24fd 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -538,22 +538,22 @@ static int vdpa_nl_cmd_dev_get_doit(struct sk_buff *skb, struct genl_info *info)
 	mutex_lock(&vdpa_dev_mutex);
 	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
 	if (!dev) {
-		mutex_unlock(&vdpa_dev_mutex);
 		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
-		return -ENODEV;
+		err = -ENODEV;
+		goto err;
 	}
 	vdev = container_of(dev, struct vdpa_device, dev);
 	if (!vdev->mdev) {
-		mutex_unlock(&vdpa_dev_mutex);
-		put_device(dev);
-		return -EINVAL;
+		err = -EINVAL;
+		goto mdev_err;
 	}
 	err = vdpa_dev_fill(vdev, msg, info->snd_portid, info->snd_seq, 0,
 			    info->extack);
 	if (!err)
 		err = genlmsg_reply(msg, info);
+mdev_err:
 	put_device(dev);
+err:
 	mutex_unlock(&vdpa_dev_mutex);
-
 	if (err)
 		nlmsg_free(msg);
 	return err;
-- 
2.26.2

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v1] vdpa/mlx5: Restore the hardware used index after change map

2021-02-16 Thread Si-Wei Liu



On 2/10/2021 7:45 AM, Eli Cohen wrote:

On Wed, Feb 10, 2021 at 12:59:03AM -0800, Si-Wei Liu wrote:


On 2/9/2021 7:53 PM, Jason Wang wrote:

On 2021/2/10 上午10:30, Si-Wei Liu wrote:


On 2/8/2021 10:37 PM, Jason Wang wrote:

On 2021/2/9 下午2:12, Eli Cohen wrote:

On Tue, Feb 09, 2021 at 11:20:14AM +0800, Jason Wang wrote:

On 2021/2/8 下午6:04, Eli Cohen wrote:

On Mon, Feb 08, 2021 at 05:04:27PM +0800, Jason Wang wrote:

On 2021/2/8 下午2:37, Eli Cohen wrote:

On Mon, Feb 08, 2021 at 12:27:18PM +0800, Jason Wang wrote:

On 2021/2/6 上午7:07, Si-Wei Liu wrote:

On 2/3/2021 11:36 PM, Eli Cohen wrote:

When a change of memory map occurs, the hardware resources are destroyed
and then re-created again with the new memory map. In such case, we need
to restore the hardware available and used indices. The driver failed to
restore the used index which is added here.

Also, since the driver also fails to reset the available and used
indices upon device reset, fix this here to avoid regression caused by
the fact that used index may not be zero upon device reset.

Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
Signed-off-by: Eli Cohen
---
v0 -> v1:
Clear indices upon device reset

 drivers/vdpa/mlx5/net/mlx5_vnet.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 88dde3455bfd..b5fe6d2ad22f 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -87,6 +87,7 @@ struct mlx5_vq_restore_info {
 	u64 device_addr;
 	u64 driver_addr;
 	u16 avail_index;
+	u16 used_index;
 	bool ready;
 	struct vdpa_callback cb;
 	bool restore;
@@ -121,6 +122,7 @@ struct mlx5_vdpa_virtqueue {
 	u32 virtq_id;
 	struct mlx5_vdpa_net *ndev;
 	u16 avail_idx;
+	u16 used_idx;
 	int fw_state;
 
 	/* keep last in the struct */
@@ -804,6 +806,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
 	obj_context = MLX5_ADDR_OF(create_virtio_net_q_in, in, obj_context);
 	MLX5_SET(virtio_net_q_object, obj_context, hw_available_index, mvq->avail_idx);
+	MLX5_SET(virtio_net_q_object, obj_context, hw_used_index, mvq->used_idx);
 	MLX5_SET(virtio_net_q_object, obj_context, queue_feature_bit_mask_12_3,
 		 get_features_12_3(ndev->mvdev.actual_features));
 	vq_ctx = MLX5_ADDR_OF(virtio_net_q_object, obj_context, virtio_q_context);
@@ -1022,6 +1025,7 @@ static int connect_qps(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
 struct mlx5_virtq_attr {
 	u8 state;
 	u16 available_index;
+	u16 used_index;
 };
 
 static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq,
@@ -1052,6 +1056,7 @@ static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueu
 	memset(attr, 0, sizeof(*attr));
 	attr->state = MLX5_GET(virtio_net_q_object, obj_context, state);
 	attr->available_index = MLX5_GET(virtio_net_q_object, obj_context, hw_available_index);
+	attr->used_index = MLX5_GET(virtio_net_q_object, obj_context, hw_used_index);
 	kfree(out);
 	return 0;
 
@@ -1535,6 +1540,16 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
 	}
 }
 
+static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
+{
+	int i;
+
+	for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
+		ndev->vqs[i].avail_idx = 0;
+		ndev->vqs[i].used_idx = 0;
+	}
+}
+
 /* TODO: cross-endian support */
 static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
 {
@@ -1610,6 +1625,7 @@ static int save_channel_info(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqu
 		return err;
 
 	ri->avail_index = attr.available_index;
+	ri->used_index = attr.used_index;
 	ri->ready = mvq->ready;
 	ri->num_ent = mvq->num_ent;
 	ri->desc_addr = mvq->desc_addr;
@@ -1654,6 +1670,7 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
 			continue;
 
 		mvq->avail_idx = ri->avail_index;
+		mvq->used_idx = ri->used_index;
 		mvq->ready = ri->ready;
 		mvq->num_ent = ri->num_ent;
 		mvq->desc_addr = ri->desc_addr;
@@ -1768,6 +1785,7 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
 	if (!status) {
 		mlx5_vdpa_info(mvdev, "performing device reset\n");
 		teardown_driver(ndev);
+		clear_virtqueues(ndev);

The clearing looks fine at the first glance, as it aligns with the other
state cleanups floating around at the same place. However, the thing is
get_vq_state() is supposed to be called right after to get sync'ed with
the latest internal avail_index from device while vq is stopped. The
index was saved in the driver software 

Re: [PATCH v5 4/8] x86/mm/tlb: Flush remote and local TLBs concurrently

2021-02-16 Thread Nadav Amit
> On Feb 16, 2021, at 4:10 AM, Peter Zijlstra  wrote:
> 
> On Tue, Feb 09, 2021 at 02:16:49PM -0800, Nadav Amit wrote:
>> @@ -816,8 +821,8 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
>>   * doing a speculative memory access.
>>   */
>>  if (info->freed_tables) {
>> -smp_call_function_many(cpumask, flush_tlb_func,
>> -   (void *)info, 1);
>> +on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
>> +  cpumask);
>>  } else {
>>  /*
>>   * Although we could have used on_each_cpu_cond_mask(),
>> @@ -844,14 +849,15 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
>>  if (tlb_is_not_lazy(cpu))
>>  __cpumask_set_cpu(cpu, cond_cpumask);
>>  }
>> -smp_call_function_many(cond_cpumask, flush_tlb_func, (void *)info, 1);
>> +on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
>> +  cpumask);
>>  }
>> }
> 
> Surely on_each_cpu_mask() is more appropriate? There the compiler can do
> the NULL propagation because it's on the same TU.
> 
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -821,8 +821,7 @@ STATIC_NOPV void native_flush_tlb_multi(
>* doing a speculative memory access.
>*/
>   if (info->freed_tables) {
> - on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> -   cpumask);
> + on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
>   } else {
>   /*
>* Although we could have used on_each_cpu_cond_mask(),
> @@ -849,8 +848,7 @@ STATIC_NOPV void native_flush_tlb_multi(
>   if (tlb_is_not_lazy(cpu))
>   __cpumask_set_cpu(cpu, cond_cpumask);
>   }
> - on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> -   cpumask);
> + on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
>   }
> }

Indeed, and there is actually an additional bug - I used cpumask in the
second on_each_cpu_cond_mask() instead of cond_cpumask.



Re: [External] Re: vsock virtio: questions about supporting DGRAM type

2021-02-16 Thread Jiang Wang .
On Tue, Feb 16, 2021 at 12:50 AM Stefano Garzarella  wrote:
>
> On Tue, Feb 16, 2021 at 12:23:19AM -0800, Jiang Wang . wrote:
> >On Tue, Feb 16, 2021 at 12:09 AM Stefano Garzarella wrote:
> >>
> >> On Mon, Feb 15, 2021 at 08:50:36PM -0800, Jiang Wang . wrote:
> >> >On Mon, Feb 15, 2021 at 12:31 AM Stefano Garzarella wrote:
> >> >>
> >> >> On Sat, Feb 13, 2021 at 03:26:18PM -0800, Jiang Wang . wrote:
> >> >> >On Fri, Feb 12, 2021 at 1:02 AM Stefano Garzarella wrote:
> >> >> >>
> >> >> >> Hi Jiang,
> >> >> >>
> >> >> >> CCing Arseny who is working on SOCK_SEQPACKET support for virtio-vsock [1].
> >> >> >>
> >> >> >> On Thu, Feb 11, 2021 at 10:04:34PM -0800, Jiang Wang . wrote:
> >> >> >> >Hi guys,
> >> >> >> >
> >> >> >> >I am working on supporting DGRAM type for virtio/vhost vsock. I
> >> >> >> >already did some work and a draft code is here (which passed my tests,
> >> >> >> >but still need some cleanup and only works from host to guest as of
> >> >> >> >now, will add host to guest soon):
> >> >> >> >https://github.com/Jiang1155/linux/commit/4e89736e0bce15496460ff411cb4694b143d1c3d
> >> >> >> >qemu changes are here:
> >> >> >> >https://github.com/Jiang1155/qemu/commit/7ab778801e3e8969ab98e44539943810a2fb03eb
> >> >> >> >
> >> >> >> >Today, I just noticed that the Asias had an old version of virtio
> >> >> >> >which had both dgram and stream support, see this link:
> >> >> >> >https://kvm.vger.kernel.narkive.com/BMvH9eEr/rfc-v2-0-7-introduce-vm-sockets-virtio-transport#post1
> >> >> >> >
> >> >> >> >But somehow, the dgram part seems never merged to upstream linux (the
> >> >> >> >stream part is merged). If so, does anyone know what is the reason for
> >> >> >> >this? Did we drop dgram support for some specific reason or the code
> >> >> >> >needs some improvement?
> >> >> >>
> >> >> >> I wasn't involved yet in virtio-vsock development when Asias posted
> >> >> >> those patches, so I don't know the exact reason.
> >> >> >>
> >> >> >> Maybe it could be related to how to handle the credit mechanism for
> >> >> >> connection-less sockets and how to manage the packet queue, if for
> >> >> >> example no one is listening.
> >> >> >>
> >> >> >
> >> >> >I see. thanks.
> >> >> >
> >> >> >> >
> >> >> >> >My current code differs from Asias' code in some ways. It does not use
> >> >> >> >credit and does not support fragmentation.  It basically adds two virt
> >> >> >>
> >> >> >> If you don't use credit, do you have some threshold when you start to
> >> >> >> drop packets on the RX side?
> >> >> >>
> >> >> >
> >> >> >As of now, I don't have any threshold to drop packets on RX side. In my
> >> >> >view, DGRAM is like UDP and is a best effort service. If the virtual
> >> >> >queue is full on TX (or the available buffer size is less than the
> >> >> >packet size), I drop the packets on the TX side.
> >> >>
> >> >> I have to check the code, my concern is we should have a limit for the
> >> >> allocation, for example if the user space doesn't consume RX packets.
> >> >>
> >> >
> >> >I think we already have a limit for the allocation. If a DGRAM client
> >> >sends packets while no socket is bound on the server side ,
> >> >the packet will be dropped when looking for the socket. If the socket is
> >> >bound on the server side, but the server program somehow does not
> >> >call recv or similar functions, the packets will be dropped when the
> >> >buffer is full as in your previous patch at here:
> >> >https://lore.kernel.org/patchwork/patch/1141083/
> >> >If there are still other cases that we don't have protection, let me know
> >> >and I can add some threshold.
> >>
> >> Maybe I misunderstood, but I thought you didn't use credit for DGRAM
> >> messages.
> >>
> >I don't use credit for DGRAM. But the dropping code I mentioned does not
> >rely on credit either. Just to make it clear, following is one part of code that
> >I referred to:
> >
> >static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
> >                                        struct virtio_vsock_pkt *pkt)
> >{
> >        if (vvs->rx_bytes + pkt->len > vvs->buf_alloc)
> >                return false;
> >
> >        vvs->rx_bytes += pkt->len;
> >        return true;
> >}
>
> Okay, now it's clear. The transmitter doesn't care about the receiver's
> credit, but the receiver uses that part of the credit mechanism to
> discard packets.
>
> I think that's fine, but when you send the patches, explain it well in
> the cover letter/commit message.
>
Sure, thanks.

> >
> >
> >> If you use it to limit buffer occupancy, then it should be okay, but
> >> again, I still have to look at the code :-)
> >
> >Sure. I think you mean you need to look at my final patches. I will
> >send it later.  Just a friendly reminder, if you just want to get some
> >idea of my patches, you could check this link :
> >https://github.com/Jiang1155/linux/commit/4e89736e0bce15496460ff411cb4694b143d1c3d
>
> It's my fault 

Re: [PATCH] virtio_console: remove pointless check for debugfs_create_dir()

2021-02-16 Thread Amit Shah
On Tue, 2021-02-16 at 16:04 +0100, Greg Kroah-Hartman wrote:
> It is impossible for debugfs_create_dir() to return NULL, so checking
> for it gives people a false sense that they actually are doing something
> if an error occurs.  As there is no need to ever change kernel logic if
> debugfs is working "properly" or not, there is no need to check the
> return value of debugfs calls, so remove the checks here as they will
> never be triggered and are wrong.
> 
> Cc: Amit Shah 
> Cc: Arnd Bergmann 
> Cc: virtualization@lists.linux-foundation.org
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Greg Kroah-Hartman 
> ---
>  drivers/char/virtio_console.c | 23 +--
>  1 file changed, 9 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> index 1836cc56e357..59dfd9c421a1 100644
> --- a/drivers/char/virtio_console.c
> +++ b/drivers/char/virtio_console.c
> @@ -1456,18 +1456,15 @@ static int add_port(struct ports_device *portdev, u32 id)
>*/
>   send_control_msg(port, VIRTIO_CONSOLE_PORT_READY, 1);
>  
> - if (pdrvdata.debugfs_dir) {
> - /*
> -  * Finally, create the debugfs file that we can use to
> -  * inspect a port's state at any time
> -  */
> - snprintf(debugfs_name, sizeof(debugfs_name), "vport%up%u",
> -  port->portdev->vdev->index, id);
> - port->debugfs_file = debugfs_create_file(debugfs_name, 0444,
> -  pdrvdata.debugfs_dir,
> -  port,
> -  &port_debugfs_fops);
> - }
> + /*
> +  * Finally, create the debugfs file that we can use to
> +  * inspect a port's state at any time
> +  */
> + snprintf(debugfs_name, sizeof(debugfs_name), "vport%up%u",
> +  port->portdev->vdev->index, id);
> + port->debugfs_file = debugfs_create_file(debugfs_name, 0444,
> +  pdrvdata.debugfs_dir,
> +  port, &port_debugfs_fops);
>   return 0;
>  
>  free_inbufs:
> @@ -2244,8 +2241,6 @@ static int __init init(void)
>   }
>  
>   pdrvdata.debugfs_dir = debugfs_create_dir("virtio-ports", NULL);
> - if (!pdrvdata.debugfs_dir)
> - pr_warn("Error creating debugfs dir for virtio-ports\n");
>   INIT_LIST_HEAD(&pdrvdata.consoles);
>   INIT_LIST_HEAD(&pdrvdata.portdevs);
> 

Reviewed-by: Amit Shah 



[PATCH] virtio_console: remove pointless check for debugfs_create_dir()

2021-02-16 Thread Greg Kroah-Hartman
It is impossible for debugfs_create_dir() to return NULL, so checking
for it gives people a false sense that they actually are doing something
if an error occurs.  As there is no need to ever change kernel logic if
debugfs is working "properly" or not, there is no need to check the
return value of debugfs calls, so remove the checks here as they will
never be triggered and are wrong.

Cc: Amit Shah 
Cc: Arnd Bergmann 
Cc: virtualization@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/char/virtio_console.c | 23 +--
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
index 1836cc56e357..59dfd9c421a1 100644
--- a/drivers/char/virtio_console.c
+++ b/drivers/char/virtio_console.c
@@ -1456,18 +1456,15 @@ static int add_port(struct ports_device *portdev, u32 id)
 */
send_control_msg(port, VIRTIO_CONSOLE_PORT_READY, 1);
 
-   if (pdrvdata.debugfs_dir) {
-   /*
-* Finally, create the debugfs file that we can use to
-* inspect a port's state at any time
-*/
-   snprintf(debugfs_name, sizeof(debugfs_name), "vport%up%u",
-port->portdev->vdev->index, id);
-   port->debugfs_file = debugfs_create_file(debugfs_name, 0444,
-pdrvdata.debugfs_dir,
-port,
-&port_debugfs_fops);
-   }
+   /*
+* Finally, create the debugfs file that we can use to
+* inspect a port's state at any time
+*/
+   snprintf(debugfs_name, sizeof(debugfs_name), "vport%up%u",
+port->portdev->vdev->index, id);
+   port->debugfs_file = debugfs_create_file(debugfs_name, 0444,
+pdrvdata.debugfs_dir,
+port, &port_debugfs_fops);
return 0;
 
 free_inbufs:
@@ -2244,8 +2241,6 @@ static int __init init(void)
}
 
pdrvdata.debugfs_dir = debugfs_create_dir("virtio-ports", NULL);
-   if (!pdrvdata.debugfs_dir)
-   pr_warn("Error creating debugfs dir for virtio-ports\n");
INIT_LIST_HEAD(&pdrvdata.consoles);
INIT_LIST_HEAD(&pdrvdata.portdevs);
 
-- 
2.30.1



[PATCH for 5.10 v2 4/5] vdpa_sim: make 'config' generic and usable for any device type

2021-02-16 Thread Stefano Garzarella
commit f37cbbc65178e0a45823d281d290c4c02da9631c upstream.

Add a new 'config_size' attribute in 'vdpasim_dev_attr' and allocate
'config' dynamically to support any device type.

Acked-by: Jason Wang 
Signed-off-by: Stefano Garzarella 
Link: https://lore.kernel.org/r/20201215144256.155342-12-sgarz...@redhat.com
Signed-off-by: Michael S. Tsirkin 
Cc:  # 5.10.x
Signed-off-by: Stefano Garzarella 
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 17 +
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index bc27fa485c3e..d9c494455156 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -70,6 +70,7 @@ static u64 vdpasim_features = (1ULL << VIRTIO_F_ANY_LAYOUT) |
  (1ULL << VIRTIO_NET_F_MAC);
 
 struct vdpasim_dev_attr {
+   size_t config_size;
int nvqs;
 };
 
@@ -81,7 +82,8 @@ struct vdpasim {
struct vdpasim_dev_attr dev_attr;
/* spinlock to synchronize virtqueue state */
spinlock_t lock;
-   struct virtio_net_config config;
+   /* virtio config according to device type */
+   void *config;
struct vhost_iotlb *iommu;
void *buffer;
u32 status;
@@ -380,6 +382,10 @@ static struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
goto err_iommu;
set_dma_ops(dev, &vdpasim_dma_ops);
 
+   vdpasim->config = kzalloc(dev_attr->config_size, GFP_KERNEL);
+   if (!vdpasim->config)
+   goto err_iommu;
+
vdpasim->vqs = kcalloc(dev_attr->nvqs, sizeof(struct vdpasim_virtqueue),
   GFP_KERNEL);
if (!vdpasim->vqs)
@@ -518,7 +524,8 @@ static u64 vdpasim_get_features(struct vdpa_device *vdpa)
 static int vdpasim_set_features(struct vdpa_device *vdpa, u64 features)
 {
struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
-   struct virtio_net_config *config = &vdpasim->config;
+   struct virtio_net_config *config =
+   (struct virtio_net_config *)vdpasim->config;
 
/* DMA mapping must be done by driver */
if (!(features & (1ULL << VIRTIO_F_ACCESS_PLATFORM)))
@@ -588,8 +595,8 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset,
 {
struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 
-   if (offset + len < sizeof(struct virtio_net_config))
-   memcpy(buf, (u8 *)&vdpasim->config + offset, len);
+   if (offset + len < vdpasim->dev_attr.config_size)
+   memcpy(buf, vdpasim->config + offset, len);
 }
 
 static void vdpasim_set_config(struct vdpa_device *vdpa, unsigned int offset,
@@ -676,6 +683,7 @@ static void vdpasim_free(struct vdpa_device *vdpa)
if (vdpasim->iommu)
vhost_iotlb_free(vdpasim->iommu);
kfree(vdpasim->vqs);
+   kfree(vdpasim->config);
 }
 
 static const struct vdpa_config_ops vdpasim_net_config_ops = {
@@ -736,6 +744,7 @@ static int __init vdpasim_dev_init(void)
struct vdpasim_dev_attr dev_attr = {};
 
dev_attr.nvqs = VDPASIM_VQ_NUM;
+   dev_attr.config_size = sizeof(struct virtio_net_config);
 
vdpasim_dev = vdpasim_create(&dev_attr);
 
-- 
2.29.2



[PATCH for 5.10 v2 5/5] vdpa_sim: add get_config callback in vdpasim_dev_attr

2021-02-16 Thread Stefano Garzarella
commit 65b709586e222fa6ffd4166ac7fdb5d5dad113ee upstream.

The get_config callback can be used by the device to fill the
config structure.
The callback will be invoked in vdpasim_get_config() before copying
bytes into caller buffer.

Move vDPA-net config updates from vdpasim_set_features() to the
new vdpasim_net_get_config() callback.
This is safe since in vdpa_get_config() we already check that
.set_features() callback is called before .get_config().

Signed-off-by: Stefano Garzarella 
Link: https://lore.kernel.org/r/20201215144256.155342-13-sgarz...@redhat.com
Signed-off-by: Michael S. Tsirkin 
Cc:  # 5.10.x
Signed-off-by: Stefano Garzarella 
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 35 +++-
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index d9c494455156..f2ad450db547 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -69,9 +69,12 @@ static u64 vdpasim_features = (1ULL << VIRTIO_F_ANY_LAYOUT) |
  (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
  (1ULL << VIRTIO_NET_F_MAC);
 
+struct vdpasim;
+
 struct vdpasim_dev_attr {
size_t config_size;
int nvqs;
+   void (*get_config)(struct vdpasim *vdpasim, void *config);
 };
 
 /* State of each vdpasim device */
@@ -524,8 +527,6 @@ static u64 vdpasim_get_features(struct vdpa_device *vdpa)
 static int vdpasim_set_features(struct vdpa_device *vdpa, u64 features)
 {
struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
-   struct virtio_net_config *config =
-   (struct virtio_net_config *)vdpasim->config;
 
/* DMA mapping must be done by driver */
if (!(features & (1ULL << VIRTIO_F_ACCESS_PLATFORM)))
@@ -533,16 +534,6 @@ static int vdpasim_set_features(struct vdpa_device *vdpa, u64 features)
 
vdpasim->features = features & vdpasim_features;
 
-   /* We generally only know whether guest is using the legacy interface
-* here, so generally that's the earliest we can set config fields.
-* Note: We actually require VIRTIO_F_ACCESS_PLATFORM above which
-* implies VIRTIO_F_VERSION_1, but let's not try to be clever here.
-*/
-
-   config->mtu = cpu_to_vdpasim16(vdpasim, 1500);
-   config->status = cpu_to_vdpasim16(vdpasim, VIRTIO_NET_S_LINK_UP);
-   memcpy(config->mac, macaddr_buf, ETH_ALEN);
-
return 0;
 }
 
@@ -595,8 +586,13 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset,
 {
struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 
-   if (offset + len < vdpasim->dev_attr.config_size)
-   memcpy(buf, vdpasim->config + offset, len);
+   if (offset + len > vdpasim->dev_attr.config_size)
+   return;
+
+   if (vdpasim->dev_attr.get_config)
+   vdpasim->dev_attr.get_config(vdpasim, vdpasim->config);
+
+   memcpy(buf, vdpasim->config + offset, len);
 }
 
 static void vdpasim_set_config(struct vdpa_device *vdpa, unsigned int offset,
@@ -739,12 +735,23 @@ static const struct vdpa_config_ops vdpasim_net_batch_config_ops = {
.free   = vdpasim_free,
 };
 
+static void vdpasim_net_get_config(struct vdpasim *vdpasim, void *config)
+{
+   struct virtio_net_config *net_config =
+   (struct virtio_net_config *)config;
+
+   net_config->mtu = cpu_to_vdpasim16(vdpasim, 1500);
+   net_config->status = cpu_to_vdpasim16(vdpasim, VIRTIO_NET_S_LINK_UP);
+   memcpy(net_config->mac, macaddr_buf, ETH_ALEN);
+}
+
 static int __init vdpasim_dev_init(void)
 {
struct vdpasim_dev_attr dev_attr = {};
 
dev_attr.nvqs = VDPASIM_VQ_NUM;
dev_attr.config_size = sizeof(struct virtio_net_config);
+   dev_attr.get_config = vdpasim_net_get_config;
 
vdpasim_dev = vdpasim_create(&dev_attr);
 
-- 
2.29.2



[PATCH for 5.10 v2 1/5] vdpa_sim: remove hard-coded virtq count

2021-02-16 Thread Stefano Garzarella
From: Max Gurtovoy 

commit 423248d60d2b655321fc49eca1545f95a1bc9d6c upstream.

Add a new attribute that will define the number of virt queues to be
created for the vdpasim device.

Signed-off-by: Max Gurtovoy 
[sgarzare: replace kmalloc_array() with kcalloc()]
Acked-by: Jason Wang 
Signed-off-by: Stefano Garzarella 
Link: https://lore.kernel.org/r/20201215144256.155342-4-sgarz...@redhat.com
Signed-off-by: Michael S. Tsirkin 
Cc:  # 5.10.x
Signed-off-by: Stefano Garzarella 
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 18 +-
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 6a90fdb9cbfc..ee8f24a4643b 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -70,7 +70,7 @@ static u64 vdpasim_features = (1ULL << VIRTIO_F_ANY_LAYOUT) |
 /* State of each vdpasim device */
 struct vdpasim {
struct vdpa_device vdpa;
-   struct vdpasim_virtqueue vqs[VDPASIM_VQ_NUM];
+   struct vdpasim_virtqueue *vqs;
struct work_struct work;
/* spinlock to synchronize virtqueue state */
spinlock_t lock;
@@ -80,6 +80,7 @@ struct vdpasim {
u32 status;
u32 generation;
u64 features;
+   int nvqs;
/* spinlock to synchronize iommu table */
spinlock_t iommu_lock;
 };
@@ -144,7 +145,7 @@ static void vdpasim_reset(struct vdpasim *vdpasim)
 {
int i;
 
-   for (i = 0; i < VDPASIM_VQ_NUM; i++)
+   for (i = 0; i < vdpasim->nvqs; i++)
vdpasim_vq_reset(&vdpasim->vqs[i]);
 
spin_lock(&vdpasim->iommu_lock);
@@ -350,7 +351,7 @@ static struct vdpasim *vdpasim_create(void)
const struct vdpa_config_ops *ops;
struct vdpasim *vdpasim;
struct device *dev;
-   int ret = -ENOMEM;
+   int i, ret = -ENOMEM;
 
if (batch_mapping)
ops = &vdpasim_net_batch_config_ops;
@@ -361,6 +362,7 @@ static struct vdpasim *vdpasim_create(void)
if (!vdpasim)
goto err_alloc;
 
+   vdpasim->nvqs = VDPASIM_VQ_NUM;
INIT_WORK(&vdpasim->work, vdpasim_work);
spin_lock_init(&vdpasim->lock);
spin_lock_init(&vdpasim->iommu_lock);
@@ -371,6 +373,11 @@ static struct vdpasim *vdpasim_create(void)
goto err_iommu;
set_dma_ops(dev, _dma_ops);
 
+   vdpasim->vqs = kcalloc(vdpasim->nvqs, sizeof(struct vdpasim_virtqueue),
+  GFP_KERNEL);
+   if (!vdpasim->vqs)
+   goto err_iommu;
+
vdpasim->iommu = vhost_iotlb_alloc(2048, 0);
if (!vdpasim->iommu)
goto err_iommu;
@@ -389,8 +396,8 @@ static struct vdpasim *vdpasim_create(void)
eth_random_addr(vdpasim->config.mac);
}
 
-   vringh_set_iotlb(&vdpasim->vqs[0].vring, vdpasim->iommu);
-   vringh_set_iotlb(&vdpasim->vqs[1].vring, vdpasim->iommu);
+   for (i = 0; i < vdpasim->nvqs; i++)
+   vringh_set_iotlb(&vdpasim->vqs[i].vring, vdpasim->iommu);
 
vdpasim->vdpa.dma_dev = dev;
ret = vdpa_register_device(&vdpasim->vdpa);
@@ -659,6 +666,7 @@ static void vdpasim_free(struct vdpa_device *vdpa)
kfree(vdpasim->buffer);
if (vdpasim->iommu)
vhost_iotlb_free(vdpasim->iommu);
+   kfree(vdpasim->vqs);
 }
 
 static const struct vdpa_config_ops vdpasim_net_config_ops = {
-- 
2.29.2



[PATCH for 5.10 v2 3/5] vdpa_sim: store parsed MAC address in a buffer

2021-02-16 Thread Stefano Garzarella
commit cf1a3b35382c10ce315c32bd2b3d7789897fbe13 upstream.

As preparation for the next patches, we store the MAC address,
parsed during the vdpasim_create(), in a buffer that will be used
to fill 'config' together with other configurations.

Acked-by: Jason Wang 
Signed-off-by: Stefano Garzarella 
Link: https://lore.kernel.org/r/20201215144256.155342-11-sgarz...@redhat.com
Signed-off-by: Michael S. Tsirkin 
Cc:  # 5.10.x
Signed-off-by: Stefano Garzarella 
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 6a5603f9d24c..bc27fa485c3e 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -42,6 +42,8 @@ static char *macaddr;
 module_param(macaddr, charp, 0);
 MODULE_PARM_DESC(macaddr, "Ethernet MAC address");
 
+u8 macaddr_buf[ETH_ALEN];
+
 struct vdpasim_virtqueue {
struct vringh vring;
struct vringh_kiov iov;
@@ -392,13 +394,13 @@ static struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
goto err_iommu;
 
if (macaddr) {
-   mac_pton(macaddr, vdpasim->config.mac);
-   if (!is_valid_ether_addr(vdpasim->config.mac)) {
+   mac_pton(macaddr, macaddr_buf);
+   if (!is_valid_ether_addr(macaddr_buf)) {
ret = -EADDRNOTAVAIL;
goto err_iommu;
}
} else {
-   eth_random_addr(vdpasim->config.mac);
+   eth_random_addr(macaddr_buf);
}
 
for (i = 0; i < dev_attr->nvqs; i++)
@@ -532,6 +534,8 @@ static int vdpasim_set_features(struct vdpa_device *vdpa, u64 features)
 
config->mtu = cpu_to_vdpasim16(vdpasim, 1500);
config->status = cpu_to_vdpasim16(vdpasim, VIRTIO_NET_S_LINK_UP);
+   memcpy(config->mac, macaddr_buf, ETH_ALEN);
+
return 0;
 }
 
-- 
2.29.2



[PATCH for 5.10 v2 2/5] vdpa_sim: add struct vdpasim_dev_attr for device attributes

2021-02-16 Thread Stefano Garzarella
commit 6c6e28fe45794054410ad8cd2770af69fbe0338d upstream.

vdpasim_dev_attr will contain device specific attributes. We start by
moving the number of virtqueues (i.e. nvqs) to vdpasim_dev_attr.

vdpasim_create() creates a new vDPA simulator following the device
attributes defined in the vdpasim_dev_attr parameter.

Co-developed-by: Max Gurtovoy 
Signed-off-by: Max Gurtovoy 
Acked-by: Jason Wang 
Signed-off-by: Stefano Garzarella 
Link: https://lore.kernel.org/r/20201215144256.155342-7-sgarz...@redhat.com
Signed-off-by: Michael S. Tsirkin 
Cc:  # 5.10.x
Signed-off-by: Stefano Garzarella 
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 25 +
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index ee8f24a4643b..6a5603f9d24c 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -67,11 +67,16 @@ static u64 vdpasim_features = (1ULL << VIRTIO_F_ANY_LAYOUT) |
  (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
  (1ULL << VIRTIO_NET_F_MAC);
 
+struct vdpasim_dev_attr {
+   int nvqs;
+};
+
 /* State of each vdpasim device */
 struct vdpasim {
struct vdpa_device vdpa;
struct vdpasim_virtqueue *vqs;
struct work_struct work;
+   struct vdpasim_dev_attr dev_attr;
/* spinlock to synchronize virtqueue state */
spinlock_t lock;
struct virtio_net_config config;
@@ -80,7 +85,6 @@ struct vdpasim {
u32 status;
u32 generation;
u64 features;
-   int nvqs;
/* spinlock to synchronize iommu table */
spinlock_t iommu_lock;
 };
@@ -145,7 +149,7 @@ static void vdpasim_reset(struct vdpasim *vdpasim)
 {
int i;
 
-   for (i = 0; i < vdpasim->nvqs; i++)
+   for (i = 0; i < vdpasim->dev_attr.nvqs; i++)
vdpasim_vq_reset(&vdpasim->vqs[i]);
 
spin_lock(&vdpasim->iommu_lock);
@@ -346,7 +350,7 @@ static const struct dma_map_ops vdpasim_dma_ops = {
 static const struct vdpa_config_ops vdpasim_net_config_ops;
 static const struct vdpa_config_ops vdpasim_net_batch_config_ops;
 
-static struct vdpasim *vdpasim_create(void)
+static struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
 {
const struct vdpa_config_ops *ops;
struct vdpasim *vdpasim;
@@ -358,11 +362,12 @@ static struct vdpasim *vdpasim_create(void)
else
ops = &vdpasim_net_config_ops;
 
-   vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, VDPASIM_VQ_NUM);
+   vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops,
+   dev_attr->nvqs);
if (!vdpasim)
goto err_alloc;
 
-   vdpasim->nvqs = VDPASIM_VQ_NUM;
+   vdpasim->dev_attr = *dev_attr;
INIT_WORK(&vdpasim->work, vdpasim_work);
spin_lock_init(&vdpasim->lock);
spin_lock_init(&vdpasim->iommu_lock);
@@ -373,7 +378,7 @@ static struct vdpasim *vdpasim_create(void)
goto err_iommu;
set_dma_ops(dev, _dma_ops);
 
-   vdpasim->vqs = kcalloc(vdpasim->nvqs, sizeof(struct vdpasim_virtqueue),
+   vdpasim->vqs = kcalloc(dev_attr->nvqs, sizeof(struct vdpasim_virtqueue),
   GFP_KERNEL);
if (!vdpasim->vqs)
goto err_iommu;
@@ -396,7 +401,7 @@ static struct vdpasim *vdpasim_create(void)
eth_random_addr(vdpasim->config.mac);
}
 
-   for (i = 0; i < vdpasim->nvqs; i++)
+   for (i = 0; i < dev_attr->nvqs; i++)
vringh_set_iotlb(&vdpasim->vqs[i].vring, vdpasim->iommu);
 
vdpasim->vdpa.dma_dev = dev;
@@ -724,7 +729,11 @@ static const struct vdpa_config_ops vdpasim_net_batch_config_ops = {
 
 static int __init vdpasim_dev_init(void)
 {
-   vdpasim_dev = vdpasim_create();
+   struct vdpasim_dev_attr dev_attr = {};
+
+   dev_attr.nvqs = VDPASIM_VQ_NUM;
+
vdpasim_dev = vdpasim_create(&dev_attr);
 
if (!IS_ERR(vdpasim_dev))
return 0;
-- 
2.29.2



[PATCH for 5.10 v2 0/5] vdpa_sim: fix param validation in vdpasim_get_config()

2021-02-16 Thread Stefano Garzarella
v1: https://lore.kernel.org/stable/20210211162519.215418-1-sgarz...@redhat.com/

v2:
- backport the upstream patch and related patches needed

Commit 65b709586e22 ("vdpa_sim: add get_config callback in
vdpasim_dev_attr") unintentionally solved an issue in vdpasim_get_config()
upstream while refactoring vdpa_sim.c to support multiple devices.

Before that patch, if 'offset + len' was equal to
sizeof(struct virtio_net_config), the entire buffer wasn't filled,
returning incorrect values to the caller.

Since 'vdpasim->config' type is 'struct virtio_net_config', we can
safely copy its content under this condition.

The minimum set of patches needed to backport the fix is the following:

   423248d60d2b vdpa_sim: remove hard-coded virtq count
   6c6e28fe4579 vdpa_sim: add struct vdpasim_dev_attr for device attributes
   cf1a3b35382c vdpa_sim: store parsed MAC address in a buffer
   f37cbbc65178 vdpa_sim: make 'config' generic and usable for any device type
   65b709586e22 vdpa_sim: add get_config callback in vdpasim_dev_attr

The patches apply fairly cleanly. There are a few contextual differences
due to the lack of the other patches:

   $ git backport-diff -u master -r linux-5.10.y..HEAD
   Key:
   [----] : patches are identical
   [####] : number of functional differences between upstream/downstream patch
   [down] : patch is downstream-only
   The flags [FC] indicate (F)unctional and (C)ontextual differences, respectively

   001/5:[----] [--] 'vdpa_sim: remove hard-coded virtq count'
   002/5:[----] [-C] 'vdpa_sim: add struct vdpasim_dev_attr for device attributes'
   003/5:[----] [--] 'vdpa_sim: store parsed MAC address in a buffer'
   004/5:[----] [-C] 'vdpa_sim: make 'config' generic and usable for any device type'
   005/5:[----] [-C] 'vdpa_sim: add get_config callback in vdpasim_dev_attr'

Thanks,
Stefano

Max Gurtovoy (1):
  vdpa_sim: remove hard-coded virtq count

Stefano Garzarella (4):
  vdpa_sim: add struct vdpasim_dev_attr for device attributes
  vdpa_sim: store parsed MAC address in a buffer
  vdpa_sim: make 'config' generic and usable for any device type
  vdpa_sim: add get_config callback in vdpasim_dev_attr

 drivers/vdpa/vdpa_sim/vdpa_sim.c | 83 +++-
 1 file changed, 60 insertions(+), 23 deletions(-)

-- 
2.29.2



Re: [PATCH for 5.10] vdpa_sim: fix param validation in vdpasim_get_config()

2021-02-16 Thread Stefano Garzarella

On Mon, Feb 15, 2021 at 04:23:54PM +0100, Greg KH wrote:

On Mon, Feb 15, 2021 at 04:03:21PM +0100, Stefano Garzarella wrote:

On Mon, Feb 15, 2021 at 03:32:19PM +0100, Greg KH wrote:
> On Thu, Feb 11, 2021 at 05:25:19PM +0100, Stefano Garzarella wrote:
> > Commit 65b709586e222fa6ffd4166ac7fdb5d5dad113ee upstream.
>
> No, this really is not that commit, so please do not say it is.

Oops, sorry.

>
> > Before this patch, if 'offset + len' was equal to
> > sizeof(struct virtio_net_config), the entire buffer wasn't filled,
> > returning incorrect values to the caller.
> >
> > Since 'vdpasim->config' type is 'struct virtio_net_config', we can
> > safely copy its content under this condition.
> >
> > Commit 65b709586e22 ("vdpa_sim: add get_config callback in
> > vdpasim_dev_attr") unintentionally solved it upstream while
> > refactoring vdpa_sim.c to support multiple devices. But we don't want
> > to backport it to stable branches as it contains many changes.
> >
> > Fixes: 2c53d0f64c06 ("vdpasim: vDPA device simulator")
> > Cc:  # 5.10.x
> > Signed-off-by: Stefano Garzarella 
> > ---
> >  drivers/vdpa/vdpa_sim/vdpa_sim.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
> > index 6a90fdb9cbfc..8ca178d7b02f 100644
> > --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
> > +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
> > @@ -572,7 +572,7 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset,
> >  {
> >   struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
> >
> > - if (offset + len < sizeof(struct virtio_net_config))
> > + if (offset + len <= sizeof(struct virtio_net_config))
> >   memcpy(buf, (u8 *)>config + offset, len);
> >  }
>
> I'll be glad to take a one-off patch, but why can't we take the real
> upstream patch?  That is always the better long-term solution, right?

Because that patch depends on the following patches merged in v5.11-rc1
while refactoring vdpa_sim:
  f37cbbc65178 vdpa_sim: make 'config' generic and usable for any device type
  cf1a3b35382c vdpa_sim: store parsed MAC address in a buffer
  a13b5918fdd0 vdpa_sim: add work_fn in vdpasim_dev_attr
  011c35bac5ef vdpa_sim: add supported_features field in vdpasim_dev_attr
  2f8f46188805 vdpa_sim: add device id field in vdpasim_dev_attr
  6c6e28fe4579 vdpa_sim: add struct vdpasim_dev_attr for device attributes
  36a9c3063025 vdpa_sim: rename vdpasim_config_ops variables
  423248d60d2b vdpa_sim: remove hard-coded virtq count

Maybe we can skip some of them, but IMHO it should be less risky to apply
only this change.

If you want I can try to figure out the minimum sub-set of patches needed
for 65b709586e22 ("vdpa_sim: add get_config callback in vdpasim_dev_attr").


The minimum is always nice :)



The minimum set, including the patch that fixes the issue, is the 
following:


  65b709586e22 vdpa_sim: add get_config callback in vdpasim_dev_attr
  f37cbbc65178 vdpa_sim: make 'config' generic and usable for any device type
  cf1a3b35382c vdpa_sim: store parsed MAC address in a buffer
  6c6e28fe4579 vdpa_sim: add struct vdpasim_dev_attr for device attributes
  423248d60d2b vdpa_sim: remove hard-coded virtq count

The patches apply fairly cleanly. There are a few contextual differences 
due to the lack of the other patches:


  $ git backport-diff -u master -r linux-5.10.y..HEAD
  Key:
  [----] : patches are identical
  [####] : number of functional differences between upstream/downstream patch
  [down] : patch is downstream-only
  The flags [FC] indicate (F)unctional and (C)ontextual differences, respectively

  001/5:[----] [--] 'vdpa_sim: remove hard-coded virtq count'
  002/5:[----] [-C] 'vdpa_sim: add struct vdpasim_dev_attr for device attributes'
  003/5:[----] [--] 'vdpa_sim: store parsed MAC address in a buffer'
  004/5:[----] [-C] 'vdpa_sim: make 'config' generic and usable for any device type'
  005/5:[----] [-C] 'vdpa_sim: add get_config callback in vdpasim_dev_attr'


If it's just too much churn for no good reason, then yes, the one-line
change above will be ok, but you need to document the heck out of why
this is not upstream and that it is a one-off thing.



Shortly I'll send the series to sta...@vger.kernel.org so you can judge 
if it's okay or better to resend this patch with a better description.


Thanks
Stefano



Re: [PATCH 09/10] drm/qxl: map/unmap framebuffers in prepare_fb+cleanup_fb callbacks.

2021-02-16 Thread Thomas Zimmermann



On 16.02.21 14:27, Thomas Zimmermann wrote:

Hi

this is a shadow-buffered plane. Did you consider using the new helpers 
for shadow-buffered planes? They will map the user BO for you and 
provide the mapping in the plane state.


 From there, you should implement your own plane state on top of struct 
drm_shadow_plane_state, and also move all the other allocations and 
vmaps into prepare_fb and cleanup_fb. Most of this is not actually 
allowed in commit tails. All we'd have to do is to export the reset, 
duplicate and destroy code; similar to what 
__drm_atomic_helper_plane_reset() does.


AFAICT the cursor_bo is used to implement double buffering for the 
cursor image.


Ideally, you can do what ast does: pre-allocate/vmap 2 BOs at the end of 
the vram. Then pageflip between them in atomic_update(). Resolves all 
the allocation and mapping headaches.


Best regards
Thomas



Best regards
Thomas


On 16.02.21 12:37, Gerd Hoffmann wrote:

We don't have to map in atomic_update callback then,
making locking a bit less complicated.

Signed-off-by: Gerd Hoffmann 
---
  drivers/gpu/drm/qxl/qxl_display.c | 14 +-
  1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c

index 7500560db8e4..39b8c5116d34 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -584,7 +584,6 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,

  struct drm_gem_object *obj;
struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;

  int ret;
-    struct dma_buf_map user_map;
  struct dma_buf_map cursor_map;
  void *user_ptr;
  int size = 64*64*4;
@@ -599,11 +598,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,

  obj = fb->obj[0];
  user_bo = gem_to_qxl_bo(obj);
-    /* pinning is done in the prepare/cleanup framevbuffer */
-    ret = qxl_bo_kmap_locked(user_bo, &user_map);
-    if (ret)
-    goto out_free_release;
-    user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */

+    /* mapping is done in the prepare/cleanup framevbuffer */
+    user_ptr = user_bo->map.vaddr; /* TODO: Use mapping abstraction properly */

  ret = qxl_alloc_bo_reserved(qdev, release,
  sizeof(struct qxl_cursor) + size,
@@ -639,7 +635,6 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,

  cursor->chunk.data_size = size;
  memcpy(cursor->chunk.data, user_ptr, size);
  qxl_bo_kunmap_locked(cursor_bo);
-    qxl_bo_kunmap_locked(user_bo);
  cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
  cmd->u.set.visible = 1;
@@ -778,6 +773,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,

  struct drm_gem_object *obj;
  struct qxl_bo *user_bo;
  struct qxl_surface surf;
+    struct dma_buf_map unused;
  if (!new_state->fb)
  return 0;
@@ -815,7 +811,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,

  }
  }
-    return qxl_bo_pin(user_bo);
+    return qxl_bo_kmap(user_bo, &unused);
  }
  static void qxl_plane_cleanup_fb(struct drm_plane *plane,
@@ -834,7 +830,7 @@ static void qxl_plane_cleanup_fb(struct drm_plane *plane,

  obj = old_state->fb->obj[0];
  user_bo = gem_to_qxl_bo(obj);
-    qxl_bo_unpin(user_bo);
+    qxl_bo_kunmap(user_bo);
  if (old_state->fb != plane->state->fb && user_bo->shadow) {
  qxl_bo_unpin(user_bo->shadow);




___
dri-devel mailing list
dri-de...@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel



--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




Re: [PATCH 10/10] drm/qxl: add lock asserts to qxl_bo_kmap_locked + qxl_bo_kunmap_locked

2021-02-16 Thread Thomas Zimmermann

Hi

On 16.02.21 12:37, Gerd Hoffmann wrote:

Try avoid re-introducing locking bugs.

Signed-off-by: Gerd Hoffmann 


Acked-by: Thomas Zimmermann 

IMHO this should eventually be done in the rsp TTM functions. But so 
far, I'd expect this to break some drivers.


Best regards
Thomas


---
  drivers/gpu/drm/qxl/qxl_object.c | 4 
  1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 22748b9566af..90d5e5b7f927 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -162,6 +162,8 @@ int qxl_bo_kmap_locked(struct qxl_bo *bo, struct dma_buf_map *map)
  {
int r;
  
+	dma_resv_assert_held(bo->tbo.base.resv);
+
if (bo->kptr) {
bo->map_count++;
goto out;
@@ -236,6 +238,8 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
  
  void qxl_bo_kunmap_locked(struct qxl_bo *bo)

  {
+   dma_resv_assert_held(bo->tbo.base.resv);
+
if (bo->kptr == NULL)
return;
bo->map_count--;




Re: [PATCH 09/10] drm/qxl: map/unmap framebuffers in prepare_fb+cleanup_fb callbacks.

2021-02-16 Thread Thomas Zimmermann

Hi

this is a shadow-buffered plane. Did you consider using the new helpers 
for shadow-buffered planes? They will map the user BO for you and 
provide the mapping in the plane state.


From there, you should implement your own plane state on top of struct 
drm_shadow_plane_state, and also move all the other allocations and 
vmaps into prepare_fb and cleanup_fb. Most of this is not actually 
allowed in commit tails. All we'd have to do is to export the reset, 
duplicate and destroy code; similar to what 
__drm_atomic_helper_plane_reset() does.


Best regards
Thomas


On 16.02.21 12:37, Gerd Hoffmann wrote:

We don't have to map in atomic_update callback then,
making locking a bit less complicated.

Signed-off-by: Gerd Hoffmann 
---
  drivers/gpu/drm/qxl/qxl_display.c | 14 +-
  1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 7500560db8e4..39b8c5116d34 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -584,7 +584,6 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
struct drm_gem_object *obj;
struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
int ret;
-   struct dma_buf_map user_map;
struct dma_buf_map cursor_map;
void *user_ptr;
int size = 64*64*4;
@@ -599,11 +598,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
obj = fb->obj[0];
user_bo = gem_to_qxl_bo(obj);
  
-		/* pinning is done in the prepare/cleanup framevbuffer */

-   ret = qxl_bo_kmap_locked(user_bo, &user_map);
-   if (ret)
-   goto out_free_release;
-   user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
+   /* mapping is done in the prepare/cleanup framevbuffer */
+   user_ptr = user_bo->map.vaddr; /* TODO: Use mapping abstraction properly */
  
  		ret = qxl_alloc_bo_reserved(qdev, release,

sizeof(struct qxl_cursor) + size,
@@ -639,7 +635,6 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
cursor->chunk.data_size = size;
memcpy(cursor->chunk.data, user_ptr, size);
qxl_bo_kunmap_locked(cursor_bo);
-   qxl_bo_kunmap_locked(user_bo);
  
  		cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);

cmd->u.set.visible = 1;
@@ -778,6 +773,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
struct drm_gem_object *obj;
struct qxl_bo *user_bo;
struct qxl_surface surf;
+   struct dma_buf_map unused;
  
  	if (!new_state->fb)

return 0;
@@ -815,7 +811,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
}
}
  
-	return qxl_bo_pin(user_bo);

+   return qxl_bo_kmap(user_bo, &unused);
  }
  
  static void qxl_plane_cleanup_fb(struct drm_plane *plane,

@@ -834,7 +830,7 @@ static void qxl_plane_cleanup_fb(struct drm_plane *plane,
  
  	obj = old_state->fb->obj[0];

user_bo = gem_to_qxl_bo(obj);
-   qxl_bo_unpin(user_bo);
+   qxl_bo_kunmap(user_bo);
  
  	if (old_state->fb != plane->state->fb && user_bo->shadow) {

qxl_bo_unpin(user_bo->shadow);




Re: [PATCH 08/10] drm/qxl: fix monitors object kmap

2021-02-16 Thread Thomas Zimmermann



On 16.02.21 12:37, Gerd Hoffmann wrote:

Use the correct kmap variant.  We don't hold a reservation here,
so we can't use the _locked variant.  We can drop the pin because
qxl_bo_kmap will do that for us.

Signed-off-by: Gerd Hoffmann 


Acked-by: Thomas Zimmermann 


---
  drivers/gpu/drm/qxl/qxl_display.c | 7 ++-
  1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 8672385a1caf..7500560db8e4 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -1159,12 +1159,10 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
}
qdev->monitors_config_bo = gem_to_qxl_bo(gobj);
  
-	ret = qxl_bo_pin(qdev->monitors_config_bo);

+   ret = qxl_bo_kmap(qdev->monitors_config_bo, &map);
if (ret)
return ret;
  
-	qxl_bo_kmap_locked(qdev->monitors_config_bo, &map);

-
qdev->monitors_config = qdev->monitors_config_bo->kptr;
qdev->ram_header->monitors_config =
qxl_bo_physical_address(qdev, qdev->monitors_config_bo, 0);
@@ -1189,8 +1187,7 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
qdev->monitors_config = NULL;
qdev->ram_header->monitors_config = 0;
  
-	qxl_bo_kunmap_locked(qdev->monitors_config_bo);

-   ret = qxl_bo_unpin(qdev->monitors_config_bo);
+   ret = qxl_bo_kunmap(qdev->monitors_config_bo);
if (ret)
return ret;
  




Re: [PATCH] vdpa/mlx5: Extract correct pointer from driver data

2021-02-16 Thread Leon Romanovsky
On Tue, Feb 16, 2021 at 02:45:40PM +0200, Eli Cohen wrote:
> On Tue, Feb 16, 2021 at 09:37:34AM +0200, Leon Romanovsky wrote:
> > On Tue, Feb 16, 2021 at 08:42:26AM +0200, Eli Cohen wrote:
> > > On Tue, Feb 16, 2021 at 08:35:51AM +0200, Leon Romanovsky wrote:
> > > > On Tue, Feb 16, 2021 at 07:50:22AM +0200, Eli Cohen wrote:
> > > > > struct mlx5_vdpa_net pointer was stored in drvdata. Extract it as well
> > > > > in mlx5v_remove().
> > > > >
> > > > > Fixes: 74c9729dd892 ("vdpa/mlx5: Connect mlx5_vdpa to auxiliary bus")
> > > > > Signed-off-by: Eli Cohen 
> > > > > ---
> > > > >  drivers/vdpa/mlx5/net/mlx5_vnet.c | 4 ++--
> > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > index 6b0a42183622..4103d3b64a2a 100644
> > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > @@ -2036,9 +2036,9 @@ static int mlx5v_probe(struct auxiliary_device *adev,
> > > > >
> > > > >  static void mlx5v_remove(struct auxiliary_device *adev)
> > > > >  {
> > > > > - struct mlx5_vdpa_dev *mvdev = dev_get_drvdata(&adev->dev);
> > > > > + struct mlx5_vdpa_net *ndev = dev_get_drvdata(&adev->dev);
> > > > >
> > > > > - vdpa_unregister_device(&mvdev->vdev);
> > > > > + vdpa_unregister_device(&ndev->mvdev.vdev);
> > > > >  }
> > > >
> > > > IMHO, the more correct solution is to fix the dev_set_drvdata() call,
> > > > because we are registering/unregistering/allocating "struct
> > > > mlx5_vdpa_dev".
> > > >
> > >
> > > We're allocating "struct mlx5_vdpa_net". "struct mlx5_vdpa_dev" is just
> > > a member field of "struct mlx5_vdpa_net".
> >
> > I referred to these lines in the mlx5v_probe():
> >   1986 err = mlx5_vdpa_alloc_resources(&ndev->mvdev);
> >   1987 if (err)
> >   1988 goto err_mtu;
> >   1989
> >   1990 err = alloc_resources(ndev);
> >   1991 if (err)
> >   1992 goto err_res;
> >   1993
> >   1994 err = vdpa_register_device(&mvdev->vdev);
> >
> > So mlx5v_remove() is better to be symmetrical.
> >
>
> It's "struct mlx5_vdpa_net" that is being allocated here so it makes
> sense to set this pointer as the the driver data.

Anyway, it isn't important.

Thanks, for the original patch.
Reviewed-by: Leon Romanovsky 


Re: [PATCH 06/10] drm/qxl: add qxl_bo_kmap/qxl_bo_kunmap

2021-02-16 Thread Thomas Zimmermann

Hi

On 16.02.21 12:37, Gerd Hoffmann wrote:

Add kmap/kunmap variants which reserve (and pin) the bo.
They can be used in case the caller doesn't hold a reservation
for the bo.


Again, these functions should rather be called vmap/vunmap.



Signed-off-by: Gerd Hoffmann 
---
  drivers/gpu/drm/qxl/qxl_object.h |  2 ++
  drivers/gpu/drm/qxl/qxl_object.c | 36 
  2 files changed, 38 insertions(+)

diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 5bd33650183f..360972ae4869 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -64,7 +64,9 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 u32 priority,
 struct qxl_surface *surf,
 struct qxl_bo **bo_ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
  extern int qxl_bo_kmap_locked(struct qxl_bo *bo, struct dma_buf_map *map);
+extern int qxl_bo_kunmap(struct qxl_bo *bo);
  extern void qxl_bo_kunmap_locked(struct qxl_bo *bo);
  void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int 
page_offset);
  void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, 
void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index b56d4dfc28cb..22748b9566af 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -29,6 +29,9 @@
  #include "qxl_drv.h"
  #include "qxl_object.h"
  
+static int __qxl_bo_pin(struct qxl_bo *bo);
+static void __qxl_bo_unpin(struct qxl_bo *bo);
+
  static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
  {
struct qxl_bo *bo;
@@ -179,6 +182,25 @@ int qxl_bo_kmap_locked(struct qxl_bo *bo, struct dma_buf_map *map)
return 0;
  }
  
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
+{
+   int r;
+
+   r = qxl_bo_reserve(bo);
+   if (r)
+   return r;
+
+   r = __qxl_bo_pin(bo);
+   if (r) {
+   qxl_bo_unreserve(bo);
+   return r;
+   }
+
+   r = qxl_bo_kmap_locked(bo, map);
+   qxl_bo_unreserve(bo);
+   return r;
+}
+
  void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
  struct qxl_bo *bo, int page_offset)
  {
@@ -223,6 +245,20 @@ void qxl_bo_kunmap_locked(struct qxl_bo *bo)
ttm_bo_vunmap(&bo->tbo, &bo->map);
  }
  
+int qxl_bo_kunmap(struct qxl_bo *bo)
+{
+   int r;
+
+   r = qxl_bo_reserve(bo);
+   if (r)
+   return r;
+
+   qxl_bo_kunmap_locked(bo);
+   __qxl_bo_unpin(bo);
+   qxl_bo_unreserve(bo);
+   return 0;
+}
+
  void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
   struct qxl_bo *bo, void *pmap)
  {




Re: [PATCH 07/10] drm/qxl: fix prime kmap

2021-02-16 Thread Thomas Zimmermann

Hi

On 16.02.21 12:37, Gerd Hoffmann wrote:

Use the correct kmap variant.  We don't have a reservation here,
so we can't use the _locked version.


I'd merge this change into patch 6. So the new functions come with a caller.

Best regards
Thomas



Signed-off-by: Gerd Hoffmann 
---
  drivers/gpu/drm/qxl/qxl_prime.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index dc295b2e2b52..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct 
dma_buf_map *map)
struct qxl_bo *bo = gem_to_qxl_bo(obj);
int ret;
  
-	ret = qxl_bo_kmap_locked(bo, map);

+   ret = qxl_bo_kmap(bo, map);
if (ret < 0)
return ret;
  
@@ -71,7 +71,7 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj,

  {
struct qxl_bo *bo = gem_to_qxl_bo(obj);
  
-	qxl_bo_kunmap_locked(bo);

+   qxl_bo_kunmap(bo);
  }
  
  int qxl_gem_prime_mmap(struct drm_gem_object *obj,





Re: [PATCH 05/10] drm/qxl: rename qxl_bo_kmap -> qxl_bo_kmap_locked

2021-02-16 Thread Thomas Zimmermann

Hi

On 16.02.21 12:37, Gerd Hoffmann wrote:

Make clear that these functions should be called with reserved
bo's only.  No functional change.

Signed-off-by: Gerd Hoffmann 
---
  drivers/gpu/drm/qxl/qxl_object.h  |  4 ++--
  drivers/gpu/drm/qxl/qxl_display.c | 14 +++---
  drivers/gpu/drm/qxl/qxl_draw.c|  8 
  drivers/gpu/drm/qxl/qxl_image.c   |  2 +-
  drivers/gpu/drm/qxl/qxl_object.c  |  8 
  drivers/gpu/drm/qxl/qxl_prime.c   |  4 ++--
  6 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index dc1659e717f1..5bd33650183f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -64,8 +64,8 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 u32 priority,
 struct qxl_surface *surf,
 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
-extern void qxl_bo_kunmap(struct qxl_bo *bo);
+extern int qxl_bo_kmap_locked(struct qxl_bo *bo, struct dma_buf_map *map);
+extern void qxl_bo_kunmap_locked(struct qxl_bo *bo);


Nowadays kmap/kunmap is a misnomer. Since you're renaming, maybe
rename them to vmap/vunmap.


Best regards
Thomas


  void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int 
page_offset);
  void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, 
void *map);
  extern struct qxl_bo *qxl_bo_ref(struct qxl_bo *bo);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index a1b5cc5918bc..8672385a1caf 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -600,7 +600,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
user_bo = gem_to_qxl_bo(obj);
  
  		/* pinning is done in the prepare/cleanup framevbuffer */

-   ret = qxl_bo_kmap(user_bo, &user_map);
+   ret = qxl_bo_kmap_locked(user_bo, &user_map);
if (ret)
goto out_free_release;
user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
@@ -619,7 +619,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
if (ret)
goto out_unpin;
  
-		ret = qxl_bo_kmap(cursor_bo, &cursor_map);

+   ret = qxl_bo_kmap_locked(cursor_bo, &cursor_map);
if (ret)
goto out_backoff;
if (cursor_map.is_iomem) /* TODO: Use mapping abstraction properly */
@@ -638,8 +638,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
cursor->chunk.prev_chunk = 0;
cursor->chunk.data_size = size;
memcpy(cursor->chunk.data, user_ptr, size);
-   qxl_bo_kunmap(cursor_bo);
-   qxl_bo_kunmap(user_bo);
+   qxl_bo_kunmap_locked(cursor_bo);
+   qxl_bo_kunmap_locked(user_bo);
  
  		cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);

cmd->u.set.visible = 1;
@@ -681,7 +681,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
  out_free_bo:
qxl_bo_unref(&cursor_bo);
  out_kunmap:
-   qxl_bo_kunmap(user_bo);
+   qxl_bo_kunmap_locked(user_bo);
  out_free_release:
qxl_release_free(qdev, release);
return;
@@ -1163,7 +1163,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
if (ret)
return ret;
  
-	qxl_bo_kmap(qdev->monitors_config_bo, &map);

+   qxl_bo_kmap_locked(qdev->monitors_config_bo, &map);
  
  	qdev->monitors_config = qdev->monitors_config_bo->kptr;

qdev->ram_header->monitors_config =
@@ -1189,7 +1189,7 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
qdev->monitors_config = NULL;
qdev->ram_header->monitors_config = 0;
  
-	qxl_bo_kunmap(qdev->monitors_config_bo);

+   qxl_bo_kunmap_locked(qdev->monitors_config_bo);
ret = qxl_bo_unpin(qdev->monitors_config_bo);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 7b7acb910780..294542d49015 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -48,7 +48,7 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
struct qxl_clip_rects *dev_clips;
int ret;
  
-	ret = qxl_bo_kmap(clips_bo, &map);

+   ret = qxl_bo_kmap_locked(clips_bo, &map);
if (ret)
return NULL;
dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
@@ -202,7 +202,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
if (ret)
goto out_release_backoff;
  
-	ret = qxl_bo_kmap(bo, &surface_map);

+   ret = qxl_bo_kmap_locked(bo, &surface_map);
if (ret)
goto out_release_backoff;
surface_base = surface_map.vaddr; /* TODO: Use 

Re: [PATCH 01/10] drm/qxl: properly handle device init failures

2021-02-16 Thread Thomas Zimmermann



On 16.02.21 12:37, Gerd Hoffmann wrote:

Specifically, do not try to release resources which were
not allocated in the first place.


I still think this should eventually be resolved by using managed code. 
But for now


Acked-by: Thomas Zimmermann 



Cc: Tong Zhang 
Tested-by: Tong Zhang 
Signed-off-by: Gerd Hoffmann 
---
  drivers/gpu/drm/qxl/qxl_display.c | 3 +++
  drivers/gpu/drm/qxl/qxl_kms.c | 4 
  2 files changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index c326412136c5..ec50d2cfd4e1 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -1183,6 +1183,9 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
  {
int ret;
  
+	if (!qdev->monitors_config_bo)

+   return 0;
+
qdev->monitors_config = NULL;
qdev->ram_header->monitors_config = 0;
  
diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c

index 66d74aaaee06..4dc5ad13f12c 100644
--- a/drivers/gpu/drm/qxl/qxl_kms.c
+++ b/drivers/gpu/drm/qxl/qxl_kms.c
@@ -288,6 +288,10 @@ void qxl_device_fini(struct qxl_device *qdev)
  {
int cur_idx;
  
+	/* check if qxl_device_init() was successful (gc_work is initialized last) */

+   if (!qdev->gc_work.func)
+   return;
+
for (cur_idx = 0; cur_idx < 3; cur_idx++) {
if (!qdev->current_release_bo[cur_idx])
continue;



--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




Re: [PATCH v5 4/8] x86/mm/tlb: Flush remote and local TLBs concurrently

2021-02-16 Thread Peter Zijlstra
On Tue, Feb 09, 2021 at 02:16:49PM -0800, Nadav Amit wrote:
> @@ -816,8 +821,8 @@ STATIC_NOPV void native_flush_tlb_others(const struct 
> cpumask *cpumask,
>* doing a speculative memory access.
>*/
>   if (info->freed_tables) {
> - smp_call_function_many(cpumask, flush_tlb_func,
> -(void *)info, 1);
> + on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> +   cpumask);
>   } else {
>   /*
>* Although we could have used on_each_cpu_cond_mask(),
> @@ -844,14 +849,15 @@ STATIC_NOPV void native_flush_tlb_others(const struct 
> cpumask *cpumask,
>   if (tlb_is_not_lazy(cpu))
>   __cpumask_set_cpu(cpu, cond_cpumask);
>   }
> - smp_call_function_many(cond_cpumask, flush_tlb_func, (void 
> *)info, 1);
> + on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> +   cpumask);
>   }
>  }

Surely on_each_cpu_mask() is more appropriate? There the compiler can do
the NULL propagation because it's on the same TU.

--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -821,8 +821,7 @@ STATIC_NOPV void native_flush_tlb_multi(
 * doing a speculative memory access.
 */
if (info->freed_tables) {
-   on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
- cpumask);
+   on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
} else {
/*
 * Although we could have used on_each_cpu_cond_mask(),
@@ -849,8 +848,7 @@ STATIC_NOPV void native_flush_tlb_multi(
if (tlb_is_not_lazy(cpu))
__cpumask_set_cpu(cpu, cond_cpumask);
}
-   on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
- cpumask);
+   on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
}
 }
 


[PATCH 09/10] drm/qxl: map/unmap framebuffers in prepare_fb+cleanup_fb callbacks.

2021-02-16 Thread Gerd Hoffmann
We then don't have to map in the atomic_update callback,
making the locking a bit less complicated.
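The prepare/cleanup pairing this relies on can be sketched as a toy userspace model (illustrative names only, not the real DRM/qxl callbacks):

```c
#include <assert.h>

/*
 * Toy model of the prepare_fb/cleanup_fb pairing: the framebuffer is
 * mapped once when the plane state is prepared and unmapped on
 * cleanup, so atomic_update can simply use the existing mapping
 * instead of mapping/unmapping under its own locking.
 */
struct toy_fb {
	int map_count;
};

static int toy_prepare_fb(struct toy_fb *fb)
{
	fb->map_count++;	/* stands in for qxl_bo_kmap() */
	return 0;
}

static void toy_cleanup_fb(struct toy_fb *fb)
{
	fb->map_count--;	/* stands in for qxl_bo_kunmap() */
}

/* atomic_update may rely on the mapping prepare_fb created */
static int toy_atomic_update(const struct toy_fb *fb)
{
	return fb->map_count > 0;
}
```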

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 14 +-
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index 7500560db8e4..39b8c5116d34 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -584,7 +584,6 @@ static void qxl_cursor_atomic_update(struct drm_plane 
*plane,
struct drm_gem_object *obj;
struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
int ret;
-   struct dma_buf_map user_map;
struct dma_buf_map cursor_map;
void *user_ptr;
int size = 64*64*4;
@@ -599,11 +598,8 @@ static void qxl_cursor_atomic_update(struct drm_plane 
*plane,
obj = fb->obj[0];
user_bo = gem_to_qxl_bo(obj);
 
-   /* pinning is done in the prepare/cleanup framevbuffer */
-   ret = qxl_bo_kmap_locked(user_bo, &user_map);
-   if (ret)
-   goto out_free_release;
-   user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction 
properly */
+   /* mapping is done in the prepare/cleanup framevbuffer */
+   user_ptr = user_bo->map.vaddr; /* TODO: Use mapping abstraction 
properly */
 
ret = qxl_alloc_bo_reserved(qdev, release,
sizeof(struct qxl_cursor) + size,
@@ -639,7 +635,6 @@ static void qxl_cursor_atomic_update(struct drm_plane 
*plane,
cursor->chunk.data_size = size;
memcpy(cursor->chunk.data, user_ptr, size);
qxl_bo_kunmap_locked(cursor_bo);
-   qxl_bo_kunmap_locked(user_bo);
 
cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
cmd->u.set.visible = 1;
@@ -778,6 +773,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
struct drm_gem_object *obj;
struct qxl_bo *user_bo;
struct qxl_surface surf;
+   struct dma_buf_map unused;
 
if (!new_state->fb)
return 0;
@@ -815,7 +811,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
}
}
 
-   return qxl_bo_pin(user_bo);
+   return qxl_bo_kmap(user_bo, &unused);
 }
 
 static void qxl_plane_cleanup_fb(struct drm_plane *plane,
@@ -834,7 +830,7 @@ static void qxl_plane_cleanup_fb(struct drm_plane *plane,
 
obj = old_state->fb->obj[0];
user_bo = gem_to_qxl_bo(obj);
-   qxl_bo_unpin(user_bo);
+   qxl_bo_kunmap(user_bo);
 
if (old_state->fb != plane->state->fb && user_bo->shadow) {
qxl_bo_unpin(user_bo->shadow);
-- 
2.29.2



[PATCH 03/10] drm/qxl: use ttm bo priorities

2021-02-16 Thread Gerd Hoffmann
Allow to set priorities for buffer objects.  Use priority 1 for surface
and cursor command releases.  Use priority 0 for drawing command
releases.  That way the short-living drawing commands are first in line
when it comes to eviction, making it *much* less likely that
ttm_bo_mem_force_space() picks something which can't be evicted and
throws an error after waiting a while without success.
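The eviction ordering described above can be modelled with a toy sketch (made-up names, not the real TTM API): buffers sit on per-priority LRU lists and the evictor scans priority 0 first, so short-lived priority-0 drawing releases are picked before priority-1 surface/cursor releases.

```c
#include <assert.h>
#include <stddef.h>

#define TOY_MAX_PRIO 2

struct toy_bo {
	int id;
	unsigned int priority;	/* 0 = first in line for eviction */
	int evictable;
	struct toy_bo *next;	/* per-priority LRU chain */
};

/* pick the first evictable bo, lowest-priority bucket first */
static struct toy_bo *toy_pick_victim(struct toy_bo *lru[TOY_MAX_PRIO])
{
	for (unsigned int p = 0; p < TOY_MAX_PRIO; p++)
		for (struct toy_bo *bo = lru[p]; bo; bo = bo->next)
			if (bo->evictable)
				return bo;
	return NULL;
}
```

If everything in the low bucket happens to be unevictable, the scan falls through to the higher bucket, which is exactly the situation the patch tries to make rare.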

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_object.h  |  1 +
 drivers/gpu/drm/qxl/qxl_cmd.c |  2 +-
 drivers/gpu/drm/qxl/qxl_display.c |  4 ++--
 drivers/gpu/drm/qxl/qxl_gem.c |  2 +-
 drivers/gpu/drm/qxl/qxl_object.c  |  5 +++--
 drivers/gpu/drm/qxl/qxl_release.c | 19 +--
 6 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index e60a8f88e226..dc1659e717f1 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -61,6 +61,7 @@ static inline u64 qxl_bo_mmap_offset(struct qxl_bo *bo)
 extern int qxl_bo_create(struct qxl_device *qdev,
 unsigned long size,
 bool kernel, bool pinned, u32 domain,
+u32 priority,
 struct qxl_surface *surf,
 struct qxl_bo **bo_ptr);
 extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
index 7e22a81bfb36..7b00c955cd82 100644
--- a/drivers/gpu/drm/qxl/qxl_cmd.c
+++ b/drivers/gpu/drm/qxl/qxl_cmd.c
@@ -269,7 +269,7 @@ int qxl_alloc_bo_reserved(struct qxl_device *qdev,
int ret;
 
ret = qxl_bo_create(qdev, size, false /* not kernel - device */,
-   false, QXL_GEM_DOMAIN_VRAM, NULL, &bo);
+   false, QXL_GEM_DOMAIN_VRAM, 0, NULL, &bo);
if (ret) {
DRM_ERROR("failed to allocate VRAM BO\n");
return ret;
diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index ec50d2cfd4e1..a1b5cc5918bc 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -799,8 +799,8 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
qdev->dumb_shadow_bo = NULL;
}
qxl_bo_create(qdev, surf.height * surf.stride,
- true, true, QXL_GEM_DOMAIN_SURFACE, &surf,
- &qdev->dumb_shadow_bo);
+ true, true, QXL_GEM_DOMAIN_SURFACE, 0,
+ &surf, &qdev->dumb_shadow_bo);
}
if (user_bo->shadow != qdev->dumb_shadow_bo) {
if (user_bo->shadow) {
diff --git a/drivers/gpu/drm/qxl/qxl_gem.c b/drivers/gpu/drm/qxl/qxl_gem.c
index 48e096285b4c..a08da0bd9098 100644
--- a/drivers/gpu/drm/qxl/qxl_gem.c
+++ b/drivers/gpu/drm/qxl/qxl_gem.c
@@ -55,7 +55,7 @@ int qxl_gem_object_create(struct qxl_device *qdev, int size,
/* At least align on page size */
if (alignment < PAGE_SIZE)
alignment = PAGE_SIZE;
-   r = qxl_bo_create(qdev, size, kernel, false, initial_domain, surf, &qbo);
+   r = qxl_bo_create(qdev, size, kernel, false, initial_domain, 0, surf, &qbo);
if (r) {
if (r != -ERESTARTSYS)
DRM_ERROR(
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 705b51535492..7eada4ad217b 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -103,8 +103,8 @@ static const struct drm_gem_object_funcs qxl_object_funcs = 
{
.print_info = drm_gem_ttm_print_info,
 };
 
-int qxl_bo_create(struct qxl_device *qdev,
- unsigned long size, bool kernel, bool pinned, u32 domain,
+int qxl_bo_create(struct qxl_device *qdev, unsigned long size,
+ bool kernel, bool pinned, u32 domain, u32 priority,
  struct qxl_surface *surf,
  struct qxl_bo **bo_ptr)
 {
@@ -137,6 +137,7 @@ int qxl_bo_create(struct qxl_device *qdev,
 
qxl_ttm_placement_from_domain(bo, domain);
 
+   bo->tbo.priority = priority;
	r = ttm_bo_init_reserved(&qdev->mman.bdev, &bo->tbo, size, type,
				 &bo->placement, 0, &ctx, NULL, NULL,
				 &qxl_ttm_bo_destroy);
diff --git a/drivers/gpu/drm/qxl/qxl_release.c 
b/drivers/gpu/drm/qxl/qxl_release.c
index 63818b56218c..a831184e014a 100644
--- a/drivers/gpu/drm/qxl/qxl_release.c
+++ b/drivers/gpu/drm/qxl/qxl_release.c
@@ -161,11 +161,12 @@ qxl_release_free(struct qxl_device *qdev,
 }
 
 static int qxl_release_bo_alloc(struct qxl_device *qdev,
-   struct qxl_bo **bo)
+   struct qxl_bo **bo,
+   u32 priority)
 {
/* pin releases bo's they are too messy 

[PATCH 06/10] drm/qxl: add qxl_bo_kmap/qxl_bo_kunmap

2021-02-16 Thread Gerd Hoffmann
Add kmap/kunmap variants which reserve (and pin) the bo.
They can be used in case the caller doesn't hold a reservation
for the bo.
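The shape of these wrappers can be sketched as a toy userspace model (illustrative names, not the real qxl functions): take the reservation, pin, call the _locked variant, drop the reservation.

```c
#include <assert.h>

struct toy_bo {
	int locked;		/* reservation held? */
	int pin_count;
	int map_count;
};

static int toy_kmap_locked(struct toy_bo *bo)
{
	if (!bo->locked)
		return -1;	/* caller must hold the reservation */
	bo->map_count++;
	return 0;
}

/* unlocked wrapper: reserve + pin, then use the _locked helper */
static int toy_kmap(struct toy_bo *bo)
{
	int r;

	bo->locked = 1;		/* stands in for qxl_bo_reserve() */
	bo->pin_count++;	/* stands in for __qxl_bo_pin() */
	r = toy_kmap_locked(bo);
	bo->locked = 0;		/* stands in for qxl_bo_unreserve() */
	return r;
}
```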

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_object.h |  2 ++
 drivers/gpu/drm/qxl/qxl_object.c | 36 
 2 files changed, 38 insertions(+)

diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 5bd33650183f..360972ae4869 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -64,7 +64,9 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 u32 priority,
 struct qxl_surface *surf,
 struct qxl_bo **bo_ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
 extern int qxl_bo_kmap_locked(struct qxl_bo *bo, struct dma_buf_map *map);
+extern int qxl_bo_kunmap(struct qxl_bo *bo);
 extern void qxl_bo_kunmap_locked(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int 
page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, 
void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index b56d4dfc28cb..22748b9566af 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -29,6 +29,9 @@
 #include "qxl_drv.h"
 #include "qxl_object.h"
 
+static int __qxl_bo_pin(struct qxl_bo *bo);
+static void __qxl_bo_unpin(struct qxl_bo *bo);
+
 static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
 {
struct qxl_bo *bo;
@@ -179,6 +182,25 @@ int qxl_bo_kmap_locked(struct qxl_bo *bo, struct 
dma_buf_map *map)
return 0;
 }
 
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
+{
+   int r;
+
+   r = qxl_bo_reserve(bo);
+   if (r)
+   return r;
+
+   r = __qxl_bo_pin(bo);
+   if (r) {
+   qxl_bo_unreserve(bo);
+   return r;
+   }
+
+   r = qxl_bo_kmap_locked(bo, map);
+   qxl_bo_unreserve(bo);
+   return r;
+}
+
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
  struct qxl_bo *bo, int page_offset)
 {
@@ -223,6 +245,20 @@ void qxl_bo_kunmap_locked(struct qxl_bo *bo)
	ttm_bo_vunmap(&bo->tbo, &bo->map);
 }
 
+int qxl_bo_kunmap(struct qxl_bo *bo)
+{
+   int r;
+
+   r = qxl_bo_reserve(bo);
+   if (r)
+   return r;
+
+   qxl_bo_kunmap_locked(bo);
+   __qxl_bo_unpin(bo);
+   qxl_bo_unreserve(bo);
+   return 0;
+}
+
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
   struct qxl_bo *bo, void *pmap)
 {
-- 
2.29.2



[PATCH 08/10] drm/qxl: fix monitors object kmap

2021-02-16 Thread Gerd Hoffmann
Use the correct kmap variant.  We don't hold a reservation here,
so we can't use the _locked variant.  We can drop the pin because
qxl_bo_kmap will do that for us.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index 8672385a1caf..7500560db8e4 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -1159,12 +1159,10 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
}
qdev->monitors_config_bo = gem_to_qxl_bo(gobj);
 
-   ret = qxl_bo_pin(qdev->monitors_config_bo);
+   ret = qxl_bo_kmap(qdev->monitors_config_bo, &map);
if (ret)
return ret;
 
-   qxl_bo_kmap_locked(qdev->monitors_config_bo, &map);
-
qdev->monitors_config = qdev->monitors_config_bo->kptr;
qdev->ram_header->monitors_config =
qxl_bo_physical_address(qdev, qdev->monitors_config_bo, 0);
@@ -1189,8 +1187,7 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
qdev->monitors_config = NULL;
qdev->ram_header->monitors_config = 0;
 
-   qxl_bo_kunmap_locked(qdev->monitors_config_bo);
-   ret = qxl_bo_unpin(qdev->monitors_config_bo);
+   ret = qxl_bo_kunmap(qdev->monitors_config_bo);
if (ret)
return ret;
 
-- 
2.29.2



[PATCH 05/10] drm/qxl: rename qxl_bo_kmap -> qxl_bo_kmap_locked

2021-02-16 Thread Gerd Hoffmann
Make clear that these functions should be called with reserved
bo's only.  No functional change.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_object.h  |  4 ++--
 drivers/gpu/drm/qxl/qxl_display.c | 14 +++---
 drivers/gpu/drm/qxl/qxl_draw.c|  8 
 drivers/gpu/drm/qxl/qxl_image.c   |  2 +-
 drivers/gpu/drm/qxl/qxl_object.c  |  8 
 drivers/gpu/drm/qxl/qxl_prime.c   |  4 ++--
 6 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index dc1659e717f1..5bd33650183f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -64,8 +64,8 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 u32 priority,
 struct qxl_surface *surf,
 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
-extern void qxl_bo_kunmap(struct qxl_bo *bo);
+extern int qxl_bo_kmap_locked(struct qxl_bo *bo, struct dma_buf_map *map);
+extern void qxl_bo_kunmap_locked(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int 
page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, 
void *map);
 extern struct qxl_bo *qxl_bo_ref(struct qxl_bo *bo);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index a1b5cc5918bc..8672385a1caf 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -600,7 +600,7 @@ static void qxl_cursor_atomic_update(struct drm_plane 
*plane,
user_bo = gem_to_qxl_bo(obj);
 
/* pinning is done in the prepare/cleanup framevbuffer */
-   ret = qxl_bo_kmap(user_bo, &user_map);
+   ret = qxl_bo_kmap_locked(user_bo, &user_map);
if (ret)
goto out_free_release;
user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction 
properly */
@@ -619,7 +619,7 @@ static void qxl_cursor_atomic_update(struct drm_plane 
*plane,
if (ret)
goto out_unpin;
 
-   ret = qxl_bo_kmap(cursor_bo, &cursor_map);
+   ret = qxl_bo_kmap_locked(cursor_bo, &cursor_map);
if (ret)
goto out_backoff;
if (cursor_map.is_iomem) /* TODO: Use mapping abstraction 
properly */
@@ -638,8 +638,8 @@ static void qxl_cursor_atomic_update(struct drm_plane 
*plane,
cursor->chunk.prev_chunk = 0;
cursor->chunk.data_size = size;
memcpy(cursor->chunk.data, user_ptr, size);
-   qxl_bo_kunmap(cursor_bo);
-   qxl_bo_kunmap(user_bo);
+   qxl_bo_kunmap_locked(cursor_bo);
+   qxl_bo_kunmap_locked(user_bo);
 
cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
cmd->u.set.visible = 1;
@@ -681,7 +681,7 @@ static void qxl_cursor_atomic_update(struct drm_plane 
*plane,
 out_free_bo:
	qxl_bo_unref(&cursor_bo);
 out_kunmap:
-   qxl_bo_kunmap(user_bo);
+   qxl_bo_kunmap_locked(user_bo);
 out_free_release:
qxl_release_free(qdev, release);
return;
@@ -1163,7 +1163,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
if (ret)
return ret;
 
-   qxl_bo_kmap(qdev->monitors_config_bo, &map);
+   qxl_bo_kmap_locked(qdev->monitors_config_bo, &map);
 
qdev->monitors_config = qdev->monitors_config_bo->kptr;
qdev->ram_header->monitors_config =
@@ -1189,7 +1189,7 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
qdev->monitors_config = NULL;
qdev->ram_header->monitors_config = 0;
 
-   qxl_bo_kunmap(qdev->monitors_config_bo);
+   qxl_bo_kunmap_locked(qdev->monitors_config_bo);
ret = qxl_bo_unpin(qdev->monitors_config_bo);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 7b7acb910780..294542d49015 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -48,7 +48,7 @@ static struct qxl_rect *drawable_set_clipping(struct 
qxl_device *qdev,
struct qxl_clip_rects *dev_clips;
int ret;
 
-   ret = qxl_bo_kmap(clips_bo, &map);
+   ret = qxl_bo_kmap_locked(clips_bo, &map);
if (ret)
return NULL;
dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
@@ -202,7 +202,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
if (ret)
goto out_release_backoff;
 
-   ret = qxl_bo_kmap(bo, &surface_map);
+   ret = qxl_bo_kmap_locked(bo, &surface_map);
if (ret)
goto out_release_backoff;
surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction 
properly */
@@ -210,7 +210,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
ret = qxl_image_init(qdev, 

[PATCH 10/10] drm/qxl: add lock asserts to qxl_bo_kmap_locked + qxl_bo_kunmap_locked

2021-02-16 Thread Gerd Hoffmann
Try to avoid re-introducing locking bugs.
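The idea behind the asserts can be shown with a toy model (hypothetical names, not the dma_resv API): the _locked helpers verify that the caller really holds the reservation, so a locking bug trips immediately instead of silently racing.

```c
#include <assert.h>

struct toy_bo {
	int resv_held;
	int mapped;
};

static int toy_resv_is_held(const struct toy_bo *bo)
{
	return bo->resv_held;
}

/* _locked helper: only legal while the reservation is held */
static int toy_kmap_locked(struct toy_bo *bo)
{
	if (!toy_resv_is_held(bo))	/* dma_resv_assert_held() analogue */
		return -1;
	bo->mapped = 1;
	return 0;
}
```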

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_object.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 22748b9566af..90d5e5b7f927 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -162,6 +162,8 @@ int qxl_bo_kmap_locked(struct qxl_bo *bo, struct 
dma_buf_map *map)
 {
int r;
 
+   dma_resv_assert_held(bo->tbo.base.resv);
+
if (bo->kptr) {
bo->map_count++;
goto out;
@@ -236,6 +238,8 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 
 void qxl_bo_kunmap_locked(struct qxl_bo *bo)
 {
+   dma_resv_assert_held(bo->tbo.base.resv);
+
if (bo->kptr == NULL)
return;
bo->map_count--;
-- 
2.29.2



[PATCH 07/10] drm/qxl: fix prime kmap

2021-02-16 Thread Gerd Hoffmann
Use the correct kmap variant.  We don't have a reservation here,
so we can't use the _locked version.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_prime.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index dc295b2e2b52..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct 
dma_buf_map *map)
struct qxl_bo *bo = gem_to_qxl_bo(obj);
int ret;
 
-   ret = qxl_bo_kmap_locked(bo, map);
+   ret = qxl_bo_kmap(bo, map);
if (ret < 0)
return ret;
 
@@ -71,7 +71,7 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
 {
struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
-   qxl_bo_kunmap_locked(bo);
+   qxl_bo_kunmap(bo);
 }
 
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
-- 
2.29.2



[PATCH 01/10] drm/qxl: properly handle device init failures

2021-02-16 Thread Gerd Hoffmann
Specifically do not try to release resources which were
not allocated in the first place.

Cc: Tong Zhang 
Tested-by: Tong Zhang 
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 3 +++
 drivers/gpu/drm/qxl/qxl_kms.c | 4 
 2 files changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index c326412136c5..ec50d2cfd4e1 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -1183,6 +1183,9 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
 {
int ret;
 
+   if (!qdev->monitors_config_bo)
+   return 0;
+
qdev->monitors_config = NULL;
qdev->ram_header->monitors_config = 0;
 
diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
index 66d74aaaee06..4dc5ad13f12c 100644
--- a/drivers/gpu/drm/qxl/qxl_kms.c
+++ b/drivers/gpu/drm/qxl/qxl_kms.c
@@ -288,6 +288,10 @@ void qxl_device_fini(struct qxl_device *qdev)
 {
int cur_idx;
 
+   /* check if qxl_device_init() was successful (gc_work is initialized 
last) */
+   if (!qdev->gc_work.func)
+   return;
+
for (cur_idx = 0; cur_idx < 3; cur_idx++) {
if (!qdev->current_release_bo[cur_idx])
continue;
-- 
2.29.2



[PATCH 04/10] drm/qxl: fix lockdep issue in qxl_alloc_release_reserved

2021-02-16 Thread Gerd Hoffmann
Call qxl_bo_unpin (which does a reservation) without holding the
release_mutex lock.  Fixes lockdep (correctly) warning on a possible
deadlock.
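The shape of the fix is the classic "defer cleanup until the lock is dropped" pattern; here is a toy sketch (made-up lock tracker, not the real code):

```c
#include <assert.h>

static int release_mutex_held;

static void lock_release_mutex(void)   { release_mutex_held = 1; }
static void unlock_release_mutex(void) { release_mutex_held = 0; }

/* reserving a bo while release_mutex is held is what lockdep reports */
static int unpin_would_invert_locks(void)
{
	return release_mutex_held;
}

/* fixed ordering: remember the stale bo under the mutex, free it after */
static int toy_alloc_release(void)
{
	int free_bo = 0;

	lock_release_mutex();
	free_bo = 1;			/* was: qxl_bo_unpin() right here */
	unlock_release_mutex();

	if (free_bo && unpin_would_invert_locks())
		return -1;		/* the old, deadlock-prone ordering */
	return 0;			/* cleanup happens without the mutex */
}
```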

Fixes: 65ffea3c6e73 ("drm/qxl: unpin release objects")
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_release.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_release.c 
b/drivers/gpu/drm/qxl/qxl_release.c
index a831184e014a..352a11a8485b 100644
--- a/drivers/gpu/drm/qxl/qxl_release.c
+++ b/drivers/gpu/drm/qxl/qxl_release.c
@@ -284,7 +284,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, 
unsigned long size,
   int type, struct qxl_release **release,
   struct qxl_bo **rbo)
 {
-   struct qxl_bo *bo;
+   struct qxl_bo *bo, *free_bo = NULL;
int idr_ret;
int ret = 0;
union qxl_release_info *info;
@@ -317,8 +317,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, 
unsigned long size,
 
	mutex_lock(&qdev->release_mutex);
if (qdev->current_release_bo_offset[cur_idx] + 1 >= 
releases_per_bo[cur_idx]) {
-   qxl_bo_unpin(qdev->current_release_bo[cur_idx]);
-   qxl_bo_unref(&qdev->current_release_bo[cur_idx]);
+   free_bo = qdev->current_release_bo[cur_idx];
qdev->current_release_bo_offset[cur_idx] = 0;
qdev->current_release_bo[cur_idx] = NULL;
}
@@ -326,6 +325,10 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, 
unsigned long size,
	ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx], priority);
if (ret) {
	mutex_unlock(&qdev->release_mutex);
+   if (free_bo) {
+   qxl_bo_unpin(free_bo);
+   qxl_bo_unref(&free_bo);
+   }
qxl_release_free(qdev, *release);
return ret;
}
@@ -341,6 +344,10 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, 
unsigned long size,
*rbo = bo;
 
	mutex_unlock(&qdev->release_mutex);
+   if (free_bo) {
+   qxl_bo_unpin(free_bo);
+   qxl_bo_unref(&free_bo);
+   }
 
ret = qxl_release_list_add(*release, bo);
	qxl_bo_unref(&bo);
-- 
2.29.2



[PATCH 02/10] drm/qxl: more fence wait rework

2021-02-16 Thread Gerd Hoffmann
Move the qxl_io_notify_oom() call into the wait condition.
That way the driver will call it again if one call
wasn't enough.

This also allows removing the extra dma_fence_is_signaled()
check and the goto.
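The comma-operator trick in the new wait condition is easy to miss; here is a minimal standalone C demonstration (toy names, not the qxl code) of how `signaled || (notify(), 0)` re-issues the notify on every re-check until the fence signals:

```c
#include <assert.h>

static int notify_calls;
static int signaled;

static void toy_notify_oom(void)
{
	notify_calls++;
}

/*
 * The wait condition from the patch: while not signaled, the comma
 * expression runs toy_notify_oom() and then yields 0, so the whole
 * condition stays false and the notify is repeated on each re-check.
 */
static int toy_wait_cond(void)
{
	return signaled || (toy_notify_oom(), 0);
}
```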

Fixes: 5a838e5d5825 ("drm/qxl: simplify qxl_fence_wait")
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_release.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_release.c 
b/drivers/gpu/drm/qxl/qxl_release.c
index 6ed673d75f9f..63818b56218c 100644
--- a/drivers/gpu/drm/qxl/qxl_release.c
+++ b/drivers/gpu/drm/qxl/qxl_release.c
@@ -62,16 +62,13 @@ static long qxl_fence_wait(struct dma_fence *fence, bool 
intr,
 
qdev = container_of(fence->lock, struct qxl_device, release_lock);
 
-   if (dma_fence_is_signaled(fence))
-   goto signaled;
-
-   qxl_io_notify_oom(qdev);
if (!wait_event_timeout(qdev->release_event,
-   dma_fence_is_signaled(fence),
+   (dma_fence_is_signaled(fence) || (
+   qxl_io_notify_oom(qdev),
+   0)),
timeout))
return 0;
 
-signaled:
cur = jiffies;
if (time_after(cur, end))
return 0;
-- 
2.29.2



[PATCH v2] virtio/s390: implement virtio-ccw revision 2 correctly

2021-02-16 Thread Cornelia Huck
CCW_CMD_READ_STATUS was introduced with revision 2 of virtio-ccw,
and drivers should only rely on it being implemented when they
negotiated at least that revision with the device.

However, virtio_ccw_get_status() issued READ_STATUS for any
device operating at least at revision 1. If the device accepts
READ_STATUS regardless of the negotiated revision (which some
implementations like QEMU do, even though the spec currently does
not allow it), everything works as intended. While a device
rejecting the command should also be handled gracefully, we will
not be able to see any changes the device makes to the status,
such as setting NEEDS_RESET or setting the status to zero after
a completed reset.

We negotiated the revision to at most 1, as we never bumped the
maximum revision; let's do that now and properly send READ_STATUS
only if we are operating at least at revision 2.
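The gist of the change can be sketched in a few lines (toy structure, not the real driver): below revision 2 the driver must fall back to the cached status byte, which is exactly why device-side changes such as NEEDS_RESET stay invisible there.

```c
#include <assert.h>

struct toy_ccw_dev {
	int revision;			/* negotiated revision */
	unsigned char cached_status;	/* dma_area->status */
	unsigned char device_status;	/* what READ_STATUS would return */
};

static unsigned char toy_get_status(const struct toy_ccw_dev *d)
{
	if (d->revision < 2)		/* was: < 1 */
		return d->cached_status;
	return d->device_status;	/* CCW_CMD_READ_STATUS */
}
```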

Cc: stable@vger.kernel.org
Fixes: 7d3ce5ab9430 ("virtio/s390: support READ_STATUS command for virtio-ccw")
Reviewed-by: Halil Pasic 
Signed-off-by: Cornelia Huck 
---

v1->v2:
  tweak patch description and cc:stable

---
 drivers/s390/virtio/virtio_ccw.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index 5730572b52cd..54e686dca6de 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -117,7 +117,7 @@ struct virtio_rev_info {
 };
 
 /* the highest virtio-ccw revision we support */
-#define VIRTIO_CCW_REV_MAX 1
+#define VIRTIO_CCW_REV_MAX 2
 
 struct virtio_ccw_vq_info {
struct virtqueue *vq;
@@ -952,7 +952,7 @@ static u8 virtio_ccw_get_status(struct virtio_device *vdev)
u8 old_status = vcdev->dma_area->status;
struct ccw1 *ccw;
 
-   if (vcdev->revision < 1)
+   if (vcdev->revision < 2)
return vcdev->dma_area->status;
 
ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
-- 
2.26.2



Re: [PATCH] virtio/s390: implement virtio-ccw revision 2 correctly

2021-02-16 Thread Halil Pasic
On Tue, 16 Feb 2021 11:39:07 +0100
Cornelia Huck  wrote:

> > 
> > Reviewed-by: Halil Pasic   
> 
> Thanks!
> 
> I'll do a v2 with a tweaked commit message and cc:stable.

Sounds good!



Re: [PATCH] virtio/s390: implement virtio-ccw revision 2 correctly

2021-02-16 Thread Cornelia Huck
On Mon, 15 Feb 2021 19:51:44 +0100
Halil Pasic  wrote:

> On Mon, 15 Feb 2021 12:47:02 +0100
> Cornelia Huck  wrote:
> 
> > On Fri, 12 Feb 2021 18:04:11 +0100
> > Cornelia Huck  wrote:
> >   
> > > CCW_CMD_READ_STATUS was introduced with revision 2 of virtio-ccw,
> > > and drivers should only rely on it being implemented when they
> > > negotiated at least that revision with the device.
> > > 
> > > However, virtio_ccw_get_status() issued READ_STATUS for any
> > > device operating at least at revision 1. If the device accepts
> > > READ_STATUS regardless of the negotiated revision (which it is
> > > free to do),
> > 
> > So, looking at the standard again, the device is actually required to
> > reject the READ_STATUS if only rev 1 had been negotiated... regardless
> > of that, I don't think we should change QEMU's behaviour, as it would
> > affect existing guests (they would lose access to the status bits as
> > observed by the device, including DEVICE_NEEDS_RESET.)  
> 
> Not only that, without READ_STATUS, we can't do device reset which
> is a prerequisite for a proper cleanup, as required by the spec.
> 
> You certainly remember, the driver has may not assume the reset
> was performed (and thus virtqueues are not live) until it reads
> back status 0. But without READ_STATUS virtio_ccw_get_status() will
> keep returning the status the driver last set via
> virtio_ccw_set_status(). And CCW_CMD_VDEV_RESET is of course
> revision 1 material. This looks ugly!

Yes, that problem kind of cascades down.

> 
> >   
> > > everything works as intended; a device rejecting the
> > > command should also be handled gracefully. For correctness, we
> > > should really limit the command to revision 2 or higher, though.
> > > 
> > > We also negotiated the revision to at most 1, as we never bumped
> > > the maximum revision; let's do that now.
> > > 
> > > Fixes: 7d3ce5ab9430 ("virtio/s390: support READ_STATUS command for 
> > > virtio-ccw")
> > > Signed-off-by: Cornelia Huck 
> > > ---
> > > 
> > > QEMU does not fence off READ_STATUS for revisions < 2, which is probably
> > > why we never noticed this. I'm not aware of other hypervisors that do
> > > fence it off, nor any that cannot deal properly with an unknown command.
> > > 
> > > Not sure whether this is stable worthy?
> > 
> > Maybe it is, given the MUST reject clause in the standard?
> >   
> 
> Yes, IMHO this must be backported. A device that ain't violating the
> spec would currently reject READ_STATUS. Which would break RESET_VDEV
> like I described above.
> 
> Can we change that MUST to should? There are now good reasons for not
> doing like the spec says in case of READ_STATUS.

Yes. I'm not so sure forcing the device to reject the command was such
a good idea anyway, and relaxing the requirement keeps existing
implementations in compliance.

I've opened https://github.com/oasis-tcs/virtio-spec/issues/96 and will
send a patch for the spec later.

> 
> > > 
> > > ---
> > >  drivers/s390/virtio/virtio_ccw.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/s390/virtio/virtio_ccw.c 
> > > b/drivers/s390/virtio/virtio_ccw.c
> > > index 5730572b52cd..54e686dca6de 100644
> > > --- a/drivers/s390/virtio/virtio_ccw.c
> > > +++ b/drivers/s390/virtio/virtio_ccw.c
> > > @@ -117,7 +117,7 @@ struct virtio_rev_info {
> > >  };
> > >  
> > >  /* the highest virtio-ccw revision we support */
> > > -#define VIRTIO_CCW_REV_MAX 1
> > > +#define VIRTIO_CCW_REV_MAX 2
> > >  
> > >  struct virtio_ccw_vq_info {
> > >   struct virtqueue *vq;
> > > @@ -952,7 +952,7 @@ static u8 virtio_ccw_get_status(struct virtio_device 
> > > *vdev)
> > >   u8 old_status = vcdev->dma_area->status;
> > >   struct ccw1 *ccw;
> > >  
> > > - if (vcdev->revision < 1)
> > > + if (vcdev->revision < 2)
> > >   return vcdev->dma_area->status;  
> 
> I don't think our faking of the status read (i.e. returning the old one)
> is contributing to spec compliance. Especially not if the inability to
> READ is not transient.
> 
> Also return old_status; would tell the story better, but on the
> other hand, that would be an unrelated cosmetic change. Maybe
> a separate patch?

We would also need to actively check for success or failure of the
channel program in that case.

I'm currently looking at the virtio-ccw code anyway, so I can put that
on my list as well.

> 
> Reviewed-by: Halil Pasic 

Thanks!

I'll do a v2 with a tweaked commit message and cc:stable.

> 
> Regards,
> Halil
> 
> > >  
> > >   ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
> >   
> 

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[RFC PATCH 10/10] vhost/vdpa: return configuration bytes read and written to user space

2021-02-16 Thread Stefano Garzarella
vdpa_get_config() and vdpa_set_config() now return the number of
bytes read and written, so let's return them to user space.

We also modify vhost_vdpa_config_validate() to return 0 (bytes read
or written) instead of an error, when the buffer length is 0.
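
As a hedged sketch of the userspace-visible semantics after this change (the
helper name and sizes below are illustrative stand-ins, not the kernel's):
the validate step now reports how many bytes will actually be transferred,
clamped to the config-space size, and a zero-length buffer simply yields 0
instead of -EINVAL.

```c
#include <assert.h>
#include <sys/types.h>	/* ssize_t */

/* Illustrative stand-in for the reworked vhost_vdpa_config_validate():
 * return the number of bytes that will actually be read/written,
 * i.e. min(len, config_size); len == 0 yields 0, not -EINVAL. */
static ssize_t config_validate(unsigned int config_size, unsigned int len)
{
	return len < config_size ? len : config_size;
}
```

With a 12-byte config space, a 64-byte user buffer is trimmed to 12 bytes,
and a 0-byte buffer makes the ioctl report zero bytes transferred.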

Signed-off-by: Stefano Garzarella 
---
 drivers/vhost/vdpa.c | 26 +++---
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 21eea2be5afa..b754c53171a7 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -191,9 +191,6 @@ static ssize_t vhost_vdpa_config_validate(struct vhost_vdpa 
*v,
struct vdpa_device *vdpa = v->vdpa;
u32 size = vdpa->config->get_config_size(vdpa);
 
-   if (c->len == 0)
-   return -EINVAL;
-
return min(c->len, size);
 }
 
@@ -204,6 +201,7 @@ static long vhost_vdpa_get_config(struct vhost_vdpa *v,
struct vhost_vdpa_config config;
unsigned long size = offsetof(struct vhost_vdpa_config, buf);
ssize_t config_size;
+   long ret;
u8 *buf;
 
	if (copy_from_user(&config, c, size))
@@ -217,15 +215,18 @@ static long vhost_vdpa_get_config(struct vhost_vdpa *v,
if (!buf)
return -ENOMEM;
 
-   vdpa_get_config(vdpa, config.off, buf, config_size);
-
-   if (copy_to_user(c->buf, buf, config_size)) {
-   kvfree(buf);
-   return -EFAULT;
+   ret = vdpa_get_config(vdpa, config.off, buf, config_size);
+   if (ret < 0) {
+   ret = -EFAULT;
+   goto out;
}
 
+   if (copy_to_user(c->buf, buf, config_size))
+   ret = -EFAULT;
+
+out:
kvfree(buf);
-   return 0;
+   return ret;
 }
 
 static long vhost_vdpa_set_config(struct vhost_vdpa *v,
@@ -235,6 +236,7 @@ static long vhost_vdpa_set_config(struct vhost_vdpa *v,
struct vhost_vdpa_config config;
unsigned long size = offsetof(struct vhost_vdpa_config, buf);
ssize_t config_size;
+   long ret;
u8 *buf;
 
	if (copy_from_user(&config, c, size))
@@ -248,10 +250,12 @@ static long vhost_vdpa_set_config(struct vhost_vdpa *v,
if (IS_ERR(buf))
return PTR_ERR(buf);
 
-   vdpa_set_config(vdpa, config.off, buf, config_size);
+   ret = vdpa_set_config(vdpa, config.off, buf, config_size);
+   if (ret < 0)
+   ret = -EFAULT;
 
kvfree(buf);
-   return 0;
+   return ret;
 }
 
 static long vhost_vdpa_get_features(struct vhost_vdpa *v, u64 __user *featurep)
-- 
2.29.2



[RFC PATCH 09/10] vhost/vdpa: use get_config_size callback in vhost_vdpa_config_validate()

2021-02-16 Thread Stefano Garzarella
Let's use the new 'get_config_size()' callback available instead of
using the 'virtio_id' to get the size of the device config space.

Signed-off-by: Stefano Garzarella 
---
 drivers/vhost/vdpa.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 544f8582a42b..21eea2be5afa 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -188,13 +188,8 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 
__user *statusp)
 static ssize_t vhost_vdpa_config_validate(struct vhost_vdpa *v,
  struct vhost_vdpa_config *c)
 {
-   u32 size = 0;
-
-   switch (v->virtio_id) {
-   case VIRTIO_ID_NET:
-   size = sizeof(struct virtio_net_config);
-   break;
-   }
+   struct vdpa_device *vdpa = v->vdpa;
+   u32 size = vdpa->config->get_config_size(vdpa);
 
if (c->len == 0)
return -EINVAL;
-- 
2.29.2



[RFC PATCH 08/10] vhost/vdpa: allow user space to pass buffers bigger than config space

2021-02-16 Thread Stefano Garzarella
vdpa_get_config() and vdpa_set_config() are now able to read/write
only the bytes available in the device configuration space, even if
the buffer provided is bigger than that.

Let's use this feature to allow the user space application to pass a
buffer of any size. We limit the size of the internal bounce buffer
to the device config size.

Signed-off-by: Stefano Garzarella 
---
 drivers/vhost/vdpa.c | 36 
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index cdd8f24168b2..544f8582a42b 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -185,10 +185,10 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, 
u8 __user *statusp)
return 0;
 }
 
-static int vhost_vdpa_config_validate(struct vhost_vdpa *v,
- struct vhost_vdpa_config *c)
+static ssize_t vhost_vdpa_config_validate(struct vhost_vdpa *v,
+ struct vhost_vdpa_config *c)
 {
-   long size = 0;
+   u32 size = 0;
 
switch (v->virtio_id) {
case VIRTIO_ID_NET:
@@ -199,10 +199,7 @@ static int vhost_vdpa_config_validate(struct vhost_vdpa *v,
if (c->len == 0)
return -EINVAL;
 
-   if (c->len > size - c->off)
-   return -E2BIG;
-
-   return 0;
+   return min(c->len, size);
 }
 
 static long vhost_vdpa_get_config(struct vhost_vdpa *v,
@@ -211,19 +208,23 @@ static long vhost_vdpa_get_config(struct vhost_vdpa *v,
struct vdpa_device *vdpa = v->vdpa;
struct vhost_vdpa_config config;
unsigned long size = offsetof(struct vhost_vdpa_config, buf);
+   ssize_t config_size;
u8 *buf;
 
	if (copy_from_user(&config, c, size))
return -EFAULT;
-   if (vhost_vdpa_config_validate(v, &config))
-   return -EINVAL;
-   buf = kvzalloc(config.len, GFP_KERNEL);
+
+   config_size = vhost_vdpa_config_validate(v, &config);
+   if (config_size <= 0)
+   return config_size;
+
+   buf = kvzalloc(config_size, GFP_KERNEL);
if (!buf)
return -ENOMEM;
 
-   vdpa_get_config(vdpa, config.off, buf, config.len);
+   vdpa_get_config(vdpa, config.off, buf, config_size);
 
-   if (copy_to_user(c->buf, buf, config.len)) {
+   if (copy_to_user(c->buf, buf, config_size)) {
kvfree(buf);
return -EFAULT;
}
@@ -238,18 +239,21 @@ static long vhost_vdpa_set_config(struct vhost_vdpa *v,
struct vdpa_device *vdpa = v->vdpa;
struct vhost_vdpa_config config;
unsigned long size = offsetof(struct vhost_vdpa_config, buf);
+   ssize_t config_size;
u8 *buf;
 
	if (copy_from_user(&config, c, size))
return -EFAULT;
-   if (vhost_vdpa_config_validate(v, &config))
-   return -EINVAL;
 
-   buf = vmemdup_user(c->buf, config.len);
+   config_size = vhost_vdpa_config_validate(v, &config);
+   if (config_size <= 0)
+   return config_size;
+
+   buf = vmemdup_user(c->buf, config_size);
if (IS_ERR(buf))
return PTR_ERR(buf);
 
-   vdpa_set_config(vdpa, config.off, buf, config.len);
+   vdpa_set_config(vdpa, config.off, buf, config_size);
 
kvfree(buf);
return 0;
-- 
2.29.2



[RFC PATCH 06/10] virtio_vdpa: use vdpa_set_config()

2021-02-16 Thread Stefano Garzarella
Instead of calling the 'set_config' callback directly, we call the
new vdpa_set_config() helper which also checks the parameters.

Signed-off-by: Stefano Garzarella 
---
 drivers/virtio/virtio_vdpa.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index e28acf482e0c..2f1c4a2dd241 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -65,9 +65,8 @@ static void virtio_vdpa_set(struct virtio_device *vdev, 
unsigned offset,
const void *buf, unsigned len)
 {
struct vdpa_device *vdpa = vd_get_vdpa(vdev);
-   const struct vdpa_config_ops *ops = vdpa->config;
 
-   ops->set_config(vdpa, offset, buf, len);
+   vdpa_set_config(vdpa, offset, buf, len);
 }
 
 static u32 virtio_vdpa_generation(struct virtio_device *vdev)
-- 
2.29.2



[RFC PATCH 07/10] vhost/vdpa: use vdpa_set_config()

2021-02-16 Thread Stefano Garzarella
Instead of calling the 'set_config' callback directly, we call the
new vdpa_set_config() helper which also checks the parameters.

Signed-off-by: Stefano Garzarella 
---
 drivers/vhost/vdpa.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index ef688c8c0e0e..cdd8f24168b2 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -236,7 +236,6 @@ static long vhost_vdpa_set_config(struct vhost_vdpa *v,
  struct vhost_vdpa_config __user *c)
 {
struct vdpa_device *vdpa = v->vdpa;
-   const struct vdpa_config_ops *ops = vdpa->config;
struct vhost_vdpa_config config;
unsigned long size = offsetof(struct vhost_vdpa_config, buf);
u8 *buf;
@@ -250,7 +249,7 @@ static long vhost_vdpa_set_config(struct vhost_vdpa *v,
if (IS_ERR(buf))
return PTR_ERR(buf);
 
-   ops->set_config(vdpa, config.off, buf, config.len);
+   vdpa_set_config(vdpa, config.off, buf, config.len);
 
kvfree(buf);
return 0;
-- 
2.29.2



[RFC PATCH 04/10] vdpa: remove param checks in the get/set_config callbacks

2021-02-16 Thread Stefano Garzarella
vdpa_get_config() and vdpa_set_config() now check parameters before
calling callbacks, so we can remove these redundant checks.

Signed-off-by: Stefano Garzarella 
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 3 +--
 drivers/vdpa/vdpa_sim/vdpa_sim.c  | 6 --
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 78043ee567b6..ab63dc9b8432 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1825,8 +1825,7 @@ static void mlx5_vdpa_get_config(struct vdpa_device 
*vdev, unsigned int offset,
struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
 
-   if (offset + len <= sizeof(struct virtio_net_config))
-   memcpy(buf, (u8 *)&ndev->config + offset, len);
+   memcpy(buf, (u8 *)&ndev->config + offset, len);
 }
 
 static void mlx5_vdpa_set_config(struct vdpa_device *vdev, unsigned int 
offset, const void *buf,
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 779ae6c144d7..392180c6f2cf 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -451,9 +451,6 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, 
unsigned int offset,
 {
struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 
-   if (offset + len > vdpasim->dev_attr.config_size)
-   return;
-
if (vdpasim->dev_attr.get_config)
vdpasim->dev_attr.get_config(vdpasim, vdpasim->config);
 
@@ -465,9 +462,6 @@ static void vdpasim_set_config(struct vdpa_device *vdpa, 
unsigned int offset,
 {
struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 
-   if (offset + len > vdpasim->dev_attr.config_size)
-   return;
-
memcpy(vdpasim->config + offset, buf, len);
 
if (vdpasim->dev_attr.set_config)
-- 
2.29.2



[RFC PATCH 05/10] vdpa: remove WARN_ON() in the get/set_config callbacks

2021-02-16 Thread Stefano Garzarella
vdpa_get_config() and vdpa_set_config() now check parameters before
calling callbacks, so we can remove these warnings.

Signed-off-by: Stefano Garzarella 
---
Maybe we can skip this patch and leave the WARN_ONs in place.
What do you recommend?
---
 drivers/vdpa/ifcvf/ifcvf_base.c | 3 +--
 drivers/vdpa/ifcvf/ifcvf_main.c | 2 --
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
index f2a128e56de5..5941ecf934d0 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.c
+++ b/drivers/vdpa/ifcvf/ifcvf_base.c
@@ -222,7 +222,6 @@ void ifcvf_read_net_config(struct ifcvf_hw *hw, u64 offset,
u8 old_gen, new_gen, *p;
int i;
 
-   WARN_ON(offset + length > sizeof(struct virtio_net_config));
do {
		old_gen = ifc_ioread8(&hw->common_cfg->config_generation);
p = dst;
@@ -240,7 +239,7 @@ void ifcvf_write_net_config(struct ifcvf_hw *hw, u64 offset,
int i;
 
p = src;
-   WARN_ON(offset + length > sizeof(struct virtio_net_config));
+
for (i = 0; i < length; i++)
ifc_iowrite8(*p++, hw->net_cfg + offset + i);
 }
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index 2443271e17d2..e55f88c57461 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -343,7 +343,6 @@ static void ifcvf_vdpa_get_config(struct vdpa_device 
*vdpa_dev,
 {
struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
 
-   WARN_ON(offset + len > sizeof(struct virtio_net_config));
ifcvf_read_net_config(vf, offset, buf, len);
 }
 
@@ -353,7 +352,6 @@ static void ifcvf_vdpa_set_config(struct vdpa_device 
*vdpa_dev,
 {
struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
 
-   WARN_ON(offset + len > sizeof(struct virtio_net_config));
ifcvf_write_net_config(vf, offset, buf, len);
 }
 
-- 
2.29.2



[RFC PATCH 03/10] vdpa: add vdpa_set_config() helper

2021-02-16 Thread Stefano Garzarella
Let's add a function similar to vdpa_get_config() to check the
'offset' and 'len' parameters, call the set_config() device callback,
and return the amount of bytes written.

Signed-off-by: Stefano Garzarella 
---
 include/linux/vdpa.h |  2 ++
 drivers/vdpa/vdpa.c  | 16 
 2 files changed, 18 insertions(+)

diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 8a679c98f8b1..562fcd14f4b5 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -334,6 +334,8 @@ static inline int vdpa_set_features(struct vdpa_device 
*vdev, u64 features)
 
 int vdpa_get_config(struct vdpa_device *vdev, unsigned int offset,
void *buf, unsigned int len);
+int vdpa_set_config(struct vdpa_device *vdev, unsigned int offset,
+   const void *buf, unsigned int len);
 
 /**
  * vdpa_mgmtdev_ops - vdpa device ops
diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 9ed6c779c63c..825afc690a7e 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -86,6 +86,22 @@ int vdpa_get_config(struct vdpa_device *vdev, unsigned int 
offset,
 }
 EXPORT_SYMBOL_GPL(vdpa_get_config);
 
+int vdpa_set_config(struct vdpa_device *vdev, unsigned int offset,
+   const void *buf, unsigned int len)
+{
+   const struct vdpa_config_ops *ops = vdev->config;
+   int bytes_set;
+
+   bytes_set = vdpa_config_size_wrap(vdev, offset, len);
+   if (bytes_set <= 0)
+   return bytes_set;
+
+   ops->set_config(vdev, offset, buf, bytes_set);
+
+   return bytes_set;
+}
+EXPORT_SYMBOL_GPL(vdpa_set_config);
+
 static void vdpa_release_dev(struct device *d)
 {
struct vdpa_device *vdev = dev_to_vdpa(d);
-- 
2.29.2



[RFC PATCH 01/10] vdpa: add get_config_size callback in vdpa_config_ops

2021-02-16 Thread Stefano Garzarella
This new callback is used to get the size of the configuration space
of vDPA devices.
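
As a rough illustration of the pattern this patch adds (all names below are
made up, not the drivers'): each backend fills the new function pointer in
its ops table, so common code can query the config-space size through the
callback instead of hard-coding a per-device-type sizeof.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of the ops-table pattern: the core asks the
 * driver for its config-space size via a callback. */
struct toy_net_config {
	unsigned char  mac[6];
	unsigned short status;
};

struct toy_vdpa_ops {
	size_t (*get_config_size)(void *priv);
};

static size_t toy_net_get_config_size(void *priv)
{
	(void)priv;	/* a real driver would use its device state */
	return sizeof(struct toy_net_config);
}

static const struct toy_vdpa_ops toy_net_ops = {
	.get_config_size = toy_net_get_config_size,
};
```

Common code then never needs to know which device type it is talking to; it
just calls `ops->get_config_size(dev)`.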

Signed-off-by: Stefano Garzarella 
---
 include/linux/vdpa.h  | 4 
 drivers/vdpa/ifcvf/ifcvf_main.c   | 6 ++
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 6 ++
 drivers/vdpa/vdpa_sim/vdpa_sim.c  | 9 +
 4 files changed, 25 insertions(+)

diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 4ab5494503a8..fddf42b17573 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -150,6 +150,9 @@ struct vdpa_iova_range {
  * @set_status:Set the device status
  * @vdev: vdpa device
  * @status: virtio device status
+ * @get_config_size:   Get the size of the configuration space
+ * @vdev: vdpa device
+ * Returns size_t: configuration size
  * @get_config:Read from device specific configuration 
space
  * @vdev: vdpa device
  * @offset: offset from the beginning of
@@ -231,6 +234,7 @@ struct vdpa_config_ops {
u32 (*get_vendor_id)(struct vdpa_device *vdev);
u8 (*get_status)(struct vdpa_device *vdev);
void (*set_status)(struct vdpa_device *vdev, u8 status);
+   size_t (*get_config_size)(struct vdpa_device *vdev);
void (*get_config)(struct vdpa_device *vdev, unsigned int offset,
   void *buf, unsigned int len);
void (*set_config)(struct vdpa_device *vdev, unsigned int offset,
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index 7c8bbfcf6c3e..2443271e17d2 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -332,6 +332,11 @@ static u32 ifcvf_vdpa_get_vq_align(struct vdpa_device 
*vdpa_dev)
return IFCVF_QUEUE_ALIGNMENT;
 }
 
+static size_t ifcvf_vdpa_get_config_size(struct vdpa_device *vdpa_dev)
+{
+   return sizeof(struct virtio_net_config);
+}
+
 static void ifcvf_vdpa_get_config(struct vdpa_device *vdpa_dev,
  unsigned int offset,
  void *buf, unsigned int len)
@@ -392,6 +397,7 @@ static const struct vdpa_config_ops ifc_vdpa_ops = {
.get_device_id  = ifcvf_vdpa_get_device_id,
.get_vendor_id  = ifcvf_vdpa_get_vendor_id,
.get_vq_align   = ifcvf_vdpa_get_vq_align,
+   .get_config_size= ifcvf_vdpa_get_config_size,
.get_config = ifcvf_vdpa_get_config,
.set_config = ifcvf_vdpa_set_config,
.set_config_cb  = ifcvf_vdpa_set_config_cb,
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 10e9b09932eb..78043ee567b6 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1814,6 +1814,11 @@ static void mlx5_vdpa_set_status(struct vdpa_device 
*vdev, u8 status)
ndev->mvdev.status |= VIRTIO_CONFIG_S_FAILED;
 }
 
+static size_t mlx5_vdpa_get_config_size(struct vdpa_device *vdev)
+{
+   return sizeof(struct virtio_net_config);
+}
+
 static void mlx5_vdpa_get_config(struct vdpa_device *vdev, unsigned int 
offset, void *buf,
 unsigned int len)
 {
@@ -1900,6 +1905,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
.get_vendor_id = mlx5_vdpa_get_vendor_id,
.get_status = mlx5_vdpa_get_status,
.set_status = mlx5_vdpa_set_status,
+   .get_config_size = mlx5_vdpa_get_config_size,
.get_config = mlx5_vdpa_get_config,
.set_config = mlx5_vdpa_set_config,
.get_generation = mlx5_vdpa_get_generation,
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index d5942842432d..779ae6c144d7 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -439,6 +439,13 @@ static void vdpasim_set_status(struct vdpa_device *vdpa, 
u8 status)
	spin_unlock(&vdpasim->lock);
 }
 
+static size_t vdpasim_get_config_size(struct vdpa_device *vdpa)
+{
+   struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
+   return vdpasim->dev_attr.config_size;
+}
+
 static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset,
 void *buf, unsigned int len)
 {
@@ -566,6 +573,7 @@ static const struct vdpa_config_ops vdpasim_config_ops = {
.get_vendor_id  = vdpasim_get_vendor_id,
.get_status = vdpasim_get_status,
.set_status = vdpasim_set_status,
+   .get_config_size= vdpasim_get_config_size,
.get_config = vdpasim_get_config,
.set_config = vdpasim_set_config,
.get_generation = vdpasim_get_generation,
@@ -593,6 +601,7 @@ static const struct vdpa_config_ops 
vdpasim_batch_config_ops = {
.get_vendor_id  = vdpasim_get_vendor_id,
	.get_status = vdpasim_get_status,

[RFC PATCH 02/10] vdpa: check vdpa_get_config() parameters and return bytes read

2021-02-16 Thread Stefano Garzarella
Now we have the 'get_config_size()' callback available, so we can
check that 'offset' and 'len' parameters are valid.

When these exceed boundaries, we limit the reading to the available
configuration space in the device, and we return the amount of bytes
read.

We also move the vdpa_get_config() implementation into drivers/vdpa/vdpa.c,
since the function is growing.
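
An illustrative model of the boundary check described above (the function
name and sizes are stand-ins, not the kernel's): an access starting past the
config space, or asking for more than the whole space, is rejected; otherwise
the length is clamped to the bytes actually available from the offset.

```c
#include <assert.h>

/* Toy version of the size-check helper: reject out-of-range requests,
 * clamp in-range ones to the bytes available after 'offset'. */
static int clamp_config_access(unsigned int config_size,
			       unsigned int offset, unsigned int len)
{
	if (offset > config_size || len > config_size)
		return -1;	/* invalid request */
	return len < config_size - offset ? len : config_size - offset;
}
```

For a 12-byte config space, reading 8 bytes at offset 8 is trimmed to the
4 bytes that remain, and a read starting at offset 16 is rejected outright.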

Signed-off-by: Stefano Garzarella 
---
 include/linux/vdpa.h | 16 ++--
 drivers/vdpa/vdpa.c  | 35 +++
 2 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index fddf42b17573..8a679c98f8b1 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -332,20 +332,8 @@ static inline int vdpa_set_features(struct vdpa_device 
*vdev, u64 features)
 return ops->set_features(vdev, features);
 }
 
-
-static inline void vdpa_get_config(struct vdpa_device *vdev, unsigned offset,
-  void *buf, unsigned int len)
-{
-const struct vdpa_config_ops *ops = vdev->config;
-
-   /*
-* Config accesses aren't supposed to trigger before features are set.
-* If it does happen we assume a legacy guest.
-*/
-   if (!vdev->features_valid)
-   vdpa_set_features(vdev, 0);
-   ops->get_config(vdev, offset, buf, len);
-}
+int vdpa_get_config(struct vdpa_device *vdev, unsigned int offset,
+   void *buf, unsigned int len);
 
 /**
  * vdpa_mgmtdev_ops - vdpa device ops
diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 3d997b389345..9ed6c779c63c 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -51,6 +51,41 @@ static struct bus_type vdpa_bus = {
.remove = vdpa_dev_remove,
 };
 
+static int vdpa_config_size_wrap(struct vdpa_device *vdev, unsigned int offset,
+unsigned int len)
+{
+   const struct vdpa_config_ops *ops = vdev->config;
+   unsigned int config_size = ops->get_config_size(vdev);
+
+   if (offset > config_size || len > config_size)
+   return -1;
+
+   return min(len, config_size - offset);
+}
+
+int vdpa_get_config(struct vdpa_device *vdev, unsigned int offset,
+   void *buf, unsigned int len)
+{
+   const struct vdpa_config_ops *ops = vdev->config;
+   int bytes_get;
+
+   bytes_get = vdpa_config_size_wrap(vdev, offset, len);
+   if (bytes_get <= 0)
+   return bytes_get;
+
+   /*
+* Config accesses aren't supposed to trigger before features are set.
+* If it does happen we assume a legacy guest.
+*/
+   if (!vdev->features_valid)
+   vdpa_set_features(vdev, 0);
+
+   ops->get_config(vdev, offset, buf, bytes_get);
+
+   return bytes_get;
+}
+EXPORT_SYMBOL_GPL(vdpa_get_config);
+
 static void vdpa_release_dev(struct device *d)
 {
struct vdpa_device *vdev = dev_to_vdpa(d);
-- 
2.29.2



[RFC PATCH 00/10] vdpa: get/set_config() rework

2021-02-16 Thread Stefano Garzarella
Following the discussion with Michael and Jason [1], I reworked
get/set_config() in vdpa a bit.

I changed vdpa_get_config() to check the boundaries and added vdpa_set_config().
When 'offset' or 'len' parameters exceed boundaries, we limit the reading to
the available configuration space in the device, and we return the amount of
bytes read/written.

In this way the user space can pass buffers bigger than config space.
I also returned the amount of bytes read and written to user space.

Patches also available here:
https://github.com/stefano-garzarella/linux/tree/vdpa-get-set-config-refactoring

Thanks for your comments,
Stefano

[1] https://lkml.org/lkml/2021/2/10/350

Stefano Garzarella (10):
  vdpa: add get_config_size callback in vdpa_config_ops
  vdpa: check vdpa_get_config() parameters and return bytes read
  vdpa: add vdpa_set_config() helper
  vdpa: remove param checks in the get/set_config callbacks
  vdpa: remove WARN_ON() in the get/set_config callbacks
  virtio_vdpa: use vdpa_set_config()
  vhost/vdpa: use vdpa_set_config()
  vhost/vdpa: allow user space to pass buffers bigger than config space
  vhost/vdpa: use get_config_size callback in
vhost_vdpa_config_validate()
  vhost/vdpa: return configuration bytes read and written to user space

 include/linux/vdpa.h  | 22 ---
 drivers/vdpa/ifcvf/ifcvf_base.c   |  3 +-
 drivers/vdpa/ifcvf/ifcvf_main.c   |  8 +++-
 drivers/vdpa/mlx5/net/mlx5_vnet.c |  9 -
 drivers/vdpa/vdpa.c   | 51 
 drivers/vdpa/vdpa_sim/vdpa_sim.c  | 15 +---
 drivers/vhost/vdpa.c  | 64 ---
 drivers/virtio/virtio_vdpa.c  |  3 +-
 8 files changed, 116 insertions(+), 59 deletions(-)

-- 
2.29.2



Re: [External] Re: vsock virtio: questions about supporting DGRAM type

2021-02-16 Thread Stefano Garzarella

On Tue, Feb 16, 2021 at 12:23:19AM -0800, Jiang Wang . wrote:

On Tue, Feb 16, 2021 at 12:09 AM Stefano Garzarella  wrote:


On Mon, Feb 15, 2021 at 08:50:36PM -0800, Jiang Wang . wrote:
>On Mon, Feb 15, 2021 at 12:31 AM Stefano Garzarella  
wrote:
>>
>> On Sat, Feb 13, 2021 at 03:26:18PM -0800, Jiang Wang . wrote:
>> >On Fri, Feb 12, 2021 at 1:02 AM Stefano Garzarella  
wrote:
>> >>
>> >> Hi Jiang,
>> >>
>> >> CCing Arseny who is working on SOCK_SEQPACKET support for virtio-vsock
>> >> [1].
>> >>
>> >> On Thu, Feb 11, 2021 at 10:04:34PM -0800, Jiang Wang . wrote:
>> >> >Hi guys,
>> >> >
>> >> >I am working on supporting DGRAM type for virtio/vhost vsock. I
>> >> >already did some work and a draft code is here (which passed my tests,
>> >> >but still need some cleanup and only works from host to guest as of
>> >> >now, will add host to guest soon):
>> >> 
>https://github.com/Jiang1155/linux/commit/4e89736e0bce15496460ff411cb4694b143d1c3d
>> >> >qemu changes are here:
>> >> 
>https://github.com/Jiang1155/qemu/commit/7ab778801e3e8969ab98e44539943810a2fb03eb
>> >> >
>> >> >Today, I just noticed that the Asias had an old version of virtio
>> >> >which had both dgram and stream support, see this link:
>> >> 
>https://kvm.vger.kernel.narkive.com/BMvH9eEr/rfc-v2-0-7-introduce-vm-sockets-virtio-transport#post1
>> >> >
>> >> >But somehow, the dgram part seems never merged to upstream linux (the
>> >> >stream part is merged). If so, does anyone know what is the reason for
>> >> >this? Did we drop dgram support for some specific reason or the code
>> >> >needs some improvement?
>> >>
>> >> I wasn't involved yet in virtio-vsock development when Asias posted that
>> >> patches, so I don't know the exact reason.
>> >>
>> >> Maybe could be related on how to handle the credit mechanism for a
>> >> connection-less sockets and how to manage the packet queue, if for
>> >> example no one is listening.
>> >>
>> >
>> >I see. thanks.
>> >
>> >> >
>> >> >My current code differs from Asias' code in some ways. It does not use
>> >> >credit and does not support fragmentation.  It basically adds two virt
>> >>
>> >> If you don't use credit, do you have some threshold when you start to
>> >> drop packets on the RX side?
>> >>
>> >
>> >As of now, I don't have any threshold to drop packets on RX side. In my
>> >view, DGRAM is like UDP and is a best effort service. If the virtual
>> >queue
>> >is full on TX (or the available buffer size is less than the 
>> >packet size),

>> >I drop the packets on the TX side.
>>
>> I have to check the code, my concern is we should have a limit for the
>> allocation, for example if the user space doesn't consume RX packets.
>>
>
>I think we already have a limit for the allocation. If a DGRAM client
>sends packets while no socket is bound on the server side ,
>the packet will be dropped when looking for the socket. If the socket is
>bound on the server side, but the server program somehow does not
>call recv or similar functions, the packets will be dropped when the
>buffer is full as in your previous patch at here:
>https://lore.kernel.org/patchwork/patch/1141083/
>If there are still other cases that we don't have protection, let me know and
>I can add some threshold.

Maybe I misunderstood, but I thought you didn't use credit for DGRAM
messages.


I don't use credit for DGRAM. But the dropping code I mentioned does not
rely on credit either. Just to make it clear, the following is the part of
the code that I referred to:

static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
struct virtio_vsock_pkt *pkt)
{
if (vvs->rx_bytes + pkt->len > vvs->buf_alloc)
return false;

vvs->rx_bytes += pkt->len;
return true;
}


Okay, now it's clear. The transmitter doesn't care about the receiver's 
credit, but the receiver uses that part of the credit mechanism to 
discard packets.


I think that's fine, but when you send the patches, explain it well in 
the cover letter/commit message.
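
A toy model of the receive-side check quoted above (the struct and names
below are stand-ins, not the kernel's): the receiver accounts queued bytes
against the socket's buffer allocation and refuses packets that would
overflow it, which is what bounds memory use when userspace stops reading.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_vsock {
	uint32_t rx_bytes;	/* bytes currently queued for the socket */
	uint32_t buf_alloc;	/* receive buffer allocation */
};

/* Mirrors the shape of virtio_transport_inc_rx_pkt(): returns false
 * when the packet would exceed the allocation, so the caller drops it. */
static bool toy_inc_rx_pkt(struct toy_vsock *vvs, uint32_t pkt_len)
{
	if (vvs->rx_bytes + pkt_len > vvs->buf_alloc)
		return false;	/* caller drops the packet */
	vvs->rx_bytes += pkt_len;
	return true;
}
```

Once the socket's allocation is exhausted, every further packet is rejected
until userspace reads and `rx_bytes` is decremented again.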






If you use it to limit buffer occupancy, then it should be okay, but
again, I still have to look at the code :-)


Sure. I think you mean you need to look at my final patches. I will
send them later. Just a friendly reminder, if you just want to get some
idea of my patches, you could check this link :
https://github.com/Jiang1155/linux/commit/4e89736e0bce15496460ff411cb4694b143d1c3d


It's my fault because I didn't have time, I saw the link. :-)

Remember that we should also plan changes to the VIRTIO spec for the 
DGRAM support.


Thanks,
Stefano



Re: [External] Re: vsock virtio: questions about supporting DGRAM type

2021-02-16 Thread Jiang Wang .
On Tue, Feb 16, 2021 at 12:09 AM Stefano Garzarella  wrote:
>
> On Mon, Feb 15, 2021 at 08:50:36PM -0800, Jiang Wang . wrote:
> >On Mon, Feb 15, 2021 at 12:31 AM Stefano Garzarella  
> >wrote:
> >>
> >> On Sat, Feb 13, 2021 at 03:26:18PM -0800, Jiang Wang . wrote:
> >> >On Fri, Feb 12, 2021 at 1:02 AM Stefano Garzarella  
> >> >wrote:
> >> >>
> >> >> Hi Jiang,
> >> >>
> >> >> CCing Arseny who is working on SOCK_SEQPACKET support for virtio-vsock
> >> >> [1].
> >> >>
> >> >> On Thu, Feb 11, 2021 at 10:04:34PM -0800, Jiang Wang . wrote:
> >> >> >Hi guys,
> >> >> >
> >> >> >I am working on supporting DGRAM type for virtio/vhost vsock. I
> >> >> >already did some work and a draft code is here (which passed my tests,
> >> >> >but still need some cleanup and only works from host to guest as of
> >> >> >now, will add host to guest soon):
> >> >> >https://github.com/Jiang1155/linux/commit/4e89736e0bce15496460ff411cb4694b143d1c3d
> >> >> >qemu changes are here:
> >> >> >https://github.com/Jiang1155/qemu/commit/7ab778801e3e8969ab98e44539943810a2fb03eb
> >> >> >
> >> >> >Today, I just noticed that the Asias had an old version of virtio
> >> >> >which had both dgram and stream support, see this link:
> >> >> >https://kvm.vger.kernel.narkive.com/BMvH9eEr/rfc-v2-0-7-introduce-vm-sockets-virtio-transport#post1
> >> >> >
> >> >> >But somehow, the dgram part seems never merged to upstream linux (the
> >> >> >stream part is merged). If so, does anyone know what is the reason for
> >> >> >this? Did we drop dgram support for some specific reason or the code
> >> >> >needs some improvement?
> >> >>
> >> >> I wasn't involved yet in virtio-vsock development when Asias posted that
> >> >> patches, so I don't know the exact reason.
> >> >>
> >> >> Maybe could be related on how to handle the credit mechanism for a
> >> >> connection-less sockets and how to manage the packet queue, if for
> >> >> example no one is listening.
> >> >>
> >> >
> >> >I see. thanks.
> >> >
> >> >> >
> >> >> >My current code differs from Asias' code in some ways. It does not use
> >> >> >credit and does not support fragmentation.  It basically adds two virt
> >> >>
> >> >> If you don't use credit, do you have some threshold at which you start
> >> >> to drop packets on the RX side?
> >> >>
> >> >
> >> >As of now, I don't have any threshold to drop packets on the RX side. In
> >> >my view, DGRAM is like UDP and is a best-effort service. If the virtual
> >> >queue is full on TX (or the available buffer size is less than the packet
> >> >size), I drop the packets on the TX side.
> >>
> >> I have to check the code; my concern is that we should have a limit on the
> >> allocation, for example if the user space doesn't consume RX packets.
> >>
> >
> >I think we already have a limit for the allocation. If a DGRAM client
> >sends packets while no socket is bound on the server side,
> >the packets will be dropped when looking for the socket. If the socket is
> >bound on the server side, but the server program somehow does not
> >call recv or similar functions, the packets will be dropped when the
> >buffer is full, as in your previous patch here:
> >https://lore.kernel.org/patchwork/patch/1141083/
> >If there are still other cases where we don't have protection, let me know
> >and I can add some threshold.
>
> Maybe I misunderstood, but I thought you didn't use credit for DGRAM
> messages.
>
I don't use credit for DGRAM. But the dropping code I mentioned does not
rely on credit either. Just to make it clear, the following is the part of
the code that I referred to:

static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
					struct virtio_vsock_pkt *pkt)
{
	if (vvs->rx_bytes + pkt->len > vvs->buf_alloc)
		return false;

	vvs->rx_bytes += pkt->len;
	return true;
}
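That check only bounds RX memory if recv() releases the budget again. As a
rough userspace sketch of that accounting (the struct and helper names below
are made up for illustration, not the kernel API; the kernel keeps the
equivalent state in struct virtio_vsock_sock):

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal model of per-socket RX accounting: packets are counted in on
 * enqueue and counted out when the receiver consumes them, so queued
 * bytes can never exceed buf_alloc even without credit updates. */
struct vsock_rx_model {
	uint32_t rx_bytes;  /* bytes currently queued for the socket */
	uint32_t buf_alloc; /* per-socket RX buffer budget */
};

/* Enqueue: accept the packet only if it fits in the remaining budget;
 * otherwise the caller drops it. */
static bool model_inc_rx_pkt(struct vsock_rx_model *vvs, uint32_t len)
{
	if (vvs->rx_bytes + len > vvs->buf_alloc)
		return false;
	vvs->rx_bytes += len;
	return true;
}

/* Dequeue: recv() frees budget, so new packets can be accepted again. */
static void model_dec_rx_pkt(struct vsock_rx_model *vvs, uint32_t len)
{
	vvs->rx_bytes -= len;
}
```

So with a 4 KiB budget, two 2 KiB packets fill the queue, a third is dropped,
and space reopens only once the receiver reads one of them.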


> If you use it to limit buffer occupancy, then it should be okay, but
> again, I still have to look at the code :-)

Sure. I think you mean you need to look at my final patches. I will
send them later. Just a friendly reminder: if you just want to get some
idea of my patches, you can check this link:
https://github.com/Jiang1155/linux/commit/4e89736e0bce15496460ff411cb4694b143d1c3d

thanks. :)

> Thanks,
> Stefano
>


Re: [External] Re: vsock virtio: questions about supporting DGRAM type

2021-02-16 Thread Stefano Garzarella

On Mon, Feb 15, 2021 at 08:50:36PM -0800, Jiang Wang . wrote:

On Mon, Feb 15, 2021 at 12:31 AM Stefano Garzarella  wrote:


On Sat, Feb 13, 2021 at 03:26:18PM -0800, Jiang Wang . wrote:
>On Fri, Feb 12, 2021 at 1:02 AM Stefano Garzarella  wrote:
>>
>> Hi Jiang,
>>
>> CCing Arseny who is working on SOCK_SEQPACKET support for virtio-vsock
>> [1].
>>
>> On Thu, Feb 11, 2021 at 10:04:34PM -0800, Jiang Wang . wrote:
>> >Hi guys,
>> >
>> >I am working on supporting DGRAM type for virtio/vhost vsock. I
>> >already did some work and a draft code is here (which passed my tests,
>> >but still needs some cleanup and only works from host to guest as of
>> >now, will add guest to host soon):
>> >https://github.com/Jiang1155/linux/commit/4e89736e0bce15496460ff411cb4694b143d1c3d
>> >qemu changes are here:
>> >https://github.com/Jiang1155/qemu/commit/7ab778801e3e8969ab98e44539943810a2fb03eb
>> >
>> >Today, I just noticed that Asias had an old version of virtio
>> >which had both dgram and stream support, see this link:
>> >https://kvm.vger.kernel.narkive.com/BMvH9eEr/rfc-v2-0-7-introduce-vm-sockets-virtio-transport#post1
>> >
>> >But somehow, the dgram part seems to have never been merged into upstream
>> >Linux (the stream part is merged). If so, does anyone know the reason for
>> >this? Did we drop dgram support for some specific reason, or does the
>> >code need some improvement?
>>
>> I wasn't involved yet in virtio-vsock development when Asias posted those
>> patches, so I don't know the exact reason.
>>
>> Maybe it could be related to how to handle the credit mechanism for
>> connection-less sockets and how to manage the packet queue, if for
>> example no one is listening.
>>
>
>I see. thanks.
>
>> >
>> >My current code differs from Asias' code in some ways. It does not use
>> >credit and does not support fragmentation.  It basically adds two virt
>>
>> If you don't use credit, do you have some threshold at which you start
>> to drop packets on the RX side?
>>
>
>As of now, I don't have any threshold to drop packets on the RX side. In
>my view, DGRAM is like UDP and is a best-effort service. If the virtual
>queue is full on TX (or the available buffer size is less than the packet
>size), I drop the packets on the TX side.

I have to check the code; my concern is that we should have a limit on the
allocation, for example if the user space doesn't consume RX packets.



I think we already have a limit for the allocation. If a DGRAM client
sends packets while no socket is bound on the server side,
the packets will be dropped when looking for the socket. If the socket is
bound on the server side, but the server program somehow does not
call recv or similar functions, the packets will be dropped when the
buffer is full, as in your previous patch here:
https://lore.kernel.org/patchwork/patch/1141083/
If there are still other cases where we don't have protection, let me know
and I can add some threshold.


Maybe I misunderstood, but I thought you didn't use credit for DGRAM 
messages.


If you use it to limit buffer occupancy, then it should be okay, but 
again, I still have to look at the code :-)


Thanks,
Stefano
