[PATCH V3] virtio: disable notification hardening by default

2022-06-21 Thread Jason Wang
We try to harden virtio device notifications in 8b4ec69d7e09 ("virtio: harden vring IRQ"). It works under the assumption that the driver or core can properly call virtio_device_ready() at the right place. Unfortunately, this seems not to be true, and the hardening uncovers various bugs in existing drivers,
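For reference, the ordering the hardening assumes looks roughly like this (a minimal sketch, not any specific driver; the virtfoo_* names are hypothetical):

    static int virtfoo_probe(struct virtio_device *vdev)
    {
            int err;

            err = virtfoo_setup_vqs(vdev);  /* hypothetical vq setup helper */
            if (err)
                    return err;

            /* DRIVER_OK must be set before the device may legitimately
             * notify the driver; the hardened core treats interrupts
             * arriving earlier as spurious. */
            virtio_device_ready(vdev);

            return 0;
    }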

Re: [PATCH V2] virtio: disable notification hardening by default

2022-06-21 Thread Jason Wang
On Tue, Jun 21, 2022 at 5:58 PM Cornelia Huck wrote: > > On Tue, Jun 21 2022, Jason Wang wrote: > > > On Tue, Jun 21, 2022 at 5:16 PM Cornelia Huck wrote: > >> > >> The ifdeffery looks a bit ugly, but I don't have a better idea. > > > > I guess you meant the ccw part, I leave the spinlock here

Re: [PATCH v2 2/5] vfio/iommu_type1: Prefer to reuse domains vs match enforced cache coherency

2022-06-21 Thread Alex Williamson
On Wed, 15 Jun 2022 17:03:01 -0700 Nicolin Chen wrote: > From: Jason Gunthorpe > > The KVM mechanism for controlling wbinvd is based on OR of the coherency > property of all devices attached to a guest, no matter whether those devices are > attached to a single domain or multiple domains. > > So,
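A sketch of the property being aggregated (struct and field names assumed to follow the series, not the actual patch): the guest can only skip wbinvd emulation if every attached domain enforces cache coherency.

    static bool all_domains_enforce_coherency(struct vfio_iommu *iommu)
    {
            struct vfio_domain *d;

            /* KVM must emulate wbinvd unless this holds for all domains. */
            list_for_each_entry(d, &iommu->domain_list, next)
                    if (!d->enforce_cache_coherency)
                            return false;
            return true;
    }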

[PATCH 3/3] vdpa_sim_blk: call vringh_complete_iotlb() also in the error path

2022-06-21 Thread Stefano Garzarella
Call vringh_complete_iotlb() even when we encounter a serious error that prevents us from writing the status in the "in" header (e.g. the header length is incorrect, etc.). The guest is misbehaving, so maybe the ring is in a bad state, but let's avoid making things worse. Signed-off-by: Stefano
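In sketch form (error handling elided; in_len and status are illustrative locals, not the exact patch), the idea is to return the descriptor to the used ring even when the status byte cannot be written:

    if (in_len < sizeof(status)) {
            dev_dbg(&vdpasim->vdpa.dev, "in buffer too short for status\n");
            /* Still complete the descriptor, with length 0, so the
             * used ring does not leak entries. */
            vringh_complete_iotlb(&vq->vring, vq->head, 0);
            return false;
    }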

Re: virtio_balloon regression in 5.19-rc3

2022-06-21 Thread Ben Hutchings
On Tue, 2022-06-21 at 17:34 +0800, Jason Wang wrote: > On Tue, Jun 21, 2022 at 5:24 PM David Hildenbrand wrote: > > > > On 20.06.22 20:49, Ben Hutchings wrote: > > > I've tested a 5.19-rc3 kernel on top of QEMU/KVM with machine type > > > pc-q35-5.2. It has a virtio balloon device defined in

[PATCH 2/3] vdpa_sim_blk: limit the number of request handled per batch

2022-06-21 Thread Stefano Garzarella
Limit the number of requests (4 per queue, as in vdpa_sim_net) handled in a batch to prevent the worker from using the CPU for too long. Suggested-by: Eugenio Pérez Signed-off-by: Stefano Garzarella --- drivers/vdpa/vdpa_sim/vdpa_sim_blk.c | 15 ++- 1 file changed, 14
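A sketch of the batching pattern (the hardcoded limit of 4 mirrors vdpa_sim_net; the surrounding worker code is elided):

    static void vdpasim_blk_work(struct work_struct *work)
    {
            struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
            int reqs = 0;
            ...
            while (vdpasim_blk_handle_req(vdpasim, vq)) {
                    /* Give the CPU back after a small batch and
                     * reschedule ourselves to continue later. */
                    if (++reqs > 4) {
                            schedule_work(&vdpasim->work);
                            break;
                    }
            }
            ...
    }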

[PATCH 1/3] vdpa_sim_blk: use dev_dbg() to print errors

2022-06-21 Thread Stefano Garzarella
Use dev_dbg() instead of dev_err()/dev_warn() to avoid flooding the host with prints when the guest driver is misbehaving. In this way, prints can be dynamically enabled when the vDPA block simulator is used to validate a driver. Suggested-by: Jason Wang Signed-off-by: Stefano Garzarella ---
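The change itself is mechanical (sketch; the message text is illustrative):

    -       dev_err(&vdpasim->vdpa.dev, "request out of range\n");
    +       dev_dbg(&vdpasim->vdpa.dev, "request out of range\n");

With CONFIG_DYNAMIC_DEBUG the prints can then be enabled at runtime, e.g. with:

    # echo 'module vdpa_sim_blk +p' > /sys/kernel/debug/dynamic_debug/control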

[PATCH 0/3] vdpa_sim_blk: several fixes for the vDPA block simulator

2022-06-21 Thread Stefano Garzarella
The first two patches essentially limit the possibility of the guest doing a DoS to the host. The third makes the simulator more correct (following what we do in vdpa_sim_net) by calling vringh_complete_iotlb() in the error path as well. Stefano Garzarella (3): vdpa_sim_blk: use dev_dbg() to

Re: [PATCH v2 19/19] vdpasim: control virtqueue support

2022-06-21 Thread Stefano Garzarella
Hi Gautam, On Wed, Mar 30, 2022 at 8:21 PM Gautam Dawar wrote: > > This patch introduces the control virtqueue support for vDPA > simulator. This is a requirement for supporting advanced features like > multiqueue. > > A requirement for control virtqueue is to isolate its memory access > from
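The isolation requirement boils down to giving the control virtqueue its own virtqueue group, which userspace can then bind to a separate address space (a sketch of the grouping only; the series also wires up .set_group_asid):

    /* net simulator: vqs 0/1 carry data, vq 2 is the control vq */
    static u32 vdpasim_get_vq_group(struct vdpa_device *vdpa, u16 idx)
    {
            return idx == 2 ? 1 : 0;    /* group 1 isolates the CVQ */
    }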

[PATCH] vdpa_sim_blk: set number of address spaces and virtqueue groups

2022-06-21 Thread Stefano Garzarella
Commit bda324fd037a ("vdpasim: control virtqueue support") added two new fields (nas, ngroups) to vdpasim_dev_attr, but we forgot to initialize them for vdpa_sim_blk. When creating a new vdpa_sim_blk device this causes the kernel to panic in this way: $ vdpa dev add mgmtdev vdpasim_blk name
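The fix amounts to a one-time initialization in the block simulator's dev_add path (sketch; the *_NUM constant names are assumed):

    dev_attr.ngroups = VDPASIM_BLK_GROUP_NUM;   /* 1: single vq group      */
    dev_attr.nas = VDPASIM_BLK_AS_NUM;          /* 1: single address space */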

[PATCH] vdpa_sim: use max_iotlb_entries as a limit in vhost_iotlb_init

2022-06-21 Thread Stefano Garzarella
Commit bda324fd037a ("vdpasim: control virtqueue support") changed the allocation of iotlbs calling vhost_iotlb_init() for each address space, instead of vhost_iotlb_alloc(). With this change we forgot to use the limit we had introduced with the `max_iotlb_entries` module parameter. Fixes:
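Sketch of the fix: pass the module parameter instead of 0 (unlimited) when initializing the iotlb of every address space:

    int i;

    for (i = 0; i < vdpasim->dev_attr.nas; i++)
            vhost_iotlb_init(&vdpasim->iommu[i], max_iotlb_entries, 0);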

[PATCH net] virtio_net: fix xdp_rxq_info bug after suspend/resume

2022-06-21 Thread Stephan Gerhold
The following sequence currently causes a driver bug warning when using virtio_net:

   # ip link set eth0 up
   # echo mem > /sys/power/state (or e.g. # rtcwake -s 10 -m mem)
   # ip link set eth0 down

   Missing register, driver bug
   WARNING: CPU: 0 PID: 375 at net/core/xdp.c:138
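One plausible shape of a fix (an assumption on my part, not confirmed by the snippet above): make the PM callbacks reuse virtnet_close()/virtnet_open(), which unregister and re-register xdp_rxq_info symmetrically, instead of open-coding the napi teardown:

    static void virtnet_freeze_down(struct virtio_device *vdev)
    {
            struct virtnet_info *vi = vdev->priv;

            /* locking around detach elided in this sketch */
            flush_work(&vi->config_work);
            netif_device_detach(vi->dev);
            if (netif_running(vi->dev))
                    virtnet_close(vi->dev);  /* also unregs xdp_rxq_info */
    }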

[PATCH 2/2] virtio_mmio: Restore guest page size on resume

2022-06-21 Thread Stephan Gerhold
Virtio devices might lose their state when the VMM is restarted after a suspend to disk (hibernation) cycle. This means that the guest page size register must be restored for the virtio_mmio legacy interface, since otherwise the virtio queues are not functional. This is particularly problematic
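Sketch of the restore path for the legacy (version 1) interface:

    static int virtio_mmio_restore(struct device *dev)
    {
            struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev);

            /* The legacy guest page size register is lost across
             * hibernation and must be rewritten before the queues
             * can work again. */
            if (vm_dev->version == 1)
                    writel(PAGE_SIZE,
                           vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE);

            return virtio_device_restore(&vm_dev->vdev);
    }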

[PATCH 1/2] virtio_mmio: Add missing PM calls to freeze/restore

2022-06-21 Thread Stephan Gerhold
Most virtio drivers provide freeze/restore callbacks to finish up device usage before suspend and to reinitialize the virtio device after resume. However, these callbacks are currently only called when using virtio_pci. virtio_mmio does not have any PM ops defined. This causes problems for
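The missing plumbing is a dev_pm_ops that forwards to the virtio core (sketch; virtio_mmio_restore is as in the previous entry):

    #ifdef CONFIG_PM_SLEEP
    static int virtio_mmio_freeze(struct device *dev)
    {
            struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev);

            return virtio_device_freeze(&vm_dev->vdev);
    }

    static const struct dev_pm_ops virtio_mmio_pm_ops = {
            SET_SYSTEM_SLEEP_PM_OPS(virtio_mmio_freeze, virtio_mmio_restore)
    };
    #endif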

[PATCH 0/2] virtio_mmio: Fix suspend to disk (hibernation)

2022-06-21 Thread Stephan Gerhold
At the moment suspend to disk (hibernation) works correctly when using virtio_pci, but not when using virtio_mmio. This is because virtio_mmio does not call the freeze/restore callbacks provided by most virtio drivers. Fix this by adding the missing PM calls to virtio_mmio and restore the guest

Re: [PATCH V2] virtio: disable notification hardening by default

2022-06-21 Thread Cornelia Huck
On Tue, Jun 21 2022, Jason Wang wrote: > On Tue, Jun 21, 2022 at 5:16 PM Cornelia Huck wrote: >> >> The ifdeffery looks a bit ugly, but I don't have a better idea. > > I guess you meant the ccw part, I leave the spinlock here in V1, but > Michael prefers to have that. Not doing the locking

Re: [PATCH V2] virtio: disable notification hardening by default

2022-06-21 Thread Jason Wang
On Tue, Jun 21, 2022 at 5:16 PM Cornelia Huck wrote: > > On Mon, Jun 20 2022, Jason Wang wrote: > > > We try to harden virtio device notifications in 8b4ec69d7e09 ("virtio: > > harden vring IRQ"). It works with the assumption that the driver or > > core can properly call virtio_device_ready() at

Re: virtio_balloon regression in 5.19-rc3

2022-06-21 Thread Jason Wang
On Tue, Jun 21, 2022 at 5:24 PM David Hildenbrand wrote: > > On 20.06.22 20:49, Ben Hutchings wrote: > > I've tested a 5.19-rc3 kernel on top of QEMU/KVM with machine type > > pc-q35-5.2. It has a virtio balloon device defined in libvirt as:

Re: virtio_balloon regression in 5.19-rc3

2022-06-21 Thread David Hildenbrand
On 20.06.22 20:49, Ben Hutchings wrote: > I've tested a 5.19-rc3 kernel on top of QEMU/KVM with machine type > pc-q35-5.2. It has a virtio balloon device defined in libvirt, but the > virtio_balloon driver fails to bind to it:

Re: [PATCH V2] virtio: disable notification hardening by default

2022-06-21 Thread Cornelia Huck
On Mon, Jun 20 2022, Jason Wang wrote: > We try to harden virtio device notifications in 8b4ec69d7e09 ("virtio: > harden vring IRQ"). It works with the assumption that the driver or > core can properly call virtio_device_ready() at the right > place. Unfortunately, this seems not to be true and

Re: [PATCH RFC 1/3] vdpa/mlx5: Implement suspend virtqueue callback

2022-06-21 Thread Jason Wang
On Tue, Jun 21, 2022 at 3:49 PM Eugenio Perez Martin wrote: > > On Tue, Jun 21, 2022 at 5:05 AM Jason Wang wrote: > > > > On Mon, Jun 20, 2022 at 5:59 PM Eugenio Perez Martin > > wrote: > > > > > > On Mon, Jun 20, 2022 at 10:56 AM Jason Wang wrote: > > > > > > > > On Thu, Jun 16, 2022 at 9:27