ered will be set to true. It will no longer be set to
> > false. Unless we explicitly call virtqueue_enable_cb_delayed or
> > virtqueue_enable_cb_prepare.
>
> This patch (committed as 35395770f803 ("virtio_ring: don't update event
> idx on get_buf") in next-20230327 apparently breaks
On Sat, Mar 25, 2023 at 8:27 AM Shannon Nelson wrote:
>
> On 3/22/23 10:18 PM, Jason Wang wrote:
> > On Thu, Mar 23, 2023 at 3:11 AM Shannon Nelson
> > wrote:
> >>
> >> This is the vDPA device support, where we advertise that we can
> >> support the virtio queues and deal with the configuration
On Tue, Mar 28, 2023 at 11:33 AM Yongji Xie wrote:
>
> On Tue, Mar 28, 2023 at 11:14 AM Jason Wang wrote:
> >
> > On Tue, Mar 28, 2023 at 11:03 AM Yongji Xie wrote:
> > >
> > > On Fri, Mar 24, 2023 at 2:28 PM Jason Wang wrote:
> > > >
> > > > On Thu, Mar 23, 2023 at 1:31 PM Xie Yongji
> > >
> > > > virtqueue_enable_cb_prepare.
> > >
> > > This patch (committed as 35395770f803 ("virtio_ring: don't update event
> > > idx on get_buf") in next-20230327 apparently breaks 9p, as reported by
> > > Luis in https://lkml.kernel.org/r/zci+7wg5o
On Tue, Mar 28, 2023 at 11:03 AM Yongji Xie wrote:
>
> On Fri, Mar 24, 2023 at 2:28 PM Jason Wang wrote:
> >
> > On Thu, Mar 23, 2023 at 1:31 PM Xie Yongji wrote:
> > >
> > > To support interrupt affinity spreading mechanism,
> > > this makes use of group_cpus_evenly() to create
> > > an irq
For vhost-scsi with 3 vqs and a workload that tries to use those vqs, like:
fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
--ioengine=libaio --iodepth=128 --numjobs=3
the single vhost worker thread will become a bottleneck and we are stuck
at around 500K IOPS no matter how many
With one worker we will always send the scsi cmd responses then send the
TMF rsp, because LIO will always complete the scsi cmds first then call
into us to send the TMF response.
With multiple workers, the IO vq workers could be running while the
TMF/ctl vq worker is, so this has us do a flush
This patch separates the scsi cmd completion code paths so we can complete
cmds based on their vq instead of having all cmds complete on the same
worker/CPU. This will be useful with the next patches that allow us to
create multiple worker threads and bind them to different vqs, so we can
have
Convert from vhost_work_queue to vhost_vq_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/vsock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index c8e6087769a1..1dcbc8669f95 100644
--- a/drivers/vhost/vsock.c
This patch has the core work flush function take a worker. When we
support multiple workers we can then flush each worker during device
removal, stoppage, etc.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 24 +++-
1 file changed, 15 insertions(+), 9 deletions(-)
vhost_work_queue is no longer used. Each driver is using the poll or vq
based queueing, so remove vhost_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 6 --
drivers/vhost/vhost.h | 1 -
2 files changed, 7 deletions(-)
diff --git a/drivers/vhost/vhost.c
Convert from vhost_work_queue to vhost_vq_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/scsi.c | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index ecb5cd7450b8..3e86b5fbeca6 100644
---
The following patches were built over linux-next which contains various
vhost patches in mst's tree and the vhost_task patchset in Christian
Brauner's tree:
git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git
kernel.user_worker branch:
This patchset allows userspace to map vqs to different workers. This
patch adds a worker pointer to the vq so we can store that info.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 24 +---
drivers/vhost/vhost.h | 1 +
2 files changed, 14 insertions(+), 11
This has the drivers pass in their poll to vq mapping and then converts
the core poll code to use the vq based helpers. In the next patches we
will allow vqs to be handled by different workers, so to allow drivers
to execute operations like queue, stop, flush, etc on specific polls/vqs
we need to
In the next patches each vq might have different workers so one could
have work but others do not. For net, we only want to check specific vqs,
so this adds a helper to check if a vq has work pending and converts
vhost-net to use it.
Signed-off-by: Mike Christie
---
drivers/vhost/net.c | 2 +-
This patch has the core work queueing function take a worker for when we
support multiple workers. It also adds a helper that takes a vq during
queueing so modules can control which vq/worker to queue work on.
This temporarily leaves vhost_work_queue. It will be removed when the drivers
are converted in
e_cb_delayed or
> virtqueue_enable_cb_prepare.
This patch (committed as 35395770f803 ("virtio_ring: don't update event
idx on get_buf") in next-20230327 apparently breaks 9p, as reported by
Luis in https://lkml.kernel.org/r/zci+7wg5oclsl...@bombadil.infradead.org
I've just hit a
On Sun, Mar 26, 2023 at 04:18:19PM +0300, Eli Cohen wrote:
> The right place to add the debufs create is in
s/debufs/debugfs/
> setup_driver() and remove it in teardown_driver().
>
> Current code adds the debugfs when creating the device but resetting a
> device will remove the debugfs subtree
typos all over.
subject:
s/Veirfy/verify/
s/has/is/
On Sun, Mar 26, 2023 at 04:17:56PM +0300, Eli Cohen wrote:
> mlx5_vdpa_suspend() flushes the workqueue as part of its logic. However,
> if the devices has been deleted while a VM was runnig, the workqueue
s/the devices/the device/
And the issue was that the author self-nacked the single fix here.
So we'll merge another fix later.
On Mon, Mar 27, 2023 at 09:30:13AM -0400, Michael S. Tsirkin wrote:
> Looks like I sent a bad pull request. Sorry!
> Please disregard.
>
> On Mon, Mar 27, 2023 at 09:19:50AM -0400, Michael S.
Looks like I sent a bad pull request. Sorry!
Please disregard.
On Mon, Mar 27, 2023 at 09:19:50AM -0400, Michael S. Tsirkin wrote:
> The following changes since commit e8d018dd0257f744ca50a729e3d042cf2ec9da65:
>
> Linux 6.3-rc3 (2023-03-19 13:27:55 -0700)
>
> are available in the Git
On Mon, Mar 27, 2023 at 03:54:34PM +0300, Eli Cohen wrote:
>
> On 27/03/2023 15:41, Michael S. Tsirkin wrote:
> > On Mon, Mar 20, 2023 at 10:01:05AM +0200, Eli Cohen wrote:
> > > Current code ignores link state updates if VIRTIO_NET_F_STATUS was not
> > > negotiated. However, link state updates
The following changes since commit e8d018dd0257f744ca50a729e3d042cf2ec9da65:
Linux 6.3-rc3 (2023-03-19 13:27:55 -0700)
are available in the Git repository at:
https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus
for you to fetch changes up to
Is the below the same as what I posted, or different? How?
On Sat, Mar 25, 2023 at 06:56:33PM +0800, Albert Huang wrote:
> in virtio_net, if we disable the napi_tx, when we trigger a tx interrupt,
> the vq->event_triggered will be set to true. It will no longer be set to
> false. Unless we explicitly
On Mon, Mar 20, 2023 at 10:01:05AM +0200, Eli Cohen wrote:
> Current code ignores link state updates if VIRTIO_NET_F_STATUS was not
> negotiated. However, link state updates could be received before feature
> negotiation was completed, therefore causing link state events to be
> lost, possibly
On Mon, Mar 27, 2023 at 06:49:32AM +, Alvaro Karsz wrote:
> ping
Tagged. In the future please CC everyone on the cover letter too.
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
On Sat, Mar 25, 2023 at 1:44 AM Bobby Eshleman wrote:
>
> On Fri, Mar 24, 2023 at 09:38:38AM +0100, Stefano Garzarella wrote:
> > Hi Bobby,
> > FYI we also have this one, but it seems related to
> > syzbot+befff0a9536049e79...@syzkaller.appspotmail.com
> >
> > Thanks,
> > Stefano
> >
>
> Got it,
ping