Here we copy the data from the original buf to the new page, but we
do not check whether it may overflow.
As long as the size received (including vnethdr) is greater than 3840
(PAGE_SIZE - VIRTIO_XDP_HEADROOM), the memcpy will overflow.
And this is completely possible, as long as the MTU is large
On Fri, 14 Apr 2023 13:40:32 +0800, Jason Wang wrote:
> On Thu, Apr 13, 2023 at 8:19 PM Xuan Zhuo wrote:
> >
> > Here we copy the data from the original buf to the new page, but we
> > do not check whether it may overflow.
> >
> > As long as the size received(including vnethdr) is greater than 3840
On Thu, Apr 13, 2023 at 8:19 PM Xuan Zhuo wrote:
>
> Here we copy the data from the original buf to the new page, but we
> do not check whether it may overflow.
>
> As long as the size received (including vnethdr) is greater than 3840
> (PAGE_SIZE - VIRTIO_XDP_HEADROOM), the memcpy will overflow.
Adding netdev.
On Fri, Apr 14, 2023 at 1:09 PM Jason Wang wrote:
>
> On Thu, Apr 13, 2023 at 3:31 PM Xuan Zhuo wrote:
> >
> > On Thu, 13 Apr 2023 14:40:27 +0800, Jason Wang wrote:
> > > We used to busy wait on the cvq command; this tends to be
> > > problematic since there is no way to sched
On Thu, Apr 13, 2023 at 3:31 PM Xuan Zhuo wrote:
>
> On Thu, 13 Apr 2023 14:40:27 +0800, Jason Wang wrote:
> > We used to busy wait on the cvq command; this tends to be
> > problematic since there is no way to schedule another process which
> > may serve the control virtqueue. This might b
Forget to cc netdev, adding.
On Fri, Apr 14, 2023 at 12:25 AM Michael S. Tsirkin wrote:
>
> On Thu, Apr 13, 2023 at 02:40:26PM +0800, Jason Wang wrote:
> > This patch converts rx mode setting to be done in a workqueue; this is
> > a must to allow sleeping while waiting for the cvq command to
> > r
On Thu, Apr 13, 2023 at 10:04 PM Jakub Kicinski wrote:
>
> On Thu, 13 Apr 2023 14:40:25 +0800 Jason Wang wrote:
> > The code used to busy poll for cvq command which turns out to have
> > several side effects:
> >
> > 1) infinite poll for buggy devices
> > 2) bad interaction with scheduler
> >
> >
On Fri, Apr 14, 2023 at 6:36 AM Mike Christie wrote:
>
> On 4/12/23 2:56 AM, Jason Wang wrote:
> >> I can spin another patchset with the single ioctl design so we can compare.
> > So I'm fine with this approach. One last question, I see this:
> >
> > /* By default, a device gets one vhost_worker t
On 4/12/23 2:56 AM, Jason Wang wrote:
>> I can spin another patchset with the single ioctl design so we can compare.
> So I'm fine with this approach. One last question, I see this:
>
> /* By default, a device gets one vhost_worker that its virtqueues share. This
> */
>
> I'm wondering if it is
On Thu, Apr 13, 2023 at 02:40:26PM +0800, Jason Wang wrote:
> This patch converts rx mode setting to be done in a workqueue; this is
> a must to allow sleeping while waiting for the cvq command to
> respond, since the current code is executed under the addr spin lock.
>
> Signed-off-by: Jason Wang
I don'
Hi,
On 4/13/23 13:01, Akihiko Odaki wrote:
> On 2023/04/13 19:40, Jean-Philippe Brucker wrote:
>> Hello,
>>
>> On Thu, Apr 13, 2023 at 01:49:43PM +0900, Akihiko Odaki wrote:
>>> Hi,
>>>
>>> Recently I encountered a problem with the combination of Linux's
>>> virtio-iommu driver and QEMU when a SR-
On 4/13/23 15:02, Maxime Coquelin wrote:
Hi Jason,
On 4/13/23 08:40, Jason Wang wrote:
Hi all:
The code used to busy poll for cvq command which turns out to have
several side effects:
1) infinite poll for buggy devices
2) bad interaction with scheduler
So this series tries to use sleep ins
Hi Jason,
On 4/13/23 08:40, Jason Wang wrote:
Hi all:
The code used to busy poll for cvq command which turns out to have
several side effects:
1) infinite poll for buggy devices
2) bad interaction with scheduler
So this series tries to use sleep instead of busy polling. In this
version, I tak
Hello,
On Thu, Apr 13, 2023 at 01:49:43PM +0900, Akihiko Odaki wrote:
> Hi,
>
> Recently I encountered a problem with the combination of Linux's
> virtio-iommu driver and QEMU when a SR-IOV virtual function gets disabled.
> I'd like to ask you what kind of solution is appropriate here and impleme
On Sun, Apr 09, 2023 at 10:17:51PM +0300, Arseniy Krasnov wrote:
This replaces 'skb_queue_tail()' with 'virtio_vsock_skb_queue_tail()'.
The first one uses 'spin_lock_irqsave()', the second uses 'spin_lock_bh()'.
There is no need to disable interrupts in the loopback transport as
there is no access to
Hi Kamel,
On Thu, Apr 13, 2023 at 9:48 AM wrote:
> On 2021-10-04 14:44, Geert Uytterhoeven wrote:
> What is the status of this patch? Are there any remaining
> changes to be made?
You mean commit a00128dfc8fc0cc8 ("gpio: aggregator: Add interrupt
support") in v5.17?
Gr{oetje,eeting}s,
Add VIRTIO_F_NOTIFICATION_DATA feature support for the MMIO, channel
I/O, modern PCI and vDPA transports.
This patchset binds 2 patches that were sent separately to the mailing
lists.
The first one [1] adds support for the MMIO, channel I/O and modern PCI
transports.
The second one [2] adds supp
Add VIRTIO_F_NOTIFICATION_DATA support for vDPA transport.
If this feature is negotiated, the driver passes extra data when kicking
a virtqueue.
A device that offers this feature needs to implement the
kick_vq_with_data callback.
kick_vq_with_data receives the vDPA device and data.
data includes:
From: Viktor Prutyanov
According to VirtIO spec v1.2, VIRTIO_F_NOTIFICATION_DATA feature
indicates that the driver passes extra data along with the queue
notifications.
In a split queue case, the extra data is the 16-bit available index. In a
packed queue case, the extra data is a 1-bit wrap counter a
On Wed, 12 Apr 2023 03:25:36 -0400, Deming Wang wrote:
> memalign() is obsolete according to its manpage.
>
> Replace memalign() with posix_memalign() and remove malloc.h include
> that was there for memalign().
>
> As a pointer is passed into posix_memalign(), initialize *p to NULL
> to silence a
> Hmm. So it seems we need to first apply yours then this patch,
> is that right? Or the other way around? What is the right way to make it not
> break bisect?
> Do you mind including this patch with yours in a patchset
> in the correct order?
Ok, I'll create a patchset.
Thanks,
When suspend is called, the driver sends a suspend command to the DPU
through the control mechanism.
Signed-off-by: Alvaro Karsz
---
drivers/vdpa/solidrun/snet_ctrl.c | 6 ++
drivers/vdpa/solidrun/snet_main.c | 15 +++
drivers/vdpa/solidrun/snet_vdpa.h | 1 +
3 files changed, 2
This patch adds the get_vq_state and set_vq_state vDPA callbacks.
In order to get the VQ state, the state needs to be read from the DPU.
In order to allow that, the old messaging mechanism is replaced with a new,
flexible control mechanism.
This mechanism allows reading data from the DPU.
The mec
Add more vDPA callbacks.
[s/g]et_vq_state is added in patch 1, including a new control mechanism
to read data from the DPU.
suspend is added in patch 2.
Link to v1:
https://lore.kernel.org/virtualization/20230402125219.1084754-1-alvaro.ka...@solid-run.com/
Link to v2:
https://lore.kernel.org/vir
On Thu, 13 Apr 2023 14:40:27 +0800, Jason Wang wrote:
> We used to busy wait on the cvq command; this tends to be
> problematic since there is no way to schedule another process which
> may serve the control virtqueue. This might be the case when the
> control virtqueue is emulated by softw
First of all, I personally love open source, linux and virtio. I have
also participated in community work such as virtio for a long time.
I think I am familiar enough with virtio/virtio-net and am adequate as a
reviewer.
Every time there is some patch/bug, I wish I could get pinged,
and I will feedb