Re: [PATCH v3 2/4] tests/lcitool: Refresh generated files

2024-01-02 Thread Ilya Maximets
refresh' on current git master that doesn't happen > > FTR since commit cb039ef3d9 libxdp-devel is also being changed on my > host, similarly to libpmem-devel, so I suppose it also has some host > specific restriction. Yeah, many distributions are not building libxdp for non

[PATCH] memory: initialize 'fv' in MemoryRegionCache to make Coverity happy

2023-10-09 Thread Ilya Maximets
1631 err_undo_map: 1632 virtqueue_undo_map_desc(out_num, in_num, iov); ** CID 1522370: Memory - illegal accesses (UNINIT) Instead of trying to silence these false positive reports in 4 different places, initializing 'fv' as well, as this doesn't result in any noti
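
The snippet above is about silencing a Coverity UNINIT report by initializing the 'fv' pointer of the on-stack cache once, instead of annotating four different call sites. A minimal stand-alone sketch of that pattern, with made-up names rather than QEMU's MemoryRegionCache:

    #include <stddef.h>
    #include <stdio.h>

    /* Stand-in for a descriptor cache kept on the stack; the names are made up
     * for illustration and are not QEMU's. */
    typedef struct DescCache {
        void *fv;      /* pointer a checker may flag as read-before-write */
        size_t len;
    } DescCache;

    static int map_desc(DescCache *cache, int ok)
    {
        if (!ok) {
            return -1;             /* error path: 'fv' is never written here */
        }
        cache->fv = &cache->len;   /* success path fills the field */
        cache->len = 64;
        return 0;
    }

    int main(void)
    {
        /* Initializing 'fv' up front keeps every later read well defined, so
         * there is nothing left for the analyzer to flag on the error path. */
        DescCache cache = { .fv = NULL, .len = 0 };

        if (map_desc(&cache, 0) < 0 && cache.fv == NULL) {
            printf("error path, fv left untouched\n");
        }
        return 0;
    }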

Re: [PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-27 Thread Ilya Maximets
On 9/27/23 17:41, Michael S. Tsirkin wrote: > On Wed, Sep 27, 2023 at 04:06:41PM +0200, Ilya Maximets wrote: >> On 9/25/23 20:04, Ilya Maximets wrote: >>> On 9/25/23 16:32, Stefan Hajnoczi wrote: >>>> On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote: >>>>

Re: [PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-27 Thread Ilya Maximets
On 9/25/23 20:04, Ilya Maximets wrote: > On 9/25/23 16:32, Stefan Hajnoczi wrote: >> On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote: >>> >>> It was supposed to be a compiler barrier and it was a compiler barrier >>> initially called 'wmb' (??) when

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-27 Thread Ilya Maximets
On 9/26/23 00:24, Michael S. Tsirkin wrote: > On Tue, Sep 26, 2023 at 12:13:11AM +0200, Ilya Maximets wrote: >> On 9/25/23 23:24, Michael S. Tsirkin wrote: >>> On Mon, Sep 25, 2023 at 10:58:05PM +0200, Ilya Maximets wrote: >>>> On 9/25/23 17:38, Stefan Hajnoczi wrote:

[PATCH v2 0/2] virtio: clean up of virtqueue_split_read_next_desc()

2023-09-27 Thread Ilya Maximets
Version 2: - Converted into a patch set adding a new patch that removes the 'next' argument. [Stefan] - Completely removing the barrier instead of changing into compiler barrier. [Stefan] Ilya Maximets (2): virtio: remove unnecessary thread fence while reading next

[PATCH v2 1/2] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-27 Thread Ilya Maximets
doesn't need to be an actual barrier, as its only purpose was to ensure that the value is not read twice. And since commit aa570d6fb6bd ("virtio: combine the read of a descriptor") there is no need for a barrier at all, since we're no longer reading guest memory here, but accessing a local
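
The reasoning quoted here is that the fence only ever had to stop the compiler from re-reading the descriptor, and once the whole descriptor is copied into a local variable even that is unnecessary. A generic sketch of the distinction in standard C (not QEMU's barrier macros):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct Desc {
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
    } Desc;

    /* Descriptor table shared with another party (e.g. written by a guest). */
    static Desc table[16];

    static Desc read_desc_once(unsigned i)
    {
        Desc local;

        /* Copying into a local means all later uses touch 'local', not
         * 'table[i]', so the compiler has no way to re-read shared memory. */
        memcpy(&local, &table[i], sizeof(local));

        /* If an explicit guard is still wanted, a compiler-only barrier is
         * sufficient; a full thread fence would be stronger than necessary. */
        atomic_signal_fence(memory_order_seq_cst);

        return local;
    }

    int main(void)
    {
        table[0].len = 128;
        return read_desc_once(0).len == 128 ? 0 : 1;
    }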

[PATCH v2 2/2] virtio: remove unused next argument from virtqueue_split_read_next_desc()

2023-09-27 Thread Ilya Maximets
"virtio: combine the read of a descriptor") Remove the unused argument to simplify the code. Also, adding a comment to the function to describe what it is actually doing, as it is not obvious that the 'desc' is both an input and an output argument. Signed-off-by: Ilya Maximets

[PATCH v2] virtio: use shadow_avail_idx while checking number of heads

2023-09-27 Thread Ilya Maximets
itself. The change improves performance of the af-xdp network backend by 2-3%. Signed-off-by: Ilya Maximets --- Version 2: - Changed to not skip error checks and a barrier. - Added comments about the need for a barrier. hw/virtio/virtio.c | 18 +++--- 1 file changed, 15
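
The idea in this series is to consult the cached shadow copy of the available index first and only pay for the atomic load from guest memory (and the accompanying barrier) when the shadow says the ring looks empty. A simplified sketch, with illustrative field and function names rather than QEMU's exact code:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct VirtQueue {
        uint16_t last_avail_idx;           /* how far the device has consumed */
        uint16_t shadow_avail_idx;         /* last value fetched from the ring */
        _Atomic uint16_t *avail_idx_mem;   /* the index as the driver publishes it */
    } VirtQueue;

    static bool virtqueue_has_heads(VirtQueue *vq)
    {
        /* Fast path: the shadow already proves at least one head is pending. */
        if (vq->shadow_avail_idx != vq->last_avail_idx) {
            return true;
        }
        /* Slow path: refresh the shadow with the expensive atomic load.
         * (The real code also keeps a read barrier on this path.) */
        vq->shadow_avail_idx = atomic_load_explicit(vq->avail_idx_mem,
                                                    memory_order_acquire);
        return vq->shadow_avail_idx != vq->last_avail_idx;
    }

    int main(void)
    {
        _Atomic uint16_t avail = 3;
        VirtQueue vq = { .last_avail_idx = 1, .shadow_avail_idx = 2,
                         .avail_idx_mem = &avail };
        printf("%d\n", virtqueue_has_heads(&vq));   /* fast path, prints 1 */
        return 0;
    }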

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 9/25/23 23:24, Michael S. Tsirkin wrote: > On Mon, Sep 25, 2023 at 10:58:05PM +0200, Ilya Maximets wrote: >> On 9/25/23 17:38, Stefan Hajnoczi wrote: >>> On Mon, 25 Sept 2023 at 11:36, Ilya Maximets wrote: >>>> >>>> On 9/25/23 17:12, Stefan Hajnoczi

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 9/25/23 17:38, Stefan Hajnoczi wrote: > On Mon, 25 Sept 2023 at 11:36, Ilya Maximets wrote: >> >> On 9/25/23 17:12, Stefan Hajnoczi wrote: >>> On Mon, 25 Sept 2023 at 11:02, Ilya Maximets wrote: >>>> >>>> On 9/25/23 16:23, Stefan Hajnoczi

Re: [PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-25 Thread Ilya Maximets
On 9/25/23 16:32, Stefan Hajnoczi wrote: > On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote: >> >> It was supposed to be a compiler barrier and it was a compiler barrier >> initially called 'wmb' (??) when virtio core support was introduced. >> Later all th

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 9/25/23 17:12, Stefan Hajnoczi wrote: > On Mon, 25 Sept 2023 at 11:02, Ilya Maximets wrote: >> >> On 9/25/23 16:23, Stefan Hajnoczi wrote: >>> On Fri, 25 Aug 2023 at 13:04, Ilya Maximets wrote: >>>> >>>> We do not need the most up to date number

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 9/25/23 16:23, Stefan Hajnoczi wrote: > On Fri, 25 Aug 2023 at 13:04, Ilya Maximets wrote: >> >> We do not need the most up to date number of heads, we only want to >> know if there is at least one. >> >> Use shadow variable as long as it is not equal to th

Re: [PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-25 Thread Ilya Maximets
On 8/25/23 19:01, Ilya Maximets wrote: > It was supposed to be a compiler barrier and it was a compiler barrier > initially called 'wmb' (??) when virtio core support was introduced. > Later all the instances of 'wmb' were switched to smp_wmb to fix memory > orde

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 8/25/23 19:04, Ilya Maximets wrote: > We do not need the most up to date number of heads, we only want to > know if there is at least one. > > Use shadow variable as long as it is not equal to the last available > index checked. This avoids expensive qatomic dereference of the

Re: [PATCH v2] virtio: don't zero out memory region cache for indirect descriptors

2023-09-25 Thread Ilya Maximets
On 8/11/23 16:34, Ilya Maximets wrote: > Lots of virtio functions that are on a hot path in data transmission > are initializing indirect descriptor cache at the point of stack > allocation. It's a 112 byte structure that is getting zeroed out on > each call adding unnecessar

Re: [PULL 00/17] Net patches

2023-09-19 Thread Ilya Maximets
On 9/19/23 10:40, Daniel P. Berrangé wrote: > On Mon, Sep 18, 2023 at 09:36:10PM +0200, Ilya Maximets wrote: >> On 9/14/23 10:13, Daniel P. Berrangé wrote: >>> On Wed, Sep 13, 2023 at 08:46:42PM +0200, Ilya Maximets wrote: >>>> On 9/8/23 16:15, Daniel P. Berrangé wro

Re: [PULL 00/17] Net patches

2023-09-18 Thread Ilya Maximets
On 9/14/23 10:13, Daniel P. Berrangé wrote: > On Wed, Sep 13, 2023 at 08:46:42PM +0200, Ilya Maximets wrote: >> On 9/8/23 16:15, Daniel P. Berrangé wrote: >>> On Fri, Sep 08, 2023 at 04:06:35PM +0200, Ilya Maximets wrote: >>>> On 9/8/23 14:15, Daniel P. Berrangé wro

Re: [PULL 00/17] Net patches

2023-09-13 Thread Ilya Maximets
On 9/8/23 16:15, Daniel P. Berrangé wrote: > On Fri, Sep 08, 2023 at 04:06:35PM +0200, Ilya Maximets wrote: >> On 9/8/23 14:15, Daniel P. Berrangé wrote: >>> On Fri, Sep 08, 2023 at 02:00:47PM +0200, Ilya Maximets wrote: >>>> On 9/8/23 13:49, Daniel P. Berrangé wrote:

[PATCH v4 2/2] net: add initial support for AF_XDP network backend

2023-09-13 Thread Ilya Maximets
: 1.0 Mpps L2 FWD Loopback : 0.7 Mpps Results in skb mode or over the veth are close to results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC. Signed-off-by: Ilya Maximets --- MAINTAINERS | 4

[PATCH v4 1/2] tests: bump libvirt-ci for libasan and libxdp

2023-09-13 Thread Ilya Maximets
This pulls in the fixes for libasan version as well as support for libxdp that will be used for af-xdp netdev in the next commits. Signed-off-by: Ilya Maximets --- tests/docker/dockerfiles/debian-amd64-cross.docker | 2 +- tests/docker/dockerfiles/debian-amd64.docker | 2 +- tests

[PATCH v4 0/2] net: add initial support for AF_XDP network backend

2023-09-13 Thread Ilya Maximets
having 32 MB of RLIMIT_MEMLOCK per queue. - Refined and extended documentation. Ilya Maximets (2): tests: bump libvirt-ci for libasan and libxdp net: add initial support for AF_XDP network backend MAINTAINERS | 4 + hmp-commands.hx

Re: [PULL 00/17] Net patches

2023-09-08 Thread Ilya Maximets
On 9/8/23 14:15, Daniel P. Berrangé wrote: > On Fri, Sep 08, 2023 at 02:00:47PM +0200, Ilya Maximets wrote: >> On 9/8/23 13:49, Daniel P. Berrangé wrote: >>> On Fri, Sep 08, 2023 at 01:34:54PM +0200, Ilya Maximets wrote: >>>> On 9/8/23 13:19, Stefan Hajnoczi

Re: [PULL 00/17] Net patches

2023-09-08 Thread Ilya Maximets
On 9/8/23 13:49, Daniel P. Berrangé wrote: > On Fri, Sep 08, 2023 at 01:34:54PM +0200, Ilya Maximets wrote: >> On 9/8/23 13:19, Stefan Hajnoczi wrote: >>> Hi Ilya and Jason, >>> There is a CI failure related to a missing Debian libxdp-dev package: >>> https:/

Re: [PULL 12/17] net: add initial support for AF_XDP network backend

2023-09-08 Thread Ilya Maximets
On 9/8/23 13:48, Daniel P. Berrangé wrote: > On Fri, Sep 08, 2023 at 02:45:02PM +0800, Jason Wang wrote: >> From: Ilya Maximets >> >> AF_XDP is a network socket family that allows communication directly >> with the network device driver in the kernel, bypassing m

Re: [PULL 00/17] Net patches

2023-09-08 Thread Ilya Maximets
On 9/8/23 13:19, Stefan Hajnoczi wrote: > Hi Ilya and Jason, > There is a CI failure related to a missing Debian libxdp-dev package: > https://gitlab.com/qemu-project/qemu/-/jobs/5046139967 > > I think the issue is that the debian-amd64 container image that QEMU > uses for testing is based on Debi

[PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-08-25 Thread Ilya Maximets
itself and the subsequent memory barrier. The change improves performance of the af-xdp network backend by 2-3%. Signed-off-by: Ilya Maximets --- hw/virtio/virtio.c | 10 +- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c index

[PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-08-25 Thread Ilya Maximets
doesn't need to be an actual barrier. It's enough for it to stay a compiler barrier as its only purpose is to ensure that the value is not read twice. There is no counterpart read barrier in the drivers, AFAICT. And even if we needed an actual barrier, it shouldn't have been a write bar

Re: [PATCH v2 3/4] virtio: use defer_call() in virtio_irqfd_notify()

2023-08-21 Thread Ilya Maximets
On 8/17/23 17:58, Stefan Hajnoczi wrote: > virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used > Buffer Notifications from an IOThread. This involves an eventfd > write(2) syscall. Calling this repeatedly when completing multiple I/O > requests in a row is wasteful. > > Use the de

Re: [PATCH 1/2] virtio: use blk_io_plug_call() in virtio_irqfd_notify()

2023-08-16 Thread Ilya Maximets
On 8/16/23 17:30, Stefan Hajnoczi wrote: > On Wed, Aug 16, 2023 at 03:36:32PM +0200, Ilya Maximets wrote: >> On 8/15/23 14:08, Stefan Hajnoczi wrote: >>> virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used >>> Buffer Notifications from an IOThrea

Re: [PATCH 1/2] virtio: use blk_io_plug_call() in virtio_irqfd_notify()

2023-08-16 Thread Ilya Maximets
mit will remove it. I'm likely missing something, but could you explain why it is safe to batch unconditionally here? The current BH code, as you mentioned in the second patch, is only batching if EVENT_IDX is not set. Maybe worth adding a few words in the commit message for people like me, who are a bit

[PATCH v2] virtio: don't zero out memory region cache for indirect descriptors

2023-08-11 Thread Ilya Maximets
n terms of 64B packets per second by 6-14 % depending on the case. Tested with a proposed af-xdp network backend and a dpdk testpmd application in the guest, but should be beneficial for other virtio devices as well. Signed-off-by: Ilya Maximets --- Version 2: * Introduced an initialization fu
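
The optimization described here replaces a full zero-initialization of a ~112-byte on-stack structure with a small helper that only sets the fields the cleanup path inspects. A generic illustration of that trade-off (the struct is a stand-in, not QEMU's MemoryRegionCache):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for a ~112-byte cache object living on the stack of a hot path. */
    typedef struct Cache {
        void *ptr;
        uint64_t len;
        bool mapped;
        uint8_t opaque[96];   /* bulk of the structure, only valid once mapped */
    } Cache;

    /* Instead of 'Cache c = {0};', which zeroes the whole structure on every
     * call, initialize only what the cleanup/invalidate path actually reads. */
    static inline void cache_init_empty(Cache *c)
    {
        c->ptr = NULL;
        c->len = 0;
        c->mapped = false;
    }

    int main(void)
    {
        Cache c;                /* deliberately left uninitialized ... */
        cache_init_empty(&c);   /* ... then cheaply marked "nothing mapped" */

        if (!c.mapped) {
            printf("nothing to unmap\n");
        }
        return 0;
    }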

Re: [PATCH] virtio: don't zero out memory region cache for indirect descriptors

2023-08-11 Thread Ilya Maximets
On 8/11/23 15:58, Stefan Hajnoczi wrote: > > > On Fri, Aug 11, 2023, 08:50 Ilya Maximets <mailto:i.maxim...@ovn.org>> wrote: > > On 8/10/23 17:50, Stefan Hajnoczi wrote: > > On Tue, Aug 08, 2023 at 12:28:47AM +0200, Ilya Maximets wrote: > >>

Re: [PATCH] virtio: don't zero out memory region cache for indirect descriptors

2023-08-11 Thread Ilya Maximets
On 8/10/23 17:50, Stefan Hajnoczi wrote: > On Tue, Aug 08, 2023 at 12:28:47AM +0200, Ilya Maximets wrote: >> Lots of virtio functions that are on a hot path in data transmission >> are initializing indirect descriptor cache at the point of stack >> allocation. It's a

Re: [PATCH] virtio: don't zero out memory region cache for indirect descriptors

2023-08-11 Thread Ilya Maximets
On 8/9/23 04:37, Jason Wang wrote: > On Tue, Aug 8, 2023 at 6:28 AM Ilya Maximets wrote: >> >> Lots of virtio functions that are on a hot path in data transmission >> are initializing indirect descriptor cache at the point of stack >> allocation. It's a 112 byte

[PATCH] virtio: don't zero out memory region cache for indirect descriptors

2023-08-07 Thread Ilya Maximets
s in terms of 64B packets per second by 6-14 % depending on the case. Tested with a proposed af-xdp network backend and a dpdk testpmd application in the guest, but should be beneficial for other virtio devices as well. Signed-off-by: Ilya Maximets --- hw/virtio/vir

[PATCH v3] net: add initial support for AF_XDP network backend

2023-08-04 Thread Ilya Maximets
: 1.0 Mpps L2 FWD Loopback : 0.7 Mpps Results in skb mode or over the veth are close to results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC. Signed-off-by: Ilya Maximets --- Version 3: - Bump requirements to libxdp 1.4.0+. Having that, rem

Re: [PATCH v2] net: add initial support for AF_XDP network backend

2023-08-04 Thread Ilya Maximets
On 7/25/23 08:55, Jason Wang wrote: > On Thu, Jul 20, 2023 at 9:26 PM Ilya Maximets wrote: >> >> On 7/20/23 09:37, Jason Wang wrote: >>> On Thu, Jul 6, 2023 at 4:58 AM Ilya Maximets wrote: >>>> >>>> AF_XDP is a network socket family that allows com

Re: [PATCH v2] net: add initial support for AF_XDP network backend

2023-07-20 Thread Ilya Maximets
On 7/20/23 09:37, Jason Wang wrote: > On Thu, Jul 6, 2023 at 4:58 AM Ilya Maximets wrote: >> >> AF_XDP is a network socket family that allows communication directly >> with the network device driver in the kernel, bypassing most or all >> of the kernel networking

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-07-10 Thread Ilya Maximets
On 7/10/23 05:51, Jason Wang wrote: > On Fri, Jul 7, 2023 at 7:21 PM Ilya Maximets wrote: >> >> On 7/7/23 03:43, Jason Wang wrote: >>> On Fri, Jul 7, 2023 at 3:08 AM Stefan Hajnoczi wrote: >>>> >>>> On Wed, 5 Jul 2023 at 02:02, Jason Wang wrote: &

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-07-07 Thread Ilya Maximets
2023 at 4:15 PM Stefan Hajnoczi >>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> On Wed, 28 Jun 2023 at 09:59, Jason Wang wrote: >>>>>>>>>>> >>>>>>>>>>> On

[PATCH v2] net: add initial support for AF_XDP network backend

2023-07-05 Thread Ilya Maximets
Tx only : 1.2 Mpps Rx only : 1.0 Mpps L2 FWD Loopback : 0.7 Mpps Results in skb mode or over the veth are close to results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC. Signed-off-by: Ilya Maximets --- Version 2: - Added sup

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-30 Thread Ilya Maximets
On 6/30/23 09:44, Jason Wang wrote: > On Wed, Jun 28, 2023 at 7:14 PM Ilya Maximets wrote: >> >> On 6/28/23 05:27, Jason Wang wrote: >>> On Wed, Jun 28, 2023 at 6:45 AM Ilya Maximets wrote: >>>> >>>> On 6/27/23 04:54, Jason Wang wrote: >>&g

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-28 Thread Ilya Maximets
On 6/28/23 05:27, Jason Wang wrote: > On Wed, Jun 28, 2023 at 6:45 AM Ilya Maximets wrote: >> >> On 6/27/23 04:54, Jason Wang wrote: >>> On Mon, Jun 26, 2023 at 9:17 PM Ilya Maximets wrote: >>>> >>>> On 6/26/23 08:32, Jason Wang wrote: >>

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-27 Thread Ilya Maximets
> > Whether you pursue the passthrough approach or not, making -netdev > af-xdp work in an environment where QEMU runs unprivileged seems like > the most important practical issue to solve. Yes, working on it. Doesn't seem to be hard to do, but I need to test. Best regards, Ilya Maximets.

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-27 Thread Ilya Maximets
On 6/27/23 04:54, Jason Wang wrote: > On Mon, Jun 26, 2023 at 9:17 PM Ilya Maximets wrote: >> >> On 6/26/23 08:32, Jason Wang wrote: >>> On Sun, Jun 25, 2023 at 3:06 PM Jason Wang wrote: >>>> >>>> On Fri, Jun 23, 2023 at 5:58 AM Ilya Maximets wrot

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-26 Thread Ilya Maximets
On 6/26/23 08:32, Jason Wang wrote: > On Sun, Jun 25, 2023 at 3:06 PM Jason Wang wrote: >> >> On Fri, Jun 23, 2023 at 5:58 AM Ilya Maximets wrote: >>> >>> AF_XDP is a network socket family that allows communication directly >>> with the network device d

[PATCH] net: add initial support for AF_XDP network backend

2023-06-22 Thread Ilya Maximets
pps L2 FWD Loopback : 0.7 Mpps Results in skb mode or over the veth are close to results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC. Signed-off-by: Ilya Maximets --- MAINTAINERS | 4 + hmp-commands.hx

[PATCH] vhost_net: Print feature masks in hex

2022-03-18 Thread Ilya Maximets
"0x2" is much more readable than "8589934592". The change saves one step (conversion) while debugging. Signed-off-by: Ilya Maximets --- hw/net/vhost_net.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/hw/net/vhost_net.c b/hw/net/vhost_ne

[Qemu-devel] [PATCH 00/97] Patch Round-up for stable 3.0.1, freeze on 2019-04-08

2019-04-08 Thread Ilya Maximets
e), but is being released now since it was > delayed from its intended release date. > > Thanks! > [...] > > Ilya Maximets (1): > migration: Stop postcopy fault thread before notifying Hi. Sorry for the late response, but what about the following two patches: c4f753859ae6

[Qemu-devel] [PATCH v3 4/4] memfd: improve error messages

2019-03-11 Thread Ilya Maximets
This gives more information about the failure. Additionally, 'ENOSYS' is returned for non-Linux platforms instead of 'errno', which is not initialized in this case. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 7 ++- 1 file changed
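
The point of the fix is that on platforms without memfd_create() there is no meaningful errno to report, so the wrapper sets one (ENOSYS) explicitly before the caller prints an error. A simplified wrapper in the same spirit, not the actual util/memfd.c code:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #ifdef __linux__
    #include <sys/mman.h>   /* memfd_create(), glibc 2.27+ */
    #endif

    static int my_memfd_create(const char *name, unsigned int flags)
    {
    #ifdef __linux__
        return memfd_create(name, flags);
    #else
        /* Without this, a later strerror(errno) would describe whatever stale
         * value errno happened to contain. */
        errno = ENOSYS;
        return -1;
    #endif
    }

    int main(void)
    {
        int fd = my_memfd_create("demo", 0);
        if (fd < 0) {
            fprintf(stderr, "failed to create memfd: %s\n", strerror(errno));
            return 1;
        }
        return 0;
    }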

[Qemu-devel] [PATCH v3 3/4] memfd: set up correct errno if not supported

2019-03-11 Thread Ilya Maximets
qemu_memfd_create() prints the value of 'errno' which is not set in this case. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/util/memfd.c b/util/memfd.c index d74ce4d793..393d23da96 100644 --- a/ut

[Qemu-devel] [PATCH v3 1/4] hostmem-memfd: disable for systems wihtout sealing support

2019-03-11 Thread Ilya Maximets
em,size=2M,: \ failed to create memfd: Invalid argument and actually breaks the feature on such systems. Let's restrict memfd backend to systems with sealing support. Signed-off-by: Ilya Maximets --- backends/hostmem-memfd.c | 18 -- tests/vhost-user-test.c | 5 +++--
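
The patch restricts the memfd memory backend to hosts where sealing works, since requesting MFD_ALLOW_SEALING on a kernel without sealing support makes memfd_create() fail. A stand-alone probe for sealing support could look roughly like this (Linux-only sketch, not the backend code itself):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Returns true when memfd sealing (F_ADD_SEALS) is usable on this host. */
    static bool memfd_sealing_supported(void)
    {
        int fd = memfd_create("seal-probe", MFD_CLOEXEC | MFD_ALLOW_SEALING);
        bool ok = false;

        if (fd >= 0) {
            ok = fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK) != -1;
            close(fd);
        }
        return ok;
    }

    int main(void)
    {
        printf("memfd sealing %ssupported\n",
               memfd_sealing_supported() ? "" : "not ");
        return 0;
    }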

[Qemu-devel] [PATCH v3 2/4] memfd: always check for MFD_CLOEXEC

2019-03-11 Thread Ilya Maximets
QEMU always sets this flag unconditionally. We need to check if it's supported. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/util/memfd.c b/util/memfd.c index 8debd0d037..d74ce4d793 100644
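
The concern here is that a memfd_create() flag must not be passed on the assumption that it is supported. The sketch below shows the general probe-and-fall-back pattern for a flag an older kernel might reject with EINVAL; it is an illustration of the idea, not the one-line change in util/memfd.c:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static int memfd_create_cloexec(const char *name)
    {
        int fd = memfd_create(name, MFD_CLOEXEC);

        if (fd < 0 && errno == EINVAL) {
            /* A kernel that rejects the flag: retry without it and set the
             * close-on-exec bit through fcntl() instead. */
            fd = memfd_create(name, 0);
            if (fd >= 0) {
                fcntl(fd, F_SETFD, FD_CLOEXEC);
            }
        }
        return fd;
    }

    int main(void)
    {
        int fd = memfd_create_cloexec("demo");
        if (fd < 0) {
            perror("memfd_create");
            return 1;
        }
        close(fd);
        return 0;
    }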

[Qemu-devel] [PATCH v3 0/4] memfd fixes.

2019-03-11 Thread Ilya Maximets
Version 3: * Rebase on top of current master. Version 2: * First patch changed to just drop the memfd backend if seals are not supported. Ilya Maximets (4): hostmem-memfd: disable for systems wihtout sealing support memfd: always check for MFD_CLOEXEC memfd: set up correct

Re: [Qemu-devel] [PATCH v2 0/4] memfd fixes.

2019-03-11 Thread Ilya Maximets
Best regards, Ilya Maximets. On 27.11.2018 16:50, Ilya Maximets wrote: > Version 2: > * First patch changed to just drop the memfd backend > if seals are not supported. > > Ilya Maximets (4): > hostmem-memfd: disable for systems wihtout sealing support >

Re: [Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2019-01-16 Thread Ilya Maximets
On 16.01.2019 18:48, Daniel P. Berrangé wrote: > On Wed, Jan 16, 2019 at 06:46:39PM +0300, Ilya Maximets wrote: >> >> >> On 16.01.2019 18:30, Eduardo Habkost wrote: >>> On Wed, Dec 12, 2018 at 07:49:36AM +0100, Gerd Hoffmann wrote: >>>> On Tue, Dec 11, 201

Re: [Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2019-01-16 Thread Ilya Maximets
On 16.01.2019 18:30, Eduardo Habkost wrote: > On Wed, Dec 12, 2018 at 07:49:36AM +0100, Gerd Hoffmann wrote: >> On Tue, Dec 11, 2018 at 02:09:11PM +0300, Ilya Maximets wrote: >>> On 11.12.2018 13:53, Daniel P. Berrangé wrote: >>>>> >>>>> Let&

Re: [Qemu-devel] [PATCH] .cirrus.yml: basic compile and test for FreeBSD

2019-01-16 Thread Ilya Maximets
On 16.01.2019 15:26, Alex Bennée wrote: > > Ed Maste writes: > >> From: Ed Maste >> >> Cirrus-CI (https://cirrus-ci.org) is a hosted CI service which supports >> several platforms, including FreeBSD. Later on we could build for other >> hosts in Cirrus-CI, but I'm starting with only FreeBSD as

Re: [Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2019-01-16 Thread Ilya Maximets
So, can we have any conclusion about this patch and the series? Best regards, Ilya Maximets. On 05.01.2019 5:43, Eduardo Habkost wrote: > On Tue, Dec 11, 2018 at 04:48:23PM +0100, Igor Mammedov wrote: >> On Tue, 11 Dec 2018 13:29:19 +0300 >> Ilya Maximets wrote: >> >>

[Qemu-devel] [PATCH] virtio: add ORDER_PLATFORM feature support

2018-12-14 Thread Ilya Maximets
niques for memory ordering if negotiated. Signed-off-by: Ilya Maximets --- Note: Patch to change the name of the feature from VIRTIO_F_IO_BARRIER to VIRTIO_F_ORDER_PLATFORM is not merged yet: https://www.mail-archive.com/virtio-dev@lists.oasis-open.org/msg04114.html Patch for DPDK vir

Re: [Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2018-12-11 Thread Ilya Maximets
On 11.12.2018 13:53, Daniel P. Berrangé wrote: > On Tue, Nov 27, 2018 at 04:50:27PM +0300, Ilya Maximets wrote: >> If seals are not supported, memfd_create() will fail. >> Furthermore, there is no way to disable it in this case because >> '.seal' property is not

Re: [Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2018-12-11 Thread Ilya Maximets
On 10.12.2018 19:18, Igor Mammedov wrote: > On Tue, 27 Nov 2018 16:50:27 +0300 > Ilya Maximets wrote: > > s/wihtout/without/ in subj > >> If seals are not supported, memfd_create() will fail. >> Furthermore, there is no way to disable it in this case because

[Qemu-devel] [PATCH v2 4/4] memfd: improve error messages

2018-11-27 Thread Ilya Maximets
This gives more information about the failure. Additionally, 'ENOSYS' is returned for non-Linux platforms instead of 'errno', which is not initialized in this case. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 7 ++- 1 file changed

[Qemu-devel] [PATCH v2 3/4] memfd: set up correct errno if not supported

2018-11-27 Thread Ilya Maximets
qemu_memfd_create() prints the value of 'errno' which is not set in this case. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/util/memfd.c b/util/memfd.c index d74ce4d793..393d23da96 100644 --- a/ut

[Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2018-11-27 Thread Ilya Maximets
em,size=2M,: \ failed to create memfd: Invalid argument and actually breaks the feature on such systems. Let's restrict memfd backend to systems with sealing support. Signed-off-by: Ilya Maximets --- backends/hostmem-memfd.c | 18 -- tests/vhost-user-test.c | 6

[Qemu-devel] [PATCH v2 0/4] memfd fixes.

2018-11-27 Thread Ilya Maximets
Version 2: * First patch changed to just drop the memfd backend if seals are not supported. Ilya Maximets (4): hostmem-memfd: disable for systems wihtout sealing support memfd: always check for MFD_CLOEXEC memfd: set up correct errno if not supported memfd: improve error

[Qemu-devel] [PATCH v2 2/4] memfd: always check for MFD_CLOEXEC

2018-11-27 Thread Ilya Maximets
QEMU always sets this flag unconditionally. We need to check if it's supported. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/util/memfd.c b/util/memfd.c index 8debd0d037..d74ce4d793 100644

Re: [Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
On 27.11.2018 15:56, Marc-André Lureau wrote: > Hi > > On Tue, Nov 27, 2018 at 4:37 PM Ilya Maximets wrote: >> >> On 27.11.2018 15:29, Marc-André Lureau wrote: >>> Hi >>> >>> On Tue, Nov 27, 2018 at 4:02 PM Ilya Maximets >>> wrot

Re: [Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
On 27.11.2018 15:29, Marc-André Lureau wrote: > Hi > > On Tue, Nov 27, 2018 at 4:02 PM Ilya Maximets wrote: >> >> On 27.11.2018 15:00, Marc-André Lureau wrote: >>> Hi >>> On Tue, Nov 27, 2018 at 3:56 PM Ilya Maximets >>> wrote: >>>>

Re: [Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
On 27.11.2018 15:00, Marc-André Lureau wrote: > Hi > On Tue, Nov 27, 2018 at 3:56 PM Ilya Maximets wrote: >> >> On 27.11.2018 14:49, Marc-André Lureau wrote: >>> Hi >>> On Tue, Nov 27, 2018 at 3:11 PM Ilya Maximets >>> wrote: >>>>

Re: [Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
On 27.11.2018 14:49, Marc-André Lureau wrote: > Hi > On Tue, Nov 27, 2018 at 3:11 PM Ilya Maximets wrote: >> >> If seals are not supported, memfd_create() will fail. >> Furthermore, there is no way to disable it in this case because >> '.seal' property is n

[Qemu-devel] [PATCH 4/4] memfd: improve error messages

2018-11-27 Thread Ilya Maximets
This gives more information about the failure. Additionally, 'ENOSYS' is returned for non-Linux platforms instead of 'errno', which is not initialized in this case. Signed-off-by: Ilya Maximets --- util/memfd.c | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff

[Qemu-devel] [PATCH 3/4] memfd: set up correct errno if not supported

2018-11-27 Thread Ilya Maximets
qemu_memfd_create() prints the value of 'errno' which is not set in this case. Signed-off-by: Ilya Maximets --- util/memfd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/util/memfd.c b/util/memfd.c index d74ce4d793..393d23da96 100644 --- a/util/memfd.c +++ b/util/memfd.c @@ -

[Qemu-devel] [PATCH 2/4] memfd: always check for MFD_CLOEXEC

2018-11-27 Thread Ilya Maximets
QEMU always sets this flag unconditionally. We need to check if it's supported. Signed-off-by: Ilya Maximets --- util/memfd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/util/memfd.c b/util/memfd.c index 8debd0d037..d74ce4d793 100644 --- a/util/memfd.c +++ b/util/me

[Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
em,size=2M,: \ failed to create memfd: Invalid argument Signed-off-by: Ilya Maximets --- backends/hostmem-memfd.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/backends/hostmem-memfd.c b/backends/hostmem-memfd.c index b6836b28e5..ee39bdbde6 100644 --- a/backends/hostm

[Qemu-devel] [PATCH 0/4] memfd fixes.

2018-11-27 Thread Ilya Maximets
Ilya Maximets (4): hostmem-memfd: enable seals only if supported memfd: always check for MFD_CLOEXEC memfd: set up correct errno if not supported memfd: improve error messages backends/hostmem-memfd.c | 4 ++-- util/memfd.c | 10 -- 2 files changed, 10 insertions

[Qemu-devel] Are FreeBSD guest images working?

2018-11-15 Thread Ilya Maximets
achine accel=kvm -m 2048 \ -cpu host -enable-kvm -nographic -smp 2 \ -drive if=virtio,file=./FreeBSD-11.2-RELEASE-amd64.qcow2,format=qcow2 Best regards, Ilya Maximets.

[Qemu-devel] [RFC 1/2] migration: Stop postcopy fault thread before notifying

2018-10-08 Thread Ilya Maximets
END notify") Cc: qemu-sta...@nongnu.org Signed-off-by: Ilya Maximets --- migration/postcopy-ram.c | 11 ++- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c index 853d8b32ca..e5c02a32c5 100644 --- a/migration/postc

[Qemu-devel] [RFC 2/2] vhost-user: Fix userfaultfd leak

2018-10-08 Thread Ilya Maximets
ed ufd with postcopy") Cc: qemu-sta...@nongnu.org Signed-off-by: Ilya Maximets --- hw/virtio/vhost-user.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index c442daa562..e09bed0e4a 100644 --- a/hw/virtio/vhost-user.c +++ b/h

[Qemu-devel] [RFC 0/2] vhost+postcopy fixes

2018-10-08 Thread Ilya Maximets
Sending as RFC because it's not fully tested yet. Ilya Maximets (2): migration: Stop postcopy fault thread before notifying vhost-user: Fix userfaultfd leak hw/virtio/vhost-user.c | 7 +++ migration/postcopy-ram.c | 11 ++- 2 files changed, 13 insertions(+), 5 dele

[Qemu-devel] Have multiple virtio-net devices, but only one of them receives all traffic

2018-10-02 Thread Ilya Maximets
> Hi, > > I'm using QEMU 3.0.0 and Linux kernel 4.15.0 on x86 machines. I'm > observing pretty weird behavior when I have multiple virtio-net > devices. My KVM VM has two virtio-net devices (vhost=off) and I'm > using a Linux bridge in the host. The two devices have different > MAC/IP addresses. >

[Qemu-devel] [PATCH] vhost-user: Don't ask for reply on postcopy mem table set

2018-10-02 Thread Ilya Maximets
2c ("vhost+postcopy: Send address back to qemu") Signed-off-by: Ilya Maximets --- hw/virtio/vhost-user.c | 13 + 1 file changed, 1 insertion(+), 12 deletions(-) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index b041343632..c442daa562 100644 --- a/hw/virtio/v

Re: [Qemu-devel] [PATCH] virtio: add support for in-order feature

2018-08-14 Thread Ilya Maximets
On 13.08.2018 18:35, Michael S. Tsirkin wrote: > On Mon, Aug 13, 2018 at 06:28:06PM +0300, Ilya Maximets wrote: >> On 13.08.2018 12:56, Michael S. Tsirkin wrote: >>> On Mon, Aug 13, 2018 at 10:55:23AM +0300, Ilya Maximets wrote: >>>> On 10.08.2018 22:19, Michael S. Ts

Re: [Qemu-devel] [PATCH] virtio: add support for in-order feature

2018-08-13 Thread Ilya Maximets
On 13.08.2018 12:56, Michael S. Tsirkin wrote: > On Mon, Aug 13, 2018 at 10:55:23AM +0300, Ilya Maximets wrote: >> On 10.08.2018 22:19, Michael S. Tsirkin wrote: >>> On Fri, Aug 10, 2018 at 02:04:47PM +0300, Ilya Maximets wrote: >>>> On 10.08.2018 12:34, Michael S. Ts

Re: [Qemu-devel] [PATCH] virtio: add support for in-order feature

2018-08-13 Thread Ilya Maximets
On 10.08.2018 22:19, Michael S. Tsirkin wrote: > On Fri, Aug 10, 2018 at 02:04:47PM +0300, Ilya Maximets wrote: >> On 10.08.2018 12:34, Michael S. Tsirkin wrote: >>> On Fri, Aug 10, 2018 at 11:28:47AM +0300, Ilya Maximets wrote: >>>> On 10.08.2018 01:51, Michael S. Ts

Re: [Qemu-devel] [PATCH] virtio: add support for in-order feature

2018-08-10 Thread Ilya Maximets
On 10.08.2018 12:34, Michael S. Tsirkin wrote: > On Fri, Aug 10, 2018 at 11:28:47AM +0300, Ilya Maximets wrote: >> On 10.08.2018 01:51, Michael S. Tsirkin wrote: >>> On Thu, Aug 09, 2018 at 07:54:37PM +0300, Ilya Maximets wrote: >>>> New feature bit for in-order featu

Re: [Qemu-devel] [PATCH] virtio: add support for in-order feature

2018-08-10 Thread Ilya Maximets
On 10.08.2018 11:28, Ilya Maximets wrote: > On 10.08.2018 01:51, Michael S. Tsirkin wrote: >> On Thu, Aug 09, 2018 at 07:54:37PM +0300, Ilya Maximets wrote: >>> New feature bit for in-order feature of the upcoming >>> virtio 1.1. It's already supported b

Re: [Qemu-devel] [PATCH] virtio: add support for in-order feature

2018-08-10 Thread Ilya Maximets
On 10.08.2018 01:51, Michael S. Tsirkin wrote: > On Thu, Aug 09, 2018 at 07:54:37PM +0300, Ilya Maximets wrote: >> New feature bit for in-order feature of the upcoming >> virtio 1.1. It's already supported by DPDK vhost-user >> and virtio implementations. These changes re

[Qemu-devel] [PATCH] virtio: add support for in-order feature

2018-08-09 Thread Ilya Maximets
New feature bit for in-order feature of the upcoming virtio 1.1. It's already supported by DPDK vhost-user and virtio implementations. These changes required to allow feature negotiation. Signed-off-by: Ilya Maximets --- I just wanted to test this new feature in DPDK but failed to
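
The patch itself only defines the bit and lets it be negotiated; there is no in-order-specific datapath change. Schematically that amounts to declaring the bit (VIRTIO_F_IN_ORDER is bit 35 in the virtio 1.1 spec) and checking it in both the offered and the acked feature masks; the sketch below uses plain masks instead of QEMU's property plumbing:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Virtio 1.1: the device uses buffers in the same order in which they were
     * made available.  Bit number as assigned by the specification. */
    #define VIRTIO_F_IN_ORDER  35

    static bool feature_negotiated(uint64_t host, uint64_t guest, unsigned bit)
    {
        uint64_t mask = 1ULL << bit;
        return (host & mask) && (guest & mask);
    }

    int main(void)
    {
        uint64_t host_features  = 1ULL << VIRTIO_F_IN_ORDER;  /* offered by the device */
        uint64_t guest_features = 1ULL << VIRTIO_F_IN_ORDER;  /* acked by the driver */

        printf("IN_ORDER negotiated: %d\n",
               feature_negotiated(host_features, guest_features,
                                  VIRTIO_F_IN_ORDER));
        return 0;
    }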

Re: [Qemu-devel] [PATCH] virtio_error: don't invoke status callbacks

2017-12-18 Thread Ilya Maximets
On 13.12.2017 23:03, Michael S. Tsirkin wrote: > Backends don't need to know what frontend requested a reset, > and notifying them from virtio_error is messy because > virtio_error itself might be invoked from backend. > > Let's just set the status directly. >

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-14 Thread Ilya Maximets
On 14.12.2017 17:31, Ilya Maximets wrote: > One update for the testing scenario: > > No need to kill OVS. The issue reproducible with simple 'del-port' > and 'add-port'. virtio driver in guest could crash on both operations. > Most times it crashes in m

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-14 Thread Ilya Maximets
of broken guest index. Thanks. Best regards, Ilya Maximets. P.S. Previously I mentioned that I can not reproduce virtio driver crash with "[PATCH] virtio_error: don't invoke status callbacks" applied. I was wrong. I can reproduce now. System was misconfigured. So

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-13 Thread Ilya Maximets
On 13.12.2017 22:48, Michael S. Tsirkin wrote: > On Wed, Dec 13, 2017 at 04:45:20PM +0300, Ilya Maximets wrote: >>>> That >>>> looks very strange. Some of the functions gets 'old_status', others >>>> the 'new_status'. I'm a bit

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-13 Thread Ilya Maximets
On 11.12.2017 07:35, Michael S. Tsirkin wrote: > On Fri, Dec 08, 2017 at 05:54:18PM +0300, Ilya Maximets wrote: >> On 07.12.2017 20:27, Michael S. Tsirkin wrote: >>> On Thu, Dec 07, 2017 at 09:39:36AM +0300, Ilya Maximets wrote: >>>> On 06.12.2017 19:45, Michael S. Ts

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-08 Thread Ilya Maximets
On 07.12.2017 20:27, Michael S. Tsirkin wrote: > On Thu, Dec 07, 2017 at 09:39:36AM +0300, Ilya Maximets wrote: >> On 06.12.2017 19:45, Michael S. Tsirkin wrote: >>> On Wed, Dec 06, 2017 at 04:06:18PM +0300, Ilya Maximets wrote: >>>> In case virtio error occured afte

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-06 Thread Ilya Maximets
On 06.12.2017 19:45, Michael S. Tsirkin wrote: > On Wed, Dec 06, 2017 at 04:06:18PM +0300, Ilya Maximets wrote: >> In case virtio error occured after vhost_dev_close(), qemu will crash >> in nested cleanup while checking IOMMU flag because dev->vdev already >> set to zero a

[Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-06 Thread Ilya Maximets
'vhost_net_stop' to avoid any possible double frees and segmentation faults due to use of already freed resources by setting the 'vhost_started' flag to zero prior to the 'vhost_net_stop' call. Signed-off-by: Ilya Maximets --- This issue was already addressed more than a year ago by th
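
The fix is about ordering: mark the device as no longer started before entering the stop path, so that an error raised while stopping cannot re-enter cleanup and free the same resources twice. A schematic with hypothetical names (not the actual virtio-net code):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Net {
        bool vhost_started;
    } Net;

    static void vhost_stop(Net *n);

    /* Invoked on virtio errors; may run while vhost_stop() is still on the stack. */
    static void on_virtio_error(Net *n)
    {
        if (n->vhost_started) {
            n->vhost_started = false;
            vhost_stop(n);      /* nested stop -> use of already freed resources */
        }
    }

    static void vhost_stop(Net *n)
    {
        printf("stopping vhost\n");
        on_virtio_error(n);     /* simulate an error hit during teardown */
    }

    int main(void)
    {
        Net n = { .vhost_started = true };

        /* Clearing the flag first makes the nested error handler a no-op. */
        n.vhost_started = false;
        vhost_stop(&n);
        return 0;
    }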

[Qemu-devel] [PATCH] vhost: check for vhost_ops before using.

2016-08-02 Thread Ilya Maximets
'ethtool -L eth0 combined 2' if vhost disconnected. Signed-off-by: Ilya Maximets --- hw/net/vhost_net.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c index dc61dc1..f2d49ad 100644 --- a/hw/net/vhost_net.c +++ b/hw/net/vhost_net.c @@ -428,7 +
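
The one-line fix guards a dereference of the vhost backend ops pointer, which becomes NULL once a vhost-user backend disconnects. A minimal sketch of the guard, with simplified structures:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct VhostOps {
        int (*vhost_set_vring_enable)(void *dev, int enable);
    } VhostOps;

    typedef struct VhostDev {
        const VhostOps *vhost_ops;   /* NULL after the backend disconnects */
    } VhostDev;

    static int set_vring_enable(VhostDev *dev, int enable)
    {
        /* Without this check, a disconnected backend means a NULL dereference. */
        if (dev->vhost_ops && dev->vhost_ops->vhost_set_vring_enable) {
            return dev->vhost_ops->vhost_set_vring_enable(dev, enable);
        }
        return 0;   /* nothing to do when the backend is gone */
    }

    int main(void)
    {
        VhostDev dev = { .vhost_ops = NULL };
        printf("result: %d\n", set_vring_enable(&dev, 1));
        return 0;
    }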
