Re: Using virtio-vhost-user or vhost-pci

2020-10-12 Thread Nikos Dragazis

On 12/10/20 10:22 p.m., Cosmin Chenaru wrote:


Hi,

Could you please tell me if there has been any more work on virtio-vhost-user
or vhost-pci? The last messages I could find were from January 2018, in this
thread [1], and from what I can see the latest QEMU code does not include
either of them.


Hi Cosmin,

The thread you are pointing to is Stefan's initial work on this subject,
but it is not the latest update. Since 2018, a lot has happened. I have
personally put a lot of effort into pushing this further and, with the help
of the community, we are trying to get this merged into QEMU eventually. You
can find an overview of the current state here [1]. Note also that we
recently had a discussion on the various ongoing inter-VM device emulation
interfaces (have a look at this [2]).

In brief, the current step/goal is to get the device spec merged into the
VIRTIO spec (have a look at these [3][4]).

For more details, please just do a simple search on the spdk, dpdk,
qemu-devel and virtio-dev mailing lists. You will find a lot of threads
on this subject. If anything doesn't make sense or is not clear enough,
feel free to ask.

Nikos

[1] https://lists.gnu.org/archive/html/qemu-devel/2020-02/msg03356.html
[2] https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg04934.html
[3] https://lists.oasis-open.org/archives/virtio-dev/202008/msg00083.html
[4] https://lists.oasis-open.org/archives/virtio-dev/202005/msg00132.html



I am currently running multiple VMs, connected to each other by the DPDK
vhost-switch. A VM can start, reboot, or shut down, so much of this is dynamic,
and the vhost-switch handles all of it. These VMs are therefore a sort of
"endpoints" (I could not find a better name).

The code which runs on the VM endpoints is somewhat tied to the vhost-switch
code, and if I change something on the VM side which breaks compatibility, I
need to recompile the vhost-switch and restart it. The problem is that most of
the time I forget to update the vhost-switch and run into other problems.

If I could use a VM as the vhost-switch instead of the DPDK app, then I hope
that in my endpoint code which runs on the VM, I could add functionality to
make it also run as a switch and forward packets between the other VMs like
the current DPDK switch does. Doing so would let me catch this out-of-sync
condition between the VM endpoint code and the switch code at compile time,
since they would be part of the same app.

This would be a two-phase process. First to run the DPDK vhost-switch inside a 
guest VM, and then to move the tx-rx part into my app.

Both QEMU and the DPDK app use "vhost-user". I was happy to see that I can
start QEMU in vhost-user server mode:

    [libvirt <interface type='vhostuser'> XML stripped by the list archive]

This would translate to these QEMU arguments:

-chardev socket,id=charnet1,path=/home/cosmin/vsocket.server,server -netdev 
type=vhost-user,id=hostnet1,chardev=charnet1,queues=2 -device 
virtio-net-pci,mrg_rxbuf=on,mq=on,vectors=6,netdev=hostnet1,id=net1,mac=52:54:00:9c:3a:e3,bus=pci.0,addr=0x4

But at this point QEMU will not boot the VM until a vhost-user client connects
to it. I even tried adding the "nowait" argument, but QEMU still waits. This
will not work in my case, as the endpoint VMs could start and stop at any time,
and I don't even know in advance how many network interfaces the endpoint VMs
will have.
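
(For what it's worth, in the opposite arrangement, with QEMU as the vhost-user
client and the switch as the server, the dynamic start/stop problem is usually
handled with the chardev reconnect option, e.g. something like the sketch below
with a placeholder socket path; but that still requires the switch side to act
as the server.)

    -chardev socket,id=charnet1,path=/path/to/switch.sock,reconnect=3 \
    -netdev type=vhost-user,id=hostnet1,chardev=charnet1,queues=2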

I then found the virtio-vhost-user transport [2], and was thinking that I
could at least start the packet-switching VM and then let the DPDK app inside
it do the forwarding of the packets. From what I understand, this creates a
single virtio-vhost-user PCI device inside the VM to which the DPDK app can
bind. The limitation here is that if another VM wants to connect to the
packet-switching VM, I need to manually add another virtio-vhost-user-pci
device (and a new vhost-user.sock) before this packet-switching VM starts, so
this is not dynamic.
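
From the wiki, adding the device to the packet-switching VM looks roughly like
this (the device only exists on the experimental branch, and the socket path,
chardev id and PCI address below are placeholders I made up):

    # host side: one listening socket + one virtio-vhost-user-pci device per endpoint
    qemu-system-x86_64 ... \
        -chardev socket,id=vvu0,path=/tmp/vhost-user0.sock,server,nowait \
        -device virtio-vhost-user-pci,chardev=vvu0

    # inside the packet-switching VM, before starting the DPDK app:
    modprobe uio_pci_generic
    dpdk-devbind.py --bind=uio_pci_generic 00:05.0   # the virtio-vhost-user device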

The second approach for me would be to use vhost-pci [3]. I could not fully
understand how it works, but I think it presents a network interface to the
guest kernel after another VM connects to the first one.

I realize this has turned into a long story and probably doesn't make much
sense, but one more thing. The ideal solution for me would be a combination of
the vhost-user socket and the vhost-pci socket. QEMU would start the VM and the
socket would wait in the background for vhost-user connections. When a new
connection is established, QEMU would create a hot-pluggable PCI network card
and let either the guest kernel or the DPDK app inside handle the vhost-user
messages.
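
The individual pieces seem to already exist as monitor commands, so something
like the following could in principle be driven by a management daemon whenever
a new endpoint appears (the ids, path and MAC are made up, and today's
vhost-user netdev still expects a connected backend before it comes up, so this
is only a sketch of the idea):

    (qemu) chardev-add socket,id=charnet2,path=/run/endpoint2.sock,server,nowait
    (qemu) netdev_add vhost-user,id=hostnet2,chardev=charnet2,queues=2
    (qemu) device_add virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:00:00:02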

Any feedback will be welcome, and I really appreciate all your work :)

Cosmin.

[1] https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg04806.html
[2] https://wiki.qemu.org/Features/VirtioVhostUser
[3] https://github.com/wei-w-wang/vhost-pci





Re: [PATCH v3] introduce VFIO-over-socket protocol specification

2020-07-21 Thread Nikos Dragazis

Hi Thanos,

I had a quick look at the spec. Leaving some comments inline.

On 17/7/20 2:20 p.m., Thanos Makatos wrote:


This patch introduces the VFIO-over-socket protocol specification, which
is designed to allow devices to be emulated outside QEMU, in a separate
process. VFIO-over-socket reuses the existing VFIO defines, structs and
concepts.

It has been earlier discussed as an RFC in:
"RFC: use VFIO over a UNIX domain socket to implement device offloading"

Signed-off-by: John G Johnson 
Signed-off-by: Thanos Makatos 

---

Changed since v1:
   * fix coding style issues
   * update MAINTAINERS for VFIO-over-socket
   * add vfio-over-socket to ToC

Changed since v2:
   * fix whitespace

Regarding the build failure, I have not been able to reproduce it locally
using the docker image on my Debian 10.4 machine.
---
  MAINTAINERS |6 +
  docs/devel/index.rst|1 +
  docs/devel/vfio-over-socket.rst | 1135 +++
  3 files changed, 1142 insertions(+)
  create mode 100644 docs/devel/vfio-over-socket.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 030faf0..bb81590 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1732,6 +1732,12 @@ F: hw/vfio/ap.c
  F: docs/system/s390x/vfio-ap.rst
  L: qemu-s3...@nongnu.org
  
+VFIO-over-socket

+M: John G Johnson 
+M: Thanos Makatos 
+S: Supported
+F: docs/devel/vfio-over-socket.rst
+
  vhost
  M: Michael S. Tsirkin 
  S: Supported
diff --git a/docs/devel/index.rst b/docs/devel/index.rst
index ae6eac7..0439460 100644
--- a/docs/devel/index.rst
+++ b/docs/devel/index.rst
@@ -30,3 +30,4 @@ Contents:
 reset
 s390-dasd-ipl
 clocks
+   vfio-over-socket
diff --git a/docs/devel/vfio-over-socket.rst b/docs/devel/vfio-over-socket.rst
new file mode 100644
index 000..b474f23
--- /dev/null
+++ b/docs/devel/vfio-over-socket.rst
@@ -0,0 +1,1135 @@
+***************************************
+VFIO-over-socket Protocol Specification
+***************************************
+
+Version 0.1
+
+Introduction
+============
+VFIO-over-socket, also known as vfio-user, is a protocol that allows a device


I think there is no point in having two names for the same protocol,
"vfio-over-socket" and "vfio-user".


+to be virtualized in a separate process outside of QEMU. VFIO-over-socket
+devices consist of a generic VFIO device type, living inside QEMU, which we
+call the client, and the core device implementation, living outside QEMU, which
+we call the server. VFIO-over-socket can be the main transport mechanism for
+multi-process QEMU, however it can be used by other applications offering
+device virtualization. Explaining the advantages of a
+disaggregated/multi-process QEMU, and device virtualization outside QEMU in
+general, is beyond the scope of this document.
+
+This document focuses on specifying the VFIO-over-socket protocol. VFIO has
+been chosen for the following reasons:
+
+1) It is a mature and stable API, backed by an extensively used framework.
+2) The existing VFIO client implementation (qemu/hw/vfio/) can be largely
+   reused.
+
+In a proof of concept implementation it has been demonstrated that using VFIO
+over a UNIX domain socket is a viable option. VFIO-over-socket is designed with
+QEMU in mind, however it could be used by other client applications. The
+VFIO-over-socket protocol does not require that QEMU's VFIO client
+implementation is used in QEMU. None of the VFIO kernel modules are required
+for supporting the protocol, neither in the client nor the server, only the
+source header files are used.
+
+The main idea is to allow a virtual device to function in a separate process in
+the same host over a UNIX domain socket. A UNIX domain socket (AF_UNIX) is
+chosen because we can trivially send file descriptors over it, which in turn
+allows:
+
+* Sharing of guest memory for DMA with the virtual device process.
+* Sharing of virtual device memory with the guest for fast MMIO.
+* Efficient sharing of eventfd's for triggering interrupts.
+
+However, other socket types could be used which allows the virtual device
+process to run in a separate guest in the same host (AF_VSOCK) or remotely
+(AF_INET). Theoretically the underlying transport doesn't necessarily have to
+be a socket, however we don't examine such alternatives. In this document we
+focus on using a UNIX domain socket and introduce basic support for the other
+two types of sockets without considering performance implications.
+
+This document does not yet describe any internal details of the server-side
+implementation, however QEMU's VFIO client implementation will have to be
+adapted according to this protocol in order to support VFIO-over-socket virtual
+devices.
+
+VFIO
+====
+VFIO is a framework that allows a physical device to be securely passed through
+to a user space process; the kernel does not drive the device at all.


I would remove the last part: "the kernel does not drive the device at
all". Isn't that quite inaccurate? 

Re: Inter-VM device emulation (call on Mon 20th July 2020)

2020-07-17 Thread Nikos Dragazis

On 15/7/20 7:44 p.m., Alex Bennée wrote:


Stefan Hajnoczi  writes:


On Wed, Jul 15, 2020 at 01:28:07PM +0200, Jan Kiszka wrote:

On 15.07.20 13:23, Stefan Hajnoczi wrote:

Let's have a call to figure out:

1. What is unique about these approaches and how do they overlap?
2. Can we focus development and code review efforts to get something
merged sooner?

Jan and Nikos: do you have time to join on Monday, 20th of July at 15:00
UTC?
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200720T1500


Not at that slot, but one hour earlier or later would work for me (so far).

Nikos: Please let us know which of Jan's timeslots works best for you.

I'm in - the earlier slot would be preferable for me, to avoid clashing with
family time.



I'm OK with all timeslots.



Re: [PATCH/RFC 0/1] Vhost User Cross Cable: Intro

2020-02-13 Thread Nikos Dragazis

On Tue, 14 Jan 2020 at 10:20 Stefan Hajnoczi
 wrote:
> On Fri, Jan 10, 2020 at 10:34 AM Marc-André Lureau
>  wrote:
> > On Wed, Jan 8, 2020 at 5:57 AM V.  wrote:
>
> Hi V.,
> I think I remember you from Etherboot/gPXE days :).
>
> > > 3.
> > > Now if Cross Cable is actually a new and (after a code-rewrite or 10) a
> > > viable way to connect 2 QEMU's together, could I actually
> > > suggest a better way?
> > > I am thinking of a '-netdev vhost-user-slave' option to connect (as client
> > > or server) to a master QEMU running '-netdev vhost-user'.
> > > This way there is no need for any external program at all, the master can
> > > have it's memory unshared and everything would just work
> > > and be fast.
> > > Also the whole thing can fall back to normal virtio if memory is not
> > > shared and it would even work in pure usermode without any
> > > context switch.
> > >
> > > Building a patch for this idea I could maybe get around to, don't clearly
> > > have an idea how much work this would be but I've done
> > > crazier things.
> > > But is this something that someone might be able to whip up in an hour
> > > or two? Someone who actually does have a clue about vhost
> > > and virtio maybe? ;-)
> >
> > I believe https://wiki.qemu.org/Features/VirtioVhostUser is what you
> > are after. It's still being discussed and non-trivial, but not very
> > active lately afaik.
>
> virtio-vhost-user is being experimented with in the SPDK community but
> there has been no activity on VIRTIO standardization or the QEMU
> patches for some time.  More info here:
> https://ndragazis.github.io/spdk.html
>
> I think the new ivshmem v2 feature may provide the functionality you
> are looking for, but I haven't looked at them yet.  Here is the link:
> https://www.mail-archive.com/address@hidden/msg668749.html
>
> And here is Jan's KVM Forum presentation on ivshmem v2:
> https://www.youtube.com/watch?v=TiZrngLUFMA
>
> Stefan


Hi Stefan,

First of all, sorry for the delayed response. The mail got lost
somewhere in my inbox. Please keep Cc-ing me on all threads related to
virtio-vhost-user.

As you correctly pointed out, I have been experimenting with
virtio-vhost-user on SPDK and [1] is a working demo on this matter. I
have been working on getting it merged with SPDK and, to this end, I
have been interacting with the SPDK team [2][3] and mostly with Darek
Stojaczyk (Cc-ing him).

The reason you haven't seen any activity over the last few months is
that I got a job and hence my schedule has become tighter. But I will
definitely find some space for it and fit it into my schedule. Let me give you
a heads up, so that we get synced:

Originally, I created and sent a patch (influenced by your DPDK patch
[4]) to SPDK that enhanced SPDK's internal rte_vhost library with
the virtio-vhost-user transport. However, a few weeks later, the SPDK
team decided to switch from their internal rte_vhost library to using
DPDK’s rte_vhost library directly [3]. Given that, I refactored and sent
the patch for the virtio-vhost-user transport to the DPDK mailing list
[5]. Regarding the virtio-vhost-user device, I have made some
enhancements [6] on your original RFC device implementation and, as you
may remember, I have sent some revised versions of the spec to the
virtio mailing list [7].

At the moment, the blocker is the virtio spec. The last update on this
was my discussion with Michael Tsirkin (Cc-ing him as well) about the
need for the VIRTIO_PCI_CAP_DOORBELL_CFG and
VIRTIO_PCI_CAP_NOTIFICATION_CFG configuration structures [8].

So, I think the next steps should be the following:

1. merging the spec
2. adding the device on QEMU
3. adding the vvu transport on DPDK
4. extending SPDK to make use of the new vhost-user transport

To conclude, I still believe that this device is useful and should be
part of virtio/qemu/dpdk/spdk, and I will continue working in this
direction.

Best regards,
Nikos


[1] https://ndragazis.github.io/spdk.html
[2] 
https://lists.01.org/hyperkitty/list/s...@lists.01.org/thread/UR4FM45LEQIBJNQ4MTDZFH6SLTXHTGDR/#ZGPRKS47QWHXHFBEKSCA7Z66E2AGSLHN
[3] 
https://lists.01.org/hyperkitty/list/s...@lists.01.org/thread/WLUREJGPK5UJVTHIQ5GRL3CDWR5NN5BI/#G7P3D4KF6OQDI2RYASXQOZCMITKT5DEP
[4] http://mails.dpdk.org/archives/dev/2018-January/088155.html
[5] 
https://lore.kernel.org/dpdk-dev/e03dcc29-d472-340a-9825-48d13e472...@redhat.com/T/
[6] https://lists.gnu.org/archive/html/qemu-devel/2019-04/msg02910.html
[7] https://lists.oasis-open.org/archives/virtio-dev/201906/msg00036.html
[8] https://lists.oasis-open.org/archives/virtio-dev/201908/msg00014.html



[Qemu-devel] revising virtio-vhost-user device

2019-04-17 Thread Nikos Dragazis
Hi everyone,

as you may already know, there is an experimental feature on the QEMU wiki
called "virtio-vhost-user" [1]. Stefan Hajnoczi started this feature a year
ago and even wrote an RFC implementation [2]. However, the feature is still a
work in progress.

At present, I am working with Dariusz Stojaczyk from Intel to integrate the
virtio-vhost-user device with SPDK/DPDK. Part of the plan is to finish up the
virtio-vhost-user device and get it merged into the latest QEMU. To that end,
I have been working on Stefan's device code [3] over the last few weeks.
Stefan has already made a TODO list [2] of all the missing parts.

The updated device code is here [4]. In short, I have done the following
things so far:

- implement the virtio PCI capabilities for the additional device
  resources

- add support for guest notifications in response to master virtqueue
  kicks

- use ioeventfd mechanism for the callfds in order to avoid entering
  QEMU

- add UUID configuration field

- rebase on top of latest qemu:master

- update the virtio device spec [5][6]

I have deliberately kept the commits separate in order to clearly
demonstrate the above changes.
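
For anyone who wants to poke at the result, a quick way to eyeball the new PCI
capabilities from inside the slave guest (assuming the device shows up on the
guest PCI bus) is something like:

    # list virtio devices (vendor 1af4) with their vendor-specific capabilities
    lspci -vv -d 1af4: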

There is still some work to be done. However, given that I am not familiar
with the QEMU code base and development process, I thought I would share this
with the community at an early stage. Hopefully, I can get some
reviews/pointers/comments/suggestions from you.

Looking forward to your feedback.

Best regards,
Nikos

[1] https://wiki.qemu.org/Features/VirtioVhostUser
[2] http://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg04806.html
[3] https://github.com/stefanha/qemu/tree/virtio-vhost-user
[4] https://github.com/arrikto/qemu-qemu/commits/virtio-vhost-user
[5] https://github.com/ndragazis/virtio/commits
[6] https://ndragazis.github.io/virtio-v1.0-cs04.html#x1-2830007



[Qemu-devel] Error with Virtio DMA Remapping

2018-10-24 Thread Nikos Dragazis
Hi all,

I am trying to use QEMU vIOMMU for virtio DMA remapping. When I run the
VM, I get the following messages in stderr:

qemu-system-x86_64: vtd_iommu_translate: detected translation failure 
(dev=01:00:00, iova=0x0)
qemu-system-x86_64: New fault is not recorded due to compression of faults
qemu-system-x86_64: virtio: zero sized buffers are not allowed 

My QEMU configuration is:


./qemu-system-x86_64 -M accel=kvm -cpu host -smp 2 -m 4G \
  -enable-kvm -machine q35,accel=kvm,kernel-irqchip=split \
  -device intel-iommu,x-aw-bits=48,device-iotlb=on,pt=false \
  -device ioh3420,id=pcie.1,chassis=1 \
  -device virtio-scsi-pci,bus=pcie.1,id=scsi0,iommu_platform=on,ats=on,disable-legacy=on,disable-modern=off \
  [...]

I use QEMU 3.0.50 and host kernel 4.4.0.

I followed the instructions here:
https://wiki.qemu.org/Features/VT-d#With_Virtio_Devices
to enable DMAR for the virtio-scsi device.
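
For reference, this is how I understand the guest side should look per the
wiki (please correct me if I have misread it):

    # guest kernel command line, so the guest actually programs the vIOMMU:
    #   intel_iommu=on
    # quick check from inside the guest that the DMAR tables were picked up:
    dmesg | grep -i -e DMAR -e IOMMU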

I see there is a related thread about this error here:
http://qemu.11.n7.nabble.com/PATCH-0-2-virtio-scsi-Fix-QEMU-hang-with-vIOMMU-and-ATS-td598736.html#a599086

Is this still an open issue, or has it been solved?
Please let me know.

Thanks,
Nikos



[Qemu-devel] vhost: add virtio-vhost-user transport

2018-10-06 Thread Nikos Dragazis
Hi everyone,

In response to a previous email of mine here:

https://lists.01.org/pipermail/spdk/2018-September/002488.html

I would like to share that I have added support for a new
virtio-vhost-user transport to SPDK, and I have a working demo of the SPDK
vhost-scsi target over this transport. I have tested it successfully
with the Malloc bdev, the NVMe bdev and the virtio-scsi bdev.
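
To give a feel for it, the target-side setup in the demo boils down to the
usual vhost-scsi RPCs (the controller name and Malloc size below are just
examples; how the virtio-vhost-user transport is selected is specific to my
branch):

    # start the SPDK vhost target and back a vhost-scsi controller with a Malloc bdev
    ./app/vhost/vhost -S /var/tmp &
    ./scripts/rpc.py construct_malloc_bdev 64 512
    ./scripts/rpc.py construct_vhost_scsi_controller vhost.0
    ./scripts/rpc.py add_vhost_scsi_lun vhost.0 0 Malloc0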

My code currently lives here:

https://github.com/ndragazis/spdk

I'd love to get your feedback and have it merged eventually. I see there
is a relevant conversation on this topic here:

https://lists.01.org/pipermail/spdk/2018-March/001557.html

Is there anyone in this community currently working on this? What would
my next step in contributing this be?

Looking forward to your feedback,
Nikos

--
Nikos Dragazis
Undergraduate Student
School of Electrical and Computer Engineering
National Technical University of Athens



Re: [Qemu-devel] [SPDK] virtio-vhost-user with virtio-scsi: end-to-end setup

2018-10-06 Thread Nikos Dragazis
Hi Pawel,

Thank you for your quick reply. I appreciate your help.

I’m sorry for the late response. I am glad to tell you that I have a
working demo at last. I have managed to solve my problem.

You were right about the IO channels. Function
spdk_scsi_dev_allocate_io_channels() fails to allocate the IO channel
for the virtio-scsi bdev target and function spdk_vhost_scsi_start()
fails to verify its return value. My actual segfault was due to a race
on the unique virtio-scsi bdev request queue between the creation and
the destruction of the IO channel in the vhost device backend. This led
to the IO channel pointer lun->io_channel being NULL after the
vhost-user negotiation, and the bdev layer segfaulted when accessing it
in response to an IO request.

After discovering this, and spending quite some time debugging it, I
searched the bug tracker and the commit history in case I had missed
something. It seems this was a recently discovered bug, which has
fortunately already been solved:

https://github.com/spdk/spdk/commit/9ddf6438310cc97b346d805a5969af7507e84fde#diff-d361b53e911663e8c6c5890fb046a79b

I had overlooked pulling from the official repo for a while, so I missed
the patch. It works just fine after pulling the newest changes.

So, I’ll make sure to work on the latest commits next time :)

Thanks again,
Nikos


On 21/09/2018 10:31 a.m., Wodkowski, PawelX wrote:
> Hi Nikos,
>
> About the SPDK backtrace you got: there is something wrong with IO channel
> allocation. SPDK vhost-scsi should check the result of
> spdk_scsi_dev_allocate_io_channels() in
> spdk_vhost_scsi_dev_add_tgt(), but this result is not checked :(
> You can add some check or assert there.
>
> Paweł



[Qemu-devel] virtio-vhost-user with virtio-scsi: end-to-end setup

2018-09-20 Thread Nikos Dragazis
[start of message truncated in the list archive]
...to find the cause yesterday, I had bumped into this DPDK bug:

https://bugs.dpdk.org/show_bug.cgi?id=85

and I worked around it, essentially by short-circuiting the point where
the DPDK runtime rescans the PCI bus and corrupts the
dev->mem_resource[] field for the already-mapped-in-userspace
virtio-vhost-user PCI device. I just commented out this line:

https://github.com/spdk/dpdk/blob/08332d13b3a66cb1a8c3a184def76b039052d676/drivers/bus/pci/linux/pci.c#L355

This seems to be a good enough workaround for now. I'm not sure whether this
bug has been fixed; I will comment on the DPDK Bugzilla.

But now I have really hit a roadblock: I get a segfault when I run the exact
same commands as shown above, and end up with this backtrace:

-- cut here --

#0  0x0046ae42 in spdk_bdev_get_io (channel=0x30) at bdev.c:920
#1  0x0046c985 in spdk_bdev_readv_blocks (desc=0x93f8a0, ch=0x0,
    iov=0x72fb7c88, iovcnt=1, offset_blocks=0, num_blocks=8,
    cb=0x453e1a , cb_arg=0x72fb7bc0) at bdev.c:1696
#2  0x0046c911 in spdk_bdev_readv (desc=0x93f8a0, ch=0x0, iov=0x72fb7c88,
    iovcnt=1, offset=0, nbytes=4096, cb=0x453e1a ,
    cb_arg=0x72fb7bc0) at bdev.c:1680
#3  0x00453fe2 in spdk_bdev_scsi_read (bdev=0x941c80, bdev_desc=0x93f8a0,
    bdev_ch=0x0, task=0x72fb7bc0, lba=0, len=8) at scsi_bdev.c:1317
#4  0x0045462e in spdk_bdev_scsi_readwrite (task=0x72fb7bc0, lba=0,
    xfer_len=8, is_read=true) at scsi_bdev.c:1477
#5  0x00454c95 in spdk_bdev_scsi_process_block (task=0x72fb7bc0)
    at scsi_bdev.c:1662
#6  0x004559ce in spdk_bdev_scsi_execute (task=0x72fb7bc0)
    at scsi_bdev.c:2029
#7  0x004512e4 in spdk_scsi_lun_execute_task (lun=0x93f830, task=0x72fb7bc0)
    at lun.c:162
#8  0x00450a87 in spdk_scsi_dev_queue_task (dev=0x713c80 ,
    task=0x72fb7bc0) at dev.c:264
#9  0x0045ae48 in task_submit (task=0x72fb7bc0) at vhost_scsi.c:268
#10 0x0045c2b8 in process_requestq (svdev=0x731d9dc0, vq=0x731d9f40)
    at vhost_scsi.c:649
#11 0x0045c4ad in vdev_worker (arg=0x731d9dc0) at vhost_scsi.c:685
#12 0x004797f2 in _spdk_reactor_run (arg=0x944540) at reactor.c:471
#13 0x00479dad in spdk_reactors_start () at reactor.c:633
#14 0x004783b1 in spdk_app_start (opts=0x7fffe390,
    start_fn=0x404df8 , arg1=0x0, arg2=0x0) at app.c:570
#15 0x00404ec0 in main (argc=7, argv=0x7fffe4f8) at vhost.c:115

-- cut here --

I have not yet been able to debug this; it's most probably my bug, but I
am wondering whether there could be a conflict between the two distinct
virtio drivers: (1) the pre-existing one in the SPDK virtio library
under lib/virtio/, and (2) the one I copied into lib/vhost/rte_vhost/ as
part of the vhost library.

I understand that even if I make it work for now, this cannot be a
long-term solution. I would like to re-use the pre-existing virtio-pci
code from the virtio library to support virtio-vhost-user.
Do you see any potential problems in this? Did you change the virtio
code that you placed inside rte_vhost? It seems there are subtle
differences between the two codebases.

These are my short-term issues. In the longer term, I'd be happy to
contribute to VirtioVhostUser development in any way I can. I have seen
some TODOs in your QEMU code here:

https://github.com/stefanha/qemu/blob/virtio-vhost-user/hw/virtio/virtio-vhost-user.c

and I would like to contribute, but it's not obvious to me what progress
you've made since then. As an example, I'd love to explore the possibility
of adding support for interrupt-driven vhost-user backends over the
virtio-vhost-user transport.

To summarize:
- I will follow up on the DPDK bug here:
  https://bugs.dpdk.org/show_bug.cgi?id=85 about a proposed fix.
- Any hints on my segfault? I will definitely continue troubleshooting.
- Once I’ve sorted this out, how can I start using a single copy of the
  virtio-pci codebase? I guess I have to make some changes to comply
  with the API and check the dependencies.
- My current plan for contributing an IRQ-based implementation of the
  virtio-vhost-user transport is to use the vhost-user kick file
  descriptors as a trigger to inject virtual interrupts and handle them
  in userspace. The virtio-vhost-user device could exploit KVM's irqfd
  mechanism for this purpose. I will keep you and the list posted on
  this; I would appreciate any early feedback you may have.

Looking forward to any comments/feedback/pointers you may have. I am
rather inexperienced with this stuff, but it’s definitely exciting and
I’d love to contribute more to QEMU and SPDK.

Thank you for reading this far,
Nikos

--
Nikos Dragazis
Undergraduate Student
School of Electrical and Computer Engineering
National Technical University of Athens