Re: [virtio-dev] Virtio-loopback: PoC of a new Hardware Abstraction Layer for non-Hypervisor environments based on Virtio

2023-04-19 Thread Timos Ampelikiotis
Hello Xuan,

The main differences between virtio-loopback and VDUSE are:

1) the data sharing mechanism

2) the Virtio/Vhost-user devices which are supported by each solution


In particular, Virtio-loopback implements a zero-copy memory-mapping
mechanism: the data are directly accessible from user-space. It
supports vhost-user-blk, vhost-user-input and vhost-user-rng.


To the best of my knowledge, VDUSE is based on a bounce-buffer
mechanism, which does not follow the zero-copy principle. In addition,
it supports only vhost-user-blk and vhost-user-net.
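
To make the data-sharing difference concrete, here is a minimal,
untested sketch of the zero-copy path as seen from the vhost-user
device side (the device node name and the mapping size are
illustrative, not the actual interface):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/loopback", O_RDWR);  /* illustrative */
        if (fd < 0) { perror("open"); return 1; }

        /* Map one page of the vring memory exported by the driver. */
        void *vring = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (vring == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        /* Descriptors and buffers are now accessed in place here,
         * with no bounce buffer in between. */

        munmap(vring, 4096);
        close(fd);
        return 0;
    }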

Kind regards,

Timos


On Tue, Apr 18, 2023 at 11:01 AM Xuan Zhuo wrote:

> On Thu, 13 Apr 2023 16:35:59 +0300, Timos Ampelikiotis <t.ampelikio...@virtualopensystems.com> wrote:
> > Dear virtio-dev community,
> >
> > I would like to introduce you to Virtio-loopback, a Proof of Concept
> > (PoC) that we have been working on at Virtual Open Systems in the
> > context of the Automotive Grade Linux community (Virtualization &
> > Containers expert group - EG-VIRT).
> >
> > We consider this work a PoC and are not currently planning to
> > upstream it. However, if the zero-copy mechanism or any other aspect
> > of this work is interesting for other Virtio implementations, we
> > would be glad to discuss further.
>
> What is the difference between this and vduse?
>
> Thanks.
>

Re: [virtio-dev] Virtio-loopback: PoC of a new Hardware Abstraction Layer for non-Hypervisor environments based on Virtio

2023-04-18 Thread Xuan Zhuo
On Thu, 13 Apr 2023 16:35:59 +0300, Timos Ampelikiotis wrote:
> Dear virtio-dev community,
>
> I would like to introduce you to Virtio-loopback, a Proof of Concept (PoC) that
> we have been working on at Virtual Open Systems in the context of the
> Automotive Grade Linux community (Virtualization & Containers expert
> group - EG-VIRT).
>
> We consider this work a PoC and are not currently planning to
> upstream it. However, if the zero-copy mechanism or any other aspect
> of this work is interesting for other Virtio implementations, we
> would be glad to discuss further.

What is the difference between this and vduse?

Thanks.

>
> Overview:
> ---------
>
> Virtio-loopback is a new hardware abstraction layer, based on virtio,
> designed for non-hypervisor environments. The main objective is to
> enable applications to communicate with vhost-user devices in a
> non-hypervisor environment.
>
> In more detail, Virtio-loopback's design consists of a new transport
> (Virtio-loopback), a user-space daemon (Adapter), and a vhost-user
> device. The data path has been implemented on the "zero-copy"
> principle: vhost-user devices access the virtqueues directly in kernel
> space. This first implementation supports multiple queues, requires no
> virtio protocol changes and applies only minor modifications to the
> vhost-user library. The vhost-user devices supported today are
> vhost-user-rng (both the Rust and the C version), vhost-user-input and
> vhost-user-blk.
>
> Motivation & requirements:
> --------------------------
>
> 1. Enable the usage of the same user-space driver in both virtualized
>    and non-virtualized environments.
>
> 2. Maximize performance with zero-copy design principles.
>
> 3. Applications using such drivers run unchanged and transparently in
>    both virtualized and non-virtualized environments.
>
> Design description:
> -------------------
>
> a) Component description:
> -------------------------
>
> The Virtio-loopback architecture consists of three main components
> described below:
>
> 1) Driver: In order to route the VIRTIO communication to user-space,
>    the virtio-loopback driver was implemented; it consists of:
>    - A new transport layer, based on virtio-mmio, which is responsible
>      for routing the read/write communication of the virtio device to
>      the adapter binary.
>    - A character device which works as an intermediate layer between
>      the user-space components and the transport layer. The character
>      device lets the adapter provide all the required information and
>      initialize the transport and, at the same time, provides direct
>      access to the vrings from user-space. Access to the vrings is
>      based on a memory-mapping mechanism which lets the vhost-user
>      device read and write data directly in kernel memory without any
>      copy (a kernel-side sketch of this mechanism follows the
>      component list).
>
> 2) Adapter: Implements the role that QEMU plays in the corresponding
>    virtualized scenario. Specifically, it combines the functionality
>    of two main QEMU components, the virtio-mmio transport emulation
>    and the vhost-user backend, in order to act as a bridge between the
>    transport and the vhost-user device. The two main parts of the
>    adapter are:
>    - A vhost-user backend, which is the main communication point with
>      the vhost-user device.
>    - A virtio emulation layer, which handles the messages coming from
>      the driver and translates them into vhost-user messages/actions.
>
> 3) Vhost-user device: This component required only minimal
>    modifications to make the vrings directly accessible in kernel
>    memory.
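>
> As a rough illustration of the memory-mapping mechanism mentioned
> above, the character device's mmap handler can remap the
> kernel-allocated vring pages into the caller's address space along
> these lines (simplified, untested sketch; all identifiers are
> illustrative and do not match the actual implementation):
>
>     static int loopback_mmap(struct file *file,
>                              struct vm_area_struct *vma)
>     {
>         struct loopback_dev *dev = file->private_data; /* assumed */
>         unsigned long size = vma->vm_end - vma->vm_start;
>
>         if (size > dev->vring_size)
>             return -EINVAL;
>
>         /* dev->vring_buf holds the kernel-allocated virtqueues;
>          * remap its pages so the vhost-user process can read and
>          * write them directly. */
>         return remap_pfn_range(vma, vma->vm_start,
>                                virt_to_phys(dev->vring_buf)
>                                    >> PAGE_SHIFT,
>                                size, vma->vm_page_prot);
>     }
>
>     static const struct file_operations loopback_fops = {
>         .owner = THIS_MODULE,
>         .mmap  = loopback_mmap,
>         /* .open, .unlocked_ioctl, ... elided */
>     };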
>
> b) Communication between the virtio-loopback components:
> --------------------------------------------------------
>
> Having described the role of each component, a few details follow
> about how the components interact with each other and the mechanisms
> used.
>
> 1) Transport & Adapter:
>    - The two components share a communication data structure which
>      describes the current read/write operation requested by the
>      transport.
>    - When this data structure has been filled with all the required
>      information, the transport triggers an EventFD and waits. The
>      adapter wakes up, takes the corresponding actions and finally
>      notifies and unlocks the transport by issuing an IOCTL system
>      call (a sketch of this loop follows this item).
>    - Compared to the virtualized scenario, the adapter issues an IOCTL
>      to the driver in place of an interrupt.
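>
> A simplified, untested sketch of the adapter side of this exchange
> (the shared structure layout and the LOOPBACK_OP_DONE ioctl are
> illustrative names, not the actual ABI):
>
>     #include <stdint.h>
>     #include <sys/ioctl.h>
>     #include <unistd.h>
>
>     #define LOOPBACK_OP_DONE _IO('L', 0)   /* hypothetical ioctl */
>
>     struct loopback_op {                   /* assumed layout */
>         uint64_t addr;                     /* register offset */
>         uint64_t value;                    /* data / result */
>         uint32_t is_write;
>     };
>
>     /* Provided by the adapter's virtio-mmio emulation. */
>     extern uint64_t emu_mmio_read(uint64_t addr);
>     extern void emu_mmio_write(uint64_t addr, uint64_t value);
>
>     void adapter_loop(int efd, int dev_fd, struct loopback_op *op)
>     {
>         uint64_t cnt;
>
>         /* Block until the transport publishes a request via the
>          * EventFD, serve it, then unlock the waiting driver thread
>          * with an ioctl (the stand-in for an interrupt). */
>         while (read(efd, &cnt, sizeof(cnt)) == sizeof(cnt)) {
>             if (op->is_write)
>                 emu_mmio_write(op->addr, op->value);
>             else
>                 op->value = emu_mmio_read(op->addr);
>
>             ioctl(dev_fd, LOOPBACK_OP_DONE);
>         }
>     }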
>
> 2) Adapter & Vhost-user device:
>    - The mechanisms used between these two components are the same as
>      in the virtualized case:
>      a) A UNIX socket is in place to exchange the VHOST-USER messages.
>      b) EventFDs are used to trigger VIRTIO kick/call requests (a
>         minimal kick sketch follows this item).
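>
> For reference, a kick through an EventFD is just an 8-byte write on
> one side and an 8-byte read on the other (untested sketch; the fd
> itself is handed to the device beforehand as SCM_RIGHTS ancillary
> data with the VHOST_USER_SET_VRING_KICK message):
>
>     #include <stdint.h>
>     #include <sys/eventfd.h>
>     #include <unistd.h>
>
>     /* Adapter: create the kick fd to pass to the device. */
>     int make_kick_fd(void)
>     {
>         return eventfd(0, EFD_NONBLOCK);
>     }
>
>     /* Adapter: signal that new buffers are available. */
>     void send_kick(int kick_fd)
>     {
>         uint64_t one = 1;
>         write(kick_fd, &one, sizeof(one));
>     }
>
>     /* Device: drain the notification, then process the avail ring
>      * in the vring memory mapped from the character device. */
>     void wait_kick(int kick_fd)
>     {
>         uint64_t cnt;
>         read(kick_fd, &cnt, sizeof(cnt));
>     }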
>
> 3) Transport & Vhost-user device:
>    - Since the vrings are allocated in kernel memory, the vhost-user
>      device needs to communicate and request access from