On Mon, Jan 16, 2017 at 03:34:58PM +0100, Jan Kiszka wrote:
> On 2017-01-16 15:18, Stefan Hajnoczi wrote:
> > On Mon, Jan 16, 2017 at 09:36:51AM +0100, Jan Kiszka wrote:
> >> some of you may know that we are using a shared memory device similar to
> >> ivshmem in the partitioning hypervisor Jailhouse [1].
> >>
> >> We started out compatible with the original ivshmem that QEMU
> >> implements, but we quickly deviated in some details, and in recent
> >> months even more. Some of the deviations are related to making the
> >> implementation simpler: the new ivshmem takes <500 LoC - Jailhouse is
> >> aimed at safety-critical systems and therefore at a small code base.
> >> Other changes address deficits in the original design, such as the
> >> missing life-cycle management.
> > 
> > My first thought is "what about virtio?".  Can you share some background
> > on why ivshmem fits the use case better than virtio?
> > 
> > The reason I ask is because the ivshmem devices you define would have
> > parallels to existing virtio devices and this could lead to duplication.
> 
> virtio was created as an interface between a host and a guest. It has no
> notion of a direct (or even symmetric) connection between guests. With
> ivshmem, we want to establish only a minimal host-guest interface. We
> want to keep the host out of the business of negotiating protocol
> details between two connected guests.
> 
> So the trade-off was between reusing existing virtio drivers on the one
> hand - even in the best case, some changes would certainly have been
> required, plus a complex translation of virtio into a VM-to-VM model -
> and establishing a new driver ecosystem on top of much simpler host
> services (500 LoC...) on the other. We went for the latter.
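
To make the model concrete for anyone skimming the archive, here is a
minimal sketch of what such a guest-to-guest exchange could look like.
Everything in it is an assumption for illustration - the struct layouts,
register names, and the 4 KiB region size are hypothetical and do not
reflect the actual Jailhouse ivshmem interface:

    /* Hypothetical ivshmem-style guest-to-guest send path. All names,
     * offsets, and layouts below are illustrative assumptions only. */
    #include <stdint.h>
    #include <string.h>

    /* Assumed MMIO register block, mapped from a PCI BAR. */
    struct ivshmem_regs {
        uint32_t id;        /* this peer's ID (read-only) */
        uint32_t doorbell;  /* write a peer ID to interrupt that peer */
    };

    /* Assumed layout of the shared-memory window (another BAR). */
    struct shared_region {
        uint32_t msg_len;
        uint8_t  msg[4092];
    };

    /* The host only maps the memory and forwards the doorbell as an
     * interrupt; the message format is agreed on purely between the
     * two connected guests. */
    static void send_to_peer(volatile struct ivshmem_regs *regs,
                             volatile struct shared_region *shm,
                             uint32_t peer_id,
                             const void *buf, uint32_t len)
    {
        memcpy((void *)shm->msg, buf, len);
        shm->msg_len = len;        /* publish the length */
        /* A real implementation needs a memory barrier here so the
         * peer observes the data before the notification. */
        regs->doorbell = peer_id;  /* MMIO write raises the peer's IRQ */
    }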

Thanks.  I was going in the same direction as Marc-André regarding
vhost-pci.  Let's switch to his sub-thread.

Stefan
