Eric Van Hensbergen wrote:
> On 5/11/07, Anthony Liguori <[EMAIL PROTECTED]> wrote:
>>
>> There's definitely a conversation to have here.  There are going to be a
>> lot of small devices that would benefit from a common transport
>> mechanism.  Someone mentioned a PV entropy device on LKML.  A
>> host=>guest filesystem is another consumer of such an interface.
>>
>> I'm inclined to think though that the abstraction point should be the
>> transport and not the actual protocol.  My concern with standardizing on
>> a protocol like 9p would be that one would lose some potential
>> optimizations (like passing PFN's directly between guest and host).
>>
>
> I think that there are two layers - having a standard, well defined,
> simple shared memory transport between partitions (or between
> emulators and the host system) is certainly a prerequisite.  There are
> lots of different decisions to made here:

What do you think about a socket interface?  I'm not sure how discovery 
would work yet, but there are a few PV socket implementations for Xen at 
the moment.

>  a) does it communicate with userspace, kernelspace, or both?

Sockets are usable from both userspace and kernelspace.

>  b) is it multi-channel? prioritized? interrupt driven or poll driven?

Of course, arguments can be made for any of these depending on the 
circumstance.  I think you'd have to start with something simple that 
would cover the largest number of users (non-multiplexed, interrupt-driven).

>  c) how big are the buffers?  is it packetized?

This could probably be tweaked with sockopts.  I suspect you would have 
an implementation for Xen, KVM, etc. and support a common set of options 
(and possibly some per-VM-type options).

>  d) can all of these parameters be something controllable from userspace?
>  e) I'm sure there are many others that I can't be bothered to think
> of on a Friday

The biggest point of contention would probably be what goes in the 
sockaddr structure.

Thoughts?

Regards,

Anthony Liguori

> Regardless of the details, I think we can definitely come together on
> a common mechanism here and avoid lots of duplication in the drivers
> that are already there and those that will follow.  My personal
> preference is to keep things as simple and flat as possible.  No XML,
> no multiple stacks and daemons to contend with.
>
> What runs on top of the transport is no doubt going to be a touchy
> subject for some time to come.  Many of Ron's arguments for 9p mostly
> apply to this upper level.  I/we will be pursuing this as a unified PV
> resource sharing mechanism over the next few months in combination
> with reorganization and optimization of the Linux 9p code.  LANL has
> also been making progress in this same direction.  I'd have gotten
> started sooner, but I was waiting for my new Thinkpad so that I can
> actually run KVM ;)
>
>>
>> So is there any reason to even tie 9p to KVM?  Why not just have a
>> common PV transport that 9p can use.  For certain things, it may make
>> sense (like v9fs).
>>
>
> Well, I think we were discussing tying KVM to 9p, not vice-versa.
>
> My personal view is that developing a generalized solution for
> resource sharing of all manner of devices and services across
> virtualization, emulation, and network boundaries is a better way to
> spend our time than writing a bunch of specific
> drivers/protocols/interfaces for each type of device and each type of
> interconnect.
>
>              -eric

_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel