On 5/16/07, Anthony Liguori <[EMAIL PROTECTED]> wrote:
> Eric Van Hensbergen wrote:
> >
> > From a functional standpoint I don't have a huge problem with it,
> > particularly if it's more of a pure socket and not something that
> > tries to look like a TCP/IP endpoint -- I would prefer something
> > closer to netlink. Sockets would allow the existing 9p stuff to
> > pretty much work as-is.
>
> So you would prefer assigning out types instead of using an identifier
> string in the sockaddr?

I wasn't really thinking of anything that extreme, just of having an
assigned type for the vm sockets so that we can minimize baggage.
Perhaps I'm being overzealous.
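For what it's worth, what I have in mind is something like the sketch
below. Everything in it is hypothetical -- the AF_VMSOCK name, the
family number, and the struct layout are made up for illustration and
don't correspond to any existing kernel interface -- but it shows the
"assigned type in the sockaddr" idea, as opposed to matching on an
identifier string:

    #include <stdint.h>
    #include <sys/socket.h>

    /* Hypothetical: AF_VMSOCK and this sockaddr exist nowhere; the
     * point is only that the service is named by an assigned numeric
     * type carried in the address, not by a string. */
    #define AF_VMSOCK 40                /* made-up family number */

    struct sockaddr_vmsock {
            sa_family_t svm_family;     /* AF_VMSOCK */
            uint32_t    svm_type;       /* assigned service type, e.g.
                                         * a number reserved for 9p */
            uint32_t    svm_peer;       /* which guest (or the host) */
    };

A client would then do the usual

    s = socket(AF_VMSOCK, SOCK_STREAM, 0);
    connect(s, (struct sockaddr *)&addr, sizeof(addr));

and the existing 9p code could sit on top of it unchanged.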
> > However, all that being said, I noticed some pretty big differences
> > between sockets and shared memory in terms of overhead under Linux.
> >
> > If you take a look at the RPC latency graph in:
> > http://plan9.escet.urjc.es/iwp9/cready/PROSE_iwp9_2006.pdf
> >
> > You'll see that a local socket implementation has about an order of
> > magnitude worse latency than a PROSE/Libra inter-partition shared
> > memory channel.
>
> You seem to suggest that the low latency is due to a very greedy (CPU
> hungry) polling algorithm. A poll vs. interrupt model would seem to me
> to be orthogonal to using sockets as an interface.

That certainly was a theory -- I never did detailed measurements.
However, there is certainly extra overhead on the socket path from
kernel-user space boundary crossings and from the additional code path
length of the socket operations themselves. Still, I'm game to compare
the alternatives.

        -eric
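P.S. On the polling point, here is roughly the receive-path difference
I mean. The channel layout below is invented for illustration (the
PROSE/Libra channel is not literally structured this way), but it shows
the general shape of the trade-off: a greedy consumer burns CPU
watching shared memory and never takes the kernel-user crossing that a
blocking socket read() does.

    #include <stdatomic.h>
    #include <stdint.h>

    /* Invented layout for a one-way shared-memory channel; meant only
     * to illustrate the polling model, not any real implementation. */
    struct shm_chan {
            _Atomic uint32_t head;      /* producer advances this */
            uint32_t         tail;      /* consumer advances this */
            char             buf[4096]; /* ring of payload bytes  */
    };

    /* Greedy consumer: spin until the producer publishes new data.
     * No syscall and no scheduler wakeup -- which is why it can beat
     * a blocking read() on latency, and also why it is CPU hungry. */
    static uint32_t bytes_ready(struct shm_chan *ch)
    {
            uint32_t head;

            while ((head = atomic_load_explicit(&ch->head,
                            memory_order_acquire)) == ch->tail)
                    ;                   /* busy-wait */
            return head - ch->tail;
    }

An interrupt-driven variant would sleep in that loop and have the
producer kick a wakeup, which is the orthogonal axis you're pointing
at.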