On Tue, Oct 2, 2012 at 4:08 AM, Alexey I. Froloff <ra...@raorn.name> wrote:
> Hi,
>
> While Open vSwitch does not support InfiniBand natively, it is still
> possible to use IB features. One can create GRE tunnels with endpoints
> on an IP-over-InfiniBand (IPoIB) interface.
>
> Recently I've come to the idea of using RDMA for tunneling packets
> between hypervisors.
>
> Since all tunnels are point-to-point, one can create a new port with a
> type of, say, "rdma" and provide connection parameters such as
> local/remote verbs and so on. From then on, packets sent to such a port
> will be transferred to the remote side via RDMA.
>
> I have found a sample kernel module that does an RDMA transfer of a
> memory region between two nodes. I've also looked into the Open vSwitch
> sources and think that this feature could be implemented without a
> redesign.
>
> Are you interested in this functionality? If yes, I can start working
> on it. I have a text editor, a compiler, and an InfiniBand fabric
> available for testing.
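[For reference, the GRE-over-IPoIB workaround Alexey mentions can be set up
with stock ovs-vsctl today; the second command is only a sketch of what the
proposed "rdma" port type might look like. The type name "rdma" and its
options are assumptions for illustration, nothing like them exists in the
tree.]

    # Existing workaround: GRE tunnel whose remote endpoint is the IP
    # address assigned to the peer's IPoIB interface (address is an example).
    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
        options:remote_ip=192.168.100.2

    # Hypothetical "rdma" port type as proposed; option name and value are
    # placeholders only, not an existing or agreed-upon interface.
    ovs-vsctl add-port br0 rdma0 -- set interface rdma0 type=rdma \
        options:remote=<peer connection parameters>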
Where specifically do you think this will improve performance? Since these
packets will already be in the kernel and need to be sent/received as skbs on
the OVS side, the effect seems very similar to the normal DMA of packets to
the NIC. It's not quite the same as directly placing data into an application
buffer; the only potential benefit I can see is that you might be able to
handle larger packets. I don't know InfiniBand well, so it's possible that I'm
wrong, but I suspect you will end up with something very similar to the
existing network infrastructure.

The other issue I see is that presumably this involves some form of protocol
for transmitting the encapsulated Ethernet packets. Is there an existing
protocol you were planning to use? I know EoIB has been proposed, but I don't
think that's what you're planning. If not, I would be very reluctant to put
something completely new into the tree.