We are running an InfiniBand setup on FermiCloud using the SR-IOV feature
of our Mellanox cards.  In the MPI jobs we have run we have used
IPoIB for communication.  For point-to-point communication between VMs
the bandwidth is similar to that between bare metal machines.
In a bigger cluster with small messages being passed there can
be some degradation, but performance is still decent.
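
If you want to sanity-check those numbers yourself, a minimal MPI
ping-pong like the sketch below will do: run it once between two VMs and
once between two bare metal hosts and compare. This assumes an MPI stack
such as Open MPI is installed; the message size, iteration count, and
hostnames are placeholders you would vary (small messages are where the
degradation shows up).

    /* pingpong.c - rough point-to-point bandwidth check (e.g. over IPoIB).
       A sketch only; just the first two ranks participate. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0) fprintf(stderr, "need at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        const int BYTES = 1 << 20;  /* 1 MiB; shrink to probe small messages */
        const int ITERS = 100;
        char *buf = calloc(BYTES, 1);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {        /* send, then wait for the echo */
                MPI_Send(buf, BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) { /* echo everything back */
                MPI_Recv(buf, BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)  /* two transfers per round trip */
            printf("avg bandwidth: %.2f MB/s\n",
                   2.0 * ITERS * BYTES / (t1 - t0) / 1e6);
        free(buf);
        MPI_Finalize();
        return 0;
    }

Compile and launch with something like (hostnames hypothetical):

    mpicc pingpong.c -o pingpong
    mpirun -np 2 -host vm1,vm2 ./pingpong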

As for an image store accessible via InfiniBand, we don't do
that currently, but it could be done, for instance, via an NFS
datastore exported to your head node via Ethernet and to your
InfiniBand nodes via IPoIB.
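
For illustration only, the exports on such an NFS server might look
roughly like the following. The datastore path is OpenNebula's default,
but the hostname and the IPoIB subnet are made-up examples; substitute
your own:

    # /etc/exports on the NFS server holding the image datastore
    # head node mounts over Ethernet (hostname is a placeholder):
    /var/lib/one/datastores  headnode.example.org(rw,sync,no_subtree_check)
    # hypervisor nodes mount over a hypothetical IPoIB subnet:
    /var/lib/one/datastores  192.168.100.0/24(rw,sync,no_subtree_check)

The head node only needs the datastore for management, while the
hypervisors do the heavy image transfers, so those go over IPoIB.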

Steve


On Sun, 28 Sep 2014, Galimba wrote:

Hello! I've been given the task of stepping up our game and connecting a few
nodes through InfiniBand. That is: we've been working over gigabit Ethernet
so far, but now we want our ONE nodes hosting the VMs to be interconnected
through IP over InfiniBand. We have the hardware to do so, but I must confess
I lack the expertise to pull this off.

I've been googling around a bit and found a couple of links, and I think the
answers to both my questions are there (yes, it's possible, and no, it's not
that hard):

https://github.com/OpenNebula/addon-kvm-sr-iov
http://blog.scottlowe.org/2009/12/02/what-is-sr-iov/
http://wiki.opennebula.org/ecosystem:sr-iov
http://pkg-ofed.alioth.debian.org/howto/infiniband-howto-5.html

I know little about InfiniBand, so here are a few questions I started with:
1) Is it hard to get the VMs to work with each other over InfiniBand? I mean,
if I have a client who wants his VMs to exchange messages at really high
speed, I'd like them to be able to create VMs that communicate on a specific
InfiniBand network at high speed.

2) Is it easy to configure the hosts to deliver the images back and forth
over InfiniBand? This would really speed up deployment.

3) Is anyone using this setup? I mean, a hardware interface shared between
both the nodes and the VMs.

I'd appreciate any input you guys can give me =)

best regards
galimba

------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Scientific Computing Division, Scientific Computing Services Quad.
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing