*From:* Galimba [gali...@gmail.com]
*Sent:* Sunday, September 28, 2014 5:07 PM
*To:* users@lists.opennebula.org
*Subject:* [one-users] Infiniband, nodes and VMs
Hello!
I've been given the task to step up our game and connect a few nodes through
InfiniBand. Meaning: we've been working over gigabit Ethernet so far, but now
we want our ONE nodes hosting the VMs to be interconnected through IP over
InfiniBand. We have the hardware to do so, but I must confess I
We are running an InfiniBand setup on FermiCloud using the SR-IOV feature
of our Mellanox cards. For the MPI jobs we have run, we used IPoIB for
communication. For point-to-point communication between VMs the bandwidth
is similar to that between bare-metal machines.
In a bigger cluster
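For anyone wanting to try something similar, the usual shape of an SR-IOV setup is to carve virtual functions out of the Mellanox HCA on the host and hand one VF to each guest as a PCI hostdev. A rough sketch only (the sysfs path, VF count and PCI address 0000:05:00.1 are made-up examples, not FermiCloud's actual configuration):

    # on the KVM host: enable virtual functions on the HCA
    # (the exact knob varies by driver/OFED version)
    echo 8 > /sys/class/net/ib0/device/sriov_numvfs

    # libvirt domain XML fragment handing one VF to the guest
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
    </hostdev>

In OpenNebula that XML fragment can be injected through the VM template's RAW attribute, e.g. RAW = [ TYPE = "kvm", DATA = "..." ], if your version has no native PCI passthrough support.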
So, what about your testing made you decide to go with IPoIB rather than
iSER?
On Fri, 2012-07-13 at 14:52 -0400, Shankhadeep Shome wrote:
Hi Chris
iSER will work seamlessly as long as you run it on the hosts; it just
requires a compatible iSER target. TGTD should work, although the iSER
target I tested with was a ZFS appliance. I've never tested a Linux iSER
target solution. One commercially supported solution you can try out is
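For what it's worth, with tgt the iSER transport is selected through the low-level driver name. A minimal sketch, assuming tgtd was built with RDMA/iSER support and is already running (the target IQN and backing device below are placeholders):

    # create the target on the iSER transport
    tgtadm --lld iser --mode target --op new --tid 1 \
           --targetname iqn.2012-05.org.example:iser-lun0
    # attach a backing block device as LUN 1
    tgtadm --lld iser --mode logicalunit --op new --tid 1 --lun 1 \
           --backing-store /dev/sdb
    # allow initiators to log in
    tgtadm --lld iser --mode target --op bind --tid 1 --initiator-address ALL

The initiator side then logs in with open-iscsi using an iface bound to the iser transport instead of tcp.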
@Shankhadeep
Hi. We chatted a while back about IB, and you explained your IPoIB setup,
and indeed you released your whole solution to the community. That was a
very cool thing to do.
I'm wondering how you arrived at your solution, because I wonder if I'm
barking up a tree you've already
Hi Shankhadeep,
They look really nice. I've adapted your README to wiki style and added it
to wiki.opennebula.org:
http://wiki.opennebula.org/infiniband
Feel free to modify it
Thanks a lot for contributing it!
Cheers,
Jaime
On Thu, May 10, 2012 at 8:46 AM, Shankhadeep Shome wrote:
It's not clear where I would upload the drivers to, so I created a wiki account
and a wiki page for 1-to-1 NAT configuration for IPoIB. I can just send you
the tar file with the updated driver.
Shankhadeep
Just added the vmm driver for the IPoIB NAT stuff
On Wed, 2012-05-09 at 00:16 -0400, Shankhadeep Shome wrote:
Hi Jamie
Thanks for the info, I am creating a readme and cleaning up the driver
scripts and will be uploading them shortly.
Shank
It's not clear to me if you are using iSER or iSCSI over IPoIB. Can you
clarify that? I'm
On Thu, May 10, 2012 at 1:01 AM, Shankhadeep Shome shank15...@gmail.com wrote:
What we did was expose the IPoIB network directly to the VM using 1-to-1
NAT. The VMs themselves can now connect to an iSCSI or NFS source and log in
directly. I am not sure what would be faster, iSER to the host
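To illustrate the kind of 1-to-1 NAT being described (interface names and addresses below are invented for the example; this is not the actual driver code): each VM's private address on the bridge is mapped to a dedicated address on the host's IPoIB interface, so the guest can reach iSCSI or NFS over IB without owning an IB device itself:

    # extra IPoIB address on the host that represents this VM
    ip addr add 10.10.10.101/24 dev ib0

    # 1:1 mapping between that address and the VM's private address
    iptables -t nat -A PREROUTING  -d 10.10.10.101 -j DNAT --to-destination 192.168.122.101
    iptables -t nat -A POSTROUTING -s 192.168.122.101 -o ib0 -j SNAT --to-source 10.10.10.101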
Hi Shankhadeep,
I think the community wiki site is the best place to upload these drivers
to:
http://wiki.opennebula.org/
It's open to registration; let me know if you run into any issues.
About the blog post, our community manager will send you your login info in
a PM.
Thanks!
Cheers,
Jaime
Hello Chris,
So then to modify my question a bit, are all LUNs attached to all hosts
simultaneously? Or does the attachment only happen when a migration is to
occur? Also, is the LUN put into a read-only mode or something during
migration on the original host to protect the data? Or, must a
Hello Shankhadeep,
that sounds really nice. Would you be interested in contributing your code
to OpenNebula's ecosystem and/or publishing an entry on OpenNebula's blog?
Regards,
Jaime
Sure, where and how do I do it? I noticed that you have a community wiki
site. Do I upload the driver and make an entry there?
Shankhadeep
Greetings,
I'm interested in hearing user accounts about using infiniband as the
storage interconnect with OpenNebula if anyone has any thoughts to share.
Specifically about:
* using a shared image and live migrating it (e.g. no copying of images).
* is the guest unaware of the infiniband or is
Hi
I'm using InfiniBand to make shared storage. My setup is simple: the
OpenNebula installdir is shared over NFS with the worker nodes.
- I don't understand what you mean by a shared image. There will be a copy
(or a symlink, if the image is persistent) on the NFS host, and that is that
the
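For anyone wanting to reproduce a setup like that, a minimal sketch of sharing the OpenNebula directory over NFS on the IPoIB network (the path, subnet and front-end address are examples, not the poster's exact values):

    # /etc/exports on the front-end, exported over the IPoIB subnet
    /var/lib/one  10.10.10.0/24(rw,sync,no_subtree_check,no_root_squash)

    # on each worker node, with the front-end reachable at 10.10.10.1 over IPoIB
    mount -t nfs 10.10.10.1:/var/lib/one /var/lib/one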
Hi Gubda,
Thank you for replying. My goal was to use a single shared ISO image to
boot from, and use an in-memory minimal Linux on each node that had no
'disk' at all, then to mount logical data volume(s) from a centralized
storage system. Perhaps that is outside the scope of OpenNebula's design
Reading more, I see that the available methods for block store are iSCSI,
and that the LUNs are attached to the host. From there, a symlink tree
exposes the target to the guest in a predictable way on every host.
So then to modify my question a bit, are all LUNs attached to all hosts
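As a rough illustration of that attach-and-symlink flow (portal, IQN and paths below are placeholders, and the real OpenNebula transfer-manager scripts differ in detail), the host-side steps look something like:

    # the host logs the LUN in over the storage network (an IPoIB address here)
    iscsiadm -m discovery -t sendtargets -p 10.10.10.1
    iscsiadm -m node -T iqn.2012-05.org.example:vm42-disk0 -p 10.10.10.1 --login

    # the attached device is then exposed to the guest at a predictable path,
    # e.g. by symlinking it into the VM's directory
    ln -s /dev/disk/by-path/ip-10.10.10.1:3260-iscsi-iqn.2012-05.org.example:vm42-disk0-lun-1 \
          /var/lib/one/datastores/0/42/disk.0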
On Sat, May 5, 2012 at 11:38 PM, Shankhadeep Shome shank15...@gmail.com wrote:
Hi Chris
We have a solution we are using on Oracle Exalogic hardware (we are using
the bare metal boxes and gateway switches). I think I understand the
requirement: IB-accessible storage from VMs is possible