Re: [one-users] infiniband

2012-05-05 Thread Guba Sándor

Hi

I'm using InfiniBand for shared storage. My setup is simple: the 
OpenNebula install directory is shared over NFS with the worker nodes.


- I'm not sure what you mean by a shared image. There will be a copy 
(or a symlink, if the image is persistent) on the NFS host, and that is what 
the hypervisor uses over the network. Live migration works because the image 
is never moved; another host simply starts using it from the same spot. With 
my Linux images I see about a 30 s delay during a live migration. You can use 
the qcow2 driver for shared images (see the sketch below the next point).


- I don't understand exactly what you mean by "guest unaware". If you 
mean the storage-to-host connection, it has nothing to do with OpenNebula; 
you can use any shared filesystem. Our NFS traffic runs over an IPoIB 
connection.
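For illustration, the qcow2-on-shared-NFS idea comes down to creating a
copy-on-write overlay on top of a base image that every host sees at the same
path. This is only a sketch: the paths and the subprocess wrapper are
assumptions for the example, not what the OpenNebula transfer drivers
actually generate.

    import subprocess

    # Assumed locations on the NFS-shared datastore (hypothetical paths).
    BASE_IMAGE = "/var/lib/one/datastores/1/base.qcow2"   # shared base image
    VM_DISK = "/var/lib/one/datastores/0/42/disk.0"       # per-VM overlay

    # Create a copy-on-write overlay on top of the shared base image.
    # Every host sees the same files over NFS, so a live migration only
    # changes which hypervisor opens them; nothing is copied.
    subprocess.check_call([
        "qemu-img", "create",
        "-f", "qcow2",       # overlay format
        "-b", BASE_IMAGE,    # backing file on the shared export
        VM_DISK,
    ])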


On 2012-05-05 19:01, Chris Barry wrote:

Greetings,

I'm interested in hearing user accounts of using InfiniBand as the 
storage interconnect with OpenNebula, if anyone has any thoughts to 
share. Specifically about:

* using a shared image and live migrating it (e.g. no copying of images).
* is the guest unaware of the InfiniBand, or is it running IB drivers?
* does the host expose the volumes to the guest, or does the guest 
connect directly?

* I'd like to avoid iSCSI over IPoIB if possible.
* clustering filesystem/LVM requirements.
* file-based vdisk or logical volume usage?

Any experiences at all will be helpful.

Thanks
Christopher





Re: [one-users] infiniband

2012-05-05 Thread Chris Barry
Hi Guba,

Thank you for replying. My goal was to boot from a single shared ISO image,
run an in-memory minimal Linux on each node with no 'disk' at all, and then
mount logical data volume(s) from a centralized storage system. Perhaps that
is outside the scope of OpenNebula's design goals and may not be possible -
I'm just now investigating it.

I see you are using NFS, but my desire is to use block storage instead,
ideally LVM, and not incur the performance penalties of IPoIB. It does
sound simple though, and that's always good. Do you have any performance
data on that setup in terms of IOPs and/or MB/s write speeds? It does sound
interesting.

Thanks again,
-C



Re: [one-users] infiniband

2012-05-05 Thread Chris Barry
Reading more, I see that the available method for block storage is iSCSI,
and that the LUNs are attached to the host. From there, a symlink tree
exposes the target to the guest in a predictable way on every host.
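(As a rough illustration of that "predictable way": the host logs into the
target and then symlinks the resulting block device into the VM's datastore
directory. The paths below are assumptions for the example, not the exact
ones the iSCSI drivers create.)

    import os

    # Hypothetical device path the host sees after logging into the target.
    lun_device = ("/dev/disk/by-path/"
                  "ip-10.0.0.5:3260-iscsi-iqn.2012-05.example:vm42-lun-0")

    # Hypothetical per-VM directory in the system datastore; the symlink
    # gives the hypervisor the same disk path on whichever host runs the VM.
    vm_dir = "/var/lib/one/datastores/0/42"
    os.makedirs(vm_dir, exist_ok=True)
    os.symlink(lun_device, os.path.join(vm_dir, "disk.0"))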

So then to modify my question a bit, are all LUNs attached to all hosts
simultaneously? Or does the attachment only happen when a migration is to
occur? Also, is the LUN put into a read-only mode or something during
migration on the original host to protect the data? Or, must a clustering
filesystem be employed?

Guess I have a lot to read :)

Thanks
-C



Re: [one-users] infiniband

2012-05-05 Thread Shankhadeep Shome
On Sat, May 5, 2012 at 11:38 PM, Shankhadeep Shome wrote:

> Hi Chris
>
> We have a solution we are using on Oracle Exalogic hardware (we are using
> the bare-metal boxes and gateway switches). I think I understand the
> requirement: IB-accessible storage from VMs is possible, however it's a bit
> convoluted. Our solution was to create a one-to-one NAT from the VMs to the
> IPoIB network. This allows the VMs to mount storage natively over the IB
> network. The performance is pretty good, about 9 Gbps per node with 64k MTU
> sizes. We created an OpenNebula driver for this and I'm happy to share it
> with the community. The driver handles VM migrations by enabling/disabling
> IP aliases on the host, and can also be used to manipulate iptables rules
> on the source and destination hosts when OpenNebula moves VMs around.
>
> Shankhadeep
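To picture the driver behaviour described above, here is a minimal sketch of
a one-to-one mapping for a single VM. It is not the actual driver: the
interface name, addresses and iptables layout are assumptions.

    import subprocess

    def run(*cmd):
        subprocess.check_call(cmd)

    # Hypothetical addresses: the VM's private address behind the host and
    # the alias it should appear as on the IPoIB fabric.
    VM_IP = "192.168.122.42"
    IPOIB_IP = "10.10.0.42"
    IPOIB_IF = "ib0"

    def enable_nat():
        # Add the per-VM alias on the host's IPoIB interface.
        run("ip", "addr", "add", IPOIB_IP + "/24", "dev", IPOIB_IF)
        # Inbound: traffic to the alias is forwarded to the VM.
        run("iptables", "-t", "nat", "-A", "PREROUTING",
            "-d", IPOIB_IP, "-j", "DNAT", "--to-destination", VM_IP)
        # Outbound: the VM's traffic leaves with the alias as its source.
        run("iptables", "-t", "nat", "-A", "POSTROUTING",
            "-s", VM_IP, "-j", "SNAT", "--to-source", IPOIB_IP)

    def disable_nat():
        # Mirror of enable_nat(); run on the source host when the VM
        # migrates away, while enable_nat() runs on the destination host.
        run("iptables", "-t", "nat", "-D", "POSTROUTING",
            "-s", VM_IP, "-j", "SNAT", "--to-source", IPOIB_IP)
        run("iptables", "-t", "nat", "-D", "PREROUTING",
            "-d", IPOIB_IP, "-j", "DNAT", "--to-destination", VM_IP)
        run("ip", "addr", "del", IPOIB_IP + "/24", "dev", IPOIB_IF)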


Re: [one-users] infiniband

2012-05-07 Thread Jaime Melis
Hello Chris,

So then to modify my question a bit, are all LUNs attached to all hosts
> simultaneously? Or does the attachment only happen when a migration is to
> occur? Also, is the LUN put into a read-only mode or something during
> migration on the original host to protect the data? Or, must a clustering
> filesystem be employed?
>

Our stock drivers do expose all the iSCSI targets (which are LVM volumes)
to all the hosts, but only one host logs into a given iSCSI target at a
time, so there is no collision. In other words: when starting the VM we log
into the iSCSI target, and we log out when stopping the VM. We also handle
migrations: we log out on the source host and log in on the target host.

However, these drivers are meant to be hacked to fit each datacenter's
requirements.
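In host-level terms, the lifecycle described above boils down to something
like the following sketch (the target name, portal and the way commands are
dispatched to each host are assumptions, not the stock driver code):

    import subprocess

    def iscsi_session(action, target, portal):
        # open-iscsi's standard node-mode login/logout invocation.
        subprocess.check_call([
            "iscsiadm", "-m", "node",
            "-T", target,   # e.g. "iqn.2012-05.org.example:vm42" (assumed)
            "-p", portal,   # e.g. "10.0.0.5:3260" (assumed)
            action,         # "--login" or "--logout"
        ])

    # Conceptual lifecycle; each call would really run on the relevant host
    # (e.g. over SSH), not locally:
    #   deploy on host A:  iscsi_session("--login",  target, portal)  # on A
    #   migrate A -> B:    iscsi_session("--login",  target, portal)  # on B
    #                      iscsi_session("--logout", target, portal)  # on A
    #   shutdown on B:     iscsi_session("--logout", target, portal)  # on B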

Regards,
Jaime




-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org


Re: [one-users] infiniband

2012-05-07 Thread Jaime Melis
Hello Shankhadeep,

That sounds really nice. Would you be interested in contributing your code
to OpenNebula's ecosystem and/or publishing an entry on OpenNebula's blog?

Regards,
Jaime


Re: [one-users] infiniband

2012-05-07 Thread Shankhadeep Shome
Sure, where and how do I do it? I noticed that you have a community wiki
site. Do I upload the driver and make an entry there?

Shankhadeep


Re: [one-users] infiniband

2012-05-08 Thread Jaime Melis
Hi Shankhadeep,

I think the community wiki site is the best place to upload these drivers
to:
http://wiki.opennebula.org/

It's open to registration; let me know if you run into any issues.

About the blog post, our community manager will send you your login info in
a PM.

Thanks!

Cheers,
Jaime


Re: [one-users] infiniband

2012-05-09 Thread Shankhadeep Shome
Hi Jaime

Thanks for the info. I am creating a README and cleaning up the driver
scripts, and will be uploading them shortly.

Shank


Re: [one-users] infiniband

2012-05-09 Thread Christopher Barry
On Wed, 2012-05-09 at 00:16 -0400, Shankhadeep Shome wrote:
> Hi Jamie
> 
> 
> Thanks for the info, I am creating a readme and cleaning up the driver
> scripts and will be uploading them shortly.
> 
> 
> Shank 

It's not clear to me whether you are using iSER or iSCSI over IPoIB. Can you
clarify that? I'm envisioning iSER being used where the LVM volumes are
logged into from the assigned guest's host, and then exposed to the
guest as a local SCSI block device connected via virtio.

If this is how it works, does the host itself become a storage device
within 'one'? Or is the InfiniBand-enabled storage device seen as the
storage device in 'one'?

Thanks,
-C



Re: [one-users] infiniband

2012-05-09 Thread Shankhadeep Shome
On Thu, May 10, 2012 at 1:01 AM, Shankhadeep Shome wrote:

> What we did was expose the IPoIB network directly to the VM using 1-to-1
> NAT. The VMs themselves can then connect to an iSCSI or NFS source and log
> in directly. I am not sure which would be faster: iSER to the host, which
> is then exposed to the VMs as a raw device, or direct-attached network
> storage. I think either way you lose some performance. If you want to take
> the iSER option, then your best solution is to present an iSER volume to
> the hosts and use LVM to present storage to the VMs via the virtio-blk
> mechanism. There is another, currently proprietary, way using Mellanox's
> SR-IOV drivers to present an InfiniBand virtual function directly to the
> VMs; however, these drivers have not been released to OFED yet and they
> have their own limitations.
>
> We are able to hit around 9 Gbps over the IPoIB link to the VMs via NAT,
> using 64K frames with connected-mode IPoIB and netperf, and max out our
> local NAS appliance's throughput. Our cards are ConnectX-2 at 40 Gbps.
> IPoIB will max out around 12-14 Gbps with multiple streams even on these
> cards. IPoIB performance depends heavily on the kernel and driver versions;
> if you are running an older kernel and drivers, the performance is much
> lower, keep that in mind. We are using the latest MLNX OFED drivers with
> Linux kernel 3.0 to get this level of performance, with large
> transmit/receive offload enabled.
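For reference, the connected-mode and large-MTU tuning mentioned above
amounts to something like this on each host (a sketch; the interface name
ib0 and the 65520 MTU are assumptions about a typical IPoIB setup, not this
exact deployment):

    import subprocess

    IPOIB_IF = "ib0"  # assumed IPoIB interface name

    # Switch the interface from datagram to connected mode, which allows
    # the large IPoIB MTU that the throughput numbers above rely on.
    with open("/sys/class/net/%s/mode" % IPOIB_IF, "w") as f:
        f.write("connected\n")

    # Raise the MTU (65520 is the usual connected-mode maximum).
    subprocess.check_call(["ip", "link", "set", IPOIB_IF, "mtu", "65520"])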


Re: [one-users] infiniband

2012-05-09 Thread Shankhadeep Shome
It's not clear where I would upload the drivers to; I created a wiki account
and a wiki page for the 1-to-1 NAT configuration for IPoIB. I can just send
you the tar file with the updated driver.

Shankhadeep


Re: [one-users] infiniband

2012-05-09 Thread Shankhadeep Shome
Just added the VMM driver for the IPoIB NAT setup.


Re: [one-users] infiniband

2012-05-14 Thread Jaime Melis
Hi Shankhadeep,

They look really nice. I've adapted your README to wiki style and added it
to wiki.opennebula.org:

http://wiki.opennebula.org/infiniband

Feel free to modify it.

Thanks a lot for contributing it!

Cheers,
Jaime


Re: [one-users] infiniband

2012-05-17 Thread Shankhadeep Shome
Thanks Jaime and Borja, I'll update the blog soon :)


Re: [one-users] infiniband

2012-07-13 Thread Shankhadeep Shome
Hi Chris

iSER will work seamlessly as long as you run it on the hosts; it just
requires a compatible iSER target. tgtd should work, although the iSER
target I tested against was a ZFS appliance; I've never tested a Linux iSER
target solution. One commercially supported option you could try is
Mellanox's VSA, which presents block storage over iSER. You can try that
configuration with the OpenNebula LVM drivers. I think those drivers have
gotten a lot more robust in the last few weeks as well.

http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=105&menu_section=69
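As a rough illustration of the tgtd angle, defining an iSER target for an
LVM volume would look something like the sketch below. This assumes a tgt
build with iSER support ("--lld iser"); the target name and backing device
are made up for the example, and the flags have not been verified against
any particular tgt release.

    import subprocess

    def run(*cmd):
        subprocess.check_call(cmd)

    TARGET = "iqn.2012-07.org.example:one-vol1"  # assumed IQN
    BACKING = "/dev/vg_one/lv_vm42"              # assumed LVM logical volume

    # Create an iSER target in tgtd, attach the LV as LUN 1, and accept
    # connections from any initiator.
    run("tgtadm", "--lld", "iser", "--mode", "target", "--op", "new",
        "--tid", "1", "--targetname", TARGET)
    run("tgtadm", "--lld", "iser", "--mode", "logicalunit", "--op", "new",
        "--tid", "1", "--lun", "1", "--backing-store", BACKING)
    run("tgtadm", "--lld", "iser", "--mode", "target", "--op", "bind",
        "--tid", "1", "--initiator-address", "ALL")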

Shank

On Tue, Jul 10, 2012 at 10:08 PM, Chris Barry  wrote:

> @Shankhadeep
>
> Hi. We chatted a while back about IB, and you explained your IPoIB setup,
> and indeed you released your whole solution to the community. That was a
> very cool thing to do.
>
> I'm wondering how you arrived at your solution, because I wonder if I'm
> barking up a tree you've already determined has no cats... :)
>
> Was using IPoIB the initial goal, or did you try other methods before
> settling on this one? I ask because the tack I'm taking is to try to use
> iSER. After reading a lot more about how ONE works, it *seems* like it
> should be pretty easy (knock on wood). It seems like with tgtd (which
> luckily has iSER support built-in), it should essentially be transparent,
> as long as the initiator in the host is iSER aware as well. It almost seems
> like the iscsi method out of the box should 'just work'. Or, am I dreaming?
> :)
> Did you experiment along this vein?
>
> It would also greatly simplify the network configuration if it did
> work, because there would be no need for the NAT rules anymore - the guests
> would use OVS on the host for their networking.
>
> Anyway, I'm going to try that (unless you've already tried it to no
> avail) first, and I'll let you know how it goes.
>
>
>
> Regards,
> Christopher Barry
>


Re: [one-users] infiniband

2012-07-14 Thread christopher barry
So, what about your testing made you decide to go with IPoIB rather than
iSER?

