Re: [one-users] What kinds of shared storage are you using?

2010-09-06 Thread Székelyi Szabolcs
On Monday 06 September 2010 21:01:35 Ruben S. Montero wrote:
> You may also find the drivers developed by Sander Klaus interesting.
> The idea is the opposite: they use LVM on top of iSCSI, which may have
> some benefits over the iSCSI-over-LVM approach proposed by Székelyi (if
> I understand it well). However, there are some extra dependencies and
> tricks. More info at [1].

Yeah, it's quite the opposite: our driver uses iSCSI on top of LVM. :)
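
For the curious, the gist of iSCSI on top of LVM is one logical volume
per VM, exported as its own IET target. A rough sketch of the idea only
(the volume group, IQN scheme and size below are made-up placeholders;
the actual driver automates this per VM):

  lvcreate -L 10G -n vm42 vg0       # allocate (or clone into) the VM's disk
  ietadm --op new --tid=1 --params Name=iqn.2010-09.example:vm42
  ietadm --op new --tid=1 --lun=0 --params Path=/dev/vg0/vm42,Type=blockio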

> Anyway, I think this is a very interesting storage solution that is
> worth checking out...

The release is on the way. Stay tuned.

-- 
cc


> On Mon, Sep 6, 2010 at 1:12 PM, Andreas Ntaflos wrote:
> > On Friday 03 September 2010 15:13:55 Székelyi Szabolcs wrote:
> >> On Friday 03 September 2010 14.54.06 Ignacio M. Llorente wrote:
> >> > You could consider contributing the new driver to our ecosystem
> >> > and/or writing a post on our blog describing your customization.
> >> 
> >> Our development is sponsored by the state, thus everything we develop
> >> will be open sourced, and so, it'd be an honour to contribute this
> >> to the OpenNebula ecosystem.
> >> 
> >> This is an ongoing development, and the TM driver is just a small
> >> part of it. On the other hand, in our environment it has been quite
> >> stable for a couple of weeks now, so I think it's time to do an
> >> alpha release.
> >> 
> >> I'll use the weekend to gather all its dependencies, wrap it up and
> >> write some docs about it. Expect a release in the beginning of next
> >> week.
> > 
> > This is excellent news, thank you! Looking forward to your first
> > release.
> > 
> > Andreas
> > --
> > Andreas Ntaflos
> > Vienna, Austria
> > 
> > GPG Fingerprint: 6234 2E8E 5C81 C6CB E5EC  7E65 397C E2A8 090C A9B4
> > 


Re: [one-users] What kinds of shared storage are you using?

2010-09-06 Thread Ruben S. Montero
Hi,

You may also find the drivers developed by Sander Klaus interesting.
The idea is the opposite: they use LVM on top of iSCSI, which may have
some benefits over the iSCSI-over-LVM approach proposed by Székelyi (if
I understand it well). However, there are some extra dependencies and
tricks. More info at [1].
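
Schematically, LVM on top of iSCSI means every node imports one big
shared LUN and carves per-VM disks out of it with LVM, roughly like this
(portal, IQN, names and sizes are invented here, and sharing one volume
group between hosts needs cluster-aware LVM locking):

  iscsiadm -m discovery -t sendtargets -p 192.0.2.10
  iscsiadm -m node -T iqn.2010-09.example:shared-lun -p 192.0.2.10:3260 --login
  pvcreate /dev/sdb                 # the imported LUN (device name varies)
  vgcreate cloudvg /dev/sdb
  lvcreate -L 10G -n vm42 cloudvg   # one logical volume per VM disk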

Anyway, I think this is a very interesting storage solution that is
worth checking out...

Cheers

Ruben

References
[1]
http://www.nikhef.nl/pub/projects/grid/gridwiki/index.php/Virtual_Machines_working_group

On Mon, Sep 6, 2010 at 1:12 PM, Andreas Ntaflos  wrote:
> On Friday 03 September 2010 15:13:55 Székelyi Szabolcs wrote:
>> On Friday 03 September 2010 14.54.06 Ignacio M. Llorente wrote:
>> > You could consider contributing the new driver to our ecosystem
>> > and/or writing a post on our blog describing your customization.
>>
>> Our development is sponsored by the state, thus everything we develop
>> will be open sourced, and so, it'd be an honour to contribute this
>> to the OpenNebula ecosystem.
>>
>> This is an ongoing development, and the TM driver is just a small
>> part of it. On the other hand, in our environment it has been quite
>> stable for a couple of weeks now, so I think it's time to do an
>> alpha release.
>>
>> I'll use the weekend to gather all its dependencies, wrap it up and
>> write some docs about it. Expect a release in the beginning of next
>> week.
>
> This is excellent news, thank you! Looking forward to your first
> release.
>
> Andreas
> --
> Andreas Ntaflos
> Vienna, Austria
>
> GPG Fingerprint: 6234 2E8E 5C81 C6CB E5EC  7E65 397C E2A8 090C A9B4
>



-- 
Dr. Ruben Santiago Montero
Associate Professor (Profesor Titular), Complutense University of Madrid

URL: http://dsa-research.org/doku.php?id=people:ruben
Weblog: http://blog.dsa-research.org/?author=7


Re: [one-users] What kinds of shared storage are you using?

2010-09-06 Thread Andreas Ntaflos
On Friday 03 September 2010 15:13:55 Székelyi Szabolcs wrote:
> On Friday 03 September 2010 14.54.06 Ignacio M. Llorente wrote:
> > You could consider contributing the new driver to our ecosystem
> > and/or writing a post on our blog describing your customization.
> 
> Our development is sponsored by the state, thus everything we develop
> will be open sourced, and so, it'd be an honour to contribute this
> to the OpenNebula ecosystem.
> 
> This is an ongoing development, and the TM driver is just a small
> part of it. On the other hand, in our environment it has been quite
> stable for a couple of weeks now, so I think it's time to do an
> alpha release.
> 
> I'll use the weekend to gather all its dependencies, wrap it up and
> write some docs about it. Expect a release in the beginning of next
> week.

This is excellent news, thank you! Looking forward to your first 
release.

Andreas
-- 
Andreas Ntaflos
Vienna, Austria

GPG Fingerprint: 6234 2E8E 5C81 C6CB E5EC  7E65 397C E2A8 090C A9B4




Re: [one-users] What kinds of shared storage are you using?

2010-09-03 Thread Slava Yanson
We are using GlusterFS and it works great :) With some tweaking we were able
to average around 90-120 megabits per second on reads and 25-35 megabits per
second on writes. Configuration is as follows:

2 file servers:
- Supermicro server motherboard with Intel Atom D510
- 4GB DDR2 RAM
- 6 x Western Digital RE3 500GB hard drives
- Ubuntu 10.04 x64 (on a 2GB USB stick)
- RAID 10

File servers are set up with replication, and we have a total of 1.5TB
dedicated to virtual machine storage, with the ability to grow it to
petabytes on demand - just add more nodes!
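
For reference, with the gluster CLI of the newer 3.1 releases a two-way
replicated volume like this reads roughly as follows (hostnames and
brick paths are invented):

  gluster volume create vmstore replica 2 fs1:/export/brick fs2:/export/brick
  gluster volume start vmstore
  # each node then mounts it with the native client:
  mount -t glusterfs fs1:/vmstore /srv/cloud/one/var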


Slava Yanson, CTO
Killer Beaver, LLC

w: www.killerbeaver.net
c: (323) 963-4787
aim/yahoo/skype: urbansoot

Follow us on Facebook: http://fb.killerbeaver.net/
Follow us on Twitter: http://twitter.com/thekillerbeaver


On Wed, Sep 1, 2010 at 8:48 PM, Huang Zhiteng  wrote:

> Hi all,
>
> In my OpenNebula 2.0b testing, I found NFS performance was unacceptable
> (too bad).  I haven't done any tuning or optimization of NFS yet, but I
> doubt that tuning can solve the problem.  So I'd like to know what kind of
> shared storage you are using.  I thought about Global File System v2
> (GFSv2).  GFSv2 does perform much better (near-native performance), but
> there's a limit of 32 nodes and setting up GFS is complex. So the more
> important question is: how can shared storage scale to a cloud of more than
> 100 nodes?  Or, put differently: for a cloud of more than 100 nodes, what
> kind of storage system should be used?  Please give any suggestions or
> comments.  If you have already implemented/deployed such an environment,
> it'd be great if you can share some best practices.
>
> --
> Below there's some details about my setup and issue:
>
> 1 front-end, 6 nodes.  All machines are two-socket Intel Xeon X5570 2.93GHz
> (16 threads in total), with 12GB memory.  There's one SATA RAID 0 box (630GB
> capacity) connected to the front-end.  Network is 1Gb Ethernet.
>
> OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and then
> exported via NFSv4.  The front-end also exports the RAID 0 partition to
> /srv/cloud/one/var/images.
>
> The Prolog stage of creating a VM always caused the front-end machine to
> almost freeze (slow response to input; even OpenNebula commands would time
> out) in my setup.  I highly suspect the root cause is poor NFS performance.
> --
> Regards
> Huang Zhiteng
>


Re: [one-users] What kinds of shared storage are you using?

2010-09-03 Thread Ignacio M. Llorente
Happy to read this. Please see the details at
http://www.opennebula.org/community:ecosystem. You can get an account on
our blog by sending an email to Borja.

Thanks!

2010/9/3 Székelyi Szabolcs :
> On Friday 03 September 2010 14.54.06 Ignacio M. Llorente wrote:
>> You could consider contributing the new driver to our ecosystem
>> and/or writing a post on our blog describing your customization.
>
> Our development is sponsored by the state, thus everything we develop will be
> open sourced, and so, it'd be an honour to contribute this to the OpenNebula
> ecosystem.
>
> This is an ongoing development, and the TM driver is just a small part of it.
> On the other hand, in our environment it has been quite stable for a couple of
> weeks now, so I think it's time to do an alpha release.
>
> I'll use the weekend to gather all its dependencies, wrap it up and write some
> docs about it. Expect a release in the beginning of next week.
>
> Thank you for your interest.
>
> Cheers,
> --
> cc
>
>> > On Fri, Sep 3, 2010 at 1:07 AM, Andreas Ntaflos wrote:
>> > On Friday 03 September 2010 00:28:23 Székelyi Szabolcs wrote:
>> >> We're using iSCSI targets directly (one target per VM), automatically
>> >> created and initialized (cloned) from images on VM deploy. Although
>> >> the target is based on IET behind gigabit links, it works quite
>> >> well: we haven't done performance benchmarks (yet), but the
>> >> installation time of virtual machines is roughly the same as on
>> >> real hardware.
>> >
>> > That sounds interesting and very similar to what we hope to achieve
>> > using OpenNebula and a central storage server (as I've posted a few
>> > hours ago, not realising this thread here is very similar in nature).
>> >
>> >> We developed a custom TM driver for this, because this approach makes
>> >> live migration trickier: just before live migration, the target host
>> >> needs to log in to the iSCSI target hosting the disks of the VM, and
>> >> this is something ONE can't do, so we used libvirt hooks to do it --
>> >> works like a charm. Libvirt hooks are also good for reattaching virtual
>> >> machines to their virtual networks on live migration -- again something
>> >> ONE doesn't do.
>> >
>> > Would you care to go into a little detail regarding your custom TM
>> > driver? Maybe even post the sources? I'd be very interested in learning
>> > more about your approach to this.
>> >
>> > Thanks!
>> >
>> > Andreas
>> > --
>> > Andreas Ntaflos
>> >
>> > GPG Fingerprint: 6234 2E8E 5C81 C6CB E5EC  7E65 397C E2A8 090C A9B4
>> >



-- 
Ignacio M. Llorente, Full Professor (Catedratico):
http://dsa-research.org/llorente
DSA Research Group:  web http://dsa-research.org and blog
http://blog.dsa-research.org
OpenNebula Open Source Toolkit for Cloud Computing: http://www.OpenNebula.org


Re: [one-users] What kinds of shared storage are you using?

2010-09-03 Thread Székelyi Szabolcs
On Friday 03 September 2010 14.54.06 Ignacio M. Llorente wrote:
> You could consider contributing the new driver to our ecosystem
> and/or writing a post on our blog describing your customization.

Our development is sponsored by the state, thus everything we develop will be 
open sourced, and so, it'd be an honour to contribute this to the OpenNebula 
ecosystem.

This is an ongoing development, and the TM driver is just a small part of it. 
On the other hand, in our environment it has been quite stable for a couple of 
weeks now, so I think it's time to do an alpha release.

I'll use the weekend to gather all its dependencies, wrap it up and write some 
docs about it. Expect a release in the beginning of next week.

Thank you for your interest.

Cheers,
-- 
cc

> On Fri, Sep 3, 2010 at 1:07 AM, Andreas Ntaflos wrote:
> > On Friday 03 September 2010 00:28:23 Székelyi Szabolcs wrote:
> >> We're using iSCSI targets directly (one target per VM), automatically
> >> created and initialized (cloned) from images on VM deploy. Although
> >> the target is based on IET behind gigabit links, it works quite
> >> well: we haven't done performance benchmarks (yet), but the
> >> installation time of virtual machines is roughly the same as on
> >> real hardware.
> > 
> > That sounds interesting and very similar to what we hope to achieve
> > using OpenNebula and a central storage server (as I've posted a few
> > hours ago, not realising this thread here is very similar in nature).
> > 
> >> We developed a custom TM driver for this, because this approach makes
> >> live migration trickier: just before live migration, the target host
> >> needs to log in to the iSCSI target hosting the disks of the VM, and
> >> this is something ONE can't do, so we used libvirt hooks to do it --
> >> works like a charm. Libvirt hooks are also good for reattaching virtual
> >> machines to their virtual networks on live migration -- again something
> >> ONE doesn't do.
> > 
> > Would you care to go into a little detail regarding your custom TM
> > driver? Maybe even post the sources? I'd be very interested in learning
> > more about your approach to this.
> > 
> > Thanks!
> > 
> > Andreas
> > --
> > Andreas Ntaflos
> > 
> > GPG Fingerprint: 6234 2E8E 5C81 C6CB E5EC  7E65 397C E2A8 090C A9B4
> > 


Re: [one-users] What kinds of shared storage are you using?

2010-09-03 Thread Ignacio M. Llorente
Dear Szekelyi,

You could consider contributing the new driver to our ecosystem
and/or writing a post on our blog describing your customization.

Thanks!

On Fri, Sep 3, 2010 at 1:07 AM, Andreas Ntaflos  wrote:
> On Friday 03 September 2010 00:28:23 Székelyi Szabolcs wrote:
>> We're using iSCSI targets directly (one target per VM), automatically
>> created and initialized (cloned) from images on VM deploy. Although
>> the target is based on IET behind gigabit links, it works quite
>> well: we haven't done performance benchmarks (yet), but the
>> installation time of virtual machines is roughly the same as on
>> real hardware.
>
> That sounds interesting and very similar to what we hope to achieve
> using OpenNebula and a central storage server (as I've posted a few
> hours ago, not realising this thread here is very similar in nature).
>
>> We developed a custom TM driver for this, because this approach makes
>> live migration trickier: just before live migration, the target host
>> needs to log in to the iSCSI target hosting the disks of the VM, and
>> this is something ONE can't do, so we used libvirt hooks to do it --
>> works like a charm. Libvirt hooks are also good for reattaching virtual
>> machines to their virtual networks on live migration -- again something
>> ONE doesn't do.
>
> Would you care to go into a little detail regarding your custom TM
> driver? Maybe even post the sources? I'd be very interested in learning
> more about your approach to this.
>
> Thanks!
>
> Andreas
> --
> Andreas Ntaflos
>
> GPG Fingerprint: 6234 2E8E 5C81 C6CB E5EC  7E65 397C E2A8 090C A9B4
>



-- 
Ignacio M. Llorente, Full Professor (Catedratico):
http://dsa-research.org/llorente
DSA Research Group:  web http://dsa-research.org and blog
http://blog.dsa-research.org
OpenNebula Open Source Toolkit for Cloud Computing: http://www.OpenNebula.org


Re: [one-users] What kinds of shared storage are you using?

2010-09-02 Thread Andreas Ntaflos
On Friday 03 September 2010 00:28:23 Székelyi Szabolcs wrote:
> We're using iSCSI targets directly (one target per VM), automatically
> created and initialized (cloned) from images on VM deploy. Although
> the target is based on IET behind gigabit links, it works quite
> well: we haven't done performance benchmarks (yet), but the
> installation time of virtual machines is roughly the same as on
> real hardware.

That sounds interesting and very similar to what we hope to achieve 
using OpenNebula and a central storage server (as I've posted a few 
hours ago, not realising this thread here is very similar in nature).
 
> We developed a custom TM driver for this, because this approach makes
> live migration trickier: just before live migration, the target host
> needs to log in to the iSCSI target hosting the disks of the VM, and
> this is something ONE can't do, so we used libvirt hooks to do it --
> works like a charm. Libvirt hooks are also good for reattaching virtual
> machines to their virtual networks on live migration -- again something
> ONE doesn't do.

Would you care to go into a little detail regarding your custom TM 
driver? Maybe even post the sources? I'd be very interested in learning 
more about your approach to this.

Thanks!

Andreas
-- 
Andreas Ntaflos 

GPG Fingerprint: 6234 2E8E 5C81 C6CB E5EC  7E65 397C E2A8 090C A9B4




Re: [one-users] What kinds of shared storage are you using?

2010-09-02 Thread Székelyi Szabolcs
On Thursday 02 September 2010 14:03:32 Michael Brown wrote:
> I've found that NFS is unacceptably slow too.  With both the front end and
> the nodes mounting NFS, copies have to go through the front end, then back
> out again, which is a bit wasteful.
> 
> We use a NetApp storage system, which can do flexclones.  We can't take
> advantage of that with OpenNebula because everything is exported via NFS. 
> I have a few coworkers who have experience with the NetApp API, so we plan
> on writing a TM driver for NetApp soon.
> 
> I'm interested to hear about other people's setups and solutions.

We're using iSCSI targets directly (one target per VM), automatically created
and initialized (cloned) from images on VM deploy. Although the target is based
on IET behind gigabit links, it works quite well: we haven't done performance
benchmarks (yet), but the installation time of virtual machines is roughly the
same as on real hardware.

We developed a custom TM driver for this, because this approach makes live
migration trickier: just before live migration, the target host needs to log in
to the iSCSI target hosting the disks of the VM, and this is something ONE
can't do, so we used libvirt hooks to do it -- works like a charm. Libvirt
hooks are also good for reattaching virtual machines to their virtual networks
on live migration -- again something ONE doesn't do.
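
To give a feel for the hook side, here is a minimal sketch of the idea.
It is not our driver, just an illustration: the portal address, the IQN
naming scheme and the hook operations are assumptions (which operations
/etc/libvirt/hooks/qemu actually receives depends on the libvirt
version):

  #!/bin/sh
  # /etc/libvirt/hooks/qemu -- $1 = guest name, $2 = operation
  GUEST="$1"; OP="$2"
  PORTAL="192.0.2.10:3260"                # storage server (assumed)
  IQN="iqn.2010-09.example:${GUEST}"      # one target per VM (assumed naming)
  case "$OP" in
    prepare|migrate)  # destination host logs in before the VM arrives
      iscsiadm -m node -o new -T "$IQN" -p "$PORTAL"
      iscsiadm -m node -T "$IQN" -p "$PORTAL" --login
      ;;
    release)          # drop the session once the VM has stopped or left
      iscsiadm -m node -T "$IQN" -p "$PORTAL" --logout
      ;;
  esac
  exit 0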

Cheers,
-- 
cc



Re: [one-users] What kinds of shared storage are you using?

2010-09-02 Thread Javier Fontan
Hello,

Even if NFS with its default configuration is not the most performant
shared filesystem, we thought it was the most common shared filesystem
people could use for virtualization. You may be able to make it faster
by adding some parameters when mounting the shared filesystem: "async"
will make your VMs run faster, as they won't need to write synchronously
to the server, and tuning "rsize" and "wsize" will also make a
difference in performance.
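
For example (illustrative values only; server-side "async" acknowledges
writes before they reach disk, trading crash safety for speed, so
benchmark and decide for yourself):

  # server side, /etc/exports:
  /srv/cloud/one  node*(rw,async,no_subtree_check,no_root_squash)

  # client side, /etc/fstab -- larger rsize/wsize reduce per-request overhead:
  frontend:/srv/cloud/one  /srv/cloud/one  nfs  rw,rsize=32768,wsize=32768,hard,intr  0  0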

Anyway, I think we made a mistake calling the shared filesystem drivers
"tm_nfs", as they can be used with any shared filesystem. They only
assume that files in a certain path are accessible by the frontend and
the nodes using standard fs commands. I encourage you to use other, more
performant filesystems with the "tm_nfs" drivers.
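
In other words, the clone operation of a shared-filesystem TM driver
boils down to ordinary file commands on the shared path. A heavily
simplified sketch (the real tm_clone.sh does more argument checking and
logging):

  #!/bin/sh
  # arguments arrive as host:path pairs, e.g. frontend:/srv/cloud/one/var/0/images/disk.0
  SRC_PATH=$(echo "$1" | cut -d: -f2)
  DST_PATH=$(echo "$2" | cut -d: -f2)
  mkdir -p "$(dirname "$DST_PATH")"
  cp "$SRC_PATH" "$DST_PATH"    # a plain cp: the path is shared, no scp needed
  chmod a+rw "$DST_PATH"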

On our machines we have been using a hybrid system (described in the
"Customizing and Extending" section of
http://opennebula.org/documentation:rel1.4:sm). It lets you have
non-cloned images that can live-migrate and local images that perform
better.

I also invite you to develop new drivers for other systems, or to change
these if they need it. I hope that having the transfer commands in shell
scripts outside the core makes it easier to change them or develop new
ones. If you have any doubts or need help creating those new drivers,
contact us, as we are also interested in interacting with other
technologies.

Bye

On Thu, Sep 2, 2010 at 2:03 PM, Michael Brown  wrote:
> I've found that NFS is unacceptably slow too.  With both the front end and
> the nodes mounting NFS, copies have to go through the front end, then back
> out again, which is a bit wasteful.
>
> We use a NetApp storage system, which can do flexclones.  We can't take
> advantage of that with OpenNebula because everything is exported via NFS.  I
> have a few coworkers who have experience with the NetApp API, so we plan on
> writing a TM driver for NetApp soon.
>
> I'm interested to hear about other people's setups and solutions.
>
> --Michael Brown
>
> On Wed, Sep 1, 2010 at 11:48 PM, Huang Zhiteng  wrote:
>>
>> Hi all,
>>
>> In my OpenNebula 2.0b testing, I found NFS performance was unacceptable
>> (too bad).  I haven't done any tuning or optimization of NFS yet, but I
>> doubt that tuning can solve the problem.  So I'd like to know what kind of
>> shared storage you are using.  I thought about Global File System v2
>> (GFSv2).  GFSv2 does perform much better (near-native performance), but
>> there's a limit of 32 nodes and setting up GFS is complex. So the more
>> important question is: how can shared storage scale to a cloud of more than
>> 100 nodes?  Or, put differently: for a cloud of more than 100 nodes, what
>> kind of storage system should be used?  Please give any suggestions or
>> comments.  If you have already implemented/deployed such an environment,
>> it'd be great if you can share some best practices.
>>
>> --
>> Below there's some details about my setup and issue:
>>
>> 1 front-end, 6 nodes.  All machines are two-socket Intel Xeon X5570
>> 2.93GHz (16 threads in total), with 12GB memory.  There's one SATA RAID 0
>> box (630GB capacity) connected to the front-end.  Network is 1Gb Ethernet.
>>
>> OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and then
>> exported via NFSv4.  The front-end also exports the RAID 0 partition to
>> /srv/cloud/one/var/images.
>>
>> The Prolog stage of creating a VM always caused the front-end machine to
>> almost freeze (slow response to input; even OpenNebula commands would time
>> out) in my setup.  I highly suspect the root cause is poor NFS performance.
>> --
>> Regards
>> Huang Zhiteng
>>



-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org


Re: [one-users] What kinds of shared storage are you using?

2010-09-02 Thread Michael Brown
I've found that NFS is unacceptably slow too.  With both the front end and
the nodes mounting NFS, copies have to go through the front end, then back
out again, which is a bit wasteful.

We use a NetApp storage system, which can do flexclones.  We can't take
advantage of that with OpenNebula because everything is exported via NFS.  I
have a few coworkers who have experience with the NetApp API, so we plan on
writing a TM driver for NetApp soon.

I'm interested to hear about other people's setups and solutions.

--Michael Brown

On Wed, Sep 1, 2010 at 11:48 PM, Huang Zhiteng  wrote:

> Hi all,
>
> In my OpenNebula 2.0b testing, I found NFS performance was unacceptable
> (too bad).  I haven't done any tuning or optimization of NFS yet, but I
> doubt that tuning can solve the problem.  So I'd like to know what kind of
> shared storage you are using.  I thought about Global File System v2
> (GFSv2).  GFSv2 does perform much better (near-native performance), but
> there's a limit of 32 nodes and setting up GFS is complex. So the more
> important question is: how can shared storage scale to a cloud of more than
> 100 nodes?  Or, put differently: for a cloud of more than 100 nodes, what
> kind of storage system should be used?  Please give any suggestions or
> comments.  If you have already implemented/deployed such an environment,
> it'd be great if you can share some best practices.
>
> --
> Below there's some details about my setup and issue:
>
> 1 front-end, 6 nodes.  All machines are two-socket Intel Xeon X5570 2.93GHz
> (16 threads in total), with 12GB memory.  There's one SATA RAID 0 box (630GB
> capacity) connected to the front-end.  Network is 1Gb Ethernet.
>
> OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and then
> exported via NFSv4.  The front-end also exports the RAID 0 partition to
> /srv/cloud/one/var/images.
>
> The Prolog stage of creating a VM always caused the front-end machine to
> almost freeze (slow response to input; even OpenNebula commands would time
> out) in my setup.  I highly suspect the root cause is poor NFS performance.
> --
> Regards
> Huang Zhiteng
>


[one-users] What kinds of shared storage are you using?

2010-09-01 Thread Huang Zhiteng
Hi all,

In my OpenNebula 2.0b testing, I found NFS performance was unacceptable (too
bad).  I haven't done any tuning or optimization of NFS yet, but I doubt that
tuning can solve the problem.  So I'd like to know what kind of shared storage
you are using.  I thought about Global File System v2 (GFSv2).  GFSv2 does
perform much better (near-native performance), but there's a limit of 32 nodes
and setting up GFS is complex. So the more important question is: how can
shared storage scale to a cloud of more than 100 nodes?  Or, put differently:
for a cloud of more than 100 nodes, what kind of storage system should be
used?  Please give any suggestions or comments.  If you have already
implemented/deployed such an environment, it'd be great if you can share some
best practices.

--
Below there's some details about my setup and issue:

1 front-end, 6 nodes.  All machines are two-socket Intel Xeon X5570 2.93GHz
(16 threads in total), with 12GB memory.  There's one SATA RAID 0 box (630GB
capacity) connected to the front-end.  Network is 1Gb Ethernet.

OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and then
exported via NFSv4.  The front-end also exports the RAID 0 partition to
/srv/cloud/one/var/images.
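
(For illustration, an /etc/exports along these lines would match that
layout -- hostnames are placeholders, fsid=0 marks the NFSv4 pseudo-root,
and the separately mounted images partition needs crossmnt or its own
entry:)

  /srv/cloud/one  node*(rw,fsid=0,crossmnt,no_subtree_check,no_root_squash)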

The Prolog stage of creating a VM always caused the front-end machine to
almost freeze (slow response to input; even OpenNebula commands would time
out) in my setup.  I highly suspect the root cause is poor NFS performance.
-- 
Regards
Huang Zhiteng