Re: [one-users] ttylinux via ssh not accessible any more after migrate / suspend - resume

2010-09-03 Thread Jaime Melis
Hi Viktor,

that's a very interesting problem. Let's check whether it's related to the
bridge. Start the VM and execute 'brctl show' on one host. Migrate it
to another host and execute 'brctl show' on that second host. Compare
those outputs and check whether libvirt/KVM is correctly attaching the
network interface to the bridge.
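
A quick way to script that check (bridge and interface names such as br0/vnet0 are examples; use whatever 'brctl show' actually reports on your hosts):

```shell
#!/bin/sh
# Report whether a given tap interface appears in 'brctl show' output for a
# bridge. Run on the source host before migration and on the destination
# host afterwards; the interface should be listed in both cases.
iface_on_bridge() {
    bridge="$1"; iface="$2"
    brctl show "$bridge" | grep -qw "$iface"
}

# Example (placeholder names):
#   iface_on_bridge br0 vnet0 && echo attached || echo MISSING
# If the interface is missing on the destination, re-attaching it by hand
# is a useful test:
#   brctl addif br0 vnet0
```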

cheers,
Jaime

On Fri, Sep 3, 2010 at 12:54 AM, Viktor Mauch ma...@kit.edu wrote:
 Hello Javier,

 one more time,

 I'm now working with OpenNebula 2.0beta and the KVM hypervisor (head node
 and cluster nodes run Ubuntu 10.04). I tried to play with the supported
 ttylinux image from the ONE website. Starting the machine and logging in
 via ssh is no problem. After Stop - Resume the machine is still reachable
 over the network.

 But if I migrate the machine, or suspend and resume it, the network
 connection is gone (neither ping nor ssh can reach it). I looked into the
 VM via VNC and everything looks fine: the eth0 device is still configured
 correctly, but the VM cannot ping anything outside itself either.
 Restarting the network did not solve the problem, and the log files show
 nothing unusual. All physical hosts are on the same switch.

 Does anyone have an idea what is going wrong?

 Greets

 Viktor


 Am 30.08.2010 11:32, schrieb Javier Fontan:

 Hello,

 Can you try to access it by other means to check whether something is
 broken in the VM? If you are using Xen you should be able to access it
 with 'xm console' from the physical node. With KVM you can add VNC
 access to it.
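
In OpenNebula this is typically done with a GRAPHICS section in the VM template; a minimal sketch (the listen address is an assumption, adjust to your network):

```
GRAPHICS = [
  TYPE   = "vnc",
  LISTEN = "0.0.0.0" ]
```

With that in place, a VNC client pointed at the port OpenNebula assigns on the physical host reaches the guest console.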

 If the machine seems to be in good shape, there may also be a problem
 with the network relearning where the machine is. Are both
 physical hosts in the same switch?
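
If the switch has simply not relearned which port the VM's MAC address lives behind, a common workaround is to send a gratuitous ARP from inside the guest after it resumes on the new host. A small sketch (interface name and IP are placeholders; arping here is the iputils variant):

```shell
#!/bin/sh
# Send a few gratuitous ARPs so switches relearn the VM's MAC location
# after a migration. Run inside the guest once it is resumed.
send_gratuitous_arp() {
    iface="$1"; ip="$2"
    # -U = unsolicited (gratuitous) ARP, -c 3 = send three packets
    arping -U -c 3 -I "$iface" "$ip"
}

# Example (placeholder values):
#   send_gratuitous_arp eth0 192.168.0.42
```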

 Bye


 On Mon, Aug 30, 2010 at 12:56 AM, Viktor Mauch ma...@kit.edu  wrote:

 Hello,

 I use ONE 1.4 with shared NFS and am playing a little with the ttylinux
 image, which automatically configures the eth0 device during boot.

 The VM is accessible via SSH and everything looks fine. After live
 migration, normal migration, or suspend-resume, the ssh connection is no
 longer available:

 $ ssh r...@vm_ip
 ssh: connect to host VM_IP port 22: No route to host


 Greets

 Viktor



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org







Re: [one-users] What kinds of shared storage are you using?

2010-09-03 Thread Ignacio M. Llorente
Dear Szekelyi,

You could consider contributing the new driver to our ecosystem
and/or writing a post on our blog describing your customization.

Thanks!

On Fri, Sep 3, 2010 at 1:07 AM, Andreas Ntaflos d...@pseudoterminal.org wrote:
 On Friday 03 September 2010 00:28:23 Székelyi Szabolcs wrote:
 We're using iSCSI targets directly (one target per VM), automatically
 created and initialized (cloned) from images on VM deploy. Although
 the target is based on IET behind gigabit links, it works quite
 well: we haven't done performance benchmarks (yet), but the
 installation time of virtual machines is about the same as on real
 hardware.

 That sounds interesting and very similar to what we hope to achieve
 using OpenNebula and a central storage server (as I posted a few
 hours ago, not realising this thread is very similar in nature).

 We developed a custom TM driver for this. The approach makes live
 migration trickier, because just before live migration the target
 host needs to log in to the iSCSI target hosting the VM's disks,
 and this is something ONE can't do, so we used libvirt hooks for
 it -- works like a charm. Libvirt hooks are also good for
 reattaching virtual machines to their virtual networks on live
 migration -- again something ONE doesn't do.
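
For readers unfamiliar with libvirt hooks: the idea is a script such as /etc/libvirt/hooks/qemu that libvirtd invokes with the guest name and an operation. The sketch below is hypothetical -- the portal address, the one-target-per-VM IQN naming, and the operations handled are all assumptions, since the actual driver discussed here was not posted:

```shell
#!/bin/sh
# Hypothetical /etc/libvirt/hooks/qemu: log in to the iSCSI target backing
# a VM's disk before the guest starts on this host, log out after it stops.
handle() {
    guest="$1"; op="$2"
    portal="192.168.1.10:3260"                # assumed storage server
    iqn="iqn.2010-09.org.example:${guest}"    # assumed target naming scheme
    case "$op" in
        start)   iscsiadm -m node -T "$iqn" -p "$portal" --login  ;;
        stopped) iscsiadm -m node -T "$iqn" -p "$portal" --logout ;;
    esac
}

# libvirtd invokes the hook as: qemu <guest_name> <operation> ...
handle "$1" "$2"
```

The set of operations a hook receives depends on the libvirt version, so check the hooks documentation for your release.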

 Would you care to go into a little detail regarding your custom TM
 driver? Maybe even post the sources? I'd be very interested in learning
 more about your approach to this.

 Thanks!

 Andreas
 --
 Andreas Ntaflos

 GPG Fingerprint: 6234 2E8E 5C81 C6CB E5EC  7E65 397C E2A8 090C A9B4






-- 
Ignacio M. Llorente, Full Professor (Catedratico):
http://dsa-research.org/llorente
DSA Research Group:  web http://dsa-research.org and blog
http://blog.dsa-research.org
OpenNebula Open Source Toolkit for Cloud Computing: http://www.OpenNebula.org


Re: [one-users] What kinds of shared storage are you using?

2010-09-03 Thread Székelyi Szabolcs
On Friday 03 September 2010 14.54.06 Ignacio M. Llorente wrote:
 You could consider contributing the new driver to our ecosystem
 and/or writing a post on our blog describing your customization.

Our development is sponsored by the state, so everything we develop will be 
open sourced; it would be an honour to contribute this to the OpenNebula 
ecosystem.

This is an ongoing development, and the TM driver is just a small part of it. 
On the other hand, it has been quite stable in our environment for a couple of 
weeks now, so I think it's time for an alpha release.

I'll use the weekend to gather all its dependencies, wrap it up, and write 
some docs about it. Expect a release at the beginning of next week.

Thank you for your interest.

Cheers,
-- 
cc



Re: [one-users] What kinds of shared storage are you using?

2010-09-03 Thread Slava Yanson
We are using GlusterFS and it works great :) With some tweaking we were able
to average around 90-120 megabits per second on reads and 25-35 megabits per
second on writes. The configuration is as follows:

2 file servers:
- Supermicro server motherboard with Intel Atom D510
- 4GB DDR2 RAM
- 6 x Western Digital RE3 500GB hard drives
- Ubuntu 10.04 x64 (on a 2GB USB stick)
- RAID 10

The file servers are set up with replication, and we have a total of 1.5TB
dedicated to virtual machine storage, with the ability to grow it to
petabytes on demand - just add more nodes!
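
For reference, a two-server replicated volume like the one described can be sketched roughly as follows (hostnames fs01/fs02, brick paths, and the volume name are assumptions, not details from this post):

```shell
#!/bin/sh
# Sketch: create and start a 2-way replicated GlusterFS volume.
# All names are placeholders; run on one of the file servers.
setup_gluster() {
    gluster volume create vmstore replica 2 \
        fs01:/export/vmstore fs02:/export/vmstore
    gluster volume start vmstore
}

# On each OpenNebula node (as root), mount it where the images live:
#   mount -t glusterfs fs01:/vmstore /srv/cloud/one/var/images
```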


Slava Yanson, CTO
Killer Beaver, LLC

w: www.killerbeaver.net
c: (323) 963-4787
aim/yahoo/skype: urbansoot

Follow us on Facebook: http://fb.killerbeaver.net/
Follow us on Twitter: http://twitter.com/thekillerbeaver


On Wed, Sep 1, 2010 at 8:48 PM, Huang Zhiteng winsto...@gmail.com wrote:

 Hi all,

 In my OpenNebula 2.0b testing, I found NFS performance unacceptable.
 I haven't done any tuning or optimization of NFS yet, but I doubt
 tuning alone can solve the problem. So I'd like to know what kind of shared
 storage you are using. I thought about Global File System v2 (GFSv2):
 it does perform much better (near-native performance), but there is a limit
 of 32 nodes and setting up GFS is complex. So the more important question
 is: how can shared storage scale to a 100-node cloud? Or, put differently:
 for a 100-node cloud, what kind of storage system should be used? Please
 share any suggestions or comments. If you have already implemented or
 deployed such an environment, it would be great if you could share some
 best practices.

 --
 Below there's some details about my setup and issue:

 1 front-end, 6 nodes. All machines are two-socket Intel Xeon X5570 2.93GHz
 (16 threads in total) with 12GB of memory. A SATA RAID 0 box (630GB
 capacity) is connected to the front-end. The network is 1Gb Ethernet.

 OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and then
 exported via NFSv4. The front-end also exports the RAID 0 partition as
 /srv/cloud/one/var/images.

 The prolog stage of creating a VM always brings the front-end machine to a
 near freeze in my setup (slow response to input; even OpenNebula commands
 time out). I strongly suspect the root cause is poor NFS performance.
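
Before switching storage, some commonly suggested NFS tweaks are worth measuring: 'async' on the export (faster, but risks data loss on a server crash) and large transfer sizes plus 'noatime' on the client mount. A hedged sketch; the hostname 'frontend' is an assumption, the paths follow the setup above:

```shell
#!/bin/sh
# Server side, /etc/exports on the front-end (config, shown as a comment):
#   /srv/cloud/one  *(rw,async,no_subtree_check,no_root_squash)

# Client side: build a mount command with large transfer sizes and noatime.
nfs_mount_cmd() {
    server="$1"; mountpoint="$2"
    echo "mount -t nfs4 -o rsize=65536,wsize=65536,noatime ${server}:/ ${mountpoint}"
}

# On each node (as root):
#   $(nfs_mount_cmd frontend /srv/cloud/one)
```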
 --
 Regards
 Huang Zhiteng





Re: [one-users] Hook for VM becoming UNKNOWN state

2010-09-03 Thread Ruben S. Montero
Hi,

The other thing we may want to look at is detecting that the VM has been
stopped in libvirt and moving it through shutdown-epilog-done. There is
a ticket for this: http://dev.opennebula.org/issues/335
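
Until something like that exists, a hedged workaround is to poll the VM list and fire a custom script for VMs in the unknown state. The column position and the "unkn" status string below are assumptions about the 'onevm list' output format, and on_vm_unknown is a hypothetical user-supplied script:

```shell
#!/bin/sh
# Filter: reads "ID STAT" pairs on stdin, prints IDs whose status is unkn.
unknown_ids() {
    awk '$2 == "unkn" { print $1 }'
}

# Example loop on the front-end (adjust the column numbers to your output):
#   onevm list | awk 'NR>1 {print $1, $4}' | unknown_ids | \
#       while read id; do /usr/local/bin/on_vm_unknown "$id"; done
```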

Cheers

Ruben

On Fri, Sep 3, 2010 at 8:13 PM, Shi Jin jinzish...@yahoo.com wrote:
 Hi, there,

 I would like a hook that runs a script when the user shuts down the VM 
 from inside the OS. When that happens, the VM state is ACTIVE and 
 LCM_STATE is UNKNOWN. Is it possible to add this to the hook system of 
 OpenNebula? Otherwise, which code should I look at to add some custom code 
 after the VM enters UNKNOWN?

 Thanks a lot.

 --
 Shi Jin, PhD







-- 
Dr. Ruben Santiago Montero
Associate Professor (Profesor Titular), Complutense University of Madrid

URL: http://dsa-research.org/doku.php?id=people:ruben
Weblog: http://blog.dsa-research.org/?author=7