[one-users] Public Cloud Guest Isolation

2014-05-19 Thread Samuel Winchenbach
Hi All,

What options do I have for guest network isolation from one group to
another?  How about guest networks crossing L3 boundaries?  I am mostly
interested in Open vSwitch implementations.  The documentation seems to
cover the APIs for Public cloud, but there isn't much on the networking
side.


Thanks,
Sam
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] problems with openvswitch after upgrading to 4.4

2013-12-19 Thread samuel
It turned out that a MAC prefix of 09:00 was causing the problem.

I'd like to thank everyone who provided information, both on the mailing
list and in the IRC channel.

Apologies for the noise,
Samuel.
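For readers hitting the same symptom: the likely reason a 09:00 MAC prefix breaks traffic is that the least-significant bit of a MAC address's first octet is the IEEE 802 group (multicast) flag, and switches, Open vSwitch included, will not accept a multicast address as a unicast source. A quick sketch of the check (the 02:00 comparison assumes OpenNebula's default MAC_PREFIX of 02:00):

```shell
# Bit 0 of the first octet is the multicast flag.
# 0x09 = 0000 1001 -> bit set: frames sourced from 09:00:... look like
# multicast-sourced frames and get rejected by learning switches.
[ $(( 0x09 & 1 )) -eq 1 ] && echo "09:00 prefix: multicast bit set (invalid as a source MAC)"

# 0x02 = 0000 0010 -> unicast, locally administered: safe to use.
[ $(( 0x02 & 1 )) -eq 0 ] && echo "02:00 prefix: unicast (OK)"
```

So any locally administered prefix with an even first octet (02, 06, 0a, 0e, ...) should be safe, while odd first octets are not.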


On 19 December 2013 11:42, samuel  wrote:

> Still same issue.
>
> OVS rules affecting ARP are not loaded, but there's no connectivity from
> inside the VM. How can we proceed to debug it?
>
> Thanks a lot.
>
>
> On 19 December 2013 11:13, Jaime Melis  wrote:
>
>> Hi guys,
>>
>> I just spoke with Samuel on the IRC and he's going to try and manually
>> revert this patch:
>>
>> http://dev.opennebula.org/issues/2318
>> (
>> https://github.com/OpenNebula/one/commit/a775bb295802bccfd53b44d1e874b9a135efc130
>> )
>>
>> He'll reply with the results. Let's see if we can narrow it down.
>>
>> cheers,
>> Jaime
>>
>>
>> On Thu, Dec 19, 2013 at 10:14 AM, samuel  wrote:
>>
>>> Hi folks,
>>>
>>> We're evaluating the new version 4.4 and everything seems to work,
>>> except for a network issue with Open vSwitch. From inside the VM the NIC
>>> is attached, but no traffic is flowing.
>>>
>>> Debugging the process, everything seems OK.
>>>
>>> Sunstone log:
>>> Thu Dec 19 09:36:58 2013 [VMM][I]: Successfully execute virtualization
>>> driver operation: attach_nic.
>>> Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-vsctl set
>>> Port vnet4 tag=1030".
>>> Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl
>>> add-flow bridge
>>> in_port=112,arp,dl_src=09:00:57:ec:d9:96,priority=45000,actions=drop".
>>> Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl
>>> add-flow bridge
>>> in_port=112,arp,dl_src=09:00:57:ec:d9:96,nw_src=A.B.C.D,priority=46000,actions=normal".
>>> Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl
>>> add-flow bridge
>>> in_port=112,dl_src=09:00:57:ec:d9:96,priority=4,actions=normal".
>>> Thu Dec 19 09:36:59 2013 [VMM][I]: post: Executed "sudo ovs-ofctl
>>> add-flow bridge in_port=112,priority=39000,actions=drop".
>>> Thu Dec 19 09:36:59 2013 [VMM][I]: ExitCode: 0
>>> Thu Dec 19 09:36:59 2013 [VMM][I]: Successfully execute network driver
>>> operation: post.
>>> Thu Dec 19 09:36:59 2013 [VMM][I]: VM NIC Successfully attached.
>>>
>>> Going to the host, the port is effectively created:
>>> ovs-vsctl show
>>> Port "vnet4"
>>> tag: 1030
>>> Interface "vnet4"
>>>
>>> ovs-ofctl dump-flows bridge | grep 112
>>>  cookie=0x0, duration=1714.529s, table=0, n_packets=8, n_bytes=336,
>>> idle_age=1484, priority=45000,arp,in_port=112,dl_src=09:00:57:ec:d9:96
>>> actions=drop
>>>  cookie=0x0, duration=1714.515s, table=0, n_packets=0, n_bytes=0,
>>> idle_age=1714,
>>> priority=46000,arp,in_port=112,dl_src=09:00:57:ec:d9:96,nw_src=A.B.C.D
>>> actions=NORMAL
>>>  cookie=0x0, duration=1714.501s, table=0, n_packets=264, n_bytes=35449,
>>> idle_age=56, priority=4,in_port=112,dl_src=09:00:57:ec:d9:96
>>> actions=NORMAL
>>>  cookie=0x0, duration=1714.485s, table=0, n_packets=0, n_bytes=0,
>>> idle_age=1714, priority=39000,in_port=112 actions=drop
>>>
>>>  ovs-dpctl show bridge
>>> port 112: vnet4
>>>
>>> Can anyone provide any information about how to debug further why
>>> there's no traffic?
>>>
>>> Thanks in advance,
>>> Samuel.
>>>
>>>
>>
>>
>> --
>> Jaime Melis
>> C12G Labs - Flexible Enterprise Cloud Made Simple
>> http://www.c12g.com | jme...@c12g.com
>>
>> --
>>
>> Confidentiality Warning: The information contained in this e-mail and
>> any accompanying documents, unless otherwise expressly indicated, is
>> confidential and privileged, and is intended solely for the person
>> and/or entity to whom it is addressed (i.e. those identified in the
>> "To" and "cc" box). They are the property of C12G Labs S.L..
>> Unauthorized distribution, review, use, disclosure, or copying of this
>> communication, or any part thereof, is strictly prohibited and may be
>> unlawful. If you have received this e-mail in error, please notify us
>> immediately by e-mail at ab...@c12g.com and delete the e-mail and
>> attachments and any copy from your system. C12G's thanks you for your
>> cooperation.
>>
>
>


Re: [one-users] problems with openvswitch after upgrading to 4.4

2013-12-19 Thread samuel
Still same issue.

OVS rules affecting ARP are not loaded, but there's no connectivity from
inside the VM. How can we proceed to debug it?

Thanks a lot.


On 19 December 2013 11:13, Jaime Melis  wrote:

> Hi guys,
>
> I just spoke with Samuel on the IRC and he's going to try and manually
> revert this patch:
>
> http://dev.opennebula.org/issues/2318
> (
> https://github.com/OpenNebula/one/commit/a775bb295802bccfd53b44d1e874b9a135efc130
> )
>
> He'll reply with the results. Let's see if we can narrow it down.
>
> cheers,
> Jaime
>
>
> On Thu, Dec 19, 2013 at 10:14 AM, samuel  wrote:
>
>> Hi folks,
>>
>> We're evaluating the new version 4.4 and everything seems to work,
>> except for a network issue with Open vSwitch. From inside the VM the NIC
>> is attached, but no traffic is flowing.
>>
>> Debugging the process, everything seems OK.
>>
>> Sunstone log:
>> Thu Dec 19 09:36:58 2013 [VMM][I]: Successfully execute virtualization
>> driver operation: attach_nic.
>> Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-vsctl set
>> Port vnet4 tag=1030".
>> Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl
>> add-flow bridge
>> in_port=112,arp,dl_src=09:00:57:ec:d9:96,priority=45000,actions=drop".
>> Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl
>> add-flow bridge
>> in_port=112,arp,dl_src=09:00:57:ec:d9:96,nw_src=A.B.C.D,priority=46000,actions=normal".
>> Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl
>> add-flow bridge
>> in_port=112,dl_src=09:00:57:ec:d9:96,priority=4,actions=normal".
>> Thu Dec 19 09:36:59 2013 [VMM][I]: post: Executed "sudo ovs-ofctl
>> add-flow bridge in_port=112,priority=39000,actions=drop".
>> Thu Dec 19 09:36:59 2013 [VMM][I]: ExitCode: 0
>> Thu Dec 19 09:36:59 2013 [VMM][I]: Successfully execute network driver
>> operation: post.
>> Thu Dec 19 09:36:59 2013 [VMM][I]: VM NIC Successfully attached.
>>
>> Going to the host, the port is effectively created:
>> ovs-vsctl show
>> Port "vnet4"
>> tag: 1030
>> Interface "vnet4"
>>
>> ovs-ofctl dump-flows bridge | grep 112
>>  cookie=0x0, duration=1714.529s, table=0, n_packets=8, n_bytes=336,
>> idle_age=1484, priority=45000,arp,in_port=112,dl_src=09:00:57:ec:d9:96
>> actions=drop
>>  cookie=0x0, duration=1714.515s, table=0, n_packets=0, n_bytes=0,
>> idle_age=1714,
>> priority=46000,arp,in_port=112,dl_src=09:00:57:ec:d9:96,nw_src=A.B.C.D
>> actions=NORMAL
>>  cookie=0x0, duration=1714.501s, table=0, n_packets=264, n_bytes=35449,
>> idle_age=56, priority=4,in_port=112,dl_src=09:00:57:ec:d9:96
>> actions=NORMAL
>>  cookie=0x0, duration=1714.485s, table=0, n_packets=0, n_bytes=0,
>> idle_age=1714, priority=39000,in_port=112 actions=drop
>>
>>  ovs-dpctl show bridge
>> port 112: vnet4
>>
>> Can anyone provide any information about how to debug further why there's
>> no traffic?
>>
>> Thanks in advance,
>> Samuel.
>>
>>
>>
>
>
> --
> Jaime Melis
> C12G Labs - Flexible Enterprise Cloud Made Simple
> http://www.c12g.com | jme...@c12g.com
>
>


[one-users] problems with openvswitch after upgrading to 4.4

2013-12-19 Thread samuel
Hi folks,

We're evaluating the new version 4.4 and everything seems to work, except
for a network issue with Open vSwitch. From inside the VM the NIC is
attached, but no traffic is flowing.

Debugging the process, everything seems OK.

Sunstone log:
Thu Dec 19 09:36:58 2013 [VMM][I]: Successfully execute virtualization
driver operation: attach_nic.
Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-vsctl set Port
vnet4 tag=1030".
Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow
bridge
in_port=112,arp,dl_src=09:00:57:ec:d9:96,priority=45000,actions=drop".
Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow
bridge
in_port=112,arp,dl_src=09:00:57:ec:d9:96,nw_src=A.B.C.D,priority=46000,actions=normal".
Thu Dec 19 09:36:58 2013 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow
bridge in_port=112,dl_src=09:00:57:ec:d9:96,priority=4,actions=normal".
Thu Dec 19 09:36:59 2013 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow
bridge in_port=112,priority=39000,actions=drop".
Thu Dec 19 09:36:59 2013 [VMM][I]: ExitCode: 0
Thu Dec 19 09:36:59 2013 [VMM][I]: Successfully execute network driver
operation: post.
Thu Dec 19 09:36:59 2013 [VMM][I]: VM NIC Successfully attached.

Going to the host, the port is effectively created:
ovs-vsctl show
Port "vnet4"
tag: 1030
Interface "vnet4"

ovs-ofctl dump-flows bridge | grep 112
 cookie=0x0, duration=1714.529s, table=0, n_packets=8, n_bytes=336,
idle_age=1484, priority=45000,arp,in_port=112,dl_src=09:00:57:ec:d9:96
actions=drop
 cookie=0x0, duration=1714.515s, table=0, n_packets=0, n_bytes=0,
idle_age=1714,
priority=46000,arp,in_port=112,dl_src=09:00:57:ec:d9:96,nw_src=A.B.C.D
actions=NORMAL
 cookie=0x0, duration=1714.501s, table=0, n_packets=264, n_bytes=35449,
idle_age=56, priority=4,in_port=112,dl_src=09:00:57:ec:d9:96
actions=NORMAL
 cookie=0x0, duration=1714.485s, table=0, n_packets=0, n_bytes=0,
idle_age=1714, priority=39000,in_port=112 actions=drop

 ovs-dpctl show bridge
port 112: vnet4

Can anyone provide any information about how to debug further why there's
no traffic?

Thanks in advance,
Samuel.
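An editor's note on narrowing this down: the flow dump above already contains a hint, since the priority 45000 ARP drop rule shows n_packets=8 while the priority 46000 allow rule shows n_packets=0, i.e. the guest's ARP requests are hitting the drop flow (the thread later traced this to the 09:00 multicast MAC prefix). A hedged sketch of the usual next steps; the bridge, port, and MAC values mirror the log above, and the commented commands are illustrative and require the OVS tools on the host:

```shell
# OpenFlow port number taken from the thread's flow dump.
PORT=112

# 1. Watch which rule's packet counters grow while pinging from the guest:
#    sudo ovs-ofctl dump-flows bridge | grep "in_port=$PORT"
#
# 2. Ask OVS which flow a synthetic ARP frame from the guest would match
#    (dl_type=0x0806 is ARP):
#    sudo ovs-appctl ofproto/trace bridge \
#        "in_port=$PORT,dl_src=09:00:57:ec:d9:96,dl_type=0x0806"
#
# 3. Capture on the tap device to confirm frames actually leave the guest:
#    sudo tcpdump -ni vnet4 arp

echo "tracing OpenFlow port $PORT"
```

Step 2 prints the exact rule a packet would hit, which removes the guesswork of correlating counters by hand.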


Re: [one-users] unable to kill VNC processes

2013-04-10 Thread samuel
There was some problem with the shared storage system that might have
affected how OpenNebula handled it. We stopped everything, remounted the
shared storage, and were then able to kill the VNC sessions.

As soon as we have time, we'll definitely try the new 4.0 version.

Thanks a lot,
Samuel.


On 10 April 2013 11:27, Daniel Molina  wrote:

> Hi
>
>
> On 9 April 2013 13:04, samuel  wrote:
>
>> Hi folks,
>>
>> In a 3.8 ONE installation we faced some issues and, when executing ps aux,
>> we found lots of VNC connections that even made it impossible to
>> su - oneadmin because of the large number of processes.
>>
>> When we try to kill (or -9) them, they become Zombies and it's not
>> possible to stop them.
>>
>> Is there any option to kill them? Do they fork from a parent process so
>> we can stop the parent process (we already stopped sunstone and occiserver)?
>>
>>
>> Thanks a lot in advance,
>> Samuel.
>>

[one-users] unable to kill VNC processes

2013-04-09 Thread samuel
Hi folks,

In a 3.8 ONE installation we faced some issues and, when executing ps aux,
we found lots of VNC connections that even made it impossible to
su - oneadmin because of the large number of processes.

When we try to kill (or -9) them, they become Zombies and it's not possible
to stop them.

Is there any option to kill them? Do they fork from a parent process so we
can stop the parent process (we already stopped sunstone and occiserver)?


Thanks a lot in advance,
Samuel.

oneadmin 31218  0.0  0.0 199144  9812 ?SMar21   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31247  0.0  0.0 197544  8248 ?SMar04   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31296  0.0  0.0 198936  9888 ?SMar18   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31299  0.0  0.0 198876  9776 ?SMar18   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31307  0.0  0.0 197516  8284 ?SFeb28   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31309  0.0  0.0 197624  8356 ?SMar04   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31333  0.0  0.0 198416  9092 ?SMar13   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31351  0.0  0.0 199548 10404 ?SApr04   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31355  0.0  0.0 198420  9208 ?SMar13   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31430  0.0  0.0 198032  8832 ?SMar06   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31474  0.0  0.0 197516  8312 ?SMar04   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31486  0.0  0.0 197868  8584 ?SMar04   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31507  0.0  0.0 199408 10264 ?SApr04   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31583  0.0  0.0 198420  9208 ?SMar13   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31617  0.0  0.0 197828  8608 ?SMar05   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31643  0.0  0.0 197816  8692 ?SMar05   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31712  0.0  0.0 198588  9416 ?SMar13   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31724  0.0  0.1 217716 28672 ?SMar19   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31805  0.0  0.0 197808  8572 ?SMar05   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31854  0.0  0.0 198332  9060 ?SMar12   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31862  0.0  0.0 198160  8836 ?SMar07   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31878  0.0  0.0 198764  9716 ?SMar19   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31961  0.0  0.0 198564  9392 ?SMar18   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 31980  0.0  0.0 199472 10268 ?SApr01   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 32018  0.0  0.0 197524  8308 ?SFeb28   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 32111  0.0  0.0 199392 10312 ?SApr03   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 32118  0.0  0.0 199696 10572 ?SApr03   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 32137  0.0  0.0 199016  9688 ?SMar20   0:00 python
/srv/cloud/one/share/websockify/websocketproxy.py
--target-config=/srv/cloud/one/var/s
oneadmin 32147  0.0  0.0 198688  9528 ?SMar01  
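A note on the zombie question above: a zombie is already dead and is only waiting for its parent to reap it, so kill -9 has nothing left to kill; the fix is to signal or restart the parent process instead. (A process stuck in uninterruptible "D" state on hung shared storage also ignores signals until the I/O returns, which matches the remount resolving this in the follow-up.) A sketch for locating the parents:

```shell
# List zombies together with their parent PID, so the parent (not the
# zombie) can be restarted or signalled.
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/ {print "zombie " $1 " parent " $2}'

# Once the parent reaps the child (or dies, letting init adopt and reap it),
# the zombie entries disappear from the process table.
```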

Re: [one-users] Mouse pointer access problem in Windows XP VM

2013-01-02 Thread samuel
For more information:
https://github.com/kanaka/noVNC/issues/131

Summarising: if you create the VMs with the following parameter set:

INPUT = [TYPE=tablet, BUS=usb]

the mouse will be much easier to use, since a USB tablet reports absolute
pointer coordinates that noVNC can track accurately.

Best regards,

Samuel
On 1 January 2013 04:45, SRINIVASAN-ACCEL wrote:

> Hi
>
> Happy New Year to all.
>
> Successfully uploaded the VMDK images (Windows XP) to the nebula server
> and started the VM.
>
> I am able to see the Windows console, but I can't really use the mouse
> (the pointer is visible but very hard to control).
>
> VMware Tools was NOT installed in the uploaded VMDK file.
>
> Through the VMware vSphere client, the newly provisioned VM's (one-32)
> OS type is "Other" instead of Windows XP.
>
> Regards
>
> Srinivasan T
>
>
>


[one-users] ganglia integration and base64 errors

2012-12-28 Thread samuel
Hi folks,

We've started playing with the integration of OpenNebula (3.8) and Ganglia,
and we've hit an issue in the decoding step.

when we launch the poll command, we got the following output:
-bash-4.1$ $HOME/var/remotes/vmm/kvm/poll --kvm
eJx1jUsKwzAMBfc+hfDewZKMJfs2pvXSSUkcaG/fJG2hH7p8b2DGOQdmGqvDkMEA5HWp51bbNN8ykE8pIe33WPt8zcDeK6vfn6WXXjOUJ+0bJRbVY5e2oYf2JT1d1gwWwxDsEVT+DWIUEcL4VsRNyiiCf5ohKQX8qip/VkmHaM0do/I9ug==

and when we try to decode it, there's no human-readable information:
-bash-4.1$ $HOME/var/remotes/vmm/kvm/poll --kvm | base64 -d
x�u�
�0��}�b�>%��e&o4ˤҦ�oo[,���p>�5��m]�֥\k���H�F>��1�Gޗ��^�w�[E�|^mM0X7���~Ab�h×hŋ#�c:
�Dg���"PO��=�-bash-4.1$

Can anyone point out where the error might be?

Thanks in advance,
Samuel.
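An editor's note on the likely cause: the output isn't broken base64 at all. Its first characters, eJx, decode to the bytes 0x78 0x9c, which is a zlib stream header, so the poll data appears to be zlib-compressed before being base64-encoded, and `base64 -d` alone will only ever yield binary noise. A round-trip sketch of the needed decoding (assumes python3 is available for the zlib step; the sample string is illustrative):

```shell
# Encode the way the poll script appears to: zlib-compress, then base64.
msg='USEDMEMORY=128 NETRX=0 NETTX=0'
enc=$(printf '%s' "$msg" | python3 -c 'import sys,zlib,base64; sys.stdout.write(base64.b64encode(zlib.compress(sys.stdin.buffer.read())).decode())')

# Decoding therefore needs both steps: base64-decode AND zlib-decompress.
printf '%s' "$enc" | base64 -d | python3 -c 'import sys,zlib; sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))'
```

Piping the original poll output through the same `base64 -d | zlib.decompress` pair should recover the human-readable monitoring string.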


Re: [one-users] The Marketplace and Demo Cloud will go down for maintenance

2012-10-23 Thread samuel
Hi again,

Just reporting another minor issue with the new interface (apologies if this
is not the right place to post it).

I had some resources (images, templates, VMs, ...) created in the previous
version (3.6) of the demo interface, and I've lost some privileges and can
not delete them...

For example:
[ImageDelete] User [1296] : Not authorized to perform MANAGE IMAGE [6].

It's not a big issue, and in case it's hard to recreate the IDs in the
permissions table, it can just be left as it is now.

Thanks again!

Samuel.

On 23 October 2012 08:56, samuel  wrote:

> Hi all,
>
> First of all, congratulations on the new release. I was trying the web
> interfaces (sunstone + selfservice), but upon creating a VM there's a minor
> error:
>
> clone: Command "cd /var/lib/one/datastores/0/4149
> cp -r /var/lib/one/datastoresdummy_path
> /var/lib/one/datastores/0/4149/disk.0" failed: ssh:
>  ssh: Could not resolve hostname cloud.server221: Name or service not known
>
> So it seems to be a DNS problem, but it's hard to guess from a distance.
>
> Thanks for your effort,
> Samuel.
>
> On 22 October 2012 20:02, Carlos Martín Sánchez wrote:
>
>> Hi,
>>
>> It is up and running again. If you don't have an account [1], get one now!
>>
>> Cheers
>>
>> [1] http://opennebula.org/cloud:cloud#getting_an_account
>>
>> --
>> Carlos Martín, MSc
>> Project Engineer
>> OpenNebula - The Open-source Solution for Data Center Virtualization
>> www.OpenNebula.org | cmar...@opennebula.org | 
>> @OpenNebula<http://twitter.com/opennebula>
>>
>>
>>
>> On Mon, Oct 22, 2012 at 6:51 PM, Carlos Martín Sánchez <
>> cmar...@opennebula.org> wrote:
>>
>>> Dear users,
>>>
>>> Our Demo Cloud [1] and Marketplace [2] services will be down for a while,
>>> we are upgrading them to the final 3.8 version.
>>>
>>> We apologize for any inconvenience,
>>> Carlos
>>>
>>> [1] http://opennebula.org/cloud:tryout
>>> [2] http://marketplace.c12g.com
>>> --
>>> Carlos Martín, MSc
>>> Project Engineer
>>> OpenNebula - The Open-source Solution for Data Center Virtualization
>>> www.OpenNebula.org | cmar...@opennebula.org | 
>>> @OpenNebula<http://twitter.com/opennebula>
>>>
>>>
>>
>>
>>
>


Re: [one-users] The Marketplace and Demo Cloud will go down for maintenance

2012-10-22 Thread samuel
Hi all,

First of all, congratulations on the new release. I was trying the web
interfaces (sunstone + selfservice), but upon creating a VM there's a minor
error:

clone: Command "cd /var/lib/one/datastores/0/4149
cp -r /var/lib/one/datastoresdummy_path
/var/lib/one/datastores/0/4149/disk.0" failed: ssh:
 ssh: Could not resolve hostname cloud.server221: Name or service not known

So it seems to be a DNS problem, but it's hard to guess from a distance.

Thanks for your effort,
Samuel.

On 22 October 2012 20:02, Carlos Martín Sánchez wrote:

> Hi,
>
> It is up and running again. If you don't have an account [1], get one now!
>
> Cheers
>
> [1] http://opennebula.org/cloud:cloud#getting_an_account
>
> --
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - The Open-source Solution for Data Center Virtualization
> www.OpenNebula.org | cmar...@opennebula.org | 
> @OpenNebula<http://twitter.com/opennebula>
>
>
>
> On Mon, Oct 22, 2012 at 6:51 PM, Carlos Martín Sánchez <
> cmar...@opennebula.org> wrote:
>
>> Dear users,
>>
>> Our Demo Cloud [1] and Marketplace [2] services will be down for a while,
>> we are upgrading them to the final 3.8 version.
>>
>> We apologize for any inconvenience,
>> Carlos
>>
>> [1] http://opennebula.org/cloud:tryout
>> [2] http://marketplace.c12g.com
>> --
>> Carlos Martín, MSc
>> Project Engineer
>> OpenNebula - The Open-source Solution for Data Center Virtualization
>> www.OpenNebula.org | cmar...@opennebula.org | 
>> @OpenNebula<http://twitter.com/opennebula>
>>
>>
>
>
>


Re: [one-users] webinars

2012-10-19 Thread samuel
+1

On 19 October 2012 14:43, christopher barry  wrote:

> One Folks,
>
> I had actually signed up for the latest webinar, but then realized I
> could not participate and had to cancel because I use Linux. Seems a bit
> ironic when you think about it.
>
> It would be great if you could use a webinar methodology that did not
> discriminate against Linux users in the future.
>
> Regards,
> -C
>
>


Re: [one-users] cancel live migration in progress

2011-10-20 Thread samuel
Just to conclude the issue:

After almost 2 days, the migration has succeeded:

Tue Oct 18 11:00:31 2011 [LCM][I]: New VM state is MIGRATE
Thu Oct 20 06:06:57 2011 [LCM][I]: New VM state is RUNNING

And the machine seems to be working perfectly, so I'll just say:
stay calm and be patient.

Thanks a lot for the support and for the great product,
Samuel.

On 19 October 2011 11:56, samuel  wrote:

> So the only option is to somehow back up the migrating instance and
> manually recover OpenNebula? The main concern is what the status of the
> currently migrating VM will be... probably the best is to recreate it?
>
> Would it be good to add a timeout in the migrate script so it raises an
> error to OpenNebula? I'm sorry, but I'm not familiar with ONE internals,
> and I'm not sure what would be affected if we added a timeout to the live
> migration virsh command...
>
> Thanks for your support,
> Samuel.
>
>
> On 18 October 2011 22:53, Javier Fontan  wrote:
>
>> Unfortunately I don't know of any way to stop or recover the failed
>> migration with OpenNebula or manually.
>>
>> Rebooting a physical host will basically destroy the running VMs and
>> most probably the disks will be corrupted.
>>
>> On Tue, Oct 18, 2011 at 2:34 PM, samuel  wrote:
>> > I add more information so you can follow the steps taken and the final
>> > issue.
>> >
>> > 1)segfault on node 2:
>> > [620617.517308] kvm[28860]: segfault at 420 ip 00413714 sp
>> > 7fff9136ea70 error 4 in qemu-system-x86_64[40+335000]
>> >
>> > VMs work OK
>> >
>> > 2)restart libvirt on node 2
>> > /init.d/libvirt-bin restart
>> >
>> > libvirt is able to check local VM:
>> > # virsh list
>> >  Id Nombre   Estado
>> > --
>> >   2 one-47   ejecutando
>> >   3 one-44   ejecutando
>> >
>> > 3)tried to live-migrate one-44 from node2 to node3 using the sunstone
>> web
>> > interface
>> >
>> > vm.log:
>> > Tue Oct 18 11:00:31 2011 [LCM][I]: New VM state is MIGRATE
>> >
>> > oned.log:
>> > Tue Oct 18 11:00:31 2011 [DiM][D]: Live-migrating VM 44
>> > Tue Oct 18 11:00:31 2011 [ReM][D]: VirtualMachineInfo method invoked
>> >
>> > 4)the end situation is:
>> > one-44 is in MIGRATE state for OpenNebula (there's no timeout parameter
>> > set for the virsh live-migrate, so it will stay there forever (?))
>> >
>> > root@node3:# virsh list
>> >  Id Nombre   Estado
>> > --
>> >
>> > root@node2:# virsh list
>> >  Id Nombre   Estado
>> > ------
>> >   2 one-47   ejecutando
>> >   3 one-44   ejecutando
>> >
>> >
>> > /var/log/libvirt/qemu/one-44.log is empty in both nodes (node2 and
>> node3).
>> >
>> > My question is:
>> >
>> > i) How can I stop the live migration from the OpenNebula side so it does
>> > not lose the whole picture of the cloud and keeps consistency?
>> > ii) Is it safe to restart node2 or node3?
>> >
>> > Thank you in advance for any hint on this issue.
>> >
>> > Samuel.
>> > On 18 October 2011 11:58, samuel  wrote:
>> >>
>> >> hi all,
>> >>
>> >> I'm having an issue with live migration. There was a running instance
>> >> on a node that had a qemu segfault (I noticed it afterwards because the
>> >> instances were still working). I've tried to live migrate the instance
>> >> to another node without problems, but the instance remains in MIGRATE
>> >> state "forever".
>> >> * Is there any method to stop the live migration?
>> >> * If I restart the node with the qemu segfault, will the instances run
>> >> OK again? They have been running, but the communication between
>> >> OpenNebula and KVM is broken, so I'm not sure whether the cloud will
>> >> keep consistency. I think I read that if the name of the instance is
>> >> the same and the node is the same, OpenNebula will keep consistency.
>> >>
>> >> Can anyone help me, please?
>> >>
>> >> Thanks in advance,.
>> >> Samuel.
>> >>
>> >
>> >
>> >
>> >
>>
>>
>>
>> --
>> Javier Fontán Muiños
>> Project Engineer
>> OpenNebula Toolkit | opennebula.org
>>
>
>


Re: [one-users] cancel live migration in progress

2011-10-19 Thread samuel
So the only option is to somehow back up the migrating instance and manually
recover OpenNebula? The main concern is what the status of the currently
migrating VM will be... probably the best is to recreate it?

Would it be good to add a timeout in the migrate script so that it raises an
error to OpenNebula? I'm sorry, but I'm not familiar with ONE internals and
I'm not sure what would be affected if we added a timeout to the live
migration virsh command...
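For what it's worth, a timeout can be bolted on outside of OpenNebula with
coreutils `timeout`. A minimal sketch — the wrapper name, the 300 s deadline,
and the migration command line are illustrative, not part of ONE's scripts:

```shell
# Sketch: give a migration command a hard deadline so a stuck
# "virsh migrate" returns an error instead of hanging forever.
run_with_deadline() {
    deadline="$1"; shift
    timeout "$deadline" "$@"
    rc=$?
    # coreutils timeout exits with status 124 when the deadline expires
    if [ "$rc" -eq 124 ]; then
        echo "command timed out after ${deadline}s" >&2
    fi
    return "$rc"
}

# Illustrative usage, with the VM name and target node from this thread:
# run_with_deadline 300 virsh migrate --live one-44 qemu+ssh://node3/system
```

OpenNebula would then see a non-zero exit code from the migrate driver
instead of waiting on the MIGRATE state indefinitely.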

Thanks for your support,
Samuel.

On 18 October 2011 22:53, Javier Fontan  wrote:

> Unfortunately I don't know of any way to stop or recover the failed
> migration with OpenNebula or manually.
>
> Rebooting a physical host will basically destroy the running VMs and
> most probably the disks will be corrupted.
>
> On Tue, Oct 18, 2011 at 2:34 PM, samuel  wrote:
> > I add more information so you can follow the steps taken and the final
> > issue.
> >
> > 1)segfault on node 2:
> > [620617.517308] kvm[28860]: segfault at 420 ip 00413714 sp
> > 7fff9136ea70 error 4 in qemu-system-x86_64[40+335000]
> >
> > VMs work OK
> >
> > 2)restart libvirt on node 2
> > /init.d/libvirt-bin restart
> >
> > libvirt is able to check local VM:
> > # virsh list
> >  Id Nombre   Estado
> > --
> >   2 one-47   ejecutando
> >   3 one-44   ejecutando
> >
> > 3)tried to live-migrate one-44 from node2 to node3 using the sunstone web
> > interface
> >
> > vm.log:
> > Tue Oct 18 11:00:31 2011 [LCM][I]: New VM state is MIGRATE
> >
> > oned.log:
> > Tue Oct 18 11:00:31 2011 [DiM][D]: Live-migrating VM 44
> > Tue Oct 18 11:00:31 2011 [ReM][D]: VirtualMachineInfo method invoked
> >
> > 4)the end situation is:
> > one-44 is in MIGRATE state for opennebula (there's no timeout parameter
> set
> > for the virsh live-migrate so it will be there forever (?))
> >
> > root@node3:# virsh list
> >  Id Nombre   Estado
> > --
> >
> > root@node2:# virsh list
> >  Id Nombre   Estado
> > --
> >   2 one-47   ejecutando
> >   3 one-44   ejecutando
> >
> >
> > /var/log/libvirt/qemu/one-44.log is empty in both nodes (node2 and
> node3).
> >
> > My question is:
> >
> > i)How can I stop the live migration from the open nebula view so it does
> not
> > lose the whole picture of the cloud and it keeps consistency?
> > ii)is it safe to restart node2 or node3?
> >
> > Thank you in advance for any hint on this issue.
> >
> > Samuel.
> > On 18 October 2011 11:58, samuel  wrote:
> >>
> >> hi all,
> >>
> >> I'm having an issue with live migration. There was a running instance on
> a
> >> node that had a qemu segfault (i've noticed afterwards because the
> instances
> >> were working).I've tried to live migrate the instance to another node
> >> without problems but the instance remains in MIGRATE state "forever".
> >> *is there any method to stop the live migration?
> >> *if I restart the node with a qemu segfault, will the instances run ok
> >> again? They have been running but the communication between opennebula
> and
> >> KVM is broken so I'm not sure whether the cloud will keep consistency. I
> >> think I read that if the name of the instance is the same and the node
> is
> >> the same, opennebula will keep consistency.
> >>
> >> Can anyone help me, please?
> >>
> >> Thanks in advance,
> >> Samuel.
> >>
> >
> >
> > ___
> > Users mailing list
> > Users@lists.opennebula.org
> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
> >
> >
>
>
>
> --
> Javier Fontán Muiños
> Project Engineer
> OpenNebula Toolkit | opennebula.org
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] cancel live migration in progress

2011-10-18 Thread samuel
I add more information so you can follow the steps taken and the final
issue.

1)segfault on node 2:
[620617.517308] kvm[28860]: segfault at 420 ip 00413714 sp
7fff9136ea70 error 4 in qemu-system-x86_64[40+335000]

VMs work OK

2)restart libvirt on node 2
/init.d/libvirt-bin restart

libvirt is able to check local VM:
# virsh list
 Id Nombre   Estado
--
  2 one-47   ejecutando
  3 one-44   ejecutando

3)tried to live-migrate one-44 from node2 to node3 using the sunstone web
interface

vm.log:
Tue Oct 18 11:00:31 2011 [LCM][I]: New VM state is MIGRATE

oned.log:
Tue Oct 18 11:00:31 2011 [DiM][D]: Live-migrating VM 44
Tue Oct 18 11:00:31 2011 [ReM][D]: VirtualMachineInfo method invoked

4)the end situation is:
one-44 is in MIGRATE state for OpenNebula (there's no timeout parameter set
for the virsh live-migrate, so it will be there forever (?))

root@node3:# virsh list
 Id Nombre   Estado
--

root@node2:# virsh list
 Id Nombre   Estado
--
  2 one-47   ejecutando
  3 one-44   ejecutando


/var/log/libvirt/qemu/one-44.log is empty in both nodes (node2 and node3).

My question is:

i) How can I stop the live migration from the OpenNebula view so it does not
lose the whole picture of the cloud and keeps consistency?
ii) Is it safe to restart node2 or node3?

Thank you in advance for any hint on this issue.

Samuel.
On 18 October 2011 11:58, samuel  wrote:

> hi all,
>
> I'm having an issue with live migration. There was a running instance on a
> node that had a qemu segfault (i've noticed afterwards because the instances
> were working).I've tried to live migrate the instance to another node
> without problems but the instance remains in MIGRATE state "forever".
> *is there any method to stop the live migration?
> *if I restart the node with a qemu segfault, will the instances run ok
> again? They have been running but the communication between opennebula and
> KVM is broken so I'm not sure whether the cloud will keep consistency. I
> think I read that if the name of the instance is the same and the node is
> the same, opennebula will keep consistency.
>
> Can anyone help me, please?
>
> Thanks in advance,
> Samuel.
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] cancel live migration in progress

2011-10-18 Thread samuel
hi all,

I'm having an issue with live migration. There was a running instance on a
node that had a qemu segfault (I only noticed it afterwards because the
instances kept working). I've tried to live migrate the instance to another
node without problems, but the instance remains in MIGRATE state "forever".
*is there any method to stop the live migration?
*if I restart the node with a qemu segfault, will the instances run OK
again? They have been running, but the communication between OpenNebula and
KVM is broken, so I'm not sure whether the cloud will keep consistency. I
think I read that if the name of the instance is the same and the node is
the same, OpenNebula will keep consistency.

Can anyone help me, please?

Thanks in advance,
Samuel.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] onevm STOP error

2011-09-22 Thread samuel
Which NFS version are you using?
I had trouble with some distributions using NFSv4, which had some permission
issues. I read that, performance-wise, versions 3 and 4 are alike, so I
recommend adding the following:

nfsvers=3

to your exports file:

/srv/cloud
192.168.1.6(rw,no_root_squash,anonuid=0,anongid=0,sync,nfsvers=3)

Hope it helps,
Samuel.
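One caveat worth hedging: `nfsvers` is normally a client-side mount option
rather than an /etc/exports option, so the pin to version 3 usually goes on
the node mounting the share. A sketch reusing the paths and client IP from
this thread (the `frontend` hostname is illustrative):

```
# /etc/fstab on the cluster node -- pin NFSv3 on the client side:
frontend:/srv/cloud  /srv/cloud  nfs  rw,nfsvers=3,hard,intr  0 0

# /etc/exports on the NFS server -- no_root_squash matters here because
# libvirt may chown checkpoint files as root, which is exactly what the
# "unable to set ownership ... to user 0:0" error complains about:
/srv/cloud  192.168.1.6(rw,no_root_squash,anonuid=0,anongid=0,sync)
```

Remember to re-export (`exportfs -ra`) and remount after changing either file.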


On 19 September 2011 12:30, bala suru  wrote:

>
> Hi,
> I'm getting the same error .
> here is the export file of nfs server my /etc/export file
>
> /srv/cloud
> 192.168.1.6(rw,no_root_squash,anonuid=0,anongid=0,sync)
>
> Still same error will be logged in oned.log file .
>
> That is ->
>
>
> Tue Sep 13 15:53:46 2011 [VMM][D]: Message received: SAVE FAILURE 57 error:
> Failed to save domain one-57 to /srv/cloud/one/var/57/images/checkpoint
>
> Tue Sep 13 15:53:46 2011 [VMM][D]: Message received: error: unable to set
> ownership of '/srv/cloud/one/var/57/images/checkpoint' to user 0:0:
> Operation not permitted
>
>
> pls help .
> regards
> Bala
>
> On Sat, Sep 17, 2011 at 12:37 AM, wrote:
>
>> Send Users mailing list submissions to
>>users@lists.opennebula.org
>>
>> To subscribe or unsubscribe via the World Wide Web, visit
>>
>>http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>> or, via email, send a message with subject or body 'help' to
>>users-requ...@lists.opennebula.org
>>
>> You can reach the person managing the list at
>>users-ow...@lists.opennebula.org
>>
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of Users digest..."
>>
>>
>> Today's Topics:
>>
>>   1. Re: onevm saveas error (Fabian Wenk)
>>   2. Re: onevm STOP error (Fabian Wenk)
>>   3. Re: Workflow Management in Open Nebula (Carlos Mart?n S?nchez)
>>
>>
>> --
>>
>> Message: 1
>> Date: Fri, 16 Sep 2011 13:20:19 +0200
>> From: Fabian Wenk 
>> To: users@lists.opennebula.org
>> Subject: Re: [one-users] onevm saveas error
>> Message-ID: <4e7330f3.30...@wenks.ch>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>> Hello
>>
>> On 16.09.2011 12:57, bharath pb wrote:
>> > The image path location  has set to default location only . i.e
>> > /srv/cloud/one/var/images  , but I could not see any image after
>> executing
>>
>> Ok, I think this should also be fine.
>>
>> >   the saveas and shutdown commands ,.
>> > And also there are no errors in the log file .
>> >
>> > why this cant save the VM ..?
>>
>> Because it could not write the file
>> /srv/cloud/one/var//images/e0561a492c9aaac280479f2f0d85dcced9156fbf as
>> the log file you have posted shows. Is the oneadmin user able to
>> write into this folder?
>>
>> Also check your configuration: this // is not normal (but usually
>> should not be a problem). Possibly you have set / at the end of
>> the paths in your oned.conf; remove it.
>>
>> > Note: I use nfs for transferring the image .
>>
>> You need to have the folder /srv/cloud/one/var/images/ also
>> available on the cluster node, and there the user oneadmin (or
>> possibly libvirtd) needs to be able to write into it.
>>
>>
>> bye
>> Fabian
>>
>>
>> --
>>
>> Message: 2
>> Date: Fri, 16 Sep 2011 13:25:40 +0200
>> From: Fabian Wenk 
>> To: users@lists.opennebula.org
>> Subject: Re: [one-users] onevm STOP error
>> Message-ID: <4e733234.1050...@wenks.ch>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>>
>> Hello Bala
>>
>> On 16.09.2011 13:14, bala suru wrote:
>> > Sat Sep 10 16:41:05 2011 [VMM][D]: Message received: SAVE FAILURE 49
>> error:
>> > Failed to save domain one-49 to /srv/cloud/one/var//49/images/checkpoint
>> >
>> > Sat Sep 10 16:41:05 2011 [VMM][D]: Message received: error: unable to
>> set
>> > ownership of '/srv/cloud/one/var//49/images/checkpoint' to user 0:0:
>> > Operation not permitted .
>> >
>> > I run oneadmin as sudo user , and i'm using nfs for transferring the
>> images
>> > (tm_nfs) .
>>
>> The cluster node (or libvirtd) needs to be able to write into this
>> folder. It looks like it is running as root, so you need to allow
>> root to write to the NFS file system; this needs to be adjusted on
>

[one-users] changing virtual network "online"

2011-09-21 Thread samuel
Hi folks,

I've just been wondering whether it is possible to change the virtual network
that a virtual machine is attached to once it has been running (deploy->run).
I've tried to modify the deployment.0 file, but it did not affect the newly
restarted machine. Might the problem be that the underlying MySQL database
also has to be changed?

Use Case:
*create a new virtual machine and just forgot to attach a virtual network
*modify a virtual network (in case of VLAN change)
*attach a new interface to a running machine.

Thank you very much in advance,
Samuel.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] wrong restart -> delete disk image!

2011-09-08 Thread samuel
Thank you very much!

Is it safe to manually change the code and just perform a ./install.sh from
the sources on a running installation? I'm using the MySQL backend, so I
expect that the modification of the sources will only affect the compilation
of the modified library, and the rest will continue working OK.

Am I right?

I really appreciate the fast response.

Samuel Osorio.

On 8 September 2011 12:20, Ruben S. Montero  wrote:

> Hi,
>
> Yes you are right. There is an issue open [1]. We are planning to
> apply the proposed solution in that issue for 3.0 (i.e. clean-up will
> happen only when you issue a delete operation). I think this will
> address your use-case.
>
> [1] http://dev.opennebula.org/issues/265
>
> Thanks
>
> Ruben
> On Tue, Sep 6, 2011 at 5:28 PM, samuel  wrote:
> > Hi folks,
> >
> > Recently there was a network problem and one instance became unreachable.
> We
> > tried to restart it with stop and resume actions but there's been a
> problem
> > and the disk has been deleted. The main concern is why, after trying to
> > restart and an error happened, the directory where the disk image resides
> > has been deleted? There was no sensible data on it but I just don't get
> why
> > there has been a rm -rf of the directory.
> >
> > Details:
> >
> > The configuration is KVM with shared storage using open nebula 2.2.
> >
> > output of virsh version
> > Compilado contra la biblioteca: libvir 0.8.8
> > Utilizando la biblioteca: libvir 0.8.8
> > Utilizando API: QEMU 0.8.8
> > Ejecutando hypervisor: QEMU 0.14.0
> >
> > related logs:
> >
> > Tue Sep  6 12:37:49 2011 [VMM][D]: Message received: SAVE SUCCESS 22
> Domain
> > one-22 saved to /srv/cloud/one/var//22/images/checkpoint
> > Tue Sep  6 12:37:49 2011 [VMM][D]: Message received:
> > Tue Sep  6 12:37:49 2011 [TM][D]: Message received: LOG - 22 tm_mv.sh:
> Will
> > not move, is not saving image
> > Tue Sep  6 12:37:49 2011 [TM][D]: Message received: TRANSFER SUCCESS 22 -
> >
> > Tue Sep  6 12:38:12 2011 [DiM][D]: Restarting VM 22
> > Tue Sep  6 12:38:12 2011 [DiM][E]: Could not restart VM 22, wrong state.
> > Tue Sep  6 12:38:12 2011 [ReM][E]: Wrong state to perform action
> >
> > Tue Sep  6 12:38:18 2011 [ReM][D]: VirtualMachineAction invoked
> > Tue Sep  6 12:38:18 2011 [DiM][D]: Resuming VM 22
> > Tue Sep  6 12:38:47 2011 [DiM][D]: Deploying VM 22
> >
> > Tue Sep  6 12:38:47 2011 [ReM][D]: VirtualMachineInfo method invoked
> > Tue Sep  6 12:38:47 2011 [TM][D]: Message received: LOG - 22 tm_mv.sh:
> Will
> > not move, is not saving image
> >
> > Tue Sep  6 12:38:47 2011 [TM][D]: Message received: TRANSFER SUCCESS 22 -
> >
> > Tue Sep  6 12:38:48 2011 [ReM][D]: VirtualMachineInfo method invoked
> > Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 Command
> > execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/restore" ]; then
> > /var/tmp/one/vmm/kvm/restore /srv/cloud/one/var//22/images/checkpoint;
> > else  exit 42; fi'
> > Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 STDERR
> > follows.
> > Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 error:
> Failed
> > to restore domain from /srv/cloud/one/var//22/images/checkpoint
> > Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 error:
> cannot
> > close file: Bad file descriptor
> > Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 ExitCode: 1
> > Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: RESTORE FAILURE 22
> > error: Failed to restore domain from
> > /srv/cloud/one/var//22/images/checkpoint
> > Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: error: cannot close
> > file: Bad file descriptor
> > Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: ExitCode: 1
> >
> > Tue Sep  6 12:38:50 2011 [TM][D]: Message received: LOG - 22
> tm_delete.sh:
> > Deleting /srv/cloud/one/var//22/images
> > Tue Sep  6 12:38:50 2011 [TM][D]: Message received: LOG - 22
> tm_delete.sh:
> > Executed "rm -rf /srv/cloud/one/var//22/images".
> > Tue Sep  6 12:38:50 2011 [TM][D]: Message received: TRANSFER SUCCESS 22 -
> >
> >
> > Thank you in advance for any hint!
> > Samuel.
> >
> > ___
> > Users mailing list
> > Users@lists.opennebula.org
> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
> >
> >
>
>
>
> --
> Dr. Ruben Santiago Montero
> Associate Professor (Profesor Titular), Complutense University of Madrid
>
> URL: http://dsa-research.org/doku.php?id=people:ruben
> Weblog: http://blog.dsa-research.org/?author=7
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] wrong restart -> delete disk image!

2011-09-06 Thread samuel
Hi folks,

Recently there was a network problem and one instance became unreachable. We
tried to restart it with stop and resume actions, but there was a problem
and the disk was deleted. The main concern is: why, after a restart was tried
and an error happened, was the directory where the disk image resides
deleted? There was no sensitive data on it, but I just don't get why
there was an rm -rf of the directory.

Details:

The configuration is KVM with shared storage using open nebula 2.2.

output of virsh version
Compiled against library: libvir 0.8.8
Using library: libvir 0.8.8
Using API: QEMU 0.8.8
Running hypervisor: QEMU 0.14.0

related logs:

Tue Sep  6 12:37:49 2011 [VMM][D]: Message received: SAVE SUCCESS 22 Domain
one-22 saved to /srv/cloud/one/var//22/images/checkpoint
Tue Sep  6 12:37:49 2011 [VMM][D]: Message received:
Tue Sep  6 12:37:49 2011 [TM][D]: Message received: LOG - 22 tm_mv.sh: Will
not move, is not saving image
Tue Sep  6 12:37:49 2011 [TM][D]: Message received: TRANSFER SUCCESS 22 -

Tue Sep  6 12:38:12 2011 [DiM][D]: Restarting VM 22
Tue Sep  6 12:38:12 2011 [DiM][E]: Could not restart VM 22, wrong state.
Tue Sep  6 12:38:12 2011 [ReM][E]: Wrong state to perform action

Tue Sep  6 12:38:18 2011 [ReM][D]: VirtualMachineAction invoked
Tue Sep  6 12:38:18 2011 [DiM][D]: Resuming VM 22
Tue Sep  6 12:38:47 2011 [DiM][D]: Deploying VM 22

Tue Sep  6 12:38:47 2011 [ReM][D]: VirtualMachineInfo method invoked
Tue Sep  6 12:38:47 2011 [TM][D]: Message received: LOG - 22 tm_mv.sh: Will
not move, is not saving image

Tue Sep  6 12:38:47 2011 [TM][D]: Message received: TRANSFER SUCCESS 22 -

Tue Sep  6 12:38:48 2011 [ReM][D]: VirtualMachineInfo method invoked
Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 Command
execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/restore" ]; then
/var/tmp/one/vmm/kvm/restore /srv/cloud/one/var//22/images/checkpoint;
else  exit 42; fi'
Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 STDERR
follows.
Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 error: Failed
to restore domain from /srv/cloud/one/var//22/images/checkpoint
Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 error: cannot
close file: Bad file descriptor
Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: LOG - 22 ExitCode: 1
Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: RESTORE FAILURE 22
error: Failed to restore domain from
/srv/cloud/one/var//22/images/checkpoint
Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: error: cannot close
file: Bad file descriptor
Tue Sep  6 12:38:49 2011 [VMM][D]: Message received: ExitCode: 1

Tue Sep  6 12:38:50 2011 [TM][D]: Message received: LOG - 22 tm_delete.sh:
Deleting /srv/cloud/one/var//22/images
Tue Sep  6 12:38:50 2011 [TM][D]: Message received: LOG - 22 tm_delete.sh:
Executed "rm -rf /srv/cloud/one/var//22/images".
Tue Sep  6 12:38:50 2011 [TM][D]: Message received: TRANSFER SUCCESS 22 -


Thank you in advance for any hint!
Samuel.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] ping stops working after a normal migration

2011-09-06 Thread samuel
Hi,

You should increase the hypervisor log level to see what the problem might
be. I experienced this in several tests; one of them happened to be a problem
with an old kernel, and another a wrong network configuration (a wrong
entry in the hosts file).

Hope it helps,
Samuel.

On 18 August 2011 16:43, Adnan Pasic  wrote:

> Hi,
> strangely, the VM stops reacting completely, which means the VM seems
> frozen!
> I tried it already with the ACPI approach, but this has been set up
> properly. Any next steps???
>
> Regards
>
> --
> *Von:* Tino Vazquez 
> *An:* Adnan Pasic 
> *Cc:* "users@lists.opennebula.org" 
> *Gesendet:* 16:37 Donnerstag, 18.August 2011
> *Betreff:* Re: [one-users] ping stops working after a normal migration
>
> Hi,
>
> That warning is ok in shared filesystem configurations, it just means
> that since the file is already present as well in the destination,
> nothing is required.
>
> The lost of connectivity must be for other reasons. Can you check
> using VNC what is going on inside the VM?
>
> Regards,
>
> -Tino
>
> --
> Constantino Vázquez Blanco, MSc
> OpenNebula Major Contributor
> www.OpenNebula.org | @tinova79
>
>
>
> On Wed, Aug 17, 2011 at 6:00 PM, Adnan Pasic  wrote:
> > I'll freshen up this topic, because I still experience the same problem.
> > I came up with another possible reason, why this cold migration still
> won't
> > work. Every time I perform a cold migration, the logs say also this:
> >
> >
> > tm_mv.sh: Will not move, source and destination are equal
> >
> > Could it be that this might be the problem why the saving behaviour
> doesn't
> > really work and the VM stops reacting?
> >
> > Thanks for any help!
> >
> > ___
> > Users mailing list
> > Users@lists.opennebula.org
> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
> >
> >
>
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula 3.0 Beta 1 out for testing!

2011-07-20 Thread samuel
Hi all,

First of all, congratulations on such a piece of code and for releasing the
beta on the announced day without delay!
I haven't yet had time to put my hands on the new version but, from what I
could read, there have been no changes regarding taking backups/snapshots of
running VMs. Is it just my impression, or has there been no time to work on
this area?

The "problem" with the current version is that onevm saveas forces you to
shut down the VM, and it does not work for all OSes. Is there any other
approach to this issue?

Thank you very much!
Samuel.

On 20 July 2011 13:06, Ruben S. Montero  wrote:

> Hi,
>
> The OpenNebula team is happy to announce the availability of the first beta
> release for OpenNebula 3.0.  This new release includes exciting new features
> in multiple areas like user & group management, networking, multi-tier
> deployments or ACLs... just to mention a few. You can find further details
> in the official announcement [1] and the release notes [2].
>
> [1] http://blog.opennebula.org/?p=1769
> [2] http://www.opennebula.org/software:rnotes:rn-rel3.0b1
>
> Thank you for your continued support!
>
> OpenNebula Team
>
> --
> Ruben S. Montero, PhD
> Project co-Lead and Chief Architect
> OpenNebula - The Open Source Toolkit for Cloud Computing
> www.OpenNebula.org | rsmont...@opennebula.org 
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] [OT] xen to kvm images

2011-07-06 Thread samuel
Hi folks,

I'm struggling with a few issues converting Xen to KVM images. I know this
might not be the right mailing list to ask, but I'm sure there's someone who
can point me to the right mailing list or documentation. So, first of all,
apologies for the off-topic post.

In my lab environment I have KVM-only nodes and I'm interested in porting
Xen images. I've created Xen images either with dd inside the virtual Xen
machine or by just taking the Xen disk file. In both cases, doing an fdisk of
the resulting image I got:
*
Warning: invalid flag 0x of partition table 4 will be corrected by
w(rite)
You must set cylinders.
You can do this from the extra functions menu.*

I've tried 2 methods:
1) just pressing w(rite) in fdisk, which would insert a valid flag number
2) using sfdisk to add a flag number manually (echo '1' | sfdisk image.img)

No matter what I use, the resulting disk file does not seem to work and the
partition table seems broken. I've tried to continue by installing grub and a
working kernel in the resulting image.img so it can be booted from KVM, but I
always got "No Boot Device available".

Can anyone tell me what has to be done to port a Xen disk image to a
KVM environment?

Thank you very much in advance,
Samuel Osorio.
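One quick, non-destructive check worth running first (a sketch; the filename
is illustrative): a domU disk taken with dd inside the guest is often a bare
filesystem image with no partition table or bootloader at all, which would
explain both the fdisk warnings and the "No Boot Device available" message.
The first sector of a real, bootable disk image ends with the 0x55 0xAA
signature:

```shell
img=disk.img   # illustrative name for the image under test
# Read bytes 510-511: a partitioned, bootable disk image ends its MBR with 55 aa.
sig=$(dd if="$img" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
if [ "$sig" = "55aa" ]; then
    echo "MBR boot signature present -- a partition table at least exists"
else
    echo "no MBR signature -- likely a bare filesystem image; it would need"
    echo "to be wrapped in a partitioned disk and have a bootloader installed"
fi
```

If it is a bare filesystem, rewriting the flag with fdisk or sfdisk cannot
help, because there is no partition table to repair in the first place.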
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] live migration fails on ubuntu 11.04

2011-06-30 Thread samuel
I did a full verification and it turned out to be the same problem: a wrong
entry in /etc/hosts. One of the nodes' entries was not properly set
(misspelled domain), and it made it impossible for one node's KVM to connect
to the other one.

In order to find the problem I increased libvirt's debug level to
maximum, and I saw the wrong remote host.domain error.

Thank you very much for the support and apologies for the noise,

Samuel.
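For reference, the debug level can be raised in /etc/libvirt/libvirtd.conf
(a sketch of a maximal-verbosity setup; restart libvirtd afterwards and the
log path is the conventional one, adjust to taste):

```
# /etc/libvirt/libvirtd.conf -- log_level 1 = DEBUG, the lowest threshold
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
```

With this in place, failed migrations log the remote URI they actually tried
to reach, which is how a misspelled /etc/hosts entry shows up.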

On 30 June 2011 19:15, Javier Fontan  wrote:

> I cannot see any info that leads me to find the problem. Have you
> tried migrating VM's manually, that is, using libvirt/kvm manually,
> not OpenNebula. Also check that both machines have the same processor
> and libvirt/kvm versions.
>
> On Fri, Jun 17, 2011 at 5:58 PM, samuel  wrote:
> >
> > The error happened to be a wrong entry in the file /etc/hosts, where the
> > remote node's IP was set to the local one and there were several errors.
> >
> > However, it is not yet possible to perform live migration in the same
> > scenario (normal migration works perfectly); I always end up with the
> > following error:
> > Fri Jun 17 17:47:58 2011 [LCM][I]: New VM state is MIGRATE
> > Fri Jun 17 17:51:09 2011 [VMM][I]: Command execution fail: 'if [ -x
> > "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate
> one-21
> > node2; else  exit 42; fi'
> > Fri Jun 17 17:51:09 2011 [VMM][I]: STDERR follows.
> > Fri Jun 17 17:51:09 2011 [VMM][I]: error: operation failed: migration
> job:
> > unexpectedly failed
> > Fri Jun 17 17:51:09 2011 [VMM][I]: ExitCode: 1
> > Fri Jun 17 17:51:09 2011 [VMM][E]: Error live-migrating VM, error:
> operation
> > failed: migration job: unexpectedly failed
> > Fri Jun 17 17:51:09 2011 [LCM][I]: Fail to life migrate VM. Assuming that
> > the VM is still RUNNING (will poll VM).
> >
> > This is the output of the file /var/log/libvirt/qemu/one-21.log
> > 2011-06-17 17:48:02.232: starting up
> > LC_ALL=C
> PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
> > QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -cpu qemu32 -enable-kvm -m
> > 2048 -smp 1,sockets=1,cores=1,threads=1 -name one-21 -uuid
> > b9330d8d-3d2e-666a-c9e5-5e32e81c29dc -nodefconfig -nodefaults -chardev
> >
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-21.monitor,server,nowait
> > -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c
> > -drive
> >
> file=/srv/cloud/one/var//21/images/disk.0,if=none,id=drive-ide0-0-0,format=raw
> > -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
> -netdev
> > tap,fd=18,id=hostnet0 -device
> > rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:32:03,bus=pci.0,addr=0x3
> > -usb -vnc 0.0.0.0:21 -vga cirrus -incoming tcp:0.0.0.0:49152 -device
> > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
> > 2011-06-17 17:51:11.997: shutting down
> >
> > And in /var/log/syslog, the following line:
> > Jun 17 17:51:46 node2 libvirtd: 17:51:46.798: 1200: error :
> > qemuDomainWaitForMigrationComplete:4218 : operation failed: migration
> job:
> > unexpectedly failed
> >
> > Can anyone provide help on this issue? How can I debug the live
> migration?
> >
> > Thank you very much in advance,
> > Samuel.
> >
> > On 7 June 2011 17:22, samuel  wrote:
> >>
> >> Hi folks,
> >>
> >> After few tricks to the standard configuration (controller exporting via
> >> NFS opennebula directories to 2 other nodes) seems to work except for
> one
> >> point: live migration.
> >>
> >> When starting live migration (from sunstone web interface), the
> following
> >> problem appears:
> >>
> >> Tue Jun  7 17:12:51 2011 [VMM][I]: Command execution fail: 'if [ -x
> >> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate
> one-131
> >> node1; else  exit 42; fi'
> >> Tue Jun  7 17:12:51 2011 [VMM][I]: STDERR follows.
> >> Tue Jun  7 17:12:51 2011 [VMM][I]: error: Requested operation is not
> >> valid: domain is already active as 'one-131'
> >> Tue Jun  7 17:12:51 2011 [VMM][I]: ExitCode: 1
> >> Tue Jun  7 17:12:51 2011 [VMM][E]: Error live-migrating VM, error:
> >> Requested operation is not valid: domain is already active as 'one-131'
> >> Tue Jun  7 17:12:51 2011 [LCM][I]: Fail to life migrate VM. Assuming
> that
> >> the VM is still RUNNING (will poll VM).
> >>
> >> I'm using qemu+ssh transport wit

Re: [one-users] Running VMs = -1

2011-06-29 Thread samuel
Hi folks,

Using OpenNebula 2.2 on top of Ubuntu 11.04 and doing nasty stuff (power
shutdowns, network disconnects, migrations, etc.), I also got negative
virtual machine counters, although onevm list reports none running:

$ onevm list
   ID USER     NAME STAT CPU    MEM HOSTNAME     TIME

$ onehost list
  ID NAME  CLUSTER  RVM  TCPU  FCPU  ACPU   TMEM   FMEM  STAT
   0 node1 default   -1   800   588   830  47.3G    44G    on
   1 node2 default   -1   800   800   830  39.4G  38.5G    on

It would be hard to tell exactly when the first -1 happened, but I'll try to
pay more attention next time I see it (it happened in another test
installation).

I've not tried to remove the hosts, but I think it's the only way to recover
the right VM counters, isn't it?

Best regards,
Samuel.

On 27 June 2011 18:52, Aleksandar Draganov  wrote:

> Hi Steve, hi Carlos,
>
> VMs stay in pending mode, so no allocation takes place (this is probably
> another problem):
> -bash-4.1$ onevm list
>   ID USER     NAME   STAT CPU  MEM HOSTNAME        TIME
>   25 oneadmin one-25 pend   0   0K          00 00:47:58
>   26 oneadmin one-26 pend   0   0K          00 00:28:17
>   27 oneadmin one-27 pend   0   0K          00 00:01:14
>
> There's nothing interesting in oned.log - only several methods invocations
> and successfull monitoring of the host1...
>
> I should probably share that I have mounted /srv/cloud/images and
> /srv/cloud0/one/var on folders on another drive (I needed more space), which
> at least for the front-end (host1) worked yesterday.
>
> The version of ONe I am using is 2.2. Unfortunately I can't reproduce all
> the operations I have performed, but I only delete/restart/shutdown VMs and
> enable/disable hosts with the sunstone GUI and I only submit VMs through the
> command line if it makes any difference. If you need some specific log file
> I can send it to you. I also reinstalled libvirt at some point today.
>
> Cheers,
> Sasho
>
>
>
> On 27/06/2011 17:04, Steven Timm wrote:
>
>> What does oned.log say from the monitoring of host1?
>> And what does onevm list say about which VM's it thinks are running where?
>>
>> Steve
>>
>>
>>
>> On Mon, 27 Jun 2011, Aleksandar Draganov wrote:
>>
>>  Hello everybody,
>>>
>>> I am running ONe on 2 hosts both with Scientific Linux 6. Host 1 holds
>>> the front-end, but I also want to run VMs on it.
>>> I added, restarted, deleted some number of VMs while playing with ONe and
>>> at some point I got this:
>>>
>>> -bash-4.1$ onehost list
>>>  ID NAME  CLUSTER  RVM  TCPU  FCPU  ACPU  TMEM  FMEM  STAT
>>>   3 host1 default   -1   400   392   500  7.6G    7G    on
>>>   5 host2 default    0   400   400   400  7.6G  7.3G    on
>>>
>>> As a result from this I can not delete host1. Is there some way to force
>>> it?
>>> I am using KVM. I was also playing a bit with virt-manager - could the
>>> problem be from this?
>>>
>>> Cheers,
>>> Sasho
>>>
>>>
>>>
>>
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
> __**_
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/**listinfo.cgi/users-opennebula.**org<http://lists.opennebula.org/listinfo.cgi/users-opennebula.org>
>


Re: [one-users] Settings for LiveMigration

2011-06-20 Thread samuel
Hi,

Indeed, the nodes must be able to connect to each other via passwordless SSH.
You should also check that DNS or /etc/hosts has the right network
parameters on every node.
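As a minimal sketch of what to look for (the addresses and host names below are placeholders, not taken from any particular setup in this thread): every machine's /etc/hosts should carry identical entries, and none of them should map a remote node's name to the local address.

```shell
# /etc/hosts -- identical on the front-end and on every node.
# A remote node's name accidentally pointing at 127.0.0.1 (or at the
# local NIC's address) is a classic cause of live-migration failures
# while normal migration keeps working.
192.168.0.244   front-end
192.168.0.245   node1
192.168.0.246   node2
```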

I had a few issues with Ubuntu 10.04 and had to change to 11.04... but it
might work on your setup.

Hope it helps,
Samuel.

On 20 June 2011 15:44, fanttazio  wrote:

> Hi,
>
> My ONE seems to work fine except that I can't do a live migration. From the
> hosts I can connect to the other hosts' virtual machine manager consoles,
> and I can perform a migration from a host's console and also from the
> front-end.
>
> I did uncomment the TCP listen option in libvirtd.conf and changed
> QEMU_PROTOCOL=qemu+ssh in /var/tmp/one/vmm/kvm/kvmrc, but I can't perform a
> live migration, and the VM logs this:
>
>
> 
> Mon Jun 20 01:10:56 2011 [DiM][I]: New VM state is ACTIVE.
> Mon Jun 20 01:10:56 2011 [LCM][I]: New VM state is PROLOG.
> Mon Jun 20 01:10:56 2011 [VM][I]: Virtual Machine has no context
> Mon Jun 20 01:10:56 2011 [TM][I]: tm_clone.sh: front-end:/srv/cloud/images/ttylinux.img 192.168.0.244:/srv/cloud/one/var//110/images/disk.0
> Mon Jun 20 01:10:56 2011 [TM][I]: tm_clone.sh: DST: /srv/cloud/one/var//110/images/disk.0
> Mon Jun 20 01:10:56 2011 [TM][I]: tm_clone.sh: Creating directory /srv/cloud/one/var//110/images
> Mon Jun 20 01:10:56 2011 [TM][I]: tm_clone.sh: Executed "mkdir -p /srv/cloud/one/var//110/images".
> Mon Jun 20 01:10:56 2011 [TM][I]: tm_clone.sh: Executed "chmod a+w /srv/cloud/one/var//110/images".
> Mon Jun 20 01:10:56 2011 [TM][I]: tm_clone.sh: Cloning /srv/cloud/images/ttylinux.img
> Mon Jun 20 01:10:56 2011 [TM][I]: tm_clone.sh: Executed "cp -r /srv/cloud/images/ttylinux.img /srv/cloud/one/var//110/images/disk.0".
> Mon Jun 20 01:10:56 2011 [TM][I]: tm_clone.sh: Executed "chmod a+rw /srv/cloud/one/var//110/images/disk.0".
> Mon Jun 20 01:10:56 2011 [LCM][I]: New VM state is BOOT
> Mon Jun 20 01:10:56 2011 [VMM][I]: Generating deployment file: /srv/cloud/one/var/110/deployment.0
> Mon Jun 20 01:10:57 2011 [LCM][I]: New VM state is RUNNING
> Mon Jun 20 01:12:50 2011 [LCM][I]: New VM state is SAVE_MIGRATE
> Mon Jun 20 01:12:56 2011 [LCM][I]: New VM state is PROLOG_MIGRATE
> Mon Jun 20 01:12:56 2011 [TM][I]: tm_mv.sh: Will not move, source and destination are equal
> Mon Jun 20 01:12:56 2011 [LCM][I]: New VM state is BOOT
> Mon Jun 20 01:12:59 2011 [LCM][I]: New VM state is RUNNING
> Mon Jun 20 01:14:02 2011 [LCM][I]: New VM state is SAVE_MIGRATE
> Mon Jun 20 01:14:11 2011 [LCM][I]: New VM state is PROLOG_MIGRATE
> Mon Jun 20 01:14:11 2011 [TM][I]: tm_mv.sh: Will not move, source and destination are equal
> Mon Jun 20 01:14:11 2011 [LCM][I]: New VM state is BOOT
> Mon Jun 20 01:14:12 2011 [LCM][I]: New VM state is RUNNING
> Mon Jun 20 01:14:54 2011 [LCM][I]: New VM state is SAVE_MIGRATE
> Mon Jun 20 01:14:56 2011 [LCM][I]: New VM state is PROLOG_MIGRATE
> Mon Jun 20 01:14:56 2011 [TM][I]: tm_mv.sh: Will not move, source and destination are equal
> Mon Jun 20 01:14:56 2011 [LCM][I]: New VM state is BOOT
> Mon Jun 20 01:14:58 2011 [LCM][I]: New VM state is RUNNING
> Mon Jun 20 01:16:18 2011 [LCM][I]: New VM state is MIGRATE
> Mon Jun 20 01:16:18 2011 [VMM][I]: Command execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-110 192.168.0.246; else exit 42; fi'
> Mon Jun 20 01:16:18 2011 [VMM][I]: STDERR follows.
> Mon Jun 20 01:16:18 2011 [VMM][I]: error: cannot recv data: Connection reset by peer
> Mon Jun 20 01:16:18 2011 [VMM][I]: ExitCode: 1
> Mon Jun 20 01:16:18 2011 [VMM][E]: Error live-migrating VM, error: cannot recv data: Connection reset by peer
> Mon Jun 20 01:16:18 2011 [LCM][I]: Fail to life migrate VM. Assuming that the VM is still RUNNING (will poll VM).
> Mon Jun 20 01:16:19 2011 [VMM][D]: Monitor Information: CPU : 2 Memory: 262144 Net_TX: 1438 Net_RX: 8036
>
> ###
>
> If I change kvmrc to LIBVIRT_URI=qemu+ssh:///system, it tells me it cannot
> connect to the hypervisor. The libvirtd log doesn't show any error, and even
> the VM log in /var/log/libvirt/qemu/ does not show any error. Should the
> hosts have passwordless SSH connections to each other as well, or am I
> missing something here?
> All my hosts and front-end are Ubuntu 10.04.
>
> Mehdi
>
>
>


Re: [one-users] live migration fails on ubuntu 11.04

2011-06-17 Thread samuel
The error turned out to be a wrong entry in the file /etc/hosts: the remote
node's IP was set to the local one, which caused several errors.

However, it is still not possible to perform a live migration in the same
scenario (normal migration works perfectly); I always end up with the
following error:
Fri Jun 17 17:47:58 2011 [LCM][I]: New VM state is MIGRATE
Fri Jun 17 17:51:09 2011 [VMM][I]: Command execution fail: 'if [ -x
"/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-21
node2; else  exit 42; fi'
Fri Jun 17 17:51:09 2011 [VMM][I]: STDERR follows.
Fri Jun 17 17:51:09 2011 [VMM][I]: error: operation failed: migration job:
unexpectedly failed
Fri Jun 17 17:51:09 2011 [VMM][I]: ExitCode: 1
Fri Jun 17 17:51:09 2011 [VMM][E]: Error live-migrating VM, error: operation
failed: migration job: unexpectedly failed
Fri Jun 17 17:51:09 2011 [LCM][I]: Fail to life migrate VM. Assuming that
the VM is still RUNNING (will poll VM).

This is the output of the file /var/log/libvirt/qemu/one-21.log
2011-06-17 17:48:02.232: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -cpu qemu32 -enable-kvm -m
2048 -smp 1,sockets=1,cores=1,threads=1 -name one-21 -uuid
b9330d8d-3d2e-666a-c9e5-5e32e81c29dc -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-21.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c
-drive
file=/srv/cloud/one/var//21/images/disk.0,if=none,id=drive-ide0-0-0,format=raw
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=18,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:32:03,bus=pci.0,addr=0x3
-usb -vnc 0.0.0.0:21 -vga cirrus -incoming tcp:0.0.0.0:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
2011-06-17 17:51:11.997: shutting down

And in /var/log/syslog, the following line:
Jun 17 17:51:46 node2 libvirtd: 17:51:46.798: 1200: error :
qemuDomainWaitForMigrationComplete:4218 : operation failed: migration job:
unexpectedly failed

Can anyone provide help on this issue? How can I debug the live migration?
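Two things usually help here (the paths are the stock libvirt ones, and the node names below are illustrative): raising libvirtd's log level on both nodes, and re-running the same migration by hand with virsh so the underlying libvirt error is printed directly instead of being swallowed by the driver script.

```shell
# 1) In /etc/libvirt/libvirtd.conf on BOTH source and destination node,
#    then restart libvirtd:
#      log_level = 1
#      log_outputs = "1:file:/var/log/libvirt/libvirtd-debug.log"

# 2) Reproduce the migration outside OpenNebula; virsh reports the
#    libvirt error straight to the terminal:
virsh -c qemu+ssh://node1/system migrate --live one-21 qemu+ssh://node2/system
```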

Thank you very much in advance,
Samuel.

On 7 June 2011 17:22, samuel  wrote:

> Hi folks,
>
> After few tricks to the standard configuration (controller exporting via
> NFS opennebula directories to 2 other nodes) seems to work except for one
> point: live migration.
>
> When starting live migration (from sunstone web interface), the following
> problem appears:
>
> Tue Jun  7 17:12:51 2011 [VMM][I]: Command execution fail: 'if [ -x
> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-131
> node1; else  exit 42; fi'
> Tue Jun  7 17:12:51 2011 [VMM][I]: STDERR follows.
> Tue Jun  7 17:12:51 2011 [VMM][I]: error: Requested operation is not valid:
> domain is already active as 'one-131'
> Tue Jun  7 17:12:51 2011 [VMM][I]: ExitCode: 1
> Tue Jun  7 17:12:51 2011 [VMM][E]: Error live-migrating VM, error:
> Requested operation is not valid: domain is already active as 'one-131'
> Tue Jun  7 17:12:51 2011 [LCM][I]: Fail to life migrate VM. Assuming that
> the VM is still RUNNING (will poll VM).
>
> I'm using qemu+ssh transport with the following version:
> $ virsh version
> Compiled against library: libvir 0.8.8
> Using library: libvir 0.8.8
> Using API: QEMU 0.8.8
> Running hypervisor: QEMU 0.14.0
>
> Installed version of open nebula is 2.2.
>
> Could anyone shed some light on this issue? I've looked in the Internet and
> found some posts relating to qemu bugs but I'd like to know whether can I
> get more information about this issue.
>
> Thank you very much in advance,
> Samuel.
>


Re: [one-users] can't livemigrate VMs

2011-06-08 Thread samuel
The kvmrc file on the OpenNebula front-end is not used by the nodes, so you
must change /var/tmp/one/vmm/kvm/kvmrc on every node where KVM is running.

After that change, the error log should show "Connecting to uri:
qemu+ssh://..." instead of "qemu://...".

About the other error, "failed to connect to the hypervisor", I cannot help
much, but it seems as if no hypervisor is running; you should check the
installation and the running processes on all nodes.
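A sketch of pushing the front-end's master copy of kvmrc out to every node (the node names are placeholders, and since /var/tmp/one is re-populated by OpenNebula it pays to fix the master copy under var/remotes as well). With DRY_RUN=1, the default here, it only prints what it would copy:

```shell
# Push the front-end's kvmrc to the per-node copy used at runtime.
# DRY_RUN=1 (default) prints the commands instead of executing them.
KVMRC=/srv/cloud/one/var/remotes/vmm/kvm/kvmrc
NODES="node1 node2"   # placeholder node names

for node in $NODES; do
  cmd="scp $KVMRC $node:/var/tmp/one/vmm/kvm/kvmrc"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```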

I'm sorry I can't help further,

Samuel.

On 8 June 2011 10:26, Khoa Nguyen  wrote:

> Hi samuel!
> Should I change the kvmrc file on the OpenNebula front-end or on the KVM nodes?
> The first time, I only changed the kvmrc file on the front-end.
> file path is
> /srv/cloud/one/lib/remotes/vmm/kvm/kvmrc
> /srv/cloud/one/var/remotes/vmm/kvm/kvmrc
>
> However, when i change kvmrc file on KVM
> file path is
> /var/tmp/one/vmm/kvm/kvmrc.
> I can't deploy VMs.
>
> Wed Jun  8 15:13:09 2011 [VMM][I]: Connecting to uri: qemu+ssh:///system
> Wed Jun  8 15:13:09 2011 [VMM][I]: error: cannot recv data: Connection
> reset by peer
> Wed Jun  8 15:13:09 2011 [VMM][I]: error: failed to connect to the
> hypervisor
> Wed Jun  8 15:13:09 2011 [VMM][I]: ExitCode: 255
> Wed Jun  8 15:13:09 2011 [VMM][E]: Error deploying virtual machine:
> Connecting to uri: qemu+ssh:///system
> Wed Jun  8 15:13:09 2011 [DiM][I]: New VM state is FAILED
>
> do you have idea?
> thank you.
>
> On Wed, Jun 8, 2011 at 3:10 PM, samuel  wrote:
>
>> This error is directly related to the qemu transport. Have you applied the
>> changes to every node?
>>
>> Hope it helps,
>> samuel.
>>
>>
>> On 8 June 2011 08:44, Khoa Nguyen  wrote:
>>
>>> Thank you for your reply. I did your way but It can't solve problem.
>>> Anyone have idea?
>>> I install opennebula 2.0 on Ubuntu 9.10.
>>>
>>> Thank you.
>>>
>>>
>>> On Tue, Jun 7, 2011 at 3:31 PM, samuel  wrote:
>>>
>>>> You have 2 options:
>>>>
>>>> 1) configure SSL connection for qemu, or
>>>> 2) use ssh as transport for qemu.
>>>>
>>>> To do the second option, which is the easier, you should change the file
>>>> kvmrc
>>>> LIBVIRT_URI=qemu+ssh:///system
>>>> QEMU_PROTOCOL=qemu+ssh
>>>>
>>>> Hope it helps,
>>>> Samuel.
>>>> On 7 June 2011 10:12, Khoa Nguyen  wrote:
>>>>
>>>>> Hi everyone
>>>>>
>>>>> I want to live migrate VMs from kvm node to others. However, there have
>>>>> problem which I can't solve it.
>>>>> Some information in *vm.log*
>>>>>
>>>>> Jun  7 14:55:02 2011 [TM][I]: tm_context.sh: Executed "rm -rf
>>>>> /srv/cloud/one/var//272/images/isofiles".
>>>>> Tue Jun  7 14:55:02 2011 [LCM][I]: New VM state is BOOT
>>>>> Tue Jun  7 14:55:02 2011 [VMM][I]: Generating deployment file:
>>>>> /srv/cloud/one/var/272/deployment.0
>>>>> Tue Jun  7 14:55:03 2011 [LCM][I]: New VM state is RUNNING
>>>>> Tue Jun  7 14:55:37 2011 [LCM][I]: New VM state is MIGRATE
>>>>> Tue Jun  7 14:55:37 2011 [VMM][I]: Command execution fail: 'if [ -x
>>>>> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate 
>>>>> one-272
>>>>> 172.29.70.137; else  exit 42; fi'
>>>>> Tue Jun  7 14:55:37 2011 [VMM][I]: STDERR follows.
>>>>> Tue Jun  7 14:55:37 2011 [VMM][I]: Connecting to uri: qemu:///system
>>>>> Tue Jun  7 14:55:37 2011 [VMM][I]: error: Cannot access CA certificate
>>>>> '/etc/pki/CA/cacert.pem': No such file or directory
>>>>> Tue Jun  7 14:55:37 2011 [VMM][I]: ExitCode: 1
>>>>> Tue Jun  7 14:55:37 2011 [VMM][E]: Error live-migrating VM, Connecting
>>>>> to uri: qemu:///system
>>>>> Tue Jun  7 14:55:37 2011 [LCM][I]: Fail to life migrate VM. Assuming
>>>>> that the VM is still RUNNING (will poll VM).
>>>>>
>>>>> * one.log*
>>>>>
>>>>>
>>>>>
>>>>> Tue Jun  7 14:55:36 2011 [ReM][D]: VirtualMachineMigrate invoked
>>>>> Tue Jun  7 14:55:36 2011 [DiM][D]: Live-migrating VM 272
>>>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 Command
>>>>> execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/migrate" ]; then
>>>>> /var/tmp/one/vmm/kvm/migrate one-272 172.29.70.137;
>>>>> el

Re: [one-users] can't livemigrate VMs

2011-06-08 Thread samuel
This error is directly related to the qemu transport. Have you applied the
changes to every node?

Hope it helps,
samuel.

On 8 June 2011 08:44, Khoa Nguyen  wrote:

> Thank you for your reply. I tried your suggestion, but it didn't solve the
> problem. Does anyone have an idea?
> I install opennebula 2.0 on Ubuntu 9.10.
>
> Thank you.
>
>
> On Tue, Jun 7, 2011 at 3:31 PM, samuel  wrote:
>
>> You have 2 options:
>>
>> 1) configure SSL connection for qemu, or
>> 2) use ssh as transport for qemu.
>>
>> To do the second option, which is the easier, you should change the file
>> kvmrc
>> LIBVIRT_URI=qemu+ssh:///system
>> QEMU_PROTOCOL=qemu+ssh
>>
>> Hope it helps,
>> Samuel.
>> On 7 June 2011 10:12, Khoa Nguyen  wrote:
>>
>>> Hi everyone
>>>
>>> I want to live migrate VMs from kvm node to others. However, there have
>>> problem which I can't solve it.
>>> Some information in *vm.log*
>>>
>>> Jun  7 14:55:02 2011 [TM][I]: tm_context.sh: Executed "rm -rf
>>> /srv/cloud/one/var//272/images/isofiles".
>>> Tue Jun  7 14:55:02 2011 [LCM][I]: New VM state is BOOT
>>> Tue Jun  7 14:55:02 2011 [VMM][I]: Generating deployment file:
>>> /srv/cloud/one/var/272/deployment.0
>>> Tue Jun  7 14:55:03 2011 [LCM][I]: New VM state is RUNNING
>>> Tue Jun  7 14:55:37 2011 [LCM][I]: New VM state is MIGRATE
>>> Tue Jun  7 14:55:37 2011 [VMM][I]: Command execution fail: 'if [ -x
>>> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-272
>>> 172.29.70.137; else  exit 42; fi'
>>> Tue Jun  7 14:55:37 2011 [VMM][I]: STDERR follows.
>>> Tue Jun  7 14:55:37 2011 [VMM][I]: Connecting to uri: qemu:///system
>>> Tue Jun  7 14:55:37 2011 [VMM][I]: error: Cannot access CA certificate
>>> '/etc/pki/CA/cacert.pem': No such file or directory
>>> Tue Jun  7 14:55:37 2011 [VMM][I]: ExitCode: 1
>>> Tue Jun  7 14:55:37 2011 [VMM][E]: Error live-migrating VM, Connecting to
>>> uri: qemu:///system
>>> Tue Jun  7 14:55:37 2011 [LCM][I]: Fail to life migrate VM. Assuming that
>>> the VM is still RUNNING (will poll VM).
>>>
>>> * one.log*
>>>
>>>
>>>
>>> Tue Jun  7 14:55:36 2011 [ReM][D]: VirtualMachineMigrate invoked
>>> Tue Jun  7 14:55:36 2011 [DiM][D]: Live-migrating VM 272
>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 Command
>>> execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/migrate" ]; then
>>> /var/tmp/one/vmm/kvm/migrate one-272 172.29.70.137;
>>> else  exit 42; fi'
>>>
>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 STDERR
>>> follows.
>>>
>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 Connecting
>>> to uri: qemu:///system
>>>
>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 error:
>>> Cannot access CA certificate '/etc/pki/CA/cacert.pem': No such file or
>>> directory
>>>
>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 ExitCode:
>>> 1
>>>
>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: MIGRATE FAILURE 272
>>> Connecting to uri: qemu:///system
>>>
>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: error: Cannot access
>>> CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
>>>
>>> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: ExitCode: 1
>>>
>>> Please help me?
>>> Thank you
>>>
>>>
>>> --
>>> Nguyễn Vũ Văn Khoa
>>> Đại học Khoa Học Tự Nhiên TP HCM
>>>
>>>
>>>
>>>
>>>
>>
>
>
> --
> Nguyễn Vũ Văn Khoa
> Đại học Khoa Học Tự Nhiên TP HCM
>
>
>


[one-users] live migration fails on ubuntu 11.04

2011-06-07 Thread samuel
Hi folks,

After a few tweaks, the standard configuration (the controller exporting the
OpenNebula directories via NFS to 2 other nodes) seems to work except for one
point: live migration.

When starting live migration (from sunstone web interface), the following
problem appears:

Tue Jun  7 17:12:51 2011 [VMM][I]: Command execution fail: 'if [ -x
"/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-131
node1; else  exit 42; fi'
Tue Jun  7 17:12:51 2011 [VMM][I]: STDERR follows.
Tue Jun  7 17:12:51 2011 [VMM][I]: error: Requested operation is not valid:
domain is already active as 'one-131'
Tue Jun  7 17:12:51 2011 [VMM][I]: ExitCode: 1
Tue Jun  7 17:12:51 2011 [VMM][E]: Error live-migrating VM, error: Requested
operation is not valid: domain is already active as 'one-131'
Tue Jun  7 17:12:51 2011 [LCM][I]: Fail to life migrate VM. Assuming that
the VM is still RUNNING (will poll VM).

I'm using qemu+ssh transport with the following version:
$ virsh version
Compiled against library: libvir 0.8.8
Using library: libvir 0.8.8
Using API: QEMU 0.8.8
Running hypervisor: QEMU 0.14.0

The installed version of OpenNebula is 2.2.

Could anyone shed some light on this issue? I've looked on the Internet and
found some posts relating to qemu bugs, but I'd like to know whether I can
get more information about this issue.

Thank you very much in advance,
Samuel.


Re: [one-users] can't livemigrate VMs

2011-06-07 Thread samuel
You have 2 options:

1) configure SSL connection for qemu, or
2) use ssh as transport for qemu.

To do the second option, which is the easier of the two, you should change
the following settings in the kvmrc file:
LIBVIRT_URI=qemu+ssh:///system
QEMU_PROTOCOL=qemu+ssh
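Before retrying the migration, it is worth checking from the front-end, as the oneadmin user, that the new transport actually works without a password prompt (the node name below is a placeholder):

```shell
# Should list the node's domains with no password prompt; if it hangs
# or asks for one, fix the SSH keys before touching OpenNebula again.
virsh -c qemu+ssh://node2/system list --all
```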

Hope it helps,
Samuel.
On 7 June 2011 10:12, Khoa Nguyen  wrote:

> Hi everyone
>
> I want to live-migrate VMs from one KVM node to the others. However, there
> is a problem that I can't solve.
> Some information in *vm.log*
>
> Jun  7 14:55:02 2011 [TM][I]: tm_context.sh: Executed "rm -rf
> /srv/cloud/one/var//272/images/isofiles".
> Tue Jun  7 14:55:02 2011 [LCM][I]: New VM state is BOOT
> Tue Jun  7 14:55:02 2011 [VMM][I]: Generating deployment file:
> /srv/cloud/one/var/272/deployment.0
> Tue Jun  7 14:55:03 2011 [LCM][I]: New VM state is RUNNING
> Tue Jun  7 14:55:37 2011 [LCM][I]: New VM state is MIGRATE
> Tue Jun  7 14:55:37 2011 [VMM][I]: Command execution fail: 'if [ -x
> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-272
> 172.29.70.137; else  exit 42; fi'
> Tue Jun  7 14:55:37 2011 [VMM][I]: STDERR follows.
> Tue Jun  7 14:55:37 2011 [VMM][I]: Connecting to uri: qemu:///system
> Tue Jun  7 14:55:37 2011 [VMM][I]: error: Cannot access CA certificate
> '/etc/pki/CA/cacert.pem': No such file or directory
> Tue Jun  7 14:55:37 2011 [VMM][I]: ExitCode: 1
> Tue Jun  7 14:55:37 2011 [VMM][E]: Error live-migrating VM, Connecting to
> uri: qemu:///system
> Tue Jun  7 14:55:37 2011 [LCM][I]: Fail to life migrate VM. Assuming that
> the VM is still RUNNING (will poll VM).
>
> * one.log*
>
>
>
> Tue Jun  7 14:55:36 2011 [ReM][D]: VirtualMachineMigrate invoked
> Tue Jun  7 14:55:36 2011 [DiM][D]: Live-migrating VM 272
> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 Command
> execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/migrate" ]; then
> /var/tmp/one/vmm/kvm/migrate one-272 172.29.70.137;
> else  exit 42; fi'
>
> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 STDERR
> follows.
>
> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 Connecting
> to uri: qemu:///system
>
> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 error:
> Cannot access CA certificate '/etc/pki/CA/cacert.pem': No such file or
> directory
>
> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: LOG - 272 ExitCode: 1
>
> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: MIGRATE FAILURE 272
> Connecting to uri: qemu:///system
>
> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: error: Cannot access
> CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
>
> Tue Jun  7 14:55:37 2011 [VMM][D]: Message received: ExitCode: 1
>
> Please help me?
> Thank you
>
>
> --
> Nguyễn Vũ Văn Khoa
> Đại học Khoa Học Tự Nhiên TP HCM
>
>
>
>
>


Re: [one-users] access just created vm

2011-05-25 Thread samuel
Hi folks,

Thanks a lot for the fast and concise response. As I suspected, the answer
was right in front of my eyes and I did not pay enough attention. Apologies
for the noise.

Thanks especially to Vivek and Carlos, who pointed out the important fact:

connect via VNC to the host IP, not the VM's.
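A small sketch of the port arithmetic involved (the host IP below is a placeholder): with a fixed PORT=5901 in the template you connect straight to host:5901, and when a qemu log shows only a display number (e.g. "-vnc 0.0.0.0:21"), the TCP port is 5900 plus that display number.

```shell
# VNC display number -> TCP port, as used by qemu/KVM.
display=21
port=$((5900 + display))
echo "connect with: vncviewer 192.168.0.244:$port"   # placeholder host IP
```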

Best regards,
Samuel.

2011/5/25 Carlos Martín Sánchez 

> Hi Samuel,
>
> Please take a look at this FAQ entry [1], those are the common connectivity
> problems.
> VNC is the best way to solve this kind of problem, but you have to connect
> to your physical host's IP, not the VM's:
>
>
>  GRAPHICS = [TYPE = "vnc", LISTEN = "0.0.0.0", port="5901"]
>
> Means that the physical host hypervisor will open a VNC server at port
> 5901, accepting connections from any IP.
>
> Regards,
> Carlos.
>
> [1]
> http://opennebula.org/documentation:community:faq#my_vm_is_running_but_i_get_no_answer_from_pings_what_s_wrong
> --
> Carlos Martín, MSc
> Project Major Contributor
> OpenNebula - The Open Source Toolkit for Cloud Computing
> www.OpenNebula.org <http://www.opennebula.org/> | cmar...@opennebula.org
>
>
> On Tue, May 24, 2011 at 6:37 PM, samuel  wrote:
>
>> Dear all,
>>
>> First of all congratulations for this projects, I've just started using it
>> and looks really promising.
>>
>> I've created the simple controller+2nodes following the website
>> documentation (opennebula version 2.2) and the basics seems to work (at
>> cluster level): that is I can create hosts and nets. I'm also using the web
>> front-end sunstone and really simplifies the managing of the underlying
>> structure.
>>
>> The problem I'm facing is accessing the just created virtual machines.
>> I've downloaded  a qcow2 debian instance from
>> http://people.debian.org/~aurel32/qemu/amd64/ and used the next template
>> to create a virtual machine:
>>
>> NAME=debian.squeeze.qcow2
>> MEMORY=1025
>> CPU=0.5
>>
>> OS = [
>>  BOOT="hd",
>>  ROOT="hda"
>>  ]
>>
>> DISK = [
>>  TYPE= "disk",
>>  DRIVER="qcow2",
>>  SOURCE =
>> "/srv/cloud/images/qcow/debian_squeeze_amd64_standard.qcow2",
>>  TARGET = "hda",
>>  CLONE = "no",
>>  SAVE = "no"
>>  ]
>>
>>
>>  GRAPHICS = [TYPE = "vnc", LISTEN = "0.0.0.0", port="5901"]
>>  NIC= [ NETWORK = "control" ]
>>
>>  FEATURES=[ acpi="no" ]
>>
>> The virtual machine is created (appears as running in the web interface)
>> and from the command line I can see the vm running:
>> onevm show 26
>> VIRTUAL MACHINE 26
>> INFORMATION
>> ID : 26
>> NAME   : debian.squeeze.qcow2
>> STATE  : ACTIVE
>> LCM_STATE  : RUNNING
>> START TIME : 05/24 18:20:18
>> END TIME   : -
>> DEPLOY ID: : one-26
>>
>> VIRTUAL MACHINE
>> MONITORING
>> NET_TX : 0
>> NET_RX : 0
>> USED MEMORY: 0
>> USED CPU   : 0
>>
>> VIRTUAL MACHINE
>> TEMPLATE
>> CPU=0.5
>> DISK=[
>>   CLONE=no,
>>   DISK_ID=0,
>>   DRIVER=qcow2,
>>   SAVE=no,
>>   SOURCE=/srv/cloud/images/qcow/debian_squeeze_amd64_standard.qcow2,
>>   TARGET=hda,
>>   TYPE=disk ]
>> FEATURES=[
>>   ACPI=no ]
>> GRAPHICS=[
>>   LISTEN=0.0.0.0,
>>   PORT=5901,
>>   TYPE=vnc ]
>> MEMORY=1025
>> NAME=debian.squeeze.qcow2
>> NIC=[
>>   BRIDGE=vbr0,
>>   IP=192.168.50.5,
>>   MAC=02:00:c0:a8:32:05,
>>   NETWORK=control,
>>   NETWORK_ID=2 ]
>> OS=[
>>   BOOT=hd,
>>   ROOT=hda ]
>> VMID=26
>>
>> The problems is that  I can not acces the console via virsh:
>> virsh # list
>>  Id Name     State
>> --
>>  16 one-17   paused
>>  21 one-22   paused
>>  23 one-26   running
>>
>> virsh # console 23
>> No console available for domain
>>
>>
>> neither to the VNC access:
>>
>> $vncviewer 192.168.50.5
>>
>> Tue May 24 18:21:02 2011
>>  main:unable to connect to host: No route to host (113)
>>
>> From the nodes or the controller doing a mtr,ping, nmap it appears as if
>> there is no route to the new IP 192.168.50.5 but the problem is that I don't
>> know what can be wrong. Anyone can point to any documentation or how to
>> debug the connection?
>>
>> I've also tried to create images from iso files but I'm not 100% sure I've
>> done it right. Is there any documentation about how to create virtual
>> machine from a .iso linux or windows burned CD?
>>
>>
>> Thanks a lot in advance and apologies for the size of this email.
>>
>> Samuel.
>>
>>
>>
>>
>


[one-users] access just created vm

2011-05-24 Thread samuel
Dear all,

First of all, congratulations on this project; I've just started using it
and it looks really promising.

I've set up the simple controller + 2 nodes configuration following the
website documentation (OpenNebula version 2.2) and the basics seem to work
(at the cluster level): that is, I can create hosts and nets. I'm also using
the Sunstone web front-end, which really simplifies managing the underlying
infrastructure.

The problem I'm facing is accessing the just-created virtual machines.
I've downloaded a qcow2 Debian image from
http://people.debian.org/~aurel32/qemu/amd64/ and used the following
template to create a virtual machine:

NAME=debian.squeeze.qcow2
MEMORY=1025
CPU=0.5

OS = [
 BOOT="hd",
 ROOT="hda"
 ]

DISK = [
 TYPE= "disk",
 DRIVER="qcow2",
 SOURCE =
"/srv/cloud/images/qcow/debian_squeeze_amd64_standard.qcow2",
 TARGET = "hda",
 CLONE = "no",
 SAVE = "no"
 ]


 GRAPHICS = [TYPE = "vnc", LISTEN = "0.0.0.0", port="5901"]
 NIC= [ NETWORK = "control" ]

 FEATURES=[ acpi="no" ]

The virtual machine is created (it appears as running in the web interface)
and from the command line I can see the VM running:
onevm show 26
VIRTUAL MACHINE 26
INFORMATION
ID : 26
NAME   : debian.squeeze.qcow2
STATE  : ACTIVE
LCM_STATE  : RUNNING
START TIME : 05/24 18:20:18
END TIME   : -
DEPLOY ID: : one-26

VIRTUAL MACHINE
MONITORING
NET_TX : 0
NET_RX : 0
USED MEMORY: 0
USED CPU   : 0

VIRTUAL MACHINE
TEMPLATE
CPU=0.5
DISK=[
  CLONE=no,
  DISK_ID=0,
  DRIVER=qcow2,
  SAVE=no,
  SOURCE=/srv/cloud/images/qcow/debian_squeeze_amd64_standard.qcow2,
  TARGET=hda,
  TYPE=disk ]
FEATURES=[
  ACPI=no ]
GRAPHICS=[
  LISTEN=0.0.0.0,
  PORT=5901,
  TYPE=vnc ]
MEMORY=1025
NAME=debian.squeeze.qcow2
NIC=[
  BRIDGE=vbr0,
  IP=192.168.50.5,
  MAC=02:00:c0:a8:32:05,
  NETWORK=control,
  NETWORK_ID=2 ]
OS=[
  BOOT=hd,
  ROOT=hda ]
VMID=26

The problem is that I cannot access the console via virsh:
virsh # list
 Id Name     State
--
 16 one-17   paused
 21 one-22   paused
 23 one-26   running

virsh # console 23
No console available for domain


nor via VNC:

$vncviewer 192.168.50.5

Tue May 24 18:21:02 2011
 main:unable to connect to host: No route to host (113)

From the nodes or the controller, running mtr, ping, or nmap makes it look
as if there is no route to the new IP 192.168.50.5, but I don't know what
could be wrong. Can anyone point me to documentation, or explain how to
debug the connection?
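A rough host-side checklist (the commands assume the VM runs on a node reachable as `node1`; the name is illustrative). The guest's IP only answers once the guest itself has configured its network, so start from the physical host:

```shell
ping -c1 node1                      # is the physical host reachable at all?
ssh node1 brctl show                # is the VM's vnet interface enslaved to bridge vbr0?
ssh node1 'virsh dumpxml one-26 | grep -i vnc'   # which port is the VNC server on?
vncviewer node1:5901                # host-side console works even with no guest networking
```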

I've also tried to create images from ISO files, but I'm not 100% sure I've
done it right. Is there any documentation about how to create a virtual
machine from a Linux or Windows installation .iso?
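For reference, a hedged sketch of an OpenNebula 2.x template that boots from an installer ISO onto an empty disk image, in the same template syntax used above (both SOURCE paths are placeholders):

```shell
NAME   = "iso-install"
MEMORY = 1024
CPU    = 1

OS = [ BOOT = "cdrom" ]

# Empty image created beforehand, e.g.: qemu-img create disk.img 10G
DISK = [ TYPE = "disk",  SOURCE = "/srv/cloud/images/disk.img",
         TARGET = "hda", CLONE = "no", SAVE = "yes" ]

DISK = [ TYPE = "cdrom", SOURCE = "/srv/cloud/images/install.iso",
         TARGET = "hdc", READONLY = "yes" ]

GRAPHICS = [ TYPE = "vnc", LISTEN = "0.0.0.0", port = "5901" ]
```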


Thanks a lot in advance and apologies for the size of this email.

Samuel.