Hello All,

I have an OpenNebula installation with one frontend and two nodes. The base
OS is Ubuntu 10.10, 64-bit edition. When I try to deploy a VM with the
command "onevm create ubuntu.template", the VM is deployed successfully on
one node but gets stuck in the BOOT state on the other node. I am enclosing
the VM's log file and its deployment.0 file, along with a transcript of
$ONE_LOCATION/var/oned.log.
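
For reference, the template is roughly of this shape; I am reconstructing it
here from the deployment file enclosed below, so the exact attribute names
and values may not match my actual ubuntu.template exactly:

NAME     = ubuntu
MEMORY   = 600
CPU      = 1
OS       = [ ARCH = "x86_64", BOOT = "hd" ]
DISK     = [ SOURCE = "/srv/cloud/images/ubuntu/ubu64.img",
             TARGET = "sda",
             DRIVER = "qcow2",
             CLONE  = "yes" ]
NIC      = [ BRIDGE = "br0" ]
FEATURES = [ acpi = "yes" ]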


vm.log

Tue May 10 18:01:54 2011 [DiM][I]: New VM state is ACTIVE.
Tue May 10 18:01:54 2011 [LCM][I]: New VM state is PROLOG.
Tue May 10 18:01:54 2011 [VM][I]: Virtual Machine has no context
Tue May 10 18:01:55 2011 [TM][I]: tm_clone.sh: cloud-3:/srv/cloud/images/ubuntu/ubu64.img 192.168.2.5:/srv/cloud/one/var//54/images/disk.0
Tue May 10 18:01:55 2011 [TM][I]: tm_clone.sh: DST: /srv/cloud/one/var//54/images/disk.0
Tue May 10 18:01:55 2011 [TM][I]: tm_clone.sh: Creating directory /srv/cloud/one/var//54/images
Tue May 10 18:01:55 2011 [TM][I]: tm_clone.sh: Executed "mkdir -p /srv/cloud/one/var//54/images".
Tue May 10 18:01:55 2011 [TM][I]: tm_clone.sh: Executed "chmod a+w /srv/cloud/one/var//54/images".
Tue May 10 18:01:55 2011 [TM][I]: tm_clone.sh: Cloning /srv/cloud/images/ubuntu/ubu64.img
Tue May 10 18:01:55 2011 [TM][I]: tm_clone.sh: Executed "cp -r /srv/cloud/images/ubuntu/ubu64.img /srv/cloud/one/var//54/images/disk.0".
Tue May 10 18:01:55 2011 [TM][I]: tm_clone.sh: Executed "chmod a+rw /srv/cloud/one/var//54/images/disk.0".
Tue May 10 18:01:57 2011 [LCM][I]: New VM state is BOOT
Tue May 10 18:01:57 2011 [VMM][I]: Generating deployment file: /srv/cloud/one/var/54/deployment.0

deployment.0

<domain type='kvm'>
        <name>one-54</name>
        <memory>614400</memory>
        <os>
                <type arch='x86_64'>hvm</type>
                <boot dev='hd'/>
        </os>
        <devices>
                <emulator>/usr/bin/kvm</emulator>
                <disk type='file' device='disk'>
                        <source file='/srv/cloud/one/var//54/images/disk.0'/>
                        <target dev='sda'/>
                        <driver name='qemu' type='qcow2'/>
                </disk>
                <interface type='bridge'>
                        <source bridge='br0'/>
                        <mac address='02:00:c0:a8:02:03'/>
                </interface>
        </devices>
        <features>
                <acpi/>
        </features>
</domain>
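
One check I can think of (just a sketch, assuming the stock KVM driver and
that libvirt on the node is reachable as qemu:///system) is to feed this
deployment file to virsh by hand on the failing node and see whether
libvirt 0.9 complains about it:

# as oneadmin on the failing node
virsh --connect qemu:///system create /srv/cloud/one/var/54/deployment.0
virsh --connect qemu:///system list --all

If libvirt rejects the XML, the error should show up there rather than in
oned.log.
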
/srv/cloud/one/var/oned.log

Tue May 10 18:01:54 2011 [DiM][D]: Deploying VM 54
Tue May 10 18:01:55 2011 [TM][D]: Message received: LOG - 54 tm_clone.sh: cloud-3:/srv/cloud/images/ubuntu/ubu64.img 192.168.2.5:/srv/cloud/one/var//54/images/disk.0
Tue May 10 18:01:55 2011 [TM][D]: Message received: LOG - 54 tm_clone.sh: DST: /srv/cloud/one/var//54/images/disk.0
Tue May 10 18:01:55 2011 [TM][D]: Message received: LOG - 54 tm_clone.sh: Creating directory /srv/cloud/one/var//54/images
Tue May 10 18:01:55 2011 [TM][D]: Message received: LOG - 54 tm_clone.sh: Executed "mkdir -p /srv/cloud/one/var//54/images".
Tue May 10 18:01:55 2011 [TM][D]: Message received: LOG - 54 tm_clone.sh: Executed "chmod a+w /srv/cloud/one/var//54/images".
Tue May 10 18:01:55 2011 [TM][D]: Message received: LOG - 54 tm_clone.sh: Cloning /srv/cloud/images/ubuntu/ubu64.img
Tue May 10 18:01:55 2011 [TM][D]: Message received: LOG - 54 tm_clone.sh: Executed "cp -r /srv/cloud/images/ubuntu/ubu64.img /srv/cloud/one/var//54/images/disk.0".
Tue May 10 18:01:55 2011 [TM][D]: Message received: LOG - 54 tm_clone.sh: Executed "chmod a+rw /srv/cloud/one/var//54/images/disk.0".
Tue May 10 18:01:55 2011 [TM][D]: Message received: TRANSFER SUCCESS 54 -
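
If I am reading these logs right, the prolog finishes (TRANSFER SUCCESS) and
the deployment file is generated, but nothing from the VMM driver about the
actual deploy shows up afterwards. A plain grep like the following should
pull out any VMM/deploy lines for this VM from oned.log (the pattern is
approximate):

grep ' 54 ' /srv/cloud/one/var/oned.log | grep -iE 'vmm|deploy'
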
The onevm show command for this particular VM gives the following output.
VIRTUAL MACHINE 54 INFORMATION
ID             : 54
NAME           : ubuntu
STATE          : ACTIVE
LCM_STATE      : BOOT
START TIME     : 05/10 18:01:39
END TIME       : -
DEPLOY ID      : -

The strange thing about this issue is that we had successfully deployed VMs
on this particular node before. Recently we upgraded libvirt to version 0.9
on both nodes. This node remained operational after its own libvirt upgrade;
deployments on it only stopped working from the time we upgraded the libvirt
version on the second node. By the way, host monitoring still seems to be
working, as is evident from the oned.log file. Could someone please throw
some light on this issue? I have dug through the old threads, but the
suggested fixes did not really work in my case.
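
For completeness, these are the basic checks I can think of on my side
(hostnames are placeholders, and I am assuming the stock ssh/kvm drivers,
i.e. that the drivers run virsh as oneadmin on the node):

# can oneadmin on the frontend reach libvirt on the failing node?
ssh oneadmin@<failing-node> 'virsh --connect qemu:///system version; virsh --connect qemu:///system list --all'

# is the cloned image where the deployment file expects it?
ssh oneadmin@<failing-node> 'ls -l /srv/cloud/one/var/54/images/disk.0'

# anything from libvirtd itself around the deploy time?
# (libvirtd logs to syslog by default on Ubuntu; adjust if it logs to a file)
ssh <failing-node> 'grep libvirtd /var/log/syslog | tail -n 50'

None of these are conclusive, but they should at least narrow the problem
down to libvirt connectivity, the image transfer, or libvirtd itself.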

Thanks a lot for your time.

Regards,
Karthik