Hello everybody,
A new day, a new issue :)
I was able to successfully create my cloud and connect the cluster nodes to my 
front-end. Even the test VM "ttylinux" is running as expected on one of the 
nodes. Or so it seems... 

The problem I am having right now is that when I start a migration or a 
live migration, I get the following errors:

Tue May 24 12:58:32 2011 [LCM][I]: New VM state is MIGRATE
Tue May 24 12:58:33 2011 [VMM][I]: Command execution fail: 'if [ -x 
"/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-16 
192.168.0.3; else                              exit 42; fi'
Tue May 24 12:58:33 2011 [VMM][I]: STDERR follows.
Tue May 24 12:58:33 2011 [VMM][I]: error: Unknown failure
Tue May 24 12:58:33 2011 [VMM][I]: ExitCode: 1
Tue May 24 12:58:33 2011 [VMM][E]: Error live-migrating VM, error: Unknown 
failure
Tue May 24 12:58:33 2011 [LCM][I]: Fail to life migrate VM. Assuming that the 
VM is still RUNNING (will poll VM).
Tue May 24 12:58:34 2011 [VMM][D]: Monitor Information:
CPU   : 6
Memory: 65536
Net_TX: 0
Net_RX: 6657
Tue May 24 13:01:16 2011 [LCM][I]: New VM state is SAVE_MIGRATE
Tue May 24 13:01:17 2011 [VMM][I]: Command execution fail: 'if [ -x 
"/var/tmp/one/vmm/kvm/save" ]; then /var/tmp/one/vmm/kvm/save one-16 
/srv/cloud/one/var//16/images/checkpoint; else                              
exit 42; fi'
Tue May 24 13:01:17 2011 [VMM][I]: STDERR follows.
Tue May 24 13:01:17 2011 [VMM][I]: error: Failed to save domain one-16 to 
/srv/cloud/one/var//16/images/checkpoint
Tue May 24 13:01:17 2011 [VMM][I]: error: unable to set ownership of 
'/srv/cloud/one/var//16/images/checkpoint' to user 0:0: Operation not permitted
Tue May 24 13:01:17 2011 [VMM][I]: ExitCode: 1
Tue May 24 13:01:17 2011 [VMM][E]: Error saving VM state, error: Failed to save 
domain one-16 to /srv/cloud/one/var//16/images/checkpoint
Tue May 24 13:01:17 2011 [LCM][I]: Fail to save VM state while migrating. 
Assuming that the VM is still RUNNING (will poll VM).
Tue May 24 13:01:17 2011 [VMM][I]: VM running but new state from monitor is 
PAUSED.
Tue May 24 13:01:17 2011 [LCM][I]: VM is suspended.
Tue May 24 13:01:17 2011 [DiM][I]: New VM state is SUSPENDED
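
If I read the driver scripts correctly, the two helpers from the log boil down 
to plain virsh calls, roughly like this (assuming the default qemu+ssh 
transport, so the exact URIs are just my guess):

    # what /var/tmp/one/vmm/kvm/migrate one-16 192.168.0.3 presumably runs
    virsh --connect qemu:///system migrate --live one-16 qemu+ssh://192.168.0.3/system

    # what /var/tmp/one/vmm/kvm/save presumably runs
    virsh --connect qemu:///system save one-16 /srv/cloud/one/var//16/images/checkpoint

So both failures seem to come straight from libvirt rather than from 
OpenNebula itself.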

As you can see in the log, I first tried a live migration and then a normal 
migration right afterwards. Unfortunately, both attempts failed.
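One guess about the save error: libvirt apparently tries to chown the 
checkpoint file to root (0:0), which the oneadmin user is of course not 
permitted to do. That sounds to me like the qemu user/group settings on the 
nodes. Maybe something like this in /etc/libvirt/qemu.conf would be needed 
(these values are only my assumption, based on oneadmin running the VMs):

    # /etc/libvirt/qemu.conf on the cluster nodes (my guess, not verified)
    user = "oneadmin"
    group = "oneadmin"
    dynamic_ownership = 0    # stop libvirt from changing file ownership

Has anyone configured it that way?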
While searching for the cause, the only strange thing I could find is this: 
when I run "virsh list" on the node where the VM is running, I get no results 
at all.
Could this be the problem? And if so, how can I resolve it?
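
One more thought: as oneadmin, a plain "virsh list" connects to 
qemu:///session by default (as far as I know), while OpenNebula seems to start 
its domains via the system instance. So perhaps the empty output just means I 
am looking at the wrong libvirt instance, and I should check with:

    virsh --connect qemu:///system list --all

If one-16 shows up there, the VM itself would be fine and the empty list a 
red herring.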


Thank you!