Re: [one-users] unable to attach disks to VMs: 'driver' expects a driver name and other error messages

2013-06-01 Thread Lars Buitinck
2013/5/31  users-requ...@lists.opennebula.org:
 Date: Fri, 31 May 2013 15:28:26 +0200
 From: Lars Buitinck l.j.buiti...@uva.nl
 To: users@lists.opennebula.org
 Subject: [one-users] unable to attach disks to VMs: 'driver' expects a
 driver name and other error messages
 Content-Type: text/plain; charset=UTF-8

 I've been struggling for hours now trying to attach disk images to
 running VMs in OpenNebula. While for two of my five VMs, this has
 actually worked, it's failing for the three remaining ones.

 I've successfully created images, using DEV_PREFIX=vd; this is the
 setup that eventually worked for two of the VMs. When I try to attach
 these disks, I get various different error messages, depending on the
 VM.

[snip]

 But filling in a DRIVER in the image template doesn't work either.
 Making a new VM with either a raw or other driver also fails. (I
 must admit I don't understand what this driver field really does and
 I couldn't find anything in the documentation.)

Just to let you know, I've resolved the problem by setting DRIVER to
"raw" at image construction and filling in the TARGET field with
"vdb". Strangely, the disks still emerge as /dev/vda, which is
actually convenient for sysadmin purposes, but still somewhat surprising.
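For anyone hitting the same thing, the combination that finally worked for me looks roughly like this. Names, sizes, datastore and VM ids are illustrative, not copied from my real setup:

```shell
# Sketch of the image template that worked (values illustrative)
cat > scratch.tmpl <<'EOF'
NAME       = "scratch-disk"
TYPE       = DATABLOCK
SIZE       = 1024
FSTYPE     = ext4
DEV_PREFIX = "vd"
DRIVER     = "raw"
TARGET     = "vdb"
EOF

# Register the image, then hot-attach it to a running VM
oneimage create scratch.tmpl --datastore default
onevm attachdisk <vm_id> --image scratch-disk
```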

-- 
Lars Buitinck
Scientific programmer, ILPS
University of Amsterdam
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] unable to attach disks to VMs: 'driver' expects a driver name and other error messages

2013-06-01 Thread Shankhadeep Shome
"raw" is the image format, while vda is a virtio block device name; the
two are not directly related. A vda device can be backed by a raw image
or by any other format.
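A quick way to see that the two are independent knobs (paths here are hypothetical, and the virsh line is just for illustration):

```shell
# The on-disk format and the guest device name are separate settings.
qemu-img create -f raw   /tmp/scratch-raw.img   1G   # raw-format image
qemu-img create -f qcow2 /tmp/scratch-qcow2.img 1G   # qcow2-format image

# Either file could be handed to a guest as vdb; the in-guest name
# does not encode the format. With plain libvirt this would look
# something like:
#   virsh attach-disk <domain> /tmp/scratch-raw.img vdb \
#         --driver qemu --subdriver raw
```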


On Sat, Jun 1, 2013 at 7:13 AM, Lars Buitinck l.j.buiti...@uva.nl wrote:

 [snip]

 Just to let you know, I've resolved the problem by setting DRIVER to
 raw at image construction and filling in the TARGET field with
 vdb. Strangely, the disks still emerge as /dev/vda, which is
 actually convenient for sysad purposes, but still somewhat surprising.




Re: [one-users] VM state is UNKNOWN

2013-06-01 Thread Rolandas Naujikas

On 2013-06-01 05:07, Dmitri Chebotarov wrote:

Hello

I'm seeing following interesting behavior :

I've a VM in RUNNING state, everything works OK. Then I issue
'shutdown' command from within the running OS (ie. it's a Linux VM
and I run 'init 0' to shut it down), the VM shuts down OK, but the
ONE's state changes from RUNNING to UNKNOWN and I cannot start the VM


onevm restart should work to boot the VM again.


anymore. I expected the status to change to SHUTDOWN, which would
allow me to start the VM later.


OpenNebula cannot guess what you are doing inside the VM.
If you really want to shut down a VM and remove it from OpenNebula, you
have to do that from OpenNebula itself.
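Concretely, the lifecycle operations have to go through the OpenNebula CLI (or Sunstone), for example:

```shell
# Shut the VM down through OpenNebula so its state stays tracked:
onevm shutdown <vm_id>

# If it is already stuck in UNKNOWN after an in-guest 'init 0',
# boot it again with:
onevm restart <vm_id>
```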


Regards, Rolandas


I watched 'virsh list --all' on the host while doing it, and the status
of the VM changes from 'running' to 'in shutdown' for about 10-14
seconds, and then the VM is removed from the host (I assume by ONE).

Am I missing something? Or is it expected?

ACPI is enabled for the VM and I can send the Shutdown signal to the VM
from the Sunstone interface, which changes the state to SHUTDOWN, but
not when I run 'shutdown' from within the OS.

Any suggestions?

Thank you.





