Hi,
I tried to redefine the features, as you can see below, but the hypervisor still keeps it as ACTIVE. According to my understanding and compute.js it should change to shutdown, shouldn't it? Is there a state diagram for STATE as there is for LCM_STATE?

Milos

BTW, here is the current state:
[oneadmin@kvasi occi]$ onevm show 112
VIRTUAL MACHINE 112 INFORMATION
ID                  : 112
NAME                : one-112
USER                : oneadmin
GROUP               : oneadmin
STATE               : ACTIVE
LCM_STATE           : SHUTDOWN_POWEROFF
RESCHED             : No
HOST                : kvasi.k13132.local
START TIME          : 06/07 08:02:30
END TIME            : -
DEPLOY ID           : one-112

VIRTUAL MACHINE MONITORING
NET_TX              : 0K
NET_RX              : 0K
USED MEMORY         : 0K
USED CPU            : 0

PERMISSIONS
OWNER               : um-
GROUP               : ---
OTHER               : ---

VIRTUAL MACHINE TEMPLATE
CPU="1"
DISK=[
  CLONE="NO",
  DATASTORE="emc-spc",
  DATASTORE_ID="104",
  DEV_PREFIX="hd",
  DISK_ID="0",
  DRIVER="raw",
  IMAGE="ttylinux-per",
  IMAGE_ID="76",
  IMAGE_UNAME="oneadmin",
  PERSISTENT="YES",
  READONLY="NO",
  SAVE="YES",
  SOURCE="/dev/vg-c/lv-one-76",
  TARGET="hda",
  TM_MAD="shared_lvm",
  TYPE="FILE" ]
FEATURES=[
  ACPI="yes" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  PORT="6012",
  TYPE="vnc" ]
MEMORY="1024"
NAME="one-112"
OS=[
  ARCH="i686",
  BOOT="hd" ]
RAW=[
  TYPE="kvm" ]
TEMPLATE_ID="1"
VCPU="2"
VMID="112"

VIRTUAL MACHINE HISTORY
 SEQ HOST            REASON           START            TIME PROLOG_TIME
   0 kvasi.k13132.lo user    06/07 08:02:56    0d 00h01m45s    0d 00h00m00s
   1 kvasi.k13132.lo none    06/07 08:04:56    0d 00h01m14s    0d 00h00m00s
[oneadmin@kvasi occi]$
[oneadmin@kvasi occi]$ occi-compute show 112
<COMPUTE href='http://127.0.0.1:4567/compute/112'>
  <ID>112</ID>
  <USER name='oneadmin' href='http://127.0.0.1:4567/user/0'/>
  <GROUP>oneadmin</GROUP>
  <CPU>1</CPU>
  <MEMORY>1024</MEMORY>
  <NAME>one-112</NAME>
  <STATE>ACTIVE</STATE>
  <DISK id='0'>
    <STORAGE name='ttylinux-per' href='http://127.0.0.1:4567/storage/76'/>
    <TYPE>FILE</TYPE>
    <TARGET>hda</TARGET>
  </DISK>
</COMPUTE>


On 6.6.2013 17:47, Daniel Molina wrote:
Hi Miloš,


On 6 June 2013 10:37, Miloš Kozák <milos.ko...@lejmr.com> wrote:

    Hi,

    Template:
    ACPI="yes"
    CPU="1"
    DISK=[
      IMAGE="ttylinux-per",
      IMAGE_UNAME="oneadmin" ]
    GRAPHICS=[
      LISTEN="0.0.0.0",
      TYPE="vnc" ]
    MEMORY="1024"
    NAME="ttylinux"
    OS=[
      ARCH="i686",
      BOOT="hd" ]
    RAW=[
      TYPE="kvm" ]
    TEMPLATE_ID="1"
    VCPU="2"


    States:

    110 oneadmin oneadmin one-110         shut    0 0K kvasi.k131   0d 00h01
    occi-compute show 110
    <COMPUTE href='http://127.0.0.1:4567/compute/110'>
      <ID>110</ID>
      <USER href='http://127.0.0.1:4567/user/0' name='oneadmin'/>
      <GROUP>oneadmin</GROUP>
      <CPU>1</CPU>
      <MEMORY>1024</MEMORY>
      <NAME>one-110</NAME>
      <STATE>ACTIVE</STATE>
      <DISK id='0'>
        <STORAGE href='http://127.0.0.1:4567/storage/76'
    name='ttylinux-per'/>
        <TYPE>FILE</TYPE>
        <TARGET>hda</TARGET>
      </DISK>
    </COMPUTE>

    After poweroff:

    onevm show 110
    VIRTUAL MACHINE 110 INFORMATION
    ID                  : 110
    NAME                : one-110
    USER                : oneadmin
    GROUP               : oneadmin
    STATE               : ACTIVE
    LCM_STATE           : SHUTDOWN_POWEROFF
    RESCHED             : No
    HOST                : kvasi.k13132.local
    START TIME          : 06/03 10:00:41
    END TIME            : -
    DEPLOY ID           : one-110

    VIRTUAL MACHINE MONITORING
    NET_RX              : 0K
    NET_TX              : 0K
    USED MEMORY         : 0K
    USED CPU            : 0

    PERMISSIONS
    OWNER               : um-
    GROUP               : ---
    OTHER               : ---

    VIRTUAL MACHINE TEMPLATE
    ACPI="yes"
    CPU="1"
    DISK=[
      CLONE="NO",
      DATASTORE="emc-spc",
      DATASTORE_ID="104",
      DEV_PREFIX="hd",
      DISK_ID="0",
      DRIVER="raw",
      IMAGE="ttylinux-per",
      IMAGE_ID="76",
      IMAGE_UNAME="oneadmin",
      PERSISTENT="YES",
      READONLY="NO",
      SAVE="YES",
      SOURCE="/dev/vg-c/lv-one-76",
      TARGET="hda",
      TM_MAD="shared_lvm",
      TYPE="FILE" ]
    GRAPHICS=[
      LISTEN="0.0.0.0",
      PORT="6010",
      TYPE="vnc" ]
    MEMORY="1024"
    NAME="one-110"
    OS=[
      ARCH="i686",
      BOOT="hd" ]
    RAW=[
      TYPE="kvm" ]
    TEMPLATE_ID="1"
    VCPU="2"
    VMID="110"

    VIRTUAL MACHINE HISTORY
     SEQ HOST            REASON           START            TIME PROLOG_TIME
       0 kvasi.k13132.lo user    06/03 10:00:56    0d 00h04m10s    0d 00h00m00s
       1 kvasi.k13132.lo none    06/03 10:05:26    0d 00h00m26s    0d 00h00m00s
    [oneadmin@kvasi occi]$ occi-compute show 110
    <COMPUTE href='http://127.0.0.1:4567/compute/110'>
      <ID>110</ID>
      <USER name='oneadmin' href='http://127.0.0.1:4567/user/0'/>
      <GROUP>oneadmin</GROUP>
      <CPU>1</CPU>
      <MEMORY>1024</MEMORY>
      <NAME>one-110</NAME>
      <STATE>ACTIVE</STATE>
      <DISK id='0'>
        <STORAGE name='ttylinux-per' href='http://127.0.0.1:4567/storage/76'/>
        <TYPE>FILE</TYPE>
        <TARGET>hda</TARGET>
      </DISK>
    </COMPUTE>


    Is that all you need to know? BTW it is ONE 3.8.3.



The state of the VirtualMachine is ACTIVE, which is why OCCI also exposes the ACTIVE state:

onevm:

STATE               : ACTIVE
LCM_STATE           : SHUTDOWN_POWEROFF

occi:

<STATE>ACTIVE</STATE>

The VirtualMachine will stay in that state until it disappears from the hypervisor; if the action does not succeed, the lcm_state will change back to RUNNING after a while.

I think the problem is how you are specifying the ACPI attribute; you have to include it in a FEATURES section: http://opennebula.org/documentation:rel4.0:kvmg#features
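
For instance, the relevant fragment of the VM template would look like this:

FEATURES=[
  ACPI="yes" ]
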
Cheers



    BTW, I am sorry for resending; I first sent it directly, outside of
    the mailing list.


    On 3.6.2013 9:53, Daniel Molina wrote:
    Hi,


    On 2 June 2013 10:10, Miloš Kozák <milos.ko...@lejmr.com> wrote:

        Hi,
        thank you for the answer. I tried to verify that. It is quite
        easy to send the LCM_STATEs to the XML, though. But at this point
        I would rather try to resolve it with VM_STATE. I am afraid that
        there might be a bug. Source from compute.js:

        function VMStateBulletStr(vm){
            var vm_state = vm.COMPUTE.STATE;
            var state_html = "";
            switch (vm_state) {
            case "INIT":
            case "PENDING":
            case "HOLD":
            case "STOPPED":
            case "SUSPENDED":
            case "POWEROFF":
                state_html = '<img style="display:inline-block;margin-right:5px;" src="images/yellow_bullet.png" alt="'+vm_state+'" title="'+vm_state+'" />';
                break;
            case "ACTIVE":
            case "DONE":
                state_html = '<img style="display:inline-block;margin-right:5px;" src="images/green_bullet.png" alt="'+vm_state+'" title="'+vm_state+'"/>';
                break;
            case "FAILED":
                state_html = '<img style="display:inline-block;margin-right:5px;" src="images/red_bullet.png" alt="'+vm_state+'" title="'+vm_state+'"/>';
                break;
            };
            return state_html;
        }

        As I read it, the XML should contain states such as poweroff and
        so on, but it only gives pending, done and active. I ran a small
        script on a VM:

        until [ `sleep 0.7` ]; do occi-compute show 109 | grep STATE; done;

        and triggered every command I could think of on the VM. When I
        tried poweroff and shutdown, it stayed in ACTIVE. That is why I
        think there might be a problem.

        I tried to resolve it on my own, but I don't know Ruby...


    Could you check the states with onevm show and confirm that the
    action (shutdown/poweroff) doesn't fail? Note that you will need
    ACPI activated on your VMs to run these actions.

    Cheers


        Thanks for the answer,
        Milos

        On 26.4.2013 11:23, Daniel Molina wrote:
        Hi,


        On 25 April 2013 09:28, Miloš Kozák <milos.ko...@lejmr.com> wrote:

            Hi,
            I am running OpenNebula 3.8.3 and the OCCI self-service
            portal. My problem is that the VM indication is misleading.
            There are 3 statuses - green, yellow, red. When I stop a VM
            it turns yellow, and if anything is wrong it turns red; that
            is perfectly correct, but the VM is indicated by green for
            shutdown, poweroff and all other statuses. I was trying to
            fix compute.js, but it didn't work out, so I assume there is
            a deeper problem? Can you confirm that?


        When using OCCI, the VM XML that is sent in an OCCI
        /compute/:id GET request includes the VM_STATE [1].

        VM_STATE=%w{INIT PENDING HOLD ACTIVE STOPPED SUSPENDED DONE FAILED
                    POWEROFF}

        The problem is that the states you are looking for are
        LCM_STATES.

        LCM_STATE=%w{LCM_INIT PROLOG BOOT RUNNING MIGRATE SAVE_STOP SAVE_SUSPEND
                     SAVE_MIGRATE PROLOG_MIGRATE PROLOG_RESUME EPILOG_STOP EPILOG
                     SHUTDOWN CANCEL FAILURE CLEANUP UNKNOWN HOTPLUG SHUTDOWN_POWEROFF
                     BOOT_UNKNOWN BOOT_POWEROFF BOOT_SUSPENDED BOOT_STOPPED}

        If you want to include this information, you have to modify
        the VirtualMachineOCCI class to include these states [2].
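
        For instance (a minimal sketch, not the upstream code), a helper
        in VirtualMachineOCCI.rb could fall back to the LCM state while
        the VM is ACTIVE, reusing the state_str and lcm_state_str helpers
        that VirtualMachineOCCI inherits from OpenNebula::VirtualMachine;
        the occi_state name is only illustrative:

        # Sketch only: expose a more detailed state from VirtualMachineOCCI.
        # state_str/lcm_state_str come from OpenNebula::VirtualMachine [1];
        # the method name occi_state is an example, not an existing API.
        def occi_state
            if state_str == 'ACTIVE'
                # While ACTIVE, the interesting detail (RUNNING, SHUTDOWN,
                # SHUTDOWN_POWEROFF, ...) lives in the LCM state
                lcm_state_str
            else
                state_str
            end
        end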

        Hope this helps

        [1] https://github.com/OpenNebula/one/blob/release-3.8.3/src/oca/ruby/OpenNebula/VirtualMachine.rb
        [2] https://github.com/OpenNebula/one/blob/release-3.8.3/src/cloud/occi/lib/VirtualMachineOCCI.rb


            Thank you, Milos














--
Join us at OpenNebulaConf2013 <http://opennebulaconf.com/> in Berlin, 24-26 September, 2013
--
Daniel Molina
Project Engineer
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
