Re: [one-users] VM running on multiple hosts

2014-07-06 Thread Miloš Kozák

Hi,

Recently I found that there is support for locks in libvirt; has anybody 
tried it? Does anybody use fcntl locking with LVM?
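
For anyone who wants to experiment with it, here is a rough sketch of what 
enabling libvirt's virtlockd-based locking looks like on a KVM host (untested 
here; the option names come from libvirt's qemu.conf/qemu-lockd.conf, and the 
lockspace directory must sit on storage shared by all hosts for the locks to 
protect against cross-host double starts):

# /etc/libvirt/qemu.conf on every KVM host
lock_manager = "lockd"
# /etc/libvirt/qemu-lockd.conf: for LVM-backed disks the lockspace can be keyed
# by LV UUID (lvm_lockspace_dir) or by file path (file_lockspace_dir); the
# directory below is only an example and must be on shared storage.
lvm_lockspace_dir = "/var/lib/one/datastores/lockd"
# restart the daemons afterwards
service virtlockd restart
service libvirtd restart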


Thanks

On 14-07-06 02:41 AM, Fabian Zimmermann wrote:

Hi,

On 03.07.2014 16:59, Jaime Melis wrote:

As far as I know, the shared_lvm hasn't been updated lately:
- https://github.com/OpenNebula/addon-shared-lvm-single-lock
- http://community.opennebula.org/shared_lvm

I am also having this issue in OpenNebula 4.6.0.

Fabian

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VM running on multiple hosts

2014-06-27 Thread Miloš Kozák

Hi,

That is great to know. Is it related to the improved monitoring in ONE? I 
am getting ready for trials of ONE 4.6.


We are using the shared_lvm driver, so I would like to know whether the new 
LVM driver is compatible with it, or whether I can expect some major issues? 
I haven't had enough time to check..


Thanks Milos


On 14-06-27 12:18 PM, Tino Vazquez wrote:

Hi,

Issues with delete and shutdown have been greatly improved in
OpenNebula 4.4+, I would recommend upgrading as far as possible.

Best,

-Tino

--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

--
Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
"To" and "cc" box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at ab...@c12g.com and delete the e-mail and
attachments and any copy from your system. C12G thanks you for your
cooperation.


On 26 June 2014 16:42, Steven Timm  wrote:

We have also seen this behavior in OpenNebula 3.2.
It appears that the failure mode occurs because the onevm delete
(or shutdown or migrate) doesn't correctly verify that the virtual machine
has gone away.
It sends the ACPI terminate signal to the virtual machine, but if that
fails the VM will keep running; no signal is ever sent to libvirt to kill
the machine regardless. OpenNebula deletes the disk.0 out from underneath
it, but that doesn't stop the VM from running: it stays running on the
deleted file handle.

On the plus side, I once was able to recover the full disk image
of a VM that shouldn't have been deleted, that way, by going
to the /proc file system and dd'ing from the still-open file
handle of the process.
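
For reference, that /proc trick needs nothing more than ls and dd; a minimal 
sketch with made-up PID and fd numbers:

# find the qemu process of the VM and its deleted-but-still-open disk image
ps aux | grep one-112                      # hypothetical VM, say it gives PID 12345
ls -l /proc/12345/fd | grep '(deleted)'    # shows e.g. fd 14 -> .../disk.0 (deleted)
# copy the still-open file handle out while the process keeps running
dd if=/proc/12345/fd/14 of=/var/tmp/disk.0.recovered bs=1M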

We've written a set of utilities to check the consistency
of the leases database with what is actually running on the cloud,
and alert us if there are any differences.
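
These are not Steve's actual utilities, but a minimal sketch of the same 
consistency check, assuming oneadmin on the frontend can ssh to every host 
(onevm list may truncate long host names, so the matching may need adjusting):

for host in $(onehost list | awk 'NR>1 {print $2}'); do
  ssh "$host" 'virsh list --all' | awk '/one-[0-9]+/ {print $2}' | sort > /tmp/actual.$host
  onevm list | awk -v h="$host" 'NR>1 && index($0, h) {print "one-" $1}' | sort > /tmp/expected.$host
  diff -u /tmp/expected.$host /tmp/actual.$host || echo "MISMATCH on $host"
done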

Steve Timm



On Thu, 26 Jun 2014, Milos Kozak wrote:


Hi, I would like to add that I have experienced it a few times with ONE
3.8..

On 6/26/2014 9:34 AM, Robert Tanase wrote:

  Hi all,

  We are using an OpenNebula 4.2 system with several hosts (KVM +
  network storage).

  Recently we have discovered, by having disk r/w issues on a VM, that
  after a delete - recreate action, the specific VM is running on two
  different hosts: the old placement host and the new placement host.

  We are using the hook system for host failure and a cron job every 5
  minutes which (re)deploys pending machines on the available running
  hosts.

  By checking the oned log files we couldn't find any abnormal behavior,
  so we are stuck.

  Please guide us to find the root cause of this issue, if possible.

  --
  Thank you,
  Robert Tanase



  ___
  Users mailing list
  Users@lists.opennebula.org
  http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



--
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Scientific Computing Division, Scientific Computing Services Quad.
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] XML-RPC and VM info

2013-10-13 Thread Miloš Kozák

Thanks, that is what I was looking for!

Milos

On 7.10.2013 10:42, Daniel Molina wrote:

Hi,


On 6 October 2013 15:25, Miloš Kozák <milos.ko...@lejmr.com> wrote:


Hi,
I am working on our own customer frontend communicating with ONE
using XML-RPC. At the moment I am implementing VNC transport using
websockify, therefore I need to find the HOST and PORT that my
socket should actually target.


Using one.vm.info I get only a limited set of information, and I
cannot find the VNC PORT and the host where the VM is deployed. I
checked the whole API but I didn't find an answer..


All the information is included in the vm.info call. 
You may find the following code in OpenNebulaVNC.rb useful:

https://github.com/OpenNebula/one/blob/master/src/sunstone/OpenNebulaVNC.rb#L161
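
In other words, both values are already in the VM document returned by 
one.vm.info; a rough command-line equivalent (VM id 112 is just an example, 
and a reasonably recent xmllint with --xpath is assumed):

onevm show -x 112 > vm.xml
xmllint --xpath 'string(//TEMPLATE/GRAPHICS/PORT)' vm.xml                    # VNC port, e.g. 6012
xmllint --xpath 'string(//HISTORY_RECORDS/HISTORY[last()]/HOSTNAME)' vm.xml  # host the VM runs on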

Cheers


Thank you for suggestions,
Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




--
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] XML-RPC and VM info

2013-10-06 Thread Miloš Kozák

Hi,
I am working on our own customer frontend communicating with ONE using 
XML-RPC. At the moment I am implementing VNC transport using 
websockify, therefore I need to find the HOST and PORT that my socket 
should actually target.



Using one.vm.info I get only a limited set of information, and I cannot 
find the VNC PORT and the host where the VM is deployed. I checked the 
whole API but I didn't find an answer..


Thank you for suggestions,
Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Simplified VM creation?

2013-09-04 Thread Miloš Kozák
Hi, could you please elaborate on this golden template or provide some links? 
Thank you

Best regards,
Miloš Kozák

On 5. 9. 2013 at 5:24, Shek Mohd Fahmi Abdul Latip wrote:

> Hi Pentium100,
>  
> What I can suggest to simplify VM deployment is developing a so-called 
> “golden image” and “golden template” in advance. From there, you just make 
> use of the context and cloning features to ease your deployment.
>  
> Just my 20cent.
>  
> Best regards,
> .fahmie
>  
> From: users-boun...@lists.opennebula.org 
> [mailto:users-boun...@lists.opennebula.org] On Behalf Of Pentium100
> Sent: Wednesday, September 04, 2013 7:31 PM
> To: users@lists.opennebula.org
> Subject: [one-users] Simplified VM creation?
>  
> Hello,
> 
> While trying out OpenNebula, I noticed that the VM creation process is quite 
> inconvenient if I want different VMs (as opposed to scaleout situation).
> 
> 1. Create the image (upload a new one or copy some other image), set it as 
> persistent (I don't want the VM to disappear if I shut it down).
> 2. Create a template that uses the image.
> 3. Finally create a VM from the template. There will only be one VM using 
> that template (because I don't really need two identical VMs).
> 
> To create 10 virtual servers (all the same "hardware" but different images) I 
> need to repeat steps 1-3 ten times.
>  
> It would be nice, if there was a way to simplify this. I can think of 3 ways 
> to do it:
> 
> 1. Skip the creation of a template. Create an image then create a VM based on 
> that image.
> 2. Do not define an image while creating a template. Assign the image when 
> creating the VM.
> 3. Have some "template" image that gets copied when a VM is created. The copy 
> should be persistent.
> 
> Is there a way to do it? Non-persistent images behave almost like option 3, 
> but accidentally shutting down a VM would mean data loss.
>  
> We have quite a few VMs, but they all are used as real servers would be - all 
> have different data (no scaleout) and none can be deleted easily.
>  
>  
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Strange behaviour

2013-08-19 Thread Miloš Kozák

Hello,
I would like to ask whether the same has happened to anybody else? I have 
been running ONE 3.8.3 for a few months (8). Everything worked perfectly 
until 13th Aug. Something went wrong and ONE deployed VMs onto the other 
host even though the VMs were already running on a host. Currently I have 
got two hosts (kvasi, sol), so there were VMs running on kvasi and other 
VMs running on sol. Eventually, VMs that were not supposed to run on sol 
were running on both sol and kvasi at the same time, and the same for the rest..


I am using shared storage with LVM over iSCSI (the shared_lvm driver). I am 
enclosing a URL to my Dropbox with the logs. I find it very disturbing that 
anything like that can happen. Even more concerning is that ONE did not know 
about it until I logged onto the hosts and verified the running VMs with 
virsh list..


Thanks for the answer,
Milos Kozak


https://www.dropbox.com/s/04aqd2vrd6cjkmu/logs.tar.bz2
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Problem with status

2013-06-10 Thread Miloš Kozák

Hi,
I have been running an instance of ONE 3.8.3. I have got two main nodes 
providing HA. Today I was updating VNETs (adding one new bridge). 
Unfortunately this bridge lies over the management interface, so when I 
changed the configuration HA reacted and switched over. So far so good. But 
when everything was done I could see that some VMs were in the FAILED state, 
yet when I checked them with virsh list they were defined and running 
properly. I let ONE be, but after a while all the FAILED VMs still stayed in 
the FAILED state. The worst thing happened when I tried to resubmit them: it 
left the original instance running on the original node and started a new 
instance on a different node. Since I don't have CLVM and my datastore uses 
shared_lvm, it allows a VM to be started on multiple hosts at the same time. 
Thankfully I stopped one instance before it could damage the FS.


Is there any way to make the hypervisor report the proper state, since the 
VMs are running but ONE assumes they are FAILED? Or to avoid this situation 
altogether? I find that ONE doesn't update the state once it has marked a 
VM as FAILED.
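
Until that is fixed, a manual safeguard is to check every host for the VM's 
deploy id before resubmitting; a tiny sketch using the two host names from 
this setup and an example deploy id:

for host in kvasi sol; do
  echo -n "$host: "
  ssh "$host" 'virsh list --all' | grep -w one-110 || echo "not defined here"
done
# resubmit only once no host still shows the domain as running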


Thanks, Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OCCI vm status indication

2013-06-06 Thread Miloš Kozák

Hi,
I tried to redefine the features, as you can see below, but the hypervisor 
still reports it as active.. According to my understanding and compute.js 
it should change to shutdown, shouldn't it? Is there any state diagram for 
STATE as there is for LCM_STATE?


Milos

BTW current:
[oneadmin@kvasi occi]$ onevm show 112
VIRTUAL MACHINE 112 INFORMATION
ID  : 112
NAME: one-112
USER: oneadmin
GROUP   : oneadmin
STATE   : ACTIVE
LCM_STATE   : SHUTDOWN_POWEROFF
RESCHED : No
HOST: kvasi.k13132.local
START TIME  : 06/07 08:02:30
END TIME: -
DEPLOY ID   : one-112

VIRTUAL MACHINE MONITORING
NET_TX  : 0K
NET_RX  : 0K
USED MEMORY : 0K
USED CPU: 0

PERMISSIONS
OWNER   : um-
GROUP   : ---
OTHER   : ---

VIRTUAL MACHINE TEMPLATE
CPU="1"
DISK=[
  CLONE="NO",
  DATASTORE="emc-spc",
  DATASTORE_ID="104",
  DEV_PREFIX="hd",
  DISK_ID="0",
  DRIVER="raw",
  IMAGE="ttylinux-per",
  IMAGE_ID="76",
  IMAGE_UNAME="oneadmin",
  PERSISTENT="YES",
  READONLY="NO",
  SAVE="YES",
  SOURCE="/dev/vg-c/lv-one-76",
  TARGET="hda",
  TM_MAD="shared_lvm",
  TYPE="FILE" ]
FEATURES=[
  ACPI="yes" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  PORT="6012",
  TYPE="vnc" ]
MEMORY="1024"
NAME="one-112"
OS=[
  ARCH="i686",
  BOOT="hd" ]
RAW=[
  TYPE="kvm" ]
TEMPLATE_ID="1"
VCPU="2"
VMID="112"

VIRTUAL MACHINE HISTORY
 SEQ HOST            REASON  START           TIME          PROLOG_TIME
   0 kvasi.k13132.lo user    06/07 08:02:56  0d 00h01m45s  0d 00h00m00s
   1 kvasi.k13132.lo none    06/07 08:04:56  0d 00h01m14s  0d 00h00m00s
[oneadmin@kvasi occi]$
[oneadmin@kvasi occi]$ occi-compute show 112

  112
  
  oneadmin
  1
  1024
  one-112
  ACTIVE
  

FILE
hda
  



On 6.6.2013 17:47, Daniel Molina wrote:

Hi Miloš,


On 6 June 2013 10:37, Miloš Kozák <milos.ko...@lejmr.com> wrote:


Hi,

Template:
ACPI="yes"
CPU="1"
DISK=[
  IMAGE="ttylinux-per",
  IMAGE_UNAME="oneadmin" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="vnc" ]
MEMORY="1024"
NAME="ttylinux"
OS=[
  ARCH="i686",
  BOOT="hd" ]
RAW=[
  TYPE="kvm" ]
TEMPLATE_ID="1"
VCPU="2"


States:

110 oneadmin oneadmin one-110 shut0 0K kvasi.k131   0d
00h01
occi-compute show 110

  110
  
  oneadmin
  1
  1024
  one-110
  ACTIVE
  

FILE
hda
  


After poweroff:

onevm show 110
VIRTUAL MACHINE 110 INFORMATION
ID  : 110
NAME: one-110
USER: oneadmin
GROUP   : oneadmin
STATE   : ACTIVE
LCM_STATE   : SHUTDOWN_POWEROFF
RESCHED : No
HOST: kvasi.k13132.local
START TIME  : 06/03 10:00:41
END TIME: -
DEPLOY ID   : one-110

VIRTUAL MACHINE MONITORING
NET_RX  : 0K
NET_TX  : 0K
USED MEMORY : 0K
USED CPU: 0

PERMISSIONS
OWNER   : um-
GROUP   : ---
OTHER   : ---

VIRTUAL MACHINE TEMPLATE
ACPI="yes"
CPU="1"
DISK=[
  CLONE="NO",
  DATASTORE="emc-spc",
  DATASTORE_ID="104",
  DEV_PREFIX="hd",
  DISK_ID="0",
  DRIVER="raw",
  IMAGE="ttylinux-per",
  IMAGE_ID="76",
  IMAGE_UNAME="oneadmin",
  PERSISTENT="YES",
  READONLY="NO",
  SAVE="YES",
  SOURCE="/dev/vg-c/lv-one-76",
  TARGET="hda",
  TM_MAD="shared_lvm",
  TYPE="FILE" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  PORT="6010",
  TYPE="vnc" ]
MEMORY="1024"
NAME="one-110"
OS=[
  ARCH="i686",
  BOOT="hd" ]
RAW=[
  TYPE="kvm" ]
TEMPLATE_ID="1"
VCPU="2"
VMID="110"

VIRTUAL MACHINE HISTORY
 SEQ HOST            REASON  START           TIME          PROLOG_TIME
   0 kvasi.k13132.lo user    06/03 10:00:56  0d 00h04m10s  0d 00h00m00s
   1 kvasi.k13132.lo none    06/03 10:05:26  0d 00h00m26s  0d 00h00m00s
[oneadmin@kvasi occi]$

Re: [one-users] OCCI vm status indication

2013-06-06 Thread Miloš Kozák

Hi,

Template:
ACPI="yes"
CPU="1"
DISK=[
  IMAGE="ttylinux-per",
  IMAGE_UNAME="oneadmin" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="vnc" ]
MEMORY="1024"
NAME="ttylinux"
OS=[
  ARCH="i686",
  BOOT="hd" ]
RAW=[
  TYPE="kvm" ]
TEMPLATE_ID="1"
VCPU="2"


States:

110 oneadmin oneadmin one-110 shut0  0K kvasi.k131 0d 00h01
occi-compute show 110

  110
  
  oneadmin
  1
  1024
  one-110
  ACTIVE
  

FILE
hda
  


After poweroff:

onevm show 110
VIRTUAL MACHINE 110 INFORMATION
ID  : 110
NAME: one-110
USER: oneadmin
GROUP   : oneadmin
STATE   : ACTIVE
LCM_STATE   : SHUTDOWN_POWEROFF
RESCHED : No
HOST: kvasi.k13132.local
START TIME  : 06/03 10:00:41
END TIME: -
DEPLOY ID   : one-110

VIRTUAL MACHINE MONITORING
NET_RX  : 0K
NET_TX  : 0K
USED MEMORY : 0K
USED CPU: 0

PERMISSIONS
OWNER   : um-
GROUP   : ---
OTHER   : ---

VIRTUAL MACHINE TEMPLATE
ACPI="yes"
CPU="1"
DISK=[
  CLONE="NO",
  DATASTORE="emc-spc",
  DATASTORE_ID="104",
  DEV_PREFIX="hd",
  DISK_ID="0",
  DRIVER="raw",
  IMAGE="ttylinux-per",
  IMAGE_ID="76",
  IMAGE_UNAME="oneadmin",
  PERSISTENT="YES",
  READONLY="NO",
  SAVE="YES",
  SOURCE="/dev/vg-c/lv-one-76",
  TARGET="hda",
  TM_MAD="shared_lvm",
  TYPE="FILE" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  PORT="6010",
  TYPE="vnc" ]
MEMORY="1024"
NAME="one-110"
OS=[
  ARCH="i686",
  BOOT="hd" ]
RAW=[
  TYPE="kvm" ]
TEMPLATE_ID="1"
VCPU="2"
VMID="110"

VIRTUAL MACHINE HISTORY
 SEQ HOST            REASON  START           TIME          PROLOG_TIME
   0 kvasi.k13132.lo user    06/03 10:00:56  0d 00h04m10s  0d 00h00m00s
   1 kvasi.k13132.lo none    06/03 10:05:26  0d 00h00m26s  0d 00h00m00s
[oneadmin@kvasi occi]$ occi-compute show 110

  110
  
  oneadmin
  1
  1024
  one-110
  ACTIVE
  

FILE
hda
  



Is that all you need to know? BTW it is ONE 3.8.3.


BTW I am sorry for resending. First, I sent it directly outside of the 
mailing list..


On 3.6.2013 9:53, Daniel Molina wrote:

Hi,


On 2 June 2013 10:10, Miloš Kozák <milos.ko...@lejmr.com> wrote:


Hi,
thank you for the answer. I tried to verify that. It is quite easy
to sent LCM_STATES to XML, thought. But at this point I would
rather tried to resolve it with VM_STATE. I am afraid that there
might be a bug. Source from compute.js:

function VMStateBulletStr(vm){
var vm_state = vm.COMPUTE.STATE;
var state_html = "";
switch (vm_state) {
case "INIT":
case "PENDING":
case "HOLD":
case "STOPPED":
case "SUSPENDED":
case "POWEROFF":
state_html = '<img style="display:inline-block;margin-right:5px;;" src="images/yellow_bullet.png" alt="'+vm_state+'" title="'+vm_state+'" />';
break;
case "ACTIVE":
case "DONE":
state_html = '<img style="display:inline-block;margin-right:5px;" src="images/green_bullet.png" alt="'+vm_state+'" title="'+vm_state+'"/>';
break;
case "FAILED":
state_html = '<img style="display:inline-block;margin-right:5px;" src="images/red_bullet.png" alt="'+vm_state+'" title="'+vm_state+'"/>';
break;
};
return state_html;
}

As I read it, the XML should contain states as poweroff and so on,
but it gives only done, pending, done and active. I ran small
script on a VM:

until [ `sleep 0.7` ]; do  occi-compute show 109 | grep STATE;  done;

And triggered all thinkable commands on the VM. When I tryed
poweroff and shutdown it prevailed in ACTIVE. That is why I think
there might by a problem..

I tried to resolve it on my own, but I dont know ruby


Could you check the states with onevm show and confirm that the action 
(shutdown/power off) doesn't fail. Note that you will need ACPI 
activated on your VMs to run these actions.


Cheers


Thanks for answer,
Milos

On 26.4.2013 11:23, Daniel Molina wrote:

Hi ,


On 25 April 2013 09:28, Miloš Kozák <milos.ko...@lejmr.com> wrote:

Hi,
I am running opennebula 3.8.3 and OCCI self-service portal.
My problem is that the VM indication is misleading. There 3
statuses - green, yellow, red. When I stop VM it turns to
yellow, if anything is wrong red.. that is perfectly correct
but the VM is indicated by green for shutdown, poweroff and
all other statuses.. I was trying to fix compute.js, but it
didnt worked out.. So I assume there is a deeper problem? Can
you confirm that?


When using OCCI, the VM xml that is sent in an OCCI /compute/:id GET request includes the VM_STATE [1].

Re: [one-users] OCCI vm status indication

2013-06-04 Thread Miloš Kozák

Hi,
thank you for the answer. I tried to verify that. It is quite easy to 
send the LCM_STATEs to the XML, though. But at this point I would rather 
try to resolve it with VM_STATE. I am afraid that there might be a bug. 
Source from compute.js:


function VMStateBulletStr(vm){
var vm_state = vm.COMPUTE.STATE;
var state_html = "";
switch (vm_state) {
case "INIT":
case "PENDING":
case "HOLD":
case "STOPPED":
case "SUSPENDED":
case "POWEROFF":
state_html = 'style="display:inline-block;margin-right:5px;;" 
src="images/yellow_bullet.png" alt="'+vm_state+'" title="'+vm_state+'" />';

break;
case "ACTIVE":
case "DONE":
state_html = 'style="display:inline-block;margin-right:5px;" 
src="images/green_bullet.png" alt="'+vm_state+'" title="'+vm_state+'"/>';

break;
case "FAILED":
state_html = 'style="display:inline-block;margin-right:5px;" 
src="images/red_bullet.png" alt="'+vm_state+'" title="'+vm_state+'"/>';

break;
};
return state_html;
}

As I read it, the XML should contain states such as poweroff and so on, but 
it gives only done, pending, done and active. I ran a small script on a VM:


until [ `sleep 0.7` ]; do  occi-compute show 109 | grep STATE; done;

And triggered all conceivable commands on the VM. When I tried poweroff 
and shutdown it stayed in ACTIVE. That is why I think there might be 
a problem..


I tried to resolve it on my own, but I don't know Ruby.


Thanks for answer,
Milos

On 26.4.2013 11:23, Daniel Molina wrote:

Hi ,


On 25 April 2013 09:28, Miloš Kozák <milos.ko...@lejmr.com> wrote:


Hi,
I am running opennebula 3.8.3 and OCCI self-service portal. My
problem is that the VM indication is misleading. There 3 statuses
- green, yellow, red. When I stop VM it turns to yellow, if
anything is wrong red.. that is perfectly correct but the VM is
indicated by green for shutdown, poweroff and all other statuses..
I was trying to fix compute.js, but it didnt worked out.. So I
assume there is a deeper problem? Can you confirm that?


When using OCCI, the VM xml that is sent in an OCCI /compute/:id GET 
request includes the VM_STATE [1].


VM_STATE=%w{INIT PENDING HOLD ACTIVE STOPPED SUSPENDED DONE FAILED
POWEROFF}

The problem is that the states you are looking for are LCM_STATES.

LCM_STATE=%w{LCM_INIT PROLOG BOOT RUNNING MIGRATE SAVE_STOP SAVE_SUSPEND
SAVE_MIGRATE PROLOG_MIGRATE PROLOG_RESUME EPILOG_STOP EPILOG
SHUTDOWN CANCEL FAILURE CLEANUP UNKNOWN HOTPLUG SHUTDOWN_POWEROFF
BOOT_UNKNOWN BOOT_POWEROFF BOOT_SUSPENDED BOOT_STOPPED}

If you want to include this information you have to modify the 
VirtualMachineOCCI class to include these states [2]
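
For a quick check from the shell, independent of OCCI, the raw VM document 
already carries both fields; counting from zero in the two lists above, 
ACTIVE is STATE 3 and SHUTDOWN_POWEROFF is LCM_STATE 18:

onevm show -x 110 | grep -E '<STATE>|<LCM_STATE>'
#   <STATE>3</STATE>          -> ACTIVE
#   <LCM_STATE>18</LCM_STATE> -> SHUTDOWN_POWEROFF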


Hope this helps

[1] 
https://github.com/OpenNebula/one/blob/release-3.8.3/src/oca/ruby/OpenNebula/VirtualMachine.rb
[2] 
https://github.com/OpenNebula/one/blob/release-3.8.3/src/cloud/occi/lib/VirtualMachineOCCI.rb



Thank you, Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




--
Daniel Molina


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] FW:

2013-05-02 Thread Miloš Kozák
http://kyokushin-aoki.sakura.ne.jp/www/tvojdi.php

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] OCCI vm status indication

2013-04-25 Thread Miloš Kozák

Hi,
I am running OpenNebula 3.8.3 and the OCCI self-service portal. My problem 
is that the VM indication is misleading. There are 3 statuses - green, 
yellow, red. When I stop a VM it turns yellow, and if anything is wrong, 
red; that is perfectly correct, but the VM is indicated green for shutdown, 
poweroff and all other statuses.. I was trying to fix compute.js, but it 
didn't work out.. So I assume there is a deeper problem? Can you confirm 
that?


Thank you, Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Marketplace centos

2013-03-14 Thread Miloš Kozák
Hello, I am having a hard time enabling a password for the CentOS image 
which is in the marketplace. I have got a customer who needs it. So I booted 
into single-user mode, ran passwd as usual, and eventually nothing had 
happened (after a resubmit, for sure).


So I am wondering whether there is anything hidden behind it?

Thanks Milos

BTW, for Debian this procedure worked!
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] opennebula upgrade to 3.8.3 minor errors

2013-03-10 Thread Miloš Kozák

Hi, is there any progress, or any workaround to show the VM logs in Sunstone?


On 7.2.2013 12:11, Andreas Calvo Gómez wrote:

Daniel,
On 06/02/13 14:22, Daniel Molina wrote:
On 6 February 2013 12:36, Andreas Calvo Gómez 
 wrote:

Daniel,

On 06/02/13 12:34, Daniel Molina wrote:
On 6 February 2013 12:17, Andreas Calvo Gómez 


wrote:

Hi Daniel!

On 06/02/13 11:15, Daniel Molina wrote:

Hi,

On 1 February 2013 13:05, Andreas Calvo Gómez 


wrote:

We have performed a update of both the OS (CentOS 6.2 to 6.3) and
OpenNebula
(3.8.1 to 3.8.3), and are facing some errors that weren't in the 
older

release:
- Logs can't be viewed from the webUI: selecting the "VM Log" 
tab shows

an
error message "Log for VM XXX not available".
There was a bug in previous releases with the vm log directory, 
but it

should be fixed in 3.8.3. Did you restart the sunstone-server after
upgrading?
Yes, several times because Sunstone or ONE becomes unresponsive 
from time

to
time.

Is Sunstone running on the same machine as OpenNebula?
Yes, we have a frontend with Sunstone and OpenNebula (using NFS to 
share the

common storage).

The new vms logs should be in:
 system-wide: LOG_LOCATION + "/vms/#{id}.log"
 self-contained: LOG_LOCATION + "/vms/#{id}/vm.log"
the former directory was
 system-wide: LOG_LOCATION + "/#{id}.log"
 self-contained: LOG_LOCATION + "/#{id}/vm.log"

The logs from vms created in old versions have to be moved/linked to
this new directory. I will update the upgrade guide with this step,
sorry for the inconvenience..
However, logs for new VMs are created under /var/log/one, which I 
thought is where ONE is looking to show them.
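
For a system-wide installation (LOG_LOCATION=/var/log/one) the relinking step 
Daniel mentions can be a one-off loop along these lines (untested sketch):

mkdir -p /var/log/one/vms
for f in /var/log/one/[0-9]*.log; do
    [ -e "$f" ] && ln -s "$f" "/var/log/one/vms/$(basename "$f")"
done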




- One node can't establish network connections: started VMs on one
specific
node don't have network connectivity, while they get the IP 
address by

context. All other nodes work fine.

Please check the following link:


http://wiki.opennebula.org/faq#my_vm_is_running_but_i_get_no_answer_from_pings_what_s_wrong 



Maybe there is something wrong with the host configuration.
I'll try that link, but all nodes performed the same update 
procedure and

only one is not working.

- Sunstone/ONE seems to be stucked from time to time when 
performing

large
operations: as the old version (3.8.1), Sunstone becomes 
unresponsive

and
it
is necessary to shut down all applications (one, sunstone, occi) 
and

start
them again. It usually happens when performing an operation with a
large
set
of VMs.

For all errors, no log in /var/log/one give any hint.
Is there any other way to try to narrow down the root source of 
this

problems?

What are exactly the actions you performed and with how many vms?
Could you send us your sched.conf and oned.conf files?
When a batch command is executed (usually deleting 50+ VMs), 
Sunstone or

ONE
becomes unresponsive and must be restarted.
Yes, attached you will find our config files.

Can you interact with OpenNebula through the cli, or is Sunstone the
only one that becomes unresponsive.

Everything becomes unresponsive, but not at the same time.
When a user performs a large batch operation, the same user can't 
run more

commands, but other users are fine for a while.
After some time, users can't log into Sunstone and other services 
(such as

OCCI) become unresponsive too.


Could you send us the oned.log part when this operation is performed
I've attached a compressed file with all logs (besides VM related 
ones) performing the batch operation.

Hope it's usefull.


Cheers





___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Internal marketplace

2013-02-06 Thread Miloš Kozák

Hello,
I am facing a problem with Windows appliances which we need to deploy onto 
our ONE infrastructure. A feasible solution seems to be running our own 
internal marketplace. I was searching for information on how to make it 
happen, but I could not find any reasonable document. So my question is: is 
there any way to run an appliance repository on our internal network?


Thank you, Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] iSCSI multipath

2013-01-30 Thread Miloš Kozák
Hi, thank you. I checked the source code and found it is very similar to 
the LVM TM/datastore drivers already included in ONE, only you added 
lvchange -ay DEV. Do you run CLVM along with that, or not?


I worry about parallel changes to the LVM metadata, which might destroy it. 
Given the sequential behaviour it is probably not an issue, but can you 
prove it to me? Or is it highly dangerous to run shared_lvm without CLVM?

Thanks, Milos


On 30.1.2013 10:09, Marlok Tamás wrote:

Hi,

We have a custom datastore and transfer manager driver which runs 
the lvchange command when it is needed.

In order to work, you have to enable it in oned.conf.

for example:

DATASTORE_MAD = [
executable = "one_datastore",
arguments  = "-t 10 -d fs,vmware,iscsi,lvm,shared_lvm"]

TM_MAD = [
executable = "one_tm",
arguments  = "-t 10 -d 
dummy,lvm,shared,qcow2,ssh,vmware,iscsi,shared_lvm" ]


After that, you can create a datastore with the shared_lvm TM and 
datastore drivers.
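
A minimal sketch of what the datastore definition might look like (the 
VG_NAME attribute is borrowed from the stock LVM drivers and may not be what 
this add-on expects; check its README for the exact attribute names):

cat > shared_lvm.ds <<'EOF'
NAME    = "shared_lvm_ds"
DS_MAD  = "shared_lvm"
TM_MAD  = "shared_lvm"
VG_NAME = "vg-one"      # assumed attribute: the volume group on the shared disk
EOF
onedatastore create shared_lvm.ds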


The only limitation is that you can't live-migrate VMs. We have a 
working solution for that as well, but it is still untested. I can send 
you that too, if you want to help us test it.


Anyway, here are the drivers, feel free to use or modify it.
https://dl.dropbox.com/u/140123/shared_lvm.tar.gz

--
Cheers,
Marlok Tamas
MTA Sztaki



On Thu, Jan 24, 2013 at 11:32 PM, Mihály Héder
<mihaly.he...@sztaki.mta.hu> wrote:


Hi,

Well, if you can run the lvs or lvscan on at least one server
successfully, then the metadata is probably fine.
We had similar issues before we learned how to exclude unnecessary
block devices in the lvm config.

The thing is that lvscan and lvs will try to check _every_ potential
block device by default for LVM partitions. If you are lucky, this is
only annoying, because it will throw 'can't read /dev/sdX' or similar
messages. However, if you are using dm-multipath, you will have one
device for each path, like /dev/sdr _plus_ the aggregated device with
the name you have configured in multipath.conf (/dev/mapper/yourname)
which is what you actually need. LVM did not quite understand this situation
and got stuck on the individual path devices, so we configured it to
look for LVM only in the right place. In the man page of lvm.conf, look for
the devices / scan and filter options. There are also quite good
examples in the comments there.
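
As an illustration, a filter along these lines in the devices section of 
/etc/lvm/lvm.conf accepts only the multipath-aggregated device (plus a local 
system disk) and rejects the per-path /dev/sdX devices; the device names are 
placeholders:

devices {
    # accept the dm-multipath alias and the local disk, reject everything else
    filter = [ "a|^/dev/mapper/yourname|", "a|^/dev/sda|", "r|.*|" ]
}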

Also, there could be a much simpler explanation to the issue:
something with the iSCSI connection or multipath that are one layer
below.

I hope this helps.

Cheers
Mihály

On 24 January 2013 23:18, Miloš Kozák <milos.ko...@lejmr.com> wrote:
> Hi, thank you. I tried to update the TM ln script, which works, but it is
> not a clean solution. So I will try to write the hook code and then we can
> discuss it.
>
> I deployed a few VMs and now the lvs command freezes on the other server. I
> have not set up clvm; do you think it could be caused by LVM metadata
> corruption? The thing is I can no longer start a VM on the other server.
>
> Miloš
>
> On 24.1.2013 23:10, Mihály Héder wrote:
>
>> Hi!
>>
>> We solve this problem via hooks that are activating the LV-s for us
>> when we start/migrate a VM. Unfortunately I will be out of office
>> until early next week but then I will consult with my colleague who
>> did the actual coding of this part and we will share the code.
>>
>> Cheers
>> Mihály
>>
>> On 24 January 2013 20:15, Miloš Kozák <milos.ko...@lejmr.com> wrote:
>>>
>>> Hi, I have just set it up having two hosts with shared
blockdevice. On
>>> top
>>> of that LVM, as discussed earlier. Triggering lvs I can see
all logical
>>> volumes. When I create a new LV  on the other server, I can
see the LV
>>> being
>>> inactive, so I have to run lvchange -ay VG/LV enable it then
this LV can
>>> be
>>> used for VM..
>>>
>>> Is there any trick howto auto enable newly created LV on every
host?
>>>
>>> Thanks Milos
>>>
>>>> On 22.1.2013 18:22, Mihály Héder wrote:
>>>
>>>> Hi!
>>>>
>>>> You need to look at locking_type in the lvm.conf manual [1]. The
>>>> default - locking in a local directory - is ok for the
frontend, and
>>>> type 4 is read-only. However, you should not forget that this
only
>>>> prevents damaging thing by the lvm commands. If you start to
write
>>>> zeros to y

Re: [one-users] iSCSI multipath

2013-01-24 Thread Miloš Kozák
Hi, I have just set it up with two hosts and a shared block device, with LVM 
on top of that, as discussed earlier. Triggering lvs I can see all logical 
volumes. When I create a new LV on the other server, I can see the LV is 
inactive, so I have to run lvchange -ay VG/LV to enable it; then this LV can 
be used for a VM..


Is there any trick to auto-enable a newly created LV on every host?
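
One pragmatic workaround is to activate the LV on the destination host from 
the TM ln/clone script (or a hook) just before the VM starts; a rough sketch 
with placeholder variable and LV names:

# run from the frontend inside tm ln/clone, where DST_HOST and the LV path
# are whatever the script has already parsed out of its arguments
ssh "$DST_HOST" "sudo lvchange -ay /dev/vg-one/lv-one-123"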

Thanks Milos

On 22.1.2013 18:22, Mihály Héder wrote:

Hi!

You need to look at locking_type in the lvm.conf manual [1]. The
default - locking in a local directory - is ok for the frontend, and
type 4 is read-only. However, you should not forget that this only
prevents damaging thing by the lvm commands. If you start to write
zeros to your disk with the dd command for example, that will kill
your partition regardless the lvm setting. So this is against user or
middleware errors mainly, not against malicious attacks.
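
Concretely, that boils down to a one-line difference in /etc/lvm/lvm.conf 
(see the lvm.conf man page referenced below; an untested sketch):

# on the frontend, the only place allowed to change LVM metadata (default local locking):
global { locking_type = 1 }
# on the host servers, metadata read-only:
global { locking_type = 4 }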

Cheers
Mihály Héder
MTA SZTAKI

[1] http://linux.die.net/man/5/lvm.conf

On 21 January 2013 18:58, Miloš Kozák  wrote:

Oh snap, that sounds great I didn't know about that.. it makes all easier.
In this scenario only frontend can work with LVM, so no issues of concurrent
change. Only one last think to make it really safe against that. Is there
any way to suppress LVM changes from hosts, make it read only? And let it RW
at frontend?

Thanks


On 21.1.2013 18:50, Mihály Héder wrote:


Hi,

no, you don't have to do any of that. Also, nebula doesn't have to
care about LVM metadata at all and therefore there is no corresponding
function in it. At /etc/lvm there is no metadata, only configuration
files.

Lvm metadata simply sits somewhere at the beginning of your
iscsi-shared disk, like a partition table. So it is on the storage
that is accessed by all your hosts, and no distribution is necessary.
Nebula frontend simply issues lvcreate, lvchange, etc, on this shared
disk and those commands will manipulate the metadata.

It is really LVM's internal business, many layers below opennebula.
All you have to make sure that you don't run these commands
concurrently  from multiple hosts on the same iscsi-attached disk,
because then they could interfere with each other. This setting is
what you have to indicate in /etc/lvm on the server hosts.

Cheers
Mihály

On 21 January 2013 18:37, Miloš Kozák  wrote:

Thank you. does it mean, that I can distribute metadata files located in
/etc/lvm on frontend onto other hosts and these hosts will see my logical
volumes? Is there any code in nebula which would provide it? Or I need to
update DS scripts to update/distribute LVM metadata among servers?

Thanks, Milos

On 21.1.2013 18:29, Mihály Héder wrote:


Hi,

lvm metadata[1] is simply stored on the disk. In the setup we are
discussing this happens to be a  shared virtual disk on the storage,
so any other hosts that are attaching the same virtual disk should see
the changes as they happen, provided that they re-read the disk. This
re-reading step is what you can trigger with lvscan, but nowadays that
seems to be unnecessary. For us it works with Centos 6.3 so I guess Sc
Linux should be fine as well.

Cheers
Mihály


[1]

http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_metadata.html

On 21 January 2013 12:53, Miloš Kozák  wrote:

Hi,
thank you for great answer. As I wrote my objective is to avoid as much
of
clustering sw (pacemaker,..) as possible, so clvm is one of these
things
I
feel bad about them in my configuration.. Therefore I would rather let
nebula manage LVM metadata in the first place as I you wrote. Only one
last
thing I dont understand is a way nebula distributes LVM metadata?

Is kernel in Scientific Linux 6.3 new enought to LVM issue you
mentioned?

Thanks Milos




On 21.1.2013 12:34, Mihály Héder wrote:


Hi!

Last time we could test an Equalogic it did not have option for
create/configure Virtual Disks inside in it by an API, so I think the
iSCSI driver is not an alternative, as it would require a
configuration step per virtual machine on the storage.

However, you can use your storage just fine in a shared LVM scenario.
You need to consider two different things:
-the LVM metadata, and the actual VM data on the partitions. It is
true, that the concurrent modification of the metadata should be
avoided as in theory it can damage the whole virtual group. You could
use clvm which avoids that by clustered locking, and then every
participating machine can safely create/modify/delete LV-s. However,
in a nebula setup this is not necessary in every case: you can make
the LVM metadata read only on your host servers, and let only the
frontend modify it. Then it can use local locking that does not
require clvm.
-of course the host servers can write the data inside the partitions
regardless that the metadata is read-only for them. It should work
just fine as long as you don't start two VMs for one partition.

We are running this setup with a dual controller Dell

Re: [one-users] iSCSI multipath

2013-01-21 Thread Miloš Kozák
Oh snap, that sounds great, I didn't know about that.. it makes everything 
easier. In this scenario only the frontend works with LVM, so there are no 
issues with concurrent changes. Only one last thing to make it really safe 
against that: is there any way to suppress LVM changes from the hosts, i.e. 
make it read-only there, and leave it RW on the frontend?


Thanks


On 21.1.2013 18:50, Mihály Héder wrote:

Hi,

no, you don't have to do any of that. Also, nebula doesn't have to
care about LVM metadata at all and therefore there is no corresponding
function in it. At /etc/lvm there is no metadata, only configuration
files.

Lvm metadata simply sits somewhere at the beginning of your
iscsi-shared disk, like a partition table. So it is on the storage
that is accessed by all your hosts, and no distribution is necessary.
Nebula frontend simply issues lvcreate, lvchange, etc, on this shared
disk and those commands will manipulate the metadata.

It is really LVM's internal business, many layers below opennebula.
All you have to make sure that you don't run these commands
concurrently  from multiple hosts on the same iscsi-attached disk,
because then they could interfere with each other. This setting is
what you have to indicate in /etc/lvm on the server hosts.

Cheers
Mihály

On 21 January 2013 18:37, Miloš Kozák  wrote:

Thank you. does it mean, that I can distribute metadata files located in
/etc/lvm on frontend onto other hosts and these hosts will see my logical
volumes? Is there any code in nebula which would provide it? Or I need to
update DS scripts to update/distribute LVM metadata among servers?

Thanks, Milos

On 21.1.2013 18:29, Mihály Héder wrote:


Hi,

lvm metadata[1] is simply stored on the disk. In the setup we are
discussing this happens to be a  shared virtual disk on the storage,
so any other hosts that are attaching the same virtual disk should see
the changes as they happen, provided that they re-read the disk. This
re-reading step is what you can trigger with lvscan, but nowadays that
seems to be unnecessary. For us it works with Centos 6.3 so I guess Sc
Linux should be fine as well.

Cheers
Mihály


[1]
http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_metadata.html

On 21 January 2013 12:53, Miloš Kozák  wrote:

Hi,
thank you for great answer. As I wrote my objective is to avoid as much
of
clustering sw (pacemaker,..) as possible, so clvm is one of these things
I
feel bad about them in my configuration.. Therefore I would rather let
nebula manage LVM metadata in the first place as I you wrote. Only one
last
thing I dont understand is a way nebula distributes LVM metadata?

Is kernel in Scientific Linux 6.3 new enought to LVM issue you mentioned?

Thanks Milos




On 21.1.2013 12:34, Mihály Héder wrote:


Hi!

Last time we could test an Equalogic it did not have option for
create/configure Virtual Disks inside in it by an API, so I think the
iSCSI driver is not an alternative, as it would require a
configuration step per virtual machine on the storage.

However, you can use your storage just fine in a shared LVM scenario.
You need to consider two different things:
-the LVM metadata, and the actual VM data on the partitions. It is
true, that the concurrent modification of the metadata should be
avoided as in theory it can damage the whole virtual group. You could
use clvm which avoids that by clustered locking, and then every
participating machine can safely create/modify/delete LV-s. However,
in a nebula setup this is not necessary in every case: you can make
the LVM metadata read only on your host servers, and let only the
frontend modify it. Then it can use local locking that does not
require clvm.
-of course the host servers can write the data inside the partitions
regardless that the metadata is read-only for them. It should work
just fine as long as you don't start two VMs for one partition.

We are running this setup with a dual controller Dell MD3600 storage
without issues so far. Before that, we used to do the same with XEN
machines for years on an older EMC (that was before nebula). Now with
nebula we have been using a home-grown module for doing that, which I
can send you any time - we plan to submit that as a feature
enhancement anyway. Also, there seems to be a similar shared LVM
module in the nebula upstream which we could not get to work yet, but
did not try much.

The plus side of this setup is that you can make live migration work
nicely. There are two points to consider however: once you set the LVM
metadata read-only you wont be able to modify the local LVMs in your
servers, if there are any. Also, in older kernels, when you modified
the LVM on one machine the others did not get notified about the
changes, so you had to issue an lvs command. However in new kernels
this issue seems to be solved, the LVs get instantly updated. I don't
know when and what exactly changed though.

Cheers
Mihály Héder
MTA SZTAKI ITAK

On 18 January 2013 08:57, Mi

Re: [one-users] iSCSI multipath

2013-01-21 Thread Miloš Kozák
Thank you. Does it mean that I can distribute the metadata files located in 
/etc/lvm on the frontend onto the other hosts, and these hosts will then see 
my logical volumes? Is there any code in Nebula which would provide this? Or 
do I need to update the DS scripts to update/distribute the LVM metadata 
among the servers?


Thanks, Milos

On 21.1.2013 18:29, Mihály Héder wrote:

Hi,

lvm metadata[1] is simply stored on the disk. In the setup we are
discussing this happens to be a  shared virtual disk on the storage,
so any other hosts that are attaching the same virtual disk should see
the changes as they happen, provided that they re-read the disk. This
re-reading step is what you can trigger with lvscan, but nowadays that
seems to be unnecessary. For us it works with Centos 6.3 so I guess Sc
Linux should be fine as well.

Cheers
Mihály


[1] 
http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_metadata.html

On 21 January 2013 12:53, Miloš Kozák  wrote:

Hi,
thank you for great answer. As I wrote my objective is to avoid as much of
clustering sw (pacemaker,..) as possible, so clvm is one of these things I
feel bad about them in my configuration.. Therefore I would rather let
nebula manage LVM metadata in the first place as I you wrote. Only one last
thing I dont understand is a way nebula distributes LVM metadata?

Is kernel in Scientific Linux 6.3 new enought to LVM issue you mentioned?

Thanks Milos




On 21.1.2013 12:34, Mihály Héder wrote:


Hi!

Last time we could test an Equalogic it did not have option for
create/configure Virtual Disks inside in it by an API, so I think the
iSCSI driver is not an alternative, as it would require a
configuration step per virtual machine on the storage.

However, you can use your storage just fine in a shared LVM scenario.
You need to consider two different things:
-the LVM metadata, and the actual VM data on the partitions. It is
true, that the concurrent modification of the metadata should be
avoided as in theory it can damage the whole virtual group. You could
use clvm which avoids that by clustered locking, and then every
participating machine can safely create/modify/delete LV-s. However,
in a nebula setup this is not necessary in every case: you can make
the LVM metadata read only on your host servers, and let only the
frontend modify it. Then it can use local locking that does not
require clvm.
-of course the host servers can write the data inside the partitions
regardless that the metadata is read-only for them. It should work
just fine as long as you don't start two VMs for one partition.

We are running this setup with a dual controller Dell MD3600 storage
without issues so far. Before that, we used to do the same with XEN
machines for years on an older EMC (that was before nebula). Now with
nebula we have been using a home-grown module for doing that, which I
can send you any time - we plan to submit that as a feature
enhancement anyway. Also, there seems to be a similar shared LVM
module in the nebula upstream which we could not get to work yet, but
did not try much.

The plus side of this setup is that you can make live migration work
nicely. There are two points to consider however: once you set the LVM
metadata read-only you wont be able to modify the local LVMs in your
servers, if there are any. Also, in older kernels, when you modified
the LVM on one machine the others did not get notified about the
changes, so you had to issue an lvs command. However in new kernels
this issue seems to be solved, the LVs get instantly updated. I don't
know when and what exactly changed though.

Cheers
Mihály Héder
MTA SZTAKI ITAK

On 18 January 2013 08:57, Miloš Kozák  wrote:

Hi, I am setting up a small installation of opennebula with sharedstorage
using iSCSI. THe storage is Equilogic EMC with two controllers. Nowadays
we
have only two host servers so we use backed direct connection between
storage and each server, see attachment. For this purpose we set up
dm-multipath. Cause in the future we want to add other servers and some
other technology will be necessary in the network segment. Thesedays we
try
to make it as same as possible with future topology from protocols point
of
view.

My question is related to the way how to define datastore, which driver
and
TM is the best and which?

My primal objective is to avoid GFS2 or any other cluster filesystem I
would
prefer to keep datastore as block devices. Only option I see is to use
LVM
but I worry about concurent writes isn't it a problem? I was googling a
bit
and I found I would need to set up clvm - is it really necessary?

Or is better to use iSCSI driver, drop the dm-multipath and hope?

Thanks, Milos

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] iSCSI multipath

2013-01-17 Thread Miloš Kozák
Hi, I am setting up a small installation of OpenNebula with shared storage 
using iSCSI. The storage is an EqualLogic EMC with two controllers. Nowadays 
we have only two host servers, so we use a back-to-back direct connection 
between the storage and each server, see the attachment. For this purpose we 
set up dm-multipath, because in the future we want to add other servers and 
some other technology will be necessary in the network segment. These days 
we try to make it as similar as possible to the future topology from the 
protocol point of view.


My question is related to the way the datastore should be defined: which 
datastore driver and which TM driver are the best?


My primary objective is to avoid GFS2 or any other cluster filesystem; I 
would prefer to keep the datastore as block devices. The only option I see 
is to use LVM, but I worry about concurrent writes - isn't that a problem? 
I was googling a bit and I found I would need to set up clvm - is it really 
necessary?


Or is it better to use the iSCSI driver, drop dm-multipath and hope?

Thanks, Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Image resize

2013-01-13 Thread Miloš Kozák
Hi, is there a feature in OpenNebula for image resizing? If I download an 
image from the marketplace, its size is usually not the one I want, so I 
need to resize it by hand. Would you find this a handy feature, guys? Or 
have you found another workaround?
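
For the by-hand route, qemu-img can grow a downloaded image before it is 
used (the file name is just an example, and the filesystem inside the guest 
still has to be grown separately, e.g. with resize2fs):

qemu-img info ttylinux.img       # check the current virtual size and format
qemu-img resize ttylinux.img +4G # grow the image by 4 GB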


Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Developer

2012-12-28 Thread Miloš Kozák
Hi, I am looking for someone who would do some small tweaking on 
Self-Service. Basically I am not comfortable with the VM templates 
(especially if they only define CPU and RAM). I would prefer to have those 
as inputs in the form.


Is there anybody around who would program it in a clean way, such that it 
could be sent back upstream? Tell me your hourly rate. Thanks


Best regards,
Milos
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] A new auth module

2012-12-27 Thread Miloš Kozák

Hello,
I am preparing a new service based on OpenNebula alongside other services 
we provide, such as webhosting. Currently we have a central database of 
users, so we are thinking about how to integrate it with the OpenNebula 
cloud. I found possibly obsolete documentation of the auth module 
(http://opennebula.org/documentation:archives:rel2.2:auth) which I find 
useful, but I am not sure whether it is still valid? If it is, does it mean 
I can prepare my own auth module and that is it? How are ACLs and groups 
treated then? Is there any source which would guide me through this 
problem?


Simply put, I need to reuse my current users and groups in OpenNebula (I 
guess that ACLs and quotas need to be implemented in my billing system and 
then pushed to OpenNebula?)


Best regards,
Milos Kozak

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org