Re: [one-users] Question

2012-06-26 Thread Alberto Zuin - Liste

It will be queued; the instance stays in the Pending state.
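
You can check this from the CLI (a rough sketch - the VM ID is hypothetical,
and the log path assumes a standard installation):

  onevm list                        # the new VM shows up with STAT "pend"
  onevm show 42                     # hypothetical ID; the state stays PENDING
  tail -f /var/log/one/sched.log    # the scheduler logs why it cannot place
                                    # the VM and retries on every cycle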
Alberto

On 26/06/2012 07:34, Mohsen Amini wrote:

Hello everybody,

Today a question crossed my mind... I'd be pleased if someone could help 
me with it...


I am wondering how OpenNebula treats a VM request when there are not 
enough resources.


For example, suppose there are 4 cores and 4 VMs have already occupied 
those cores along with the memory.
In this situation, if a new VM request arrives, what will OpenNebula's 
reaction be? Will the request be rejected or queued?


Unfortunately, I am not in a situation to test that myself.

Thanks.

Mohsen.



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



--

Alberto Zuin
via Mare, 36/A
36030 Lugo di Vicenza (VI)
Italy
P.I. 04310790284
Tel. +39.0499271575
Fax. +39.0492106654
Cell. +39.3286268626
www.azns.it - albe...@azns.it

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] deleting stopped VM leaves files in frontend

2012-06-26 Thread Rolandas Naujikas

Hi,

Deleting a stopped VM in OpenNebula 3.4 (probably also in previous 
versions) leaves the saved VM files on the frontend (or on nodes with 
shared storage). The delete action of the system datastore is not called.
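
For illustration, with the default datastore layout (the VM ID is made up):

  # after "onevm delete" of a stopped VM, its directory in the
  # system datastore (ID 0) is still there on the frontend:
  ls /var/lib/one/datastores/0/42
  # manual cleanup until this is fixed:
  rm -rf /var/lib/one/datastores/0/42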


Regards, Rolandas Naujikas
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] impossible to setup different transfer manager for system datastore on different hosts

2012-06-26 Thread Rolandas Naujikas

Hi,

In OpenNebula 3.4.x it is not possible to set up a different transfer 
manager for the system datastore on different hosts. That was possible in 
OpenNebula 3.2.x and earlier. That looks like a REGRESSION.


That could be useful for OpenNebula installations with different 
virtualization host types (KVM, Xen, VMware) or different system datastore 
storage configurations (filesystem + ssh/shared, filesystem + lvm2ssh/shared).
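
For reference, in 3.4 the transfer manager is selected per datastore rather
than per host, roughly like this (the datastore name is made up):

  $ cat ds.conf
  NAME   = shared_images
  DS_MAD = fs
  TM_MAD = shared
  $ onedatastore create ds.conf

The system datastore (ID 0), however, accepts only a single TM_MAD for the
whole installation, which is the limitation reported above.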


Regards, Rolandas Naujikas
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] mv action is not called for swap/image disks when migrating/stopping/resuming

2012-06-26 Thread Rolandas Naujikas

Hi,

We found that the mv action is not called for swap/image disks when 
migrating/stopping/resuming a VM in OpenNebula 4.3.x (probably in earlier 
versions also). That could make the mv driver part easier to write.


With several disks there is a bigger risk that some of the mv actions 
could fail - what action should OpenNebula take to recover?
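
One conservative way to write mv so that a failure is recoverable is copy
first, delete last. A sketch, assuming SRC and DST are host:path pairs as in
the 3.4 TM drivers, and that oneadmin can ssh from the source host to the
destination host (scp copies remote-to-remote):

  #!/bin/bash
  # mv <SRC host:path> <DST host:path>  (sketch; extra arguments ignored)
  SRC=$1
  DST=$2
  SRC_HOST=${SRC%%:*}
  SRC_PATH=${SRC#*:}
  DST_HOST=${DST%%:*}
  DST_PATH=${DST#*:}

  # copy first and remove the source only after the copy succeeded,
  # so a failed mv can simply be retried
  ssh "$DST_HOST" "mkdir -p $(dirname "$DST_PATH")" &&
  scp -rp "$SRC" "$DST" &&
  ssh "$SRC_HOST" "rm -rf $SRC_PATH"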


Regards, Rolandas Naujikas
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] contribution of LVM2 transfer manager driver for opennebula 3.4.x

2012-06-26 Thread Rolandas Naujikas

Hi,

Because Debian 6.0 Xen doesn't support tap:aio: and because LVM disks 
are faster, I wrote a modified transfer manager driver for OpenNebula 
3.4.x that uses LVM volumes on local disks in the virtualization hosts.


There are 3 kinds:

*) lvm2 - works with shared or non-shared filesystem datastores (for the 
system datastore there is a parameter in lvm.conf that tells whether it is 
shared or not).

*) lvm2ssh - the same, but with the code that detects whether the datastore 
is shared removed.

*) lvm2shared - the same, but assuming the datastore is shared.


All of them are available at http://mif.vu.lt/~rolnas/one/one34/tm/
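
Installation should work roughly like for any out-of-tree TM driver (a
sketch; paths assume a self-contained installation, and the oned.conf line
is abbreviated):

  # as oneadmin on the frontend, after downloading the drivers:
  cp -r lvm2 lvm2ssh lvm2shared /var/lib/one/remotes/tm/

  # add the new names to the TM_MAD "arguments" list in /etc/one/oned.conf:
  #   arguments = "-t 15 -d ...,lvm2,lvm2ssh,lvm2shared"
  # then restart oned and set TM_MAD = "lvm2" (or lvm2ssh / lvm2shared)
  # in the datastore template.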

Regards, Rolandas Naujikas

P.S. This driver was in almost-working condition in OpenNebula 3.2, but 
was lost in OpenNebula 3.4. The lvm driver in OpenNebula 3.4.1 is not for 
the system datastore and is different by design.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Hostname column in Sunstone is misleading

2012-06-26 Thread Rolandas Naujikas

Hi,

Some users are misled by the Hostname column, because it could mean the 
VM's hostname, but it really means the location of the VM. It would 
probably be better to rename it to Location. In that case, don't forget to 
change it also in the VM information and other tabs.


Regards, Rolandas Naujikas
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] KVM hosts switch between OK and Error

2012-06-26 Thread Vogl, Yves
Hi,

I have 9 KVM hosts running with OpenNebula 3.4 and I can use them without any 
problems. But sometimes when I want to deploy a VM, a host is not available 
because it is in the error state. After a few seconds it is OK again, without 
any interaction on my part.

When I have a look at the hosts in Sunstone or with onehost, the stats are 
also reported incorrectly (the number of running VMs is sometimes -1).


  ID NAME  CLUSTER RVM   TCPU   FCPU    ACPU   TMEM   FMEM   AMEM STAT
  10 kvm05           0    800    800     800  31.4G  29.7G  31.4G   on
  11 kvm06           0    800    800     800  31.4G  29.7G  31.4G   on
  12 kvm07           0    800    800     800  31.4G  29.7G  31.4G   on
  13 kvm02           4    800    716  -39200  31.4G  20.1G    20G   on
  14 kvm01           0    800    793     800  31.4G  19.9G  31.4G   on
  15 kvm03           0    800    800     800  31.4G  30.9G  31.4G   on
  16 kvm04           0    800    800     800  31.4G  29.8G  31.4G   on
  19 kvm08           0    800    800     800  31.4G  30.8G  31.4G  err
  20 kvm09           4    800      0  -39200  31.4G  24.4G  15.4G   on



  A few seconds later

  ID NAME  CLUSTER RVM   TCPU   FCPU    ACPU   TMEM   FMEM   AMEM STAT
  10 kvm05           0    800    800     800  31.4G  29.7G  31.4G   on
  11 kvm06           1    800    800   -9200  31.4G  29.7G  29.7G   on
  12 kvm07           0    800    800     800  31.4G  29.7G  31.4G   on
  13 kvm02           4    800    661  -39200  31.4G  20.1G    20G   on
  14 kvm01           4    800    775  -39200  31.4G  19.9G    20G   on
  15 kvm03          -1    800    800   10800  31.4G  30.9G  33.4G   on
  16 kvm04           0    800    795     800  31.4G  29.8G  31.7G   on
  19 kvm08           0    800    800     800  31.4G  30.8G  31.4G  err
  20 kvm09           0      0      0     100     0K     0K     0K  err


  Another few seconds later...

  ID NAME  CLUSTER RVM   TCPU   FCPU    ACPU   TMEM   FMEM   AMEM STAT
  10 kvm05           0    800    800     800  31.4G  29.7G  31.4G   on
  11 kvm06           0    800    800     800  31.4G  29.7G  31.4G   on
  12 kvm07           0    800    799     800  31.4G  29.7G  31.4G   on
  13 kvm02           4    800    661  -39200  31.4G  20.1G    20G   on
  14 kvm01           0    800    792     800  31.4G  19.9G  31.4G   on
  15 kvm03           0    800    800     800  31.4G  30.9G  31.4G   on
  16 kvm04           0    800    800     800  31.4G  29.8G  31.4G   on
  19 kvm08           4    800    390  -39200  31.4G  24.4G  15.4G   on
  20 kvm09           4    800     32  -39200  31.4G  23.5G  15.4G   on
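
A loop like the following can capture the flapping for a bug report (the
30-second interval is an arbitrary choice; if I remember correctly,
MONITORING_INTERVAL defaults to 60 seconds in the stock oned.conf):

  # snapshot the host states more often than the monitoring cycle
  while true; do
      date
      onehost list
      sleep 30
  done | tee onehost-flap.log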



I've attached some logs.

oneacctd.log

Tue Jun 26 15:55:18 +0200 2012 OneWatch::HostMonitoring
Tue Jun 26 16:00:18 +0200 2012 OneWatch::VmMonitoring
Tue Jun 26 16:05:18 +0200 2012 OneWatch::VmMonitoring
Tue Jun 26 16:10:18 +0200 2012 OneWatch::VmMonitoring
Tue Jun 26 16:10:18 +0200 2012 OneWatch::HostMonitoring
Tue Jun 26 16:15:18 +0200 2012 OneWatch::VmMonitoring
Tue Jun 26 16:15:19 +0200 2012 OneWatch::Accounting
Tue Jun 26 16:20:18 +0200 2012 OneWatch::VmMonitoring




oned.log

Tue Jun 26 16:15:48 2012 [ReM][D]: GroupPoolInfo method invoked
Tue Jun 26 16:15:48 2012 [AuM][D]: Message received: LOG I 11415 ExitCode: 0

Tue Jun 26 16:15:48 2012 [AuM][I]: ExitCode: 0
Tue Jun 26 16:15:48 2012 [AuM][D]: Message received: AUTHENTICATE SUCCESS 11415 
-

Tue Jun 26 16:15:57 2012 [ReM][D]: HostPoolInfo method invoked
Tue Jun 26 16:15:57 2012 [InM][I]: Monitoring host kvm05.domain-removed.example 
(10)
Tue Jun 26 16:15:57 2012 [ReM][D]: VirtualMachinePoolInfo method invoked
Tue Jun 26 16:15:57 2012 [ReM][D]: AclInfo method invoked
Tue Jun 26 16:15:57 2012 [InM][I]: Monitoring host kvm07.domain-removed.example 
(12)
Tue Jun 26 16:15:57 2012 [ReM][D]: ClusterPoolInfo method invoked
Tue Jun 26 16:15:57 2012 [AuM][D]: Message received: LOG I 11416 ExitCode: 0

Tue Jun 26 16:15:57 2012 [AuM][I]: ExitCode: 0
Tue Jun 26 16:15:57 2012 [AuM][D]: Message received: AUTHENTICATE SUCCESS 11416 
-

Tue Jun 26 16:15:59 2012 [ReM][D]: TemplatePoolInfo method invoked
Tue Jun 26 16:16:00 2012 [InM][I]: ExitCode: 0
Tue Jun 26 16:16:00 2012 [InM][D]: Host 10 successfully monitored.
Tue Jun 26 16:16:00 2012 [InM][I]: ExitCode: 0
Tue Jun 26 16:16:00 2012 [InM][D]: Host 12 successfully monitored.
Tue Jun 26 16:16:04 2012 [ReM][D]: VirtualNetworkPoolInfo method invoked
Tue Jun 26 16:16:12 2012 [InM][I]: Monitoring host kvm09.domain-removed.example 
(20)
Tue Jun 26 16:16:15 2012 [InM][I]: ExitCode: 0
Tue Jun 26 16:16:15 2012 [InM][D]: Host 20 successfully monitored.
Tue Jun 26 16:16:19 2012 [ReM][D]: DatastorePoolInfo method invoked
Tue Jun 26 16:16:23 2012 [ReM][D]: ImagePoolInfo method invoked
Tue Jun 26 16:16:23 2012 [ReM][D]: AclInfo method invoked
Tue Jun 26 16:16:27 2012 [ReM][D]: HostPoolInfo method invoked
Tue Jun 26 16:16:27 2012 

Re: [one-users] mv action is not called for swap/image disks when migrating/stopping/resuming

2012-06-26 Thread Jaime Melis
Hello Rolandas,

thank you for reporting this. I've created a bug report to look into it.
http://dev.opennebula.org/issues/1315

cheers,
Jaime

On Tue, Jun 26, 2012 at 12:47 PM, Rolandas Naujikas 
rolandas.nauji...@mif.vu.lt wrote:

 Hi,

 We found that the mv action is not called for swap/image disks when
 migrating/stopping/resuming a VM in OpenNebula 4.3.x (probably in earlier
 versions also). [...]

 Regards, Rolandas Naujikas




-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] mv action is not called for swap/image disks when migrating/stopping/resuming

2012-06-26 Thread Rolandas Naujikas

On 2012-06-26 19:37, Jaime Melis wrote:

Hello Rolandas,

thank you for reporting this. I've created a bug report to look into it.
http://dev.opennebula.org/issues/1315

cheers,
Jaime

On Tue, Jun 26, 2012 at 12:47 PM, Rolandas Naujikas 
rolandas.nauji...@mif.vu.lt wrote:


Hi,

We found that the mv action is not called for swap/image disks when
migrating/stopping/resuming a VM in OpenNebula 4.3.x (probably in earlier
versions also).


That should be OpenNebula 3.4.x (a typing mistake).

Rolandas


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] impossible to setup different transfer manager for system datastore on different hosts

2012-06-26 Thread Ruben S. Montero
Hi,

The rationale behind this is the following:

The current datastore system allows you to set up a host that uses multiple
datastores, each one with a different transfer driver. In this way, you can
have FS datastores exported through a shared FS, other FS datastores accessed
with SSH, and even one using an iSCSI server. With OpenNebula 3.4 you can
use all of them at the same time on every single host (each host using
tm_shared, tm_ssh or tm_iscsi depending on the image).

In previous versions you were restricted to a single TM for each host. This
usually meant, for example, that you were restricted to a single NFS export or
iSCSI server. IMHO this is a clear gain for the storage subsystem.

Now, the system datastore. It is used to create end-points in the target
host, so the operations specific to the system datastore are just:

context, mkimage, and mkswap: these by default create files for the ISO
context CD-ROM or for volatile disks (see the sketch below)

mv: moves VM directories across hosts

delete: deletes any temporary content created in the system datastore

NOTE: the clone, mvds, and ln operations are datastore specific, and we are
not using the system ones.
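
As a rough illustration, the default file-based mkswap boils down to
something like this (a sketch only - the argument layout is from memory of
the 3.4 drivers, and error handling is omitted):

  #!/bin/bash
  # mkswap <size_mb> <host:remote_system_ds/disk.i> <vm_id> <ds_id>
  SIZE=$1
  DST=$2
  DST_HOST=${DST%%:*}
  DST_PATH=${DST#*:}

  # create the volatile disk as a plain file on the target host
  ssh "$DST_HOST" "dd if=/dev/zero of=$DST_PATH bs=1M count=$SIZE && mkswap $DST_PATH"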

So I think there is no regression. Note that the use of multiple
system datastores would basically affect cold migrations (mv), which are not
possible across hypervisors, and in general are very limited across hosts with
different configurations (e.g. migrating a VM with an LVM volume as a disk
that needs to be converted to a file on the other host).

However, I see situations where creating a context volume or a volatile
volume in an LVM device or in a file depending on the host can be useful.
So probably a good trade-off would be setting up the system datastore per
cluster instead of per OpenNebula installation. What do you think?

Thanks for your comments!

BTW, I hope this helps you to tune the LVM2 drivers... Thanks also for that
one :)

Cheers

Ruben



On Tue, Jun 26, 2012 at 12:43 PM, Rolandas Naujikas 
rolandas.nauji...@mif.vu.lt wrote:

 Hi,

 In OpenNebula 3.4.x it is not possible to set up a different transfer
 manager for the system datastore on different hosts. [...]

 Regards, Rolandas Naujikas




-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] OpenNebula Demo Cloud Updated to 3.6 Beta

2012-06-26 Thread Carlos Martín Sánchez
Dear community,

We have updated the Demo Cloud [1] to the just-released OpenNebula 3.6 Beta.
I'm sure all of you already have an account, but just in case, we have also
published some screenshots on our blog [2] to let everybody get an idea of
the new Sunstone look.

Enjoy, test, and report your feedback!
Carlos

[1] http://blog.opennebula.org/?p=3072
[2] http://blog.opennebula.org/?p=3083

--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula
http://twitter.com/opennebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] impossible to setup different transfer manager for system datastore on different hosts

2012-06-26 Thread Rolandas Naujikas

On 2012-06-26 20:39, Ruben S. Montero wrote:

Hi,

[...]

However, I see situations where creating a context volume or a volatile
volume in an LVM device or in a file depending on the host can be useful.
So probably a good trade-off would be setting up the system datastore per
cluster instead of per OpenNebula installation. What do you think?


I think it is logical, because in any case I would put hosts of different 
virtualization technologies in different clusters (just as different 
clusters can have different image datastores in OpenNebula 3.4.x).
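
For example (the names are made up; commands from the 3.4 CLI, as far as I
remember):

  onecluster create xen-cluster
  onecluster addhost xen-cluster xen01
  onecluster adddatastore xen-cluster xen-images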


Regards, Rolandas



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] impossible to setup different transfer manager for system datastore on different hosts

2012-06-26 Thread Rolandas Naujikas

On 2012-06-26 20:39, Ruben S. Montero wrote:

Hi,

[...]

So I think there is no regression. Note that the use of multiple
system datastores would basically affect cold migrations (mv), which are not
possible across hypervisors, and in general are very limited across hosts with
different configurations (e.g. migrating a VM with an LVM volume as a disk
that needs to be converted to a file on the other host).


At least I have seen that in some (commercial) cloud software, which really 
uses VirtualBox/qemu-utils to convert between image formats for different 
virtualization platforms.


Regards, Rolandas

P.S. I really didn't test it much, because I hate browser UIs made with 
Flash Player.




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org