Re: [one-users] Little help with first building cloud

2011-12-20 Thread Fabian Wenk

Hello Matheus

On 20.12.2011 14:41, matheus tor4 wrote:

At the moment I got this error message:

Tue Dec 20 10:20:09 2011 [InM][I]: Monitoring host PacsOnCloud-FrontEnd (0)
Tue Dec 20 10:20:10 2011 [InM][I]: Command execution fail: 'if [ -x
/var/tmp/one/im/run_probes ]; then /var/tmp/one/im/run_probes kvm 0
PacsOnCloud-FrontEnd; else  exit 42; fi'
Tue Dec 20 10:20:10 2011 [InM][I]: ExitCode: 42
Tue Dec 20 10:20:10 2011 [InM][E]: Error monitoring host 0 : MONITOR
FAILURE 0 -


Is your Sunstone server running on the same system as the 
front-end? Which user is Sunstone running as, root or 
oneadmin?


Is root on the front-end also able to log in to the cluster 
nodes without a password?
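The 'exit 42' in the log comes from the monitoring command itself: the 
driver falls back to exit code 42 when the probe script is absent. A 
minimal sketch of that check (the path is the one from the log above; 
running it on a cluster node is an assumption about your setup):

```shell
# The InM driver effectively runs on the monitored host:
#   if [ -x /var/tmp/one/im/run_probes ]; then run_probes ...; else exit 42; fi
# so ExitCode 42 means the probe scripts were never copied to the host
# or are not executable there. Reproduce the check on a cluster node:
probe=/var/tmp/one/im/run_probes
if [ -x "$probe" ]; then
  echo "probes installed"
else
  echo "probes missing or not executable (the 'exit 42' case)"
fi
```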



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Virtual Machine Lock Manager

2011-12-20 Thread Fabian Wenk

Hello Upendra

On 15.12.2011 13:54, Upendra Moturi wrote:

Hello Fabian

Can you please explain the locking workflow to me, i.e. how
OpenNebula throws an error when an image is already registered with a VM.


You need to create a persistent image (defined in the image 
template); it can then only be used by one VM at a time.
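Such a persistent image template could be sketched like this (name and 
path are hypothetical; PERSISTENT is the attribute that enables the 
single-VM locking):

```
NAME       = "database-disk"
PATH       = "/var/lib/one/templates/database-disk.img"
TYPE       = OS
PERSISTENT = "YES"
```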



bye
Fabian


Re: [one-users] application integration (service publishing) in OpenNebula?

2011-12-19 Thread Fabian Wenk

Hello Biro

On 15.12.2011 09:54, biro lehel wrote:

Hello Fabian. Thanks again for your reply. I really appreciate
you for taking the time.


You're welcome.


I read what you wrote a couple of times, and (I think) it
helped me to clarify some things. But still, I have a few
questions and issues for which I am looking for a clear
answer. I put them in bullets:


I do not see any bullets, this is probably only visible when 
viewed in HTML. I read (and also write) e-mails as text only, so 
the part below looks quite confusing to me and is thus very 
hard to answer, but I will try.



As I understand so far, OpenNebula has two types of users: the
administrator, who basically has control over everything,


Only everything regarding the management of the VMs; depending 
on who did the installation of the OS (Operating System) inside 
the VM, he may not have access to it. But as he can control the 
virtual hardware (the VM), he could potentially circumvent 
security measures taken inside the OS of the VM.



and the users, who can authenticate securely, instantiate some
VM's, and do the work necessary for them. My question: can
OpenNebula have another layer of users, some kind of
end-users? What I mean is: suppose I, as a user of


This is not the duty of OpenNebula; this is something which needs 
to be done by the administrator of the OS inside the VM. It 
depends a lot on the OS used inside these VMs, but tools should 
be available.



OpenNebula, using my created VM's, create a Web Service, which
I publish on the Internet. Can anyone access this (someone who
has no idea about the private cloud, someone who is simply
accessing the URL), and by this way uses my Web Service
(created on the VM's by the means of OpenNebula), so,
basically, uses OpenNebula remotely (without knowing it)? Or


As above, this service provisioning and user management of the 
web service depend on the person who creates and runs this web 
service. This is independent of OpenNebula, as OpenNebula only 
provides the VMs to run any OS in. As I already wrote, 
OpenNebula is just an abstraction layer between physical computer 
hardware and the OS you run inside the VM. Without the OpenNebula 
cloud platform you would just install physical computers with the 
OS of your choice and the services and applications you would 
like to run. There, too, you need to create the necessary system / 
application to manage end users visiting your web service.



this just doesn't make sense, since the whole idea of a
private cloud is not to provide/publish information and
services to the outside world, and this is not even possible
since the virtual context? Are the most important reasons for


The private cloud just provides you with virtual computers to 
run your OS and applications of choice on. This helps to make 
better use of the physical computer with more virtual machines 
on it. It gives you more flexibility with the available hardware 
resources to run more than one OS installation at the same time.



installing OpenNebula the performance needs? Is there any


OpenNebula reduces the performance of your hardware a little 
bit, as the additional layer also needs some capacity of the 
physical hardware, but I guess this can be ignored. Your hardware 
can be used more flexibly with OpenNebula (or any other cloud 
abstraction layer), as you can run more than one OS (in a VM) in 
parallel on the same hardware.



other reason because of which I may want to install it,
besides the fact that I might need multiple VM's (that I can
manage) to perform a task (and to achieve platform
interoperability)? I mean this has to be the main point of it,
right? When the load reaches its maximum (on a task which a
user tries to perform on OpenNebula VM's), are new VM's
created automatically (if the physical resources allow this)
to support the performance needs? Or the only way of creating


No, OpenNebula does not start new VMs out of the box when the 
currently running VMs are at a capacity limit. You need to build 
your own monitoring system, which monitors your web service 
and reacts when more performance is needed. This monitoring can 
then use OpenNebula to start additional VMs with your service / 
application. But additional VMs can only be started when there is 
enough physical hardware (e.g. cluster nodes) available to support 
more VMs. It cannot give you more raw hardware power than when 
your service / application would run directly on several physical 
servers instead. But it gives you more flexibility.
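As a hypothetical sketch of the decision such an external monitor could 
make (the threshold, the load source and the template name are 
assumptions, not part of the original mail):

```shell
# Decide whether to start another VM, given a load percentage.
need_more_vms() {
  [ "${1:-0}" -gt 80 ]   # assumed threshold: scale out above 80% load
}

# A monitoring loop would then do something like:
#   need_more_vms "$current_load" && onevm create webapp.one
need_more_vms 95 && echo "would start an additional VM"
```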



VM's is the manual one? Can OpenNebula be installed on any
type of physical network, or does it have some special needs?


The front end can be any i386 or amd64 (preferred) compatible 
computer which supports a current Linux distribution. But for the 
cluster nodes it would probably help if you use a CPU with VT 
support. Check the requirements in the Virtualization Subsystem 
3.0 documentation [1] with the details of the type of 
virtualization you would like to 

Re: [one-users] Concurrent disk access

2011-12-12 Thread Fabian Wenk

Hello Richard

On 12.12.2011 00:55, richard -rw- weinberger wrote:

On Mon, Dec 12, 2011 at 12:50 AM, Steven Timmt...@fnal.gov  wrote:

 What kind of transfer method are you using?  shared, ssh, lvm?
 You can load the .raw file into the image repository, make
 it persistent, and that will take care of it.


I'm using shared.

But if I use the image repository, it will *copy* my image at least once, right?
My images are very big, 500GiB.


Yes, it will be copied, unless you use NFS or any other shared 
storage for the image repository. Using a registered image is 
the only way for OpenNebula to know that a certain image is 
being used by a VM; this is tracked in the images table in 
the database. OpenNebula does not notice when the same disk 
path is used more than once in VM templates.



bye
Fabian


Re: [one-users] Virtual Machine Lock Manager

2011-12-12 Thread Fabian Wenk

Hello Upendra

On 12.12.2011 08:04, Upendra Moturi wrote:

Is there any locking mechanism to lock VMs, so that no two VMs
use the same hard disk?


Register the hard disk image in the image repository (with 
'oneimage register ...') and then use the registered image in 
the VM template.
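On the command line this could look like the following sketch (the file 
name is hypothetical; the template is assumed to set PERSISTENT = "YES"):

```
oneimage register disk.one   # disk.one is the image template file
oneimage list                # verify the new image and its state
```

In the VM template the image is then referenced by its ID, e.g. 
DISK = [ IMAGE_ID = 0 ].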



bye
Fabian


Re: [one-users] error status on disk image

2011-12-12 Thread Fabian Wenk

Hello Wojciech

On 12.12.2011 15:18, Wojciech Giel wrote:

TIMESTAMP   Mon Dec 12 14:07:32 2011
MESSAGE Error copying image in the repository: Not allowed to copy
image file /var/lib/one/templates/linux_generic.img

oneadmin is the owner of /var/lib/one/templates/ directory.
what might be a problem ?


Check the permissions on the folder of the image repository; as 
far as I know, at least oneadmin:cloud should be able to write there.
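A quick way to check is a sketch like this (the repository path is 
assumed from a default installation, not taken from the mail):

```shell
# Inspect ownership and permissions of the image repository:
dir=/var/lib/one/images
ls -ld "$dir" 2>/dev/null || echo "missing: $dir"
# To fix ownership, as root (group may differ on your install):
#   chown -R oneadmin:cloud "$dir"
```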



bye
Fabian


Re: [one-users] [help] How to connect to a virtual machine?

2011-12-11 Thread Fabian Wenk

Hello Cat

On 11.12.2011 03:12, cat fa wrote:

You mean I should set up a DHCP server on my host?


You wrote that your server got its IP 1.185.2.21 through 
DHCP, so I guess there is already a DHCP server running on the 
LAN (probably 1.185.2.0/24). This server should also provide 
the IP addresses to your VMs, as they are in the same LAN 
through the bridge.


Log in through VNC and check with 'ifconfig' whether they have 
not already got an IP, or use 'ifconfig -a' to see all interfaces 
including the assigned MAC address (which you could then register 
in the DHCP server to assign the corresponding IP address).
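For example, inside the VM console:

```
ifconfig -a   # all interfaces, even without an IP; 'HWaddr' is the MAC
```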



bye
Fabian


Re: [one-users] application integration (service publishing) in OpenNebula?

2011-12-11 Thread Fabian Wenk

Hello Lehel

On 11.12.2011 09:42, biro lehel wrote:

Hello Fabian. First of all, thanks for your answer.


You're welcome.


So, are you telling me, that there is no way for an
application to exploit the advantages of OpenNebula? What is


Not directly, but through a setup with pre-installed VMs which 
can be started on an as-needed basis. But your (cloud) 
application needs to be installed and able to run in these VMs.


The misunderstanding probably comes from the term Cloud, which 
is used for at least two different types of clouds. One type of 
cloud is a platform to distribute VMs (virtual machines) of 
pre-installed systems on a cluster of hardware servers which 
are able to run concurrent VMs. This is what OpenNebula can 
provide. The other type of cloud is an application cloud, 
like e.g. Google Apps [1], which offers applications (e.g. 
mail, calendar, docs) for a certain user group, but on a shared, 
bigger platform. I do not know if there are any frameworks around 
to create such application clouds. As far as I know, they are all 
custom-built depending on the needed services.


  [1] http://www.google.com/apps/

Unfortunately the term Cloud is such a hype nowadays that it 
is used for a lot of things and thus helps to confuse many 
people. For example, Apple's iCloud [2] is actually just 
centralized storage with some added features to sync apps or 
pictures to your other iDevices. But this is probably mostly done 
by the client device, which also needs to connect to the iCloud 
and wait for new content in your account.


  [2] http://www.apple.com/icloud/


the use of it then? :) Basically, all I want to do is the
following: when I will have OpenNebula set up and running (on
a small scale), I will try to experiment and exemplify its
benefits, by the means of an application that uses the private
cloud. Tests, performance benefits, see how the nodes
communicate, etc. But I don't understand exactly how a web
application (for instance) is made and written such that it
can use OpenNebula, it can exploit the benefits of the running
VM's, so that it can be more performant. How is this whole
integration done? How can an application make use of
OpenNebula? Couldn't it be published somehow, such that its
final users (clients) could use it as a service (through
OpenNebula), in a way that is totally transparent to them? I
think what I'm referring to is exactly this communication
with the outside world that you were writing about.


Does this application which you would like to offer to clients 
already exist, or is this something you are developing?


As far as I understand it, you would like to create something 
like Google Apps and then offer it to potential customers, right?



bye
Fabian


Re: [one-users] application integration (service publishing) in OpenNebula?

2011-12-11 Thread Fabian Wenk

Hello Lehel

On 11.12.2011 14:33, biro lehel wrote:

what I've been referring to. I will have OpenNebula set up,
and (as common sense would tell) I will have my application
installed on the created VM's. My question only referred to:
how can I install an application on these VM's (should I only
just copy it, or is it more complex than this), or stuff like:


Look at the VM like at any other physical computer. It is just a 
container (e.g. a virtual computer) where you can install the OS 
of your choice. The installation of your application inside the 
OS of your VM is done the same way as on a physical computer. 
But the installation of the OS in the VM needs to be done first. 
See my recent posting Re: Creating virtual machines from 
scratch [1] to this mailing list.


  [1] 
http://lists.opennebula.org/pipermail/users-opennebula.org/2011-December/007156.html


Look at an OpenNebula cluster / cloud as an additional 
abstraction layer between a physical computer and your OS 
installation.


An example:
If you have 3 computers, you can install the OS of your choice 
on each one and run it, but then you only have 3 concurrently 
running OS installations available. With OpenNebula you install 
Linux on all 3 computers (1x front-end and 2x cluster nodes). 
The cluster nodes also need to support some kind of hypervisor 
(e.g. KVM or Xen). Then you install OpenNebula on the front-end 
and adjust the configuration for the shared file systems to be 
used by the cluster nodes. Then you can create VMs (virtual 
machines / virtual computers) and deploy them through the 
front-end (with Sunstone you also have a web GUI). Now you can 
create as many VMs as the two cluster nodes can support 
(depending on CPU power and available memory). You can even stop 
or terminate VMs and reuse them (with a persistent image) at a 
later time.



can the different tiers of the application (interface,
business logic, and data repository) be on different VM's, but


Sure, they can.


most importantly: how can an end-user (not the administrator,
but a potential client) use the application? Or there is no
such thing as the end-user / client concept in OpenNebula,
since the only user is the administrator who has control over
the infrastructure? If OpenNebula provides IaaS support, I


In OpenNebula the administrator has full control over the running 
VMs, e.g. he can stop (pause), resume or even shutdown / destroy 
them. OpenNebula also knows users, which e.g. can create their 
own VMs (with their choice of OS installation) or use a 
pre-created shared system image to boot a VM. But as far as I 
know, out of the box OpenNebula is not able to provide 
virtualization on the application level. But it has a very open 
and flexible design and you should be able to customize it to 
your needs, e.g. with contextualization.



suppose this means that he does not have control over the
application only as a service, but rather he, as the admin,
has control over the whole physical application?


What do you understand as physical application?

OpenNebula controls the distribution and monitoring of the VMs. 
It will place a newly created VM on a cluster node which has the 
requested resources available. It also manages all the system 
images (persistent and public / shared) and the network 
interfaces (done through bridges) which the VMs need to run.



bye
Fabian


Re: [one-users] [help]Could not create domain

2011-12-10 Thread Fabian Wenk

Hello Cat

On 10.12.2011 02:06, cat fa wrote:

I modified the user and group in /etc/libvirt/qemu.conf , is that correct?


No, qemu should run as root, to be able to use the kernel KVM 
support. You need to adjust this in libvirtd.conf.



bye
Fabian


Re: [one-users] [help] How to connect to a virtual machine?

2011-12-10 Thread Fabian Wenk

Hello Cat

On 10.12.2011 19:28, cat fa wrote:

I create a virtual network in SunStone with leases like

1.185.2.22
1.185.2.23
1.185.2.24


These IP addresses are only used with contextualization. If your 
VM does not support contextualization, then you need to assign 
the IP address inside the VM or assign it through DHCP from the 
bridged network.
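These leases typically come from a FIXED virtual network template, 
roughly like this sketch (the network name and bridge are assumptions; 
the IPs are the ones above):

```
NAME   = "lan"
TYPE   = FIXED
BRIDGE = br0
LEASES = [ IP = "1.185.2.22" ]
LEASES = [ IP = "1.185.2.23" ]
LEASES = [ IP = "1.185.2.24" ]
```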



My host obtains its own IP (1.185.2.21) through DHCP.


Then your VM should also get an IP from there.


bye
Fabian


Re: [one-users] [help] How to connect to a virtual machine?

2011-12-10 Thread Fabian Wenk

Hello

On 10.12.2011 21:35, Fabian Wenk wrote:

These IP addresses are only used with contextualization. If your
VM does not support contextualization, then you need to assign
the IP address inside the VM or assign it through DHCP from the
bridged network.


I just missed the fact that these IP addresses also generate 
corresponding MAC addresses which are used by your VMs. So you 
could create the VM template with one of the existing MAC 
addresses included, so it will stay the same for this VM.
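In the VM template this could be sketched as follows (the MAC value is 
hypothetical, shown only to illustrate the idea):

```
NIC = [ NETWORK_ID = 0, MAC = "02:00:01:b9:02:16" ]
```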



bye
Fabian


Re: [one-users] Creating virtual machines from scratch

2011-12-09 Thread Fabian Wenk

Hello Richard

On 08.12.2011 17:45, richard -rw- weinberger wrote:

I'm a bit confused how to create a vm from scratch.
Assume I want a vm running with CentOS6 and a new virtual hard disk of 500GiB.

How can I create a new disk using OpenNebula (especially with Sunstone)?


I do not know how to do these steps in Sunstone; I did it with 
the command line tools.


Create the image manually (outside of OpenNebula) with these 
steps (for KVM):

On a system which has KVM available:
qemu-img create -f raw servername.img 10G
qemu-system-x86_64 -hda servername.img -cdrom 
/path/to/install.iso -boot d -m 512
Connect through VNC for the installation; the above command will 
report the used port (default 5900). See below, as 
qemu-system-x86_64 listens only on localhost for VNC.

qemu-system-x86_64 servername.img -m 512  # to test after install
Connect through VNC,
log in and run 'poweroff' as root or with sudo.

Now on the front-end:
Create an image template (servername-image.one)
oneimage register servername-image.one
Create a VM template for the host (servername.one)
onevm create servername.one


Connect to VNC on the cluster node:
I do not know about your workstation, but from my Mac client I 
use Chicken [1], which supports connections through ssh. I guess 
there is a VNC client available for the OS of your workstation 
which can also do this. Otherwise you could run it with manual 
ssh forwarding like this:

ssh -L localhost:5900:localhost:5907 server-with-KVM
Replace 5907 with the port which qemu-system-x86_64 has reported, 
and then use the local VNC client to connect to localhost port 5900.


   [1] http://sourceforge.net/projects/chicken/


In my setup each vm will have its own disk image, thus no disk image
needs to be copied.
Is there a way to enforce this?


The best is to register each image in the Image Repository with 
'oneimage register ...'.



A final question, is it possible to change the boot order of a vm?
Do I really have to delete and recreate it?


You need to shut down and recreate the VM. This is best done 
with the command line tool ('onevm create template'), where you 
can modify the template beforehand.
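The boot order lives in the OS section of the VM template, so the change 
can be sketched like this (the values shown are the usual choices for 
the KVM driver, not taken from the original mail):

```
OS = [ BOOT = "cdrom" ]   # boot from the install CD first
# after installation, change to:
# OS = [ BOOT = "hd" ]
```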



bye
Fabian


Re: [one-users] Access network by name (custom attributes)

2011-12-09 Thread Fabian Wenk

Hello Tomáš

On 09.12.2011 10:42, Tomáš Nechutný wrote:

possible to define a network by name (instead of id)? I tried
NETWORK=local, NETWORK="local" and the same with NAME instead of
NETWORK_ID=1, but it doesn't work.


As far as I know, with OpenNebula 3.0 only IDs can be used.


bye
Fabian


Re: [one-users] Little help with first building cloud

2011-12-07 Thread Fabian Wenk

Hello Matheus

On 07.12.2011 01:44, matheus tor4 wrote:

Will making changes to files like oned.conf using my standard user bring
me troubles in the future, or not?


Configuration files in /etc/ are usually changed as the root 
user; normal users should not be able to write them, and 
depending on the content should not even be able to read them.



What you recommend?
- Put the rights on the oneadmin user, or
- Use root user to make changes (painlessly)


What kind of changes?

On my system /etc/one/oned.conf belongs to root (rw) with 
only read permissions for the cloud group. The startup script 
/etc/init.d/opennebula takes care to start the OpenNebula daemons 
as the user oneadmin (i.e. dropping privileges). But this is 
something which may depend on the Linux distribution, if you 
used a distribution-provided package.



bye
Fabian


Re: [one-users] create image vm

2011-12-07 Thread Fabian Wenk

Hello Dian

On 07.12.2011 20:02, Dian Djaelani wrote:

ow sory i not have sever with GUI
ok thanks for advise
i want tray install ubuntu with gui for manage my VM


Ah, I probably forgot to mention that qemu-system-x86_64 listens 
only on localhost for VNC. But you do not need a GUI on the server.


I do not know about your workstation, but from my Mac client I 
use Chicken [1], which supports connections through ssh. I guess 
there is a VNC client available for the OS of your workstation 
which can also do this. Otherwise you could run it with manual 
ssh forwarding like this:

ssh -L localhost:5900:localhost:5907 server-with-KVM
Replace 5907 with the port which qemu-system-x86_64 has reported, 
and then use the local VNC client to connect to localhost port 5900.


  [1] http://sourceforge.net/projects/chicken/


bye
Fabian


Re: [one-users] modifying a virtual machine resources

2011-12-06 Thread Fabian Wenk

Hello Davood

On 29.11.2011 23:23, davood ghatreh wrote:

Does anyone know how to manage a virtual machine's resources? Consider that I
create a VPS with one CPU, and after a while I decide to increase its
CPUs to two. Is it possible in OpenNebula? I don't want to re-deploy my
machine, and don't want to lose any file of the existing virtual machine in
this process.


As far as I know, it is not possible. You need to shut down the 
VM, modify the template and recreate the VM again. It is like with 
real hardware: if you need to change something, you need to 
shut down, do the modification and boot up again.


If you need to be able to modify a VM without losing anything, 
you should use persistent images out of the Image Repository.



bye
Fabian


Re: [one-users] A question about the utilization of VMs

2011-12-06 Thread Fabian Wenk

Hello Adnan

On 04.12.2011 18:18, Adnan Pasic wrote:

I can't really tell you what lookbusy does, as it's not a programme coded by
me, but found on the internet to fulfill its duties ;)


Or it does not work properly, the way you would need it.


Also, the website where I downloaded it from doesn't say anything on how the
programme is really working, and if it's possible to somehow upgrade it
for the purpose of, e.g., filling zeros.


Google pointed me to [1]; is this the one you are using? If yes, 
you could ask the developer if he could adjust it for your needs 
(e.g. filling the memory with random data, or even changing it 
during runtime), or ask some other developer who knows C 
if he could do it for you.


  [1] http://www.devin.com/lookbusy/


So, do you maybe know a programme, or could tell me what to do in my case?
This is becoming frustrating, as I'm almost finished with my thesis and need
only a couple more measurements!!!


Sorry, I do not know of any such program. But I hope one of the 
possibilities I mentioned above is an option for you.



bye
Fabian


Re: [one-users] Little help with first building cloud

2011-12-06 Thread Fabian Wenk

Hello Matheus

On 06.12.2011 20:26, matheus tor4 wrote:

I did not create any group named 'cloud'.
Is it created automatically?



oneadmin@PacsOnCloud-FrontEnd:/etc/one$ id oneadmin
uid=107(oneadmin) gid=118(oneadmin) grupos=118(oneadmin)

What changes do I have to make?


It also works with the group oneadmin instead of cloud; just 
adjust your libvirtd.conf accordingly. The OpenNebula documentation 
usually talks about oneadmin:cloud, so you just have to use 
oneadmin:oneadmin instead.


About the permissions: are the images folder and the VM_DIR on 
NFS storage? If yes, you need to adjust these exports in 
/etc/exports to allow root to read/write files there, as 
with KVM the VMs will be run with root privileges.
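A minimal /etc/exports sketch for this (the network range is an 
assumption; no_root_squash is the option that grants root access, see 
exports(5)):

```
/var/lib/one  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```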



bye
Fabian


Re: [one-users] modifying a virtual machine resources

2011-12-06 Thread Fabian Wenk

Hello Davood

On 06.12.2011 21:32, davood ghatreh wrote:

Do you know how this save as feature works? I can use it. when I save as
a VM, it goes to my images, and after a while of being LOCKED, its status
changes to FAILED. how can i save a machine as an image?


If you want to use 'onevm saveas vm_id disk_id 
image_name' on a running VM, then you need to properly shut 
down this running VM in OpenNebula and wait until the image 
is written to the Image Repository. The cluster node needs to be 
able to write to the image folder; the error you see could be 
because this writing is not possible. Check your log files.


Alternatively you could do the following steps to create an image 
into the Image Repository:


Create the image manually (outside of OpenNebula) with these 
steps (for KVM):

On a system which has KVM available:
qemu-img create -f raw servername.img 10G
qemu-system-x86_64 -hda servername.img -cdrom 
/path/to/install.iso -boot d -m 512
Connect through VNC for the installation; the above command will 
report the used port (default 5900).

qemu-system-x86_64 servername.img -m 512  # to test after install
Connect through VNC,
log in and run 'poweroff' as root or with sudo.

Now on the front-end:
Create an image template (servername-image.one)
oneimage register servername-image.one
Create a VM template for the host (servername.one)
onevm create servername.one

Hope this helps.


bye
Fabian


Re: [one-users] Little help with first building cloud

2011-11-27 Thread Fabian Wenk

Hello Matheus

On 23.11.2011 12:39, matheus tor4 wrote:

I have only two servers (With VT) and two Core2Duo (Without VT). I want to
build a little private cloud.


I have a system with a Core2Duo CPU which does support VT, so I 
am not sure whether VT is really missing on all Core2Duo CPUs. 
Check out [1], it could probably help you to enable VT on 
these systems (if available) too.


  [1] 
http://en.wikipedia.org/wiki/X86_virtualization#Intel_virtualization_.28VT-x.29


Otherwise, as pointed out in another recent post to this mailing 
list, you could use Xen on these two systems without VT, but then 
you are limited to paravirtualization. As far as I know, the 
guest OS in the VM needs to support this.



My doubt is the following:
- Can I use a server as Front-End +  Image Repository + Cluster at the same
time?


You can; I have the front end and the cluster node running on a 
single system. When adding the local cluster node, I used 
'onehost create localhost im_kvm vmm_kvm tm_nfs'. Adding the 
other systems as cluster nodes will also be possible.



- Or, is it more advantageous to use a Core2Duo as a Front-End and release all
resources of the server to be used by the cloud?


This depends on the usage of the OpenNebula cloud as a whole. In 
a rather small installation, it is probably not a problem to use 
the front end also as a cluster node.



bye
Fabian


Re: [one-users] Problem with DataBlock images

2011-11-27 Thread Fabian Wenk

Hello Salma

On 24.11.2011 15:13, salma rebai wrote:

Now, when I added  DISK = [ IMAGE_ID = 72, TARGET=hdb ] to my VM
template. I succeed to create the VM. Its state is « runn ».

But when I access the VM and check the disk partitions, I don't find
any mount for the DATABLOCK disk

---

ubuntu@ubuntu-KVM:~$ df -h


Sys. de fichiersTaille  Uti. Disp. Uti% Monté sur
/dev/sda1 8,7G  2,7G  5,6G  32% /
none  492M  584K  491M   1% /dev
none  499M  164K  499M   1% /dev/shm
none  499M   96K  499M   1% /var/run
none  499M 0  499M   0% /var/lock


OpenNebula and the hypervisor just give the VM the hardware you 
have configured. The usage depends on the OS configuration in the 
VM. So you need to mount this second disk in the OS inside the 
VM, as you would do on a physical system when you have added a 
second hard disk for the first time. You may also first need to 
create the file system on this second disk (also from within the VM).
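Inside the VM the steps could look like this sketch (the device name 
follows from the TARGET=hdb setting above; note that mkfs destroys any 
data on the disk):

```
fdisk -l                 # confirm the new disk, e.g. /dev/hdb
mkfs -t ext3 /dev/hdb    # create a file system (first time only)
mkdir /data
mount /dev/hdb /data     # add to /etc/fstab to make it permanent
```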



bye
Fabian


Re: [one-users] A question about the utilization of VMs

2011-11-27 Thread Fabian Wenk

Hello Adnan

On 24.11.2011 21:16, Adnan Pasic wrote:

Is there a way to circumvent that? For my diploma thesis I need to utilize
deployed VMs for a period of up to 12 hours. However, if KSM is active, the
physical host doesn't hold this utilization, but adjusts the used memory
pages.


I do not know, but I guess your 'lookbusy' tool is just 
allocating the memory, but not really using it (e.g. filling it 
up with random data). I guess actually using the allocated memory 
should help you even with KSM turned on. I am not sure about 
freeing the used memory after the process has stopped. Usually 
this memory is just marked as free by the kernel, but will only 
be purged over time or when another process requests memory. 
Probably your 'lookbusy' process needs to clear (fill with 
zeros?) the memory before ending. Then I guess KSM on the cluster 
node will do its work and reduce the memory usage of the VM.



Since I want to measure the power consumption of the physical host under
full VM utilization, I need the real memory to stay utilized as well, yet I


Does your 'lookbusy' process also use CPU cycles (e.g. creating 
full load on the VM) or only memory? You will see the biggest 
power consumption difference with different CPU load, not with 
memory allocation. The amount of disk I/O also affects the power 
consumption.



bye
Fabian


Re: [one-users] FW: Disk copy error when creating VM

2011-11-27 Thread Fabian Wenk

Hello Sergio

On 25.11.2011 10:25, Sergio Garcia Villalonga wrote:

ERROR=[

MESSAGE=Error excuting image transfer script: Error copying
mehmet:/var/lib/one/images/31dd9191dd9e5ac0cc86954f86f75dda to
localhost:/var/lib/one//6/images/disk.0,
TIMESTAMP=Fri Nov 25 09:38:08 2011 ]


Did you try to copy this file to the cluster node manually as 
the oneadmin user? There could be a read error when 
copying this file, e.g. from a hardware or file system problem. 
Or is the disk full on the cluster node?



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Little help with first building cloud

2011-11-27 Thread Fabian Wenk

Hello Matheus

On 27.11.2011 17:49, matheus tor4 wrote:

At the moment I'm using the second strategy: using a Core2Duo as a front-end
and releasing all
resources of the server to be used by the cloud.
I chose this option because it is closer to the proposal of the official
OpenNebula documentation.
I hope it works now!

So I use one of my servers (With VT) as a Image Repository too.


This is up to you, but I would keep the image repository on the 
front end, or possibly on the second system without VT. That way 
you could use the two systems with VT as two identical cluster nodes.



And now I have a little more question:

With an NFS server configured, I have the image repository sharing its
directory /var/lib/one. My doubt is:
- Do I have to insert the shared directory in the file /etc/fstab on all my
nodes, so the nodes already have this directory available when the machine
starts?


Yes, and also on the front end.


- Or don't I have to do this? Is the directory only mounted when it is
necessary? (sudo mount ...)


It needs to be mounted already when OpenNebula deploys a VM.
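A hedged example of such an /etc/fstab entry on each cluster node (the host name `frontend` and the paths are assumptions; adjust them to your setup):

```shell
# /etc/fstab fragment on every cluster node (and on the front end,
# if the front end is not the NFS server itself):
frontend:/var/lib/one  /var/lib/one  nfs  rw,hard,intr  0  0
```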


And, if I edit /etc/fstab, do I have to do this on the
ImageRepository+Cluster node as well? I hope not, because the directory
is on that machine


I am not sure about OpenNebula 3.0, but with 2.2 the image 
repository and also the VM_DIR folder need to be mounted in the 
same path as they are on the front end.



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] error create VM KVM

2011-11-16 Thread Fabian Wenk

Hello Dian

On 15.11.2011 20:54, Dian Djaelani wrote:

Can anyone help me? I'm trying to build OpenNebula but get an error when
creating a virtual machine with the KVM hypervisor.



Wed Nov 16 02:31:07 2011 [VMM][I]: error: unknown OS type hvm


Where does this hvm come from? It could be a typo somewhere in 
your VM template. Is the image you have really able to run on 
KVM? E.g. was it created with the qemu/kvm tools?



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Looking at moving to OpenNebula with a few questions

2011-11-01 Thread Fabian Wenk

Hello Donny

On 27.10.2011 18:48, Donny Brooks wrote:

Currently all machines have their network cards bonded and vlans passed
over the trunked interface as we have approximately 20 vlans we use.
This should be fairly simple to do with OpenNebula correct?


Yes, you need to create a bridge interface for each VLAN you need. 
Then create the networks in OpenNebula (with 'onevnet create 
<template>'); more details are in [1].


  [1] http://www.opennebula.org/documentation:rel3.0:vgg
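As a sketch of both halves (interface names, VLAN ID and addresses here are assumptions, not from the thread): a bridge on top of one VLAN of the bonded trunk on a Debian-style node, plus a matching ranged network template:

```shell
# /etc/network/interfaces fragment on the cluster node: bridge br100
# on top of VLAN 100 of the bonded trunk bond0 (8021q module needed).
# auto br100
# iface br100 inet manual
#     bridge_ports bond0.100
#     bridge_stp off

# vlan100.net - matching OpenNebula network template (example values):
cat > vlan100.net <<'EOF'
NAME   = vlan100
TYPE   = RANGED
BRIDGE = br100
NETWORK_ADDRESS = 192.168.100.0
NETWORK_SIZE    = C
EOF

onevnet create vlan100.net
```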


I have a mix of local and network storage. Should OpenNebula be able to
handle both local and SAN storage?


This is (currently) not really possible.

But let me discuss something else. I guess you would probably 
like to have persistent VMs, right? The main idea of 
OpenNebula is to use it for on-demand cloud computing, but 
persistent VMs are also possible.


To accomplish persistent VMs, the following broad steps are needed:
1. register the OS image for the VM in the image repository:
   'oneimage register <image_template>'
2. create & start the VM using the registered image in the vm_template:

   'onevm create <vm_template>'
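A hedged sketch of what the vm_template in step 2 might contain; all names and sizes below are example assumptions, and the DISK entry references the image registered in step 1:

```shell
# vm_template - minimal persistent VM definition (example values,
# syntax as in the ONE 2.x/3.0 template documentation):
cat > vm_template <<'EOF'
NAME   = persistent-vm
CPU    = 1
MEMORY = 1024
DISK   = [ IMAGE = "my-os-image" ]   # image registered in step 1
NIC    = [ NETWORK = "private" ]
EOF

onevm create vm_template
```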

Please also read the threads Storage subsystem: which one? [2] 
and Stopping persistent image corruption [3] in the mailing 
list archive. They have some more details about the whole working 
of OpenNebula with persistent VMs and storage, which should help 
you with your decisions.


  [2] 
http://lists.opennebula.org/pipermail/users-opennebula.org/2011-October/thread.html#6616
  [3] 
http://lists.opennebula.org/pipermail/users-opennebula.org/2011-October/thread.html#6617



All raid is hardware based. With the current setup what is the best way
to set it up for best fault tolerance/speed/space?
What is the best OS to start with? We currently use Centos 5.5 on all 3
nodes but would prefer Fedora or similar. Debian would be doable also.


About the RAID and storage solutions the thread linked above 
should also help with your decisions. Regarding the Linux 
distribution, I guess you should use the one which fits the 
available know-how in your team.



Would I be able to import the existing virtual machines that are running
into OpenNebula?


Yes, you need to register a persistent image ('oneimage register 
<image_template>') pointing at the current image as its source, 
and then create the VM, as described above.



We are a small state government agency with little to no IT budget so I
have to work with what I have. Please keep that in mind before
suggesting why not buy such and such Thanks in advance for the input.


The OpenNebula documentation describes the front end (where the 
OpenNebula daemons run) and cluster nodes (where the VMs run), 
but it is possible to combine it on a single system, see [4]. So 
for example you could then use the system xen-test as your test 
system with front end and cluster node on the same server. And 
for production use the system xen1 as front end and cluster node, 
and xen2 only as cluster node.


  [4] 
http://lists.opennebula.org/pipermail/users-opennebula.org/2011-October/006748.html


I hope this gives you some more insight into OpenNebula and 
points you to some possible ways to migrate your current setup. 
Feel free to discuss further things on the mailing list.



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Storage subsystem: which one?

2011-10-30 Thread Fabian Wenk

Hello Humberto

Sorry for the delay.

On 18.10.2011 10:35, Humberto N. Castejon Martinez wrote:

Thank you very much, Fabian and Carlos, for your help. Things are much
clearer now, I think.


You're welcome.


*Sharing the image repository.
If I understood right, the aim with sharing the image repository between the
front-end and the workers is to increase performance  by reducing (or
eliminating) the time needed to transfer an image from the repository to the
worker that will run an instance of such image. I have, however, a


With a shared images folder you are able to distribute the 
transfer over time, as the VM only reads (over NFS, and thus over 
the network) the parts of the image it needs, on demand. From 
the performance point of view, however, NFS is usually slower 
than access to an image on the local disk. This may 
be different with other storage solutions, e.g. a distributed 
FS spanning all the cluster nodes, or another backend storage 
solution with iSCSI and 10 Gbit/s Ethernet to the cluster nodes. 
It mostly depends on the complete setup and network 
infrastructure you have available. The best would be to do 
performance testing yourself, on your own site and 
infrastructure, to find the solution that matches your 
expectations and the needs of the VM cluster.
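A crude way to compare candidate storage paths (my suggestion, not from the thread) is to time sequential reads and writes from a cluster node, once on the local disk and once on the NFS mount:

```shell
# Rough sequential-throughput test; repeat with a path on the local
# disk and a path on the NFS mount and compare the reported rates.
# oflag=direct bypasses the page cache, so the numbers reflect the
# storage path rather than RAM (needs O_DIRECT support).
dd if=/dev/zero of=/var/lib/one/ddtest bs=1M count=1024 oflag=direct
dd if=/var/lib/one/ddtest of=/dev/null bs=1M iflag=direct
rm /var/lib/one/ddtest
```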



question/remark here. To really reduce or eliminate the transfer time, the
image should already reside on the worker node or close to it. If the image
resides on a central server (case of NFS, if I am not wrong) or on an
external shared distributed storage space (case of MooseFS, GlusterFS,
Lustre, and the like), there is still a need to transfer the image to the
worker, right? In the case of a distributed storage solution like MooseFs,
etc., the worker could itself be part of the distributed storage space. In
that case, the image may already reside on the worker, although not
necessarily, right? But using the worker as both a storage server and client
may actually compromise performance, for what I have read.


With a distributed file system it depends on how that particular 
system works. An example (I do not have any experience with 
it, but this is how I would expect such a distributed 
file system to behave):


In the example we have cluster nodes 1 to 10, all set up 
with e.g. MooseFS. We also have a persistent image, which is 
located on the MooseFS storage (and for redundancy physically 
distributed over several cluster nodes, probably also 
in parts). For the example, assume that the image is 
physically on nodes 3, 5 and 7. Now when you start a VM which 
uses this image, and the VM is started on node 1, in the 
beginning it reads the image through MooseFS over the 
network from one or more of nodes 3, 5 or 7, and it can be 
used immediately. Now I would expect MooseFS to rearrange 
the distributed file system in the background, so that over 
time the image is physically stored on node 1. 
After a while the whole image should be available 
from the local disk of node 1, and thus have the same 
performance as a normal local disk.


If somebody has experience with such systems, please tell me 
whether this idea is right or wrong.



Am I totally wrong with my thoughts here? If not, do we really increase
transfer performance by sharing the image repository using, e.g. NFS? Are
there any performance numbers for the different cases that could be shared?

* Sharing the VM_dir.
Sharing the VM_dir between the front-end and the workers is not really
needed, but it is more of a convenient solution, right?
Sharing the VM_dir between the workers themselves might be needed for
live migration. I say might because I have just seen that, for example,
with KVM we may perform live migrations without a shared storage [2]. Has
anyone experimented with this?


I'm not sure, but I guess OpenNebula depends on a shared file 
system for live migration, independent of the hypervisor used. 
You could probably do live migration with KVM and local storage 
when using KVM without OpenNebula.



Regarding the documentation, Carlos, it looks fine. I would only suggest the
possibility of documenting the 3rd case, where the image repository is not
shared but the VM_dir is shared.


I am not sure, but I think OpenNebula is currently not able to 
handle these two differently, as it is defined per cluster node 
with the 'onehost create ...' command, where you choose either 
tm_nfs or tm_ssh.
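For reference, that per-node choice is made at registration time; a sketch with example host names (driver names as used with the ONE 2.2 KVM setup):

```shell
# register two nodes with different transfer drivers (ONE 2.2 syntax):
onehost create node01 im_kvm vmm_kvm tm_nfs   # shared /var/lib/one
onehost create node02 im_kvm vmm_kvm tm_ssh   # images copied via ssh
```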



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Storage subsystem: which one?

2011-10-17 Thread Fabian Wenk

Hello Carlos

On 17.10.2011 11:34, Carlos Martín Sánchez wrote:

Thank you for your great contributions to the list!


You're welcome.


I'd like to add that we tried to summarize the implications of the shared
[1] and non-shared [2] approaches in the documentation, let us know if there
are any big gaps we forgot about.


Thank you for documenting it on the website. I think it is 
complete and mentions all important facts about the two storage 
possibilities.



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Bacula and OpenNebula

2011-10-17 Thread Fabian Wenk

Hello Richard

On 17.10.2011 12:45, Richard Palmer wrote:

Or perhaps the backup agent should run inside each virtual machine
rather than backing up the VM image? Any advantages/disadvantages?


Personally I would run the bacula-fd inside the VM and back it up 
as I do with physical systems. The advantage is that the OS 
in the VM keeps running without any interruption.


With a stopped/suspended VM and a backup of the image file, you 
have the advantage of a snapshot backup of the VM's file system. 
It would probably also help to back up the memory of the 
running VM (does OpenNebula or the hypervisor write 
this to disk when doing 'onevm suspend <vm_id>'?). The 
disadvantage is the paused OS during the backup: any running 
services on it (e.g. web sites) are not available then, and 
currently open network connections may break. This 
depends on the duration of the backup.


I guess the decision also depends on how the VM is used. Both 
scenarios have advantages and disadvantages.



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Stopping persistent image corruption

2011-10-15 Thread Fabian Wenk

Hello Richard

On 14.10.2011 12:19, Richard Palmer wrote:

Ah right, makes sense. At the moment I'm still just using the VM
templates to manage disc images and haven't got my image repository configured;
had planned to do that when I move to 3.0 but perhaps I should get on with
it now...


OK, if you just assign an image located anywhere on disk to a VM, 
then OpenNebula does not know that this is a persistent image. 
But it is also strange, because when you start a VM with an 
image located e.g. in /scratch/, that image is copied to 
VM_DIR and the VM is started from the copy there.


For the image repository you do not need much, just a folder 
which is also on the shared storage (like VM_DIR) and available 
on the cluster nodes. Create a template and then run 
'oneimage register /path/to/template'. Then modify the 
DISK entry in the VM template like this:


DISK   = [ IMAGE = name_of_image ]
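For completeness, a hedged sketch of the image template used in the register step above (attribute names as in the 2.x image template format; the name and path are examples):

```shell
# image_template - register an existing disk image as persistent:
cat > image_template <<'EOF'
NAME       = name_of_image
PATH       = /path/to/existing/disk.img
TYPE       = OS
PUBLIC     = NO
PERSISTENT = YES
EOF

oneimage register image_template
```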


bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Error in VM deploy

2011-10-13 Thread Fabian Wenk

Hello Rubens

On 10.10.2011 21:27, Rubens Pinheiro wrote:

Hello,
I've configured two machines, one with opennebula and another configured
to be a host.
I've created the host in opennebula and it's all ok (status: MONITORED)

But when I try to create a vm, there is a problem in the deployment.
Here is the vm.log:



Mon Oct 10 15:15:30 2011 [VMM][I]: Command execution fail: 'if [ -x 
/var/tmp/one/vmm/kvm/deploy ]; then /var/tmp/one/vmm/kvm/deploy 
/srv/cloud/one/var//5/images/deployment.0; else  exit 42; fi'
Mon Oct 10 15:15:30 2011 [VMM][I]: STDERR follows.
Mon Oct 10 15:15:30 2011 [VMM][I]: error: Failed to create domain from 
/srv/cloud/one/var//5/images/deployment.0
Mon Oct 10 15:15:30 2011 [VMM][I]: error: unable to set user and group to 
'118:131' on '/srv/cloud/one/var//5/images/disk.0': No such file or directory
Mon Oct 10 15:15:30 2011 [VMM][I]: ExitCode: 255
Mon Oct 10 15:15:30 2011 [VMM][E]: Error deploying virtual machine: error: 
Failed to create domain from /srv/cloud/one/var//5/images/deployment.0



I think the error is there:

Command execution fail: 'if [ -x /var/tmp/one/vmm/kvm/deploy ]; then 
/var/tmp/one/vmm/kvm/deploy /srv/cloud/one/var//5/images/deployment.0; else

Is it a syntax error?


More details are in the line: error: unable to set user and group 
to '118:131' on '/srv/cloud/one/var//5/images/disk.0': No such 
file or directory


It tries to set ownership to some user:group, which fails. To 
which user:group does this UID:GID pair map on the cluster node?



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Storage subsystem: which one?

2011-10-13 Thread Fabian Wenk

Hello Humberto

On 13.10.2011 11:03, Humberto N. Castejon Martinez wrote:

Reading the Opennebula documentation, I believe there are two things I have
to deal with:

1) The image repository, and whether it is shared or not between the
front-end and the workers


I have some persistent images which are used for persistent 
VMs. The images folder is shared with NFS and I use tm_nfs. 
When starting a VM with a persistent image, only a 
soft link is created in VM_DIR, pointing to the image in the images 
folder. If you want the persistent image copied into the 
VM_DIR on the cluster node, you need tm_ssh. But then on 
startup the whole image is copied to VM_DIR on the cluster 
node, and on shutdown it is copied back to the images folder.



2) The VM_DIR, which contains deployment files, etc. for the VMs running on
a worker. This directory may or may not be shared between the front-end and the
workers, but it should always be shared between the workers if we want live
migration, right?


If you use tm_ssh you do not need to share it; if you use 
tm_nfs, you do. The same goes for the images 
folder. For live migration you need both a shared images folder and 
a shared VM_DIR.



Some of the questions I have are these (sorry if some of them seem stupid
:-)):


They are not stupid. It took me some time to try things out and 
understand how it all works, and I had to change my setup a 
few times until OpenNebula and I were both happy.



- What are the implications of sharing or not the image repository  between
the front-end and the workers (apart from the need to transfer images to the
worker nodes in the latter case)?


See above.


- What are the implications of sharing or not the VM_DIR between the
front-end and the workers?


Also above.


- Can I use ZFS or MooseFs and still be able to live migrate VMs?


MooseFS [1] is a fault-tolerant, network-distributed file system 
(i.e. spread over several servers). I do not know if ZFS can do this. 
MooseFS is similar to NFS, except that the data is distributed 
over several servers (including your local server). I guess live 
migration should work.


 [1] http://www.moosefs.org/


- Will the VM_DIR always hold a (copy of a) VM image for every VM running
on a worker? Must the VM_DIR be on the local hard drive of a worker, or may
it reside on external shared storage?


See above: with tm_nfs it is on external shared 
storage (the NFS server); with tm_ssh it is on the local 
disk, and all images (public or persistent) are copied in full 
through ssh.



I guess two factors i should also consider when choosing a solution are the
following, right?

- The speed of transferring VM images
- The speed of cloning VM images


Yes. Access to a persistent image through NFS only transfers the 
data actually needed; cloning always creates a full copy. When you use 
tm_ssh, the image is always copied in full through ssh 
(which also adds some CPU overhead on the front end and the cluster 
node).


I hope this helps and my information is correct; if not, could 
somebody from OpenNebula please correct me.



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Stopping persistent image corruption

2011-10-13 Thread Fabian Wenk

Hello Richard

On 13.10.2011 12:06, Richard Palmer wrote:

After stupidly launching a VM using a persistent disc image twice,
I'm very grateful to e2fsck for cleaning up the ensuing filesystem
corruption, but wondered if there is anything I could put in the
template file to tell OpenNebula not to allow this to happen? Some
sort of unique instance flag?


Strange, this should be prevented by these settings in the image 
template (with OpenNebula 2.2.1; I do not know about 3.0):


PUBLIC  = NO
PERSISTENT  = YES

Check the output of 'oneimage show <image_id>' while one VM is 
running with this image; it should show the following (which should 
lock the image for any other VM):


# oneimage show 6
IMAGE INFORMATION
---
ID : 6
NAME   : image-name
TYPE   : OS
REGISTER TIME  : 10/05 15:38:33
PUBLIC : No
PERSISTENT : Yes
SOURCE : /path/to/image-uniq_id
STATE  : used
RUNNING_VMS: 1


bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Bugreport / Patch for MySQL with InnoDB (instead of MyISAM)

2011-09-29 Thread Fabian Wenk

Hello Carlos

On 29.09.2011 15:43, Carlos Martín Sánchez wrote:

It's a bit too late to apply and test this patch for the final 3.0 release,


Ok, no problem for me. If now somebody is running into this 
problem, it is at least documented with a few possible solutions 
or workarounds.



but I've opened a ticket [1] to include it in the next release.


Thank you.


bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Bugreport / Patch for MySQL with InnoDB (instead of MyISAM)

2011-09-28 Thread Fabian Wenk

Hello

According to the posting Re: [one-users] Opennebula 2.2.1 Failed 
to create database tables [1] from Max Hennig, I prepared the 
attached patches (for 2.2.x and 2.9.90), which solve the problem 
with the first start of oned, when the database is initialized.


  [1] 
http://lists.opennebula.org/pipermail/users-opennebula.org/2011-August/006260.html


When I had the problem, the first start of oned could not 
create the tables while the setting 
default_storage_engine = InnoDB was present in my.cnf. After removing 
it (and restarting MySQL) it worked, as MySQL then uses 
the default MyISAM storage engine. But there are good reasons for 
using InnoDB as the default storage engine in MySQL, so it would 
be helpful if OpenNebula also worked with it.


The attached patches only change all the VARCHAR(256) to 
VARCHAR(255). I tested the patch with OpenNebula 2.2.1 (MySQL 
with InnoDB) and it is working fine so far, and I guess it 
should also work with 2.9.90. It would help if somebody 
could test it with 2.9.90 and then make these changes in the 
source repository before the next RC or the final build for 3.0.


I do not know whether it is a good idea to have the upgrade 
script do these modifications on an already running MySQL 
database. To do so, the three 'alter table ... VARCHAR(255);' 
commands from below would be needed (for an existing 2.2.1 
database). But reducing the field length could cause problems 
if a field is filled to the limit. I also do not know whether 
oned or the one* commands check the field length before 
inserting data into the database; if they do, that should be 
adjusted in the source code as well.


To convert an already running MySQL opennebula database from 
MyISAM to InnoDB, I did the following steps (with OpenNebula 
2.2.1). It is probably a good idea to stop OpenNebula during 
these modifications. First create a backup with:

mysqldump -u root -p opennebula > opennebula.mysql

And then convert the tables with the mysql client:
mysql -u root -p
mysql> use opennebula
mysql> alter table host_pool modify host_name VARCHAR(255);
mysql> alter table network_pool modify name VARCHAR(255);
mysql> alter table user_pool modify user_name VARCHAR(255);
mysql> alter table cluster_pool ENGINE=InnoDB;
mysql> alter table history ENGINE=InnoDB;
mysql> alter table host_pool ENGINE=InnoDB;
mysql> alter table host_shares ENGINE=InnoDB;
mysql> alter table image_pool ENGINE=InnoDB;
mysql> alter table leases ENGINE=InnoDB;
mysql> alter table network_pool ENGINE=InnoDB;
mysql> alter table user_pool ENGINE=InnoDB;
mysql> alter table vm_pool ENGINE=InnoDB;

To check the current properties of a table the following MySQL 
command can be used:

mysql> show create table <table_name>;
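To verify the conversion in one step (my addition for illustration; information_schema is available in MySQL 5.0+), the storage engine of every table can be listed at once:

```shell
mysql> SELECT table_name, engine FROM information_schema.tables
    -> WHERE table_schema = 'opennebula';
```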


bye
Fabian
--- a/src/host/Host.cc  2011-06-08 15:15:46.0 +0200
+++ b/src/host/Host.cc  2011-09-28 15:42:11.0 +0200
@@ -56,7 +56,7 @@
   tm_mad,last_mon_time, cluster, template;
 
 const char * Host::db_bootstrap = CREATE TABLE IF NOT EXISTS host_pool (
-oid INTEGER PRIMARY KEY,host_name VARCHAR(256), state INTEGER,
+oid INTEGER PRIMARY KEY,host_name VARCHAR(255), state INTEGER,
 im_mad VARCHAR(128),vm_mad VARCHAR(128),tm_mad VARCHAR(128),
 last_mon_time INTEGER, cluster VARCHAR(128), template TEXT, 
 UNIQUE(host_name));
--- a/src/um/User.cc2011-06-08 15:15:46.0 +0200
+++ b/src/um/User.cc2011-09-28 15:42:25.0 +0200
@@ -53,7 +53,7 @@
 const char * User::db_names = oid,user_name,password,enabled;
 
 const char * User::db_bootstrap = CREATE TABLE IF NOT EXISTS user_pool (
-oid INTEGER PRIMARY KEY, user_name VARCHAR(256), password TEXT,
+oid INTEGER PRIMARY KEY, user_name VARCHAR(255), password TEXT,
 enabled INTEGER, UNIQUE(user_name));
 
 /* -- 
*/
--- a/src/vnm/VirtualNetwork.cc 2011-06-08 15:15:46.0 +0200
+++ b/src/vnm/VirtualNetwork.cc 2011-09-28 15:42:39.0 +0200
@@ -78,7 +78,7 @@
 
 const char * VirtualNetwork::db_bootstrap = CREATE TABLE IF NOT EXISTS
  network_pool (
- oid INTEGER PRIMARY KEY, uid INTEGER, name VARCHAR(256), type INTEGER, 
+ oid INTEGER PRIMARY KEY, uid INTEGER, name VARCHAR(255), type INTEGER, 
  bridge TEXT, public INTEGER, template TEXT, UNIQUE(name));
 
 /* -- 
*/
--- a/src/group/Group.cc2011-09-23 16:56:55.0 +0200
+++ b/src/group/Group.cc2011-09-28 17:27:03.0 +0200
@@ -27,7 +27,7 @@
 const char * Group::db_names = oid, name, body;
 
 const char * Group::db_bootstrap = CREATE TABLE IF NOT EXISTS group_pool (
-oid INTEGER PRIMARY KEY, name VARCHAR(256), body TEXT, 
+oid INTEGER PRIMARY KEY, name VARCHAR(255), body TEXT, 
 UNIQUE(name));
 
 /*  */
--- a/src/host/Host.cc  2011-09-23 

Re: [one-users] Opennebula 3.0 RC1 and persistent images in KVM VMs

2011-09-26 Thread Fabian Wenk

Hello Alberto

On 25.09.2011 00:27, Alberto Picón Couselo wrote:

We have some problems using persistent KVM images in OpenNebula 3.0 RC1.

Our configuration is as follows:

OpenNebula front-end: Ubuntu LTS 10.04
KVM worker node: Debian Squeeze 6.0.2
NAS for NFS shared storage



Sat Sep 24 23:49:08 2011 [VMM][I]: Command execution fail: 'if [ -x
/var/lib/one/remotes/vmm/kvm/deploy ]; then
/var/lib/one/remotes/vmm/kvm/deploy /var/lib/one/212/images/deployment.0
tc-kvm-hv02 212 tc-kvm-hv02; else  exit 42; fi'
Sat Sep 24 23:49:08 2011 [VMM][I]: error: Failed to create domain from
/var/lib/one/212/images/deployment.0
Sat Sep 24 23:49:08 2011 [VMM][I]: error: internal error process exited
while connecting to monitor: qemu: could not open disk image
/var/lib/one/212/images/disk.0: Permission denied



Please, can you give us any clue regarding this issue? Persistent mode
for KVM VMs is essential for us...


Is root allowed to read/write in the NFS-mounted images folder? 
Check the options in /etc/exports on the NFS server. 
You may also need to force the client (cluster node) to 
mount it using NFSv3 (instead of NFSv4).
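An example /etc/exports line that does not squash root (path and network are assumptions; no_root_squash has security implications, so restrict it to the cluster network):

```shell
# /etc/exports fragment on the NFS server: allow root on the cluster
# nodes to read/write the shared OpenNebula directory.
/var/lib/one  192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)
# reload the export table after editing:
exportfs -ra
```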


With persistent images, the image stays in the images folder and 
is only linked from the <vm_id>/images/ folder. KVM runs with 
root privileges.



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] REG: onevnet list

2011-09-22 Thread Fabian Wenk

Hello Sriviatsan

On 22.09.2011 10:12, srivatsan jagannathan wrote:


Having the same problem with this configuration also:
NAME   = test150
TYPE   = FIXED
BRIDGE = eth1
LEASES = [IP=192.168.58.150]

OUTPUT
onevnet show 108
VIRTUAL NETWORK 108 INFORMATION
ID:   : 108
UID:  : 0
PUBLIC: N

VIRTUAL NETWORK TEMPLATE
BRIDGE=eth1
LEASES=[ IP=192.168.58.150 ]
NAME=test150
TYPE=FIXED

LEASES INFORMATION
LEASE=[ IP=192.168.58.100, MAC=02:00:c0:a8:3a:64, USED=0, VID=-1 ]
LEASE=[ IP=192.168.58.150, MAC=02:00:c0:a8:3a:96, USED=0, VID=-1 ]


I have just tried your setup on my system, and everything looks good:

# cat test150.net
NAME   = test150
TYPE   = FIXED
BRIDGE = eth1
LEASES = [IP=192.168.58.150]
#

# onevnet create test150.net
#

# onevnet list
  ID USER NAME  TYPE BRIDGE P #LEASES
[...]
   5 admin    test150  Fixed   eth1 N   0
#

# onevnet show 5
VIRTUAL NETWORK 5 INFORMATION 


ID:   : 5
UID:  : 0
PUBLIC: N

VIRTUAL NETWORK TEMPLATE
BRIDGE=eth1 


LEASES=[ IP=192.168.58.150 ]
NAME=test150
TYPE=FIXED

LEASES INFORMATION
LEASE=[ IP=192.168.58.150, MAC=02:00:c0:a8:3a:96, USED=0, VID=-1 ]
#

Did a network named test150 already exist when you created it 
with this template? That could explain why you are seeing two 
LEASE lines (192.168.58.100 and 192.168.58.150).


But it is rather strange that your 'onevnet list' shows no 
output at all. Could it be that the database was damaged 
in some parts?


What are you using as backend database, sqlite or MySQL?


bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] REG: onevnet list

2011-09-21 Thread Fabian Wenk

Hello Srivatsan

On 20.09.11 12:06, srivatsan jagannathan wrote:

Trying to add a virtual network, type fixed (see bottom).
onevnet -v create X.net
-> returns vnet-number (10)

onevnet show <vnet-number> (works fine, lists information)

onevnet list  --- displays nothing (only header information, rest
blank)


That is somewhat strange; did you see any errors in the log files?


#Now we'll use the cluster private network (physical)
BRIDGE = virbr0


Does the interface virbr0 exist? Is it really a bridge interface? 
You can check with 'brctl show'.
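For reference, checking the bridge and, if it is missing, creating one by hand could look like this (the bridge and physical interface names are examples, adjust to your setup):

```shell
# list existing bridges and their enslaved interfaces:
brctl show

# if virbr0 does not exist, create a bridge manually (example names):
brctl addbr virbr0
brctl addif virbr0 eth0    # enslave the physical interface
ip link set virbr0 up
```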



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Users Digest, Vol 43, Issue 32

2011-09-21 Thread Fabian Wenk

Hello Bala

On 19.09.11 18:11, bala suru wrote:

But I'm facing one more problem when I try to save the VM using the
following commands:
onevm saveas <vm-id> <disk-id> <imagename>
onevm shutdown <vm-id>


OK, these steps are correct. Does 'onevm list' still show the VM 
which you just shut down?


Because:


Here is the output of oneimage show 32
ID : 32
NAME   : alliswell
TYPE   : OS
REGISTER TIME  : 09/13 16:12:13
PUBLIC : No
PERSISTENT : No
SOURCE :
/srv/cloud/one/var/images/def71f75ceab7d2fa444927b8e1588633c547422
STATE  : rdy
RUNNING_VMS: 1


There is still a VM running which uses this image. You also 
need to change some other settings, e.g. set PERSISTENT (see the 
help of oneimage) to Yes.
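With the 2.2 CLI this should be a one-liner via the persistent subcommand (image ID 32 taken from the output above; verify the subcommand against 'oneimage help' on your install):

```shell
oneimage persistent 32   # mark image 32 as persistent
oneimage show 32         # PERSISTENT should now read Yes
```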



IMAGE
TEMPLATE
DEV_PREFIX=hd
NAME=alliswell
TYPE=OS


Depending on the settings used when the OS on this disk was 
installed, you probably want to change DEV_PREFIX to sd (for a 
SCSI disk) with the 'oneimage' command. This is a known bug of 
the saveas command in 2.2.1.



Since I'm able to save/clone the VM, I guess there is no problem with user
rights to access the NFS shared folder.


But your error says:
cp: cannot stat 
`/srv/cloud/one/var/images/def71f75ceab7d2fa444927b8e1588633c547422': 
No such file or directory


If the VM did not shut down properly (from OpenNebula's point of 
view), the image is not copied/moved to the images folder. 
The entry shows up in 'oneimage list' and 'oneimage show <image_id>', 
but the file is not yet in your images folder.



bye
Fabian
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] onevm save method error

2011-09-21 Thread Fabian Wenk

Hello Bala

On 20.09.11 13:44, bala suru wrote:

I need to save a modified VM as a new one -
can I do a normal copy here to save the VM which is running and modified?


You could first register a persistent image, then create a 
persistent VM, run it and do your installation/updates. Then run 
'onevm shutdown <vm_id>'. Now you could create another VM using 
the same registered image (which probably makes not much sense), 
or copy the image out of your images folder, create a new template 
and register the copied image under a new name. Then create a new 
persistent VM which uses the new image.



Can I deploy the new VM using the save Image ..?


Probably as described above. But to clone a running image, the 
'onevm saveas vm_id disk_id image_name' command followed by 
'onevm shutdown vm_id' (as described in your link, the VM and the 
OS running in the VM need to support ACPI) is probably easier to 
use, and you do not need to manually copy the image around.
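Put together, the clone-while-running workflow is just two commands plus a check (the VM id, disk id and image name below are examples):

```
onevm saveas 44 0 alliswell-copy   # mark disk 0 of VM 44 to be saved as a new image
onevm shutdown 44                  # on clean shutdown the disk is copied to the repo
oneimage list                      # the new image shows up once the copy is done
```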



bye
Fabian


Re: [one-users] onevm saveas error

2011-09-21 Thread Fabian Wenk

Hello Bala

On 21.09.11 07:37, bala suru wrote:

I have used a simple way to copy the running VM image (instead of
onevm saveas):
1. I did some modifications to the running image (creating some files)
2. issued onevm suspend
3. copied the var/images/uuid file of the running image
4. registered the above copied image as a new image
5. launched the new VM using the above image

But I could not see any modified files on this VM?


This could have two reasons. First, is this a persistent image? 
Second, did the VM have enough time to write back the changes to 
the image? But I guess the second one is difficult to find out, as 
the OS in the VM and probably also the VM layer do some caching in 
memory and the write back to the image is delayed for a longer time.
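One way to reduce that risk is to flush the guest's dirty buffers to disk from inside the VM before suspending it; on a Linux guest that is simply:

```shell
sync   # block until all dirty file system buffers are written out
```

Note that this only flushes the guest's own caches; any caching done by the VM layer below it is not affected.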



Are the steps which I followed correct?


I do not think that this is really supported by ONE. You are 
taking an image away from a running VM (even in the suspended 
state), which could have a not cleanly saved state of the 
file system in the image. It is much better to use an OS and VM 
which support ACPI and then use the 'onevm saveas vm_id 
disk_id image_name' and 'onevm shutdown vm_id' commands.



bye
Fabian


Re: [one-users] changing virtual network online

2011-09-21 Thread Fabian Wenk

Hello Samuel

On 21.09.11 14:45, samuel wrote:

I've just been wondering whether it is possible to change the virtual network
that a virtual machine is attached to once it has been working (deploy-run).


With 'onevnet' you can change some settings of a network, e.g. 
leases / MAC addresses. But as I understand it, you are trying to 
modify settings which belong to the VM itself.



I've tried to modify the deployment.0 file but it did not affect the newly
restarted machine. Might it be a problem with the underlying MySQL database
that also has to be changed?


As far as I know, it is currently not supported to change an 
already running VM.



Use Case:
*create a new virtual machine and just forgot to attach a virtual network
*modify a virtual network (in case of VLAN change)
*attach a new interface to a running machine.


The steps which could work, and will give minimal downtime, are 
probably the following:


- update your VM template with the new / changed network / NIC
- run 'onevm shutdown vm_id' (with a registered persistent disk 
image, all the modifications in the VM are preserved).
- wait until shutdown has completed, check with 'onevm list' or 
better 'onevm top'
- run 'onevm create template' and the same VM will boot up (now 
with a different VM ID, but with the same registered persistent 
disk image)
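As a command sequence, the steps above might look like this (the VM id and template file name are examples):

```
onevm shutdown 42          # stop the VM; a persistent image keeps all changes
onevm list                 # wait here until the VM has finished shutting down
onevm create vm.template   # boot again with the updated NIC / network section
```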



bye
Fabian


Re: [one-users] Error monitoring host

2011-09-21 Thread Fabian Wenk

Hello Humberto

On 21.09.11 17:21, Humberto N. Castejon Martinez wrote:

Wed Sep 21 17:03:44 2011 [InM][I]: Command execution fail: 'if [ -x
/var/tmp/one/im/run_probes ]; then /var/tmp/one/im/run_probes kvm joker;
else $
Wed Sep 21 17:03:44 2011 [InM][I]: STDERR follows.
Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied, please try again.
Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied, please try again.
Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied (publickey,password).
Wed Sep 21 17:03:44 2011 [InM][I]: ExitCode: 255
Wed Sep 21 17:03:44 2011 [InM][E]: Error monitoring host 0 : MONITOR FAILURE
0 Could not monitor host joker.


I guess the ssh login from the front end to the cluster node with 
the user oneadmin does not work.


Try manually from the front end to use 'ssh -v oneadmin@joker' 
(the -v gives some verbose output; if you increase it, e.g. with -vvv, 
it will give even more). I guess on the cluster node the public 
key of oneadmin is missing from ~oneadmin/.ssh/authorized_keys (or 
the file is not readable for oneadmin). Check also the sshd log 
file on the cluster node.
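If the key setup turns out to be the problem, a minimal fix sketch looks like this (the key path and host name are examples; ssh-copy-id asks for the password once):

```
# on the front end, as oneadmin:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # only if no key pair exists yet
ssh-copy-id oneadmin@joker                 # appends the public key to authorized_keys
ssh oneadmin@joker hostname                # must now work without a password prompt
```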



bye
Fabian


Re: [one-users] onevm saveas error

2011-09-15 Thread Fabian Wenk

Hello

On 15.09.2011 07:39, bharath pb wrote:

I tried to save the running VM image using
onevm saveas vm_id disk_id image_name
onevm shutdown vm_id



Fri Sep  9 10:11:46 2011 [TM][E]: Error excuting image transfer script: cp:
cannot stat
`/srv/cloud/one/var//images/e0561a492c9aaac280479f2f0d85dcced9156fbf': No
such file or directory


Does the oneadmin user have write permissions in that path?
Which path have you set in the VM_DIR option in oned.conf?
What does 'oneimage list' and 'oneimage show id' say?
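A quick way to answer those questions on the front end (the image id and path below are examples taken from your log):

```
ls -ld /srv/cloud/one/var/images   # directory must be writable by oneadmin
oneimage list                      # registered images and their states
oneimage show 44                   # the SOURCE path shown must exist on disk
```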


Fri Sep  9 10:11:47 2011 [DiM][I]: New VM state is FAILED
Fri Sep  9 10:11:47 2011 [TM][W]: Ignored: LOG - 44 tm_delete.sh: Deleting
/srv/cloud/one/var//44/images


I guess the registered images (which are also created when you run 
'onevm saveas ...') should be somewhere else and not in the var 
folder, which is used by the running VMs.



bye
Fabian


Re: [one-users] Non clonable readonly shared OS image?

2011-09-08 Thread Fabian Wenk

Hello Ismael

On 07.09.2011 19:15, Ismael Farfán wrote:

2011/9/7 Roger Pau Monné:

 I don't think it's possible to launch an OS from a single image
 multiple times...


Actually I can, the problem is that the contextualization needs to modify
the hostname, interfaces, passwd, shadow... since many VMs do that
it corrupts the FS (actually, only those files, I think).


Usually every Unix-like OS does write to the disk, mostly in /var/, 
e.g. log, pid and lock files. Now when you have several VMs writing 
to the same disk-based file system at the same time, it will 
corrupt the file system. There is no clean file locking available 
when a file system on a disk (or image) is mounted by more than 
one running OS.


I guess your only chance is to create a Live CD based on your own 
installation (your distribution should provide tools to do this); 
then it should work, as an OS booted from a Live CD creates 
union mounts with a RAM disk, so writes to the file system go 
into RAM and are not written back to the disk (in this case the 
ISO image).



Mainly I was wondering why with readonly=yes the VM doesn't boot at
all, it fails before even calling kvm and I can't figure out why.


I guess this is not supported by ONE if an OS image (not a CD 
image) is set to read only, and it will abort at an early stage. 
Did you see anything in the log/one/VID.log file?



I'll try modifying fstab to mount / as readonly and set readonly=no in the
VM description file, maybe that'll work.


I guess your OS will then not work properly and will have other 
strange problems if / is read only.



2011/9/7 Matthew Smith:

 Hi

 Have you tried doing this with a 'live CD' distribution image (which is by
 its nature normally a read only boot)?


I haven't. The idea is that my OS image works as a live CD since I need to
install some random stuff until it works as a virtual cluster.


During the first phase (as long as you need to set up your system), 
just create a persistent disk image and run a single VM with it. 
When you have everything installed as needed, use the tools from 
your distribution to create a Live CD. Eventually you need to 
start your installation based on the Live CD provided by your 
distribution. When you have created your Live CD, you have to 
register this ISO image in ONE. And then you can create several 
VMs which use the registered ISO image (as a CD) as the boot file 
system.
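A VM template booting such a registered Live CD might look roughly like this (a sketch only; the image name and sizes are made up, and the exact attribute names can differ between ONE versions):

```
NAME   = cluster-node
CPU    = 1
MEMORY = 512
DISK   = [ IMAGE = "my-livecd", READONLY = yes ]
OS     = [ BOOT = cdrom ]
```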



bye
Fabian


Re: [one-users] Non clonable readonly shared OS image?

2011-09-08 Thread Fabian Wenk

Hello Ismael

On 08.09.2011 20:03, Ismael Farfán wrote:

2011/9/8 Fabian Wenk:



 Mainly I was wondering why with readonly=yes the VM doesn't boot at
 all, it fails before even calling kvm and I can't figure out why.


 I guess this is not supported from ONE if an OS image (not CD image) is set
 to read only, and it will abort at an early stage. Did you see anything in
 the log/one/VID.log file?


I attached the log (almost the same as in the first mail); whatever ONE does
differently while setting the image as readonly makes libvirt throw this error that
I haven't been able to fix yet:
failed to retrieve chardev info in qemu with 'info chardev'


Google [42] does find many reports with this particular message, 
maybe you can find a hint there.


  [42] 
http://www.google.com/search?q=%22failed%20to%20retrieve%20chardev%20info%20in%20qemu%20with%20%27info%20chardev%27%22


According to your attached log file it fails when ONE tries to 
start kvm, probably because kvm does not support booting from a hd 
image which is set to read only. I guess all these parameters are 
delivered through libvirt to kvm. So kvm would be able to honor such 
a parameter as a read-only disk from the hardware point of view, so 
that the OS in the VM has no possibility to write to the disk.


I have seen similar errors during my first steps with ONE, which 
most often were of no real help. But sometimes I found a hint in 
the libvirt log file. I do not remember if I tried it with a hd 
image which was set to read only.



bye
Fabian