Hi,
I shared the directory, but after that I was still getting the same error. Then I
shared the whole datastore directory, but the problem was still there. Now I have
installed it on the same node, and the problem persists...
Can you please tell me what the problem is, ASAP?
I am still getting the error:
Mon Sep 10 11:53:32 2012 [DiM][I]: New VM state is ACTIVE.
Mon Sep 10 11:53:32 2012 [LCM][I]: New VM state is PROLOG.
Mon Sep 10 11:53:32 2012 [VM][I]: Virtual Machine has no context
Mon Sep 10 11:53:34 2012 [TM][I]: Command execution fail:
Hi,
I am unable to launch a new VM if I set the value passwd=P@$$w0rd in the VM
template.
My VM template is as below.
ARCH = x86_64
CONTEXT = [
BROADCAST = $NETWORK[BROADCAST, NETWORK_ID=\0\],
DNS = $NETWORK[DNS, NETWORK_ID=\0\],
FILES=
Hi
When I deploy the machine on VMware 5.0 I get: internal error HTTP response
code 500 for upload.
Can anybody help, please?
Thanks in advance.
Mon Sep 10 07:11:04 2012 [TM][D]: Message received: LOG - 37 tm_ln.sh: Creating
directory /srv/cloud/one/var/images/pedrot/37/images
Mon Sep 10
Just a suggestion: can you try to replicate the
/srv/cloud/one/bin/tty_expect -u oneadmin -p virsh -c
esx://esx/?no_verify=1 define /srv/cloud/one/var/37/deployment.0
command by hand? You can pass the -d2 or -d3 option to virsh so that you'll
have more debug output and decide what is
Hi,
OpenNebula is running in CentOS 6.3.
The /var/log/libvirt/qemu/one-9.log file doesn't exist in the host. When I
create a virtual machine manually in the host (with virt-manager) a
/var/log/libvirt/qemu/newVM.log file is created, but when the virtual machine
is created with OpenNebula,
Hello,
The parser evaluates variables starting with $ inside the strings, so
that's not a valid character. An alternative would be to use a base64
encoded value and decode it inside the VM.
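A minimal sketch of that workaround, using the password from the earlier template (assumes coreutils base64 is available on both the frontend and inside the VM):

```shell
# On the frontend, encode the password before putting it in the template,
# so no '$' character ever reaches the template parser:
PASSWD_B64=$(printf '%s' 'P@$$w0rd' | base64)

# Inside the VM's contextualization script, decode it back:
PASSWD=$(printf '%s' "$PASSWD_B64" | base64 -d)
printf '%s\n' "$PASSWD"
```

The base64 string contains only characters the parser treats as literals, so it survives the template round trip untouched.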
Cheers,
Jaime
On Mon, Sep 10, 2012 at 8:36 AM, Mrt Raju mrtr...@gmail.com wrote:
Hi,
I was unable to
This is what I get:
oneadmin@opennebula:~/var$ /srv/cloud/one/bin/tty_expect -u oneadmin -p
virsh -c esx://esx/?no_verify=1 define
/srv/cloud/one/var/37/deployment.0
error: Failed to define a domain for /srv/cloud/one/var/37/deployment.0
error: internal error Response code
Hi,
On 6 September 2012 16:08, Emmanuel Mathot emmanuel.mat...@terradue.comwrote:
Hello,
Is it possible to give specific rights on an instance type to a user group
in the OCCI frontend?
This could be done easily if the OCCI interface could see the registered
Virtual Machine Templates
What I meant was:
/srv/cloud/one/bin/tty_expect -u oneadmin -p virsh -d2 -c
esx://esx/?no_verify=1 define /srv/cloud/one/var/37/
deployment.0
Try with -d1, -d2, -d3 and check the output for more informative error.
Regards
Marco
On Mon, Sep 10, 2012 at 11:02 AM, PJ PJ
Hello,
Short answer: can't be achieved currently, but it's an interesting use case
and we will include it in OpenNebula 3.8 [1].
Long answer:
What you need is a DISK which will not be cloned (CLONE = NO), will not be
saved (SAVE = NO), and that is marked as read-only (READONLY = YES).
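Rendered as a template fragment, the combination described would look like this (a sketch in OpenNebula template syntax; the image name is hypothetical):

```
DISK = [
  IMAGE    = "readonly-base",
  CLONE    = "NO",
  SAVE     = "NO",
  READONLY = "YES" ]
```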
Currently
Hi,
Datastores were introduced in 3.4. Unless there is a good reason not to,
I'd suggest you to upgrade to the latest 3.6 version.
Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
Hello Anton,
The qcow2 drivers assume, as with the shared drivers, that the datastore is exported
to the host using a distributed file system like NFS.
Considering we're talking about persistent images, why bother doing
qemu-img create and then qemu-img commit to save the changes, if you
can just do ln -s
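The symlink alternative can be sketched like this (the paths are stand-ins; in a real deployment they come from the OpenNebula datastore and VM directories):

```shell
# Sketch of the ln -s approach for a persistent image.
SRC=$(mktemp)                    # stands in for the persistent source image
VM_DIR=$(mktemp -d)              # stands in for the VM directory
ln -s "$SRC" "$VM_DIR/disk.0"    # the VM then writes directly to the source
readlink "$VM_DIR/disk.0"        # resolves back to the persistent image
```

Since the image is persistent, changes are meant to land in the source image anyway, so the overlay-and-commit dance buys nothing here.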
Hi,
I have managed to get OpenNebula running on a CentOS 6.3 host.
First thing that comes to mind is polkit. I honestly don't remember the
exact error I received, but I do know that OpenNebula didn't work until
I configured polkit.
# cat
Dear Jaime,
thank you for the quick reply.
Thanks to your explanations, we have managed to get the whole process
working as we wanted, without modifying the qcow2 scripts.
But to achieve that, we had to manually convert our image to the
qcow2 format before importing it into OpenNebula.
Is
Hi,
We don't have global quotas; the limits can only be set per user or per
group. You could develop an authentication driver that denies the creation
of new VMs based on global usage limits, but this is not straightforward.
If you execute any one* command from the drivers, the request will be
Oops, sorry, the if line should be negated:
wrong:
-if file -b $DST|grep -q QCOW; then
good:
+if ! file -b $DST|grep -q QCOW; then
cheers,
Jaime
On Mon, Sep 10, 2012 at 3:57 PM, Jaime Melis jme...@opennebula.org wrote:
Hello Anton,
I would do something like:
Thanks for looking into this Jaime.
Ideally, we would not need a template per machine, but just one for all
machines using a specific readonly base image. Not sure if that's what
you meant below or not...
I would love to be in the loop as you plan this feature.
Thanks!
-C
On Mon, 2012-09-10
Hi
This is now implemented and ready for 3.8. Basically, there are two
scripts, premigrate and postmigrate, executed before and after the
live migration command on the hypervisor.
You can check the docs [1], here with a description of the arguments.
Just search for premigrate.
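As an illustration only, a premigrate hook could have this shape (the argument names below are assumptions, not the documented interface; the docs list the real arguments):

```shell
#!/bin/sh
# Illustrative premigrate hook skeleton. Argument names are assumptions;
# consult the OpenNebula 3.8 docs for the actual argument list.
VM_ID=$1
SRC_HOST=$2
DST_HOST=$3
echo "premigrate: preparing VM $VM_ID for move from $SRC_HOST to $DST_HOST"
```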
Let us know if you
Hi
My System:
# uname -a
Linux chaos 3.2.0-3-amd64 #1 SMP Mon Jul 23 02:45:17 UTC 2012 x86_64
GNU/Linux
ii  opennebula         3.4.1-3.1  amd64  controller which executes the OpenNebula cluster services
ii  opennebula-common  3.4.1-3.1  all    empty
I have made the below error disappear by setting the system password to
the same password found in oneadmin's ~/.one/one_auth file,
e.g. passwd oneadmin
to set the password.
I can now restart OpenNebula without an auth error, but my pending VMs
still remain.
I have tried resubmitting and