Hi Tino,
I don't see a vmware.log file in that directory:
~ # ls -al /vmfs/volumes/135/27/disk.0/
drwxr-xr-x    1 root     root           560 Jan 17 12:28 .
drwxr-xr-x    1 root     root           700 Jan 17 12:29 ..
-rw-------    1 root     root     107374080 Jan 17 12:28 disk-flat.vmdk
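To double-check, one could also search the whole volume tree for any VMware log that might have landed elsewhere; a rough sketch (the name glob is just a guess):
~ # find /vmfs/volumes -name 'vmware*.log'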
Hi Stefan,
Thank you for sharing this; we'll keep an eye on this initiative.
Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula | http://twitter.com/opennebula
On Fri,
Hi Dirk,
That means that the ESX host is not even trying to boot the VM. Let's
see if at least the deployment file is generated. In the front-end,
what are the contents of:
/var/lib/one/vms/26/deployment.0
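If the file is there, it should contain a libvirt domain definition. As a rough sketch of what to expect (all values illustrative, not taken from your setup):
$ cat /var/lib/one/vms/26/deployment.0
<domain type='vmware'>
  <name>one-26</name>
  <memory>524288</memory>
</domain>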
Regards,
-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple
--
Constantino
Hi Ed,
I think this would be better answered by the author of those drivers; I've
CC'd him.
By the way, as far as I know, he is working on this addon:
https://github.com/OpenNebula/addon-shared-lvm-single-lock
which is the same as the one described in that wiki page.
Cheers,
Jaime
On Fri, Jan 17, 2014
Hi Ruben,
Thank you for your reply. Here is what I found out:
1- I had a cluster defined but I deleted it, so right now I don't have any.
2- The only host I have at the moment is correctly monitored. And the
output of monitor_ds.sh is correctly showing info about the datastore (I
cannot
Have you upgraded to 4.4 from an older version? In previous versions
system datastores were not monitored, and the old probes do not get that
information. In this case execute:
$ onehost sync
It should copy the new ones to the hosts.
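A minimal way to verify the sync, assuming the default remote scripts directory /var/tmp/one and a host named host01 (both of these are assumptions; adjust for your setup):
$ onehost sync
$ ssh host01 'ls /var/tmp/one/im'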
On Mon, Jan 20, 2014 at 12:09 PM, Javier javier.alva...@bsc.es
Dear all,
we will participate in the next CentOS Dojo in Brussels (31st January) and
in FOSDEM the following day.
More information:
http://opennebula.org/opennebula-at-centos-brussels-dojo-and-fosdem-2014/
Hope to see you there!
cheers,
Jaime
--
Jaime Melis
Project Engineer
OpenNebula -
Hi everybody,
Is anyone using Huawei hardware for their OpenNebula infrastructure? We're
thinking about evaluating a Huawei blade chassis with a NAS storage server, and
I'd like to know if anyone has experience with this hardware.
Thank you,
Juanjo Fuentes
SIGMA Gestion Universitaria
Hi Ruben,
Below is the output of 'ps -ef | grep one' on a host that has been
disabled, rebooted and enabled. There are multiple instances of
'collectd-client.rb kvm' running.
We discovered a serious issue today that is having an adverse
effect on our DNS system. When the
Hello list,
Could anybody clarify whether OpenNebula currently supports the QEMU Guest
Agent (http://wiki.libvirt.org/page/Qemu_guest_agent) in some way?
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
Hi Javier,
Even though I updated to 4.4 after the problem appeared, it looks like
'onehost sync' has solved the issue.
Thank you,
Javier
On 20/01/14 12:25, Javier Fontan wrote:
Have you upgraded to 4.4 from an older version? In previous versions
system datastores were not monitored and old
Thank you. I'm hoping to get some clarity in the next day or so on this
matter.
On Mon, Jan 20, 2014 at 4:26 AM, Jaime Melis jme...@opennebula.org wrote:
Hi Ed,
I think this would be better answered by the author of those drivers. I've
CC'd him.
By the way, as far as I know, he is working
Hi Dirk,
Excellent, let's see what happens when libvirt is invoked directly. As
oneadmin, in the front-end:
$ virsh -c 'esx://devmesx1.intern.vgt.vito.be/?no_verify=1&auto_answer=1' \
  define /var/lib/one/vms/26/deployment.0
and then
$ virsh -c
Hi Tino,
In the meantime I deployed and undeployed some VMs, so the ID used is now 38
instead of 26.
If I execute the commands using the virsh shell, the VM gets deployed in VMWare
and I can see it running in the vSphere client.
See below:
virsh # define /var/lib/one/vms/38/deployment.0
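If it is useful, the next step from the same virsh shell would be to start the freshly defined domain; assuming OpenNebula's usual one-<id> naming, that would be something like:
virsh # start one-38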
The problem seems to be the high number of collectd processes running.
Try killing all collectd-client.rb processes; there should be only
one running per host.
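A quick sketch of the cleanup on an affected host (assuming pkill is available; afterwards confirm that only a single instance comes back):
$ pkill -f collectd-client.rb
$ ps -ef | grep collectd-client.rb | grep -v grep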
In case you want to use the old method of monitoring you can follow this guide:
Hi, is it possible to tell OpenNebula that a VM that is in the unknown
state on a failed host is now running on a new host? It doesn't seem to
be possible to edit the database to do this, as the changes get overwritten.
Thanks,
Stuart.
Hi,
I think we've figured out the cause of the issues reported above,
and they are particular to our installation.
All our hosts use an NFS-mounted root partition. The reasons for
using this approach are historical and were supposed to make it easier
to keep the hosts equally
I've been trying to reproduce the problem, that is, making OpenNebula
start a high number of collectd-client processes. The only way I was
able to do it was when the file /tmp/one-collectd-client.pid exists
and has the wrong permissions. Can you check the ownership and permissions
of that file?
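For example, something along these lines (expecting the file to be owned by the oneadmin user is my assumption; adjust to whatever user runs the monitor driver):
$ ls -l /tmp/one-collectd-client.pid
$ stat -c '%U %a' /tmp/one-collectd-client.pid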
On
Hello list,
Could anybody clarify how to separate live migration traffic onto a
dedicated interface?
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
Hi all,
I prefer to run VMs on top of LVM on the host, but I don't like cLVM, so
which TM driver should I use?
Hi Javier,
See my previous email. Another scenario is when
/tmp/one-collectd-client.pid does not exist due to issues with /tmp.
A change seems to have been made to put the pid file in /tmp instead
of /run or /var/run.
Regards,
Gerry
On 20/01/2014 17:44, Javier Fontan
Hi,
You can check out the single-lock shared LVM addon:
http://wiki.opennebula.org/shared_lvm
https://github.com/OpenNebula/addon-shared-lvm-single-lock
cheers,
Jaime
On Tue, Jan 21, 2014 at 6:32 AM, darkblue darkblue2...@gmail.com wrote:
Hi all,
I prefer to run VMs on top of LVM on the host,
Hi,
Apologies for the huge delay. Have you solved this issue? To debug this
kind of problem it would be best to run 'ip route' on the front-end, the
hypervisor, and the virtual machine. If you can send those outputs to the
mailing list, we will probably be able to help you out.
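For example, on each of the three machines (the destination address below is only a placeholder):
$ ip route
$ ip route get 192.168.1.1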
Cheers,
Jaime
On Sat, Dec