Hi,
It seems that it is complaining about not having sched-cred, which is used
to implement the capacity distribution set in OpenNebula (a VM with CPU=0.5
will be assigned a proportional amount of credits, so it gets half the
credits of a VM with CPU=1.0)
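The proportionality can be sketched with a toy calculation (the 300-credit pool and the VM names are made up for illustration, not real OpenNebula or Xen values):

```ruby
# Toy sketch: split an assumed credit pool proportionally to each
# VM's CPU attribute, as described above. Pool size and VM names
# are illustrative only.
vms  = { "vm_half" => 0.5, "vm_full" => 1.0 }
pool = 300.0

total_cpu = vms.values.sum
shares = vms.transform_values { |cpu| (cpu / total_cpu * pool).round }
puts shares  # vm_half gets half the credits of vm_full
```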
Just edit
Hi,
Yes that's much better thanks ;)
Cheers
Ruben
On Wed, May 4, 2011 at 12:09 PM, Prakhar Srivastava
prakhar@gmail.com wrote:
Hi,
I was facing a similar issue. It seems that Xen 4.1 doesn't understand
sched-cred, but it does understand *sched-credit*. To resolve the issue, I
changed
Hi Antoni,
Which version of ESX are you using? When developing the drivers, I
found that the driver line didn't impede the deployment of VMs. If
this is a problem with other ESX versions, I will make it an optional
parameter with no hardcoded value to avoid this kind of situation.
Best regards,
Hi Sebastien,
As these kinds of recipes and how-tos can be difficult to locate in the list
archives, we've copied your recipe into the community wiki [1].
I'd like to remind all of you that you are welcome to improve our (your)
community wiki, just request an account from [2].
Thank you for your
Hi Koushik,
Are you using NFS? If so, it is probably a permissions issue. Could
you check that oneadmin on the nodes can write to the NFS-mounted
export? Even if oneadmin can write on the front-end, it may have a
different id on the nodes.
The configuration has to be such that it allows a
$ virsh
Hello
I'm not using ESX. I'm using VMware Server 2.0.2.
And now, how can I solve my problem? Do I have to change
LibVirtDriverVMware.cc and recompile and reinstall my OpenNebula?
Thanks in advance
Antoni Artigues
On Wed, 04-05-2011 at 12:52 +0200, Tino Vazquez wrote:
Hi Antoni,
The information pushed to ganglia in this variable is then read by
ganglia drivers in opennebula to extract the information about the
VMs.
On Wed, Apr 27, 2011 at 8:32 PM, Craig Dawson craig.daw...@sas.com wrote:
Thanks Javier,
Thanks for the explanation. I was under the impression that the
Hello Tino,
I am using NFS. oneadmin on the nodes is able to write to the NFS-exported
directories.
I tried saving the vm using virsh save command on the node.
I got the following error
error: Failed to save domain one-42 to saved
error: unable to set user and group to '0:0' on
The econe tools use the Ruby URI library, which takes the host part of
the string and ignores the path.
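A minimal illustration of that behaviour (the URL is just an example):

```ruby
require 'uri'

# The econe tools parse EC2_URL with Ruby's URI library; only the
# scheme, host and port end up being used -- the path is discarded.
url = URI.parse("http://cloud.example.com:4567/some/path")
puts url.host  # "cloud.example.com"
puts url.port  # 4567
puts url.path  # "/some/path" -- parsed, but ignored by the client
```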
It is probably a misnomer to call this a URL, then. Perhaps the
documentation should be clear that it is really just a hostname.
running the OCCI server and the EC2 server in different
Hi,
Yes, this is a permissions issue. Please see if this [1] provides any
hints to solve the issue.
Regards,
-Tino
[1] http://osdir.com/ml/libvir-list/2011-04/msg01220.html
--
Constantino Vázquez Blanco, MSc
OpenNebula Major Contributor
www.OpenNebula.org | @tinova79
On Wed, May 4, 2011 at
Hi Antoni,
This feature was introduced by ticket [1]. Unfortunately, it has
not been tested against VMware server. I've opened another ticket [2]
to avoid a mandatory default value for the disk driver.
Meanwhile, you can change LibVirtDriverVMware.cc from
--
if ( !driver.empty() )
Excellent, thanks for the explanation Javier!
Craig
-Original Message-
From: Javier Fontan [mailto:jfon...@gmail.com]
Sent: Wednesday, May 04, 2011 8:30 AM
To: Craig Dawson
Cc: users@lists.opennebula.org
Subject: Re: [one-users] Ganglia reports
The information pushed to ganglia in this
Hello again!
Thanks for the help!
Some more questions [for more details of what I have done previously see my
previous e-mail below this post]:
1. Why isn't this working?
oneadmin@ubuntu:/$ oneuser create helen mypass
This prints out in the terminal:
/usr/lib/one/ruby/OpenNebula.rb:93:in
Hi,
I noticed only now that I've exhausted the available CPU resources of my
OpenNebula hosts:
ID NAME     CLUSTER  RVM  TCPU  FCPU  ACPU   TMEM   FMEM  STAT
 2 nebula01 default    2   400   369     0  11.8G  10.7G    on
 3 nebula02 default    4   800   792
That's all explained here:
http://opennebula.org/documentation:rel2.2:cg#start_stop_opennebula
--
Carlos Martín, MSc
Project Major Contributor
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org http://www.opennebula.org/ | cmar...@opennebula.org
2011/5/4 mahirudin
Hey again,
Update:
The first issue was solved by closing the terminal and then logging
in as oneadmin again.
However, this doesn't work for me:
Users can be easily added to the system like this:
$ oneuser create helen mypass
When I execute that command the user is added in the user
Anders,
You need the ONE_AUTH file in place so that the current user can
authenticate, and hence have permission to create the new ONE user.
The ONE_AUTH file needs to be created by you manually; it will not be
created by ONE.
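A sketch of creating that file by hand (the path is the usual default and the credentials are placeholders; treat both as assumptions for your install):

```ruby
require 'fileutils'

# Hedged sketch: the CLI reads credentials from the ONE_AUTH file,
# which holds "username:password" on a single line. ONE will not
# create this file; the default path is $HOME/.one/one_auth unless
# the $ONE_AUTH environment variable points elsewhere.
auth_dir  = File.join(Dir.home, ".one")
auth_file = File.join(auth_dir, "one_auth")

FileUtils.mkdir_p(auth_dir)
File.write(auth_file, "oneadmin:mypass\n")  # placeholder credentials
File.chmod(0600, auth_file)                 # keep the password private
```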
On Wed, May 4, 2011 at 10:46 AM, Anders Branderud
Hi,
You can guide the overcommitment by using the CPU attribute of the template.
For example, if you want to put 16 VMs on nebula02, which has 8 cores, just define
the VMs with
CPU = 0.5
If you need those VMs to have 2 virtual cores use:
CPU=0.5
VCPU=2
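Putting that together, a template sketch might look like this (NAME and MEMORY are illustrative); with CPU = 0.5, sixteen such VMs fill the 8 cores of nebula02:

```
NAME   = overcommit-vm
MEMORY = 512
CPU    = 0.5   # scheduling share of a physical core
VCPU   = 2     # cores the guest sees
```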
Cheers
Ruben
On Wed, May 4, 2011 at 4:32 PM,
Maybe the problem is related to the previous point. The signature that
authenticates the user is generated using the EC2_URL, and maybe the
server is ignoring the path section. Would you mind trying to start
the server without the path?
I've been able to get the EC2 service to work using
There was a bug where the CPU template parameter was not working
for VMware, so VCPU had to be used instead. Thus, I have no way to
manage the overcommitment of VMs on a host.
Is this issue solved in the current version? (I have noticed that memory
issues are supposed to be
Would you mind trying to specify:
SSL_SERVER=arc-vm-opennebula.int.seas.harvard.edu:80
instead of
SSL_SERVER=arc-vm-opennebula.int.seas.harvard.edu
Kind regards.
On 4 May 2011 19:14, Lars Kellogg-Stedman l...@seas.harvard.edu wrote:
Maybe the problem is related to the previous point. The
Would you mind trying to specify:
SSL_SERVER=arc-vm-opennebula.int.seas.harvard.edu:80
instead of
SSL_SERVER=arc-vm-opennebula.int.seas.harvard.edu
Both euca-describe-images and econe-describe-images fail with this change.
--
Lars Kellogg-Stedman l...@seas.harvard.edu
Senior Technologist
OK, I think the problem was that the EC2_SECRET_KEY for the euca tools is
the SHA1 of the password, while for the econe client it is the plain password.
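A quick sketch of the difference (the password is a placeholder; the SHA1 expectation for the euca tools is as described above):

```ruby
require 'digest/sha1'

# euca tools side: EC2_SECRET_KEY is the SHA1 hex digest of the
# OpenNebula password; the econe-* client takes the plain password.
plain_password = "mypass"  # placeholder
euca_secret = Digest::SHA1.hexdigest(plain_password)
puts euca_secret  # 40 hex characters
```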
On 4 May 2011 19:41, Lars Kellogg-Stedman l...@seas.harvard.edu wrote:
Would you mind trying to specify:
SSL_SERVER=arc-vm-opennebula.int.seas.harvard.edu:80
OK, I think the problem was that the EC2_SECRET_KEY for the euca tools is
the SHA1 of the password, while for the econe client it is the plain password.
Ah, that did it.
The fact that econe-* expects different values from the euca tools and
Elasticfox is somewhat confusing. Do you think it would make sense
for