Hi,
Thanks for the explanation.
The Ubuntu Saucy Cloud image works out of the box without the metadata
server. Cloud-init 0.7.3 is compatible with the OpenNebula
contextualization system [1].
But I'm trying to get the metadata server working anyway. The problem
I'm facing is how I can make
Hi Tobias,
I would say that the NAT method will not work if both the metadata-server and
the instances are on the same IP network.
You can try to do the following:
- On your router, add a new IP to the VLAN = 169.254.169.253/30
- On the metadata-server, add the IP 169.254.169.254/30 to the
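The two steps above could look like this, assuming `eth0` is the interface facing that VLAN on each machine (the interface name is a hypothetical example):

```shell
# On the router: add the lower address of the link-local /30 pair
ip addr add 169.254.169.253/30 dev eth0

# On the metadata server: add the well-known metadata address
ip addr add 169.254.169.254/30 dev eth0
```

With both addresses on the same /30, instances can reach the metadata server at 169.254.169.254 without NAT.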
Which linux distribution are you using? Under Ubuntu you will have to install
additional packages or compile your own:
https://wiki.ubuntu.com/spice
cheers,
carlo
----- Original Message -----
From: M Fazli A Jalaluddin fazli.jalalud...@gmail.com
To: Carlos Martín Sánchez cmar...@opennebula.org
Hi again,
Alternatively, you can try to run the following on the metadata server:
# route add -net <your instance network>/24 gw <your router ip>
If that does not work, you can also try to disable ICMP redirects on your
router:
# echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
And
Hello,
I am using the online documentation for adding a VMFS datastore, but it doesn't
seem to show the correct capacity. I think I may be missing something.
/var/log/one/oned.log does not show any errors. Here are the datastores
templates:
oneadmin@OpenNebula:/root$ onedatastore show 0
oneadmin@OpenNebula:/root$ /var/lib/one/remotes/datastore/vmfs/monitor
Hello guys:
I am trying to attach a disk to a running VM on a CentOS 6.4 node. The
guest OS is Red Hat 5.7, libvirt version is 0.9.4, and QEMU version is
0.12.1. The attach operation fails, and the log is as follows:
Wed Oct 23 17:22:05 2013 [VMM][E]: attach_disk: Command virsh --connect
qemu:///system
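For comparison, a manual hot-attach with virsh looks roughly like this (the domain name, image path and target device below are hypothetical examples, not taken from the log):

```shell
# Hot-attach a raw image as virtio disk vdb to the running domain "one-42"
virsh --connect qemu:///system attach-disk one-42 \
    /var/lib/one/datastores/0/42/disk.1 vdb \
    --driver qemu --subdriver raw
```

Running the equivalent command by hand often surfaces the underlying libvirt error more clearly than the oned log does.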
Hi Mark,
In the jenkins template you have ARCH=x86_64 and in the second OS = [
ARCH=x86_64 ]. The latter is the correct way of setting the architecture
for the VM. See the OS and Boot Options Section [1] from the docs.
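In template syntax that is a vector attribute, i.e. a minimal fragment like:

```
OS = [
  ARCH = "x86_64"
]
```

Per the advice above, only the value inside the `OS = [ ... ]` vector sets the VM architecture; a bare top-level `ARCH = x86_64` line does not.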
On Wed, Oct 23, 2013 at 12:29:27PM +0200, Mark Kusch wrote:
cat
Hi,
I've added ARCH = x86_64 in the template additionally for testing
purposes.
I've experienced the same problem with another pre-made persistent
VM with nearly identical settings.
I'll do some more digging on this, but it feels like
I'm stuck.
Any further input greatly
Got it.
Thanks Valentin, your description helped me identify my issue here.
# kraM
On 15:04 Wed 23 Oct, Mark Kusch wrote:
Hi,
I've added ARCH = x86_64 in the template additionally for testing
purposes.
I've experienced the same problem with another pre-made persistent
VM with nearly
Hi,
Is the new user configured to use the 'core' auth driver?
What are the contents of /path/to/home/dir/.one/one_auth? It should be
username:password
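Setting it up from scratch for the new user looks like this (the username and password are hypothetical examples):

```shell
# one_auth holds a single line: username:password
mkdir -p "$HOME/.one"
printf 'newuser:secret\n' > "$HOME/.one/one_auth"
cat "$HOME/.one/one_auth"
```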
Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
Hi Carlos,
already fixed. PEBKAC.
I've had a typo in the environment variable, called it
- ONE_XMLPRC instead of
- ONE_XMLRPC .
Stupid (and sometimes stressed) me ;)
# kraM
On 15:52 Wed 23 Oct, Carlos Martín Sánchez wrote:
Hi,
Is the new user configured to use the 'core' auth driver?
Hi Tobias,
As there is no standard dns suffix, I just added a .cloud to the hostname.
You can change it by doing the following:
- Edit /usr/lib/one/ruby/cloud/metadata/MetadataServer.rb
- Replace .cloud with your domain in:
  when 'local-hostname' # 2007-01-19
      @value =
Hi,
I think the problem here is due to the BASE_PATH referenced in the DS
templates. Where it reads
BASE PATH : /var/lib/one/datastores/0
it should read
BASE PATH : /vmfs/volumes/datastores/0
In order to achieve this, you will need to create another two
datastores (say 100, 101),
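A datastore definition along those lines might look like this (the name and driver values are illustrative assumptions; adjust them to your deployment):

```
NAME      = "vmfs_system"
TYPE      = "SYSTEM_DS"
TM_MAD    = "vmfs"
BASE_PATH = "/vmfs/volumes"
```

Saved to a file, it can be registered with `onedatastore create <file>`.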
Hello,
I have tried the second idea, with the cluster, and it still does not change
the path; it is still /var/lib/one/datastores.
onecluster show datastore-test
CLUSTER 100 INFORMATION
ID : 100
NAME : datastore-test
SYSTEM DS : 112
CLUSTER TEMPLATE
Hello all,
I'm trying to integrate OpenNebula in an API project (mine) where I
don't have a user DB and I have to use OpenNebula to authorize users
like a SSO service.
In other situations, I found an API where I send a token and the
authentication server answer me with a 200 code if the token
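OpenNebula can play that role through its XML-RPC API: the session parameter of every call is simply `username:password`, so forwarding the user's credentials to a cheap read-only call and checking the success flag gives the 200-style behaviour described above. A minimal sketch (the endpoint URL and credentials are hypothetical):

```python
import xmlrpc.client


def make_session(username: str, password: str) -> str:
    """Build the OpenNebula XML-RPC session string (username:password)."""
    return f"{username}:{password}"


def credentials_valid(endpoint: str, username: str, password: str) -> bool:
    """Ask OpenNebula whether it accepts the credentials.

    one.user.info with id -1 returns the calling user's own record, so it
    succeeds for any valid user regardless of privileges. The first element
    of the reply is the success flag.
    """
    server = xmlrpc.client.ServerProxy(endpoint)
    reply = server.one.user.info(make_session(username, password), -1)
    return bool(reply[0])
```

In an API layer, `credentials_valid("http://frontend:2633/RPC2", user, password)` maps naturally to a 200 or 401 response.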
Hi,
I'm just wondering what tool other users of OpenNebula are using for
billing of customer VMs? I know there is the oneacct tool which records
many meterings of a VM, but I'm wondering what tool can be used to
actually create and manage bills for VM usage (thinking of a public
cloud).