Hi Tobias,

As there is no standard "dns" suffix, I just added ".cloud" to the hostname. You can change it by doing the following:

- Edit /usr/lib/one/ruby/cloud/metadata/MetadataServer.rb
- Replace .cloud with your own domain:

    when 'local-hostname' # 2007-01-19
        @value = "i-#{vm.id}.cloud"
    when 'public-hostname'
        @value = "i-#{vm.id}.cloud"

If you use the econe-* commands, user data and the public key should work without any change. If you create your own template, just add the following to the context:

    CONTEXT = [
      (...)
      EC2_USER_DATA = <YOUR USER DATA>,
      EC2_PUBLIC_KEY = <YOUR PUBLIC KEY>,
      EC2_KEYNAME = <THE NAME OF THE KEY>
    ]

If you want to use your own variable names instead, just change them in /usr/lib/one/ruby/cloud/metadata/MetadataServer.rb. Search and replace the following paths with your own data (there is a sketch of how these lookups presumably work right after the template below):

    TEMPLATE/CONTEXT/EC2_KEYNAME
    TEMPLATE/CONTEXT/EC2_PUBLIC_KEY
    TEMPLATE/CONTEXT/EC2_USER_DATA

The metadata server was designed to work with instances launched by eucatools, hybridfox or the econe-* commands, and assumes that you are using the standard /etc/one/ec2query_templates/*.erb. It can easily be modified to be more generic, though.

Regards,
Ricardo Duarte

------------------------------------------------------------------------

For reference, this is the ec2 template I'm currently using:

    NAME = eco-vm-<%= erb_vm_info[:instance_type] %>
    <% if erb_vm_info[:instance_type] == "m1.small" %>
    CPU = 0.1
    MEMORY = 256
    <% elsif erb_vm_info[:instance_type] == "m1.medium" %>
    CPU = 0.2
    MEMORY = 1024
    <% elsif erb_vm_info[:instance_type] == "m1.large" %>
    CPU = 0.4
    MEMORY = 2048
    <% elsif erb_vm_info[:instance_type] == "m1.xlarge" %>
    CPU = 0.8
    MEMORY = 4096
    <% end %>
    OS = [ ARCH="x86_64", BOOT="hd" ]
    DISK = [ IMAGE_ID = <%= erb_vm_info[:img_id] %>, CACHE="none", TARGET="vd", DRIVER="qcow2" ]
    NIC = [ NETWORK_ID=0, MODEL="virtio" ]
    IMAGE_ID = <%= erb_vm_info[:ec2_img_id] %>
    INSTANCE_TYPE = <%= erb_vm_info[:instance_type] %>
    GRAPHICS = [ TYPE="vnc" ]
    FEATURES = [ ACPI="yes" ]
    RAW = [ DATA="<devices><serial type='pty'><target port='0'/></serial></devices>", TYPE="kvm" ]
    CONTEXT = [
      HOSTNAME = "i-$VMID",
      ETH0_DNS = "$NETWORK[DNS, NETWORK_ID=0]",
      ETH0_GATEWAY = "$NETWORK[GATEWAY, NETWORK_ID=0]",
      ETH0_IP = "$NIC[IP]",
      ETH0_MASK = "$NETWORK[MASK, NETWORK_ID=0]"
      <% if @password %>, PASSWORD = "<%= @password %>"<% end %>
      <% if erb_vm_info[:user_data] %>, EC2_USER_DATA = "<%= erb_vm_info[:user_data] %>"<% end %>
      <% if erb_vm_info[:public_key] %>, EC2_PUBLIC_KEY = "<%= erb_vm_info[:public_key] %>"<% end %>
      <% if erb_vm_info[:key_name] %>, EC2_KEYNAME = "<%= erb_vm_info[:key_name] %>"<% end %>
    ]
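A note on the TEMPLATE/CONTEXT/* paths above: this is not the actual MetadataServer.rb code, just a minimal sketch of how such lookups can be done through the OpenNebula Ruby OCA bindings, in case you want to check which values your own context variables resolve to. The credentials, endpoint and VM id are placeholders:

    require 'opennebula'
    include OpenNebula

    # Placeholders/assumptions: adjust credentials, endpoint and VM id
    client = Client.new('oneadmin:opennebula', 'http://localhost:2633/RPC2')
    vm = VirtualMachine.new(VirtualMachine.build_xml(76), client)
    rc = vm.info
    raise rc.message if OpenNebula.is_error?(rc)

    # XMLElement#[] takes an XPath relative to the VM root, so these are
    # exactly the strings you would search and replace in MetadataServer.rb
    user_data  = vm['TEMPLATE/CONTEXT/EC2_USER_DATA']
    public_key = vm['TEMPLATE/CONTEXT/EC2_PUBLIC_KEY']
    key_name   = vm['TEMPLATE/CONTEXT/EC2_KEYNAME']

    puts "user-data:  #{user_data}"
    puts "public-key: #{public_key}"
    puts "key-name:   #{key_name}"

If a lookup returns nil, the variable is simply missing from that VM's context, which would explain a "not available" answer from the server.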
------------------------------------------------------------------------

> Date: Wed, 23 Oct 2013 16:26:49 +0200
> From: tob...@tobru.ch
> To: rjt...@hotmail.com
> CC: users@lists.opennebula.org
> Subject: RE: [one-users] Ubuntu Cloud Images
>
> Hi,
>
> Ok, I got this working now! Thanks a lot for your ideas... I really
> appreciate the help.
>
> Now the metadata server returns the following values:
>
> ubuntu@ip-192-168-49-152:~$ ./ec2-metadata-mod --all
> ami-id: not available
> ami-launch-index: 76
> ami-manifest-path: none
> ancestor-ami-ids: not available
> block-device-mapping: not available
> instance-id: i-76
> instance-type: not available
> local-hostname: one-76.cloud
> local-ipv4: 192.168.49.152
> kernel-id: not available
> placement: not available
> product-codes: not available
> public-hostname: one-76.cloud
> public-ipv4: 192.168.49.152
> public-keys: not available
> ramdisk-id: not available
> reservation-id: r-76
> security-groups: not available
> user-data: not available
>
> Some questions regarding the values:
>
> > local-hostname: one-76.cloud
> > public-hostname: one-76.cloud
> How are these values created? Shouldn't the value correspond to the VM
> name I've chosen in OpenNebula?
>
> > public-keys:
> > not available
> There is a public key available in the template, so I'm wondering why
> it's not available.
>
> Will the metadata server find its way into the official OpenNebula
> distribution, or will it stay separate?
>
> Cheers,
> Tobias
>
> On 23.10.2013 10:58, Ricardo Duarte wrote:
> > Hi Tobias,
> >
> > I would say that the NAT method will not work if both the
> > metadata-server and the instances are on the same IP network.
> >
> > You can try the following:
> >
> > - On your router, add a new IP to the VLAN => 169.254.169.253/30
> > - On the metadata-server, add the IP 169.254.169.254/30 to the
> > interface
> > - Edit /etc/one/metadata.conf to listen on 169.254.169.254, port 80
> > - Make sure your router forwards packets from your instance network
> > to this network
> >
> > To make sure the metadata server is working fine before making these
> > changes, just change the IP address and port in the ec2-metadata
> > script to the real IP and port of the server, and check whether it
> > returns any values.
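Regarding the verification step quoted at the end (pointing the client at the server's real address first): a quick check in plain Ruby, assuming the server mimics the standard EC2 meta-data URL layout that the ec2-metadata script uses. The host and port are placeholders; take them from /etc/one/metadata.conf, or use 169.254.169.254 and 80 once the link-local setup is in place:

    require 'net/http'

    # Placeholder address: point this at the metadata server's real IP/port
    host, port = '192.168.49.1', 8888

    %w[instance-id local-hostname public-keys/].each do |key|
      res = Net::HTTP.get_response(host, "/latest/meta-data/#{key}", port)
      puts "#{key}: #{res.code} #{res.body.to_s.strip}"
    end

    # user-data lives one level up in the EC2 layout
    res = Net::HTTP.get_response(host, '/latest/user-data', port)
    puts "user-data: #{res.code} #{res.body.to_s.strip}"

A non-2xx code for a key means the server is reachable but has no value for it, which would show up as "not available" in the ec2-metadata output above.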