On 06/02/2015 15:15, Fiorenza Meini wrote:
On 06/02/2015 15:08, Lars Kellogg-Stedman wrote:
On Thu, Feb 05, 2015 at 09:10:43AM +0100, Fiorenza Meini wrote:
Thanks for your suggestion; my tenant_network_types configuration is gre.
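
For reference, in an ML2/OVS setup this usually lives in
/etc/neutron/plugins/ml2/ml2_conf.ini, plus a per-node local_ip for the
tunnel endpoint; the snippet below is only an illustrative sketch of
such a layout, not the actual files:

  [ml2]
  type_drivers = gre
  tenant_network_types = gre
  mechanism_drivers = openvswitch

  [ovs]
  # read by the openvswitch agent on each node: the local address
  # from which the GRE tunnels are built
  local_ip = <this node's tunnel IP>

  [agent]
  tunnel_types = gre
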
Neutron has several components:
neutron-dhcp-agent
neutron-l3-agent
neutron-metadata-agent
neutron-plugin-openvswitch-agent
neutron-server

On my second node I started only neutron-plugin-openvswitch-agent.
What is the virtual-network junction point between the two nodes?

I'm not sure I understand your question...

Your compute nodes connect to your controller(s) via GRE tunnels,
which are set up by the Neutron openvswitch agent.  In a typical
configuration, L3 routing happens on the network host, while the
compute hosts are strictly L2 environments.

The compute hosts only need the openvswitch agent, which in addition
to setting up the tunnels is also responsible for managing port
assignments on the OVS integration bridge, br-int.  All the other
logic happens on your controller(s), where all the other neutron
agents are running.
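
A quick sanity check that the tunnel mesh came up on the second node
(just an illustrative sketch; br-int/br-tun are the default bridge
names, the actual ports will differ on your setup):

  # on the compute node
  sudo ovs-vsctl list-ports br-tun   # should list a gre-... port per tunnel peer
  sudo ovs-vsctl list-ports br-int   # should list the qvo... ports of running VMs

  # on the controller
  neutron agent-list                 # the OVS agent on the compute node should show as alive
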

This is an old post of mine that talks about how things are connected
in a GRE (or VXLAN) environment:

   http://blog.oddbit.com/2013/11/14/quantum-in-too-much-detail/

This post doesn't cover the new hotness that is Neutron DVR or HA
routers, but it's still a good starting point.


Thanks,
I think the L2 layer is working: I can see this on my second node:

5 DEBUG neutronclient.client [-] RESP:200 {'date': 'Fri, 06 Feb 2015
13:28:11 GMT', 'content-length': '731', 'content-type':
'application/json; charset=UTF-8', 'x-openstack-request-id':
'req-ab82f7a9-b557-44b3-a474-26c8e37003c4'} {"ports": [{"status":
"ACTIVE", "binding:host_id": "elanor", "allowed_address_pairs": [],
"extra_dhcp_opts": [], "device_owner": "compute:None",
"binding:profile": {}, "fixed_ips": [{"subnet_id":
"42b697f2-2f68-4218-bb86-35337881fdd2", "ip_address": "10.10.0.39"}],
"id": "27e71fa9-5a48-40fc-b006-5593ec34e837", "security_groups":
["26ead0e4-84e1-4593-b612-f6ddc46f937b"], "device_id":
"23cbaabb-96df-45be-aefa-ec9c8d114cf0", "name": "", "admin_state_up":
true, "network_id": "489ec53f-56a8-4e5a-bcc3-f27c68de39f3", "tenant_id":
"9a26a147891a46249567d793a6e7c9b7", "binding:vif_details":
{"port_filter": true, "ovs_hybrid_plug": true}, "binding:vnic_type":
"normal", "binding:vif_type": "ovs", "mac_address": "fa:16:3e:5a:99:14"}]}

  http_log_resp
/usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:186
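
(For reference, the same binding can be checked from the CLI, e.g.

  neutron port-show 27e71fa9-5a48-40fc-b006-5593ec34e837

which should report status ACTIVE and binding:host_id elanor, as in the
response above.)
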

The problem is that the VM does not get its network interface configured
with the IP 10.10.0.39, and I can see in the VM's log that cloud-init
cannot contact http://169.254.169.254:80 (which I presume is the
metadata service).
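
(Checks along these lines can help narrow down where metadata breaks;
illustrative only, the namespace name depends on your router:

  # inside the VM, e.g. via the console: the metadata service should answer
  curl http://169.254.169.254/latest/meta-data/instance-id

  # on the network/controller node: requests to 169.254.169.254 are picked
  # up by the neutron-ns-metadata-proxy in the router namespace and handed
  # to neutron-metadata-agent, which then talks to the nova metadata API
  ip netns                           # should list a qrouter-<router-uuid> namespace
)
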

How can I tell nova-compute on the second node that the metadata service
is on another host? I am out of ideas.

Regards
Fiorenza Meini


I solved the problem: on the controller I set nova_metadata_ip to the
server's IP address instead of localhost.
nova_metadata_ip is the address the Neutron metadata agent uses to reach
the Nova metadata service.
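
For reference, the option lives in the metadata agent's configuration on
the controller; a minimal sketch (the IP and secret below are placeholders):

  # /etc/neutron/metadata_agent.ini
  [DEFAULT]
  # address where the nova metadata API is listening: the controller's
  # real IP here (per the fix above), not 127.0.0.1
  nova_metadata_ip = 192.0.2.10
  nova_metadata_port = 8775
  # must match the metadata proxy shared secret configured in nova.conf
  metadata_proxy_shared_secret = SECRET

  # then restart the agent, e.g. on Ubuntu:
  #   service neutron-metadata-agent restart
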

Regards
Fiorenza

--
Spazio Web S.r.l.
V. Dante, 10
13900 Biella
Tel.: +39 015 2431982
Fax.: +39 015 2522600
Registered with the Company Register at CCIAA Biella, Tax Code and VAT No.: 02414430021
REA registration: BI - 188936  Share capital: €30,000 fully paid up
