On Mar 29, 2014, at 2:02 PM, Gary Duan <garyd...@gmail.com> wrote:
I guess you need to bind the port you just created.

PCM: Can you elaborate on what is needed for the binding? In the call to create the port in Neutron, the script passes in an additional dict item for the binding (in bold):

p_spec = {'port': {'admin_state_up': True,
                   'name': port_name,
                   'network_id': nw_id,
                   'mac_address': mac_addr,
                   'binding:host_id': hostname,
                   'device_id': vm_uuid,
                   'device_owner': 'compute:None'}}

Is there more action that is needed?

Also, the port needs to be plugged into the VM, right? I don't see that in the code. Maybe you are doing it outside of OpenStack.

PCM: Not sure I understand how this all hooks up, as I'm still trying to figure out these scripts I acquired. Essentially, the VM is started by KVM and scripts are associated with each of the interfaces. The scripts handle I/F up and make calls to Neutron (or Neutron/Nova) to hook into Neutron. I guess it is hooked to the VM by the I/F up handling?

With the original setup, there were three scripts to hook up the three interfaces in the VM...

One uses br-ex and all Neutron calls. It connects the interface, I can ping on it, and it shows up with an IP in the Neutron port list (though port-show says it is DOWN). The port is used by the VM to access the “public” network that devstack creates (e.g. 172.24.4.13, with the Neutron router at 172.24.4.11).

Another uses br-int and all Neutron calls too, for a management port of the VM. It too pings fine, and is used by a Neutron agent to send REST messages to the VM (e.g. 192.168.200.2). We can also telnet to that I/F from the host.

The third uses br-int and originally used Neutron to create the port and Nova to plug the VIF. It would connect to the “private” network that devstack had created and give the VM access to that private network. Neutron's port-show would indicate the port is ACTIVE, it had an IP (10.1.0.6), and pinging worked fine.

It looks like, as of two weeks ago, Nova changed to use an object for the VIF instead of a dict, and this code no longer works for the third script. I *thought* maybe I could use the same “all Neutron” code for this interface too. I took the original script and replaced the plugging part with the same logic that the br-ex script uses for plugging the VIF. It doesn't show an error, but the interface is reporting as DOWN and (obviously) pings are not working.

I'm not sure if I'm missing some step in this third script or if I have to use Nova for this interface (not sure why the original scripts use Nova for this one). Anyone have any thoughts on why Nova may be needed in this case, or what I'm missing?
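For what it's worth, here is a minimal sketch of how one could inspect what Neutron actually recorded for the binding right after create_port, using the same python-neutronclient instance the scripts below create. The binding:* values in the comments are just examples of what port-show can report, not something the scripts themselves check:

# Assumes 'qc', 'port' and 'hostname' as in the scripts below
# (python-neutronclient '2.0' API, admin credentials).
port_id = port['port']['id']

# Re-read the port to see what Neutron recorded for the binding.
bound = qc.show_port(port_id)['port']
print 'status          :', bound['status']                # ACTIVE vs. DOWN
print 'binding:host_id :', bound.get('binding:host_id')
print 'binding:vif_type:', bound.get('binding:vif_type')  # e.g. 'ovs' or 'binding_failed'
print 'vif_details     :', bound.get('binding:vif_details')

# If binding:host_id was not passed to create_port, it can be set afterwards:
# qc.update_port(port_id, {'port': {'binding:host_id': hostname}})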
Here is the full original script for the br-ex interface (works); the management interface script is the same, with br-int instead of br-ex set as the br_name:

#!/usr/bin/python
import sys

from oslo.config import cfg

import neutron.openstack.common.gettextutils as gtutil
gtutil.install('')

import neutron.agent.linux.interface as vif_driver
from neutronclient.neutron import client as qclient
import neutronclient.common.exceptions as qcexp
from neutron.agent.common import config

# Arg 1: controller host
# Arg 2: name of admin user
# Arg 3: admin user password
# Arg 4: tenant name
# Arg 5: uuid of VM
# Arg 6: MAC address of tap interface
# Arg 7: name of tap interface
host = sys.argv[1]
user = sys.argv[2]
pw = sys.argv[3]
tenant = sys.argv[4]
vm_uuid = sys.argv[5]
mac_addr = sys.argv[6]
interface = sys.argv[7]

KEYSTONE_URL = 'http://' + host + ':5000/v2.0'

qc = qclient.Client('2.0', auth_url=KEYSTONE_URL, username=user,
                    tenant_name=tenant, password=pw)

prefix, net_name = interface.split('__')
port_name = net_name + '_p'

try:
    nw_id = qc.list_networks(name=net_name)['networks'][0]['id']
except qcexp.NeutronClientException as e:
    print >> sys.stderr, e
    print >> sys.stderr, 'Number of arguments:', len(sys.argv), 'arguments.'
    print >> sys.stderr, 'Argument List:', str(sys.argv)
    exit(1)

p_spec = {'port': {'admin_state_up': True,
                   'name': port_name,
                   'network_id': nw_id,
                   'mac_address': mac_addr,
                   'device_id': vm_uuid,
                   'device_owner': 'compute:None'}}

try:
    port = qc.create_port(p_spec)
except qcexp.NeutronClientException as e:
    print >> sys.stderr, e
    exit(1)

port_id = port['port']['id']

br_name = 'br-ex'
conf = cfg.CONF
config.register_root_helper(conf)
conf.register_opts(vif_driver.OPTS)

driver = vif_driver.OVSInterfaceDriver(cfg.CONF)
driver.plug(nw_id, port_id, interface, mac_addr, br_name)

print br_name, port_name, port_id, net_name, nw_id
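(For reference, the script takes positional arguments in the order of the Arg comments above. An example invocation, with made-up values and a placeholder script name; note the tap interface name needs the '__' separator, since the script splits on it to get the network name:

    plug_br_ex.py 10.0.0.10 admin secretpw demo <vm-uuid> fa:16:3e:12:34:56 tap0__public
)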
Here is the full original script for the br-int interface (no longer works):

#!/usr/bin/python
import socket
import sys

import nova.openstack.common.gettextutils as gtutil
gtutil.install('')

import nova.virt.libvirt.vif as vif_driver
from nova.network import linux_net
from nova.network import model as network_model
from neutronclient.neutron import client as qclient
import neutronclient.common.exceptions as qcexp

# LOG = logging.getLogger(__name__)

# Arg 1: controller host
# Arg 2: name of admin user
# Arg 3: admin user password
# Arg 4: tenant name
# Arg 5: uuid of VM
# Arg 6: MAC address of tap interface
# Arg 7: hostname
# Arg 8: name of tap interface
host = sys.argv[1]
user = sys.argv[2]
pw = sys.argv[3]
tenant = sys.argv[4]
vm_uuid = sys.argv[5]
mac_addr = sys.argv[6]
hostname = sys.argv[7]
interface = sys.argv[8]

KEYSTONE_URL = 'http://' + host + ':5000/v2.0'

qc = qclient.Client('2.0', auth_url=KEYSTONE_URL, username=user,
                    tenant_name=tenant, password=pw)

prefix, net_name = interface.split('__')
port_name = net_name + '_p'

try:
    nw_id = qc.list_networks(name=net_name)['networks'][0]['id']
except qcexp.NeutronClientException as e:
    print >> sys.stderr, e
    print >> sys.stderr, 'Number of arguments:', len(sys.argv), 'arguments.'
    print >> sys.stderr, 'Argument List:', str(sys.argv)
    exit(1)

p_spec = {'port': {'admin_state_up': True,
                   'name': port_name,
                   'network_id': nw_id,
                   'mac_address': mac_addr,
                   'binding:host_id': hostname,
                   'device_id': vm_uuid,
                   'device_owner': 'compute:None'}}

try:
    port = qc.create_port(p_spec)
except qcexp.NeutronClientException as e:
    print >> sys.stderr, e
    exit(1)

port_id = port['port']['id']

instance = {'uuid': vm_uuid}
network = {'bridge': 'br-int'}
vif = {'id': port_id,
       'address': mac_addr,
       'network': network,
       'type': network_model.VIF_TYPE_OVS}

# For OVS
# driver = vif_driver.LibvirtHybridOVSBridgeDriver({})
# For ML2 plugin
driver = vif_driver.LibvirtGenericVIFDriver({})
driver.plug(instance, vif)
br_name = driver.get_br_name(port_id)

print br_name, port_name, port_id, net_name, nw_id

The bold text (the plugging section at the end) was changed to set br_name to 'br-int' and to use the same calls as in the br-ex example.
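In case anyone does go back down the Nova path on the newer tree, here is a rough sketch of what the dict-to-object change for the plugging section above might look like. The constructor fields are assumptions based on the dict the original script builds and on nova.network.model as I understand it; I have not verified this against the latest Nova, and the bare-dict instance may need changing as well:

# Rough sketch only -- fields taken from the old dict, not verified against
# the current Nova tree.  nw_id, port_id, mac_addr, interface, net_name and
# vm_uuid are set earlier in the script, as above.
from nova.network import model as network_model
import nova.virt.libvirt.vif as vif_driver

network = network_model.Network(id=nw_id, bridge='br-int', label=net_name)
vif = network_model.VIF(id=port_id,
                        address=mac_addr,
                        network=network,
                        type=network_model.VIF_TYPE_OVS,
                        devname=interface,
                        ovs_interfaceid=port_id)

# The instance argument may also need to be richer than a bare dict now.
instance = {'uuid': vm_uuid}

driver = vif_driver.LibvirtGenericVIFDriver({})
driver.plug(instance, vif)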
Thanks!

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

Thanks,
Gary

On Fri, Mar 28, 2014 at 3:15 PM, Paul Michali (pcm) <p...@cisco.com> wrote:

Hi,

I have a VM that I start up outside of OpenStack (as a short-term solution, until we get it working inside a Nova VM), using KVM. It has scripts associated with the three interfaces that are created, to hook this VM into Neutron. One I/F is on br-ex (connected to the “public" network for DevStack), another is on br-int (connected to a management network that is created), and a third is connected to br-int (connected to the “private” network for DevStack). It's understood these are hacks to get things going and can be brittle.

With DevStack, I have a vanilla localrc, so I'm using ML2 without any ML2 settings specified.

Now, the first two scripts use internal Neutron client calls to create the port and then plug the VIF. The third uses Neutron to create the port, and then Nova to plug the VIF. I don't know why; I inherited the scripts.

On one system, where Nova is based on commit b3e2e05 (10 days ago), this all works just peachy. Interfaces are hooked in and I can ping to my heart's content.

On another system, which I just reimaged today using the latest and greatest OpenStack projects, the third script fails. I talked to Nova folks, and the vif is now an object instead of a plain dict, and therefore calls on the object fail (as the script just provides a dict). I started trying to convert the vif to an object, but in discussing with a co-worker, we thought that we could use Neutron calls for all of the setup of this third interface too.

Well, I tried, and the port is created, but unlike on the other system, the port is DOWN and I cannot ping to or from it (the other ports still work fine with this newer OpenStack repo). One difference is that the port is showing {"port_filter": true, "ovs_hybrid_plug": true} for binding:vif_details in the neutron port-show output. On the older system this is empty (so this must be due to new changes in Neutron?).

Here is the Neutron based code (trimmed) to do the create and plugging:

import neutron.agent.linux.interface as vif_driver
from neutronclient.neutron import client as qclient

qc = qclient.Client('2.0', auth_url=KEYSTONE_URL, username=user,
                    tenant_name=tenant, password=pw)

prefix, net_name = interface.split('__')
port_name = net_name + '_p'

try:
    nw_id = qc.list_networks(name=net_name)['networks'][0]['id']
except qcexp.NeutronClientException as e:
    …

p_spec = {'port': {'admin_state_up': True,
                   'name': port_name,
                   'network_id': nw_id,
                   'mac_address': mac_addr,
                   'binding:host_id': hostname,
                   'device_id': vm_uuid,
                   'device_owner': 'compute:None'}}

try:
    port = qc.create_port(p_spec)
except qcexp.NeutronClientException as e:
    ...

port_id = port['port']['id']

br_name = 'br-int'
conf = cfg.CONF
config.register_root_helper(conf)
conf.register_opts(vif_driver.OPTS)

driver = vif_driver.OVSInterfaceDriver(cfg.CONF)
driver.plug(nw_id, port_id, interface, mac_addr, br_name)

Finally, here are the questions (hope you stuck with the long message)...

- Any idea why the Neutron version is not working? I know there were a bunch of recent changes.
- Is there a way for me to turn off the ovs_hybrid_plug and port_filter flags? Should I?
- Should I go back to using Nova and build a VIF object? If so, any reason why the Neutron version would not work?
- Is there a way to do a similar thing, but via the Northbound APIs (so it isn't as brittle)? See the sketch just below for what I understand the plug step to involve.
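To make the "plug" part of those last questions concrete: as far as I can tell, the interface drivers essentially add the tap device to the OVS bridge and set external-ids that the OVS agent matches on to wire the port and mark it ACTIVE. A rough sketch of doing that directly, with the external-ids keys (iface-id, iface-status, attached-mac, vm-uuid) being my understanding rather than anything verified:

# Sketch of the OVS plug step issued directly, instead of going through
# neutron/nova internals.  The external-ids keys are assumptions about
# what the OVS agent and the drivers use.
import subprocess

def plug_ovs_port(bridge, dev, port_id, mac_addr, vm_uuid):
    # Add the tap device to the bridge and tag it with the Neutron port id.
    subprocess.check_call(
        ['sudo', 'ovs-vsctl', '--', '--if-exists', 'del-port', dev,
         '--', 'add-port', bridge, dev,
         '--', 'set', 'Interface', dev,
         'external-ids:iface-id=%s' % port_id,
         'external-ids:iface-status=active',
         'external-ids:attached-mac=%s' % mac_addr,
         'external-ids:vm-uuid=%s' % vm_uuid])

plug_ovs_port('br-int', interface, port_id, mac_addr, vm_uuid)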
Thanks in advance!

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev