Ok,
I know the issue – the problem is that the ports in OVS are not being 
configured with the VLAN tag.
The reason for this is that the plugin does not have an agent that configures 
them. You can patch the DHCP agent with the following code:

In neutron/agent/linux/dhcp.py:

    def setup(self, network):
        """Create and initialize a device for network's DHCP on this host."""
        port = self.setup_dhcp_port(network)
        self._update_dhcp_port(network, port)
        interface_name = self.get_interface_name(network, port)

        if ip_lib.ensure_device_is_ready(interface_name,
                                         namespace=network.namespace):
            LOG.debug('Reusing existing device: %s.', interface_name)
        else:
            try:
                if (cfg.CONF.core_plugin and
                    cfg.CONF.core_plugin.endswith('NsxDvsPlugin')):
                    mac_address = port.mac_address
                    self.driver.plug(network.id,
                                     port.id,
                                     interface_name,
                                     mac_address,
                                     namespace=network.namespace,
                                     mtu=network.get('mtu'),
                                     bridge=self.conf.dvs_integration_bridge)
                    vlan_tag = getattr(network, 'provider:segmentation_id',
                                       None)
                    # Tag the port for VLAN networks; a segmentation id of
                    # None (e.g. a flat network) means leave the port untagged
                    if vlan_tag:
                        br_dvs = ovs_lib.OVSBridge(
                            self.conf.dvs_integration_bridge)
                        # When ovs_use_veth is set to True, the DEV_NAME_PREFIX
                        # will be changed from 'tap' to 'ns-' in
                        # OVSInterfaceDriver
                        dvs_port_name = interface_name.replace('ns-', 'tap')
                        br_dvs.set_db_attribute(
                            "Port", dvs_port_name, "tag", vlan_tag)
                else:
                    self.driver.plug(network.id,
                                     port.id,
                                     interface_name,
                                     port.mac_address,
                                     namespace=network.namespace,
                                     mtu=network.get('mtu'))


    def destroy(self, network, device_name):
        """Destroy the device used for the network's DHCP on this host."""
        if device_name:
            if (cfg.CONF.core_plugin and
                cfg.CONF.core_plugin.endswith('NsxDvsPlugin')):
                self.driver.unplug(
                    device_name, bridge=self.conf.dvs_integration_bridge,
                    namespace=network.namespace)
            else:
                self.driver.unplug(device_name, namespace=network.namespace)
        else:
            LOG.debug('No interface exists for network %s', network.id)

        self.plugin.release_dhcp_port(network.id,
                                      self.get_device_id(network))
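
For clarity, here is a minimal standalone sketch (a hypothetical helper, not 
part of Neutron) of the two details the patch above depends on: when 
ovs_use_veth is True the OVSInterfaceDriver names the namespace side of the 
veth pair 'ns-<id>' while the OVS port keeps the 'tap<id>' prefix, and a 
segmentation id of None or 0 means the port should stay untagged:

```python
def dvs_port_and_tag(interface_name, segmentation_id):
    """Return (ovs_port_name, tag) for a DHCP interface.

    interface_name: name of the device inside the DHCP namespace
                    ('ns-<id>' with ovs_use_veth=True, else 'tap<id>').
    segmentation_id: the network's provider:segmentation_id, or None.
    """
    # The OVS side of the veth pair always uses the 'tap' prefix.
    dvs_port_name = interface_name.replace('ns-', 'tap')
    # None (flat network) or 0 both mean "do not tag the port".
    tag = segmentation_id if segmentation_id else None
    return dvs_port_name, tag

print(dvs_port_and_tag('ns-707eb11b-4b', 110))
print(dvs_port_and_tag('tap91d8accd-6d', None))
```

The port/VLAN names above are taken from the outputs quoted later in this 
thread; adjust for your environment.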

We still need to figure out how to upstream this code. The issue is that the 
DHCP agent is normally configured by the OVS agent, which is not needed here.
Thanks
Gary

From: Vaidyanath Manogaran <vaidyanat...@gmail.com>
Date: Thursday, July 28, 2016 at 8:33 PM
To: Gary Kotton <gkot...@vmware.com>
Cc: Scott Lowe <scott.l...@scottlowe.org>, "openstack@lists.openstack.org" 
<openstack@lists.openstack.org>, "commun...@lists.openstack.org" 
<commun...@lists.openstack.org>
Subject: Re: [Openstack] vm unable to get ip neutron with vmware nsx plugin

The DHCP agent is part of the controller node.
The agent is connected to the DVS. What I mean is, when I create a network in 
neutron the portgroup is getting created successfully.
I just need to make sure how my MAC is getting assigned.

Also, I see that the VLAN tag ID is not getting mapped to the tap device in OVS.
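
As a quick manual check/workaround (not a fix), the tag can be applied by hand 
with ovs-vsctl. A small sketch that only builds the command, using the port 
name and VLAN ID from the outputs in this thread (adjust for your setup):

```python
def build_tag_command(port_name, vlan_tag):
    """Build the ovs-vsctl invocation that tags an OVS port with a VLAN id."""
    return ["ovs-vsctl", "set", "Port", port_name, "tag=%d" % vlan_tag]

# Port name from the ovs-vsctl output below, VLAN 110 from the net-show output.
print(" ".join(build_tag_command("tap707eb11b-4b", 110)))
# ovs-vsctl set Port tap707eb11b-4b tag=110
```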

root@controller:~# neutron agent-list
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
| id                                   | agent_type     | host       | alive | admin_state_up | binary                 |
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
| 5555dbd8-14d0-4a47-83bd-890737bcfe08 | DHCP agent     | controller | :-)   | True           | neutron-dhcp-agent     |
| f183a3b6-b065-4b90-b5b7-b3d819c30f5b | Metadata agent | controller | :-)   | True           | neutron-metadata-agent |
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
root@controller:~# vi /etc/neutron/neutron.conf
root@controller:~# ovs-vsctl show
d516b5b1-db3f-4acd-856c-10d530c58c23
    Bridge br-dvs
        Port "eth1"
            Interface "eth1"
        Port br-dvs
            Interface br-dvs
                type: internal
        Port "tap707eb11b-4b"
            Interface "tap707eb11b-4b"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.5.0"
root@controller:~#






On Thu, Jul 28, 2016 at 10:57 PM, Gary Kotton <gkot...@vmware.com> wrote:
Ok, thanks.
Where is the DHCP agent running?
You need to make sure that the agent is connected to the DVS that you are using 
in Nova. In addition, you need to make sure that it can use MACs that are 
allocated by OpenStack.


From: Vaidyanath Manogaran <vaidyanat...@gmail.com>
Date: Thursday, July 28, 2016 at 8:25 PM
To: Gary Kotton <gkot...@vmware.com>
Cc: Scott Lowe <scott.l...@scottlowe.org>, "openstack@lists.openstack.org" 
<openstack@lists.openstack.org>, "commun...@lists.openstack.org" 
<commun...@lists.openstack.org>

Subject: Re: [Openstack] vm unable to get ip neutron with vmware nsx plugin

It's just simple DVS.

core_plugin = vmware_nsx.plugin.NsxDvsPlugin


On Thu, Jul 28, 2016 at 10:54 PM, Gary Kotton <gkot...@vmware.com> wrote:
Hi,
Which backend NSX version are you using? Is this NSX|V, NSX|MH or simple DVS?
Thanks
Gary

From: Vaidyanath Manogaran <vaidyanat...@gmail.com>
Date: Thursday, July 28, 2016 at 8:04 PM
To: Scott Lowe <scott.l...@scottlowe.org>
Cc: "openstack@lists.openstack.org" <openstack@lists.openstack.org>, 
"commun...@lists.openstack.org" <commun...@lists.openstack.org>
Subject: Re: [Openstack] vm unable to get ip neutron with vmware nsx plugin

Hi Scott,
Thank you for the reply. My replies are inline, prefixed with [MV].

On Thu, Jul 28, 2016 at 8:29 PM, Scott Lowe <scott.l...@scottlowe.org> wrote:
Please see my responses inline, prefixed by [SL].


On Jul 28, 2016, at 2:43 AM, Vaidyanath Manogaran <vaidyanat...@gmail.com> wrote:
>
> 1- Controller node
>    Services - keystone, glance, neutron, nova
>    neutron plugins used - vmware-nsx - https://github.com/openstack/vmware-nsx/
>    neutron agents - openvswitch agent
> 2- compute node
>    Services - nova-compute


[SL] May I ask what version of NSX you're running?
[MV] I have installed it from source picked up from github stable/mitaka - 
https://github.com/openstack/vmware-nsx/tree/stable/mitaka

> I have all the services up and running, but when I provision the VM it does 
> not pick up the IP address offered by the DHCP server.


[SL] NSX doesn't currently handle DHCP on its own, so you'll need the Neutron 
DHCP agent running somewhere. Wherever it's running will need to have OVS 
installed and be registered into NSX as a "hypervisor" so that the DHCP agent 
can be plumbed into the overlay networks.

One common arrangement is to build a Neutron "network node" that is running the 
DHCP agent and metadata agent, and register that into NSX.
[MV] I have set up only the controller, with the Neutron metadata and DHCP agents.

root@controller:~# neutron agent-list
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
| id                                   | agent_type     | host       | alive | admin_state_up | binary                 |
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
| 5555dbd8-14d0-4a47-83bd-890737bcfe08 | DHCP agent     | controller | :-)   | True           | neutron-dhcp-agent     |
| f183a3b6-b065-4b90-b5b7-b3d819c30f5b | Metadata agent | controller | :-)   | True           | neutron-metadata-agent |
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
root@controller:~#



> here are the config details:-
>
> root@controller:~# neutron net-show test
> +---------------------------+--------------------------------------+
> | Field                     | Value                                |
> +---------------------------+--------------------------------------+
> | admin_state_up            | True                                 |
> | created_at                | 2016-07-28T13:35:22                  |
> | description               |                                      |
> | id                        | be2178a3-a268-47f4-809e-8e0024c6f054 |
> | name                      | test                                 |
> | port_security_enabled     | True                                 |
> | provider:network_type     | vlan                                 |
> | provider:physical_network | dvs                                  |
> | provider:segmentation_id  | 110                                  |
> | router:external           | False                                |
> | shared                    | True                                 |
> | status                    | ACTIVE                               |
> | subnets                   | 5009ec57-4ca7-4e2b-962e-549e6bbee408 |
> | tags                      |                                      |
> | tenant_id                 | ce581005def94bb1947eac9ac15f15ea     |
> | updated_at                | 2016-07-28T13:35:22                  |
> +---------------------------+--------------------------------------+
>
> root@controller:~# neutron subnet-show testsubnet
> +-------------------+------------------------------------------------------+
> | Field             | Value                                                |
> +-------------------+------------------------------------------------------+
> | allocation_pools  | {"start": "192.168.18.246", "end": "192.168.18.248"} |
> | cidr              | 192.168.18.0/24                                      |
> | created_at        | 2016-07-28T14:56:54                                  |
> | description       |                                                      |
> | dns_nameservers   | 192.168.13.12                                        |
> | enable_dhcp       | True                                                 |
> | gateway_ip        | 192.168.18.1                                         |
> | host_routes       |                                                      |
> | id                | 5009ec57-4ca7-4e2b-962e-549e6bbee408                 |
> | ip_version        | 4                                                    |
> | ipv6_address_mode |                                                      |
> | ipv6_ra_mode      |                                                      |
> | name              | testsubnet                                           |
> | network_id        | be2178a3-a268-47f4-809e-8e0024c6f054                 |
> | subnetpool_id     |                                                      |
> | tenant_id         | ce581005def94bb1947eac9ac15f15ea                     |
> | updated_at        | 2016-07-28T14:56:54                                  |
> +-------------------+------------------------------------------------------+
>
> root@controller:~# ovs-vsctl show
> d516b5b1-db3f-4acd-856c-10d530c58c23
>     Bridge br-dvs
>         Port br-dvs
>             Interface br-dvs
>                 type: internal
>         Port "eth1"
>             Interface "eth1"
>     Bridge br-int
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "tap91d8accd-6d"
>             Interface "tap91d8accd-6d"
>                 type: internal
>     ovs_version: "2.5.0"
>
> root@controller:~# ip netns
> qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054
>
> root@controller:~# ip netns exec qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054 ifconfig
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING  MTU:65536  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
> tap91d8accd-6d Link encap:Ethernet  HWaddr fa:16:3e:7f:5e:03
>           inet addr:192.168.18.246  Bcast:192.168.18.255  Mask:255.255.255.0
>           inet6 addr: fe80::f816:3eff:fe7f:5e03/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
>
> root@controller:~# ping 192.168.18.246
> PING 192.168.18.246 (192.168.18.246) 56(84) bytes of data.
> ^C
> --- 192.168.18.246 ping statistics ---
> 20 packets transmitted, 0 received, 100% packet loss, time 18999ms
>
> I don't have any agents running, because vmware_nsx should be taking care of 
> the communication with Open vSwitch.
>
> Commandline: apt install openvswitch-switch
> Install: openvswitch-switch:amd64 (2.5.0-0ubuntu1~cloud0),
>          openvswitch-common:amd64 (2.5.0-0ubuntu1~cloud0, automatic)
>

[SL] You need to ensure you are using the version of OVS that is matched 
against your version of NSX. At this time, I don't believe it's OVS 2.5.0 (as 
noted in your command-line installation of OVS).
[MV] How do I ensure the supported version is installed? Is there a support 
matrix? If so, could you please share it?
--
Scott



--
Regards,

Vaidyanath
+91-9483465528 (M)



--
Regards,

Vaidyanath
+91-9483465528 (M)



--
Regards,

Vaidyanath
+91-9483465528(M)
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
