[Yahoo-eng-team] [Bug 1912672] Re: [RFE] Enable set quota per floating-ips pool.

2021-08-26 Thread Slawek Kaplonski
Due to no activity on this RFE for a few months now, I'm going to close it
for now. Feel free to reopen it and provide additional information when
needed.

** Changed in: neutron
   Status: New => Opinion

** Tags removed: rfe
** Tags added: rfe-postponed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1912672

Title:
  [RFE] Enable set quota per floating-ips pool.

Status in neutron:
  Opinion

Bug description:
  [Request]
  In an OpenStack environment, the quota for total floating IPs is defined
  only at the tenant/project level. I would like the option of limiting the
  number of floating IPs per available network pool.

  [Example]
  An environment has two floating IP pool networks: internet and intranet.
  The internet pool has fewer IPs available in total than the intranet pool,
  hence the need for different quotas per floating IP pool, to ensure the
  number of IPs is not exceeded.

  From the internet pool: max. 1 floating IP
  From the intranet pool: max. 5 floating IPs
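  The requested behavior can be sketched as a simple per-pool check. A
  minimal illustration in Python, using the pool names and limits from the
  example above; the quota mapping and `can_allocate` helper are hypothetical
  and do not exist in neutron today:

```python
# Hypothetical sketch of the requested per-pool floating IP quota check.
# Pool names and limits come from the example above; nothing here is an
# existing neutron API -- it only illustrates the requested semantics.

POOL_QUOTAS = {"internet": 1, "intranet": 5}

def can_allocate(pool, allocations, quotas=POOL_QUOTAS):
    """Return True if one more floating IP may be taken from `pool`."""
    limit = quotas.get(pool)
    if limit is None:  # no per-pool quota defined for this pool: allow
        return True
    return allocations.get(pool, 0) < limit

# A project already holding 1 internet FIP and 3 intranet FIPs:
allocations = {"internet": 1, "intranet": 3}
print(can_allocate("internet", allocations))  # False: internet limit (1) reached
print(can_allocate("intranet", allocations))  # True: 3 of 5 used
```

  Today only the per-project floating IP quota exists; a check like the one
  above would have to run per target network in addition to it.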

  [Env]
  The current environment is Bionic/Train OpenStack/Juju charms, but this is
  valid for other versions of OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1912672/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1941725] [NEW] neutron-lib unit tests broken with oslo.messaging 12.9.0

2021-08-26 Thread Rodolfo Alonso
Public bug reported:

oslo.messaging 12.9.0 adds support for metrics (see [0]). This new feature
breaks some RPC-related neutron-lib unit tests.

[0] https://review.opendev.org/c/openstack/oslo.messaging/+/761848

** Affects: neutron
 Importance: Critical
 Status: Confirmed

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1941725

Title:
  neutron-lib unit tests broken with oslo.messaging 12.9.0

Status in neutron:
  Confirmed

Bug description:
  oslo.messaging 12.9.0 adds support for metrics (see [0]). This new feature
  breaks some RPC-related neutron-lib unit tests.

  [0] https://review.opendev.org/c/openstack/oslo.messaging/+/761848

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1941725/+subscriptions




[Yahoo-eng-team] [Bug 1600004] Re: UX: Inconsistent table color in Flavor Popover

2021-08-26 Thread Vishal Manchanda
It is fixed by https://review.openstack.org/#/c/534602/

** Changed in: horizon
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600004

Title:
  UX: Inconsistent table color in Flavor Popover

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Looks like tr:hover bgcolor attributes are propagated to a child table
  that exists in a popover shown when the user clicks on a flavor name.

  How to reproduce:
  1) Go to Project->Instances
  2) Locate Size Column (Flavor)
  3) Click on Flavor Name.
  4) Mouse over and out of some of the table rows and see how the table that
  shows "Flavor Details" changes its background color.

  See screenshots:
  1. http://pasteboard.co/8JUQd3oMS.png
  2. http://pasteboard.co/8JWTGTYde.png

  Note: this behavior is more noticeable when viewing the flavor details of
  a row with class "even" (tr.even).
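  The usual remedy is to scope the hover rule so it does not cascade into
  the nested table. A minimal CSS illustration with hypothetical class
  names, not necessarily the fix that was actually committed:

```css
/* Apply the hover background only to the outer table's own rows, so it
   is not inherited by the nested "Flavor Details" table in the popover. */
table.datatable > tbody > tr:hover > td {
  background-color: #f0f0f0;
}

/* Give cells of any table nested inside the popover a fixed background. */
table.datatable .popover table td {
  background-color: transparent;
}
```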

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600004/+subscriptions




[Yahoo-eng-team] [Bug 1709765] Re: Failed to create keypair in ng create instance form when the quota exceeded

2021-08-26 Thread Vishal Manchanda
It is fixed by https://review.opendev.org/c/openstack/horizon/+/677580

** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1709765

Title:
  Failed to create keypair in ng create instance form when the quota
  exceeded

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the ng create instance form, we can create and import keypairs.
  Keypairs have quota management; when the keypair quota is exceeded,
  creating or importing a keypair fails and the API returns "Quota
  exceeded, too many key pairs. (HTTP 403) (Request-ID:
  req-841e0499-ae34-4029-9a2f-04a5a6d3e3f7)".

  Like the keypairs panel, the keypair tab of the ng create instance form
  should add a quota check.
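  The requested check amounts to comparing the keypair quota with current
  usage before enabling the create/import actions. A minimal sketch in
  Python; the function names are illustrative, not actual Horizon code:

```python
# Hypothetical sketch of the requested client-side check: disable the
# create/import keypair actions once the quota is exhausted, instead of
# letting the API fail with HTTP 403. Names are illustrative only.

def keypair_quota_remaining(quota_total, keypairs_used):
    """Remaining keypair allowance; a quota of -1 means unlimited."""
    if quota_total < 0:
        return float("inf")
    return max(quota_total - keypairs_used, 0)

def can_create_keypair(quota_total, keypairs_used):
    return keypair_quota_remaining(quota_total, keypairs_used) > 0

print(can_create_keypair(100, 100))  # False -> grey out create/import
print(can_create_keypair(100, 7))    # True
```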

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1709765/+subscriptions




[Yahoo-eng-team] [Bug 1941744] [NEW] Rescue of volume based instance fails silently due to wrong API version

2021-08-26 Thread Christian Rohmann
Public bug reported:

Even though the feature is noted in the release notes of the Ussuri release
(https://docs.openstack.org/releasenotes/nova/ussuri.html#relnotes-21-0-0-stable-ussuri),
a volume-backed instance cannot be "rescued"
(https://docs.openstack.org/nova/ussuri/user/rescue.html) via the Horizon
Dashboard.
After clicking "rescue", the action silently fails and nothing happens.

I believe this is simply because the Nova API version used is not recent
enough.
According to
https://github.com/openstack/nova/commit/ff3fd846362dfaa7d880dca83f1482b7a8ce80c5
at least microversion 2.87 is required, and I was able to reproduce the
issue using the CLI:


1) no version selected 

# openstack server rescue --image $IMAGE_ID --password abc123 $INSTANCE_ID

Instance $INSTANCE_ID cannot be rescued: Cannot rescue a volume-backed
instance (HTTP 400) (Request-ID:
req-8608b9a3-2a10-40ce-a76e-21c93c3bbd8b)


2) selected API version 2.87

# openstack --os-compute-api-version 2.87 server rescue --image $IMAGE_ID
--password abc123 $INSTANCE_ID


-> success.
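The guard Horizon needs boils down to a microversion comparison. A small
sketch in Python; the function names are illustrative, not Nova or Horizon
code. Note that microversions must be compared numerically, since a plain
string comparison would wrongly rank "2.9" above "2.87":

```python
# Illustrative microversion check: volume-backed rescue needs compute API
# microversion >= 2.87. Compare (major, minor) tuples, never strings.

def parse_microversion(version):
    major, minor = version.split(".")
    return (int(major), int(minor))

def supports_stable_device_rescue(negotiated_version):
    return parse_microversion(negotiated_version) >= (2, 87)

print(supports_stable_device_rescue("2.87"))  # True
print(supports_stable_device_rescue("2.9"))   # False: minor 9 < 87
```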

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1941744

Title:
  Rescue of volume based instance fails silently due to wrong API
  version

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Even though the feature is noted in the release notes of the Ussuri release
  (https://docs.openstack.org/releasenotes/nova/ussuri.html#relnotes-21-0-0-stable-ussuri),
  a volume-backed instance cannot be "rescued"
  (https://docs.openstack.org/nova/ussuri/user/rescue.html) via the Horizon
  Dashboard.
  After clicking "rescue", the action silently fails and nothing happens.

  I believe this is simply because the Nova API version used is not recent
  enough.
  According to
  https://github.com/openstack/nova/commit/ff3fd846362dfaa7d880dca83f1482b7a8ce80c5
  at least microversion 2.87 is required, and I was able to reproduce the
  issue using the CLI:

  
  1) no version selected 

  # openstack server rescue --image $IMAGE_ID --password abc123 $INSTANCE_ID

  Instance $INSTANCE_ID cannot be rescued: Cannot rescue a volume-backed
  instance (HTTP 400) (Request-ID:
  req-8608b9a3-2a10-40ce-a76e-21c93c3bbd8b)


  
  2) selected API version 2.87

  # openstack --os-compute-api-version 2.87 server rescue --image
  $IMAGE_ID --password abc123 $INSTANCE_ID

  
  -> success.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1941744/+subscriptions




[Yahoo-eng-team] [Bug 1941757] [NEW] docs: neutron_tunnel vs. neutron_tunneled

2021-08-26 Thread Stephan Pampel
Public bug reported:

Description
===
The configuration section in nova.conf for L3 tunneled networking is called
"neutron_tunnel" in
https://docs.openstack.org/nova/latest/configuration/config.html#neutron.physnets
but "neutron_tunneled" in
https://docs.openstack.org/nova/latest/admin/networking.html

Expected result
===
I assume these refer to the same section, which should therefore have the
same name in both places.
I think the admin/networking guide is wrong and that the section title is
"neutron_tunnel", based on
https://opendev.org/openstack/nova/src/branch/master/nova/conf/neutron.py#L160
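For reference, the group as defined in nova/conf/neutron.py is spelled
without the trailing "ed". A hedged nova.conf fragment; the `numa_nodes`
value is only an illustration of an option that can live in this group:

```ini
# nova.conf -- the group is "neutron_tunnel" (not "neutron_tunneled"),
# matching nova/conf/neutron.py; the value below is illustrative only.
[neutron_tunnel]
numa_nodes = 0
```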

** Affects: nova
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1941757

Title:
  docs: neutron_tunnel vs. neutron_tunneled

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The configuration section in nova.conf for L3 tunneled networking is called
  "neutron_tunnel" in
  https://docs.openstack.org/nova/latest/configuration/config.html#neutron.physnets
  but "neutron_tunneled" in
  https://docs.openstack.org/nova/latest/admin/networking.html

  Expected result
  ===
  I assume these refer to the same section, which should therefore have the
  same name in both places.
  I think the admin/networking guide is wrong and that the section title is
  "neutron_tunnel", based on
  https://opendev.org/openstack/nova/src/branch/master/nova/conf/neutron.py#L160

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1941757/+subscriptions




[Yahoo-eng-team] [Bug 1941725] Re: neutron-lib unit tests broken with oslo.messaging 12.9.0

2021-08-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/806129
Committed: 
https://opendev.org/openstack/neutron-lib/commit/c3f540a4cd252a4147e4e013df114a2303c810b7
Submitter: "Zuul (22348)"
Branch: master

commit c3f540a4cd252a4147e4e013df114a2303c810b7
Author: Rodolfo Alonso Hernandez 
Date:   Thu Aug 26 09:54:06 2021 +

Disable "oslo.messaging" metrics

Since [1], "oslo.messaging" provides a feature to send RPC metrics
to "oslo.metrics". This patch is breaking the CI unit tests jobs.

Because those metrics are not needed in this CI, this patch disables
this feature.

[1]https://review.opendev.org/c/openstack/oslo.messaging/+/761848

Change-Id: I8611e140305636685c1532dc36812d24a234dc9b
Closes-Bug: #1941725


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1941725

Title:
  neutron-lib unit tests broken with oslo.messaging 12.9.0

Status in neutron:
  Fix Released

Bug description:
  oslo.messaging 12.9.0 adds support for metrics (see [0]). This new feature
  breaks some RPC-related neutron-lib unit tests.

  [0] https://review.opendev.org/c/openstack/oslo.messaging/+/761848

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1941725/+subscriptions




[Yahoo-eng-team] [Bug 1941784] [NEW] OVN Failed to bind SRIOV port

2021-08-26 Thread Satish Patel
Public bug reported:

Testing OVN with SRIOV on the latest wallaby. I have the following
configuration for SRIOV.

# created network (my gateway is my datacenter physical router)
neutron subnet-create net_vlan69 10.69.0.0/21 --name sub_vlan69 --allocation-pool start=10.69.7.1,end=10.69.7.254 --dns-nameservers 10.64.0.10 10.64.0.11 --gateway=10.69.0.1

# ml2_config.ini 
mechanism_drivers = ovn,sriovnicswitch

# sriov_nic_agent.ini
[agent]
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
[sriov_nic]
exclude_devices =
physical_device_mappings = vlan:eno49,vlan:eno50

# compute / nova.conf 
[pci]
# White list of PCI devices available to VMs.
passthrough_whitelist = { "physical_network":"vlan", "devname":"eno49" }

I created a neutron port and then tried to create an instance using it; I
got the following error in neutron-server.log:

Aug 26 17:37:40 ovn-lab-infra-1-neutron-server-container-bbc2e2bc neutron-server[7325]: 2021-08-26 17:37:40.926 7325 ERROR neutron.plugins.ml2.managers [req-7cd1f547-f909-41cf-95bf-4bd6ee60fa3a 8f68544ba1ce4f32b78a53ee9de0fcc4 47bbb171bfad4b109a4f93e25b9e5cc8 - default default] Failed to bind port ee7432c4-3b55-4290-8666-b6088ae5214e on host ovn-lab-comp-sriov-1.v1v0x.net for vnic_type direct using segments [{'id': '43d98d4d-9a41-4f40-ab1c-60864289301a', 'network_type': 'vlan', 'physical_network': 'vlan', 'segmentation_id': 69, 'network_id': '73915d6b-155b-46c4-9755-edd4ceb8aaa9'}]

Here is the output of OVN

root@ovn-lab-infra-1-neutron-ovn-northd-container-cb55f5ef:~# ovn-sbctl list 
Chassis
_uuid   : 9e834f0d-b86c-47a6-8f95-57aab89a56cb
encaps  : [5d349a0f-7660-45fb-8acb-a30123cf3292, 
dc1478ea-769e-477c-bce4-fd020950894f]
external_ids: {datapath-type=system, 
iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
 is-interconn="false", 
"neutron:ovn-metadata-id"="e36ecbc7-6468-5912-ab9e-c35e37f7ae28", 
"neutron:ovn-metadata-sb-cfg"="11", ovn-bridge-mappings="vlan:br-provider", 
ovn-chassis-mac-mappings="", ovn-cms-options=enable-chassis-as-gw}
hostname: ovn-lab-comp-gen-1.v1v0x.net
name: "86dafd8a-0bc2-4225-ad69-00c86412b92c"
nb_cfg  : 11
transport_zones : []
vtep_logical_switches: []

_uuid   : 672ebc1a-09e4-4a3a-82e1-40ab982169f3
encaps  : [1386ce35-02cf-43f7-bf7d-7045b96330fe, 
b684747a-d881-4758-9023-052801a36f12]
external_ids: {datapath-type="", 
iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
 is-interconn="false", 
"neutron:ovn-metadata-id"="1e56c7cb-7d9d-5794-9ab5-dccbab988e54", 
"neutron:ovn-metadata-sb-cfg"="11", ovn-bridge-mappings="vlan:br-provider", 
ovn-chassis-mac-mappings="", ovn-cms-options=enable-chassis-as-gw}
hostname: ovn-lab-comp-sriov-1.v1v0x.net
name: "7b867957-bcf6-4ae7-a0b5-b6948ec85155"
nb_cfg  : 11
transport_zones : []
vtep_logical_switches: []


root@ovn-lab-infra-1-neutron-ovn-northd-container-cb55f5ef:~# ovn-nbctl list 
HA_Chassis
_uuid   : c4c08be8-5cdc-4cd7-99ed-a22bb787ab55
chassis_name: "86dafd8a-0bc2-4225-ad69-00c86412b92c"
external_ids: {}
priority: 32767

_uuid   : 8f072dda-070a-4ff3-805c-bb0f40b99348
chassis_name: "7b867957-bcf6-4ae7-a0b5-b6948ec85155"
external_ids: {}
priority: 32766


root@ovn-lab-infra-1-neutron-ovn-northd-container-cb55f5ef:~# ovn-nbctl find 
Logical_Switch_Port type=external
_uuid   : b83399de-eced-49cb-bfb1-b356ccaaa399
addresses   : ["fa:16:3e:97:d2:a0 10.69.7.30"]
dhcpv4_options  : 50522284-63ad-4f3c-8b74-05f2b0462171
dhcpv6_options  : []
dynamic_addresses   : []
enabled : true
external_ids: {"neutron:cidrs"="10.69.7.30/21", "neutron:device_id"="", 
"neutron:device_owner"="", 
"neutron:network_name"=neutron-73915d6b-155b-46c4-9755-edd4ceb8aaa9, 
"neutron:port_name"=sriov-port-1, 
"neutron:project_id"=a1f725b0477a4281bebf76d0765add18, 
"neutron:revision_number"="6", 
"neutron:security_group_ids"="5d0f9c38-85aa-42d4-8420-c76328606dbd"}
ha_chassis_group: d8e28798-412f-434d-b625-624f957be1e2
name: "ee7432c4-3b55-4290-8666-b6088ae5214e"
options : {mcast_flood_reports="true"}
parent_name : []
port_security   : ["fa:16:3e:97:d2:a0 10.69.7.30"]
tag : []
tag_request : []
type: external
up  : true


For testing, I also upgraded my neutron to the latest master in case I had
missed a patch, but the result is still the same.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1941784

Title:
  OVN  Failed to bind SRIOV port

Status in neutron:
  New

Bug description:
  Testing OVN with SR

[Yahoo-eng-team] [Bug 1941784] Re: OVN Failed to bind SRIOV port

2021-08-26 Thread Satish Patel
Turns out my issue was that the sriov-nic-agent was down or not configured
properly. Sorry for the spam.
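For anyone hitting the same bind failure: the dead agent shows up in the
agent list. A minimal sketch of the check in Python, assuming agent records
shaped like the Neutron agent-list API output ("NIC Switch agent" being the
type the sriov-nic-agent reports; field names are assumptions):

```python
# Flag SR-IOV NIC agents that are not alive, given agent records shaped
# like the Neutron "agent list" API output (field names assumed).

def dead_sriov_agents(agents):
    return [a["host"]
            for a in agents
            if a["agent_type"] == "NIC Switch agent" and not a["alive"]]

agents = [
    {"host": "ovn-lab-comp-sriov-1.v1v0x.net",
     "agent_type": "NIC Switch agent", "alive": False},
    {"host": "ovn-lab-comp-gen-1.v1v0x.net",
     "agent_type": "OVN Controller agent", "alive": True},
]
print(dead_sriov_agents(agents))  # ['ovn-lab-comp-sriov-1.v1v0x.net']
```

"Failed to bind port ... for vnic_type direct" is the typical symptom when
no alive NIC Switch agent is found on the target host.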

** Description changed:

  Testing OVN with SRIOV with latest wallaby. I have following
  configuration for SRIOV.
  
  # created network (my gateway is my datacenter physical router)
  neutron subnet-create net_vlan69 10.69.0.0/21 --name sub_vlan69 
--allocation-pool start=10.69.7.1,end=10.69.7.254 --dns-nameservers 10.64.0.10 
10.64.0.11 --gateway=10.69.0.1
  
- # ml2_config.ini 
+ # ml2_config.ini
  mechanism_drivers = ovn,sriovnicswitch
  
  # sriov_nic_agent.ini
  [agent]
  [securitygroup]
  firewall_driver = neutron.agent.firewall.NoopFirewallDriver
  [sriov_nic]
  exclude_devices =
  physical_device_mappings = vlan:eno49,vlan:eno50
  
- # compute / nova.conf 
+ # compute / nova.conf
  [pci]
  # White list of PCI devices available to VMs.
  passthrough_whitelist = { "physical_network":"vlan", "devname":"eno49" }
  
  I have created neutron port and then try to create instance using it i
  got following error in neutron-server.log
  
  Aug 26 17:37:40 ovn-lab-infra-1-neutron-server-container-bbc2e2bc 
neutron-server[7325]: 2021-08-26 17:37:40.926 7325 ERROR 
neutron.plugins.ml2.managers [req-7cd1f547-f909-41cf-95bf-4bd6ee60fa3a 
8f68544ba1ce4f32b7
  8a53ee9de0fcc4 47bbb171bfad4b109a4f93e25b9e5cc8 - default default] Failed to 
bind port ee7432c4-3b55-4290-8666-b6088ae5214e on host 
ovn-lab-comp-sriov-1.v1v0x.net for vnic_type direct using segments [{'id': 
'43d98d4d-9a41-4f40-ab1c-6086
  4289301a', 'network_type': 'vlan', 'physical_network': 'vlan', 
'segmentation_id': 69, 'network_id': '73915d6b-155b-46c4-9755-edd4ceb8aaa9'}]
  
  Here is the output of OVN
  
  root@ovn-lab-infra-1-neutron-ovn-northd-container-cb55f5ef:~# ovn-sbctl list 
Chassis
  _uuid   : 9e834f0d-b86c-47a6-8f95-57aab89a56cb
  encaps  : [5d349a0f-7660-45fb-8acb-a30123cf3292, 
dc1478ea-769e-477c-bce4-fd020950894f]
  external_ids: {datapath-type=system, 
iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
 is-interconn="false", 
"neutron:ovn-metadata-id"="e36ecbc7-6468-5912-ab9e-c35e37f7ae28", 
"neutron:ovn-metadata-sb-cfg"="11", ovn-bridge-mappings="vlan:br-provider", 
ovn-chassis-mac-mappings="", ovn-cms-options=enable-chassis-as-gw}
  hostname: ovn-lab-comp-gen-1.v1v0x.net
  name: "86dafd8a-0bc2-4225-ad69-00c86412b92c"
  nb_cfg  : 11
  transport_zones : []
  vtep_logical_switches: []
  
  _uuid   : 672ebc1a-09e4-4a3a-82e1-40ab982169f3
  encaps  : [1386ce35-02cf-43f7-bf7d-7045b96330fe, 
b684747a-d881-4758-9023-052801a36f12]
  external_ids: {datapath-type="", 
iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
 is-interconn="false", 
"neutron:ovn-metadata-id"="1e56c7cb-7d9d-5794-9ab5-dccbab988e54", 
"neutron:ovn-metadata-sb-cfg"="11", ovn-bridge-mappings="vlan:br-provider", 
ovn-chassis-mac-mappings="", ovn-cms-options=enable-chassis-as-gw}
  hostname: ovn-lab-comp-sriov-1.v1v0x.net
  name: "7b867957-bcf6-4ae7-a0b5-b6948ec85155"
  nb_cfg  : 11
  transport_zones : []
  vtep_logical_switches: []
  
- 
  root@ovn-lab-infra-1-neutron-ovn-northd-container-cb55f5ef:~# ovn-nbctl list 
HA_Chassis
  _uuid   : c4c08be8-5cdc-4cd7-99ed-a22bb787ab55
  chassis_name: "86dafd8a-0bc2-4225-ad69-00c86412b92c"
  external_ids: {}
  priority: 32767
  
  _uuid   : 8f072dda-070a-4ff3-805c-bb0f40b99348
  chassis_name: "7b867957-bcf6-4ae7-a0b5-b6948ec85155"
  external_ids: {}
  priority: 32766
- 
  
  root@ovn-lab-infra-1-neutron-ovn-northd-container-cb55f5ef:~# ovn-nbctl find 
Logical_Switch_Port type=external
  _uuid   : b83399de-eced-49cb-bfb1-b356ccaaa399
  addresses   : ["fa:16:3e:97:d2:a0 10.69.7.30"]
  dhcpv4_options  : 50522284-63ad-4f3c-8b74-05f2b0462171
  dhcpv6_options  : []
  dynamic_addresses   : []
  enabled : true
  external_ids: {"neutron:cidrs"="10.69.7.30/21", 
"neutron:device_id"="", "neutron:device_owner"="", 
"neutron:network_name"=neutron-73915d6b-155b-46c4-9755-edd4ceb8aaa9, 
"neutron:port_name"=sriov-port-1, 
"neutron:project_id"=a1f725b0477a4281bebf76d0765add18, 
"neutron:revision_number"="6", 
"neutron:security_group_ids"="5d0f9c38-85aa-42d4-8420-c76328606dbd"}
  ha_chassis_group: d8e28798-412f-434d-b625-624f957be1e2
  name: "ee7432c4-3b55-4290-8666-b6088ae5214e"
  options : {mcast_flood_reports="true"}
  parent_name : []
  port_security   : ["fa:16:3e:97:d2:a0 10.69.7.30"]
  tag : []
  tag_request : []
  type: external
  up  : true
  
- 
- For testing i have upgraded my neutron with latest master also to see if i 
missed any patch but still result is same.
+ For testing i have upgraded my neutron with latest master also to see if i
missed any patch but still result is same.