[Yahoo-eng-team] [Bug 1908408] [NEW] Need to retry vmware instance creation

2020-12-16 Thread Adit Sarfaty
Public bug reported:

In some cases vm boot fails with this trace:

ERROR nova.compute.manager [None req-93f69841-12ab-42c5-b56a-83e964f4a374 tempest-TestNetworkBasicOps-721884738 tempest-TestNetworkBasicOps-721884738] [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2] Instance failed to spawn: oslo_vmware.exceptions.VimFaultException: Network interface 'VirtualE1000' uses network 'tempest-network-smoke--802933861_b25e6...71f8b (nsx.LogicalSwitch:6b0a691d-794f-42f7-b648-d67e8a10c0eb)', which is not accessible.
Faults: ['CannotAccessNetwork']
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2] Traceback (most recent call last):
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/opt/stack/nova/nova/compute/manager.py", line 2663, in _build_resources
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     yield resources
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/opt/stack/nova/nova/compute/manager.py", line 2437, in _build_and_run_instance
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     block_device_info=block_device_info)
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 544, in spawn
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     admin_password, network_info, block_device_info)
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 753, in spawn
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     metadata)
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 304, in build_virtual_machine
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     config_spec, self._root_resource_pool)
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 1392, in create_vm
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     {'ostype': config_spec.guestId})
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     self.force_reraise()
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     six.reraise(self.type_, self.value, self.tb)
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     raise value
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 1377, in create_vm
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     task_info = session._wait_for_task(vm_create_task)
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 771, in _wait_for_task
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     return self.wait_for_task(task_ref)
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/usr/local/lib/python3.6/dist-packages/oslo_vmware/api.py", line 399, in wait_for_task
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     return evt.wait()
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/usr/local/lib/python3.6/dist-packages/eventlet/event.py", line 125, in wait
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     result = hub.switch()
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]   File "/usr/local/lib/python3.6/dist-packages/eventlet/hubs/hub.py", line 298, in switch
ERROR nova.compute.manager [instance: 26d7da79-438a-41e2-bb57-8019f90f3fa2]     return
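A retry around the VM creation task would avoid failing the whole boot when
vCenter raises this transient 'CannotAccessNetwork' fault (the logical switch
may simply not be visible to vCenter yet). A minimal sketch of the idea;
create_vm_once is a hypothetical callable wrapping the existing
create-and-wait logic, not the actual nova code:

    # Sketch only: retry VM creation on the transient fault.
    import time

    from oslo_vmware import exceptions as vexc

    def create_vm_with_retry(create_vm_once, retries=3, delay=2):
        for attempt in range(1, retries + 1):
            try:
                return create_vm_once()
            except vexc.VimFaultException as e:
                # fault_list carries the raw VMware fault names.
                transient = 'CannotAccessNetwork' in (e.fault_list or [])
                if not transient or attempt == retries:
                    raise
                time.sleep(delay)  # give the backend time to expose the switch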

[Yahoo-eng-team] [Bug 1882873] Re: internal server error on updating no-gateway on the dhcpv6 subnet

2020-10-06 Thread Adit Sarfaty
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1882873

Title:
  internal server error on updating no-gateway on the dhcpv6 subnet

Status in neutron:
  New

Bug description:
  nicira@bionic-template:~/devstack$ neutron net-create netbug
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Created a new network:
  +-------------------------+--------------------------------------+
  | Field                   | Value                                |
  +-------------------------+--------------------------------------+
  | admin_state_up          | True                                 |
  | availability_zone_hints |                                      |
  | availability_zones      | defaultp                             |
  | created_at              | 2020-05-27T09:52:16Z                 |
  | description             |                                      |
  | dns_domain              |                                      |
  | id                      | ef1b20da-443c-4032-b148-c594ff7ef5b5 |
  | ipv4_address_scope      |                                      |
  | ipv6_address_scope      |                                      |
  | name                    | netbug                               |
  | port_security_enabled   | True                                 |
  | project_id              | 77a3f40660c24d20baf65780d5910efc     |
  | qos_policy_id           |                                      |
  | revision_number         | 2                                    |
  | router:external         | False                                |
  | shared                  | False                                |
  | status                  | ACTIVE                               |
  | subnets                 |                                      |
  | tags                    |                                      |
  | tenant_id               | 77a3f40660c24d20baf65780d5910efc     |
  | updated_at              | 2020-05-27T09:52:16Z                 |
  +-------------------------+--------------------------------------+
  nicira@bionic-template:~/devstack$ neutron subnet-create netbug 29::0/64 
--name subnetbug  --ip-version 6
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Created a new subnet:
  +-------------------+------------------------------------------------------+
  | Field             | Value                                                |
  +-------------------+------------------------------------------------------+
  | allocation_pools  | {"start": "29::2", "end": "29::ffff:ffff:ffff:ffff"} |
  | cidr              | 29::/64                                              |
  | created_at        | 2020-05-27T09:52:52Z                                 |
  | description       |                                                      |
  | dns_nameservers   |                                                      |
  | enable_dhcp       | True                                                 |
  | gateway_ip        | 29::1                                                |
  | host_routes       |                                                      |
  | id                | f37571f1-d142-4281-a60c-404b263dd95d                 |
  | ip_version        | 6                                                    |
  | ipv6_address_mode |                                                      |
  | ipv6_ra_mode      |                                                      |
  | name              | subnetbug                                            |
  | network_id        | ef1b20da-443c-4032-b148-c594ff7ef5b5                 |
  | project_id        | 77a3f40660c24d20baf65780d5910efc                     |
  | revision_number   | 0                                                    |
  | subnetpool_id     |                                                      |
  | tags              |                                                      |
  | tenant_id         | 77a3f40660c24d20baf65780d5910efc                     |
  | updated_at        | 2020-05-27T09:52:52Z                                 |
  +-------------------+------------------------------------------------------+
  nicira@bionic-template:~/devstack$ neutron router-create rtrbug
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Created a new router:
  +-------------------------+--------------------------------------+
  | Field                   | Value                                |
  +-------------------------+--------------------------------------+
  | admin_state_up          | True                                 |
  | availability_zone_hints |                                      |
  | availability_zones      | defaultp                             |
  | created_at              |

[Yahoo-eng-team] [Bug 1279611] Re: urlparse is incompatible for python 3

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279611

Title:
   urlparse is incompatible for python 3

Status in Astara:
  Fix Committed
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in gce-api:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-doc-tools:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in RACK:
  Fix Committed
Status in Sahara:
  Fix Released
Status in Solar:
  Invalid
Status in storyboard:
  Fix Committed
Status in surveil:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in swift-bench:
  Fix Committed
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in vmware-nsx:
  Fix Released
Status in zaqar:
  Fix Released
Status in Zuul:
  Fix Committed

Bug description:
  import urlparse

  should be changed to:
  import six.moves.urllib.parse as urlparse

  for Python 3 compatibility.
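  For reference, a minimal usage example of the compatible import (assuming
  the six library is available):

      # six.moves resolves to urlparse on Python 2 and urllib.parse on Python 3.
      import six.moves.urllib.parse as urlparse

      parts = urlparse.urlparse('http://controller:9696/v2.0/ports?limit=10')
      print(parts.scheme, parts.netloc, parts.path, parts.query)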

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1279611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410777] Re: Floating IP ops lock wait timeout

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1410777

Title:
  Floating IP ops lock wait timeout

Status in neutron:
  Won't Fix
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  Under heavy load floating IP operations can trigger a lock wait
  timeout, thus causing the operation itself to fail.

  The reason for the timeout is the usual untimely eventlet yield which
  can be triggered in many places during the operation. The chances of
  this happening are increased by the fact that _update_fip_assoc
  (called within a DB transaction) does several interactions with the
  NSX backend.

  Unfortunately it is not practical to change the logic of the plugin in
  a way such that _update_fip_assoc does not go to the backend anymore,
  especially because the fix would be so extensive that it would be
  hardly backportable. An attempt in this direction also did not provide
  a solution: https://review.openstack.org/#/c/138078/
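  The usual remedy for this class of problem is to keep backend calls outside
  the DB transaction, so an eventlet yield cannot happen while row locks are
  held. A runnable, schematic sketch of the pattern (all names illustrative,
  not the plugin's actual code):

      import contextlib

      @contextlib.contextmanager
      def db_transaction():            # stand-in for context.session.begin()
          yield

      def update_fip_db(fip):          # DB-only part of the update
          fip['status'] = 'PENDING'
          return fip

      def update_nsx_backend(fip):     # stand-in for the _update_fip_assoc I/O
          fip['status'] = 'ACTIVE'

      def update_floatingip(fip):
          with db_transaction():       # transaction stays short: DB work only
              fip = update_fip_db(fip)
          update_nsx_backend(fip)      # backend I/O after locks are released
          return fip

      print(update_floatingip({'id': 'fip-1'}))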

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1410777/+subscriptions



[Yahoo-eng-team] [Bug 1416596] Re: vmware unit tests broken

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416596

Title:
  vmware unit tests broken

Status in neutron:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  commit 79c97120de9cff4d0992b5d41ff4bbf05e890f89 introduced a constraint which
  causes a vmware unit test to fail.
  This unit test directly exercises the plugin, creating a context with
  get_admin_context. For such a context, tenant_id is None, and the DB
  constraint on the default security group table fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416596/+subscriptions



[Yahoo-eng-team] [Bug 1376037] Re: NSX: switch chaining logic is obsolete

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376037

Title:
  NSX: switch chaining logic is obsolete

Status in neutron:
  Won't Fix
Status in neutron juno series:
  New
Status in vmware-nsx:
  Fix Released

Bug description:
  The NSX plugin implements a "logical switch chaining" logic for
  implementing flat/vlan neutron networks with a very large number of
  ports on NSX backends for which the number of ports per logical switch
  is limited.

  This limitation however pertains exclusively to now old and
  discontinued versions of NSX, and therefore the corresponding logic
  for creating such chained switches can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376037/+subscriptions



[Yahoo-eng-team] [Bug 1481346] Re: MH: router delete might return a 500 error

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481346

Title:
  MH: router delete might return a 500 error

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Invalid

Bug description:
  If a logical router has been removed from the backend, and the DB is in
  an inconsistent state where no NSX mapping is stored for the neutron
  logical router, the backend will fail when attempting deletion of the
  router, causing the neutron operation to return a 500.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481346/+subscriptions



[Yahoo-eng-team] [Bug 1483920] Re: NSX-mh: honour distributed_router config flag

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483920

Title:
  NSX-mh: honour distributed_router config flag

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  The VMware NSX plugin is not honouring the "router_distributed = True"
  flag when set in /etc/neutron.conf. If the router_distributed parameter
  is set to "True", all routers created by tenants should default to
  distributed routers. For example, the CLI command below should create a
  distributed logical router, but instead it creates a non-distributed
  router.

  neutron router-create --tenant-id $TENANT tenant-router

  In order to create a distributed router the "--distributed True"
  option must be passed, as shown below.

  neutron router-create --tenant-id $TENANT csinfra-router-test
  --distributed True

  This happens because the NSX-mh plugin relies on the default value
  implemented in the backend rather than on the neutron configuration; it
  should be changed so that this plugin behaves like the reference
  implementation.
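  A sketch of the expected behaviour (illustrative, not the actual plugin
  code): when the API request leaves "distributed" unset, fall back to the
  router_distributed value registered with oslo.config.

      from oslo_config import cfg

      cfg.CONF.register_opts([cfg.BoolOpt('router_distributed', default=False)])

      def resolve_distributed(router_body):
          # Only apply the config default when the caller did not pass
          # --distributed explicitly.
          value = router_body.get('distributed')
          return cfg.CONF.router_distributed if value is None else value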

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483920/+subscriptions



[Yahoo-eng-team] [Bug 1578734] Re: Instance Creation fails because of Timeout and Duplicate Hostname for Bindings on NSX Edge (DHCP)

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1578734

Title:
  Instance Creation fails because of Timeout and Duplicate Hostname for
  Bindings on NSX Edge (DHCP)

Status in OpenStack Compute (nova):
  Invalid
Status in vmware-nsx:
  Fix Released

Bug description:
  Using Openstack Liberty with VMware Driver / NSX Integration.

  When deploying a template with Heat (3 instances, with 3 security groups and
  their own network), the creation fails with an error: no hosts available.

  But the main reason is a timeout that occurs between nova and neutron.
  See the logs below.

  It seems that nova, via neutron, creates a new port in NSX (the DHCP
  reservation); this request times out, but the port gets created anyway.
  2016-05-05 15:53:33.165 8705 ERROR nova.compute.manager [instance: 
6129429c-803b-47bb-a5cd-afb0419e2a12] RequestTimeout: Request to 
http://172.17.99.211:9696/v2.0/ports/4832456a-c007-4c06-bcbe-718cf9346a93.json 
timed out (HTTP 408)
  29c-803b-47bb-a5cd-afb0419e2a12] Error from last host: 
vcenter-MirantisLiberty (node 
domain-c197.11160f6a-be80-4025-a0be-b915de612a16): [u'Traceback (most recent 
call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compu
  te/manager.py", line 1907, in _do_build_and_run_instance\n
filter_properties)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2059, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, reaso
  n=six.text_type(e))\n', u'RescheduledException: Build of instance 
6129429c-803b-47bb-a5cd-afb0419e2a12 was re-scheduled: Request to 
http://172.17.99.211:9696/v2.0/ports/4832456a-c007-4c06-bcbe-718cf9346a93.json 
timed out (HTTP 408)
  \n']

  I can see that the Edge DHCP server created with Heat has also created a
  reservation for this machine.

  Then, as the instance creation is rescheduled, nova tries again to create
  that port but fails, as the port is already configured on the Edge:

  
  2016-05-05 15:54:51.191 8705 ERROR nova.compute.manager [instance: 
18c9db8e-1a89-4ac6-a432-3ca75e92dbf2] InternalServerError: Request 
https://172.17.99.8/api/4.0/edges/edge-115/dhcp/config/bindings is Bad, 
response {"details":"[Dhcp] Duplicate hostname for binding 
4832456a-c007-4c06-bcbe-718cf9346a93 : VmId null and VnicId 
null.","errorCode":12504,"moduleName":"vShield Edge"}

  
  The environment consists of following:

  Vsphere Version: 6.0.2
  Openstack Liberty (Deployed from Mirantis)
  There is only one Controller which includes all the Parts of Openstack (Nova, 
Neutron, Cinder, Glance, Horizon)

  Thanks for any help to solve the Problem.


  Here are Logs in Detail:

  
  83a2-445a-b43a-4a0b59b8e3f5] Claim successful
  <179>May  5 15:53:44 node-3 nova-compute: 2016-05-05 15:53:44.592 8705 ERROR 
nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1566, in 
_allocate_network_async
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager 
bind_host_id=bind_host_id)
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 727, in 
allocate_for_instance
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager 
self._delete_ports(neutron, instance, created_port_ids)
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 712, in 
allocate_for_instance
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager 
port_client.update_port(port['id'], port_req_body)
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 102, in 
with_params
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager ret = 
self.function(instance, *args, **kwargs)
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 562, in 
update_port
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager return 
self.put(self.port_path % (port), body=body)
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 302, in 
put
  2016-05-05 15:53:44.592 8705 ERROR nova.compute.manager headers=headers, 
params=params)
  

[Yahoo-eng-team] [Bug 1640319] Re: AttributeError: 'module' object has no attribute 'convert_to_boolean'

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640319

Title:
  AttributeError: 'module' object has no attribute 'convert_to_boolean'

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  With the latest neutron master code, the neutron service q-svc could not
  start due to the following error:
  2016-11-08 21:54:39.435 DEBUG oslo_concurrency.lockutils [-] Lock "manager" 
released by "neutron.manager._create_instance" :: held 1.467s from (pid=18534) 
inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
  2016-11-08 21:54:39.435 ERROR neutron.service [-] Unrecoverable error: please 
check log for details.
  2016-11-08 21:54:39.435 TRACE neutron.service Traceback (most recent call 
last):
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 87, in serve_wsgi
  2016-11-08 21:54:39.435 TRACE neutron.service service.start()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 63, in start
  2016-11-08 21:54:39.435 TRACE neutron.service self.wsgi_app = 
_run_wsgi(self.app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 289, in _run_wsgi
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
config.load_paste_app(app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/common/config.py", line 125, in load_paste_app
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.load_app(app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/wsgi.py", line 353, in 
load_app
  2016-11-08 21:54:39.435 TRACE neutron.service return 
deploy.loadapp("config:%s" % self.config_path, name=name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2016-11-08 21:54:39.435 TRACE neutron.service return loadobj(APP, uri, 
name=name, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2016-11-08 21:54:39.435 TRACE neutron.service return context.create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2016-11-08 21:54:39.435 TRACE neutron.service return 
self.object_type.invoke(self)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2016-11-08 21:54:39.435 TRACE neutron.service **context.local_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
  2016-11-08 21:54:39.435 TRACE neutron.service val = callable(*args, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 31, in 
urlmap_factory
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.get_app(app_name, global_conf=global_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2016-11-08 21:54:39.435 TRACE neutron.service name=name, 
global_conf=global_conf).create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2016-11-08 21:54:39.435 TRACE neutron.service return 
self.object_type.invoke(self)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2016-11-08 21:54:39.435 TRACE neutron.service **context.local_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
  2016-11-08 21:54:39.435 TRACE neutron.service val = callable(*args, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/auth.py", line 71, in pipeline_factory
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.get_app(pipeline[-1])
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2016-11-08 21:54:39.435 TRACE neutron.service name=name, 
global_conf=global_conf).create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 

[Yahoo-eng-team] [Bug 1433550] Re: DVR: VMware NSX plugins do not need centralized snat interfaces

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433550

Title:
  DVR: VMware NSX plugins do not need centralized snat interfaces

Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  When creating a distributed router, a centralized SNAT port is
  created.

  However since the NSX backend does not need it to implement
  distributed routing, this is just a waste of resources (port and IP
  address). Also, it might confuse users with admin privileges as they
  won't know what these ports are doing.

  So even if they do no harm they should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/juno/+bug/1433550/+subscriptions



[Yahoo-eng-team] [Bug 1420278] Re: API workers might not work when sync thread is enabled

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420278

Title:
  API workers might not work when sync thread is enabled

Status in neutron:
  Invalid
Status in neutron icehouse series:
  New
Status in neutron juno series:
  New
Status in vmware-nsx:
  Fix Released

Bug description:
  API workers are started with a fork().
  It is well known that this operation uses CoW in the child process - and
  this should not constitute a problem.
  However, the status sync thread is started at plugin initialization and
  might already be running when the API workers are forked.
  The NSX API client, used extensively by this thread, uses an eventlet
  semaphore to grab backend connections from a pool.
  It is therefore possible that a worker process is forked while a semaphore
  is in the "busy" state. Once forked, these semaphores are new objects and
  will never be released, so the API worker simply hangs.

  This behaviour has been confirmed by observation in the field.
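  The failure mode can be demonstrated with a few lines of Python (POSIX
  only; illustrative, not the plugin code): a lock acquired before fork() is
  copied into the child in the locked state, and no thread in the child can
  ever release it.

      import os
      import threading

      lock = threading.Lock()
      lock.acquire()                      # parent holds it, like the sync thread

      pid = os.fork()
      if pid == 0:                        # child: the forked "API worker"
          got = lock.acquire(timeout=2)   # times out: nobody can release it here
          print('child acquired inherited lock:', got)
          os._exit(0)
      else:
          os.waitpid(pid, 0)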

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420278/+subscriptions



[Yahoo-eng-team] [Bug 1433554] Re: DVR: metadata network not created for NSX-mh

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433554

Title:
  DVR: metadata network not created for NSX-mh

Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  When creating a distributed router, instances attached to it do not
  have metadata access.

  This is happening because the metadata network is not being created
  and connected to the router - since the process for handling metadata
  network has not been updated with the new interface type for DVR
  router ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/juno/+bug/1433554/+subscriptions



[Yahoo-eng-team] [Bug 1426121] Re: vmw nsx: add/remove interface on dvr is broken

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426121

Title:
  vmw nsx: add/remove interface on dvr is broken

Status in neutron:
  Incomplete
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  When the NSX specific extension was dropped in favour of the community
  one, there was a side effect that unfortunately caused add/remove
  interface operations to fail when executed passing a subnet id.

  This should be fixed soon and backported to Juno.
  Icehouse is not affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426121/+subscriptions



[Yahoo-eng-team] [Bug 1462973] Re: Network gateway flat connection fail because of None tenant_id

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462973

Title:
  Network gateway flat connection fail because of None tenant_id

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  The NSX-mh backend does not accept "None" values for tags.
  Tags are applied to all NSX-mh ports; in particular there is always a tag
  with the neutron tenant_id (q_tenant_id).

  _get_tenant_id_for_create now, in an admin context, returns the tenant_id
  of the resource being created, if there is one; otherwise it still returns
  context.tenant_id.
  The default L2 gateway unfortunately does not have a tenant_id, but it does
  have the tenant_id attribute in its data structure.
  This means that _get_tenant_id_for_create will return None, and NSX-mh will
  reject the request.
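  A defensive sketch (illustrative names, not the plugin's actual fix): drop
  tags whose value is None before sending the request to NSX-mh, since the
  backend rejects null tag values.

      def build_nsx_tags(tenant_id, extra=None):
          tags = {'q_tenant_id': tenant_id}
          tags.update(extra or {})
          return {k: v for k, v in tags.items() if v is not None}

      print(build_nsx_tags(None))   # -> {} rather than {'q_tenant_id': None}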

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462973/+subscriptions



[Yahoo-eng-team] [Bug 1474785] Re: NSX-mh: agentless modes are available only for 4.1

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474785

Title:
  NSX-mh: agentless modes are available only for 4.1

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  DHCP and Metadata agentless modes are unfortunately available only in
  NSX-mh 4.1

  The version requirements for enabling the agentless mode should be
  amended

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474785/+subscriptions



[Yahoo-eng-team] [Bug 1462974] Re: Network gateway vlan connection fails because of int conversion

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462974

Title:
  Network gateway vlan connection fails because of int conversion

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  So far there has been an implicit assumption that segmentation_id would be
  an integer. In fact, it is a string value, which was being passed down to
  NSX as-is.

  This means that passing a string value like "xyz" would have triggered a
  backend error rather than a validation error.
  Moreover, the check for validity of the VLAN tag is in the form
  min < tag < max, which does not work unless the tag is converted to an
  integer first.
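  A sketch of the needed conversion (illustrative): coerce the value to int
  before the range check, so bad input becomes a validation error instead of
  a backend error.

      MIN_VLAN_TAG, MAX_VLAN_TAG = 1, 4094

      def validate_vlan_tag(segmentation_id):
          try:
              tag = int(segmentation_id)
          except (TypeError, ValueError):
              raise ValueError('segmentation_id must be an integer, got %r'
                               % (segmentation_id,))
          if not MIN_VLAN_TAG <= tag <= MAX_VLAN_TAG:
              raise ValueError('VLAN tag %d is out of range' % tag)
          return tag

      print(validate_vlan_tag('123'))   # -> 123; 'xyz' raises a ValueError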

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462974/+subscriptions



[Yahoo-eng-team] [Bug 1463363] Re: NSX-mh: Decimal RXTX factor not honoured

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463363

Title:
  NSX-mh: Decimal RXTX factor not honoured

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New
Status in vmware-nsx:
  Fix Released

Bug description:
  A decimal RXTX factor, which is allowed by nova flavors, is not
  honoured by the NSX-mh plugin, but simply truncated to integer.

  To reproduce:

  * Create a neutron queue
  * Create a neutron net / subnet using the queue
  * Create a new flavor which uses an RXTX factor other than an integer value
  * Boot a VM on the net above using the flavor
  * View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)
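  The arithmetic of the bug in two lines (illustrative helper names):

      def scaled_bw_buggy(queue_max_kbps, rxtx_factor):
          return queue_max_kbps * int(rxtx_factor)    # 1.2 -> 1, 3.4 -> 3

      def scaled_bw_fixed(queue_max_kbps, rxtx_factor):
          return int(round(queue_max_kbps * rxtx_factor))

      print(scaled_bw_buggy(1000, 1.2), scaled_bw_fixed(1000, 1.2))  # 1000 1200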

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463363/+subscriptions



[Yahoo-eng-team] [Bug 1485883] Re: NSX-mh: bad retry behaviour on controller connection issues

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485883

Title:
  NSX-mh: bad retry behaviour on controller connection issues

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  If the connection to a NSX-mh controller fails - for instance because
  there is a network issue or the controller is unreachable - the
  neutron plugin keeps retrying the connection to the same controller
  until it times out, whereas a  correct behaviour would be to try to
  connect to the other controllers in the cluster.

  The issue can be reproduced with the following steps:
  1. Three Controllers in the cluster 10.25.56.223,10.25.101.133,10.25.56.222
  2. Neutron net-create dummy-1 from openstack cli
  3. Vnc into controller-1, ifconfig eth0 down
  4. Do neutron net-create dummy-2 from openstack cli

  The API requests were originally forwarded to 10.25.56.223. The eth0
  interface was shut down on 10.25.56.223, but the requests continued to be
  forwarded to the same controller and timed out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485883/+subscriptions



[Yahoo-eng-team] [Bug 1479309] Re: Wrong pre-delete checks for distributed routers

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479309

Title:
  Wrong pre-delete checks for distributed routers

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  The pre-delete checks [1] do not take into account DVR interfaces.
  This means that they will fail to raise an error when deleting a
  router with DVR interfaces on it, thus causing the router to be
  removed from the backend and leaving the system in an inconsistent
  state (as the subsequent db operation will fail)

  
  [1] 
http://git.openstack.org/cgit/openstack/vmware-nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1573

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479309/+subscriptions



[Yahoo-eng-team] [Bug 1405311] Re: Incorrect check for security groups in create_port

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405311

Title:
  Incorrect check for security groups in create_port

Status in neutron:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  The check will fail if security groups in the request body are an
  empty string.

  In http://git.openstack.org/cgit/stackforge/vmware-
  nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1127 the
  code should not raise if security groups are an empty list

  This causes tempest's smoke and full test suites to fail always

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405311/+subscriptions



[Yahoo-eng-team] [Bug 1433553] Re: DVR: remove interface fails on NSX-mh

2020-07-29 Thread Adit Sarfaty
** Changed in: vmware-nsx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433553

Title:
  DVR: remove interface fails on NSX-mh

Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  The DVR mixin, which the MH plugin is now using, assumes that routers
  are deployed on l3 agents, which is not the case for VMware plugins.

  While it is generally wrong that a backend agnostic management layer
  makes assumptions about the backend, the VMware plugins should work
  around this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/juno/+bug/1433553/+subscriptions



[Yahoo-eng-team] [Bug 1835369] [NEW] QoS plugin slows down get_ports operation

2019-07-04 Thread Adit Sarfaty
Public bug reported:

When the QoS plugin is enabled, get_ports for about 10K ports takes over 10
minutes.
Removing _extend_port_resource_request, which was added in
https://review.opendev.org/#/c/590363/,
reduces the time to around 3 minutes.
This was tested with stable/stein.

The "blame" is with the code that retrieves the network object for each
port (unless the port was assigned with a qos policy):
https://opendev.org/openstack/neutron/src/branch/stable/stein/neutron/services/qos/qos_plugin.py#L103

This code should be improved.
Another partial fix would be to skip _extend_port_resource_request when the
driver does not support minimum-bandwidth rules.
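A sketch of the suggested partial fix (hypothetical names, not the actual
qos_plugin code): bail out before the per-port network lookup when no loaded
QoS driver supports minimum-bandwidth rules.

    def extend_port_resource_request(port, get_network, qos_drivers):
        # One cheap capability check instead of one network query per port.
        if not any('minimum_bandwidth' in drv.supported_rules
                   for drv in qos_drivers):
            return port
        network = get_network(port['network_id'])
        # ... existing resource_request computation on port/network ...
        return port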

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1835369

Title:
  QoS plugin slows down get_ports operation

Status in neutron:
  New

Bug description:
  When the QoS plugin is enabled, get_ports for about 10K ports takes over 10
  minutes.
  Removing _extend_port_resource_request, which was added in
  https://review.opendev.org/#/c/590363/,
  reduces the time to around 3 minutes.
  This was tested with stable/stein.

  The "blame" lies with the code that retrieves the network object for
  each port (unless the port was assigned a qos policy):
  https://opendev.org/openstack/neutron/src/branch/stable/stein/neutron/services/qos/qos_plugin.py#L103

  This code should be improved.
  Another partial fix would be to skip _extend_port_resource_request when the
  driver does not support minimum-bandwidth rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1835369/+subscriptions



[Yahoo-eng-team] [Bug 1830679] [NEW] Security groups RBAC cause a major performance degradation

2019-05-28 Thread Adit Sarfaty
Public bug reported:

On stable/Stein & Train, a setup with about 6000 security groups of different 
tenants.
Using admin user, getting all security groups with GET /v2.0/security-groups 
HTTP/1.1 takes about 70 seconds.
Using the credentials of one of the tenants, who has only 1 security groups 
takes about 800 seconds.

Looking at the mysql DB logs reveals lots of RBAC related queries during thoee 
800 seconds.
Tried to revert the RBAC PATCH https://review.opendev.org/#/c/635311/ that is a 
partial fix of https://bugs.launchpad.net/neutron/+bug/1817119 , and it solved 
the issue completely. 
Now it takes less than a seconds to get security groups of a tenant.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1830679

Title:
  Security groups RBAC cause a major performance degradation

Status in neutron:
  New

Bug description:
  On stable/Stein & Train, a setup with about 6000 security groups of different 
tenants.
  Using admin user, getting all security groups with GET /v2.0/security-groups 
HTTP/1.1 takes about 70 seconds.
  Using the credentials of one of the tenants, who has only 1 security groups 
takes about 800 seconds.

  Looking at the mysql DB logs reveals lots of RBAC related queries during 
thoee 800 seconds.
  Tried to revert the RBAC PATCH https://review.opendev.org/#/c/635311/ that is 
a partial fix of https://bugs.launchpad.net/neutron/+bug/1817119 , and it 
solved the issue completely. 
  Now it takes less than a seconds to get security groups of a tenant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1830679/+subscriptions



[Yahoo-eng-team] [Bug 1817455] [NEW] FWaaS V2 removing a port from the FW group set the FWG to INACTIVE

2019-02-24 Thread Adit Sarfaty
Public bug reported:

Creating a firewall group with policies and 2 interface ports.
Now removing 1 of the ports using:
openstack firewall group unset <firewall-group> --port <port>
The firewall group is updated and now has only 1 interface port, but its
status is changed to INACTIVE.

The reason seems to be in update_firewall_group_postcommit:
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/service_drivers/agents/agents.py#L329
last-port is set to True if no new ports are added, instead of being set to
True only if there are no ports left.
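A sketch of the corrected condition (illustrative): "last port" should mean
the group ends up with no ports at all, not that the update added none.

    def is_last_port(updated_firewall_group):
        # Tear-down semantics apply only when the group becomes empty.
        return len(updated_firewall_group['ports']) == 0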

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1817455

Title:
  FWaaS V2 removing a port from the FW group set the FWG to INACTIVE

Status in neutron:
  New

Bug description:
  Creating a firewall group with policies and 2 interface ports.
  Now removing 1 of the ports using:
  openstack firewall group unset <firewall-group> --port <port>
  The firewall group is updated and now has only 1 interface port, but its
  status is changed to INACTIVE.

  The reason seems to be in update_firewall_group_postcommit:
  https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/service_drivers/agents/agents.py#L329
  last-port is set to True if no new ports are added, instead of being set to
  True only if there are no ports left.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1817455/+subscriptions



[Yahoo-eng-team] [Bug 1815424] [NEW] Port gets port security disabled if using --no-security-groups

2019-02-11 Thread Adit Sarfaty
Public bug reported:

When a port is created on a network with port security disabled, by default it
should have port security disabled too.
But when using --no-security-group at creation time, the port is created
without security groups, but with port security enabled.

openstack network show no-ps
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | defaultv3                            |
| created_at                | 2019-02-11T07:58:34Z                 |
| description               |                                      |
| dns_domain                |                                      |
| id                        | 58404ae1-650d-40c0-9ba9-9558f34fe81a |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| location                  | None                                 |
| mtu                       | None                                 |
| name                      | no-ps                                |
| port_security_enabled     | False                                |
| project_id                | 8d4f3035db954f32b320475c1213657c     |
| provider:network_type     | None                                 |
| provider:physical_network | None                                 |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 605cabbe-4064-4e66-8d3d-a5320abdfe2d |
| tags                      |                                      |
| updated_at                | 2019-02-11T07:58:39Z                 |
+---------------------------+--------------------------------------+

openstack port create --network no-ps --no-security-group no-sg
+-----------------------+------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                      |
+-----------------------+------------------------------------------------------------------------------------------------------------+
| admin_state_up        | UP                                                                                                         |
| allowed_address_pairs |                                                                                                            |
| binding_host_id       | None                                                                                                       |
| binding_profile       |                                                                                                            |
| binding_vif_details   | nsx-logical-switch-id='ca492f0f-34c3-4b9a-947c-1c53d651140f', ovs_hybrid_plug='False', port_filter='True' |
| binding_vif_type      | ovs                                                                                                        |
| binding_vnic_type     | normal                                                                                                     |
| created_at            | 2019-02-11T08:55:50Z                                                                                       |
| data_plane_status     | None                                                                                                       |
| description           |                                                                                                            |
| device_id             |                                                                                                            |
| device_owner          |                                                                                                            |
| dns_assignment        | fqdn='host-66-0-0-16.openstacklocal.', hostname='host-66-0-0-16', ip_address='66.0.0.16'                   |
| dns_domain            | None                                                                                                       |
| dns_name              |
[Yahoo-eng-team] [Bug 1792890] [NEW] The user can delete a security group which is used as remote-group-id

2018-09-17 Thread Adit Sarfaty
Public bug reported:

A security group which is used as a remote-group by another security group
rule can be deleted by the user.
This action should be blocked with a "security group in use" error.
In the current state, the rule of the other SG is deleted from the DB (because
of the cascade in the DB table definition).

CLI example:
neutron security-group-create sg1
neutron security-group-create sg2
neutron security-group-rule-create sg1 --remote-group-id sg2
neutron security-group-delete sg2

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1792890

Title:
  The user can delete a security group which is used as remote-group-id

Status in neutron:
  New

Bug description:
  A security group which is used as a remote-group by another security group
  rule can be deleted by the user.
  This action should be blocked with a "security group in use" error.
  In the current state, the rule of the other SG is deleted from the DB
  (because of the cascade in the DB table definition).

  CLI example:
  neutron security-group-create sg1
  neutron security-group-create sg2
  neutron security-group-rule-create sg1 --remote-group-id sg2
  neutron security-group-delete sg2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1792890/+subscriptions



[Yahoo-eng-team] [Bug 1764259] Re: neutron openstack client returns ' Unknown error' instead of the real error

2018-04-16 Thread Adit Sarfaty
You are right. The fix will probably need to be in the
python-openstackclient code.

** Project changed: neutron => python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1764259

Title:
  neutron openstack client returns ' Unknown error' instead of the real
  error

Status in python-openstackclient:
  Incomplete

Bug description:
  For several neutron create actions, when called via the openstack client,
  you do not get the real error issued by the plugin, as you do with the
  neutronclient. Instead you get:
  BadRequestException: Unknown error

  
  For example, try to create a subnet without a cidr:
  1) with the neutron client you see the real error:
  neutron subnet-create --name sub1 net1
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Bad subnets request: a subnetpool must be specified in the absence of a cidr.
  Neutron server returns request_ids: 
['req-8ee84525-6e98-4774-9392-ab8b596cde1a']

  2) with the openstack client the information is missing:
  openstack subnet create --network net1 sub1
  BadRequestException: Unknown error

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1764259/+subscriptions



[Yahoo-eng-team] [Bug 1764259] [NEW] neutron openstack client returns ' Unknown error' instead of the real error

2018-04-15 Thread Adit Sarfaty
Public bug reported:

For several neutron create actions, when called via the openstack client, you
do not get the real error issued by the plugin, as you do with the
neutronclient. Instead you get:
BadRequestException: Unknown error


For example, try to create a subnet without a cidr:
1) with the neutron client you see the real error:
neutron subnet-create --name sub1 net1
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
Bad subnets request: a subnetpool must be specified in the absence of a cidr.
Neutron server returns request_ids: ['req-8ee84525-6e98-4774-9392-ab8b596cde1a']

2) with the openstack client the information is missing:
openstack subnet create --network net1 sub1
BadRequestException: Unknown error

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1764259

Title:
  neutron openstack client returns ' Unknown error' instead of the real
  error

Status in neutron:
  New

Bug description:
  For several neutron create actions, when called via the openstack client,
  you do not get the real error issued by the plugin, as you do with the
  neutronclient. Instead you get:
  BadRequestException: Unknown error

  
  For example, try to create a subnet without a cidr:
  1) with the neutron client you see the real error:
  neutron subnet-create --name sub1 net1
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Bad subnets request: a subnetpool must be specified in the absence of a cidr.
  Neutron server returns request_ids: 
['req-8ee84525-6e98-4774-9392-ab8b596cde1a']

  2) with the openstack client the information is missing:
  openstack subnet create --network net1 sub1
  BadRequestException: Unknown error

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1764259/+subscriptions



[Yahoo-eng-team] [Bug 1746498] [NEW] VPNaaS update connection validation is missing important data

2018-01-31 Thread Adit Sarfaty
Public bug reported:

In VPNaaS, when updating a connection, the plugin calls
driver.validator.validate_ipsec_site_connection with only the updated fields'
data.
This may not be enough for the validation, since the data of the existing
(pre-change) connection is missing.
Any validation that depends on a combination of fields cannot be performed.
Instead, the connection-id or the original connection should be passed as an
argument.
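A sketch of the suggested interface (hypothetical signature, not the actual
VPNaaS API): pass the pre-change connection alongside the updated fields, so
cross-field checks such as "DPD timeout must exceed DPD interval" stay
possible.

    def validate_ipsec_site_connection(updated_fields, original_connection):
        # Validate the effective, post-update values, not just the delta.
        effective = dict(original_connection, **updated_fields)
        dpd = effective.get('dpd', {})
        if dpd.get('timeout', 120) <= dpd.get('interval', 30):
            raise ValueError('DPD timeout must be greater than DPD interval')
        return effective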

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1746498

Title:
  VPNaaS update connection validation is missing important data

Status in neutron:
  New

Bug description:
  In VPNaaS, when updating a connection, the plugin calls
  driver.validator.validate_ipsec_site_connection with only the updated
  fields' data.
  This may not be enough for the validation, since the data of the existing
  (pre-change) connection is missing.
  Any validation that depends on a combination of fields cannot be performed.
  Instead, the connection-id or the original connection should be passed as
  an argument.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1746498/+subscriptions



[Yahoo-eng-team] [Bug 1708081] [NEW] Router view issues an error if extension l3_agent_scheduler is not supported

2017-08-01 Thread Adit Sarfaty
Public bug reported:

Viewing a router from the admin->routers->router view may issue a warning
popup if the l3_agent_scheduler extension is not supported.

list_l3_agent_hosting_router should be called only if supported.
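
A minimal sketch of the guard, assuming Horizon's existing neutron API
helpers (the wrapper function is illustrative; the exact call sites in the
router detail view may differ):

from openstack_dashboard.api import neutron

def get_l3_agents(request, router_id):
    # Only query the scheduler API when the backend advertises the
    # extension; otherwise skip the call instead of raising a popup.
    if neutron.is_extension_supported(request, 'l3_agent_scheduler'):
        return neutron.list_l3_agent_hosting_router(request, router_id)
    return []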

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1708081

Title:
  Router view issues an error if extension l3_agent_scheduler is not
  supported

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Viewing a router from the admin->routers->router view may issue a
  warning popup if the l3_agent_scheduler extension is not supported.

  list_l3_agent_hosting_router should be called only if supported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1708081/+subscriptions



[Yahoo-eng-team] [Bug 1700449] [NEW] Update firewall rule with a protocol fails

2017-06-25 Thread Adit Sarfaty
Public bug reported:

Create a firewall rule via Horizon with the UDP protocol, and later try to
update it to TCP.

The Horizon UI will show this error:
Error: Failed to update rule non shared rule: Invalid input for protocol. 
Reason: TCP is not in valid_values. Neutron server returns request_ids: 
['req-66afa587-cafb-4e6d-8229-8081e8092437']

and the logs:
2017-06-26 05:35:37.708918 
DEBUG:urllib3.connectionpool:http://10.160.90.203:9696 "PUT 
//v2.0/fw/firewall_rules/507f825b-6798-410f-bab7-41c4dcd7d99b HTTP/1.1" 400 136 
   
2017-06-26 05:35:37.709929 ERROR 
openstack_dashboard.dashboards.project.firewalls.forms Failed to update rule 
507f825b-6798-410f-bab7-41c4dcd7d99b: Invalid input for protocol. Reason: TCP 
is not in valid_values.  
2017-06-26 05:35:37.709938 Neutron server returns request_ids: 
['req-66afa587-cafb-4e6d-8229-8081e8092437']

   
2017-06-26 05:35:37.710425 WARNING horizon.exceptions Recoverable error: 
Invalid input for protocol. Reason: TCP is not in valid_values. 


The reason is that the protocol should be sent to neutron in lower case.
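
A hypothetical sketch of the fix in the form-submit path, lower-casing the
protocol before the update call (the function name is illustrative):

def normalize_rule_protocol(rule_data):
    # Neutron validates against lower-case values ('tcp', 'udp', 'icmp'),
    # so normalize whatever the form sends before issuing the PUT.
    protocol = rule_data.get('protocol')
    if protocol:
        rule_data['protocol'] = protocol.lower()
    return rule_data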

** Affects: horizon
 Importance: Undecided
 Assignee: Adit Sarfaty (asarfaty)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Adit Sarfaty (asarfaty)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1700449

Title:
  Update firewall rule with a protocol fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Create a firewall rule via Horizon with the UDP protocol, and later try to
  update it to TCP.

  The Horizon UI will show this error:
  Error: Failed to update rule non shared rule: Invalid input for protocol. 
Reason: TCP is not in valid_values. Neutron server returns request_ids: 
['req-66afa587-cafb-4e6d-8229-8081e8092437']

  and the logs:
  2017-06-26 05:35:37.708918 
DEBUG:urllib3.connectionpool:http://10.160.90.203:9696 "PUT 
//v2.0/fw/firewall_rules/507f825b-6798-410f-bab7-41c4dcd7d99b HTTP/1.1" 400 136 
   
  2017-06-26 05:35:37.709929 ERROR 
openstack_dashboard.dashboards.project.firewalls.forms Failed to update rule 
507f825b-6798-410f-bab7-41c4dcd7d99b: Invalid input for protocol. Reason: TCP 
is not in valid_values.  
  2017-06-26 05:35:37.709938 Neutron server returns request_ids: 
['req-66afa587-cafb-4e6d-8229-8081e8092437']

   
  2017-06-26 05:35:37.710425 WARNING horizon.exceptions Recoverable error: 
Invalid input for protocol. Reason: TCP is not in valid_values. 


  The reason is that the protocol should be sent to neutron in lower
  case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1700449/+subscriptions



[Yahoo-eng-team] [Bug 1566661] [NEW] Prevent adding static routes to shared routers

2016-04-06 Thread Adit Sarfaty
Public bug reported:

Currently we cannot fully support static routes on shared routers,
so we should fail update_router when static routes are added,
and fail router-type migration to shared if the router already has static
routes.
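
An illustrative sketch of the two checks; the 'router_type' key and the
function name follow the vmware-nsx router-type extension and are assumptions
here, not the actual plugin code:

from neutron_lib import exceptions as n_exc

def validate_shared_router_routes(router, updates):
    # Reject new static routes on a shared router, and reject migrating
    # a router that already has static routes to the shared type.
    is_shared = router.get('router_type') == 'shared'
    becomes_shared = updates.get('router_type') == 'shared'
    if is_shared and updates.get('routes'):
        raise n_exc.InvalidInput(
            error_message="Static routes are not supported on shared "
                          "routers")
    if becomes_shared and router.get('routes'):
        raise n_exc.InvalidInput(
            error_message="Cannot migrate a router with static routes to "
                          "the shared type")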

** Affects: neutron
 Importance: Undecided
 Assignee: Adit Sarfaty (asarfaty)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Adit Sarfaty (asarfaty)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566661

Title:
  Prevent adding static routes to shared routers

Status in neutron:
  In Progress

Bug description:
  Currently we cannot fully support static routes on shared routers,
  so we should fail update_router when static routes are added,
  and fail router-type migration to shared if the router already has static
  routes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566661/+subscriptions



[Yahoo-eng-team] [Bug 1556836] [NEW] QOS: TestQosPlugin should receive plugin name as an input

2016-03-14 Thread Adit Sarfaty
Public bug reported:

The neutron QoS plugin test class 'TestQosPlugin' uses the hard-coded name
'qos' as the plugin name (an alias, in this case).
In order to inherit from this test class and use it in the vmware-nsx
integration, we need to set the plugin name dynamically, as a parameter of the
test setup function or in a separate internal method of this class.
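
One possible shape for the change, as a hedged sketch: expose the alias
through an overridable method so subclasses can substitute their own plugin.
The base class and the setUp flow here only approximate the real neutron test
code:

from oslo_config import cfg

class TestQosPlugin(object):  # stands in for the real neutron test base

    def _get_qos_plugin_name(self):
        # Overridable hook: subclasses (e.g. in vmware-nsx) return
        # their own service plugin alias here.
        return 'qos'

    def setUp(self):
        cfg.CONF.set_override(
            'service_plugins', [self._get_qos_plugin_name()])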

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556836

Title:
  QOS: TestQosPlugin should receive plugin name as an input

Status in neutron:
  New

Bug description:
  The neutron QoS plugin test class 'TestQosPlugin' uses the hard-coded name
'qos' as the plugin name (an alias, in this case).
  In order to inherit from this test class and use it in the vmware-nsx
integration, we need to set the plugin name dynamically, as a parameter of the
test setup function or in a separate internal method of this class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1556836/+subscriptions



[Yahoo-eng-team] [Bug 1556812] [NEW] QoS: missing getter functions for policy network/port binding, needed for vmware-nsx integration

2016-03-14 Thread Adit Sarfaty
Public bug reported:

The neutron QoS API includes create & delete DB bindings between a QoS policy
id and a network or port id.
But currently there are no getters for the same bindings.

For the vmware-nsx integration with QoS, we will need the getters too,
so that whenever a policy (or one of its rules) changes, we can find the
related networks.
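
An illustrative sketch of the missing getters, mirroring the existing binding
create/delete helpers; the model names follow neutron's QoS DB models, but the
module layout and function names are assumptions:

from neutron.db.qos import models as qos_models

def get_network_ids_by_policy(session, policy_id):
    # Networks bound to the given QoS policy.
    bindings = session.query(qos_models.QosNetworkPolicyBinding).filter_by(
        policy_id=policy_id)
    return [b.network_id for b in bindings]

def get_port_ids_by_policy(session, policy_id):
    # Ports bound to the given QoS policy.
    bindings = session.query(qos_models.QosPortPolicyBinding).filter_by(
        policy_id=policy_id)
    return [b.port_id for b in bindings]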

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556812

Title:
  QoS: missing getter functions for policy network/port binding, needed
  for vmware-nsx integration

Status in neutron:
  New

Bug description:
  The neutron QoS API includes create & delete DB bindings between a QoS
policy id and a network or port id.
  But currently there are no getters for the same bindings.

  For the vmware-nsx integration with QoS, we will need the getters too,
  so that whenever a policy (or one of its rules) changes, we can find the
  related networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1556812/+subscriptions
