[Yahoo-eng-team] [Bug 1807673] Re: Networking (neutron) concepts in neutron

2019-02-11 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1807673

Title:
  Networking (neutron) concepts in neutron

Status in neutron:
  Expired

Bug description:

  This bug tracker is for errors with the documentation. Use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: What is meant by "networking 
facets" in the first paragraph?
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 13.1.0.dev473 on 2017-06-30 05:58:47
  SHA: 396d5b4f84acc467b3e78347437cbc351e9c91f3
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/concepts.rst
  URL: https://docs.openstack.org/neutron/latest/install/concepts.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1807673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815539] [NEW] Self-service policies for credential APIs are broken in stable/rocky

2019-02-11 Thread Guang Yee
Public bug reported:

Self-service policies for credential APIs are broken in stable/rocky.
More specifically, Get/Update/Delete no longer work with the following
policies.

"identity:get_credential": "rule:admin_required or 
user_id:%(target.credential.user_id)s"
"identity:update_credential": "rule:admin_required or 
user_id:%(target.credential.user_id)s"
"identity:delete_credential": "rule:admin_required or 
user_id:%(target.credential.user_id)s"

This used to work in Pike and Queens because we pass the entity to
policy enforcement via get_member_from_driver.

https://github.com/openstack/keystone/blob/stable/queens/keystone/credential/controllers.py#L36

However, in stable/rocky we no longer pass the entity as part of the
target.

https://github.com/openstack/keystone/blob/stable/rocky/keystone/api/credentials.py#L86

Therefore, any policy rule which has target.credential.* no longer
works.

Stein seems to be working again as the problem was fixed as part of
https://bugs.launchpad.net/keystone/+bug/1788415.

We'll need to fix stable/rocky by conveying the credential entity to the
target again.
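A minimal, self-contained sketch (not keystone code) of why the credential entity has to be present in the enforcement target for a rule like `user_id:%(target.credential.user_id)s` to pass for a non-admin user:

```python
# Sketch: emulate how a "user_id:%(target.credential.user_id)s" policy rule
# resolves against the enforcement target. This is illustrative only; the
# real check is performed by oslo.policy inside keystone.

def enforce_get_credential(target, creds):
    """Return True if the requesting user owns the target credential."""
    try:
        expected = target['target']['credential']['user_id']
    except KeyError:
        # stable/rocky behaviour: the entity is not in the target, so the
        # rule can never match and the self-service request is denied.
        return False
    return creds['user_id'] == expected

creds = {'user_id': 'alice'}

# Pike/Queens: get_member_from_driver loaded the entity into the target.
target_with_entity = {'target': {'credential': {'user_id': 'alice'}}}
# stable/rocky: the entity is no longer passed as part of the target.
target_without_entity = {'target': {}}

print(enforce_get_credential(target_with_entity, creds))     # True
print(enforce_get_credential(target_without_entity, creds))  # False
```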

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1815539

Title:
  Self-service policies for credential APIs are broken in stable/rocky

Status in OpenStack Identity (keystone):
  New

Bug description:
  Self-service policies for credential APIs are broken in
  stable/rocky. More specifically, Get/Update/Delete no longer work
  with the following policies.

  "identity:get_credential": "rule:admin_required or 
user_id:%(target.credential.user_id)s"
  "identity:update_credential": "rule:admin_required or 
user_id:%(target.credential.user_id)s"
  "identity:delete_credential": "rule:admin_required or 
user_id:%(target.credential.user_id)s"

  This used to work in Pike and Queens because we pass the entity to
  policy enforcement via get_member_from_driver.

  
https://github.com/openstack/keystone/blob/stable/queens/keystone/credential/controllers.py#L36

  However, in stable/rocky we no longer pass the entity as part of the
  target.

  
https://github.com/openstack/keystone/blob/stable/rocky/keystone/api/credentials.py#L86

  Therefore, any policy rule which has target.credential.* no longer
  works.

  Stein seems to be working again as the problem was fixed as part of
  https://bugs.launchpad.net/keystone/+bug/1788415.

  We'll need to fix stable/rocky by conveying the credential entity to
  the target again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1815539/+subscriptions



[Yahoo-eng-team] [Bug 1815498] [NEW] Use pyroute2 to check vlan/vxlan in use

2019-02-11 Thread Rodolfo Alonso
Public bug reported:

Now that the ip_lib.get_devices_info function is implemented using pyroute2,
"vlan_in_use" and "vxlan_in_use" can make use of it.
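An illustrative sketch of how the two checks could be expressed on top of a get_devices_info-style device listing (the `devices` data below is fabricated; the real function queries the kernel via pyroute2):

```python
# Sketch: express vlan_in_use/vxlan_in_use as simple scans over the device
# info returned by a get_devices_info-style helper. Each entry is assumed to
# carry the link 'kind' and its segmentation id.

def vlan_in_use(devices, segmentation_id):
    return any(d.get('kind') == 'vlan' and d.get('vlan_id') == segmentation_id
               for d in devices)

def vxlan_in_use(devices, segmentation_id):
    return any(d.get('kind') == 'vxlan' and d.get('vxlan_id') == segmentation_id
               for d in devices)

# Fabricated example data standing in for the pyroute2-backed listing.
devices = [
    {'name': 'eth0.100', 'kind': 'vlan', 'vlan_id': 100},
    {'name': 'vxlan-200', 'kind': 'vxlan', 'vxlan_id': 200},
]
print(vlan_in_use(devices, 100))   # True
print(vxlan_in_use(devices, 201))  # False
```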

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815498

Title:
  Use pyroute2 to check vlan/vxlan in use

Status in neutron:
  New

Bug description:
  Now that the ip_lib.get_devices_info function is implemented using pyroute2,
  "vlan_in_use" and "vxlan_in_use" can make use of it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815498/+subscriptions



[Yahoo-eng-team] [Bug 1815476] [NEW] Flavor attribute 'swap' returns unicode'' instead of int 0

2019-02-11 Thread Jelle Leempoels
Public bug reported:

Description
===
When a flavor is created in Horizon with 'Swap Disk (MB)' set to 0,
the Nova Python API returns unicode '' for the flavor.swap attribute.

When the swap disk is changed to 10, the API returns int 10 instead of
unicode.

Steps to reproduce
==
- In Horizon, create a new flavor with swap disk 0
- Connect to the Nova Python API

fl = nova.flavors.find(name="your flavor")
print fl.swap
print(type(fl.swap))

output:

<type 'unicode'>
Expected result
===
int 0

Actual result
=
unicode ''
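Until the API behaviour is fixed, a defensive client-side normalization can be sketched as follows (`normalize_swap` is a hypothetical helper, not part of novaclient):

```python
# Sketch: normalize the 'swap' attribute, which the API may return as u''
# (when the flavor was created with swap 0) or as an int.

def normalize_swap(value):
    """Return the swap size in MB as an int; treat '' and None as 0."""
    if value in ('', None):
        return 0
    return int(value)

print(normalize_swap(u''))   # 0  (the buggy unicode '' case)
print(normalize_swap(10))    # 10
print(normalize_swap('10'))  # 10
```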


Environment
===


ubuntu@juju-5dc387-0-lxd-6:~$ nova-manage --version
15.1.5

ubuntu@juju-5dc387-0-lxd-6:~$ dpkg -l | grep nova
ii  nova-api-os-compute  2:15.1.5-0ubuntu1~cloud0   
all  OpenStack Compute - OpenStack Compute API frontend
ii  nova-common  2:15.1.5-0ubuntu1~cloud0   
all  OpenStack Compute - common files
ii  nova-conductor   2:15.1.5-0ubuntu1~cloud0   
all  OpenStack Compute - conductor service
ii  nova-consoleauth 2:15.1.5-0ubuntu1~cloud0   
all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy  2:15.1.5-0ubuntu1~cloud0   
all  OpenStack Compute - NoVNC proxy
ii  nova-placement-api   2:15.1.5-0ubuntu1~cloud0   
all  OpenStack Compute - placement API frontend
ii  nova-scheduler   2:15.1.5-0ubuntu1~cloud0   
all  OpenStack Compute - virtual machine scheduler
ii  python-nova  2:15.1.5-0ubuntu1~cloud0   
all  OpenStack Compute Python libraries


xx@xx-dev:~/Documents$ openstack --version
openstack 3.14.2

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- Flavor attribute returns unicode'' when 0
+ Flavor attribute 'swap' returns unicode'' when 0 instead of int

** Summary changed:

- Flavor attribute 'swap' returns unicode'' when 0 instead of int
+ Flavor attribute 'swap' returns unicode'' instead of int 0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815476

Title:
  Flavor attribute 'swap' returns unicode'' instead of int 0

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When a flavor is created in Horizon with 'Swap Disk (MB)' set to 0,
  the Nova Python API returns unicode '' for the flavor.swap attribute.

  When the swap disk is changed to 10, the API returns int 10 instead
  of unicode.

  Steps to reproduce
  ==
  - In Horizon, create a new flavor with swap disk 0
  - Connect to the Nova Python API

  fl = nova.flavors.find(name="your flavor")
  print fl.swap
  print(type(fl.swap))

  output:

  <type 'unicode'>
  Expected result
  ===
  int 0

  Actual result
  =
  unicode ''

  
  Environment
  ===

  
  ubuntu@juju-5dc387-0-lxd-6:~$ nova-manage --version
  15.1.5

  ubuntu@juju-5dc387-0-lxd-6:~$ dpkg -l | grep nova
  ii  nova-api-os-compute  2:15.1.5-0ubuntu1~cloud0 
  all  OpenStack Compute - OpenStack Compute API frontend
  ii  nova-common  2:15.1.5-0ubuntu1~cloud0 
  all  OpenStack Compute - common files
  ii  nova-conductor   2:15.1.5-0ubuntu1~cloud0 
  all  OpenStack Compute - conductor service
  ii  nova-consoleauth 2:15.1.5-0ubuntu1~cloud0 
  all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy  2:15.1.5-0ubuntu1~cloud0 
  all  OpenStack Compute - NoVNC proxy
  ii  nova-placement-api   2:15.1.5-0ubuntu1~cloud0 
  all  OpenStack Compute - placement API frontend
  ii  nova-scheduler   2:15.1.5-0ubuntu1~cloud0 
  all  OpenStack Compute - virtual machine scheduler
  ii  python-nova  2:15.1.5-0ubuntu1~cloud0 
  all  OpenStack Compute Python libraries

  
  xx@xx-dev:~/Documents$ openstack --version
  openstack 3.14.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815476/+subscriptions



[Yahoo-eng-team] [Bug 1815478] [NEW] Error message 'Invalid protocol %(protocol)s for port range, only ...' is difficult to understand

2019-02-11 Thread Andreas Karis
Public bug reported:

Error message 'Invalid protocol %(protocol)s for port range, only ...' is 
difficult to understand.
~~~
 43 class SecurityGroupInvalidProtocolForPortRange(exceptions.InvalidInput):
 44 message = _("Invalid protocol %(protocol)s for port range, only "
 45 "supported for TCP, UDP, UDPLITE, SCTP and DCCP.")
~~~

Thinking about it logically, the port range is invalid for the protocol,
not the other way around.

The error message would be better as:
~~~
Invalid port range specified for protocol %(protocol)s. Do not specify a port 
range. Port ranges are only supported for TCP, UDP, UDPLITE, SCTP and DCCP.
~~~
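A minimal stand-in (a simplified sketch, not neutron's actual exception plumbing) showing how the proposed wording would render:

```python
# Sketch: a simplified exception class carrying the proposed error message.
# The class name and base class here mirror the report but are stand-ins.

class InvalidInput(Exception):
    pass

class SecurityGroupInvalidPortRangeForProtocol(InvalidInput):
    message = ("Invalid port range specified for protocol %(protocol)s. "
               "Do not specify a port range. Port ranges are only supported "
               "for TCP, UDP, UDPLITE, SCTP and DCCP.")

    def __init__(self, **kwargs):
        super().__init__(self.message % kwargs)

try:
    raise SecurityGroupInvalidPortRangeForProtocol(protocol='icmp')
except InvalidInput as e:
    print(e)
```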

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815478

Title:
  Error message 'Invalid protocol %(protocol)s for port range, only ...'
  is difficult to understand

Status in neutron:
  New

Bug description:
  Error message 'Invalid protocol %(protocol)s for port range, only ...' is 
difficult to understand.
  ~~~
   43 class SecurityGroupInvalidProtocolForPortRange(exceptions.InvalidInput):
   44 message = _("Invalid protocol %(protocol)s for port range, only "
   45 "supported for TCP, UDP, UDPLITE, SCTP and DCCP.")
  ~~~

  Thinking about it logically, the port range is invalid for the
  protocol, not the other way around.

  The error message would be better as:
  ~~~
  Invalid port range specified for protocol %(protocol)s. Do not specify a port 
range. Port ranges are only supported for TCP, UDP, UDPLITE, SCTP and DCCP.
  ~~~

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815478/+subscriptions



[Yahoo-eng-team] [Bug 1812969] Re: catch VolumeAttachmentNotFound when attach failed

2019-02-11 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => Low

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/queens
   Importance: Undecided => Low

** Changed in: nova/rocky
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1812969

Title:
  catch VolumeAttachmentNotFound when attach failed

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  [stack@localhost devstack]$ cinder type-create encrypt-NoOpEncryptor
  
+--+---+-+---+
  | ID | Name | Description | Is_Public |
  
+--+---+-+---+
  | 054fad16-6b6b-4426-ac9b-af63bc43d113 | encrypt-NoOpEncryptor | - | True |
  
+--+---+-+---+
  [stack@localhost devstack]$ cinder encryption-type-create 
054fad16-6b6b-4426-ac9b-af63bc43d113 --control-location front-end NoOpEncryptor
  
+--+---++--+--+
  | Volume Type ID | Provider | Cipher | Key Size | Control Location |
  
+--+---++--+--+
  | 054fad16-6b6b-4426-ac9b-af63bc43d113 | NoOpEncryptor | - | - | front-end |
  
+--+---++--+--+
  [stack@localhost devstack]$ cinder create --volume-type encrypt-NoOpEncryptor 
--name yenai 1
  ++--+
  | Property | Value |
  ++--+
  | attachments | [] |
  | availability_zone | nova |
  | bootable | false |
  | consistencygroup_id | None |
  | created_at | 2019-01-12T01:36:36.00 |
  | description | None |
  | encrypted | True |
  | id | 247bd974-3cb5-44a5-9b7d-599ca2fc9bda |
  | metadata | {} |
  | migration_status | None |
  | multiattach | False |
  | name | yenai |
  | os-vol-host-attr:host | None |
  | os-vol-mig-status-attr:migstat | None |
  | os-vol-mig-status-attr:name_id | None |
  | os-vol-tenant-attr:tenant_id | 9b881cf6b92049b8a155178274662836 |
  | replication_status | None |
  | size | 1 |
  | snapshot_id | None |
  | source_volid | None |
  | status | creating |
  | updated_at | None |
  | user_id | 657205033877433db3dd8dadb212d9fb |
  | volume_type | encrypt-NoOpEncryptor |
  ++--+
  [stack@localhost libvirt]$ cinder list
  
+--+---+---+--+---+--+--+
  | ID | Status | Name | Size | Volume Type | Bootable | Attached to |
  
+--+---+---+--+---+--+--+
  | 247bd974-3cb5-44a5-9b7d-599ca2fc9bda | available | yenai | 1 | 
encrypt-NoOpEncryptor | false | |
  | f1c15752-c247-470b-9a5f-1f5559f51ce7 | in-use | yenai | 1 | encrypt-luks | 
false | 6ce2254a-0e05-4e63-874e-44d112bed7d0 |
  
+--+---+---+--+---+--+--+
  [[stack@localhost devstack]$ nova volume-attach 
6ce2254a-0e05-4e63-874e-44d112bed7d0 247bd974-3cb5-44a5-9b7d-599ca2fc9bda
  +--+--+
  | Property | Value |
  +--+--+
  | device | /dev/vdc |
  | id | 247bd974-3cb5-44a5-9b7d-599ca2fc9bda |
  | serverId | 6ce2254a-0e05-4e63-874e-44d112bed7d0 |
  | volumeId | 247bd974-3cb5-44a5-9b7d-599ca2fc9bda |
  +--+--+
  [stack@localhost devstack]$
  Jan 12 13:58:48 localhost.localdomain nova-compute[9]: ERROR 
nova.virt.block_device [None req-c7c9bfab-75a6-41b8-a677-d00aac80b11b admin 
admin] [instance: 6ce2254a-0e05-4e63-874e-44d112bed7d0] Driver failed to attach 
volume 247bd974-3cb5-44a5-9b7d-599ca2fc9bda at /dev/vdc: TypeError: 
attach_volume() got an unexpected keyword argument 'cipher'
  Jan 12 13:58:48 localhost.localdomain nova-compute[9]: ERROR 
nova.virt.block_device [instance: 6ce2254a-0e05-4e63-874e-44d112bed7d0] 
Traceback (most recent call last):
  Jan 12 13:58:48 localhost.localdomain nova-compute[9]: ERROR 
nova.virt.block_device [instance: 6ce2254a-0e05-

[Yahoo-eng-team] [Bug 1815466] [NEW] Live migrate abort results in instance going into an error state

2019-02-11 Thread Gary Kotton
Public bug reported:

Only the libvirt driver supports live migration abort (and even that may
have some caveats if the action results in an exception). If the call takes
place with a driver that does not support the operation, then the following
stack trace appears:

nova live-migration-abort doesn’t work.

It failed with the exception "NotImplementedError".


Steps:
1) Do a live migration (make sure the vmkernel adaptors have vMotion enabled)
2) While the live migration from one cluster to another is in progress, 
trigger the command nova live-migration-abort.
3) You will see the nova instance go into the ERROR state.

nova-compute error logs:

2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
[req-8cca230a-2ef0-4af4-9172-2259a38496ba fbb20822241440e7acac310fdc238280 
bf2edf2a7ddc48a299f183b8da055c46 - d5b840b9e52c43d3ad9e208e22ad531f 
d5b840b9e52c43d3ad9e208e22ad531f] Exception during message handling: 
NotImplementedError
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in 
_process_incoming
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, 
in dispatch
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, 
in _do_dispatch
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 76, in 
wrapped
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server function_name, 
call_dict, binary)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 67, in 
wrapped
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server return f(self, 
context, *args, **kw)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 977, in 
decorated_function
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 214, in 
decorated_function
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
kwargs['instance'], e, sys.exc_info())
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 202, in 
decorated_function
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6304, in 
live_migration_abort
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
self.driver.live_migration_abort(instance)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 943, in 
live_migration_abort
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server raise 
NotImplementedError()
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
NotImplementedError
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server

This results in the instance being put into an ERROR state (although it
clearly is not in error).
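One possible fix shape, sketched with stand-in classes (`MigrationCancelNotSupported` is a hypothetical exception name; this is not nova code): translate the driver's NotImplementedError into a clean "not supported" error instead of letting it bubble up and flip the instance into ERROR.

```python
# Sketch: catch NotImplementedError from the driver and surface a
# user-facing "not supported" error, leaving the instance state untouched.

class MigrationCancelNotSupported(Exception):
    pass

class FakeDriver:
    """Stand-in for a virt driver without live migration abort support."""
    def live_migration_abort(self, instance):
        raise NotImplementedError()

def live_migration_abort(driver, instance):
    try:
        driver.live_migration_abort(instance)
    except NotImplementedError:
        # Do not reset the instance to ERROR; report the missing capability.
        raise MigrationCancelNotSupported(
            "this driver does not support aborting a live migration")

try:
    live_migration_abort(FakeDriver(), instance="vm-1")
except MigrationCancelNotSupported as e:
    print(e)
```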

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

[Yahoo-eng-team] [Bug 1815463] [NEW] [dev] Agent RPC version does not auto upgrade if neutron-server restart first

2019-02-11 Thread LIU Yulong
Public bug reported:

For instance:
This neutron server was restarted 6 seconds earlier than the l3-agent during 
an RPC version upgrade:
http://logs.openstack.org/71/633871/5/check/neutron-grenade-dvr-multinode/3fcd71c/logs/screen-q-svc.txt.gz#_Feb_10_06_32_10_279268
http://logs.openstack.org/71/633871/5/check/neutron-grenade-dvr-multinode/3fcd71c/logs/screen-q-l3.txt.gz#_Feb_10_06_32_16_750133

And then, we hit many UnsupportedVersion exceptions:
http://logs.openstack.org/71/633871/5/check/neutron-grenade-dvr-multinode/3fcd71c/logs/screen-q-svc.txt.gz#_Feb_10_06_38_05_749257

This issue can also be seen in our local dev branch. A workaround is to
restart the agent first, and then the neutron server.
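A generic illustration (not neutron or oslo.messaging code) of the kind of fallback that would avoid the failure: retry the call with the older RPC version when the peer reports an unsupported version.

```python
# Sketch: a caller that downgrades the requested RPC version on
# UnsupportedVersion instead of failing permanently. All names here are
# stand-ins for illustration.

class UnsupportedVersion(Exception):
    pass

class Server:
    supported = {'1.0', '1.1'}  # the not-yet-restarted agent's versions

    def call(self, method, version):
        if version not in self.supported:
            raise UnsupportedVersion(version)
        return '%s@%s' % (method, version)

def call_with_fallback(server, method, preferred, fallback):
    try:
        return server.call(method, preferred)
    except UnsupportedVersion:
        return server.call(method, fallback)

server = Server()
print(call_with_fallback(server, 'sync_routers', preferred='1.2', fallback='1.1'))
# sync_routers@1.1
```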

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815463

Title:
  [dev] Agent RPC version does not auto upgrade if neutron-server
  restart first

Status in neutron:
  New

Bug description:
  For instance:
  This neutron server was restarted 6 seconds earlier than the l3-agent 
during an RPC version upgrade:
  
http://logs.openstack.org/71/633871/5/check/neutron-grenade-dvr-multinode/3fcd71c/logs/screen-q-svc.txt.gz#_Feb_10_06_32_10_279268
  
http://logs.openstack.org/71/633871/5/check/neutron-grenade-dvr-multinode/3fcd71c/logs/screen-q-l3.txt.gz#_Feb_10_06_32_16_750133

  And then, we hit many UnsupportedVersion exceptions:
  
http://logs.openstack.org/71/633871/5/check/neutron-grenade-dvr-multinode/3fcd71c/logs/screen-q-svc.txt.gz#_Feb_10_06_38_05_749257

  This issue can also be seen in our local dev branch. A workaround is to
  restart the agent first, and then the neutron server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815463/+subscriptions



[Yahoo-eng-team] [Bug 1815461] [NEW] create port in shared network from tenant fail because horizon add wrong SecurityGroup

2019-02-11 Thread SharonBarak
Public bug reported:

[centos-binary-horizon:rocky-latest]

If a tenant creates a port in a shared network, the port fails to be
created because Horizon takes the security group of the project that owns
the source network (admin in my case, a vlan net ...)

neutron api log:
2019-02-11 10:17:28.492 33 DEBUG neutron.api.v2.base 
[req-14fa3687-65ea-4f0e-880c-6fc5336a93ca 640f75a14d77430a9230d720db90046e 
2c7927cda1614d7a924614b0c310ab6f - default default] Request body: {u'port': 
{u'name': u'd', u'admin_state_up'
: True, u'network_id': u'0c0b01f3-f73f-4b2f-95ee-6c3e8b93ebd9', u'tenant_id': 
u'2c7927cda1614d7a924614b0c310ab6f', u'binding:vnic_type': u'normal', 
u'device_owner': u'', u'port_security_enabled': True, u'security_groups': 
[u'49eed7e4-600b-457b-a367-5d1ec20faad6'], u'device_id': u''}} 
prepare_request_body /usr/lib/python2.7/site-packages/neutron/api/v2/base.py:716

req:
sg -> 49eed7e4-600b-457b-a367-5d1ec20faad6
tenant -> 2c7927cda1614d7a924614b0c310ab6f

ID   Name Project
1deb753d-dbad-4668-8c4e-72096e43673e smoketest
8d54453c9c82423b9f173997be5fcd54
1ee7e92d-7330-4b4d-b2f8-68f7d936418c CloudBand-Security-Group-DU1 
750b1cc920354372b2b6149abec1a9f9
3f63ca70-54ab-4723-ae7b-63449ebccb2e default  
5811183c896242dbaabd9504b2de14a1
49eed7e4-600b-457b-a367-5d1ec20faad6 default  
8d54453c9c82423b9f173997be5fcd54
5acdac84-8ff7-49de-9372-3113a7ee3f2a default  
29d066aff3614837892b45e658615d25
739c5f71-dfb2-48cb-9e0a-d364e5d0a2cd default  
2c7927cda1614d7a924614b0c310ab6f
74f6eef0-2baa-40c6-b34c-487d2478153c CloudBand-Security-Group-BH1 
2c7927cda1614d7a924614b0c310ab6f
e2193e72-c721-4bc6-895a-0828e9596673 default  
750b1cc920354372b2b6149abec1a9f9

As you can see above, the request combines a security group that does not
belong to the requesting tenant.

Later on it fails:
2019-02-11 10:47:13.918 38 INFO neutron.pecan_wsgi.hooks.translation 
[req-ac726455-d25c-497b-9d11-388d794760f8 640f75a14d77430a9230d720db90046e 
2c7927cda1614d7a924614b0c310ab6f - default default] POST failed (client error): 
The resource could not be found.
2019-02-11 10:47:13.918 38 DEBUG neutron.pecan_wsgi.hooks.notifier 
[req-ac726455-d25c-497b-9d11-388d794760f8 640f75a14d77430a9230d720db90046e 
2c7927cda1614d7a924614b0c310ab6f - default default] No notification will be 
sent due to unsuccessful status code: 404 after 
/usr/lib/python2.7/site-packages/neutron/pecan_wsgi/hooks/notifier.py:79
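A minimal sketch of the expected selection logic (a hypothetical helper, not Horizon code), using shortened IDs from the table above: the default security group attached to a new port should come from the requesting tenant, not from the tenant that owns the shared network.

```python
# Sketch: pick the 'default' security group belonging to the requesting
# tenant. IDs below are shortened versions of those in the bug report.

def pick_default_security_group(security_groups, request_tenant_id):
    for sg in security_groups:
        if sg['name'] == 'default' and sg['tenant_id'] == request_tenant_id:
            return sg['id']
    return None

sgs = [
    {'id': '49eed7e4', 'name': 'default', 'tenant_id': '8d54453c'},  # network owner
    {'id': '739c5f71', 'name': 'default', 'tenant_id': '2c7927cd'},  # requester
]
print(pick_default_security_group(sgs, '2c7927cd'))  # 739c5f71
```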

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

+ [centos-binary-horizon:rocky-latest]
+ 
  if as tenant creating port in shared network , the ports fail to be
  created because horizon take the security group of the source network (
  admin in my case , vlan net ... )
  
  neutron api log:
  2019-02-11 10:17:28.492 33 DEBUG neutron.api.v2.base 
[req-14fa3687-65ea-4f0e-880c-6fc5336a93ca 640f75a14d77430a9230d720db90046e 
2c7927cda1614d7a924614b0c310ab6f - default default] Request body: {u'port': 
{u'name': u'd', u'admin_state_up'
  : True, u'network_id': u'0c0b01f3-f73f-4b2f-95ee-6c3e8b93ebd9', u'tenant_id': 
u'2c7927cda1614d7a924614b0c310ab6f', u'binding:vnic_type': u'normal', 
u'device_owner': u'', u'port_security_enabled': True, u'security_groups': 
[u'49eed7e4-600b-457b-a367-5d1ec20faad6'], u'device_id': u''}} 
prepare_request_body /usr/lib/python2.7/site-packages/neutron/api/v2/base.py:716
  
  req:
  sg -> 49eed7e4-600b-457b-a367-5d1ec20faad6
  tenant -> 2c7927cda1614d7a924614b0c310ab6f
  
  ID   Name Project
  1deb753d-dbad-4668-8c4e-72096e43673e smoketest
8d54453c9c82423b9f173997be5fcd54
  1ee7e92d-7330-4b4d-b2f8-68f7d936418c CloudBand-Security-Group-DU1 
750b1cc920354372b2b6149abec1a9f9
  3f63ca70-54ab-4723-ae7b-63449ebccb2e default  
5811183c896242dbaabd9504b2de14a1
  49eed7e4-600b-457b-a367-5d1ec20faad6 default  
8d54453c9c82423b9f173997be5fcd54
  5acdac84-8ff7-49de-9372-3113a7ee3f2a default  
29d066aff3614837892b45e658615d25
  739c5f71-dfb2-48cb-9e0a-d364e5d0a2cd default  
2c7927cda1614d7a924614b0c310ab6f
  74f6eef0-2baa-40c6-b34c-487d2478153c CloudBand-Security-Group-BH1 
2c7927cda1614d7a924614b0c310ab6f
  e2193e72-c721-4bc6-895a-0828e9596673 default  
750b1cc920354372b2b6149abec1a9f9
  
  as you can see above the request combine sg which not belong to existing
  tenant .
  
  later on we its fail ...
  2019-02-11 10:47:13.918 38 INFO neutron.pecan_wsgi.hooks.translation 
[req-ac726455-d25c-497b-9d11-388d794760f8 640f75a14d77430a9230d720db90046e 
2c7927cda1614d7a924614b0c310ab6f - default default] POST failed (client error): 
The resource could not be found.
  2019-02-11 10:47:13.918 38 DEBUG neutron.pecan_wsgi.hooks.notifier 
[req-ac726455-d25c-497b-9d11-388d794760f8 640f75a14d77430

[Yahoo-eng-team] [Bug 1815433] Re: Code crash with invalid connection limit of listener

2019-02-11 Thread Akihiro Motoki
neutron-lbaas uses the storyboard as a bug tracker.
Could you file a bug there?

https://storyboard.openstack.org/#!/project/openstack/neutron-lbaas


** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815433

Title:
   Code crash with invalid connection limit of listener

Status in neutron:
  Invalid

Bug description:
  Code crash with invalid connection limit of listener.

  root@utu1604template:~# neutron --insecure lbaas-listener-update 
listenerlbfk1 --connection-limit 10009
  Request Failed: internal server error while processing your request.
  Neutron server returns request_ids: 
['req-7dde4518-f561-4a5d-a1ba-4fbcf8b7aca1']

  Logs have been captured below:

  http://paste.openstack.org/show/744825/

  
  We should add boundary checks on integer values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815433/+subscriptions



[Yahoo-eng-team] [Bug 1815433] [NEW] Code crash with invalid connection limit of listener

2019-02-11 Thread Puneet Arora
Public bug reported:

Code crash with invalid connection limit of listener.

root@utu1604template:~# neutron --insecure lbaas-listener-update listenerlbfk1 
--connection-limit 10009
Request Failed: internal server error while processing your request.
Neutron server returns request_ids: ['req-7dde4518-f561-4a5d-a1ba-4fbcf8b7aca1']

Logs have been captured below:

http://paste.openstack.org/show/744825/


We should add boundary checks on integer values.
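A sketch of the requested boundary check (the bounds here, -1 for unlimited plus a hypothetical upper cap, are assumptions; the real limits are for the maintainers to pick):

```python
# Sketch: validate a listener connection limit before it reaches the driver,
# so an out-of-range value produces a clean client error rather than a crash.

MAX_CONNECTION_LIMIT = 10**9  # hypothetical upper bound

def validate_connection_limit(value):
    try:
        value = int(value)
    except (TypeError, ValueError):
        raise ValueError("connection-limit must be an integer")
    if value != -1 and not (0 < value <= MAX_CONNECTION_LIMIT):
        raise ValueError("connection-limit must be -1 (unlimited) or a "
                         "positive integer <= %d" % MAX_CONNECTION_LIMIT)
    return value

print(validate_connection_limit(10009))  # 10009
print(validate_connection_limit(-1))     # -1
```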

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815433

Title:
   Code crash with invalid connection limit of listener

Status in neutron:
  New

Bug description:
  Code crash with invalid connection limit of listener.

  root@utu1604template:~# neutron --insecure lbaas-listener-update 
listenerlbfk1 --connection-limit 10009
  Request Failed: internal server error while processing your request.
  Neutron server returns request_ids: 
['req-7dde4518-f561-4a5d-a1ba-4fbcf8b7aca1']

  Logs have been captured below:

  http://paste.openstack.org/show/744825/

  
  We should add boundary checks on integer values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815433/+subscriptions



[Yahoo-eng-team] [Bug 1815424] [NEW] Port gets port security disabled if using --no-security-groups

2019-02-11 Thread Adit Sarfaty
Public bug reported:

When a port is created on a network with port security disabled, by default 
it should have port security disabled too.
But if --no-security-group is used during creation, then the port is created 
without security groups, but with port security enabled.
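The defaulting behaviour the report expects can be sketched as follows (`default_port_security` is a hypothetical helper, not neutron code): when the request does not set port_security_enabled explicitly, the new port should inherit the network's value, regardless of --no-security-group.

```python
# Sketch: compute the effective port_security_enabled for a new port.
# An explicit request wins; otherwise the network's setting is inherited.

def default_port_security(network_port_security, requested=None):
    """Return the effective port_security_enabled for a new port."""
    if requested is not None:
        return requested
    return network_port_security

print(default_port_security(network_port_security=False))                  # False (inherited)
print(default_port_security(network_port_security=False, requested=True))  # True
```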

openstack network show no-ps
+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| availability_zone_hints   |  |
| availability_zones| defaultv3|
| created_at| 2019-02-11T07:58:34Z |
| description   |  |
| dns_domain|  |
| id| 58404ae1-650d-40c0-9ba9-9558f34fe81a |
| ipv4_address_scope| None |
| ipv6_address_scope| None |
| is_default| None |
| is_vlan_transparent   | None |
| location  | None |
| mtu   | None |
| name  | no-ps|
| port_security_enabled | False|
| project_id| 8d4f3035db954f32b320475c1213657c |
| provider:network_type | None |
| provider:physical_network | None |
| provider:segmentation_id  | None |
| qos_policy_id | None |
| revision_number   | 3|
| router:external   | Internal |
| segments  | None |
| shared| False|
| status| ACTIVE   |
| subnets   | 605cabbe-4064-4e66-8d3d-a5320abdfe2d |
| tags  |  |
| updated_at| 2019-02-11T07:58:39Z |
+---+--+

openstack port create --network no-ps --no-security-group no-sg
+-+---+
| Field   | Value   
  |
+-+---+
| admin_state_up  | UP  
  |
| allowed_address_pairs   | 
  |
| binding_host_id | None
  |
| binding_profile | 
  |
| binding_vif_details | 
nsx-logical-switch-id='ca492f0f-34c3-4b9a-947c-1c53d651140f', 
ovs_hybrid_plug='False', port_filter='True' |
| binding_vif_type| ovs 
  |
| binding_vnic_type   | normal  
  |
| created_at  | 2019-02-11T08:55:50Z
  |
| data_plane_status   | None
  |
| description | 
  |
| device_id   | 
  |
| device_owner| 
  |
| dns_assignment  | fqdn='host-66-0-0-16.openstacklocal.', 
hostname='host-66-0-0-16', ip_address='66.0.0.16'  |
| dns_domain  | None
  |
| dns_name| 

[Yahoo-eng-team] [Bug 1815421] [NEW] Customized confirm message for actions.

2019-02-11 Thread Yan Chen
Public bug reported:

When an action needs double confirmation from users, it would be great if
we could have an API to specify a customized confirmation message. This
could help provide more detailed information (any warnings and
consequences) about the action to the users.

** Affects: horizon
 Importance: Undecided
 Assignee: Yan Chen (ychen2u)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Yan Chen (ychen2u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1815421

Title:
  Customized confirm message for actions.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When an action needs double confirmation from users, it would be great if
  we could have an API to specify a customized confirmation message. This
  could help provide more detailed information (any warnings and
  consequences) about the action to the users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1815421/+subscriptions
