[Yahoo-eng-team] [Bug 1822924] [NEW] Failed to create extra spec under Volume Type Panel

2019-04-02 Thread pengyuesheng
Public bug reported:

When a volume type already has more than one extra spec, creating an
additional extra spec with the name "#^&*" causes the form to display an error.
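As a rough illustration (not Horizon's actual code), the failure mode is
consistent with a key-validation rule that rejects special characters; the
pattern and helper below are hypothetical:

```python
import re

# Hypothetical validation similar to what a dashboard form might apply to
# extra spec key names; the exact rule Horizon uses may differ.
KEY_PATTERN = re.compile(r'^[A-Za-z0-9_:. /-]+$')

def is_valid_extra_spec_key(key):
    """Return True if the key contains only the allowed characters."""
    return bool(KEY_PATTERN.match(key))
```

Under such a rule, a name like "#^&*" is rejected while typical keys pass.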

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => pengyuesheng (pengyuesheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822924

Title:
  Failed to create extra spec under Volume Type Panel

Status in OpenStack Dashboard (Horizon):
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822921] [NEW] [VPNaaS]: Skip check process status for HA backup routers

2019-04-02 Thread Dongcan Ye
Public bug reported:

Since the VPN process is disabled on HA backup routers, we should skip
checking those processes' status before reporting status to the neutron
server.
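A minimal sketch of the intended behavior, assuming a mapping of router IDs
to process info (the names here are illustrative, not the actual
neutron-vpnaas code):

```python
def collect_vpn_statuses(processes):
    """Report VPN process status only for routers where the process should
    be running; HA backup routers are skipped because their VPN process is
    deliberately disabled."""
    report = {}
    for router_id, info in processes.items():
        if info.get('ha_state') == 'backup':
            continue  # skip: checking a disabled process would report DOWN
        report[router_id] = info.get('status', 'DOWN')
    return report
```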

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: In Progress


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822921

Title:
  [VPNaaS]:  Skip check process status for HA backup routers

Status in neutron:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1822921/+subscriptions

-- 


[Yahoo-eng-team] [Bug 1822917] [NEW] "Overwriting current allocation" warnings in logs during move operations although there are no failures

2019-04-02 Thread Matt Riedemann
Public bug reported:

I'm not sure why this is a warning:

https://github.com/openstack/nova/blob/b33fa1c054ba4b7d4e789aa51250ad5c8325da2d/nova/scheduler/client/report.py#L1880

Because it shows up in normal operations during a resize where
allocations are moved from the instance to the migration record as of
this blueprint in queens:

https://specs.openstack.org/openstack/nova-
specs/specs/queens/implemented/migration-allocations.html

This ends up being a lot of log spam:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Overwriting%20current%20allocation%5C%22%20AND%20tags%3A%5C%22screen%5C%22=7d

The warning was added in this change:

https://github.com/openstack/nova/commit/53ca096750685b7bb88c2a5062db19d92facc548

But I'm not sure why. It should probably be INFO or DEBUG.
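The practical effect of demoting the message can be sketched with the
standard logging module; the logger name and messages below are
illustrative:

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("demo.scheduler.report")
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.INFO)  # a typical production level

# Logged at DEBUG, the routine allocation-move message no longer appears in
# normal logs, while a genuine problem at WARNING still gets through.
logger.debug("Overwriting current allocation on consumer %s", "uuid-1")
logger.warning("Failed to move allocation for consumer %s", "uuid-2")

output = stream.getvalue()
```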

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute resize serviceability

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822917

Title:
  "Overwriting current allocation" warnings in logs during move
  operations although there are no failures

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822917/+subscriptions

-- 


[Yahoo-eng-team] [Bug 1821824] Re: Forbidden traits in flavor properties don't work

2019-04-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/648653
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c088856c8c08f0ab0746db7c120da494c9dd42d4
Submitter: Zuul
Branch: master

commit c088856c8c08f0ab0746db7c120da494c9dd42d4
Author: mb 
Date:   Fri Mar 29 10:20:58 2019 +0100

Fix bug preventing forbidden traits from working

Modifies _clean_empties function to take forbidden traits into account
in addition to required traits.

Added unit test test_resources_from_request_spec_flavor_forbidden_trait
to test that a single forbidden trait doesn't get lost in the
resources_from_request_spec function.

Also updated the functional test
test_flavor_forbidden_traits_based_scheduling to do the right thing.

Change-Id: I491b10c9c202baae4a37034848147f910a50eebf
Closes-Bug: #1821824


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821824

Title:
  Forbidden traits in flavor properties don't work

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  Due to an error in the implementation of forbidden traits, they are
  stripped off in the _clean_empties function in nova/scheduler/utils.py,
  which only takes required_traits into account.

  This means that forbidden traits won't be acted upon, and an instance
  started with a flavor carrying a forbidden trait can still end up on a
  resource provider with that trait set.
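The shape of the fix can be sketched as follows; RequestGroup here is a
simplified stand-in, not nova's actual class:

```python
class RequestGroup:
    """Simplified stand-in for a scheduler request group."""
    def __init__(self, resources=None, required_traits=None,
                 forbidden_traits=None):
        self.resources = resources or {}
        self.required_traits = set(required_traits or ())
        self.forbidden_traits = set(forbidden_traits or ())

def clean_empties(groups):
    # The fix: a group carrying only forbidden traits is not "empty" and
    # must be kept, so the constraint actually reaches placement.
    return [g for g in groups
            if g.resources or g.required_traits or g.forbidden_traits]
```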

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1821824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821824] Re: Forbidden traits in flavor properties don't work

2019-04-02 Thread Matt Riedemann
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/stein
   Status: New => Confirmed

** Changed in: nova/rocky
   Importance: Undecided => High

** Changed in: nova/stein
   Importance: Undecided => High



[Yahoo-eng-team] [Bug 1819460] Re: instance stuck in BUILD state due to unhandled exceptions in conductor

2019-04-02 Thread Matt Riedemann
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/stein
   Status: New => Confirmed

** Changed in: nova/stein
   Importance: Undecided => Low

** Changed in: nova
   Importance: Low => Medium

** Changed in: nova/stein
   Importance: Low => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1819460

Title:
  instance stuck in BUILD state due to unhandled exceptions in conductor

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  There are two calls [1][2] in ConductorTaskManager.build_instances, used
  during re-schedule, that could raise exceptions, leaving the instance
  stuck in the BUILD state instead of going to ERROR.


  [1] 
https://github.com/openstack/nova/blob/892ead1438abc9a8a876209343e6a85c80f0059f/nova/conductor/manager.py#L670
  [2] 
https://github.com/openstack/nova/blob/892ead1438abc9a8a876209343e6a85c80f0059f/nova/conductor/manager.py#L679
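The required behavior can be sketched as below (illustrative only, not the
conductor's actual code): any exception raised during the re-schedule path
must transition the instance to ERROR rather than leaving it in BUILD.

```python
def build_with_reschedule(instance, steps):
    """Run the re-schedule steps; on any unhandled exception the instance
    must end in ERROR instead of staying stuck in BUILD."""
    try:
        for step in steps:
            step(instance)
    except Exception:
        instance['vm_state'] = 'error'
        return instance
    instance['vm_state'] = 'active'
    return instance

def failing_step(instance):
    # Stand-in for a call, such as a cell-mapping lookup, that raises.
    raise RuntimeError("lookup failed")
```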

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1819460/+subscriptions

-- 


[Yahoo-eng-team] [Bug 1822884] [NEW] live migration fails due to port binding duplicate key entry in post_live_migrate

2019-04-02 Thread Robert Colvin
Public bug reported:

We are converting a site from RDO to OSA; at this stage all control
nodes and net nodes are running OSA (Rocky), some computes are running
RDO (Queens), some are RDO (Rocky), and the remaining are OSA (Rocky).

We are attempting to Live Migrate VMs from the RDO (Rocky) nodes to OSA
(Rocky) before reinstalling the RDO nodes as OSA (Rocky).

When live migrating between RDO nodes we see no issues; likewise,
migrating between OSA nodes works. However, live migrating RDO -> OSA
fails with the below error on the target.

2019-01-24 13:33:11.701 85926 INFO nova.network.neutronv2.api 
[req-3c4ceac0-7c12-428d-9f63-7c1d6879be4e a3bee416cf67420995855d602d2bccd3 
a564613210ee43708b8a7fc6274ebd63 - default default] [instance: 
8ecbfc14-2699-4276-a577-18ed6a662232] Updating port 
419c18e1-05c0-44c3-9a97-8334d0c15cc1 with attributes {'binding:profile': {}, 
'binding:host_id': 'cc-compute24-kna1'}
2019-01-24 13:33:59.357 85926 ERROR nova.network.neutronv2.api 
[req-3c4ceac0-7c12-428d-9f63-7c1d6879be4e a3bee416cf67420995855d602d2bccd3 
a564613210ee43708b8a7fc6274ebd63 - default default] [instance: 
8ecbfc14-2699-4276-a577-18ed6a662232] Unable to update binding details for port 
419c18e1-05c0-44c3-9a97-8334d0c15cc1: InternalServerError: Request Failed: 
internal server error while processing your request.

Digging further into the logs, reveals an issue with duplicate keys:

2019-02-01 09:48:10.268 11854 ERROR oslo_db.api 
[req-152bce20-8895-4238-910c-b26fde44913d e7bbce5e15994104bdf5e3af68a55b31 
a894e8109af3430aa7ae03e0c49a0aa0 - default default] DB exceeded retry limit.: 
DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate entry 
'5bedceef-1b65-4b19-a6d4-da3595adaf61-cc-com
pute24-kna1' for key 'PRIMARY'") [SQL: u'UPDATE ml2_port_bindings SET 
host=%(host)s, profile=%(profile)s, vif_type=%(vif_type)s, 
vif_details=%(vif_details)s WHERE ml2_port_bindings.port_id = 
%(ml2_port_bindings_port_id)s AND ml2_port_bindings.host = 
%(ml2_port_bindings_host)s'] [parameters: {'profile': '{}', 'vif_ty
pe': 'unbound', 'ml2_port_bindings_host': u'cc-compute06-kna1', 'vif_details': 
'', 'ml2_port_bindings_port_id': u'5bedceef-1b65-4b19-a6d4-da3595adaf61', 
'host': u'cc-compute24-kna1'}] (Background on this error at: 
http://sqlalche.me/e/gkpj)

this is confirmed when reviewing the ml2_port_bindings table:

MariaDB [neutron]> select * from ml2_port_bindings where port_id = '5bedceef-1b65-4b19-a6d4-da3595adaf61';
+--------------------------------------+-------------------+----------+-----------+---------------------------------------------------------------------------+---------------------------------------+----------+
| port_id                              | host              | vif_type | vnic_type | vif_details                                                               | profile                               | status   |
+--------------------------------------+-------------------+----------+-----------+---------------------------------------------------------------------------+---------------------------------------+----------+
| 5bedceef-1b65-4b19-a6d4-da3595adaf61 | cc-compute06-kna1 | ovs      | normal    | {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true} | {"migrating_to": "cc-compute24-kna1"} | ACTIVE   |
| 5bedceef-1b65-4b19-a6d4-da3595adaf61 | cc-compute24-kna1 | ovs      | normal    | {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": true} |                                       | INACTIVE |
+--------------------------------------+-------------------+----------+-----------+---------------------------------------------------------------------------+---------------------------------------+----------+

The exception is not caught and handled, so the VM is stuck in migrating.
According to OpenStack, the VM is still on the source compute node,
whilst libvirt/virsh believes it to be on the target. Forcing the VM
state to active keeps the VM available, but rebooting will result in an
ERROR state (resolved by destroying the VM in virsh on the target,
forcing it back to the active state, and rebooting). Nor can another
migration be attempted, due to the error state in the DB (fixed by
manually removing the inactive port and clearing the profile from the
active port).

In discussions with both mnasser and sean-k-mooney, it is understood
that there are two distinct live migration flows with regard to port
binding:

1) the "old" flow: the port is deactivated on the source before being activated 
on the target - meaning only one entry in the ml2_port_bindings tables, at the 
expense of added network outage during live migration
2) the "new" flow: an inactive port is added to the target, before the old port 
is removed and the new port activated
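The duplicate-key failure can be reproduced in miniature. The table's
primary key is (port_id, host); the old flow's UPDATE moves the single
row's host in place, and if the new flow has already inserted an inactive
row for the target host, that UPDATE collides with it. This is a sketch,
not the ML2 plugin's actual code:

```python
def update_binding_host(table, port_id, old_host, new_host):
    """Mimic: UPDATE ml2_port_bindings SET host = new
    WHERE port_id = ... AND host = old."""
    if (port_id, new_host) in table:
        raise KeyError("Duplicate entry '%s-%s' for key 'PRIMARY'"
                       % (port_id, new_host))
    row = table.pop((port_id, old_host))
    row['host'] = new_host
    table[(port_id, new_host)] = row

# Mixed flows: the target already holds an INACTIVE new-flow binding.
bindings = {
    ('p1', 'src-host'): {'host': 'src-host', 'status': 'ACTIVE'},
    ('p1', 'dst-host'): {'host': 'dst-host', 'status': 'INACTIVE'},
}
```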

Monitoring the ml2_port_bindings table during live migrations, we can
see that

1. RDO -> RDO use the old flow (only one entry in the ml2_port_bindings table 
at all times)
2. OSA -> 

[Yahoo-eng-team] [Bug 1816727] Re: nova-novncproxy does not handle TCP RST cleanly when using SSL 

2019-04-02 Thread melanie witt
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/stein
   Importance: Undecided => Medium

** Changed in: nova/stein
   Status: New => In Progress

** Changed in: nova/stein
 Assignee: (unassigned) => Colleen Murphy (krinkle)

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Status: New => In Progress

** Changed in: nova/rocky
 Assignee: (unassigned) => Colleen Murphy (krinkle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1816727

Title:
  nova-novncproxy does not handle TCP RST cleanly when using SSL

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Description
  ===

  We have nova-novncproxy configured to use SSL:

  ```
  [DEFAULT]
  ssl_only=true
  cert = /etc/nova/ssl/certs/signing_cert.pem
  key = /etc/nova/ssl/private/signing_key.pem
  ...
  [vnc]
  enabled = True
  server_listen = "0.0.0.0"
  server_proxyclient_address = 192.168.237.81
  novncproxy_host = 192.168.237.81
  novncproxy_port = 5554
  novncproxy_base_url = https://:6080/vnc_auto.html
  xvpvncproxy_host = 192.168.237.81
  ```

  We also have haproxy acting as a load balancer, but not terminating
  SSL. We have an haproxy health check configured like this for nova-
  novncproxy:

  ```
  listen nova-novncproxy
  # irrelevant config...
  server  192.168.237.84:5554 check check-ssl verify 
none inter 2000 rise 5 fall 2
  ```

  where 192.168.237.81 is a virtual IP address and 192.168.237.84 is the
  node's individual IP address.
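The handler-side fix amounts to treating an abrupt reset during the TLS
handshake as a routine per-connection event. A hedged sketch
(illustrative, not websockify's actual code):

```python
import ssl

def handle_client(do_handshake):
    """Run one client's TLS handshake; an abrupt reset (such as a health
    check sending TCP RST) is caught and the connection dropped, instead
    of letting the exception escape into the accept loop."""
    try:
        do_handshake()
        return "connected"
    except (ssl.SSLError, ConnectionResetError, OSError):
        return "closed"
```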

  With that health check enabled, we found the nova-novncproxy process
  CPU spiking and eventually causing the node to hang. With debug
  logging enabled, we noticed this in the nova-novncproxy logs:

  2019-02-19 15:02:44.148 2880 INFO nova.console.websocketproxy [-] WebSocket 
server settings:
  2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-]   - Listen 
on 192.168.237.81:5554
  2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-]   - Flash 
security policy server
  2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-]   - Web 
server (no directory listings). Web root: /usr/share/novnc
  2019-02-19 15:02:44.150 2880 INFO nova.console.websocketproxy [-]   - SSL/TLS 
support
  2019-02-19 15:02:44.151 2880 INFO nova.console.websocketproxy [-]   - 
proxying from 192.168.237.81:5554 to None:None
  2019-02-19 15:02:45.015 2880 DEBUG nova.console.websocketproxy [-] 
192.168.237.85: new handler Process vmsg 
/usr/lib/python2.7/site-packages/websockify/websocket.py:873
  2019-02-19 15:02:45.184 2889 DEBUG oslo_db.sqlalchemy.engines 
[req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI
  TUTION _check_effective_sql_mode 
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308
  2019-02-19 15:02:45.377 2889 DEBUG nova.context 
[req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Found 2 cells: 
----(cell0),9f9825dd-868f-41cc-9c8e-e544f1528d6a(cell1)
 load_cells /usr/lib/python2.7/site-packages/nova/context.py:479
  2019-02-19 15:02:45.380 2889 DEBUG oslo_concurrency.lockutils 
[req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock 
"----" acquired by 
"nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s 
inner /usr/lib/python2.7/site-packag
  es/oslo_concurrency/lockutils.py:273
  2019-02-19 15:02:45.382 2889 DEBUG oslo_concurrency.lockutils 
[req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock 
"----" released by 
"nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner 
/usr/lib/python2.7/site-packages
  /oslo_concurrency/lockutils.py:285
  2019-02-19 15:02:45.393 2889 DEBUG oslo_concurrency.lockutils 
[req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock 
"9f9825dd-868f-41cc-9c8e-e544f1528d6a" acquired by 
"nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s 
inner /usr/lib/python2.7/site-packag
  es/oslo_concurrency/lockutils.py:273
  2019-02-19 15:02:45.395 2889 DEBUG oslo_concurrency.lockutils 
[req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock 
"9f9825dd-868f-41cc-9c8e-e544f1528d6a" released by 
"nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s inner 
/usr/lib/python2.7/site-packages
  /oslo_concurrency/lockutils.py:285
  2019-02-19 15:02:45.437 2889 DEBUG oslo_db.sqlalchemy.engines 

[Yahoo-eng-team] [Bug 1822729] Re: DeprecationWarning: Property 'tenant' has moved to 'project_id' in version '2.6' and will be removed in version '3.0'

2019-04-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/649234
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c3af63472a9e1cba81592b37718a905e63baaa1b
Submitter: Zuul
Branch: master

commit c3af63472a9e1cba81592b37718a905e63baaa1b
Author: Takashi NATSUME 
Date:   Tue Apr 2 16:14:53 2019 +0900

Fix a deprecation warning

Replace context.tenant with context.project_id
to fix the following warning.

  DeprecationWarning: Property 'tenant' has moved to 'project_id'
  in version '2.6' and will be removed in version '3.0'

Change-Id: I7ae02d7f25486ee7a424c96e6c61b66461a01f09
Closes-Bug: #1822729
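The deprecation pattern behind the warning looks roughly like this (an
illustration of the renamed attribute, not oslo.context's actual code):

```python
import warnings

class Context:
    """Context with 'tenant' kept as a deprecated alias of 'project_id'."""
    def __init__(self, project_id):
        self.project_id = project_id

    @property
    def tenant(self):
        warnings.warn("Property 'tenant' has moved to 'project_id' in "
                      "version '2.6' and will be removed in version '3.0'",
                      DeprecationWarning)
        return self.project_id
```

The fix is simply for callers to read `context.project_id` directly, which
emits no warning.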


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822729

Title:
  DeprecationWarning: Property 'tenant' has moved to 'project_id' in
  version '2.6' and will be removed in version '3.0'

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The following test outputs a DeprecationWarning. It should be fixed.

  
  {5} 
nova.tests.unit.network.security_group.test_neutron_driver.TestNeutronDriver.test_list_with_all_tenants_not_admin
 [0.016566s] ... ok

  Captured stderr:
  
  
b"/tmp/nova/nova/tests/unit/network/security_group/test_neutron_driver.py:140: 
DeprecationWarning: Property 'tenant' has moved to 'project_id' in version 
'2.6' and will be removed in version '3.0'"
  b'  tenant_id=self.context.tenant)'
  b''

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822729/+subscriptions

-- 


[Yahoo-eng-team] [Bug 1784342] Re: AttributeError: 'Subnet' object has no attribute '_obj_network_id'

2019-04-02 Thread Drew Freiberger
We have had an incident in a production Xenial-Queens cloud where a
network was deleted but the recursive subnet deletions failed under load.

Can this somehow be mitigated by an API update requiring additional user
intervention when subnets still exist?

** Changed in: neutron
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784342

Title:
  AttributeError: 'Subnet' object has no attribute '_obj_network_id'

Status in neutron:
  Confirmed

Bug description:
  Running rally caused subnets to be created without a network_id
  causing this AttributeError.

  OpenStack Queens RDO packages
  [root@controller1 ~]# rpm -qa | grep -i neutron
  python-neutron-12.0.2-1.el7.noarch
  openstack-neutron-12.0.2-1.el7.noarch
  python2-neutron-dynamic-routing-12.0.1-1.el7.noarch
  python2-neutron-lib-1.13.0-1.el7.noarch
  openstack-neutron-dynamic-routing-common-12.0.1-1.el7.noarch
  python2-neutronclient-6.7.0-1.el7.noarch
  openstack-neutron-bgp-dragent-12.0.1-1.el7.noarch
  openstack-neutron-common-12.0.2-1.el7.noarch
  openstack-neutron-ml2-12.0.2-1.el7.noarch

  
  MariaDB [neutron]> select project_id, id, name, network_id, cidr from subnets where network_id is null;

  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+
  | project_id                       | id                                   | name                      | network_id | cidr        |
  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+
  | b80468629bc5410ca2c53a7cfbf002b3 | 7a23c72b-3df8-4641-a494-af7642563c8e | s_rally_1e4bebf1_1s3IN6mo | NULL       | 1.9.13.0/24 |
  | b80468629bc5410ca2c53a7cfbf002b3 | f7a57946-4814-477a-9649-cc475fb4e7b2 | s_rally_1e4bebf1_qWSFSMs9 | NULL       | 1.5.20.0/24 |
  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+

  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
[req-c921b9fb-499b-41c1-9103-93e71a70820c b6b96932bbef41fdbf957c2dc01776aa 
050c556faa5944a8953126c867313770 - default default] GET failed.: 
AttributeError: 'Subnet' object has no attribute '_obj_network_id'
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
Traceback (most recent call last):
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/pecan/core.py", line 678, in __call__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
self.invoke_controller(controller, args, kwargs, state)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/pecan/core.py", line 569, in 
invoke_controller
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
result = controller(*args, **kwargs)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 91, in wrapped
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
setattr(e, '_RETRY_EXCEEDED', True)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 87, in wrapped
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*args, **kwargs)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 147, in wrapper
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
ectxt.value = e.inner_exc
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, 

[Yahoo-eng-team] [Bug 1821938] Re: No nova hypervisor can be enabled on workers with QAT devices

2019-04-02 Thread sean mooney
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Changed in: nova
   Status: New => In Progress

** Tags added: stein-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821938

Title:
  No nova hypervisor can be enabled on workers with QAT devices

Status in OpenStack Compute (nova):
  In Progress
Status in StarlingX:
  Triaged

Bug description:
  Brief Description
  -
  Unable to enable a host as nova hypervisor due to pci device cannot be found 
if the host has QAT devices (C62x or DH895XCC) configured.

  Severity
  
  Major

  
  Steps to Reproduce
  --
  - Install and configure a system where worker nodes have QAT devices 
configured. e.g.,
  [wrsroot@controller-0 ~(keystone_admin)]$ system host-device-list compute-0
  
  +--------------+--------------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+
  | name         | address      | class id | vendor id | device id | class name                | vendor name                     | device name                            | numa_node | enabled |
  +--------------+--------------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+
  | pci__09_00_0 | :09:00.0     | 0b4000   | 8086      | 0435      | Co-processor              | Intel Corporation               | DH895XCC Series QAT                    | 0         | True    |
  | pci__0c_00_0 | :0c:00.0     | 03       | 102b      | 0522      | VGA compatible controller | Matrox Electronics Systems Ltd. | MGA G200e [Pilot] ServerEngines (SEP1) | 0         | True    |
  +--------------+--------------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+

  compute-0:~$ lspci | grep QAT
  09:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
  09:01.0 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
  09:01.1 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
  ...

  - check nova hypervisor-list
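The lspci excerpt above mixes QAT physical functions with their virtual
functions (the addresses nova later fails to find). A small helper to
separate them (illustrative only, not nova's PCI-tracking code):

```python
def split_qat_devices(lspci_lines):
    """Partition lspci output lines into QAT physical-function and
    virtual-function PCI addresses."""
    pfs, vfs = [], []
    for line in lspci_lines:
        address, _, description = line.partition(' ')
        if 'QAT' not in description:
            continue
        if 'Virtual Function' in description:
            vfs.append(address)
        else:
            pfs.append(address)
    return pfs, vfs
```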

  Expected Behavior
  --
  - Nova hypervisors exist on system

  Actual Behavior
  
  [wrsroot@controller-0 ~(keystone_admin)]$ nova hypervisor-list
  +----+---------------------+-------+--------+
  | ID | Hypervisor hostname | State | Status |
  +----+---------------------+-------+--------+
  +----+---------------------+-------+--------+

  
  Reproducibility
  ---
  Reproducible

  System Configuration
  
  Any system type with QAT devices configured on worker node

  Branch/Pull Time/Commit
  ---
  master as of 2019-03-18

  Last Pass
  --
  on f/stein branch in early feb

  Timestamp/Logs
  --
  # nova-compute pods are spewing errors so they can't register themselves 
properly as hypervisors:
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager 
[req-4f652d4c-da7e-4516-9baa-915265c3fdda - - - - -] Error updating resources 
for node compute-0.: PciDeviceNotFoundById: PCI device :09:02.3 not found
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager Traceback (most 
recent call last):
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 
7956, in _update_available_resource_for_node
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager startup=startup)
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 727, in update_available_resource
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager resources = 
self.driver.get_available_resource(nodename)
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 7098, in get_available_resource
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager 
self._get_pci_passthrough_devices()
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 6102, in _get_pci_passthrough_devices
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager 
pci_info.append(self._get_pcidev_info(name))
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 6062, in _get_pcidev_info
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager 
device.update(_get_device_type(cfgdev, address))
  2019-03-25 

[Yahoo-eng-team] [Bug 1669054] Re: RequestSpec.ignore_hosts from resize is reused in subsequent evacuate

2019-04-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/647512
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e4c998e57390c15891131205f7443fee98dde4ee
Submitter: Zuul
Branch: master

commit e4c998e57390c15891131205f7443fee98dde4ee
Author: Matt Riedemann 
Date:   Mon Mar 25 11:27:57 2019 -0400

Do not persist RequestSpec.ignore_hosts

Change Ic3968721d257a167f3f946e5387cd227a7eeec6c in Newton
started setting the RequestSpec.ignore_hosts field to the
source instance.host during resize/cold migrate if
allow_resize_to_same_host=False in config, which it is by
default.

Change I8abdf58a6537dd5e15a012ea37a7b48abd726579 also in
Newton persists changes to the RequestSpec in conductor
in order to save the RequestSpec.flavor for the new flavor.
This inadvertently persists the ignore_hosts field as well.

Later if you try to evacuate or unshelve the server it will ignore
the original source host because of the persisted ignore_hosts
value. This is obviously a problem in a small deployment with only
a few compute nodes (like an edge deployment). As a result, an
evacuation can fail if the only available host is the one being
ignored.

This change does two things:

1. In order to deal with existing corrupted RequestSpecs in the DB,
   this change simply makes conductor overwrite RequestSpec.ignore_hosts
   rather than append during evacuate before calling the scheduler so
   the current instance host (which is down) is filtered out.

   This evacuate code dealing with ignore_hosts goes back to Mitaka:

 I7fe694175bb47f53d281bd62ac200f1c8416682b

   The test_rebuild_instance_with_request_spec unit test is updated
   and renamed to actually be doing an evacuate which is what it was
   intended for, i.e. the host would not change during rebuild.

2. This change makes the RequestSpec no longer persist the ignore_hosts
   field like several other per-operation fields in the RequestSpec.
   The only operations that use ignore_hosts are resize (if
   allow_resize_to_same_host=False), evacuate and live migration, and
   the field gets reset in each case to ignore the source instance.host.

The related functional recreate test is also updated to show the
bug is fixed. Note that as part of that, the confirm_migration method
in the fake virt driver needed to be implemented otherwise trying to
evacuate back to the source host fails with an InstanceExists error since
the confirmResize operation did not remove the guest from the source host.

Change-Id: I3f488be6f3c399f23ccf2b9ee0d76cd000da0e3e
Closes-Bug: #1669054


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669054

Title:
  RequestSpec.ignore_hosts from resize is reused in subsequent evacuate

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  In Progress

Bug description:
  When doing a resize, if CONF.allow_resize_to_same_host is False, then
  we set RequestSpec.ignore_hosts and then save the RequestSpec.

  When we go to use the same RequestSpec on a subsequent
  rebuild/evacuate, ignore_hosts is still set from the previous resize.

  In RequestSpec.reset_forced_destinations() we reset force_hosts and
  force_nodes, it might make sense to also reset ignore_hosts.

  We may also want to change other things...for example in
  ConductorManager.rebuild_instance() we set request_spec.ignore_hosts
  to itself if it's set...that makes no sense if we're just going to
  reset it to nothing immediately afterwards.
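The intended overwrite-not-append behavior can be sketched in plain Python. This is a simplified stand-in for the real Nova RequestSpec object and conductor code, not its actual API:

```python
# Minimal sketch of the fix's intent: overwrite (not append to) the
# ignore list before scheduling an evacuation, so a stale value
# persisted by an earlier resize cannot leak into the evacuate.

class RequestSpec:
    def __init__(self, ignore_hosts=None):
        # ignore_hosts may still hold a stale entry persisted by an
        # earlier resize (the bug described above).
        self.ignore_hosts = ignore_hosts or []

def prepare_evacuate(spec, source_host):
    # Buggy behavior would be: spec.ignore_hosts.append(source_host),
    # keeping the stale entry. The fix replaces the list outright so
    # only the (down) source host is filtered out by the scheduler.
    spec.ignore_hosts = [source_host]
    return spec

spec = RequestSpec(ignore_hosts=['old-resize-source'])  # stale persisted data
prepare_evacuate(spec, 'compute-1')
print(spec.ignore_hosts)  # ['compute-1']
```

Combined with no longer persisting the field at all, this keeps ignore_hosts a per-operation value.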

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1669054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822575] Re: lower-constraints are not used in gate job

2019-04-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/622972
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e5269f2fbb773953266fccbfd9df3f75e351eeb9
Submitter: Zuul
Branch:master

commit e5269f2fbb773953266fccbfd9df3f75e351eeb9
Author: Chris Dent 
Date:   Wed Dec 5 14:01:04 2018 +

Correct lower-constraints.txt and the related tox job

In the review of a similar change in placement [1], it was realized that
the nova lower-constraints tox job probably had the same problems.
Testing revealed this to be the case. This change fixes the job and
updates the related requirements problems accordingly.

The are two main factors at play here:

* The default install_command in tox.ini uses the upper-constraints.txt
  file. When there is more than one constraints.txt they are merged and
  the higher constraints win. Using upper and lower at the same time
  violates the point of lower (which is to indicate the bare minimum
  we are capable of using).

* When usedevelop is true in tox, the command that is run to install the
  current projects code is something like 'python setup.py develop',
  which installs a project's requirements _after_ the install_command has
  run, clobbering the constrained installs. When using pbr,
  'python setup.py install' (used when usedevelop is False) does not do
  this.

Fixing those then makes it possible to use the test to fix the
lower-constraints.txt and *requirements.txt files, changes include:

* Defining 'usedevelop = False' in the 'lower-constraints' target and
  removing the otherwise superfluous 'skipsdist' global setting to
  ensure requirements aren't clobbered.

* Removing packages which show up in lower-constraints.txt but not in
  the created virtualenv. Note that the job only runs unit tests, so
  this may be incomplete. In the placement version of this both unit and
  functional are run. We may want to consider that here.

* Updating cryptography. This version is needed with more recent
  pyopenssl.

* Updated keystonemiddleware. This is needed for some tests which
  confirm passing configuration to the middleware.

* Update psycopg2 to a version that can talk with postgresql 10.

* Add PyJWT, used by zVMCloudConnector

* Update zVMCloudConnector to a version that works with Python 3.5 and
  beyond.

* Update oslo.messaging to versions that work with the tests, under
  Python 3.

* Adding missing transitive packages.

* Adjusting alpha-ordering to map to how pip freeze does it.

* setuptools is removed from requirements.txt because the created
  virtualenv doesn't contain it

NOTE: The lower-constraints.txt file makes no commitment to expressing
minimum requirements for anything other than the current basepython.
So the fact that a different set of lower-constraints would be present
if we were using python2 is not relevant. See discussion at [1].
However, since requirements.txt _is_ used for python2, the
requirements-check gate job requires that enum34 be present in
lower-constraints.txt because it is in requirements.txt.

NOTE: A test is removed because it cannot work in the
lower-constraints context: 'test_policy_generator_from_command_line'
forks a call to 'oslopolicy-policy-generator --namespace nova' which
fails because stevedore fails to pick up nova-based entry points when
in a different process. This is because of the change to usedevelop.
After discussion with the original author of the test removal was
considered an acceptable choice.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2019-03-05.log.html#t2019-03-05T13:28:23

Closes-Bug: #1822575

Change-Id: Ic6466b0440a4fe012731a63715cf5d793b6ae4dd


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822575

Title:
  lower-constraints are not used in gate job

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  the lower constraints tox env attempts to run nova's unit tests with
  the minimum supported software versions declared in nova's lower-constraints.txt

  due to the way the install command is specified in the default tox env

  install_command = pip install
  -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
  {opts} {packages}

  the upper-constraints.txt was also passed to pip.

  pip's constraint solver takes the first definition of a constraint and
  discards all redefinitions.

  because upper-constraints.txt was included before lower-constraints.txt,
  the lower constraints were ignored.
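The shape of the fix described in the commit above can be sketched as an illustrative tox.ini fragment (not the exact Nova file; env and file names are the conventional ones):

```ini
# Lower-constraints env: pin ONLY lower-constraints.txt (never merge it
# with upper-constraints.txt), and disable usedevelop so that
# 'python setup.py develop' cannot clobber the constrained installs.
[testenv:lower-constraints]
usedevelop = False
deps =
  -c{toxinidir}/lower-constraints.txt
  -r{toxinidir}/test-requirements.txt
  -r{toxinidir}/requirements.txt
```

With pbr, 'python setup.py install' (used when usedevelop is False) does not re-install requirements after install_command has run.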

[Yahoo-eng-team] [Bug 1822849] [NEW] Timezone offset displayed in horizon / user / settings is always using daylight saving

2019-04-02 Thread David Hill
Public bug reported:

Timezone offset displayed in horizon / user / settings is always using
daylight saving
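The distinction at issue can be shown with Python's standard zoneinfo module (3.9+): a zone's offset depends on the date, so a settings page must not assume the DST offset year-round. This is an illustration of the concept, not Horizon's actual code:

```python
# A timezone's *standard* offset vs. its *current* (possibly
# DST-shifted) offset differ for much of the year in DST zones.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def offsets(zone_name):
    tz = ZoneInfo(zone_name)
    jan = datetime(2019, 1, 15, tzinfo=tz)   # winter: standard time
    jul = datetime(2019, 7, 15, tzinfo=tz)   # summer: DST in many zones
    return jan.utcoffset(), jul.utcoffset()

winter, summer = offsets("America/New_York")
print(winter)  # -1 day, 19:00:00  (UTC-5, EST)
print(summer)  # -1 day, 20:00:00  (UTC-4, EDT)
```

Displaying the July offset in January (or vice versa) is exactly the "always using daylight saving" symptom reported here.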

** Affects: horizon
 Importance: Undecided
 Assignee: David Hill (david-hill-ubisoft)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822849

Title:
  Timezone offset displayed in horizon / user / settings is always using
  daylight saving

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Timezone offset displayed in horizon / user / settings is always using
  daylight saving

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822823] Re: changing 'event_time_to_live' for panko service via service-parameter-modify failed

2019-04-02 Thread Ghada Khalil
>From Brent:
Service parms are no longer valid for openstack components. You need to do a 
helm chart override instead.

Marking this bug as Invalid

** Project changed: neutron => starlingx

** Tags added: stx.containers

** Changed in: starlingx
   Importance: Undecided => Low

** Changed in: starlingx
 Assignee: (unassigned) => Brent Rowsell (brent-rowsell)

** Changed in: starlingx
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822823

Title:
  changing 'event_time_to_live' for panko service via service-parameter-
  modify failed

Status in StarlingX:
  Invalid

Bug description:
  Brief Description 
  - 
  CLI: system service-parameter-list shows settings of services including 
panko, which leads users to think CLI 'system service-parameter-modify' is a 
way to change the settings. 
  But changing setting value 'event_time_to_live' for service panko did not 
work (major alarms Configuration is out-of-date were triggered): 
  1 CLI executed without error and returned expected output (as before): 
  system service-parameter-modify panko database event_time_to_live=12345 
  +-+--+ 
  | Property | Value | 
  +-+--+ 
  | uuid | 82ccc101-0c90-4af3-a0bb-193843025752 | 
  | service | panko | 
  | section | database | 
  | name | event_time_to_live | 
  | value | 12345 | 
  | personality | None | 
  | resource | None | 
  +-+--+ 

  2 system alarms were triggered as (before): 
   +-+--+ 
  | Property | Value | 
  +-+--+ 
  | uuid | 82ccc101-0c90-4af3-a0bb-193843025752 | 
  | service | panko | 
  | section | database | 
  | name | event_time_to_live | 
  | value | 12345 | 
  | personality | None | 
  | resource | None | 
  +-+--+ 

  3 but the value '12345' was not populated into the config files: 
 /etc/panko/panko.conf on controllers still has the old value 
(now the file does not exist anymore) 
 /etc/panko/panko.conf on panko-api containers still has the old 
value 

  So here're some issues: 
  1 If it is no longer supported for users to modify 
'event_time_to_live' for panko using the CLI 'system service-parameter-modify panko 
database', it might be better to reject the request and return a message telling 
the user so. 
  2 A major alarm should not be raised if the changes will not be 
made. 
  3 Acceptable values for users to set should be documented or, 
even better, included in the output message when the user input is invalid. 

  Severity 
   
  Major 

  
  Steps to Reproduce 
  -- 
  system service-parameter-modify panko database event_time_to_live=12345 

  Expected Behavior 
  -- 
  1 the CLI executed without errors 
  2 'Configuration is out-of-date' alarms were triggered, and cleared after a 
period of time 
  3 new values were populated to persistent storage /etc/panko/panko.conf 
 on controllers (platform) or inside containers of panko-api 

  Actual Behavior 
   
  1 the CLI executed without errors 
  2 'Configuration is out-of-date' alarms were triggered, and cleared after a 
period of time 
  3 No changes made to /etc/panko/panko.conf 
 on controllers (platform) or inside containers of panko-api 

  Reproducibility 
  --- 
  Reproducible 

  System Configuration 
   
  Multi-node system 

  Branch/Pull Time/Commit 
  --- 
  master as of 20190330T013000Z 

  Last Pass 
  -- 
  2019-02-15_20-18-00 

  Timestamp/Logs 
  -- 
  2019-03-31 15:55:5

To manage notifications about this bug go to:
https://bugs.launchpad.net/starlingx/+bug/1822823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822823] [NEW] changing 'event_time_to_live' for panko service via service-parameter-modify failed

2019-04-02 Thread mhg
Public bug reported:

Brief Description 
- 
CLI: system service-parameter-list shows settings of services including panko, 
which leads users to think CLI 'system service-parameter-modify' is a way to 
change the settings. 
But changing setting value 'event_time_to_live' for service panko did not work 
(major alarms Configuration is out-of-date were triggered): 
1 CLI executed without error and returned expected output (as before): 
system service-parameter-modify panko database event_time_to_live=12345 
+-+--+ 
| Property | Value | 
+-+--+ 
| uuid | 82ccc101-0c90-4af3-a0bb-193843025752 | 
| service | panko | 
| section | database | 
| name | event_time_to_live | 
| value | 12345 | 
| personality | None | 
| resource | None | 
+-+--+ 

2 system alarms were triggered as (before): 
 +-+--+ 
| Property | Value | 
+-+--+ 
| uuid | 82ccc101-0c90-4af3-a0bb-193843025752 | 
| service | panko | 
| section | database | 
| name | event_time_to_live | 
| value | 12345 | 
| personality | None | 
| resource | None | 
+-+--+ 

3 but the value '12345' was not populated into the config files: 
   /etc/panko/panko.conf on controllers still has the old value (now 
the file does not exist anymore) 
   /etc/panko/panko.conf on panko-api containers still has the old 
value 

So here're some issues: 
1 If it is no longer supported for users to modify 
'event_time_to_live' for panko using the CLI 'system service-parameter-modify panko 
database', it might be better to reject the request and return a message telling 
the user so. 
2 A major alarm should not be raised if the changes will not be 
made. 
3 Acceptable values for users to set should be documented or, even 
better, included in the output message when the user input is invalid. 

Severity 
 
Major 


Steps to Reproduce 
-- 
system service-parameter-modify panko database event_time_to_live=12345 

Expected Behavior 
-- 
1 the CLI executed without errors 
2 'Configuration is out-of-date' alarms were triggered, and cleared after a 
period of time 
3 new values were populated to persistent storage /etc/panko/panko.conf 
   on controllers (platform) or inside containers of panko-api 

Actual Behavior 
 
1 the CLI executed without errors 
2 'Configuration is out-of-date' alarms were triggered, and cleared after a 
period of time 
3 No changes made to /etc/panko/panko.conf 
   on controllers (platform) or inside containers of panko-api 

Reproducibility 
--- 
Reproducible 

System Configuration 
 
Multi-node system 

Branch/Pull Time/Commit 
--- 
master as of 20190330T013000Z 

Last Pass 
-- 
2019-02-15_20-18-00 

Timestamp/Logs 
-- 
2019-03-31 15:55:5

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822823

Title:
  changing 'event_time_to_live' for panko service via service-parameter-
  modify failed

Status in neutron:
  New

Bug description:
  Brief Description 
  - 
  CLI: system service-parameter-list shows settings of services including 
panko, which leads users to think CLI 'system service-parameter-modify' is a 
way to change the settings. 
  But changing setting value 'event_time_to_live' for service panko did not 
work (major alarms Configuration is out-of-date were triggered): 
  1 CLI executed without error and returned expected output (as before): 
  system service-parameter-modify panko database event_time_to_live=12345 
  +-+--+ 
  | Property | Value | 
  +-+--+ 
  | uuid | 82ccc101-0c90-4af3-a0bb-193843025752 | 
  | service | panko | 
  | section | database | 
  | name | event_time_to_live | 
  | value | 12345 | 
  | personality | None | 
  | resource | None | 
  +-+--+ 

  2 system alarms were triggered as (before): 
   +-+--+ 
  | Property | Value | 
  +-+--+ 
  | uuid | 82ccc101-0c90-4af3-a0bb-193843025752 | 
  | service | panko | 
  | section | database | 
  | name | event_time_to_live | 
  | value | 12345 | 
  | personality | None | 
  | resource | None | 
  +-+--+ 

  3 but the value '12345' was not populated into the config files: 
 /etc/panko/panko.conf on controllers still has the old value 
(now 

[Yahoo-eng-team] [Bug 1813007] Re: [SRU] Unable to install new flows on compute nodes when having broken security group rules

2019-04-02 Thread Corey Bryant
This bug was fixed in the package neutron - 2:14.0.0~rc1-0ubuntu2~cloud0
---

 neutron (2:14.0.0~rc1-0ubuntu2~cloud0) bionic-stein; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:14.0.0~rc1-0ubuntu2) disco; urgency=medium
 .
   * d/p/fix-KeyError-in-OVS-firewall.patch: Cherry-picked from upstream
 to prevent neutron ovs agent from crashing due to creation of two
 security groups that both use the same remote security group, where
 the first group's port range is a subset of the second (LP: #1813007).


** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813007

Title:
  [SRU] Unable to install new flows on compute nodes when having broken
  security group rules

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive pike series:
  Fix Committed
Status in Ubuntu Cloud Archive queens series:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Fix Committed
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Incomplete
Status in neutron package in Ubuntu:
  Fix Committed
Status in neutron source package in Bionic:
  Fix Committed
Status in neutron source package in Cosmic:
  Fix Committed
Status in neutron source package in Disco:
  Fix Committed

Bug description:
  It appears that we have found that neutron-openvswitch-agent appears to have 
a bug where two security group rules that have two different port ranges that 
overlap tied to the same parent security group will cause neutron to not be 
able to configure networks on the compute nodes where those security groups are 
present.
  Those are the broken security rules: 
https://pastebin.canonical.com/p/wSy8RSXt85/
  Here is the log when we discovered the issue: 
https://pastebin.canonical.com/p/wvFKjNWydr/

  
  Ubuntu SRU Details
  --
  [Impact]
  Neutron openvswitch agent crashes due to creation of two security groups that 
both use the same remote security group, where the first group doesn't define a 
port range and the second one does (one is a subset of the other; specifying no 
port range is the same as a full port range).
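The overlap hazard described above can be sketched in plain Python. This is an illustration of the failure mode, not Neutron's actual OVS-firewall data structures:

```python
# Two rules share a remote security group, and one rule's port range is
# a subset of the other's (no range == all ports). Code that indexes
# flow state by (remote_group, direction) alone lets the second rule
# clobber the first, so the agent's bookkeeping no longer matches the
# rules it must install -- the kind of inconsistency that crashed it.

rules = [
    {"remote_group": "sg-db", "direction": "ingress",
     "port_range": None},          # no range: all ports
    {"remote_group": "sg-db", "direction": "ingress",
     "port_range": (3306, 3306)},  # subset of the rule above
]

index = {}
for rule in rules:
    key = (rule["remote_group"], rule["direction"])  # too coarse a key
    index[key] = rule            # second rule silently clobbers the first

# Only one entry survives, although both rules need flows installed:
print(len(index))  # 1
```

The upstream fix keeps the overlapping entries distinct instead of letting one overwrite the other.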

  [Test case]
  See comment #18 below.

  [Regression Potential]
  The fix is fairly minimal and has landed upstream in master branch. It has 
therefore passed all unit and function tests that get run in the upstream gate 
and has been reviewed by upstream neutron core reviewers. This all helps to 
minimize the regression potential.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1813007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822801] [NEW] Baremetal port's host_id get updated during instance restart

2019-04-02 Thread Hamdy Khader
Public bug reported:

In the case of a baremetal overcloud, an instance's ports get updated during
instance reboot[1] so that host_id
becomes the nova compute host_id.

This way baremetal ports' host_id will be changed to indicate the nova
host_id instead of ironic node uuid.

For a normal instance, or even a plain baremetal instance, this wouldn't be a
problem, but for a SmartNIC baremetal instance the port's host_id matters:
it is used to communicate with the relevant Neutron agent running on the
SmartNIC, as the port's host_id contains the SmartNIC host name.


Reproduce:
- deploy baremetal overcloud 
- create baremetal instance
- after creation complete, check port details and notice 
binding_host_id=overcloud-controller-0.localdomain


[1] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L7191


Nova version:
()[root@overcloud-controller-0 /]# rpm -qa | grep nova
puppet-nova-14.4.1-0.20190322112825.740f45a.el7.noarch
python2-nova-19.0.0-0.20190322140639.d7c8924.el7.noarch
python2-novajoin-1.1.2-0.20190322123935.e8b18c4.el7.noarch
openstack-nova-compute-19.0.0-0.20190322140639.d7c8924.el7.noarch
python2-novaclient-13.0.0-0.20190311121537.62bf880.el7.noarch
openstack-nova-common-19.0.0-0.20190322140639.d7c8924.el7.noarch
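The behavior the reporter wants can be sketched as follows. The port-dict field names mirror Neutron's, but the `is_smartnic` flag placement and the helper itself are illustrative assumptions, not Nova's real check:

```python
# Hedged sketch: skip rewriting a port's binding host for SmartNIC
# baremetal ports so it keeps pointing at the SmartNIC hostname, while
# normal ports are still rebound to the compute host on reboot.

def updated_binding_host(port, compute_host):
    binding_profile = port.get("binding:profile") or {}
    if binding_profile.get("is_smartnic"):
        # Leave the SmartNIC hostname in place; the Neutron agent on
        # the SmartNIC is addressed through it.
        return port["binding:host_id"]
    return compute_host

smartnic_port = {"binding:host_id": "smartnic-42",
                 "binding:profile": {"is_smartnic": True}}
normal_port = {"binding:host_id": "ironic-node-uuid",
               "binding:profile": {}}

print(updated_binding_host(smartnic_port, "overcloud-controller-0"))  # smartnic-42
print(updated_binding_host(normal_port, "overcloud-controller-0"))    # overcloud-controller-0
```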

** Affects: nova
 Importance: Undecided
 Assignee: Hamdy Khader (hamdyk)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Hamdy Khader (hamdyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822801

Title:
  Baremetal port's host_id get updated during instance restart

Status in OpenStack Compute (nova):
  New

Bug description:
  In the case of a baremetal overcloud, an instance's ports get updated during 
instance reboot[1] so that host_id
  becomes the nova compute host_id.

  This way baremetal ports' host_id will be changed to indicate the nova
  host_id instead of ironic node uuid.

  For a normal instance, or even a plain baremetal instance, this wouldn't be 
a problem, but for a SmartNIC baremetal instance the port's host_id matters: 
  it is used to communicate with the relevant Neutron agent running on the 
SmartNIC, as the port's host_id contains the SmartNIC host name.

  
  Reproduce:
  - deploy baremetal overcloud 
  - create baremetal instance
  - after creation complete, check port details and notice 
binding_host_id=overcloud-controller-0.localdomain

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L7191

  
  Nova version:
  ()[root@overcloud-controller-0 /]# rpm -qa | grep nova
  puppet-nova-14.4.1-0.20190322112825.740f45a.el7.noarch
  python2-nova-19.0.0-0.20190322140639.d7c8924.el7.noarch
  python2-novajoin-1.1.2-0.20190322123935.e8b18c4.el7.noarch
  openstack-nova-compute-19.0.0-0.20190322140639.d7c8924.el7.noarch
  python2-novaclient-13.0.0-0.20190311121537.62bf880.el7.noarch
  openstack-nova-common-19.0.0-0.20190322140639.d7c8924.el7.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1202965] Re: The name VCPUs (total) of Hypervisors is confusing

2019-04-02 Thread Roman
** Also affects: charm-cinder-backup
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1202965

Title:
  The name VCPUs (total) of Hypervisors is confusing

Status in OpenStack cinder-backup charm:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  In the Hypervisors panel, the VCPUs (total) and VCPUs (used) fields
  cause confusion, as "used" is often bigger than "total".

  The virtual-CPU-to-physical-CPU allocation ratio defaults to 16.0, so
  putting the physical CPU total in VCPUs (total) is not correct.
To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-cinder-backup/+bug/1202965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822521] Re: On the Upload form in container panel, the file is required, but there is no required mark.

2019-04-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/648895
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=41dd4836c47339826e650c0a66dc4f38baccb91d
Submitter: Zuul
Branch:master

commit 41dd4836c47339826e650c0a66dc4f38baccb91d
Author: pengyuesheng 
Date:   Mon Apr 1 10:22:09 2019 +0800

Add the required mark

Change-Id: I8c4d0243716bbdbc8d4195d5dfa1b24e54e03978
Closes-Bug: #1822521


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822521

Title:
  On the Upload form in container panel, the file is required, but there
  is no required mark.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  On the Upload form in container panel, the file is required, but there
  is no required mark.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822521/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822749] [NEW] Security group settings are invalid when editing an instance with the error status

2019-04-02 Thread pengyuesheng
Public bug reported:

1. Select the failed instance and click on "Edit Instance" in the drop-down 
arrow.
2. select a security group.
3. Click "Save"
4. The page shows a message that the instance was edited successfully.
5. The security group has not been modified in the instance details.

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => pengyuesheng (pengyuesheng)

** Description changed:

- Security group settings are invalid when editing an instance with the
- error status
+ 1. Select the failed instance and click on "Edit Instance" in the drop-down 
arrow.
+ 2. select a security group.
+ 3. Click "Save"
+ 4. The page shows a message that the instance was edited successfully.
+ 5. The security group has not been modified in the instance details.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822749

Title:
  Security group settings are invalid when editing an instance with the
  error status

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Select the failed instance and click on "Edit Instance" in the drop-down 
arrow.
  2. select a security group.
  3. Click "Save"
  4. The page shows a message that the instance was edited successfully.
  5. The security group has not been modified in the instance details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681627] Re: Page not found error on refreshing browser (in AngularJS-based detail page)

2019-04-02 Thread Edward Hope-Morley
Pike patch will be included in PR done in bug 1822192

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/pike
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/pike
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681627

Title:
  Page not found error on refreshing browser (in AngularJS-based detail
  page)

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ocata series:
  New
Status in Ubuntu Cloud Archive pike series:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Zun UI:
  Fix Released

Bug description:
  Once I get into the container detail view, refreshing the browser
  shows a page-not-found error:

The current URL, ngdetails/OS::Zun::Container/c54ba416-a955-45b2
  -848b-aee57b748e08, didn't match any of these

  Full output: http://paste.openstack.org/show/605296/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1681627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822605] Re: nova-live-migration fails 100% with "Multiple possible networks found, use a Network ID to be more specific"

2019-04-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/649036
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=bed9d49163a70e063835d49aff3757f2c1a798f3
Submitter: Zuul
Branch:master

commit bed9d49163a70e063835d49aff3757f2c1a798f3
Author: Matt Riedemann 
Date:   Mon Apr 1 09:58:01 2019 -0400

Pass --nic when creating servers in evacuate integration test script

Devstack change Ib2e7096175c991acf35de04e840ac188752d3c17 started
creating a second network which is shared when tempest is enabled.
This causes the "openstack server create" and "nova boot" commands
in test_evacuate.sh to fail with:

  Multiple possible networks found, use a Network ID to be more specific.

This change selects the non-shared network and uses it to create
the servers during evacuate testing.

Change-Id: I2085a306e4d6565df4a641efabd009a3bc182e87
Closes-Bug: #1822605


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822605

Title:
  nova-live-migration fails 100% with "Multiple possible networks found,
  use a Network ID to be more specific"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Over the weekend (so since about 3/29) the nova-live-migration job has
  been failing 100% with the message:

  "Multiple possible networks found, use a Network ID to be more
  specific"

  Example: http://logs.openstack.org/12/648912/1/check/nova-live-
  migration/48932a5/job-output.txt.gz#_2019-04-01_08_27_07_901239

  Matt identified this as a suspect:

  https://review.openstack.org/#/c/601433/

  ...since it causes us now to create multiple networks. In tempest a
  network is always explicitly given, but in the nova-live-migration job
  it's not.
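  The fix picks the single non-shared network and passes it explicitly via
  `--nic net-id=...`. A minimal sketch of that selection step, using
  hypothetical network records in place of parsed `openstack network list`
  output:

```python
# Select the one non-shared network, mirroring the approach the commit
# describes for test_evacuate.sh. The network records below are
# hypothetical stand-ins for parsed `openstack network list` output.
def pick_non_shared_network(networks):
    candidates = [net for net in networks if not net["shared"]]
    if len(candidates) != 1:
        raise ValueError("expected exactly one non-shared network, got %d"
                         % len(candidates))
    return candidates[0]["id"]

networks = [
    {"id": "net-private", "name": "private", "shared": False},
    {"id": "net-shared", "name": "shared", "shared": True},
]
net_id = pick_non_shared_network(networks)
# The boot command would then pass: --nic net-id=<net_id>
```

  With a specific network named, the "Multiple possible networks found"
  ambiguity no longer arises even when tempest's shared network exists.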

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822605/+subscriptions



[Yahoo-eng-team] [Bug 1822734] [NEW] On "Manage QoS Spec Association" form, enter '*<>%' when creating a key, the error messages is shown

2019-04-02 Thread pengyuesheng
Public bug reported:

On the "Manage QoS Spec Association" form, entering '*<>%' when creating
a key causes an error message to be shown.

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822734

Title:
  On "Manage QoS Spec Association" form, entering '*<>%' when creating a
  key causes an error message to be shown

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the "Manage QoS Spec Association" form, entering '*<>%' when
  creating a key causes an error message to be shown.
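  A friendlier behavior would be a clear client-side check before the key
  is submitted. A hedged sketch (the allowed character set below is an
  assumption for illustration, not the rule Horizon or Cinder actually
  enforces):

```python
import re

# Hypothetical key validator: the allowed character set is an assumption
# chosen for illustration, not Horizon's actual validation rule.
KEY_RE = re.compile(r"^[A-Za-z0-9_.:-]+$")

def validate_spec_key(key):
    if not KEY_RE.match(key):
        raise ValueError(
            "invalid key %r: only letters, digits, '_', '.', ':' and '-' "
            "are allowed" % key)
    return key
```

  A check like this lets the form explain *why* a key such as '*<>%' is
  rejected instead of surfacing a generic error.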

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822734/+subscriptions



[Yahoo-eng-team] [Bug 1822729] [NEW] DeprecationWarning: Property 'tenant' has moved to 'project_id' in version '2.6' and will be removed in version '3.0'

2019-04-02 Thread Takashi NATSUME
Public bug reported:

The following test outputs a DeprecationWarning. It should be fixed.


{5} nova.tests.unit.network.security_group.test_neutron_driver.TestNeutronDriver.test_list_with_all_tenants_not_admin [0.016566s] ... ok

Captured stderr:


b"/tmp/nova/nova/tests/unit/network/security_group/test_neutron_driver.py:140: DeprecationWarning: Property 'tenant' has moved to 'project_id' in version '2.6' and will be removed in version '3.0'"
b'  tenant_id=self.context.tenant)'
b''

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: In Progress


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822729

Title:
  DeprecationWarning: Property 'tenant' has moved to 'project_id' in
  version '2.6' and will be removed in version '3.0'

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The following test outputs a DeprecationWarning. It should be fixed.

  
  {5} nova.tests.unit.network.security_group.test_neutron_driver.TestNeutronDriver.test_list_with_all_tenants_not_admin [0.016566s] ... ok

  Captured stderr:
  
  
  b"/tmp/nova/nova/tests/unit/network/security_group/test_neutron_driver.py:140: DeprecationWarning: Property 'tenant' has moved to 'project_id' in version '2.6' and will be removed in version '3.0'"
  b'  tenant_id=self.context.tenant)'
  b''
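  A sketch of the fix direction: the test should read `project_id`
  instead of the deprecated `tenant` property. `FakeContext` below is a
  stand-in for `nova.context.RequestContext`, reproducing only the
  deprecation behavior the test tripped over:

```python
import warnings

# FakeContext is a hypothetical stand-in for nova.context.RequestContext;
# it only reproduces the deprecated-alias behavior seen in the warning.
class FakeContext(object):
    def __init__(self, project_id):
        self.project_id = project_id

    @property
    def tenant(self):
        warnings.warn("Property 'tenant' has moved to 'project_id' in "
                      "version '2.6' and will be removed in version '3.0'",
                      DeprecationWarning)
        return self.project_id

ctx = FakeContext("demo-project")
tenant_id = ctx.project_id  # preferred: reads the new name, no warning
```

  Switching the test to `self.context.project_id` silences the warning
  without changing what value is passed.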

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822729/+subscriptions



[Yahoo-eng-team] [Bug 1822724] [NEW] QoS Specs 'Manage QoS Spec Association' overrides existing key value

2019-04-02 Thread pengyuesheng
Public bug reported:

The QoS Specs 'Manage QoS Spec Association' form overrides an existing
key's value.

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => pengyuesheng (pengyuesheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822724

Title:
  QoS Specs 'Manage QoS Spec Association' overrides existing key value

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The QoS Specs 'Manage QoS Spec Association' form overrides an existing
  key's value.
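  The report is terse, but the expected behavior would be to keep an
  existing key's value rather than silently overwrite it when new specs
  are submitted. A minimal sketch of a non-clobbering merge (hypothetical
  data, not Horizon's actual form code):

```python
# Merge newly submitted specs into the existing ones without overwriting
# values that are already set. Keys and values are hypothetical examples.
def add_specs(existing, new):
    merged = dict(existing)
    for key, value in new.items():
        merged.setdefault(key, value)  # keep the existing value if present
    return merged

merged = add_specs({"read_iops_sec": "500"},
                   {"read_iops_sec": "100", "write_iops_sec": "200"})
```

  Here the pre-existing "read_iops_sec" value survives while the genuinely
  new key is added.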

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822724/+subscriptions



[Yahoo-eng-team] [Bug 1822719] [NEW] The policy is not translated on the server group form in the launch instance workflow

2019-04-02 Thread pengyuesheng
Public bug reported:

The policy is not translated on the server group form in the launch
instance workflow

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => pengyuesheng (pengyuesheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822719

Title:
  The policy is not translated on the server group form in the launch
  instance workflow

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The policy is not translated on the server group form in the launch
  instance workflow
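  Fixing this means wrapping the policy display names in a translation
  call so the message catalogs can pick them up. Horizon uses Django's
  gettext machinery; the stdlib sketch below only illustrates the
  wrapping, with `NullTranslations` standing in for a loaded catalog and
  the policy names chosen for illustration:

```python
import gettext

# NullTranslations stands in for the message catalog Horizon would load
# through Django; with no catalog installed, gettext returns the msgid
# unchanged, which is the expected English fallback.
_ = gettext.NullTranslations().gettext

SERVER_GROUP_POLICIES = {
    "affinity": _("Affinity"),
    "anti-affinity": _("Anti-Affinity"),
    "soft-affinity": _("Soft Affinity"),
    "soft-anti-affinity": _("Soft Anti-Affinity"),
}
```

  Once the strings are marked, translators see them in the .po files and
  the form shows the localized policy names.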

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822719/+subscriptions



[Yahoo-eng-team] [Bug 1822155] Re: neutron-keepalived-state-change can not start on some python3 distro

2019-04-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/648459
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=97923ae4a89217f90417d2561a1366e8aacd7f25
Submitter: Zuul
Branch: master

commit 97923ae4a89217f90417d2561a1366e8aacd7f25
Author: LIU Yulong 
Date:   Fri Mar 29 00:16:10 2019 +0800

Convert int to bytes for py3

The following error raised during the functional test:
"TypeError: unsupported operand type(s) for %: 'bytes' and 'int'"
This patch converts the string to bytes for py3.

Closes-Bug: #1822155
Change-Id: I3de92ef830e5f424aa83b57d8ed843a7c4349e8a


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822155

Title:
  neutron-keepalived-state-change can not start on some python3 distro

Status in neutron:
  Fix Released

Bug description:
  LOG:
  DEBUG neutron.common.config [-] command line: /opt/stack/neutron/.tox/dsvm-functional/bin/neutron-keepalived-state-change --router_id=82ec666b-621b-4e70-bac1-f43f325540f8 --namespace=snat-82ec666b-621b-4e70-bac1-f43f325540f8@agent1 --conf_dir=/tmp/tmp9_bn0tg1/tmpd9zia3km/ha_confs/82ec666b-621b-4e70-bac1-f43f325540f8 --log-file=/tmp/tmp9_bn0tg1/tmpd9zia3km/ha_confs/82ec666b-621b-4e70-bac1-f43f325540f8/82ec666b-621b-4e70-bac1-f43f325540f8_state_change.log --monitor_interface=ha-76979d9f-9f --monitor_cidr=169.254.0.14/24 --pid_file=/tmp/tmp9_bn0tg1/tmpd9zia3km/external/pids/82ec666b-621b-4e70-bac1-f43f325540f8.monitor.pid --state_path=/tmp/tmp9_bn0tg1/tmpd9zia3km --user=1000 --group=1000 --AGENT-root_helper=sudo /opt/stack/neutron/.tox/dsvm-functional/bin/neutron-rootwrap /opt/stack/neutron/.tox/dsvm-functional/etc/neutron/rootwrap.conf --AGENT-root_helper_daemon=sudo /opt/stack/neutron/.tox/dsvm-functional/bin/neutron-rootwrap-daemon /opt/stack/neutron/.tox/dsvm-functional/etc/neutron/rootwrap.conf {{(pid=12369) setup_logging /opt/stack/neutron/.tox/dsvm-functional/lib/python3.4/site-packages/neutron/common/config.py:103}}
  CRITICAL neutron [-] Unhandled error: TypeError: unsupported operand type(s) for %: 'bytes' and 'int'
  ERROR neutron Traceback (most recent call last):
  ERROR neutron   File "/opt/stack/neutron/.tox/dsvm-functional/bin/neutron-keepalived-state-change", line 10, in <module>
  ERROR neutron sys.exit(main())
  ERROR neutron   File "/opt/stack/neutron/.tox/dsvm-functional/lib/python3.4/site-packages/neutron/cmd/keepalived_state_change.py", line 19, in main
  ERROR neutron keepalived_state_change.main()
  ERROR neutron   File "/opt/stack/neutron/.tox/dsvm-functional/lib/python3.4/site-packages/neutron/agent/l3/keepalived_state_change.py", line 178, in main
  ERROR neutron cfg.CONF.monitor_cidr).start()
  ERROR neutron   File "/opt/stack/neutron/.tox/dsvm-functional/lib/python3.4/site-packages/neutron/agent/linux/daemon.py", line 247, in start
  ERROR neutron self.daemonize()
  ERROR neutron   File "/opt/stack/neutron/.tox/dsvm-functional/lib/python3.4/site-packages/neutron/agent/linux/daemon.py", line 228, in daemonize
  ERROR neutron self.pidfile.write(os.getpid())
  ERROR neutron   File "/opt/stack/neutron/.tox/dsvm-functional/lib/python3.4/site-packages/neutron/agent/linux/daemon.py", line 139, in write
  ERROR neutron os.write(self.fd, b"%d" % pid)
  ERROR neutron TypeError: unsupported operand type(s) for %: 'bytes' and 'int'
  ERROR neutron
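  The failing line was `os.write(self.fd, b"%d" % pid)`: the `%` operator
  on bytes only exists from Python 3.5 onward (PEP 461), so it raises
  TypeError on the Python 3.4 interpreter used here. A sketch of the fix
  direction, building the bytes explicitly (the exact code in the commit
  may differ):

```python
import os
import tempfile

# Encode the pid explicitly instead of relying on b"%d" % pid, which
# Python only supports from 3.5 (PEP 461) and which raises TypeError
# on Python 3.4.
def write_pid(fd, pid):
    os.write(fd, str(pid).encode("ascii"))

fd, path = tempfile.mkstemp()
write_pid(fd, 12369)
os.close(fd)
```

  `str(pid).encode("ascii")` works on every Python 3 version, so the
  pidfile write no longer depends on bytes interpolation support.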

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1822155/+subscriptions
