[Yahoo-eng-team] [Bug 1657998] [NEW] Missing information about createImage on Compute API Reference

2017-01-19 Thread Enol Fernández
Public bug reported:

On success, the createImage action returns the location of the new image
in the 'Location' header; however, this is not documented in the API
reference.
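
For illustration, the flow looks roughly like this (a sketch; the server
ID and image URL are made-up examples):

  POST /v2.1/servers/9168b536-cd40-4630-b43f-b259807736ef/action
  {"createImage": {"name": "my-snapshot"}}

  HTTP/1.1 202 Accepted
  Location: http://glance.example.com/v2/images/52415800-8b69-11e0-9b19-734f5736d2a2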

---
Release: 15.0.0.0b3.dev402 on 'Tue Jan 17 19:04:29 2017, commit 1a487f3'
SHA: 
Source: Can't derive source file URL
URL: http://developer.openstack.org/api-ref/compute/

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657998

Title:
  Missing information about createImage on Compute API Reference

Status in OpenStack Compute (nova):
  New

Bug description:
  On success, the createImage action returns the location of the new
  image in the 'Location' header; however, this is not documented in the
  API reference.

  ---
  Release: 15.0.0.0b3.dev402 on 'Tue Jan 17 19:04:29 2017, commit 1a487f3'
  SHA: 
  Source: Can't derive source file URL
  URL: http://developer.openstack.org/api-ref/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1657998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657377] Re: FWaaS - message of FirewallGroupPortInvalidProject should be fixed

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/422480
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=19033950d515128417c3929527126c07608d7223
Submitter: Jenkins
Branch: master

commit 19033950d515128417c3929527126c07608d7223
Author: Reedip 
Date:   Thu Jan 19 03:58:46 2017 -0500

TrivialFix : Fix Port Error Message

If a port does not belong to the current project,
then an Incorrect message is populated.
The following patch fixes the issue.

TrivialFix

Change-Id: I38016ee9a49ca1b044989ce68baaf6cc982a9d76
Closes-Bug:#1657377


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657377

Title:
  FWaaS - message of FirewallGroupPortInvalidProject should be fixed

Status in neutron:
  Fix Released

Bug description:
  When updating the 'ports' attribute of a firewall_group with admin
  privileges and the specified port belongs to a different project, the
  following error occurs:

  {"NeutronError": {"message": "Firewall Group 704d951e-980e-451f-bfae-
  854bcc094fa7 in invalid Project", "type":
  "FirewallGroupPortInvalidProject", "detail": ""}}

  704d951e-980e-451f-bfae-854bcc094fa7 is not a firewall group but a port.
  Therefore, the message should be fixed to something like:

    Specified port <port_id> in invalid Project   or
    Port <port_id> in invalid Project
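
  A minimal sketch of what the corrected message could look like, assuming
  neutron-lib style exception classes (hypothetical, not the merged patch):

    # hypothetical sketch only; the merged wording may differ
    from neutron_lib import exceptions as nexception
    from neutron_fwaas._i18n import _  # assumed i18n helper

    class FirewallGroupPortInvalidProject(nexception.Conflict):
        message = _("Port %(port_id)s in invalid project %(project_id)s.")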

  [How to reproduce]
  1. Create firewall_group in admin project
    source devstack/openrc admin admin
    export TOKEN=`openstack token issue | grep ' id ' | get_field 2`
    openstack firewall group create --name fwg

  2. Create router-port in demo project
    source devstack/openrc demo demo
    neutron net-create test; neutron subnet-create test 192.168.200.0/24 --name subnet; neutron router-create test-router; neutron router-interface-add test-router subnet
    neutron port-update `neutron port-list | grep 192.168.200.1 | get_field 1` --name target_l3

  3. Update firewall_group with 'ports' attribute
    source devstack/openrc admin admin
    export TOKEN=`openstack token issue | grep ' id ' | get_field 2`
    curl -X PUT -H "x-auth-token:$TOKEN" -H "content-type:application/json" -d '{"firewall_group":{"ports":["704d951e-980e-451f-bfae-854bcc094fa7"]}}' localhost:9696/v2.0/fwaas/firewall_groups/bba0311b-db3d-4989-bfa3-3e132d719b94

  Note: 704d951e-980e-451f-bfae-854bcc094fa7 is the ID of the port named
  'target_l3'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657981] [NEW] FloatingIPs not reachable after restart of compute node (DVR)

2017-01-19 Thread Jens Offenbach
Public bug reported:

I am running OpenStack Newton on Ubuntu 16.04 using DVR. When I restart
a compute node, the FloatingIPs of the VMs running on this node are
unreachable. A manual restart of the "neutron-l3-agent" or
"neutron-vpn-agent" service running on the node solves the issue.

I think there must be a race condition at startup.
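
As a stopgap, a hedged workaround sketch (an untested assumption, not a
verified fix): order the agent after the network and Open vSwitch with a
systemd drop-in such as
/etc/systemd/system/neutron-l3-agent.service.d/override.conf:

[Unit]
After=network-online.target openvswitch-switch.service
Wants=network-online.target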

I get the following error in the neutron-vpn-agent.log:
2017-01-20 07:04:52.379 2541 INFO neutron.common.config [-] Logging enabled!
2017-01-20 07:04:52.379 2541 INFO neutron.common.config [-] 
/usr/bin/neutron-vpn-agent version 9.0.0
2017-01-20 07:04:52.380 2541 WARNING stevedore.named [-] Could not load 
neutron.agent.linux.interface.OVSInterfaceDriver
2017-01-20 07:04:53.112 2541 WARNING stevedore.named 
[req-a9e10331-51ab-4c67-bfdd-0f6296510594 - - - - -] Could not load 
neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
2017-01-20 07:04:53.127 2541 INFO neutron.agent.agent_extensions_manager 
[req-a9e10331-51ab-4c67-bfdd-0f6296510594 - - - - -] Loaded agent extensions: 
['fwaas']
2017-01-20 07:04:53.128 2541 INFO neutron.agent.agent_extensions_manager 
[req-a9e10331-51ab-4c67-bfdd-0f6296510594 - - - - -] Initializing agent 
extension 'fwaas'
2017-01-20 07:04:53.163 2541 WARNING oslo_config.cfg 
[req-bdd95fb9-bcd7-473e-a350-3bd8d6be8758 - - - - -] Option 
"external_network_bridge" from group "DEFAULT" is deprecated for removal.  Its 
value may be silently ignored in the future.
2017-01-20 07:04:53.165 2541 WARNING stevedore.named 
[req-bdd95fb9-bcd7-473e-a350-3bd8d6be8758 - - - - -] Could not load 
neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver
2017-01-20 07:04:53.236 2541 INFO eventlet.wsgi.server [-] (2541) wsgi starting 
up on http:/var/lib/neutron/keepalived-state-change
2017-01-20 07:04:53.261 2541 INFO neutron.agent.l3.agent [-] Agent has just 
been revived. Doing a full sync.
2017-01-20 07:04:53.373 2541 INFO neutron.agent.l3.agent [-] L3 agent started
2017-01-20 07:05:22.832 2541 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot find device "fg-67afaa06-bb"

2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info [-] Exit code: 
1; Stdin: ; Stdout: ; Stderr: Cannot find device "fg-67afaa06-bb"
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info Traceback (most 
recent call last):
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 239, in call
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 1062, 
in process
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
self.process_external(agent)
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py", line 
515, in process_external
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
self.create_dvr_fip_interfaces(ex_gw_port)
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py", line 
546, in create_dvr_fip_interfaces
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
self.fip_ns.update_gateway_port(fip_agent_port)
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_fip_ns.py", line 239, in 
update_gateway_port
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
ipd.route.add_gateway(gw_ip)
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 702, in 
add_gateway
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
self._as_root([ip_version], tuple(args))
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 373, in 
_as_root
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
use_root_namespace=use_root_namespace)
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 95, in 
_as_root
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error)
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 104, in 
_execute
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
log_fail_as_error=log_fail_as_error)
2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 138, in 
execute

[Yahoo-eng-team] [Bug 1657978] [NEW] Internal Server Error: KeyError: 'domain'

2017-01-19 Thread Eric Brown
Public bug reported:

I get the following message in Horizon when trying to authenticate a
federated user with a misconfigured mapping (in Mitaka):

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request: 'domain' (Disable insecure_debug mode to
suppress these details.)", "code": 500, "title": "Internal Server
Error"}}


This is my mapping.json. Notice that no domain is part of the "group"
parameter (even though there is one at one level higher).
[
  {
    "local": [
      {
        "domain": {
          "name": "Default"
        },
        "group": {
          "name": "Federated Users"
        },
        "user": {
          "name": "{0}",
          "email": "{1}"
        },
        "groups": "{2}"
      }
    ],
    "remote": [
      {
        "type": "REMOTE_USER"
      },
      {
        "type": "MELLON_userEmail"
      },
      {
        "type": "MELLON_groups"
      }
    ]
  }
]
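
For comparison, a sketch of a "group" entry that avoids the KeyError,
assuming keystone expects a domain inside "group" when the group is
referenced by name (schema details may differ across releases):

"group": {
    "name": "Federated Users",
    "domain": {
        "name": "Default"
    }
}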


This is the log output of the KeyError containing the assertion.

http://paste.openstack.org/show/595730/

** Affects: keystone
 Importance: Undecided
 Assignee: Eric Brown (ericwb)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Eric Brown (ericwb)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1657978

Title:
  Internal Server Error: KeyError: 'domain'

Status in OpenStack Identity (keystone):
  New

Bug description:
  I get the following message in Horizon when trying to authenticate a
  federated user with a misconfigured mapping (in Mitaka):

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request: 'domain' (Disable insecure_debug mode to
  suppress these details.)", "code": 500, "title": "Internal Server
  Error"}}

  
  This is my mapping.json. Notice that no domain is part of the "group"
  parameter (even though there is one at one level higher).
  [
    {
      "local": [
        {
          "domain": {
            "name": "Default"
          },
          "group": {
            "name": "Federated Users"
          },
          "user": {
            "name": "{0}",
            "email": "{1}"
          },
          "groups": "{2}"
        }
      ],
      "remote": [
        {
          "type": "REMOTE_USER"
        },
        {
          "type": "MELLON_userEmail"
        },
        {
          "type": "MELLON_groups"
        }
      ]
    }
  ]

  
  This is the log output of the KeyError containing the assertion.

  http://paste.openstack.org/show/595730/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1657978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605713] Re: neutron subnet.shared attributes out of sync with parent network and no available CLI to update subnet.shared

2017-01-19 Thread Aihua Edward Li
Hi, Srilanth,

O.K. I just traced the git log and found that Subnet.shared was removed
in stable/liberty.

Let's close it.

 class Subnet(model_base.BASEV2, HasId, HasTenant):
@@ -200,12 +191,12 @@ class Subnet(model_base.BASEV2, HasId, HasTenant):
 dns_nameservers = orm.relationship(DNSNameServer,
backref='subnet',
cascade='all, delete, delete-orphan',
+   order_by=DNSNameServer.order,
lazy='joined')
 routes = orm.relationship(SubnetRoute,
   backref='subnet',
   cascade='all, delete, delete-orphan',
   lazy='joined')
-shared = sa.Column(sa.Boolean)
 ipv6_ra_mode = sa.Column(sa.Enum(constants.IPV6_SLAAC,
  constants.DHCPV6_STATEFUL,
  constants.DHCPV6_STATELESS,
@@ -214,6 +205,7 @@ class Subnet(model_base.BASEV2, HasId, HasTenant):
   constants.DHCPV6_STATEFUL,
   constants.DHCPV6_STATELESS,
   name='ipv6_address_modes'), nullable=True)


** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605713

Title:
  neutron subnet.shared attributes out of sync with parent network and
  no available CLI to update subnet.shared

Status in neutron:
  Invalid

Bug description:
  1. Create a network with shared=0.
  2. Create a subnet for the network.
  3. Update the network's shared attribute.
  4. Examine shared in the neutron subnet DB: it remains 0 while shared
  in the network table is updated to 1.

  e.g.

  neutron net-create --provider:physical_network phsynet1 --provider:network_type flat net-1
  neutron subnet-create net-1 192.168.30.0/24
  neutron net-update --shared net-1

  Now examine the database directly for the subnet 192.168.30.0: its
  shared attribute remains 0.
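
  A hedged illustration of that check (table and column names assumed
  from the kilo-era schema described above):

    SELECT id, cidr, shared FROM subnets  WHERE cidr = '192.168.30.0/24';
    SELECT id, name, shared FROM networks WHERE name = 'net-1';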

  There is no CLI available to update it.

  versions tested:
  neutron stable/kilo
  neutron client: 2.6.0
  linux distro: Linux 12.04, Linux 14.04

  Expected: one of the following solutions:
  1). remove subnet.shared attribute and use its parent shared attribute.
  2). have a mechanism to update subnet.shared

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506567] Re: No information from Neutron Metering agent

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/377108
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ec6ed98cfaca455a663c0e6909b7faeb703a349e
Submitter: Jenkins
Branch: master

commit ec6ed98cfaca455a663c0e6909b7faeb703a349e
Author: Swaminathan Vasudevan 
Date:   Mon Sep 26 14:08:50 2016 -0700

DVR: Fix IPtables driver for metering with DVR routers

IPTables driver for metering was not handling the DVR router
namespaces properly for configuring the metering forward rules.

This patch addresses the issue by configuring the iptables
manager based on the availability of the namespace and selecting
the respective namespaces such as router namespace and snat
namespace with the right external device.

Change-Id: I6790d6ff42d9f8fa220e1a231ae94cbf8b60506c
Closes-Bug: #1506567


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506567

Title:
  No information from Neutron Metering agent

Status in neutron:
  Fix Released

Bug description:
  I deployed an OpenStack cloud with stable/kilo code - a controller/network
  node and a compute node (Ubuntu 14.04). I have the Metering agent enabled
  and configured as follows:
  driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

  When I try to check information from it in Ceilometer, I always get
  zero, e.g.:

  root@node-1:~# ceilometer sample-list -m bandwidth -l10
  +--------------------------------------+-----------+-------+--------+------+----------------------------+
  | Resource ID                          | Name      | Type  | Volume | Unit | Timestamp                  |
  +--------------------------------------+-----------+-------+--------+------+----------------------------+
  | 66fab4a9-aefe-4534-b8a3-b0c0db9edf82 | bandwidth | delta | 0.0    | B    | 2015-10-15T16:55:26.766000 |

  Is this expected? I spawned two VMs and tried to pass some traffic
  between them using iperf, but still got no results.
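
  One hedged thing to verify (the iptables metering driver only counts
  traffic that matches a metering label rule; the label name here is
  made up):

    neutron meter-label-create mylabel
    neutron meter-label-rule-create mylabel 0.0.0.0/0 --direction ingress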

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657953] [NEW] angular module load error occur when using developer panel

2017-01-19 Thread Kenji Ishii
Public bug reported:

When I use the Developer panel without the profiler
(OPENSTACK_PROFILER['enabled'] = False), the error below occurs,
because the Developer panel loads the profiler module unconditionally
while 'horizon.dashboard.developer.profiler.headers' is never loaded.

Uncaught Error: [$injector:modulerr] Failed to instantiate module horizon.app 
due to:
Error: [$injector:modulerr] Failed to instantiate module 
horizon.dashboard.developer due to:
Error: [$injector:modulerr] Failed to instantiate module 
horizon.dashboard.developer.profiler due to:
Error: [$injector:unpr] Unknown provider: 
horizon.dashboard.developer.profiler.headers
http://errors.angularjs.org/1.5.8/$injector/unpr?p0=horizon.dashboard.developer.profiler.headers
at 
http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:68:12
at 
http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4511:19
at getService 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4664:39)
at injectionArgs 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4688:58)
at Object.invoke 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4710:18)
at runInvokeQueue 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4611:35)
at 
http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4620:11
at forEach 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:321:20)
at loadModules 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4601:5)
at 
http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4618:40
・・・

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1657953

Title:
  angular module load error occur when using developer panel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I use the Developer panel without the profiler
  (OPENSTACK_PROFILER['enabled'] = False), the error below occurs,
  because the Developer panel loads the profiler module unconditionally
  while 'horizon.dashboard.developer.profiler.headers' is never loaded.

  Uncaught Error: [$injector:modulerr] Failed to instantiate module horizon.app 
due to:
  Error: [$injector:modulerr] Failed to instantiate module 
horizon.dashboard.developer due to:
  Error: [$injector:modulerr] Failed to instantiate module 
horizon.dashboard.developer.profiler due to:
  Error: [$injector:unpr] Unknown provider: 
horizon.dashboard.developer.profiler.headers
  
http://errors.angularjs.org/1.5.8/$injector/unpr?p0=horizon.dashboard.developer.profiler.headers
  at 
http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:68:12
  at 
http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4511:19
  at getService 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4664:39)
  at injectionArgs 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4688:58)
  at Object.invoke 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4710:18)
  at runInvokeQueue 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4611:35)
  at 
http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4620:11
  at forEach 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:321:20)
  at loadModules 
(http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4601:5)
  at 
http://localhost:8080/dashboard/static/horizon/lib/angular/angular.js:4618:40
  ・・・

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1657953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657940] [NEW] eni rendering dhcp6 writes aliases fails to bring up dhcp6

2017-01-19 Thread Scott Moser
Public bug reported:

Currently, a config like this:
| version: 1
| config:
|   - 'type': 'physical'
|     'name': 'iface0'
|     'subnets':
|       - {'type': 'dhcp4'}
|       - {'type': 'dhcp6'}

Will render:
| auto lo
| iface lo inet loopback
| 
| auto iface0
| iface iface0 inet dhcp
| post-up ifup iface0:1
| 
| 
| auto iface0:1
| iface iface0:1 inet6 dhcp
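
For comparison, a sketch of a rendering that ifupdown does accept (an
assumption about the desired output, not what the renderer produces today):
| auto iface0
| iface iface0 inet dhcp
|
| # second stanza for the same interface, no ":1" alias
| iface iface0 inet6 dhcp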

Below is an example test case that shows the output.
Here's the problem:
$ sudo sh -c 'ifdown eth0; ifup eth0'
$ sudo sh -c 'ifdown eth0; ifup eth0'
Killed old client process
Internet Systems Consortium DHCP Client 4.3.3
Copyright 2004-2015 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth0/06:b3:0a:3a:2d:e3
Sending on   LPF/eth0/06:b3:0a:3a:2d:e3
Sending on   Socket/fallback
DHCPRELEASE on eth0 to 172.31.16.1 port 67 (xid=0x32b625f1)
Internet Systems Consortium DHCP Client 4.3.3
Copyright 2004-2015 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth0/06:b3:0a:3a:2d:e3
Sending on   LPF/eth0/06:b3:0a:3a:2d:e3
Sending on   Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3 (xid=0xa4d5f301)
DHCPREQUEST of 172.31.29.161 on eth0 to 255.255.255.255 port 67 (xid=0x1f3d5a4)
DHCPOFFER of 172.31.29.161 from 172.31.16.1
DHCPACK of 172.31.29.161 from 172.31.16.1
bound to 172.31.29.161 -- renewal in 1801 seconds.

Failed to bring up eth0:1.
Failed to bring up eth0.

$ sudo ifup -v eth0:1
Parsing file /etc/network/interfaces.d/50-cloud-init.cfg
Parsing file /etc/network/interfaces.d/60-ipv6.cfg
Configuring interface eth0:1=eth0:1 (inet6)
/bin/run-parts --exit-on-error --verbose /etc/network/if-pre-up.d
run-parts: executing /etc/network/if-pre-up.d/ethtool
run-parts: executing /etc/network/if-pre-up.d/ifenslave
+ [ inet6 = meta ]
+ IF_BOND_SLAVES=
+ [  ]
+ [  ]
+ [ -z  ]
+ exit
run-parts: executing /etc/network/if-pre-up.d/vlan
/sbin/modprobe -q net-pf-10 > /dev/null 2>&1 || true # ignore failure.
/sbin/sysctl -q -e -w net.ipv6.conf.eth0:1.accept_ra=1

/bin/ip link set dev eth0:1  up
/lib/ifupdown/wait-for-ll6.sh
/sbin/dhclient -1 -6 -pf /run/dhclient6.eth0:1.pid -lf 
/var/lib/dhcp/dhclient6.eth0:1.leases -I -df 
/var/lib/dhcp/dhclient.eth0:1.leases eth0:1


--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -813,6 +813,27 @@ class TestEniRoundTrip(TestCase):
         self.assertEqual(
             expected, [line for line in found if line])
 
+    def test_dhcp4_and_dhcp6(self):
+        conf = yaml.load(textwrap.dedent("""\
+            version: 1
+            config:
+               - 'type': 'physical'
+                 'name': 'iface0'
+                 'subnets':
+                   - {'type': 'dhcp4'}
+                   - {'type': 'dhcp6'}
+            """))
+
+        # conf = [
+        #     {'type': 'physical', 'name': 'iface0',
+        #      'subnets': [
+        #          {'type': 'dhcp4'},
+        #          {'type': 'dhcp6'},
+        #      ]},
+        # ]
+        files = self._render_and_read(network_config=conf)
+        raise Exception(files['/etc/network/interfaces'])
+

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1657940

Title:
  eni rendering dhcp6 writes aliases fails to bring up dhcp6

Status in cloud-init:
  New

Bug description:
  Currently, a config like this:
  | version: 1
  | config:
  |   - 'type': 'physical'
  |     'name': 'iface0'
  |     'subnets':
  |       - {'type': 'dhcp4'}
  |       - {'type': 'dhcp6'}

  Will render:
  | auto lo
  | iface lo inet loopback
  | 
  | auto iface0
  | iface iface0 inet dhcp
  | post-up ifup iface0:1
  | 
  | 
  | auto iface0:1
  | iface iface0:1 inet6 dhcp

  Below is an example test case that shows the output.
  Here's the problem:
  $ sudo sh -c 'ifdown eth0; ifup eth0'
  $ sudo sh -c 'ifdown eth0; ifup eth0'
  Killed old client process
  Internet Systems Consortium DHCP Client 4.3.3
  Copyright 2004-2015 Internet Systems Consortium.
  All rights reserved.
  For info, please visit https://www.isc.org/software/dhcp/

  Listening on LPF/eth0/06:b3:0a:3a:2d:e3
  Sending on   LPF/eth0/06:b3:0a:3a:2d:e3
  Sending on   Socket/fallback
  DHCPRELEASE on eth0 to 172.31.16.1 port 67 (xid=0x32b625f1)
  Internet Systems Consortium DHCP Client 4.3.3
  Copyright 2004-2015 Internet Systems Consortium.
  All rights reserved.
  For info, please visit https://www.isc.org/software/dhcp/

  Listening on LPF/eth0/06:b3:0a:3a:2d:e3
  Sending on   LPF/eth0/06:b3:0a:3a:2d:e3
  Sending on   Socket/fallback
  DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3 (xid=0xa4d5f301)
  DHCPREQUEST of 172.31.29.161 on eth0 to 255.255.255.255 port 67 
(xid=0x1f3d5a4)
  DHCPOFFER of 172.31.29.161 from 172.31.16.1
  

[Yahoo-eng-team] [Bug 1657935] [NEW] intermittent failures in setup_physical_bridges

2017-01-19 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/39/388939/18/check/gate-neutron-dsvm-
functional-ubuntu-xenial/71d901a/testr_results.html.gz

http://logs.openstack.org/50/402750/31/check/gate-neutron-dsvm-
functional-ubuntu-xenial/6c1d7b4/testr_results.html.gz

http://logs.openstack.org/04/395504/15/check/gate-neutron-dsvm-
functional-ubuntu-xenial/dc17d0b/testr_results.html.gz

http://logs.openstack.org/09/422309/1/check/gate-neutron-dsvm-
functional-ubuntu-xenial/e0c8a6c/testr_results.html.gz

http://logs.openstack.org/04/395504/13/check/gate-neutron-dsvm-
functional-ubuntu-xenial/ea7661f/testr_results.html.gz

http://logs.openstack.org/28/340228/30/check/gate-neutron-dsvm-
functional-ubuntu-xenial/f229bb2/testr_results.html.gz

http://logs.openstack.org/02/416302/13/check/gate-neutron-dsvm-
functional-ubuntu-xenial/d4d757f/testr_results.html.gz

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657935

Title:
  intermittent failures in setup_physical_bridges

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/39/388939/18/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/71d901a/testr_results.html.gz

  http://logs.openstack.org/50/402750/31/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/6c1d7b4/testr_results.html.gz

  http://logs.openstack.org/04/395504/15/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/dc17d0b/testr_results.html.gz

  http://logs.openstack.org/09/422309/1/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/e0c8a6c/testr_results.html.gz

  http://logs.openstack.org/04/395504/13/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/ea7661f/testr_results.html.gz

  http://logs.openstack.org/28/340228/30/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/f229bb2/testr_results.html.gz

  http://logs.openstack.org/02/416302/13/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/d4d757f/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657930] [NEW] neutron-db-manage fails with ImportError

2017-01-19 Thread seyyed ahmad javadi
Public bug reported:

Hi,

I am upgrading our OpenStack cluster from Liberty to Newton, but I am
getting the following error when I run neutron-db-manage to sync the
neutron database. The upgrade of nova was successful, but the neutron
processes fail to start (I am working on the controller node for now). I
am guessing this is because I first need to sync the database, which is
where I get the following ImportError.

I am following the official Newton installation guide (to complete the
upgrade), and everything was fine up to the point of calling
neutron-db-manage.

Can you please let me know your suggestion to fix this issue?
 

root@controller:/var/log# /bin/sh -c "neutron-db-manage --config-file 
/etc/neutron/neutron.conf \  --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Traceback (most recent call last):
  File "/usr/local/bin/neutron-db-manage", line 10, in <module>
sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
686, in main
return_val |= bool(CONF.command.func(config, CONF.command.name))
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
205, in do_upgrade
run_sanity_checks(config, revision)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
670, in run_sanity_checks
script_dir.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 
407, in run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 
93, in load_python_file
module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 
79, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 17, in <module>
from neutron_lib.db import model_base
ImportError: cannot import name model_base
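
The ImportError usually indicates a stale or shadowed neutron-lib rather
than a database problem. A hedged diagnostic sketch (plain pip/python
usage, not taken from the report):

    # reproduce the failing import directly
    python -c "from neutron_lib.db import model_base"
    # check which neutron-lib is installed and where it resolves from
    pip show neutron-lib
    python -c "import neutron_lib; print(neutron_lib.__file__)"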


I cannot even start neutron-server, and the log file is empty.


ahmad@controller:/var/log$ sudo service neutron-server start
ahmad@controller:/var/log$ 
ahmad@controller:/var/log$ 
ahmad@controller:/var/log$ service neutron-server status
● neutron-server.service - OpenStack Neutron Server
   Loaded: loaded (/lib/systemd/system/neutron-server.service; enabled; vendor 
preset: enabled)
   Active: inactive (dead) (Result: exit-code) since Thu 2017-01-19 19:48:12 
EST; 6s ago
  Process: 13772 ExecStart=/etc/init.d/neutron-server systemd-start 
(code=exited, status=1/FAILURE)
  Process: 13769 ExecStartPre=/bin/chown neutron:adm /var/log/neutron 
(code=exited, status=0/SUCCESS)
  Process: 13766 ExecStartPre=/bin/chown neutron:neutron /var/lock/neutron 
/var/lib/neutron (code=exited, stat
  Process: 13763 ExecStartPre=/bin/mkdir -p /var/lock/neutron /var/log/neutron 
/var/lib/neutron (code=exited, 
 Main PID: 13772 (code=exited, status=1/FAILURE)

Jan 19 19:48:12 controller systemd[1]: neutron-server.service: Unit entered 
failed state.
Jan 19 19:48:12 controller systemd[1]: neutron-server.service: Failed with 
result 'exit-code'.
Jan 19 19:48:12 controller systemd[1]: neutron-server.service: Service hold-off 
time over, scheduling restart.
Jan 19 19:48:12 controller systemd[1]: Stopped OpenStack Neutron Server.
Jan 19 19:48:12 controller systemd[1]: neutron-server.service: Start request 
repeated too quickly.
Jan 19 19:48:12 controller systemd[1]: Failed to start OpenStack Neutron Server.
ahmad@controller:/var/log$ 
ahmad@controller:/var/log$ cat /var/log/neutron/neutron-server.log
ahmad@controller:/var/log$

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657930

Title:
  neutron-db-manage fails with ImportError

Status in neutron:
  New

Bug description:
  Hi,

  I am upgrading our OpenStack cluster from Liberty to Newton, but I am
  getting the following error when I run neutron-db-manage to sync the
  neutron database. The upgrade of nova was successful, but the neutron
  processes fail to start (I am working on the controller node for now).
  I am guessing this is because I first need to sync the database, which
  is where I get the following ImportError.

  I am following the official Newton installation guide (to complete the
  upgrade), and everything was fine up to the point of calling
  neutron-db-manage.

  Can you please let me know your suggestion to fix this issue?
   

  root@controller:/var/log# /bin/sh -c "neutron-db-manage --config-file 
/etc/neutron/neutron.conf \  --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  Traceback (most recent call last):
    File "/usr/local/bin/neutron-db-manage", line 10, in <module>
  sys.exit(main())
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
686, in main
  

[Yahoo-eng-team] [Bug 1657923] [NEW] Add a ReST client for placement API

2017-01-19 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/414726
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit ebe62dcd33b54165a0e06340295c4d832f89d3dc
Author: Miguel Lavalle 
Date:   Sat Dec 24 18:38:57 2016 -0600

Add a ReST client for placement API

This patchset adds a ReST client for the placement API. This
client is used to update the IPv4 inventories associated with
routed networks segments. This information is used by the
Nova scheduler to decide the placement of instances in hosts,
based on the availability of IPv4 addresses in routed
networks segments

DocImpact: Adds [placement] section to neutron.conf with two
   options: region_name and endpoint_type

Change-Id: I2aa614d4e6229161047b08c8bdcbca0e2e5d1f0b
Partially-Implements: blueprint routed-networks
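
A hedged illustration of the resulting neutron.conf section (option names
from the DocImpact note above; the values are made-up examples):

[placement]
region_name = RegionOne
endpoint_type = public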

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657923

Title:
  Add a ReST client for placement API

Status in neutron:
  New

Bug description:
  https://review.openstack.org/414726
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit ebe62dcd33b54165a0e06340295c4d832f89d3dc
  Author: Miguel Lavalle 
  Date:   Sat Dec 24 18:38:57 2016 -0600

  Add a ReST client for placement API
  
  This patchset adds a ReST client for the placement API. This
  client is used to update the IPv4 inventories associated with
  routed networks segments. This information is used by the
  Nova scheduler to decide the placement of instances in hosts,
  based on the availability of IPv4 addresses in routed
  networks segments
  
  DocImpact: Adds [placement] section to neutron.conf with two
 options: region_name and endpoint_type
  
  Change-Id: I2aa614d4e6229161047b08c8bdcbca0e2e5d1f0b
  Partially-Implements: blueprint routed-networks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657757] Re: py35 unit tests are hugely failing

2017-01-19 Thread Matt Riedemann
The mox3 revert is merged, released and in upper-constraints now so
we're good again.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657757

Title:
  py35 unit tests are hugely failing

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  We have a long list of Nova unit tests that fail for the py35 job with
  this pattern :

  ft2.9: 
nova.tests.unit.api.openstack.compute.test_create_backup.CreateBackupTestsV21.test_create_backup_raises_conflict_on_invalid_state_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  traceback-1: {{{
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 125, in cleanUp
  return self._cleanups(raise_errors=raise_first)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/callmany.py",
 line 89, in __call__
  reraise(error[0], error[1], error[2])
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/testtools/_compat3x.py",
 line 16, in reraise
  raise exc_obj.with_traceback(exc_tb)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/callmany.py",
 line 83, in __call__
  cleanup(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 289, in VerifyAll
  mock_obj._Verify()
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 543, in _Verify
  raise ExpectedMethodCallsError(self._expected_calls_queue)
  mox3.mox.ExpectedMethodCallsError: Verify: Expected methods never called:
0.  
function.__call__(, {}) -> None
  }}}

  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-nova-python35-db/nova/tests/unit/api/openstack/compute/test_create_backup.py",
 line 281, in test_create_backup_raises_conflict_on_invalid_state
  exception_arg='createBackup')
File 
"/home/jenkins/workspace/gate-nova-python35-db/nova/tests/unit/api/openstack/compute/admin_only_action_common.py",
 line 123, in _test_invalid_state
  instance = self._stub_instance_get()
File 
"/home/jenkins/workspace/gate-nova-python35-db/nova/tests/unit/api/openstack/compute/admin_only_action_common.py",
 line 40, in _stub_instance_get
  self.context, uuid, expected_attrs=None).AndReturn(instance)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 814, in __call__
  return mock_method(*params, **named_params)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 1152, in __call__
  self._checker.Check(params, named_params)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 1067, in Check
  self._RecordArgumentGiven(arg_name, arg_status)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 994, in _RecordArgumentGiven
  raise AttributeError('%s provided more than once' % (arg_name,))
  AttributeError: expected_attrs provided more than once

  eg. http://logs.openstack.org/34/418134/7/check/gate-nova-
  python35-db/578aaff/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1657757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614026] Re: RFE: Port mirror in neutron

2017-01-19 Thread Armando Migliaccio
** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
 Assignee: Reedip (reedip-banerjee) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614026

Title:
  RFE: Port mirror in neutron

Status in neutron:
  Invalid

Bug description:
  On a physical switch, we can use the port mirroring function for
  troubleshooting and monitoring. OVS has the same function, but neutron
  does not support it. I think we should add this function to neutron
  for the same reasons as on a physical switch.
  As a first step, we could mirror one port to another on the same node,
  which won't be difficult. As a second step, we can figure out how to
  mirror a port to a different compute node.
  For example:
  neutron mirror-create src_port dst_port
  neutron mirror-delete src_port dst_port
  neutron mirror-list
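
  For reference, a hedged sketch of the raw OVS operation such an API
  would wrap (port names tap-src/tap-dst are hypothetical; syntax follows
  the standard ovs-vsctl mirror example):

    ovs-vsctl -- --id=@src get Port tap-src \
              -- --id=@dst get Port tap-dst \
              -- --id=@m create Mirror name=m0 \
                   select-src-port=@src select-dst-port=@src output-port=@dst \
              -- set Bridge br-int mirrors=@m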

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625305] Re: neutron-openvswitch-agent is crashing due to KeyError in _restore_local_vlan_map()

2017-01-19 Thread George Shuklin
We've got the same issue after upgrading from Liberty. It was really
painful, and we were forced to manually patch the agent on the hosts.

This is a real issue, please fix it.

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625305

Title:
  neutron-openvswitch-agent is crashing due to KeyError in
  _restore_local_vlan_map()

Status in neutron:
  New

Bug description:
  The neutron openvswitch agent is unable to restart because VMs with
  untagged/flat networks (tagged 3999) cause an issue with
  _restore_local_vlan_map.

  Loaded agent extensions: []
  2016-09-06 07:57:39.682 70085 CRITICAL neutron 
[req-ef8eea4f-c1ed-47a0-8318-eb5473b7c667 - - - - -] KeyError: 3999
  2016-09-06 07:57:39.682 70085 ERROR neutron Traceback (most recent call last):
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/bin/neutron-openvswitch-agent", line 28, in 
  2016-09-06 07:57:39.682 70085 ERROR neutron sys.exit(main())
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 235, in __init__
  2016-09-06 07:57:39.682 70085 ERROR neutron self._restore_local_vlan_map()
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 356, in _restore_local_vlan_map
  2016-09-06 07:57:39.682 70085 ERROR neutron 
self.available_local_vlans.remove(local_vlan)
  2016-09-06 07:57:39.682 70085 ERROR neutron KeyError: 3999
  2016-09-06 07:57:39.682 70085 ERROR neutron
  2016-09-06 07:57:39.684 70085 INFO oslo_rootwrap.client 
[req-ef8eea4f-c1ed-47a0-8318-eb5473b7c667 - - - - -] Stopping rootwrap daemon 
process with pid=70197
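
  A hedged sketch of one defensive variant (an illustration, not
  necessarily the merged fix): available_local_vlans is a set, so
  set.discard() tolerates ids outside the available range instead of
  raising KeyError:

    # hypothetical change in _restore_local_vlan_map()
    self.available_local_vlans.discard(local_vlan)  # no KeyError if absent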

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1625305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657137] Re: network-ip-availabilities API returns 500 when non-existence network-id is passed

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/421304
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=66c7c604fe3be6b80deaa1cac86117ebad9ee3d1
Submitter: Jenkins
Branch: master

commit 66c7c604fe3be6b80deaa1cac86117ebad9ee3d1
Author: Hirofumi Ichihara 
Date:   Wed Jan 18 14:57:03 2017 +0900

Fix importing old path for exceptions

The network_ip_availability plugin uses old path for exceptions.
This patch fixes the path and adds negative test for the exception.

Change-Id: I9021f76e21b386f371ff73b926553611ab87fb66
Closes-bug: #1657137


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657137

Title:
  network-ip-availabilities API returns 500 when non-existence network-
  id is passed

Status in neutron:
  Fix Released

Bug description:
  The network-ip-availabilities API returns 500 when a non-existent
  network-id is passed.

  $ curl -g -i -X GET 
http://127.0.0.1:9696/v2.0/network-ip-availabilities/090db2f5-3d0f-41f2-b932-a50d33960bca
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: $TOKEN"
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json
  Content-Length: 150
  X-Openstack-Request-Id: req-80ec0a12-d6ee-4cab-8649-070e0a986b2e
  Date: Tue, 17 Jan 2017 14:16:59 GMT

  {"NeutronError": {"message": "Request Failed: internal server error
  while processing your request.", "type": "HTTPInternalServerError",
  "detail": ""}}

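  With the fix applied, an unknown network should instead surface as a
  404; a hedged illustration of the expected shape (exact message text
  may differ):

    HTTP/1.1 404 Not Found
    {"NeutronError": {"message": "Network 090db2f5-3d0f-41f2-b932-a50d33960bca could not be found.", "type": "NetworkNotFound", "detail": ""}}
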
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657452] Re: Incompatibility with python-webob 1.7.0

2017-01-19 Thread Corey Bryant
This appears to also be affecting python-oslo.middleware. Adding test
output from the duplicate bug report:

==
FAIL: 
oslo_middleware.tests.test_sizelimit.TestRequestBodySizeLimiter.test_request_too_large_no_content_length
tags: worker-7
--
Empty attachments:
  stderr
  stdout

Traceback (most recent call last):
  File "oslo_middleware/tests/test_sizelimit.py", line 108, in 
test_request_too_large_no_content_length
self.assertEqual(413, response.status_int)
  File 
"/home/chuck/work/server/openstack/packaging/git/openstack/oslo.middleware/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/chuck/work/server/openstack/packaging/git/openstack/oslo.middleware/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 413 != 200
Ran 103 tests in 0.217s (-0.074s)
FAILED (id=3, failures=1)
error: testr failed (1)
ERROR: InvocationError:

** Also affects: oslo.middleware
   Importance: Undecided
   Status: New

** Changed in: oslo.middleware
   Status: New => Confirmed

** Also affects: python-oslo.middleware (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: python-oslo.middleware (Ubuntu)
   Status: New => Triaged

** Changed in: python-oslo.middleware (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657452

Title:
  Incompatibility with python-webob 1.7.0

Status in Glance:
  Confirmed
Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.middleware:
  Confirmed
Status in glance package in Ubuntu:
  Triaged
Status in keystone package in Ubuntu:
  Triaged
Status in nova package in Ubuntu:
  Triaged
Status in python-oslo.middleware package in Ubuntu:
  Triaged

Bug description:
  
  
keystone.tests.unit.test_v3_federation.WebSSOTests.test_identity_provider_specific_federated_authentication
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_v3_federation.py", line 4067, in 
test_identity_provider_specific_federated_authentication
  self.PROTOCOL)
File "keystone/federation/controllers.py", line 345, in 
federated_idp_specific_sso_auth
  return self.render_html_response(host, token_id)
File "keystone/federation/controllers.py", line 357, in 
render_html_response
  headerlist=headers)
File "/usr/lib/python2.7/dist-packages/webob/response.py", line 310, in 
__init__
  "You cannot set the body to a text value without a "
  TypeError: You cannot set the body to a text value without a charset
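
  A hedged sketch of the usual accommodation for webob >= 1.7 (an
  illustration, not necessarily the fix keystone merged): give the
  Response an explicit charset before assigning a text body:

    import webob

    resp = webob.Response(charset='utf-8')
    resp.text = u'<html>...</html>'  # no longer raises TypeError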

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1657452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657865] [NEW] It is possible to create cross domain implied roles

2017-01-19 Thread Rodrigo Duarte
Public bug reported:

Since we can't assign a role from a different domain to a project, it
is expected that implied roles cannot be created across domains either.
For example:

* user1
* project1 - domainA
* role1 - domainA
* role2 - domainB
* create an assignment: user1/project1/role1

If we create a rule where role1 implies role2, we would bypass the
domain restriction.
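
A hedged CLI illustration of the bypass (standard openstackclient syntax
assumed; names taken from the list above):

    openstack role create role1 --domain domainA
    openstack role create role2 --domain domainB
    # expected to be rejected, but currently succeeds:
    openstack implied role create role1 --implied-role role2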

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1657865

Title:
  It is possible to create cross domain implied roles

Status in OpenStack Identity (keystone):
  New

Bug description:
  Since we can't assign a role from a different domain to a project, it
  is expected that implied roles cannot be created across domains
  either. For example:

  * user1
  * project1 - domainA
  * role1 - domainA
  * role2 - domainB
  * create an assignment: user1/project1/role1

  If we create a rule where role1 implies role2, we would bypass the
  domain restriction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1657865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531303] Re: Network Topology is slower for Admin users

2017-01-19 Thread Rob Cresswell
This should be fixed by https://review.openstack.org/#/c/358790/

** Changed in: horizon
Milestone: next => None

** Changed in: horizon
   Status: Confirmed => Invalid

** Changed in: horizon
 Assignee: Rob Cresswell (robcresswell) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1531303

Title:
  Network Topology is slower for Admin users

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The new network topology seems to be much slower if you have an admin
  token, almost entirely locking up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1531303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656051] Re: Angular registry detailViews no longer display as tabs

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/422686
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a194157cd9e6cfcf03610c68b86e8bc34545c14d
Submitter: Jenkins
Branch: master

commit a194157cd9e6cfcf03610c68b86e8bc34545c14d
Author: Rob Cresswell 
Date:   Thu Jan 19 15:12:30 2017 +

Fix Angular tabs

Change-Id: I0108e200d10f6bfc7e835f6c53d9e30abd4eafdf
Closes-Bug: 1656051


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1656051

Title:
  Angular registry detailViews no longer display as tabs

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) ocata series:
  Fix Released
Status in OpenStack Search (Searchlight):
  New

Bug description:
  The angular registry allows you to create tabs dynamically by
  registering them [0][1]. This technique is used by the instances panel
  in the searchlight UI [2]. Sometime in Ocata it got broken, and now
  tabs no longer show up in the instances panel in either the default or
  the material theme.

  https://imgur.com/a/JIO3Z

  [0]
  
https://github.com/openstack/horizon/blob/2394397bfd258fe584cc565a924e4dd216f6b224/horizon/static/framework/conf
  /resource-type-registry.service.js#L84

  [1]
  
https://github.com/openstack/horizon/blob/2925562c1a3f0a9b3e2d55833691a7b0ad10eb2a/horizon/static/framework/widgets/details/details.directive.js#L28-L61

  [2] https://github.com/openstack/searchlight-
  ui/blob/master/searchlight_ui/static/resources/os-nova-
  servers/details/details.module.js#L52-L72

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1656051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394299] Re: Shared image shows as private

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/369110
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=265659e8c34865331568b069fdb27ea272df4eaa
Submitter: Jenkins
Branch:master

commit 265659e8c34865331568b069fdb27ea272df4eaa
Author: Timothy Symanczyk 
Date:   Sat Sep 3 07:57:50 2016 -0700

Implement and Enable Community Images

This change replaces the existing boolean 'is_public' column for
the 'images' table with enum 'visibility' column featuring the
four explicit visibility values - public, private, shared,
and community.

This change also implements and enables all backend code to
utilize the new values.

Co-Authored-By: Timothy Symanczyk 
Co-Authored-By: Dharini Chandrasekar 

Implements: blueprint community-level-v2-image-sharing
Closes-Bug: #1394299
Closes-Bug: #1452443
Depends-On: I6e3268f3712cbc0aadb51d204c694023b92d55a5
Change-Id: I94bc7708b291ce37319539e27b3e88c9a17e1a9f
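
A hypothetical sketch of the column change the commit message describes
(illustrative only; the actual migration also handles defaults and data
backfill):

    import sqlalchemy as sa

    # 'visibility' replaces the boolean 'is_public' on the images table
    visibility = sa.Column(
        sa.Enum('private', 'public', 'shared', 'community',
                name='image_visibility'),
        nullable=False)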


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1394299

Title:
  Shared image shows as private

Status in Glance:
  Fix Released

Bug description:

  image 281d576a-9e4b-4d11-94bb-8b1e89f62a71 is owned by this user and
  correctly shows as 'private'; however, image
  '795518ca-13a6-4493-b3a3-91519ad7c067' is not owned by this user, it
  is a shared image.


   $ glance --os-image-api-version 2 image-list --visibility shared
   +--------------------------------------+-----------------+
   | ID                                   | Name            |
   +--------------------------------------+-----------------+
   | 795518ca-13a6-4493-b3a3-91519ad7c067 | accepted--image | <<< correct, the shared image is shown
   +--------------------------------------+-----------------+

  
   $ glance --os-image-api-version 2 image-list --visibility private
   +--------------------------------------+-----------------+
   | ID                                   | Name            |
   +--------------------------------------+-----------------+
   | 281d576a-9e4b-4d11-94bb-8b1e89f62a71 | private-image   |
   | 795518ca-13a6-4493-b3a3-91519ad7c067 | accepted--image | <<< wrong, I think, this is shared, not private
   +--------------------------------------+-----------------+

  
   $ glance --os-image-api-version 2 image-show 281d576a-9e4b-4d11-94bb-8b1e89f62a71
   +------------------+--------------------------------------+
   | Property         | Value                                |
   +------------------+--------------------------------------+
   | checksum         | 398759a311bf25c6f1d67e753bb24dae     |
   | container_format | bare                                 |
   | created_at       | 2014-11-18T11:16:33Z                 |
   | disk_format      | raw                                  |
   | id               | 281d576a-9e4b-4d11-94bb-8b1e89f62a71 |
   | min_disk         | 0                                    |
   | min_ram          | 0                                    |
   | name             | private-image                        |
   | owner            | f68be3a5c2b14721a9e0ed2fcb750481     |
   | protected        | False                                |
   | size             | 106                                  |
   | status           | active                               |
   | tags             | []                                   |
   | updated_at       | 2014-11-18T15:51:35Z                 |
   | visibility       | private                              | <<< correct
   +------------------+--------------------------------------+

  
   (py27)ubuntu in ~/git/python-glanceclient on master*
   $ glance --os-image-api-version 2 image-show 795518ca-13a6-4493-b3a3-91519ad7c067
   +--+--+
   | Property | Value|
   +--+--+
   | checksum | 398759a311bf25c6f1d67e753bb24dae |
   | container_format | bare |
   | created_at   | 2014-11-18T11:14:58Z |
   | disk_format  | raw  |
   | id   | 795518ca-13a6-4493-b3a3-91519ad7c067 |
   | min_disk | 0|
   | min_ram  | 0|
   | name | accepted--image  |
   | owner| 2dcea26aa97a41fa9547a133f6c7f5b4 | <<< different owner
   | protected| False|
   | size | 106  |
   | status   | active   |
   | tags | []   

[Yahoo-eng-team] [Bug 1648242] Re: Failure to retry update_ha_routers_states

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/411784
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1927da1bc7c4e56162dd3704d58d3b922d4ebce9
Submitter: Jenkins
Branch:master

commit 1927da1bc7c4e56162dd3704d58d3b922d4ebce9
Author: AKamyshnikova 
Date:   Fri Dec 16 16:00:59 2016 +0400

Add check for ha state

If all agents are shown as a standby it is possible changing state
were lost due to problems with RabbitMQ. Current change adds check
for ha state in fetch_and_sync_all_routers. If state is different -
 notify server that state should be changed.

Also change _get_bindings_and_update_router_state_for_dead_agents
to set standby for dead agent only in case we have more than one
active.

Change-Id: If5596eb24041ea9fae1d5d2563dcaf655c5face7
Closes-bug:#1648242
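
A minimal sketch of the check described above, with illustrative names
(the actual change lives in fetch_and_sync_all_routers and the L3 agent's
keepalived state tracking):

    def resync_ha_states(local_states, server_states, notify_server):
        # Re-send HA states the server may have missed, e.g. when the
        # original state-change RPC was lost during a RabbitMQ outage.
        for router_id, local_state in local_states.items():
            if server_states.get(router_id) != local_state:
                notify_server(router_id, local_state)

    def _notify(router_id, state):
        print('notify %s -> %s' % (router_id, state))

    # e.g. the server still believes the router is on standby everywhere:
    resync_ha_states({'7629f5d7': 'active'}, {'7629f5d7': 'standby'}, _notify)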


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648242

Title:
  Failure to retry update_ha_routers_states

Status in neutron:
  Fix Released

Bug description:
  Version: Mitaka

  While performing failover testing of L3 HA routers, we discovered an
  issue regarding the failure of an agent to report its state.

  In this scenario, we have a router (7629f5d7-b205-4af5-8e0e-
  a3c4d15e7677) scheduled to (3) L3 agents:

  
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | id                                   | host                                             | admin_state_up | alive | ha_state |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | 4434f999-51d0-4bbb-843c-5430255d5c64 | 726404-infra03-neutron-agents-container-a8bb0b1f | True           | :-)   | active   |
  | 710e7768-df47-4bfe-917f-ca35c138209a | 726402-infra01-neutron-agents-container-fc937477 | True           | :-)   | standby  |
  | 7f0888ba-1e8a-4a36-8394-6448b8c606fb | 726403-infra02-neutron-agents-container-0338af5a | True           | :-)   | standby  |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+

  The infra03 node was shut down completely and abruptly. The router
  transitioned to master on infra02 as indicated in these log messages:

  2016-12-06 16:15:06.457 18450 INFO neutron.agent.linux.interface [-] Device qg-d48918fa-eb already exists
  2016-12-07 15:16:51.145 18450 INFO neutron.agent.l3.ha [-] Router c8b5d5b7-ab57-4f56-9838-0900dc304af6 transitioned to master
  2016-12-07 15:16:51.811 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:16:51] "GET / HTTP/1.1" 200 115 0.666464
  2016-12-07 15:18:29.167 18450 INFO neutron.agent.l3.ha [-] Router c8b5d5b7-ab57-4f56-9838-0900dc304af6 transitioned to backup
  2016-12-07 15:18:29.229 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:18:29] "GET / HTTP/1.1" 200 115 0.062110
  2016-12-07 15:21:48.870 18450 INFO neutron.agent.l3.ha [-] Router 7629f5d7-b205-4af5-8e0e-a3c4d15e7677 transitioned to master
  2016-12-07 15:21:49.537 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:21:49] "GET / HTTP/1.1" 200 115 0.667920
  2016-12-07 15:22:08.796 18450 INFO neutron.agent.l3.ha [-] Router 4676e7a5-279c-4114-8674-209f7fd5ab1a transitioned to master
  2016-12-07 15:22:09.515 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:22:09] "GET / HTTP/1.1" 200 115 0.719848

  Traffic to/from VMs through the new master router functioned as
  expected. However, the ha_state remained 'standby':

  
  
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | id                                   | host                                             | admin_state_up | alive | ha_state |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | 4434f999-51d0-4bbb-843c-5430255d5c64 | 726404-infra03-neutron-agents-container-a8bb0b1f | True           | xxx   | standby  |
  | 710e7768-df47-4bfe-917f-ca35c138209a | 726402-infra01-neutron-agents-container-fc937477 | True           | :-)   | standby  |
  | 7f0888ba-1e8a-4a36-8394-6448b8c606fb | 726403-infra02-neutron-agents-container-0338af5a | True           | :-)   | standby  |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+

  A traceback related to a message timeout was observed in the logs,
  probably due to the loss of the AMQP connection on infra03:

  2016-12-07 15:22:30.525 18450 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 172.29.237.155:5671 is unreachable: timed out. Trying again in 1

[Yahoo-eng-team] [Bug 1657843] [NEW] TypeError: coercing to Unicode: need string or buffer, NoneType found

2017-01-19 Thread Travis Tripp
Public bug reported:

Using a brand new clone of horizon with tox -e runserver, I will always
get the following error the first time I try to go directly to URLs like:

http://127.0.0.1:8000/identity/

Once I change it to anything else, it works.

Environment:


Request Method: GET
Request URL: http://127.0.0.1:8000/identity/

Django Version: 1.8.17
Python Version: 2.7.10
Installed Applications:
['openstack_dashboard.dashboards.project',
 'searchlight_ui',
 'openstack_dashboard.dashboards.admin',
 'openstack_dashboard.dashboards.identity',
 'openstack_dashboard.dashboards.settings',
 'openstack_dashboard',
 'django.contrib.contenttypes',
 'django.contrib.auth',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles',
 'django.contrib.humanize',
 'django_pyscss',
 'openstack_dashboard.django_pyscss_fix',
 'compressor',
 'horizon',
 'openstack_auth']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.sessions.middleware.SessionMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'horizon.middleware.OperationLogMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
 'horizon.middleware.HorizonMiddleware',
 'horizon.themes.ThemeMiddleware',
 'django.middleware.locale.LocaleMiddleware',
 'django.middleware.clickjacking.XFrameOptionsMiddleware',
 
'openstack_dashboard.contrib.developer.profiler.middleware.ProfilerClientMiddleware',
 'openstack_dashboard.contrib.developer.profiler.middleware.ProfilerMiddleware')


Traceback:
File 
"/Users/ttripp/dev/openstack/horizon/.tox/runserver/lib/python2.7/site-packages/django/core/handlers/base.py"
 in get_response
  132. response = wrapped_callback(request, *callback_args, 
**callback_kwargs)
File "/Users/ttripp/dev/openstack/horizon/horizon/decorators.py" in dec
  36. return view_func(request, *args, **kwargs)
File "/Users/ttripp/dev/openstack/horizon/horizon/decorators.py" in dec
  52. return view_func(request, *args, **kwargs)
File "/Users/ttripp/dev/openstack/horizon/horizon/decorators.py" in dec
  36. return view_func(request, *args, **kwargs)
File 
"/Users/ttripp/dev/openstack/horizon/.tox/runserver/lib/python2.7/site-packages/django/views/generic/base.py"
 in view
  71. return self.dispatch(request, *args, **kwargs)
File 
"/Users/ttripp/dev/openstack/horizon/.tox/runserver/lib/python2.7/site-packages/django/views/generic/base.py"
 in dispatch
  89. return handler(request, *args, **kwargs)
File "/Users/ttripp/dev/openstack/horizon/horizon/tables/views.py" in get
  219. handled = self.construct_tables()
File "/Users/ttripp/dev/openstack/horizon/horizon/tables/views.py" in 
construct_tables
  210. handled = self.handle_table(table)
File "/Users/ttripp/dev/openstack/horizon/horizon/tables/views.py" in 
handle_table
  123. data = self._get_data_dict()
File "/Users/ttripp/dev/openstack/horizon/horizon/tables/views.py" in 
_get_data_dict
  247. self._data = {self.table_class._meta.name: self.get_data()}
File 
"/Users/ttripp/dev/openstack/horizon/openstack_dashboard/dashboards/identity/projects/views.py"
 in get_data
  84. self.request):
File "/Users/ttripp/dev/openstack/horizon/openstack_dashboard/policy.py" in 
check
  24. return policy_check(actions, request, target)
File 
"/Users/ttripp/dev/openstack/horizon/.tox/runserver/lib/python2.7/site-packages/openstack_auth/policy.py"
 in check
  147. enforcer = _get_enforcer()
File 
"/Users/ttripp/dev/openstack/horizon/.tox/runserver/lib/python2.7/site-packages/openstack_auth/policy.py"
 in _get_enforcer
  57. if os.path.isfile(enforcer.policy_path):
File 
"/Users/ttripp/dev/openstack/horizon/.tox/runserver/lib/python2.7/genericpath.py"
 in isfile
  37. st = os.stat(path)

Exception Type: TypeError at /identity/
Exception Value: coercing to Unicode: need string or buffer, NoneType found
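
The immediate failure is os.path.isfile() being handed None. A hedged
sketch of one possible guard (illustrative only; the real fix would
belong in openstack_auth's _get_enforcer):

    import os.path

    def policy_file_exists(policy_path):
        # policy_path can be None before the setting is populated, and
        # os.stat(None) raises the TypeError seen in the traceback above.
        return policy_path is not None and os.path.isfile(policy_path)

    assert policy_file_exists(None) is False  # no TypeError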

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1657843

Title:
  TypeError: coercing to Unicode: need string or buffer, NoneType found

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using a brand new clone of horizon with tox -e runserver, I will always
  get the following error the first time I try to go directly to URLs
  like:

  http://127.0.0.1:8000/identity/

  Once I change it to anything else, it works.

  Environment:

  
  Request Method: GET
  Request URL: http://127.0.0.1:8000/identity/

  Django Version: 1.8.17
  Python Version: 2.7.10
  Installed Applications:
  ['openstack_dashboard.dashboards.project',
   'searchlight_ui',
   

[Yahoo-eng-team] [Bug 1657814] [NEW] Incorrect DNS assignment

2017-01-19 Thread Volodymyr Litovka
Public bug reported:

Dear friends,

the problem is the following: Neutron assigns the global "dns_domain"
value (specified in neutron.conf) to ports in a project's network
regardless of the "dns_domain" value specified in the network
definition, and incorrectly updates dnsmasq accordingly.

# cat /etc/neutron/neutron.conf |grep dns_domain
dns_domain = yo.

# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep dns
extension_drivers = port_security,dns


# openstack network show ctl-net
+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| availability_zone_hints   |  |
| availability_zones| nova |
| created_at| 2017-01-19T15:19:12Z |
| description   |  |
| dns_domain| ctl.ams. | <--- specific domain
| id| c1d688da-a92e-4fc0-bdc5-d9361a8e15b6 |
| ipv4_address_scope| None |
| ipv6_address_scope| None |
| mtu   | 1450 |
| name  | ctl-net  |
| port_security_enabled | True |
| project_id| 1a69577c17884582861ec3904cd957cf |
| project_id| 1a69577c17884582861ec3904cd957cf |
| provider:network_type | vxlan|
| provider:physical_network | None |
| provider:segmentation_id  | 101  |
| revision_number   | 6|
| router:external   | Internal |
| shared| False|
| status| ACTIVE   |
| subnets   | a9294448-9ebb-42be-8597-b206f3efc018 |
| tags  | []   |
| updated_at| 2017-01-19T15:19:14Z |
+---+--+
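
Given the network above, the expected behaviour is that the per-network
dns_domain takes precedence over the global one from neutron.conf. A
hedged sketch of that expectation (illustrative only, not Neutron's
actual code); the port-create output below shows the global domain being
used instead:

    def expected_fqdn(dns_name, network_dns_domain, global_dns_domain):
        # Both domains are stored with a trailing dot ('ctl.ams.', 'yo.').
        domain = network_dns_domain or global_dns_domain
        return '%s.%s' % (dns_name, domain)

    assert expected_fqdn('poi', 'ctl.ams.', 'yo.') == 'poi.ctl.ams.'
    # Actual result below: "fqdn": "poi.yo.", i.e. the global domain won.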

# neutron port-create --name poi --dns-name poi ctl-net
Created a new port:
+-----------------------+--------------------------------------------------------------------+
| Field                 | Value                                                              |
+-----------------------+--------------------------------------------------------------------+
| admin_state_up        | True                                                               |
| allowed_address_pairs |                                                                    |
| binding:host_id       |                                                                    |
| binding:profile       | {}                                                                 |
| binding:vif_details   | {}                                                                 |
| binding:vif_type      | unbound                                                            |
| binding:vnic_type     | normal                                                             |
| created_at            | 2017-01-19T15:34:32Z                                               |
| description           |                                                                    |
| device_id             |                                                                    |
| device_owner          |                                                                    |
| dns_assignment        | {"hostname": "poi", "ip_address": "10.16.1.21", "fqdn": "poi.yo."} | <- YO!
| dns_name              | poi                                                                |
| extra_dhcp_opts       |                                                                    |
| fixed_ips             | {"subnet_id": "a9294448-9ebb-42be-8597-b206f3efc018",              |
|                       |  "ip_address": "10.16.1.21"}                                       |
| id                    | 5f002d62-267f-4470-a6af-c14119b1de61                               |
| mac_address           | fa:16:3e:e7:a5:f8                                                  |
| name                  | poi                                                                |
| network_id            | c1d688da-a92e-4fc0-bdc5-d9361a8e15b6                               |
| port_security_enabled | True                                                               |
| project_id            | 1a69577c17884582861ec3904cd957cf                                   |
| revision_number       | 6                                                                  |
| security_groups

[Yahoo-eng-team] [Bug 1657791] [NEW] cpu pinning fails with 1 sibling cpu set and 2 single cpu set

2017-01-19 Thread Ritesh
Public bug reported:

Description
===

CPU pinning fails when the available sibling sets consist of single
vCPUs plus one full sibling pair. For example,

CoercedSet([45]), CoercedSet([31]), CoercedSet([1, 25])

Consider a case where a compute node's NUMA node 1 has 4 vCPUs and 16GB
RAM available for use.

When you try to launch an instance with a flavor of 4 vCPUs and 16GB
RAM, it fails to launch, i.e. "message": "No valid host was found.
There are not enough hosts available."

Total Available VCPUS 
1,3,5,7,9,11,13,15,17,19,21,25,27,29,31,33,35,37,39,41,43,45 
Used VCPUS 
3,5,7,9,11,13,15,17,19,21,27,29,33,35,37,39,41,43 
Free VCPUS
1,25,31,45


VCPUS STATISTICS 
Total : 22
Usage : 18
Free  : 4

siblings=set([21,45]),set([7,31]),set([15,39]),set([11,35]),set([17,41]),set([3,27]),set([13,37]),set([9,33]),set([19,43]),set([5,29]),set([1,25])])

vCPUs 1, 25, 31 and 45 are available to use, but nova is not able to
use them.

As 31 and 45 are not siblings, Nova does not consider them when booting
the VM, so the NUMA Topology Filter returns 0 available hosts.
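
A simplified sketch of why the filter rejects the host (illustrative,
not nova's actual packing code): if placement only consumes whole
sibling pairs, the free set {1, 25, 31, 45} contains just one usable
pair, so a 4-vCPU request cannot be satisfied even though 4 pCPUs are
free:

    free = {1, 25, 31, 45}
    siblings = [{21, 45}, {7, 31}, {1, 25}]  # abbreviated sibling map

    usable_pairs = [pair for pair in siblings if pair <= free]
    usable_cpus = sum(len(pair) for pair in usable_pairs)
    print(usable_cpus)  # 2 < 4, hence "No valid host was found"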

Steps to reproduce
==

We have 22 VCPUS available for Pinning in NUMA NODE 1.

1. Launch an instance with 1 VCPU
2. Launch an instance with 1 VCPU
3. Launch an instance with 8 VCPU
4. Launch an instance with 8 VCPU 
5. Launch an instance with 8 VCPU


Expected result
===

Instance should be in active state

Actual result
=

Instance is in Error state. 
| fault| {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 397, in build_instances |


Environment
===
1. Exact version of OpenStack you are running. 

   Mitaka Stable
  
2. Which hypervisor did you use?
   Libvirt + KVM

2. Which storage type did you use?
   None

3. Which networking type did you use?
   Neutron with OpenVSwitch

** Affects: nova
 Importance: Low
 Assignee: Ritesh (rsritesh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657791

Title:
  cpu pinning fails with 1 sibling cpu set and 2 single cpu set

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  CPU pinning fails when the available sibling sets consist of single
  vCPUs plus one full sibling pair. For example,

  CoercedSet([45]), CoercedSet([31]), CoercedSet([1, 25])

  Consider a case where a compute node's NUMA node 1 has 4 vCPUs and 16GB
  RAM available for use.

  When you try to launch an instance with a flavor of 4 vCPUs and 16GB
  RAM, it fails to launch, i.e. "message": "No valid host was found.
  There are not enough hosts available."

  Total Available VCPUS 
  1,3,5,7,9,11,13,15,17,19,21,25,27,29,31,33,35,37,39,41,43,45 
  Used VCPUS 
  3,5,7,9,11,13,15,17,19,21,27,29,33,35,37,39,41,43 
  Free VCPUS
  1,25,31,45

  
  VCPUS STATISTICS 
  Total : 22
  Usage : 18
  Free  : 4

  
siblings=set([21,45]),set([7,31]),set([15,39]),set([11,35]),set([17,41]),set([3,27]),set([13,37]),set([9,33]),set([19,43]),set([5,29]),set([1,25])])

  vCPUs 1, 25, 31 and 45 are available to use, but nova is not able to
  use them.

  As 31 and 45 are not siblings, Nova does not consider them when booting
  the VM, so the NUMA Topology Filter returns 0 available hosts.

  Steps to reproduce
  ==

  We have 22 VCPUS available for Pinning in NUMA NODE 1.

  1. Launch an instance with 1 VCPU
  2. Launch an instance with 1 VCPU
  3. Launch an instance with 8 VCPU
  4. Launch an instance with 8 VCPU 
  5. Launch an instance with 8 VCPU

  
  Expected result
  ===

  Instance should be in active state

  Actual result
  =

  Instance is in Error state. 
  | fault| {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 397, in build_instances |

  
  Environment
  ===
  1. Exact version of OpenStack you are running. 

 Mitaka Stable

  2. Which hypervisor did you use?
 Libvirt + KVM

  2. Which storage type did you use?
 None

  3. Which networking type did you use?
 Neutron with OpenVSwitch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1657791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459402] Re: Conceptual overview of the Keystone service catalog

2017-01-19 Thread Dolph Mathews
** Project changed: openstack-manuals => keystone

** Changed in: keystone
Milestone: ocata => None

** Changed in: keystone
 Assignee: Alexandra Settle (alexandra-settle) => Dolph Mathews (dolph)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1459402

Title:
  Conceptual overview of the Keystone service catalog

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  As a result of a cross-project design session on the service catalog
  [1] at the OpenStack Liberty design summit in Vancouver, I wrote a
  commentary on a few key aspects of the service catalog that would
  certainly find a better home if disseminated into relevant docs:

http://dolphm.com/openstack-keystone-service-catalog/

  [1] https://etherpad.openstack.org/p/service-catalog-cross-project-
  vancouver

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657774] [NEW] Nova does not re-raise 401 Unauthorized received from Neutron for admin users

2017-01-19 Thread Roman Podoliaka
Public bug reported:

Description
===

If a Keystone token issued for an admin user (e.g. ceilometer) is expired
or revoked right after it's been validated by
keystoneauthtoken_middleware in nova-api, but before it's validated by
the very same middleware in neutron-server, nova-api will respond with
400 Bad Request instead of the expected 401 Unauthorized, which would let
the original request be properly retried after re-authentication.


Steps to reproduce
==

The condition described above is easy to reproduce synthetically by
putting breakpoints into Nova code and revoking a token. One can
reproduce the very same problem in real life by running enough
ceilometer polling agents.

Make sure you use credentials of an admin user (e.g. admin or ceilometer
in Devstack) and have at least 1 instance running (so that `nova list`
triggers an HTTP request to neutron-server).

1. Put a breakpoint on entering get_client() nova/network/neutronv2/api.py
2. Do `nova list`
3. Revoke the issued token with `openstack token revoke $token` (you may 
also need to restart memcached to make sure token validation result is not 
cached)
4. Continue execution of nova-api

Expected result
===

As the token is now invalid (expired or revoked), it's expected that nova-
api responds with 401 Unauthorized, so that a client can handle this,
re-authenticate and retry the original request.

Actual result
=

nova-api responds with 400 Bad Request and outputs the following error
into logs

2017-01-19 15:02:09.952 595 ERROR nova.network.neutronv2.api 
[req-0c1558f5-9cc8-4411-9fb1-2fe7cb232725 admin admin] Neutron client was not 
able
 to generate a valid admin token, please verify Neutron admin credential 
located in nova.conf

Environment
===

Devstack, master (Ocata), nova HEAD at
da54487edad28c87accbf6439471e7341b52ff48
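
A hedged sketch of the expected handling (illustrative; not the actual
nova patch): when neutronclient raises Unauthorized for the user's token,
nova-api should surface 401 rather than translating it into a 400 about
admin credentials:

    from neutronclient.common import exceptions as neutron_exc
    from webob import exc as webob_exc

    def translate_neutron_auth_error(error):
        if isinstance(error, neutron_exc.Unauthorized):
            # The token expired or was revoked mid-request: let the
            # client re-authenticate and retry the original request.
            raise webob_exc.HTTPUnauthorized()
        raise error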

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress


** Tags: api neutron

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Tags added: api neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657774

Title:
  Nova does not re-raise 401 Unauthorized received from Neutron for
  admin users

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  If a Keystone token issued for an admin user (e.g. ceilometer) is
  expired or revoked right after it's been validated by
  keystoneauthtoken_middleware in nova-api, but before it's validated by
  the very same middleware in neutron-server, nova-api will respond with
  400 Bad Request instead of the expected 401 Unauthorized, which would
  let the original request be properly retried after re-authentication.

  
  Steps to reproduce
  ==

  The condition described above is easy to reproduce synthetically by
  putting breakpoints into Nova code and revoking a token. One can
  reproduce the very same problem in real life by running enough
  ceilometer polling agents.

  Make sure you use credentials of an admin user (e.g. admin or
  ceilometer in Devstack) and have at least 1 instance running (so that
  `nova list` triggers an HTTP request to neutron-server).

  1. Put a breakpoint on entering get_client() nova/network/neutronv2/api.py
  2. Do `nova list`
  3. Revoke the issued token with `openstack token revoke $token` (you may 
also need to restart memcached to make sure token validation result is not 
cached)
  4. Continue execution of nova-api

  Expected result
  ===

  As the token is now invalid (expired or revoked), it's expected that nova-
  api responds with 401 Unauthorized, so that a client can handle this,
  re-authenticate and retry the original request.

  Actual result
  =

  nova-api responds with 400 Bad Request and outputs the following error
  into logs

  2017-01-19 15:02:09.952 595 ERROR nova.network.neutronv2.api 
[req-0c1558f5-9cc8-4411-9fb1-2fe7cb232725 admin admin] Neutron client was not 
able
   to generate a valid admin token, please verify Neutron admin credential 
located in nova.conf

  Environment
  ===

  Devstack, master (Ocata), nova HEAD at
  da54487edad28c87accbf6439471e7341b52ff48

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1657774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647914] Re: Cannot use minimized_polling when hypervisor is XenServer

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407813
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=71edbb0d7de06458bb1be345e13841ad4e7c2bb9
Submitter: Jenkins
Branch:master

commit 71edbb0d7de06458bb1be345e13841ad4e7c2bb9
Author: Huan Xie 
Date:   Wed Dec 7 02:13:19 2016 +

Support ovsdb-client monitor with remote connection

When using XenServer as the hypervisor, we found we cannot run the ovs
agent on the compute node with "minimize_polling=True", because
once it is set to True, the ovs agent will monitor the local
ovsdb, but we want it to monitor the ovsdb in Dom0.
This patch adds support for ovsdb-client to monitor a remote
connection.

Change-Id: Idf2a564e96626dab3c4421a1f9139ed9ffcbcab1
Closes-bug: 1647914
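
A minimal sketch of the idea (the argument layout is illustrative; see
ovsdb-client(1) for the exact syntax): let the monitor command target a
remote OVSDB, such as the one in XenServer's Dom0, instead of the local
one:

    def ovsdb_monitor_cmd(columns, connection=None):
        cmd = ['ovsdb-client', 'monitor']
        if connection:
            # e.g. 'tcp:169.254.0.1:6640' to reach Dom0 from a DomU agent
            cmd.append(connection)
        cmd += ['Interface', ','.join(columns), '--format=json']
        return cmd

    print(ovsdb_monitor_cmd(['name', 'ofport', 'external_ids']))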


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647914

Title:
  Cannot use minimized_polling when hypervisor is XenServer

Status in neutron:
  Fix Released

Bug description:
  Env:
  XenServer as hypervisor
  Neutron ML2 use ovs agent

  When using XenServer as the hypervisor, the ovs agent running on the
  compute node cannot set
  [agent]
  minimize_polling = True

  See related logs:

  on/api/rpc/callbacks/resource_manager.py:74
  2016-12-07 02:28:55.856 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Agent initialized 
successfully, now running... 
  2016-12-07 02:28:55.857 DEBUG neutron.agent.linux.async_process 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Launching async process 
[ovsdb-client monitor Interface name,ofport,external_ids --format=json]. from 
(pid=6224) start /opt/stack/neutron/neutron/agent/linux/async_process.py:110
  2016-12-07 02:28:55.857 DEBUG neutron.agent.linux.utils 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Running command: 
['/usr/local/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovsdb-client', 'monitor', 'Interface', 'name,ofport,external_ids', 
'--format=json'] from (pid=6224) create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:92
  2016-12-07 02:28:55.879 DEBUG neutron.agent.linux.utils 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Running command: ['ps', 
'--ppid', '6253', '-o', 'pid='] from (pid=6224) create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:92

  2016-12-07 02:29:00.863 DEBUG neutron.agent.linux.utils 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Running command: ['ps', 
'--ppid', '6253', '-o', 'pid='] from (pid=6224) create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:92
  2016-12-07 02:29:00.976 ERROR ryu.lib.hub 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] hub: uncaught exception: 
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in 
_launch
  return func(*args, **kwargs)
File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 42, in agent_main_wrapper
  ovs_agent.main(bridge_classes)
File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2154, in main
  agent.daemon_loop()
File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 
154, in wrapper
  return f(*args, **kwargs)
File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2073, in daemon_loop
  self.ovsdb_monitor_respawn_interval) as pm:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  return self.gen.next()
File "/opt/stack/neutron/neutron/agent/linux/polling.py", line 35, in 
get_polling_manager
  pm.start()
File "/opt/stack/neutron/neutron/agent/linux/polling.py", line 57, in start
  self._monitor.start(block=True)
File "/opt/stack/neutron/neutron/agent/linux/ovsdb_monitor.py", line 117, 
in start
  while not self.is_active():
File "/opt/stack/neutron/neutron/agent/linux/async_process.py", line 101, 
in is_active
  self.pid, self.cmd_without_namespace)
File "/opt/stack/neutron/neutron/agent/linux/async_process.py", line 160, 
in pid
  run_as_root=self.run_as_root)
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 246, in 
get_root_helper_child_pid
  pid = find_child_pids(pid)[0]
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 179, in 
find_child_pids
  log_fail_as_error=False)
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 127, in execute
  _stdout, _stderr = obj.communicate(_process_input)
File "/usr/lib/python2.7/subprocess.py", line 799, in communicate
  return self._communicate(input)
File "/usr/lib/python2.7/subprocess.py", line 1403, in _communicate
  

[Yahoo-eng-team] [Bug 1459402] [NEW] Conceptual overview of the Keystone service catalog

2017-01-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

As a result of a cross-project design session on the service catalog [1]
at the OpenStack Liberty design summit in Vancouver, I wrote a
commentary on a few key aspects of the service catalog that would
certainly find a better home if disseminated into relevant docs:

  http://dolphm.com/openstack-keystone-service-catalog/

[1] https://etherpad.openstack.org/p/service-catalog-cross-project-
vancouver

** Affects: keystone
 Importance: Wishlist
 Assignee: Dolph Mathews (dolph)
 Status: Confirmed


** Tags: keystone
-- 
Conceptual overview of the Keystone service catalog
https://bugs.launchpad.net/bugs/1459402
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Identity (keystone).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657668] Re: Grammar error in LoadBalancerPluginv2

2017-01-19 Thread Darek Smigiel
*** This bug is a duplicate of bug 1607585 ***
https://bugs.launchpad.net/bugs/1607585

1912294464, you're right. It's a bug, although it's a duplicate.

** Project changed: neutron => octavia

** This bug has been marked a duplicate of bug 1607585
   neutron server report error when get loadbalance status

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657668

Title:
  Grammar error in LoadBalancerPluginv2

Status in octavia:
  New

Bug description:
  Hi
  I use OpenStack Mitaka with Ubuntu 14.04.
  I found a grammar error.

  /usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/plugin.py   line 1344

  Should 'self' be passed explicitly when invoking the '_set_degraded' method?

  1340 if (pool_status["id"] not in
  1341 [ps["id"] for ps in lb_status["pools"]]):
  1342 lb_status["pools"].append(pool_status)
  1343 if self._is_degraded(curr_listener.default_pool):
  1344 self._set_degraded(self, listener_status, lb_status)
  1345 members = curr_listener.default_pool.members
  1346 for curr_member in members:
  1347 if not curr_member.admin_state_up:
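
  Since _set_degraded is called on self, Python already passes the
  instance implicitly, so the extra first argument is a bug. A corrected
  line 1344 would presumably read:

  1344 self._set_degraded(listener_status, lb_status)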

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1657668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657761] Re: neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_*(native) fail randomly

2017-01-19 Thread Miguel Angel Ajo
** Project changed: nova => neutron

** Summary changed:

- 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_*(native)
 fail randomly
+ ovsdb native interface timeouts sometimes causing random functional failures

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657761

Title:
  ovsdb native interface timeouts sometimes causing random functional
  failures

Status in neutron:
  New

Bug description:
  Functional tests 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_*(native)
 fail randomly with:
   
  Traceback (most recent call last):
File "neutron/tests/base.py", line 115, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/test_l2_ovs_agent.py", line 262, in 
test_assert_pings_during_br_phys_setup_not_lost_in_vlan_to_flat
  self._test_assert_pings_during_br_phys_setup_not_lost(provider_net)
File "neutron/tests/functional/agent/test_l2_ovs_agent.py", line 299, in 
_test_assert_pings_during_br_phys_setup_not_lost
  self.agent.setup_physical_bridges(self.agent.bridge_mappings)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/osprofiler/profiler.py",
 line 154, in wrapper
  return f(*args, **kwargs)
File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 1080, in setup_physical_bridges
  ovs_bridges = ovs.get_bridges()
File "neutron/agent/common/ovs_lib.py", line 139, in get_bridges
  return self.ovsdb.list_br().execute(check_error=True)
File "neutron/agent/ovsdb/native/commands.py", line 43, in execute
  ctx.reraise = False
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "neutron/agent/ovsdb/native/commands.py", line 36, in execute
  txn.add(self)
File "neutron/agent/ovsdb/api.py", line 76, in __exit__
  self.result = self.commit()
File "neutron/agent/ovsdb/impl_idl.py", line 73, in commit
  'timeout': self.timeout})
  neutron.agent.ovsdb.api.TimeoutException: Commands [ListBridgesCommand()] 
exceeded timeout 10 seconds

  or

  ft39.16: 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_during_br_int_setup_not_lost(native)_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 

[Yahoo-eng-team] [Bug 1657761] [NEW] neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_*(native) fail randomly

2017-01-19 Thread Miguel Angel Ajo
Public bug reported:

Functional tests 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_*(native)
 fail randomly with:
 
Traceback (most recent call last):
  File "neutron/tests/base.py", line 115, in func
return f(self, *args, **kwargs)
  File "neutron/tests/functional/agent/test_l2_ovs_agent.py", line 262, in 
test_assert_pings_during_br_phys_setup_not_lost_in_vlan_to_flat
self._test_assert_pings_during_br_phys_setup_not_lost(provider_net)
  File "neutron/tests/functional/agent/test_l2_ovs_agent.py", line 299, in 
_test_assert_pings_during_br_phys_setup_not_lost
self.agent.setup_physical_bridges(self.agent.bridge_mappings)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/osprofiler/profiler.py",
 line 154, in wrapper
return f(*args, **kwargs)
  File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 1080, in setup_physical_bridges
ovs_bridges = ovs.get_bridges()
  File "neutron/agent/common/ovs_lib.py", line 139, in get_bridges
return self.ovsdb.list_br().execute(check_error=True)
  File "neutron/agent/ovsdb/native/commands.py", line 43, in execute
ctx.reraise = False
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "neutron/agent/ovsdb/native/commands.py", line 36, in execute
txn.add(self)
  File "neutron/agent/ovsdb/api.py", line 76, in __exit__
self.result = self.commit()
  File "neutron/agent/ovsdb/impl_idl.py", line 73, in commit
'timeout': self.timeout})
neutron.agent.ovsdb.api.TimeoutException: Commands [ListBridgesCommand()] 
exceeded timeout 10 seconds

or

ft39.16: 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_pings_during_br_int_setup_not_lost(native)_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
   DEBUG [stevedore.extension] found extension 

[Yahoo-eng-team] [Bug 1657757] [NEW] py35 unit tests are hugely failing

2017-01-19 Thread Sylvain Bauza
Public bug reported:

We have a long list of Nova unit tests that fail for the py35 job with
this pattern:

ft2.9: 
nova.tests.unit.api.openstack.compute.test_create_backup.CreateBackupTestsV21.test_create_backup_raises_conflict_on_invalid_state_StringException:
 Empty attachments:
  pythonlogging:''
  stderr
  stdout

traceback-1: {{{
Traceback (most recent call last):
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 125, in cleanUp
return self._cleanups(raise_errors=raise_first)
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/callmany.py",
 line 89, in __call__
reraise(error[0], error[1], error[2])
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/testtools/_compat3x.py",
 line 16, in reraise
raise exc_obj.with_traceback(exc_tb)
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/callmany.py",
 line 83, in __call__
cleanup(*args, **kwargs)
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 289, in VerifyAll
mock_obj._Verify()
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 543, in _Verify
raise ExpectedMethodCallsError(self._expected_calls_queue)
mox3.mox.ExpectedMethodCallsError: Verify: Expected methods never called:
  0.  function.__call__(, {}) -> None
}}}

Traceback (most recent call last):
  File 
"/home/jenkins/workspace/gate-nova-python35-db/nova/tests/unit/api/openstack/compute/test_create_backup.py",
 line 281, in test_create_backup_raises_conflict_on_invalid_state
exception_arg='createBackup')
  File 
"/home/jenkins/workspace/gate-nova-python35-db/nova/tests/unit/api/openstack/compute/admin_only_action_common.py",
 line 123, in _test_invalid_state
instance = self._stub_instance_get()
  File 
"/home/jenkins/workspace/gate-nova-python35-db/nova/tests/unit/api/openstack/compute/admin_only_action_common.py",
 line 40, in _stub_instance_get
self.context, uuid, expected_attrs=None).AndReturn(instance)
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 814, in __call__
return mock_method(*params, **named_params)
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 1152, in __call__
self._checker.Check(params, named_params)
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 1067, in Check
self._RecordArgumentGiven(arg_name, arg_status)
  File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 994, in _RecordArgumentGiven
raise AttributeError('%s provided more than once' % (arg_name,))
AttributeError: expected_attrs provided more than once

eg. http://logs.openstack.org/34/418134/7/check/gate-nova-python35-db/578aaff/testr_results.html.gz
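
One common way out of such mox3/py35 incompatibilities is converting the
affected tests to unittest.mock, which handles keyword arguments cleanly.
A self-contained illustration (not the actual nova fix; the names are
made up):

    from unittest import mock

    class FakeAPI(object):
        def get(self, context, uuid, expected_attrs=None):
            raise AssertionError('should be stubbed out in tests')

    with mock.patch.object(FakeAPI, 'get') as mock_get:
        mock_get.return_value = 'fake-instance'
        instance = FakeAPI().get('ctx', 'uuid-1', expected_attrs=None)
        assert instance == 'fake-instance'
        mock_get.assert_called_once_with('ctx', 'uuid-1', expected_attrs=None)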

** Affects: nova
 Importance: Critical
 Status: New


** Tags: gate-failure testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657757

Title:
  py35 unit tests are hugely failing

Status in OpenStack Compute (nova):
  New

Bug description:
  We have a long list of Nova unit tests that fail for the py35 job with
  this pattern:

  ft2.9: 
nova.tests.unit.api.openstack.compute.test_create_backup.CreateBackupTestsV21.test_create_backup_raises_conflict_on_invalid_state_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  traceback-1: {{{
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 125, in cleanUp
  return self._cleanups(raise_errors=raise_first)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/callmany.py",
 line 89, in __call__
  reraise(error[0], error[1], error[2])
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/testtools/_compat3x.py",
 line 16, in reraise
  raise exc_obj.with_traceback(exc_tb)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/fixtures/callmany.py",
 line 83, in __call__
  cleanup(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 289, in VerifyAll
  mock_obj._Verify()
File 
"/home/jenkins/workspace/gate-nova-python35-db/.tox/py35/lib/python3.5/site-packages/mox3/mox.py",
 line 543, in _Verify
  raise ExpectedMethodCallsError(self._expected_calls_queue)
  

[Yahoo-eng-team] [Bug 1602422] Re: Many tracebacks building keystone docs

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/421468
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=9785f6a006ad0ef9d6a9a6e6436895401b0cbf7f
Submitter: Jenkins
Branch:master

commit 9785f6a006ad0ef9d6a9a6e6436895401b0cbf7f
Author: Steve Martinelli 
Date:   Tue Jan 17 14:35:20 2017 -0500

switch @hybrid_property to @property

The purpose of hybrid is so that you can use your attribute at the
class level in a query. At the instance level, it is identical to
@property

A hybrid property is only needed when referenced at the class level,
for instance.

class Wozzle(...)

  @hybrid_property
  def foo(..)

If referencing Wozzle.foo, then you need a hybrid property, otherwise
a regular @property will work just fine.

Change-Id: Ifd3ff9a888919e09b80e87282af829bc14693369
Closes-Bug: 1602422
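
A small sketch of the distinction (the model and attribute names are
illustrative):

    from sqlalchemy import Boolean, Column, Integer
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.ext.hybrid import hybrid_property

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        _enabled = Column('enabled', Boolean)

        @hybrid_property
        def enabled(self):
            return self._enabled

    # Class-level use is what requires the hybrid:
    #     session.query(User).filter(User.enabled == True)
    # If 'enabled' is only ever read on instances, a plain @property
    # behaves identically and is sufficient.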


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1602422

Title:
  Many tracebacks building keystone docs

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When building keystone's documentation with tox -e venv -- python
  setup.py build_sphinx there are many errors/tracebacks. You can see
  examples at http://logs.openstack.org/86/320586/61/check/gate-
  keystone-docs/e10418b/console.html#_2016-07-12_18_34_14_520037.

  To reproduce run:

tox -e venv -- python setup.py build_sphinx

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1602422/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633066] Re: Input int value into service_type list hit internal error

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/386086
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a594df67a73a73dc6d26b62b29aa93dfb5f8d00c
Submitter: Jenkins
Branch:master

commit a594df67a73a73dc6d26b62b29aa93dfb5f8d00c
Author: ZhaoBo 
Date:   Fri Oct 14 00:52:27 2016 +0800

Catch invalid subnet service_types on input

_validate_subnet_service_types function does not consider the
data type of service_type on creation or update. For example,
if an integer type is passed, the neutron-server will hit
internal error.

This patch adds a check to make sure the data type is unicode.

Change-Id: I5e6d15def3e23f51172b69e1287ff18ab5d3f6aa
Closes-Bug: #1633066
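
A hedged sketch of the validation idea (illustrative only; the real
check is in the commit above):

    import six

    def validate_subnet_service_types(service_types):
        for service_type in service_types or []:
            if not isinstance(service_type, six.string_types):
                raise ValueError('Invalid service type %r: must be a '
                                 'string, e.g. "network:floatingip"'
                                 % (service_type,))

    validate_subnet_service_types(['network:1'])       # passes
    # validate_subnet_service_types(['network:1', 2])  # raises ValueError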


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1633066

Title:
  Input int value into service_type list hit internal error

Status in neutron:
  Fix Released

Bug description:
  I use the REST API to create/update a subnet with service_types,
  passing some int type values in the list.

  The creation body is:
  {
"subnet": {
"network_id": "0d04102a-ba15-4d6c-94ee-8ac480cbb1ba",
"name": "hellowor",
"cidr": "99.99.99.99/24",
"service_types" : ["network:1",2,3],
"ip_version": 4
}
  }

  The neutron server hit an internal error.
  2016-10-13 17:44:27.842 ERROR neutron.api.v2.resource 
[req-b9846c07-3574-4094-b6fe-ca978c1d31ce admin 
488da3aab0ff45df9e85e17e7f89fedd] create failed: No details.
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 430, in create
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 88, in wrapped
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource self.force_reraise()
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 84, in wrapped
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource self.force_reraise()
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 124, in wrapped
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource 
traceback.format_exc())
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource self.force_reraise()
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-10-13 17:44:27.842 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 119, in wrapped
  2016-10-13 

[Yahoo-eng-team] [Bug 1586009] Re: Requesting stable/mitaka release for networking-onos

2017-01-19 Thread vikram.choudhary
** No longer affects: networking-onos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586009

Title:
  Requesting stable/mitaka release for networking-onos

Status in neutron:
  Fix Released

Bug description:
  Could you please tag and release networking-onos, as it currently
  stands at:

  https://git.openstack.org/cgit/openstack/networking-onos

  root1@openstack:~/mydata/projects/networking-onos$ git log -1
  commit 871248650a8823d12a7be1e25d4803b298a8e81f
  Author: OpenStack Proposal Bot 
  Date:   Sat Apr 30 18:04:56 2016 +

  Updated from global requirements
  
  Change-Id: If2dd10f2451f4b52b4e419427ff6deb58961cb68

  This will be the second release of networking-onos, so I guess it will
  be 2.0.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657481] Re: unused import

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/421993
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=7116453b404aa46d31ad973a7dd15f163927fa70
Submitter: Jenkins
Branch:master

commit 7116453b404aa46d31ad973a7dd15f163927fa70
Author: liaozd 
Date:   Wed Jan 18 22:58:19 2017 +0800

delete unused import

Change-Id: I6aff25cc35acb88e8763efe2575c2d967670fda7
Closes-Bug: 1657481


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1657481

Title:
  unused import

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  unused import horizon/management/commands/startdash.py:14

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1657481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583769] Re: DHCP agent still refers to deprecated dhcp_domain

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/406243
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6578c531ad2452134760038524bbc0b26efce319
Submitter: Jenkins
Branch:master

commit 6578c531ad2452134760038524bbc0b26efce319
Author: Brian Haley 
Date:   Fri Dec 2 12:03:45 2016 -0500

Remove deprecated dhcp_domain from dhcp_agent.ini

The dhcp_domain value in dhcp_agent.ini value had been
marked for deprecation in favor of using dns_domain in
neutron.conf back in Liberty [1].  Remove it.

[1] https://review.openstack.org/#/c/200952/

Change-Id: Iebde452559f88ca95713664136da1613cac9b32c
Closes-bug: #1583769
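
A minimal sketch of the intent (illustrative, not the literal dhcp.py
change): derive dnsmasq's --domain argument from the newer dns_domain
option rather than the deprecated dhcp_domain:

    def dnsmasq_domain_arg(conf):
        # Fall back to the historical default only if dns_domain is unset.
        domain = getattr(conf, 'dns_domain', None) or 'openstacklocal'
        return '--domain=%s' % domain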


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583769

Title:
  DHCP agent still refers to deprecated dhcp_domain

Status in neutron:
  Fix Released

Bug description:
  DESCRIPTION:

  VMs were coming up with a hostname configured as
  "vm_name.openstacklocal" despite having configured a dns_domain in
  neutron.conf. However, the DHCP host file had the correct entry with
  the domain, "vm_name.testdomain.net". Also, ping connectivity worked
  using just DNS name, and with testdomain.net

  The agent was using dhcp_domain (a deprecated field) when spawning the
  dnsmasq process with a "--domain" argument. If dhcp_domain is never
  set in the dhcp_agent.ini file, it picks up the default value
  openstacklocal. The DHCP hosts file is correct because the domain is
  sent as the dns_assignment field in the neutron port; the Neutron
  server correctly uses the newer dns_domain, and the port's
  dns_name/assignment is not used when spawning dnsmasq.

  Setting dhcp_domain in dhcp_agent.ini fixed the issue locally. But
  since this field has been deprecated since Liberty, all references to
  dhcp_domain should be changed to dns_domain.
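
  As a purely illustrative sketch (not the actual neutron code; the
  option attributes here are assumptions), the agent-side behaviour
  during the deprecation period amounts to preferring the newer option
  and falling back to the deprecated one when building the dnsmasq
  command line:

      # Hypothetical helper: prefer dns_domain (neutron.conf) over the
      # deprecated dhcp_domain (dhcp_agent.ini).
      def build_domain_args(conf):
          domain = getattr(conf, 'dns_domain', None) or \
              getattr(conf, 'dhcp_domain', None)
          return ['--domain=%s' % domain] if domain else []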

  I have fixed this locally on my deployment, which uses Liberty, and
  will apply the fix to master and send it out for review.

  
  Pre-conditions:

   - Have the neutron and nova side changes for setting dns_name and 
dns_assignment on the port (present in master)
   - Set the dns_domain in neutron.conf

  
  Reproduction Steps:

  Create a VM as normal. The port it gets attached to should have
  dns_name set according to the instance name, and dns_assignment
  should have the correct hostname and domain. Verify the DHCP host
  file on the network node has the correct entry.

  Then log in to the VM and run "hostname -f". Verify it displays with
  a .openstacklocal suffix.

  
  Version:

  Tested and discovered on Liberty, Centos 7.1 hosts and Centos 7.1 VMs.
  Upon code inspection, bug also exists in master
  (neutron/agent/linux/dhcp.py has a few references to conf.dhcp_domain)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1642223] Re: neutron-openvswitch-agent failed to add default table

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/415804
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0af6e6ded04c994b788836906d94fe097b581a0b
Submitter: Jenkins
Branch:master

commit 0af6e6ded04c994b788836906d94fe097b581a0b
Author: shihanzhang 
Date:   Fri Dec 30 14:23:12 2016 +0800

Change the order of installing flows for br-int

The ovs-agent uses the CANARY_TABLE to check ovs status. When the
ovs-agent restarts, it should first install the flows for the
CANARY_TABLE.

Closes-bug: #1642223
Change-Id: I2aebbe5faca2fd4ec137255f0413cc2c129a4588


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642223

Title:
  neutron-openvswitch-agent failed to add default table

Status in neutron:
  Fix Released

Bug description:
  Problem

  After powering the host off and back on, the tenant network is not
  available.

  The cause is that the default flow tables of br-int are not set up
  successfully when neutron-openvswitch-agent starts:

  1) The neutron-openvswitch-agent fails to add flow table 0 but
  adds flow table 23 successfully in setup_default_table(). The
  flows look as follows:

  cookie=0x8f4c30f934586d9c, duration=617166.781s, table=0, 
n_packets=31822416, n_bytes=2976996304, idle_age=0, hard_age=65534, 
priority=2,in_port=1 actions=drop
  cookie=0x8f4c30f934586d9c, duration=617167.023s, table=23, n_packets=0, 
n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
  cookie=0x8f4c30f934586d9c, duration=617167.007s, table=24, n_packets=0, 
n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop

  2) In the rpc_loop, the neutron-openvswitch-agent checks the
  ovs status by looking for flow table 23, which exists. The
  neutron-openvswitch-agent therefore thinks the ovs is normal, but
  flow table 0 does not exist and the network connection is not
  available.
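
  (For reference, the presence of each default table can be checked by
  hand; these are standard ovs-ofctl invocations:

      sudo ovs-ofctl dump-flows br-int table=0
      sudo ovs-ofctl dump-flows br-int table=23

  An empty result for table=0 alongside a populated table=23 matches
  the inconsistent state described above.)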

  Affected Neutron version:
  Newton

  Possible Solution:
  Check the default table 0 or check all the default flow tables in 
check_ovs_status().
  Or add the default flow table 23 first and then add the default table 0 in 
setup_default_table()
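
  A minimal sketch of the second option (the bridge and method names
  are assumptions, not the actual neutron code): install the canary
  flow before the other defaults, so that an OVS restart occurring
  mid-setup also wipes the canary and is then caught by the periodic
  check:

      def setup_default_table(br):
          # Canary first: if OVS restarts after this point, table 23
          # disappears as well, so check_ovs_status() sees the restart
          # and triggers a full resync instead of silently passing.
          br.add_flow(table=23, priority=0, actions='drop')
          br.add_flow(table=0, priority=0, actions='normal')
          br.add_flow(table=24, priority=0, actions='drop')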

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1642223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1650611] Re: dhcp agent reporting state as down during the initial sync

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/413010
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f15851b98974dc16606da195cf3ecee577cd0ef8
Submitter: Jenkins
Branch:master

commit f15851b98974dc16606da195cf3ecee577cd0ef8
Author: Bertrand Lallau 
Date:   Tue Dec 20 10:53:41 2016 +0100

DHCP: enhance DHCPAgent startup procedure

During the DhcpAgent startup procedure all of the following network
initialization is actually performed twice:
 * killing old dnsmasq processes
 * setting up and configuring all TAP interfaces
 * building all dnsmasq config files (lease and host files)
 * launching dnsmasq processes
What is done during the second iteration just cleans up and redoes
exactly the same work another time! This is really inefficient and
dramatically increases DHCP startup time (nearly twice as long as
needed).

The 'sync_state' method is called twice during initialization:
 * once during init_host()
 * again during _report_state()

sync_state() call must stay in init_host() due to bug #1420042.

sync_state() is always called during startup in init_host()
and will be periodically called by periodic_resync()
to do reconciliation.
Hence it can safely be removed from the run() method.

Change-Id: Id6433598d5c833d2e86be605089d42feee57c257
Closes-bug: #1651368
Closes-Bug: #1650611


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1650611

Title:
  dhcp agent reporting state as down during the initial sync

Status in neutron:
  Fix Released

Bug description:
  When the dhcp agent is started, neutron agent-list reports its state
  as dead until the initial sync is complete.

  This can lead to unwanted alarms in monitoring systems, especially in
  large environments where the initial sync may take hours. During this
  time, systemctl shows that the agent is actually alive while neutron
  agent-list reports it as down.

  Technical details:

  If I'm right, this line [0] is the exact point where the initial sync
  takes place right after the first state report (with start_flag=True)
  is sent to the server. As it's being done in the same thread, it won't
  send a second state report until it's done with the sync.

  Doing it in a separate thread would let the heartbeat task continue
  sending state reports to the server, but I don't know whether this
  has any unwanted side effects.
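
  As a rough sketch of that suggestion (method names taken from the
  description above; the agent already runs on eventlet green
  threads):

      import eventlet

      def after_start(self):
          # Run the potentially hours-long initial sync without
          # blocking the thread that sends periodic state reports.
          eventlet.spawn_n(self.sync_state)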

  
  [0] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L751

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1650611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1651368] Re: Dhcpagent not efficient during initialization process

2017-01-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/413010
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f15851b98974dc16606da195cf3ecee577cd0ef8
Submitter: Jenkins
Branch:master

commit f15851b98974dc16606da195cf3ecee577cd0ef8
Author: Bertrand Lallau 
Date:   Tue Dec 20 10:53:41 2016 +0100

DHCP: enhance DHCPAgent startup procedure

During the DhcpAgent startup procedure all of the following network
initialization is actually performed twice:
 * killing old dnsmasq processes
 * setting up and configuring all TAP interfaces
 * building all dnsmasq config files (lease and host files)
 * launching dnsmasq processes
What is done during the second iteration just cleans up and redoes
exactly the same work another time! This is really inefficient and
dramatically increases DHCP startup time (nearly twice as long as
needed).

The 'sync_state' method is called twice during initialization:
 * once during init_host()
 * again during _report_state()

sync_state() call must stay in init_host() due to bug #1420042.

sync_state() is always called during startup in init_host()
and will be periodically called by periodic_resync()
to do reconciliation.
Hence it can safely be removed from the run() method.

Change-Id: Id6433598d5c833d2e86be605089d42feee57c257
Closes-bug: #1651368
Closes-Bug: #1650611


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1651368

Title:
  Dhcpagent not efficient during initialization process

Status in neutron:
  Fix Released

Bug description:
  During the DhcpAgent startup procedure all network initialization is
  done twice:
   * killing old dnsmasq processes
   * setting up and configuring all TAP interfaces
   * building all dnsmasq config files (lease and host files)
   * launching dnsmasq processes
  This is really inefficient and can be confusing when monitoring
  namespaces.
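
  A minimal before/after sketch of the fix from the commit above (class
  and method names per the commit message; the bodies are
  placeholders):

      class DhcpAgent(object):
          def init_host(self):
              self.sync_state()       # kept: needed at startup (bug #1420042)

          def run(self):
              # self.sync_state()     # removed: redid the same work
              self.periodic_resync()  # reconciliation still runs periodically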

  The following DhcpAgent traces show the actual init process
  (logs have been cleaned up to show only the relevant information).
  Detailed explanations follow the logs.


  2016-12-20 09:02:42.200 DEBUG neutron.openstack.common.service [req-b0a2772 
None None] 

 log_opt_values /usr/lib/python2.7/dist-packages/oslo_config/cfg.py:2197
  2016-12-20 09:02:42.214 INFO neutron.agent.dhcp.agent [-] Synchronizing state
  2016-12-20 09:02:43.262 DEBUG neutron.agent.dhcp.agent [-] Calling driver for 
network: 5be9fe58-0790-4342-9182-172438e8e0bc action: enable call_driver 
/usr/lib/python2.7/dist-packages/neutron/agent/dhcp/agent.py:106
  2016-12-20 09:02:43.314 DEBUG neutron.agent.dhcp.agent [-] Calling driver for 
network: f2a73fc1-6a93-4822-aa45-b789d50d action: enable call_driver 
/usr/lib/python2.7/dist-packages/neutron/agent/dhcp/agent.py:106
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'kill', '-9', '24540']
  2016-12-20 09:02:43.441 DEBUG neutron.agent.linux.utils [-] Unable to access 
/var/lib/neutron/dhcp/5be9fe58-0790-4342-9182-172438e8e0bc/pid 
get_value_from_file 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:222
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'kill', '-9', '24542']
  2016-12-20 09:02:43.459 DEBUG neutron.agent.linux.utils [-] Unable to access 
/var/lib/neutron/dhcp/f2a73fc1-6a93-4822-aa45-b789d50d/pid 
get_value_from_file 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:222
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-5be9fe58-0790-4342-9182-172438e8e0bc', 'ip', 
'link', 'set', 'tap5c7f794d-08', 'up']
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-f2a73fc1-6a93-4822-aa45-b789d50d', 'ip', 
'link', 'set', 'tap995da463-09', 'up']
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-5be9fe58-0790-4342-9182-172438e8e0bc', 'ip', 
'addr', 'show', 'tap5c7f794d-08', 'permanent', 'scope', 'global']
  Stdout: 32: tap5c7f794d-08:  mtu 1500 qdisc 
noqueue state UNKNOWN group default
  link/ether fa:16:3e:6b:cb:29 brd ff:ff:ff:ff:ff:ff
  inet 20.0.0.36/24 brd 20.0.0.255 scope global tap5c7f794d-08
     valid_lft forever preferred_lft forever
  inet 169.254.169.254/16 brd 169.254.255.255 scope global tap5c7f794d-08
     valid_lft forever preferred_lft forever

  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-f2a73fc1-6a93-4822-aa45-b789d50d', 'ip', 
'addr', 'show', 'tap995da463-09', 'permanent', 'scope', 'global']
  Stdout: 35: 

[Yahoo-eng-team] [Bug 1657668] [NEW] Grammar error in LoadBalancerPluginv2

2017-01-19 Thread 1912294464
Public bug reported:

Hi
I use OpenStack Mitaka with Ubuntu 14.04.
I found a grammar error.

/usr/lib/python2.7/dist-
packages/neutron_lbaas/services/loadbalancer/plugin.py   line 1344

Should 'self' be passed when invoking the '_set_degraded' function?

1340 if (pool_status["id"] not in
1341 [ps["id"] for ps in lb_status["pools"]]):
1342 lb_status["pools"].append(pool_status)
1343 if self._is_degraded(curr_listener.default_pool):
1344 self._set_degraded(self, listener_status, lb_status)
1345 members = curr_listener.default_pool.members
1346 for curr_member in members:
1347 if not curr_member.admin_state_up:

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657668

Title:
  Grammar error in LoadBalancerPluginv2

Status in neutron:
  New

Bug description:
  Hi
  I use OpenStack Mitaka with Ubuntu 14.04.
  I found a grammar error.

  /usr/lib/python2.7/dist-
  packages/neutron_lbaas/services/loadbalancer/plugin.py   line 1344

  Should 'self' be passed when invoking the '_set_degraded' function?

  1340 if (pool_status["id"] not in
  1341 [ps["id"] for ps in lb_status["pools"]]):
  1342 lb_status["pools"].append(pool_status)
  1343 if self._is_degraded(curr_listener.default_pool):
  1344 self._set_degraded(self, listener_status, lb_status)
  1345 members = curr_listener.default_pool.members
  1346 for curr_member in members:
  1347 if not curr_member.admin_state_up:
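
  For what it's worth, the presumably intended call on line 1344 simply
  drops the explicit 'self', since '_set_degraded' is invoked as a
  bound method:

  1344 self._set_degraded(listener_status, lb_status)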

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp