[Yahoo-eng-team] [Bug 1563174] Re: Use LOG.warning to replace LOG.warn

2016-03-28 Thread Kairat Kushaev
https://review.openstack.org/#/c/262237/ - see the comments there. Oslo.log
did that work for us. =)

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1563174

Title:
  Use LOG.warning to replace LOG.warn

Status in Glance:
  Invalid

Bug description:
  Python 3 deprecated the logger.warn method [1], so I prefer to use warning
to avoid a DeprecationWarning.
  If we are using the logger from oslo.log, warn is still valid [2], but I
think switching to LOG.warning is better.

  [1] https://docs.python.org/3/library/logging.html#logging.warning
  [2] https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
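
  For illustration, the change amounts to the following (a minimal sketch,
not a patch from the review above):

    import logging

    LOG = logging.getLogger(__name__)

    LOG.warn("cache miss for %s", "key")     # deprecated alias; Python 3
                                             # emits a DeprecationWarning
    LOG.warning("cache miss for %s", "key")  # preferred, same behaviour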

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1563174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460720] Re: [RFE] Add API to set ipv6 gateway

2016-03-28 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460720

Title:
  [RFE] Add API to set ipv6 gateway

Status in neutron:
  Expired

Bug description:
  Currently the ipv6 external gateway is an admin configuration item. We
  want instead to have an API to set the ipv6 gateway.

  Background:

  The ipv6 router BP
  (https://blueprints.launchpad.net/neutron/+spec/ipv6-router) added a
  new L3 agent config called ipv6_gateway wherein an admin can configure
  the IPv6 LLA of the upstream physical router, so that the neutron
  virtual router has a default V6 gateway route to the upstream router.

  This solution is, however, not scalable when there are multiple external
routers per L3 agent.
  Per the review comments -
https://review.openstack.org/#/c/156283/42/etc/l3_agent.ini
  - it is better to move this config to the CLI.

  As discussed in the L3 weekly meeting -
  
http://eavesdrop.openstack.org/meetings/neutron_l3/2015/neutron_l3.2015-06-11-15.04.log.html,
  this RFE proposes updating the neutron net-create CLI to add a new
  option that sets an ipv6_gateway (for the external router).
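
  For illustration, the current config-driven approach versus the proposed
CLI-driven one might look like this (the CLI option name below is
hypothetical; no exact syntax was settled before this RFE expired):

    # l3_agent.ini (current: admin-only, one value per L3 agent)
    [DEFAULT]
    ipv6_gateway = fe80::f816:3eff:fe01:1

    # proposed: per-network, via the API/CLI (hypothetical option name)
    neutron net-create ext-net --router:external True \
        --ipv6-gateway fe80::f816:3eff:fe01:1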

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460720/+subscriptions



[Yahoo-eng-team] [Bug 1513981] Re: Extend IPAM tables to store more information helpful for IPAM drivers

2016-03-28 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513981

Title:
  Extend IPAM tables to store more information helpful for IPAM drivers

Status in neutron:
  Expired

Bug description:
  We (Yahoo) have some use cases where it would be great if following
  information is associated with the subnet db table

  1. Rack switch info
  2. Backplane info
  3. DHCP ip helpers
  4. Option to tag allocation pools inside subnets
  5. Multiple gateway addresses

  Here are our use cases in detail:

  
  Case 1: IP allocation in L3 TOR deployments:

  1. User sends a boot request as usual. The user need not know all the
  network and subnet information beforehand. All he would do is send a
  boot request.

  2. The scheduler will pick a node in an L3 rack. The way we map nodes <-> 
racks is as follows:
  a. For VMs, we store rack_id in nova.conf on compute nodes
  b. For Ironic nodes, right now we have static IP allocation, so we 
practically know which IP we want to assign. But when we move to dynamic 
allocation, we would probably use 'chassis' or 'driver_info' fields to store 
the rack id.

  3. Nova compute will try to pick a network ID for this instance.  At
  this point, it needs to know what networks (or subnets) are available
  in this rack. Based on that, it will pick a network ID and send port
  creation request to Neutron. At Yahoo, to avoid some back-and-forth,
  we send a fake network_id and let the plugin do all the work.

  4. We need some information associated with the network/subnet that
  tells us what rack it belongs to. Right now, for VMs, we have that
  information embedded in physnet name. But we would like to move away
  from that. If we had a column for subnets - e.g. tag, it would solve
  our problem. Ideally, we would like a column 'rack id' or a new table
  'racks' that maps to subnets, or something. We are open to different
  ideas that work for everyone. This is where IPAM can help.

  
  Case 2: Scheduling based on IP availability when user needs multiple IPs 
allocated for the instance

  1. User sends a boot request with --num-ips X
  The network/subnet level complexities need not be exposed to the user. 
For better experience, all we want our users to tell us is the number of IPs 
they want. 

  2. When the scheduler tries to find an appropriate host in L3 racks, we want 
it to find a rack that can satisfy this IP requirement. So, the scheduler will 
basically say, "give me all racks that have >X IPs available". If we have a 
'Racks' table in IPAM, that would help.
  Once the scheduler gets a rack, it will apply remaining filters to narrow 
down to one host and call nova-compute. The IP count will be propagated to nova 
compute from scheduler.

  3. Nova compute will call Neutron and send the node details and IP
  count along. Neutron IPAM driver will then look at the node details,
  query the database to find a network in that rack and allocate X IPs
  from the subnet.

  
  Case 3: Multiple Gateways for Subnets

  Currently, the subnets in Neutron only support one gateway. For
  provider networks in large data centers like ours, quite often, the
  architecture is such that multiple gateways are configured for
  the subnets. These multiple gateways are typically spread across
  backplanes so that the production traffic can be load-balanced between
  backplanes. Neutron subnets currently don't support multiple gateways

  
  In a way, this information is not specific to our company. It's generic
information which ought to go with the subnets. Different companies can use
this information differently in their IPAM drivers. But the information needs
to be made available to justify the flexibility of IPAM. In fact, there are a
few companies that are interested in having some of these attributes in the
database.

  Considering that the community is against arbitrary JSON blobs, I
  think we can expand the database to incorporate such use cases. I
  would prefer to avoid having our own database to make sure that our
  use-cases are always shared with the community.
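
  For illustration only, the kind of schema extension discussed in Case 1
could look like the following (table and column names are hypothetical, not
an actual neutron migration):

    import sqlalchemy as sa

    metadata = sa.MetaData()

    # hypothetical table carrying the rack attributes listed above
    racks = sa.Table(
        'racks', metadata,
        sa.Column('id', sa.String(36), primary_key=True),
        sa.Column('switch_info', sa.String(255)),     # 1. rack switch info
        sa.Column('backplane_info', sa.String(255)),  # 2. backplane info
        sa.Column('dhcp_ip_helper', sa.String(64)))   # 3. DHCP ip helper

    # hypothetical association of subnets to racks; subnet_id would be a
    # foreign key to neutron's existing subnets table
    subnet_racks = sa.Table(
        'subnet_racks', metadata,
        sa.Column('subnet_id', sa.String(36), primary_key=True),
        sa.Column('rack_id', sa.String(36),
                  sa.ForeignKey('racks.id'), nullable=False))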

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513981/+subscriptions



[Yahoo-eng-team] [Bug 1563174] [NEW] Use LOG.warning to replace LOG.warn

2016-03-28 Thread Nam
Public bug reported:

Python 3 deprecated the logger.warn method [1], so I prefer to use warning to
avoid a DeprecationWarning.
If we are using the logger from oslo.log, warn is still valid [2], but I think
switching to LOG.warning is better.

[1] https://docs.python.org/3/library/logging.html#logging.warning
[2] https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85

** Affects: glance
 Importance: Undecided
 Assignee: Nam (namnh)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Nam (namnh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1563174

Title:
  Use LOG.warning to replace LOG.warn

Status in Glance:
  New

Bug description:
  Python 3 deprecated the logger.warn method [1], so I prefer to use warning
to avoid a DeprecationWarning.
  If we are using the logger from oslo.log, warn is still valid [2], but I
think switching to LOG.warning is better.

  [1] https://docs.python.org/3/library/logging.html#logging.warning
  [2] https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1563174/+subscriptions



[Yahoo-eng-team] [Bug 1563157] [NEW] Routes 2.3 release breaks neutron-server

2016-03-28 Thread Tony Breeds
Public bug reported:

Routes released 2.3 on 2016-03-28T15:48:33. Since then we're seeing:

---
Traceback (most recent call last):
  File "/usr/local/bin/neutron-server", line 10, in 
sys.exit(main())
  File "/opt/stack/new/neutron/neutron/cmd/eventlet/server/__init__.py", line 
17, in main
server.main()
  File "/opt/stack/new/neutron/neutron/server/__init__.py", line 44, in main
neutron_api = service.serve_wsgi(service.NeutronApiService)
  File "/opt/stack/new/neutron/neutron/service.py", line 106, in serve_wsgi
LOG.exception(_LE('Unrecoverable error: please check log '
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
85, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/new/neutron/neutron/service.py", line 103, in serve_wsgi
service.start()
  File "/opt/stack/new/neutron/neutron/service.py", line 74, in start
self.wsgi_app = _run_wsgi(self.app_name)
  File "/opt/stack/new/neutron/neutron/service.py", line 169, in _run_wsgi
app = config.load_paste_app(app_name)
  File "/opt/stack/new/neutron/neutron/common/config.py", line 227, in 
load_paste_app
app = deploy.loadapp("config:%s" % config_path, name=app_name)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
247, in loadapp
return loadobj(APP, uri, name=name, **kw)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
272, in loadobj
return context.create()
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
710, in create
return self.object_type.invoke(self)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
144, in invoke
**context.local_conf)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, 
in fix_call
val = callable(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 31, in 
urlmap_factory
app = loader.get_app(app_name, global_conf=global_conf)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
350, in get_app
name=name, global_conf=global_conf).create()
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
710, in create
return self.object_type.invoke(self)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
144, in invoke
**context.local_conf)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, 
in fix_call
val = callable(*args, **kw)
  File "/opt/stack/new/neutron/neutron/auth.py", line 74, in pipeline_factory
app = filter(app)
  File "/opt/stack/new/neutron/neutron/api/extensions.py", line 392, in _factory
return ExtensionMiddleware(app, ext_mgr=ext_mgr)
  File "/opt/stack/new/neutron/neutron/api/extensions.py", line 293, in __init__
submap.connect(path)
  File "/usr/local/lib/python2.7/dist-packages/routes/mapper.py", line 168, in 
connect
routename, path = args
ValueError: need more than 1 value to unpack
---

logstash:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ValueError%3A%20need%20more%20than%201%20value%20to%20unpack%5C%22%20AND%20voting%3A%5C%221%5C%22

Affects stable/* and master (modulo upper-constraints.txt)
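
The failure is easy to see in isolation: neutron's extension middleware
calls Mapper.connect() with a single positional argument (the path), which
Routes 2.3 unconditionally unpacks into two values. A simplified
reconstruction of the two sides, based on the traceback above:

    # neutron/api/extensions.py (simplified)
    submap.connect(path)               # one positional argument

    # routes/mapper.py in 2.3 (simplified)
    def connect(self, *args, **kwargs):
        routename, path = args         # ValueError if len(args) == 1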

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: trove
 Importance: Undecided
 Status: New

** Also affects: trove
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563157

Title:
  Routes 2.3 release breaks neutron-server

Status in neutron:
  New
Status in Trove:
  New

Bug description:
  Routes released 2.3 on 2016-03-28T15:48:33. Since then we're seeing:

  ---
  Traceback (most recent call last):
File "/usr/local/bin/neutron-server", line 10, in 
  sys.exit(main())
File "/opt/stack/new/neutron/neutron/cmd/eventlet/server/__init__.py", line 
17, in main
  server.main()
File "/opt/stack/new/neutron/neutron/server/__init__.py", line 44, in main
  neutron_api = service.serve_wsgi(service.NeutronApiService)
File "/opt/stack/new/neutron/neutron/service.py", line 106, in serve_wsgi
  LOG.exception(_LE('Unrecoverable error: please check log '
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
85, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/new/neutron/neutron/service.py", line 103, in serve_wsgi
  service.start()
File "/opt/stack/new/neutron/neutron/service.py", line 74, in start
  self.wsgi_app = _run_wsgi(self.app_name)
File "/opt/stack/new/neutron/neutron/service.py", line 169, in _run_wsgi
  app = config.load_paste_app(app_name)
File "/opt/stack/new/neutron/neutron/common/config.py", line 227, in 
load_paste_app
  app = deploy.loadapp("config:%s" % config_

[Yahoo-eng-team] [Bug 1562427] Re: Cannot start imported ubuntu image: Error: Unexpected API Error.

2016-03-28 Thread Augustina Ragwitz
I see some errors in your API log (pasted below) that suggest you should
double-check your configuration. Maybe look at this bug and see if it's
relevant - https://bugs.launchpad.net/nova/+bug/1534273.

Once you've verified your config, if you still feel this is a valid bug, please 
update the bug with the following information and set the status to New:
1) Version of Nova (Liberty, Kilo, Mitaka, master..)
2) Steps to reproduce, specifically step-by-step how you created the image and 
how you attempted to launch the Ubuntu instance.

2016-03-26 23:28:47.979 38925 INFO nova.osapi_compute.wsgi.server 
[req-e6e6cde7-bf0c-4241-be29-b983e556f9ef 136970b581774dcaa00dc949424a3c4c 
5004085c4d8842aab824321078eb0efc - - -] 192.168.0.144 "GET 
/v2/5004085c4d8842aab824321078eb0efc/servers/detail HTTP/1.1" status: 200 len: 
211 time: 0.0435441
2016-03-26 23:31:32.384 38925 WARNING keystonemiddleware.auth_token [-] 
Authorization failed for token
2016-03-26 23:31:32.385 38925 WARNING keystonemiddleware.auth_token [-] 
Identity response: {"error": {"message": "Could not find token: 
cb9fa4a18a22407aadfc9ca016626e91", "code": 404, "title": "Not Found"}}
2016-03-26 23:31:32.386 38925 WARNING keystonemiddleware.auth_token [-] 
Authorization failed for token

2016-03-27 00:00:39.836 38925 INFO nova.osapi_compute.wsgi.server 
[req-317ffd13-cbc5-4857-a912-107b12a8b0b6 136970b581774dcaa00dc949424a3c4c 
5004085c4d8842aab824321078eb0efc - - -] 192.168.0.144 "GET 
/v2/5004085c4d8842aab824321078eb0efc/extensions HTTP/1.1" status: 200 len: 
21880 time: 0.0812910
2016-03-27 00:00:41.734 38925 INFO nova.api.openstack.wsgi 
[req-742a4fbe-d3d4-41ba-af99-da46e415eaa5 010a15f69e0140a6b261ddd5bd46d328 
0cee11df766745efaeb49deb1fbb1624 - - -] HTTP exception thrown: No instances 
found for any event
2016-03-27 00:00:41.735 38925 INFO nova.osapi_compute.wsgi.server 
[req-742a4fbe-d3d4-41ba-af99-da46e415eaa5 010a15f69e0140a6b261ddd5bd46d328 
0cee11df766745efaeb49deb1fbb1624 - - -] 192.168.0.144 "POST 
/v2/0cee11df766745efaeb49deb1fbb1624/os-server-external-events HTTP/1.1" 
status: 404 len: 296 time: 0.1142490
2016-03-27 00:00:42.498 38925 INFO nova.api.openstack.wsgi 
[req-2a93a0c0-cf1a-47ed-96f2-3cc7a24e0c61 136970b581774dcaa00dc949424a3c4c 
5004085c4d8842aab824321078eb0efc - - -] HTTP exception thrown: Instance 
1ee18871-3516-4235-8077-5d3385fd1939 could not be found.

2016-03-27 00:01:11.870 38925 INFO nova.osapi_compute.wsgi.server 
[req-b84080b4-c8f6-4f3d-a553-5e5be4d727f6 136970b581774dcaa00dc949424a3c4c 
5004085c4d8842aab824321078eb0efc - - -] 192.168.0.144 "GET 
/v2/5004085c4d8842aab824321078eb0efc/os-keypairs HTTP/1.1" status: 200 len: 734 
time: 0.0069401
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions 
[req-39392694-67dc-4a9e-8754-a119228aeba2 136970b581774dcaa00dc949424a3c4c 
5004085c4d8842aab824321078eb0efc - - -] Unexpected exception in API method
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
611, in create
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1581, in create
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1181, in 
_create_instance
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
2016-03-27 00:01:12.227 38925 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 940, in 
_validate_and_build_base_o

[Yahoo-eng-team] [Bug 1478100] Re: DHCP agent scheduler can schedule dnsmasq to an agent without reachability to the network its supposed to serve

2016-03-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/205631
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0267c6a5acdcb68ea7e83ecb980362c4235ed1d7
Submitter: Jenkins
Branch: master

commit 0267c6a5acdcb68ea7e83ecb980362c4235ed1d7
Author: Assaf Muller 
Date:   Thu Jul 23 18:14:35 2015 -0400

Make DHCP agent scheduler physical_network aware

Currently the neutron DHCP scheduler assumes that every server running
a dhcp-agent can reach every network. Typically the scheduler can
wrongly schedule a vlan network on a dhcp-agent that has no reachability
to the network it's supposed to serve (ex: network's physical_network
not supported).

Typically such a use case can happen if:

* physical_networks are dedicated to a specific service and we don't
  want to mix dnsmasqs related to different services (for
  isolation/configuration purpose),
* physical_networks are dedicated to a specific rack (see example
  diagram http://i.imgur.com/NTBxRxk.png), the rack interconnection can
  be handled outside of neutron or inside when routed-networks will be
  supported.

This change makes the DHCP scheduler network reachability aware by
querying plugin's filter_hosts_with_network_access method.

This change provides an implementation for ML2 plugin delegating host
filtering to its mechanism drivers: it aggregates the filtering done by
each mechanism or disables filtering if any mechanism doesn't overload
default mechanism implementation[1] (for backward compatibility with
out-of-tree mechanisms). Every in-tree mechanism overloads the default
implementation: OVS/LB/SRIOV mechanisms use their agent mapping to filter
hosts, l2pop/test/logger ones return an empty set (they provide no "L2
capability").

This change provides a default implementation[2] for other plugins
filtering nothing (for backward compatibility), they can overload it to
provide their own implementation.

Such host filtering has some limitations if a dhcp-agent is on a host
handled by multiple l2 mechanisms with one mechanism claiming network
reachability but not the one handling dhcp-agent ports. Indeed the
host is able to reach the network but not dhcp-agent ports! Such
limitation will be handled in a follow-up change using host+vif_type
filtering.

[1] neutron.plugin.ml2.driver_api.MechanismDriver.\
  filter_hosts_with_network_access
[2] neutron.db.agents_db.AgentDbMixin.filter_hosts_with_network_access

Closes-Bug: #1478100
Co-Authored-By: Cedric Brandily 
Change-Id: I0501d47404c8adbec4bccb84ac5980e045da68b3
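
For illustration, the delegation described above reduces to something like
this (a simplified sketch reusing the method name from [1] and [2], not the
full implementation):

    # default implementation: filter nothing (backward compatibility)
    def filter_hosts_with_network_access(self, context, network,
                                         candidate_hosts):
        return candidate_hosts

    # sketch of an agent-based mechanism (e.g. OVS): keep only the hosts
    # whose bridge mappings -- looked up via a hypothetical helper --
    # cover the network's physical_network
    def filter_hosts_with_network_access(self, context, network,
                                         candidate_hosts):
        physnet = network.get('provider:physical_network')
        return {host for host in candidate_hosts
                if physnet in get_agent_mappings(host)}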


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478100

Title:
  DHCP agent scheduler can schedule dnsmasq to an agent without
  reachability to the network its supposed to serve

Status in neutron:
  Fix Released

Bug description:
  While overlay networks are typically available on every host, flat or
  VLAN provider networks are often not. It may be the case where each
  rack only has access to a subset of networks defined in Neutron
  (Determined by the network's physical_network tag). In these cases,
  you would install a DHCP agent in every rack, but the DHCP scheduler
  could schedule a network to the wrong agent, and you end up in a
  situation where the dnsmasq instance is on the wrong rack and has no
  reachability to its VMs.

  Here's a diagram to explain the use case:
  http://i.imgur.com/NTBxRxk.png
  Both networks on physical_network 1 should be served only by DHCP agent 1.

  More information may be found here:
  https://etherpad.openstack.org/p/Network_Segmentation_Usecases.
  Specifically "DHCP agents and metadata serices are run on nodes within each 
L2.  When the neutron network is created we specifically assign the dhcp agent 
in that segment to that network".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478100/+subscriptions



[Yahoo-eng-team] [Bug 1563149] [NEW] NeutronSyntheticFieldMultipleForeignKeys message is producing an error

2016-03-28 Thread Victor Morales
Public bug reported:

There is business logic that validates the number of foreign keys [1]
on the object. Its message contains a string formatting error [2]:
the variable must be 'field' instead of 'fields'.

[1] 
https://github.com/openstack/neutron/blob/d27b27c183828b17aabfd736d5251e3f408a199d/neutron/objects/base.py#L301
[2] 
https://github.com/openstack/neutron/blob/d27b27c183828b17aabfd736d5251e3f408a199d/neutron/objects/base.py#L58-L59
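
For illustration, this is the class of error being reported (a hypothetical
reconstruction, not the exact neutron code; see [2] for the real message):

    class NeutronSyntheticFieldMultipleForeignKeys(Exception):
        message = "Synthetic field %(field)s must have only one foreign key"

    # the raising site interpolates with the key 'fields' while the
    # template expects 'field', so formatting fails with KeyError instead
    # of producing the intended message:
    try:
        NeutronSyntheticFieldMultipleForeignKeys.message % {'fields': 'ports'}
    except KeyError as exc:
        print("interpolation failed:", exc)  # KeyError: 'field'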

** Affects: neutron
 Importance: Undecided
 Assignee: Victor Morales (electrocucaracha)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Victor Morales (electrocucaracha)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563149

Title:
  NeutronSyntheticFieldMultipleForeignKeys message is producing an error

Status in neutron:
  New

Bug description:
  There is business logic that validates the number of foreign keys [1]
  on the object. Its message contains a string formatting error
  [2]: the variable must be 'field' instead of 'fields'.

  [1] 
https://github.com/openstack/neutron/blob/d27b27c183828b17aabfd736d5251e3f408a199d/neutron/objects/base.py#L301
  [2] 
https://github.com/openstack/neutron/blob/d27b27c183828b17aabfd736d5251e3f408a199d/neutron/objects/base.py#L58-L59

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563149/+subscriptions



[Yahoo-eng-team] [Bug 1562649] Re: Liberty RDO install fails to launch

2016-03-28 Thread Augustina Ragwitz
*** This bug is a duplicate of bug 1534273 ***
https://bugs.launchpad.net/bugs/1534273

** Summary changed:

- It prompt:Error:Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.  (HTTP 500) (Request-ID: 
req-c26c3d65-dd95-470b-892e-d83b30291da3)  when i started host from image.
+ Liberty RDO install fails to launch

** This bug has been marked a duplicate of bug 1534273
   Keystone configuration options for nova.conf missing from Redhat/CentOS 
install guide

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562649

Title:
  Liberty RDO install fails to launch

Status in OpenStack Compute (nova):
  New

Bug description:
  When I started a host from the image tab in the web UI, it failed with the
prompt:
  Error: Unexpected API Error. Please report this at
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
(HTTP 500) (Request-ID: req-c26c3d65-dd95-470b-892e-d83b30291da3).

  I checked the logs on the servers.
  keystone.log:
  2016-03-28 09:06:14.221 11430 INFO keystone.common.wsgi 
[req-8e062673-a5a5-443b-95b5-4cd1a3e82a5b - - - - -] POST 
http://controller:35357/v3/auth/tokens
  2016-03-28 09:06:14.234 11430 WARNING keystone.common.wsgi 
[req-8e062673-a5a5-443b-95b5-4cd1a3e82a5b - - - - -] Authorization failed. The 
request you have made requires authentication. from 192.168.20.101

  It was built with CentOS 7 following the process from
  http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-
  public.html

  Can anyone help me?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562649/+subscriptions



[Yahoo-eng-team] [Bug 1563029] Re: Nova instance fail to launch: "Unexpected API Error."

2016-03-28 Thread Augustina Ragwitz
I moved this over to Kolla since this looks specific to that project and
I'm not working in that repo, so I can't reproduce this. If this turns
out to be a Nova specific issue, please update the bug with Nova-
specific reproduction steps and we'll take a look at it.

** Also affects: kolla
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563029

Title:
  Nova instance fail to launch: "Unexpected API Error."

Status in kolla:
  New
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This is a Kolla installation of openstack. The docker images build
  well and some services work great; e.g cinder, glance and neutron.

  Nova seems to have a problem that is preventing instances from being
  provisioned. The nova-api logs below say:

  "Returning 500 to user: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   __call__ 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1070"


  1. The GIT commit level is:

  lsorrillo@ptest:~/git/kolla$ git log -1
  commit 90321d0497f14bda370908610cfc44296a940523
  Merge: 328a7e8 a789346
  Author: Jenkins 
  Date:   Sun Mar 27 23:42:47 2016 +

  Merge "Fix gate to use world writeable docker socket"
  lsorrillo@ptest:~/git/kolla$

  RPM version:

  lsorrillo@ptest:~$
  lsorrillo@ptest:~$ docker exec -ti nova_api bash
  bash-4.2$
  bash-4.2$
  bash-4.2$
  bash-4.2$ rpm -qa | grep nova
  python-novaclient-3.3.1-0.20160303082753.d63800d.el7.centos.noarch
  openstack-nova-common-13.0.0.0b4-0.20160304162843.c5a45a2.el7.centos.noarch
  python-nova-13.0.0.0b4-0.20160304162843.c5a45a2.el7.centos.noarch
  openstack-nova-api-13.0.0.0b4-0.20160304162843.c5a45a2.el7.centos.noarch
  bash-4.2$

  2.

  The following error is seen in the nova-api.log:

  2016-03-28 19:53:40.086 29 DEBUG oslo.messaging._drivers.impl_rabbit 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Connecting to AMQP server on 
192.168.0.121:5672 __init__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:535
  2016-03-28 19:53:40.132 29 DEBUG oslo.messaging._drivers.impl_rabbit 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Connected to AMQP server on 
192.168.0.121:5672 via [amqp] client __init__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:562
  2016-03-28 19:53:40.148 29 DEBUG oslo.messaging._drivers.pool 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Pool creating new connection create 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/pool.py:109
  2016-03-28 19:53:40.151 29 DEBUG oslo.messaging._drivers.impl_rabbit 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Connecting to AMQP server on 
192.168.0.121:5672 __init__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:535
  2016-03-28 19:53:40.205 29 DEBUG oslo.messaging._drivers.impl_rabbit 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Connected to AMQP server on 
192.168.0.121:5672 via [amqp] client __init__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:562
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Unexpected exception in API method
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-28 19:5

[Yahoo-eng-team] [Bug 1563141] Re: Nova boot Fails InvalidBDM from GLANCE IMAGE

2016-03-28 Thread Daniel Smith
Confirmed

Addition of ",size=40" to the nova boot command provided the sizing to
nova to tell cinder the volume to be created (prior to the download of
the image).
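
For reference, the amended --block-device argument would look like this
(size in GB, per the comment above; "..." stands for the --nic arguments
given in the original report):

    nova boot --flavor MDN-8G-8CPU --block-device \
        source=image,id=6be3e076-691e-48cf-b9ed-264de01965b8,dest=volume,size=40,bus=scsi,bootindex=0 \
        ... CN-00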

I will close this BUG.

Cheers,
D

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563141

Title:
  Nova boot Fails InvalidBDM from GLANCE IMAGE

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Running Liberty. On one of the images (CN-00) below - while all the
  other nodes booted and created a VOLUME from an IMAGE, this one started to
  FAIL.  No idea why.

  As with the other instances, this nova boot command should result in
  the GLANCE Image being used to create the CINDER volume (note - there
  is 5.5TB available so it's not a space issue - and the error doesn't
  reflect that the call ever makes it to CINDER).

  EXPECTED RESULT:  nova boots the Instance, creating the CINDER volume
  from the glance image-id as outlined in the command.

  GLANCE IMAGES:
  root@node-16:/var/log/nova# glance image-list
  +--+-+
  | ID   | Name|
  +--+-+
  | 6be3e076-691e-48cf-b9ed-264de01965b8 | CN-00_R2C25 |
  | f47da74d-15d0-42b7-8261-c6d3b8f0dc2a | DB-00-1_R2C25   |
  | fd1d5774-5139-4644-81ac-1d5293d8410e | MS-00_R2C25 |
  | 9b2dc827-b04a-4def-b36c-b1ee11449705 | NAS |
  | 3444ce43-99b1-482e-97ef-35d7bb15aecc | PGPOOL-00_R2C25 |
  | ee11f00a-6ab4-42b3-b86d-f1ee1966ca5f | TestVM  |
  | a1503971-ddcf-4740-bcae-9efc3afe2988 | UTILITY-R2C25   |
  +--+-+

  
  NOVA BOOT COMMAND:
  root@node-16:/var/log/nova# nova boot --flavor MDN-8G-8CPU --block-device 
source=image,id=6be3e076-691e-48cf-b9ed-264de01965b8,dest=volume,bus=scsi,bootindex=0
  --nic port-id=5f9c10e9-09b9-4116-9d3a-348882d78817 --nic 
port-id=1c74b4bd-aaf0-4681-a566-92fdf0f69b0c --nic 
port-id=d4e59190-24ad-466c-8551-1e29ed3416fa --nic 
port-id=c08fa16e-11da-49d7-b12c-291dfe836d4c CN-00

  
  ERROR:
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-94b9fea7-1f45-486c-9292-47b2a3e6dc56)

  
  NOVA-API.LOG
  2016-03-29 01:05:05.173 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "auth_strategy" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
  2016-03-29 01:05:05.174 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "admin_auth_url" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
  2016-03-29 01:05:05.175 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "admin_username" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
  2016-03-29 01:05:05.176 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "admin_password" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
  2016-03-29 01:05:05.176 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "admin_tenant_name" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
  2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Unexpected exception in API method
  2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
w

[Yahoo-eng-team] [Bug 1563141] [NEW] Nova boot Fails InvalidBDM from GLANCE IMAGE

2016-03-28 Thread Daniel Smith
Public bug reported:

Running Liberty. On one of the images (CN-00) below - while all the
other nodes booted and created a VOLUME from an IMAGE, this one started to
FAIL.  No idea why.

As with the other instances, this nova boot command should result in the
GLANCE Image being used to create the CINDER volume (note - there is
5.5TB available so it's not a space issue - and the error doesn't reflect
that the call ever makes it to CINDER).

EXPECTED RESULT:  nova boots the Instance, creating the CINDER volume
from the glance image-id as outlined in the command.

GLANCE IMAGES:
root@node-16:/var/log/nova# glance image-list
+--+-+
| ID   | Name|
+--+-+
| 6be3e076-691e-48cf-b9ed-264de01965b8 | CN-00_R2C25 |
| f47da74d-15d0-42b7-8261-c6d3b8f0dc2a | DB-00-1_R2C25   |
| fd1d5774-5139-4644-81ac-1d5293d8410e | MS-00_R2C25 |
| 9b2dc827-b04a-4def-b36c-b1ee11449705 | NAS |
| 3444ce43-99b1-482e-97ef-35d7bb15aecc | PGPOOL-00_R2C25 |
| ee11f00a-6ab4-42b3-b86d-f1ee1966ca5f | TestVM  |
| a1503971-ddcf-4740-bcae-9efc3afe2988 | UTILITY-R2C25   |
+--+-+


NOVA BOOT COMMAND:
root@node-16:/var/log/nova# nova boot --flavor MDN-8G-8CPU --block-device 
source=image,id=6be3e076-691e-48cf-b9ed-264de01965b8,dest=volume,bus=scsi,bootindex=0
  --nic port-id=5f9c10e9-09b9-4116-9d3a-348882d78817 --nic 
port-id=1c74b4bd-aaf0-4681-a566-92fdf0f69b0c --nic 
port-id=d4e59190-24ad-466c-8551-1e29ed3416fa --nic 
port-id=c08fa16e-11da-49d7-b12c-291dfe836d4c CN-00


ERROR:
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-94b9fea7-1f45-486c-9292-47b2a3e6dc56)


NOVA-API.LOG
2016-03-29 01:05:05.173 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "auth_strategy" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
2016-03-29 01:05:05.174 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "admin_auth_url" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
2016-03-29 01:05:05.175 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "admin_username" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
2016-03-29 01:05:05.176 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "admin_password" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
2016-03-29 01:05:05.176 41670 WARNING oslo_config.cfg 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Option "admin_tenant_name" from group 
"neutron" is deprecated for removal.  Its value may be silently ignored in the 
future.
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions 
[req-94b9fea7-1f45-486c-9292-47b2a3e6dc56 39849a4b3f814d30b31178358712944f 
d37e0a300f154eb099d40ef9cf362123 - - -] Unexpected exception in API method
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
611, in create
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 149, in inner
2016-03-29 01:05:06.475 41670 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2016-03-29 01:0

[Yahoo-eng-team] [Bug 1563137] [NEW] Duplicate help text in the metadata tab

2016-03-28 Thread Marcellin Fom Tchassem
Public bug reported:

During the process of launching an instance, the same help text is repeated
at the top and the bottom of the tab.
To reproduce this:
1- Under Project, click on Instances
2- Click on Launch Instance
3- Click on Metadata
4- The Metadata tab shows the same help text twice

One approach to fixing this is to replace the text at the bottom with a
more meaningful message.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1563137

Title:
  Duplicate help text in the metadata tab

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  During the process of launching an instance, the same help text is
repeated at the top and the bottom of the tab.
  To reproduce this:
  1- Under Project, click on Instances
  2- Click on Launch Instance
  3- Click on Metadata
  4- The Metadata tab shows the same help text twice

  One approach to fixing this is to replace the text at the bottom with a
more meaningful message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1563137/+subscriptions



[Yahoo-eng-team] [Bug 1439472] Re: OVS doesn't restart properly when Exception occurred

2016-03-28 Thread watanabe.isao
** Changed in: neutron/kilo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439472

Title:
  OVS doesn't restart properly when Exception occurred

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  We wish this fix could land in kilo. If that is not possible due to the bad
timing, we wish it could be merged into stable/kilo.
  ---
  [The problem]
  If an Exception (such as DBConnectionError) occurs when OVS restarts,
  OVS will report "everything is OK :-)",
  while the flows of created networks in br-tun will NOT be recovered :-(
  unless the user manually restarts OVS again.
  ---
  [action and log]
  [q-agent.log]
  [[[create network and subnet and add it to DHCP agent]]]
  [[[I turned off MySQL]]]
  [[[But nothing happened]]]
  [[[Then I restarted OVS]]]
  [[[Here it goes...]]]
  ...
  ...
  ...
  2015-04-01 22:06:48.237 DEBUG 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] Unable to sync tunnel IP 
192.168.122.96: Remote error: DBConnectionError (OperationalError) (2003, 
"Can't connect to MySQL server on '127.0.0.1' (111)") None None
  ...
  ...
  ...
  2015-04-01 22:06:56.060 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] Agent tunnel out of sync 
with plugin!
  2015-04-01 22:06:56.061 DEBUG oslo_messaging._drivers.amqpdriver 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] MSG_ID is 
705639bc86ae44f4b4cc28715ce981e8 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
  2015-04-01 22:06:56.062 DEBUG oslo_messaging._drivers.amqp 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] UNIQUE_ID is 
41f997f166c04cff986ff08eb298b3eb. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
  2015-04-01 22:06:56.085 DEBUG 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] Unable to sync tunnel IP 
192.168.122.96: Remote error: DBConnectionError (OperationalError) (2003, 
"Can't connect to MySQL server on '127.0.0.1' (111)") None None
  ...
  ...
  ...
  2015-04-01 22:06:56.111 DEBUG oslo_messaging._drivers.amqpdriver 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] MSG_ID is 
6f012243c4844978a7b8181bedcafcc9 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
  2015-04-01 22:06:56.112 DEBUG oslo_messaging._drivers.amqp 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] UNIQUE_ID is 
04ed1ccb78bb4ab495b9ebf40c2338f5. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
  2015-04-01 22:06:56.138 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] Error while processing VIF 
ports
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", 
line 1522, in rpc_loop
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", 
line 1260, in process_network_ports
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 360, in 
setup_port_filters
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 219, in 
decorated_function
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent *args, **kwargs)
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 229, in 
prepare_devices_filter
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.context, 
list(device_ids))
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 116, in 
security_group_info_for_devices
  2015-04

[Yahoo-eng-team] [Bug 1563113] [NEW] Implied Roles responses do not match the spec

2016-03-28 Thread Sean Perry
Public bug reported:

http --pretty format PUT
https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88/implies/edd42085d3ab472e9cf13b3cf3c362b6
"X-Auth-Token:4879c74089b744439057581c9d85bc19"

{
"role_inference": {
"implies": {
"id": "edd42085d3ab472e9cf13b3cf3c362b6", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6";
}, 
"name": "SomeRole1"
}, 
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88";
}, 
"name": "testing"
}
}
}

https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-
api-v3.rst#create-role-inference-rule

{
"role_inference": {
"prior_role": {
"id": "--prior-role-id--",
"links": {
"self": "http://identity:35357/v3/roles/--prior-role-id--";
}
"name": "prior role name"
},
"implies":
{
"id": "--implied-role1-id--",
"link": {
"self": 
"http://identity:35357/v3/roles/--implied-role1-id--";
},
"name": "implied role1 name"
}
},
}

Note the missing comma in the spec example and the s/links/link/ mismatch.
Also, JSON is usually output in sorted order.

http --pretty format GET
https://identity.example.com:35357/v3/role_inferences "X-Auth-
Token:4879c74089b744439057581c9d85bc19"

{
"role_inferences": [
{
"implies": [
{
"id": "edd42085d3ab472e9cf13b3cf3c362b6", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6";
}, 
"name": "SomeRole1"
}
], 
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88";
}, 
"name": "testing"
}
}
]
}

https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-
api-v3.rst#list-all-role-inference-rules

Again, s/link/links/. No missing comma though.

http --pretty format GET
https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88/implies/edd42085d3ab472e9cf13b3cf3c362b6
"X-Auth-Token:4879c74089b744439057581c9d85bc19"

{
"role_inference": {
"implies": {
"id": "edd42085d3ab472e9cf13b3cf3c362b6", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6";
}, 
"name": "SomeRole1"
}, 
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88";
}, 
"name": "testing"
}
}
}

https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-
api-v3.rst#get-role-inference-rule

According to the spec, there should be no "role_inference" wrapper here,
and there should be a top-level "links". The spec also has a missing comma,
but its 'links' key for implies is correct (the only place this is true).

http --pretty format GET
https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88/implies
"X-Auth-Token:4879c74089b744439057581c9d85bc19"

{
"role_inference": {
"implies": [
{
"id": "edd42085d3ab472e9cf13b3cf3c362b6", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6";
}, 
"name": "SomeRole1"
}
], 
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88";
}, 
"name": "testing"
}
}
}

https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-
api-v3.rst#list-implied-roles-for-role

This says there will also be a "links" key under role_inference (which
is wrong). Also, continued failure of s/link/links/.

** Affects: keystone
 Importance: Undecided
 Status

[Yahoo-eng-team] [Bug 1563101] [NEW] Remove backend dependency on core

2016-03-28 Thread Ron De Rose
Public bug reported:

Remove dependencies where backend code references code in the core. The
reasoning is that the core should know about the backends, but the
backends should not know anything about the core (separation of
concerns). Part of the risk here is the potential for circular
dependencies.

For example, identity backends imports identity:
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql.py#L24

Backend code could then reference code in the core, as well as other
modules inside identity, thus creating a circular dependency.
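
A minimal sketch of the import cycle being flagged (the backend-side import
is the real one linked above; the core-side import is illustrative):

    # keystone/identity/backends/sql.py (actual import, see link above)
    from keystone import identity            # backend -> core

    # if the core package ever imports its backends at import time,
    # e.g. somewhere under keystone/identity/:
    from keystone.identity.backends import sql   # core -> backend

    # together the two edges form a cycle, which can raise ImportError
    # depending on import order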

** Affects: keystone
 Importance: Undecided
 Assignee: Ron De Rose (ronald-de-rose)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Ron De Rose (ronald-de-rose)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1563101

Title:
  Remove backend dependency on core

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Remove dependencies where backend code references code in the core.
  The reasoning is that the core should know about the backends, but
  the backends should not know anything about the core (separation of
  concerns). Part of the risk here is the potential for circular
  dependencies.

  For example, identity backends imports identity:
  
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql.py#L24

  Backend code could then reference code in the core, as well as other
  modules inside identity, thus creating a circular dependency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1563101/+subscriptions



[Yahoo-eng-team] [Bug 1563069] [NEW] [RFE] Centralize Configuration Options

2016-03-28 Thread Brian Stajkowski
Public bug reported:

[Overview]
Refactor Neutron configuration options to be in one place 'neutron/conf' 
similar to the Nova implementation found here: 
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/centralize-config-options.html

This would allow for centralization of all configuration options and
provide an easy way to import and gain access to the wide breadth of
configuration options available to Neutron.

[Proposal]

1. Introduce a new package: neutron/conf
2. Group modules logically under the new package, e.g. common.py, securitygroups.py
3. Move all 'cfg.CONF.register_opts' and cfg declarations to new modules.
4. __init__.py in the new package imports all modules so all options can be
imported by: import neutron.conf (see the sketch below)
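
A minimal sketch of the proposed layout (module, option, and group names
here are illustrative, not from an actual patch):

    # neutron/conf/securitygroups.py
    from oslo_config import cfg

    security_group_opts = [
        cfg.BoolOpt('enable_security_group', default=True,
                    help='Whether neutron security groups are enabled.'),
    ]

    def register_securitygroup_opts(conf=cfg.CONF):
        conf.register_opts(security_group_opts, group='SECURITYGROUP')

    # neutron/conf/__init__.py then imports every such module, so a single
    # "import neutron.conf" exposes all registered options.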

[Benefits]

- As a developer I will find all config options in one place and will add 
further config options to that central place.
- End user is not affected by this change.

[Related information]
[1] Nova Implementation: 
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/centralize-config-options.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563069

Title:
  [RFE] Centralize Configuration Options

Status in neutron:
  New

Bug description:
  [Overview]
  Refactor Neutron configuration options to be in one place 'neutron/conf' 
similar to the Nova implementation found here: 
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/centralize-config-options.html

  This would allow for centralization of all configuration options and
  provide an easy way to import and gain access to the wide breadth of
  configuration options available to Neutron.

  [Proposal]

  1. Introduce a new package: neutron/conf
  2. Group modules logically under the new package, e.g. common.py,
securitygroups.py
  3. Move all 'cfg.CONF.register_opts' and cfg declarations to new modules.
  4. __init__.py in new package imports all modules so all options can be 
imported by: import neutron.conf

  [Benefits]

  - As a developer I will find all config options in one place and will add 
further config options to that central place.
  - End user is not affected by this change.

  [Related information]
  [1] Nova Implementation: 
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/centralize-config-options.html
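
  An illustrative sketch of the proposed layout (the option names and the
  module split are assumptions, not the final code):

    # neutron/conf/common.py
    from oslo_config import cfg

    common_opts = [
        cfg.StrOpt('base_mac', default='fa:16:3e:00:00:00',
                   help='The base MAC address Neutron uses for VIFs.'),
    ]

    def register_common_opts(conf=cfg.CONF):
        conf.register_opts(common_opts)

    # neutron/conf/__init__.py would then import common, securitygroups,
    # etc., so that "import neutron.conf" exposes every option group.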

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563070] [NEW] external_network_bridge option deprecation is misleading

2016-03-28 Thread Kevin Benton
Public bug reported:

The deprecation warning is emitted whenever a user specifies a value.
However, we only want to warn users when the value is not explicitly set
to an empty string. (i.e. we want to warn even when the default value is
used).

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563070

Title:
  external_network_bridge option deprecation is misleading

Status in neutron:
  In Progress

Bug description:
  The deprecation warning is emitted whenever a user specifies a value.
  However, we only want to warn users when the value is not explicitly
  set to an empty string. (i.e. we want to warn even when the default
  value is used).
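
  A hypothetical sketch of the intended check (not the actual patch, and
  assuming the option has already been registered):

    from oslo_config import cfg
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def warn_if_deprecated_bridge(conf=cfg.CONF):
        # warn both when the operator set a non-empty value and when the
        # non-empty default ('br-ex') is silently in effect; stay quiet
        # only for an explicit empty string
        if conf.external_network_bridge != '':
            LOG.warning("The external_network_bridge option is deprecated; "
                        "set it to '' to avoid using it.")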

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562970] Re: Handling tox errors in networking-brocade

2016-03-28 Thread bharath
** Changed in: networking-brocade
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562970

Title:
  Handling tox errors in networking-brocade

Status in networking-brocade:
  Invalid
Status in neutron:
  Invalid

Bug description:
  This will fix all the tox errors in networking-brocade specific to
  vyatta

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1562970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563006] Re: db: 'allocations' table migration and model have conflicting indexes

2016-03-28 Thread Matt Riedemann
Damn, OK, I guess it is handled in
https://review.openstack.org/#/c/281837/ .

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563006

Title:
  db: 'allocations' table migration and model have conflicting indexes

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The spec for the 'allocations' table in nova says there will be an
  index over the resource_provider_id, resource_class_id and used
  columns.

  That's in the data model definition:

  
https://github.com/openstack/nova/blob/9bc935d7d059c5ad9ff563b373691c3134e8f3ac/nova/db/sqlalchemy/models.py#L1503-L1505

  Index('allocations_resource_provider_class_used_idx',
'resource_provider_id', 'resource_class_id',
'used'),

  It's not, however, in the database migration that adds the table to
  the schema:

  
https://github.com/openstack/nova/blob/9bc935d7d059c5ad9ff563b373691c3134e8f3ac/nova/db/sqlalchemy/migrate_repo/versions/314_add_resource_provider_tables.py#L71-L73

  Index('allocations_resource_provider_class_id_idx',
allocations.c.resource_provider_id,
allocations.c.resource_class_id)

  So we probably need a new migration to fix that index.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1563006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563029] [NEW] Nova instance fails to launch: Unexpected API Error.

2016-03-28 Thread Lawrence Sorrillo
Public bug reported:

This is seen in a Kolla installation of OpenStack.

1. 
The commit level is:

lsorrillo@ptest:~/git/kolla$ git log -1
commit 90321d0497f14bda370908610cfc44296a940523
Merge: 328a7e8 a789346
Author: Jenkins 
Date:   Sun Mar 27 23:42:47 2016 +

Merge "Fix gate to use world writeable docker socket"
lsorrillo@ptest:~/git/kolla$ 

lsorrillo@ptest:~$ 
lsorrillo@ptest:~$ docker exec -ti nova_api bash
bash-4.2$ 
bash-4.2$ 
bash-4.2$ 
bash-4.2$ rpm -qa | grep nova
python-novaclient-3.3.1-0.20160303082753.d63800d.el7.centos.noarch
openstack-nova-common-13.0.0.0b4-0.20160304162843.c5a45a2.el7.centos.noarch
python-nova-13.0.0.0b4-0.20160304162843.c5a45a2.el7.centos.noarch
openstack-nova-api-13.0.0.0b4-0.20160304162843.c5a45a2.el7.centos.noarch
bash-4.2$ 

2.

The following error is seen in the nova-api.log:

2016-03-28 19:53:40.086 29 DEBUG oslo.messaging._drivers.impl_rabbit 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Connecting to AMQP server on 
192.168.0.121:5672 __init__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:535
2016-03-28 19:53:40.132 29 DEBUG oslo.messaging._drivers.impl_rabbit 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Connected to AMQP server on 
192.168.0.121:5672 via [amqp] client __init__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:562
2016-03-28 19:53:40.148 29 DEBUG oslo.messaging._drivers.pool 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Pool creating new connection create 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/pool.py:109
2016-03-28 19:53:40.151 29 DEBUG oslo.messaging._drivers.impl_rabbit 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Connecting to AMQP server on 
192.168.0.121:5672 __init__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:535
2016-03-28 19:53:40.205 29 DEBUG oslo.messaging._drivers.impl_rabbit 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Connected to AMQP server on 
192.168.0.121:5672 via [amqp] client __init__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:562
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions 
[req-88ffb6f7-1b1f-4332-b066-5b0b41827988 5bd4af0c54a14a789661d868a9fb1d84 
9e4b40d958e741599adc497b0a9d955a - - -] Unexpected exception in API method
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions Traceback (most 
recent call last):
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
630, in create
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1556, in create
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1139, in 
_create_instance
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions 
reservation_id, max_count)
2016-03-28 19:54:40.216 29 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 834, in 
_validate_and_build_base_options
2016-03-28 19:54:40.216 29 ERROR nova.ap

[Yahoo-eng-team] [Bug 1540512] Re: [RFE] Host Aware IPAM

2016-03-28 Thread Carl Baldwin
After some review and reading Neil's final comments, I'm going to close
this.  I'm about to crack IPAM for routed networks.  With this request
in mind, I'll be looking to evolve the IPAM interface so that it can
take host information into consideration.  I'll also be working on the
deferred IP allocation.

I think there will be a way to do all of this which can accommodate both
hard and soft boundaries for cidrs.  I'll have some code up for review
soon and will add Neil and Petra to the review.

In the meantime, I suggest we review and merge Petra's devref
contribution: [1].

[1] https://review.openstack.org/#/c/289460/

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540512

Title:
  [RFE] Host Aware IPAM

Status in neutron:
  Won't Fix

Bug description:
  In some Neutron use cases it is desirable for the IP address(es) that
  Neutron allocates to a VM to depend on the compute host that Nova
  chooses for that VM. For example, in the networking-calico approach
  where data is routed between compute hosts, it's desirable for the IP
  addresses that are used on a given host (or rack) to be clustered
  within a small IP prefix, so that the routes to those IP addresses can
  be aggregated on routers within the data center fabric. Neutron's new
  pluggable IPAM facility allows us in principle to start doing this,
  but we will need to design and implement three other pieces of the
  solution:

  - Firstly, we need a way for the pluggable IPAM framework to pass the
  chosen host into a pluggable IPAM module, such that a module can take
  the host into account if it so wishes.  (If this does not already
  exist - we are not yet sure!)

  - Secondly, to demonstrate that, we need a sample pluggable IPAM module that 
allocates IP addresses in some host-aware way.
  - Thirdly, we eventually need to enhance the port setup exchange between Nova 
and Neutron, such that Neutron can choose an IP address _after_ Nova has chosen 
the compute host.

  This work is being done as part of an Outreachy internship, and the
  last point cannot reasonably fit in that scope.  Hence this RFE
  proposes just the first two points, as a useful and concrete step
  towards the eventual complete picture.
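
  As a purely illustrative sketch (all names below are invented, not the
  neutron IPAM API), the first two points could look like:

    class HostAwareAddressRequest(object):
        """Carries the compute host chosen by Nova into the IPAM driver."""
        def __init__(self, host):
            self.host = host

    class RackAwarePool(object):
        """Toy driver: one small prefix per host so routes aggregate."""
        def __init__(self, host_to_prefix):
            # e.g. {'compute-1': '10.65.0.0/26', 'compute-2': '10.65.0.64/26'}
            self.host_to_prefix = host_to_prefix

        def allocate(self, request):
            # a real driver would track individual allocations; here we
            # just pick the per-host prefix
            return self.host_to_prefix[request.host]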

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563021] [NEW] Mouseover help for user settings items per page is not translatable

2016-03-28 Thread Doug Fish
Public bug reported:

When editing user settings ([username] -> Settings), the help mouseover
value "Number of items to show per page (..." is not shown as translated.

This failure can be seen in the Pseudo translation tool if it's patched
with https://review.openstack.org/#/c/298379/

and can also be seen in Japanese. Note that the segment is translated, but not 
displayed in Horizon:
https://github.com/openstack/horizon/blob/stable/mitaka/openstack_dashboard/locale/ja/LC_MESSAGES/django.po#L5832
and
https://github.com/openstack/horizon/blob/master/openstack_dashboard/locale/ja/LC_MESSAGES/django.po#L5835

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: neutron-lbaas-dashboard
 Importance: Undecided
 Status: Invalid

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: neutron-lbaas-dashboard
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1563021

Title:
  Mouseover help for user settings items per page is not translatable

Status in OpenStack Dashboard (Horizon):
  New
Status in Neutron LBaaS Dashboard:
  Invalid

Bug description:
  When editing user settings ([username] -> Settings), the help mouseover
  value "Number of items to show per page (..." is not shown as
  translated.

  This failure can be seen in the Pseudo translation tool if it's
  patched with https://review.openstack.org/#/c/298379/

  and can also be seen in Japanese. Note that the segment is translated, but 
not displayed in Horizon:
  
https://github.com/openstack/horizon/blob/stable/mitaka/openstack_dashboard/locale/ja/LC_MESSAGES/django.po#L5832
  and
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/locale/ja/LC_MESSAGES/django.po#L5835

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1563021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557992] Re: NoSuchOptError: no such option: service_auth when using local cert manager

2016-03-28 Thread Neela Shah
Moving this to neutron as it's not an lbaas-dashboard bug but a neutron-
lbaas bug, which gets reported into the neutron lp project.

** Project changed: neutron-lbaas-dashboard => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557992

Title:
  NoSuchOptError: no such option: service_auth when using local cert
  manager

Status in neutron:
  New

Bug description:
  When changing the default cert manager from barbican to the local cert
  manager using the vmware edge driver, it threw the exception below. That
  was because the method get_service_url is defined in the parent module
  cert_manager.py, but the service_auth group is only registered in
  keystone.py, which is imported by barbican.

  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/plugin.py", line 
463, in _call_driver_operation
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin driver_method(context, db_entity)
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 46, in 
wrapper
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin return method(*args, **kwargs)
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 
91, in create
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin context, listener, 
certificate=self._get_default_cert(listener))
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 
84, in _get_default_cert
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin return 
cert_backend.CertManager.get_cert(
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 
73, in _get_service_url
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin 
cfg.CONF.service_auth.service_name,
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1870, in 
__getattr__
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin raise NoSuchOptError(name)
  2016-03-04 11:45:10.461 27366 TRACE 
neutron_lbaas.services.loadbalancer.plugin NoSuchOptError: no such option: 
service_auth
  2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin
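
  A minimal sketch of one possible fix (the option list here is an
  assumption; the real group defined in keystone.py has more members):
  register the service_auth group in cert_manager.py itself, so that
  get_service_url() works regardless of which backend was imported:

    from oslo_config import cfg

    SERVICE_AUTH_OPTS = [
        cfg.StrOpt('service_name',
                   help='The name of the service.'),
        cfg.StrOpt('region',
                   help='The region of the service.'),
    ]

    cfg.CONF.register_opts(SERVICE_AUTH_OPTS, group='service_auth')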

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481161] Re: ML2 plugin for MLX: Unable to configure device with ssh connector when enable password is configured

2016-03-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/210271
Committed: 
https://git.openstack.org/cgit/openstack/networking-brocade/commit/?id=e2c8ecd218280c1cd7869fa8040ad02bf0f6b02b
Submitter: Jenkins
Branch: master

commit e2c8ecd218280c1cd7869fa8040ad02bf0f6b02b
Author: Pradeep Sathasivam 
Date:   Fri Aug 7 14:49:48 2015 +0530

Support for enable password configuration

When enable password is configured in device, ML2 plugin
will not be able to configure the device via SSH or TELNET.

Now this is supported by reading the response of each command
and identifying the prompt.

Change-Id: I3344f63b2f076809046fdbe92b9c9227bc0f1e1c
Closes-Bug: 1481161


** Changed in: networking-brocade
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481161

Title:
  ML2 plugin for MLX: Unable to configure device with ssh connector when
  enable password is configured

Status in networking-brocade:
  Fix Released
Status in neutron:
  In Progress

Bug description:
  When the following command is configured in a FI/NI device, it puts the
  user directly into privilege mode after login, without the need to
  execute the 'enable' command.

  aaa authentication login privilege-mode

  When the above command is configured, the ML2 plugin fails to configure
  the device. This also happens when an enable password is configured using
  the following command.

  enable super-user-password

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1481161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557992] [NEW] NoSuchOptError: no such option: service_auth when using local cert manager

2016-03-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When changing the default cert manager from barbican to the local cert
manager using the vmware edge driver, it threw the exception below. That was
because the method get_service_url is defined in the parent module
cert_manager.py, but the service_auth group is only registered in
keystone.py, which is imported by barbican.

2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
 File "/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/plugin.py", 
line 463, in _call_driver_operation
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
   driver_method(context, db_entity)
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
 File "/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 46, in 
wrapper
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
   return method(*args, **kwargs)
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
 File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 
91, in create
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
   context, listener, certificate=self._get_default_cert(listener))
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
 File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 
84, in _get_default_cert
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
   return cert_backend.CertManager.get_cert(
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
 File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/vmware/edge_driver_v2.py", line 
73, in _get_service_url
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
   cfg.CONF.service_auth.service_name,
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
 File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1870, 
in __getattr__
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin  
   raise NoSuchOptError(name)
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin 
NoSuchOptError: no such option: service_auth
2016-03-04 11:45:10.461 27366 TRACE neutron_lbaas.services.loadbalancer.plugin

** Affects: neutron
 Importance: Undecided
 Assignee: Yang Yu (yuyangbj)
 Status: New

-- 
NoSuchOptError: no such option: service_auth when using local cert manager
https://bugs.launchpad.net/bugs/1557992
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563006] [NEW] db: 'allocations' table migration and model have conflicting indexes

2016-03-28 Thread Matt Riedemann
Public bug reported:

The spec for the 'allocations' table in nova says there will be an index
over the resource_provider_id, resource_class_id and used columns.

That's in the data model definition:

https://github.com/openstack/nova/blob/9bc935d7d059c5ad9ff563b373691c3134e8f3ac/nova/db/sqlalchemy/models.py#L1503-L1505

Index('allocations_resource_provider_class_used_idx',
  'resource_provider_id', 'resource_class_id',
  'used'),

It's not, however, in the database migration that adds the table to the
schema:

https://github.com/openstack/nova/blob/9bc935d7d059c5ad9ff563b373691c3134e8f3ac/nova/db/sqlalchemy/migrate_repo/versions/314_add_resource_provider_tables.py#L71-L73

Index('allocations_resource_provider_class_id_idx',
  allocations.c.resource_provider_id,
  allocations.c.resource_class_id)

So we probably need a new migration to fix that index.

** Affects: nova
 Importance: Low
 Status: Confirmed


** Tags: allocations db scheduler

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563006

Title:
  db: 'allocations' table migration and model have conflicting indexes

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The spec for the 'allocations' table in nova says there will be an
  index over the resource_provider_id, resource_class_id and used
  columns.

  That's in the data model definition:

  
https://github.com/openstack/nova/blob/9bc935d7d059c5ad9ff563b373691c3134e8f3ac/nova/db/sqlalchemy/models.py#L1503-L1505

  Index('allocations_resource_provider_class_used_idx',
'resource_provider_id', 'resource_class_id',
'used'),

  It's not, however, in the database migration that adds the table to
  the schema:

  
https://github.com/openstack/nova/blob/9bc935d7d059c5ad9ff563b373691c3134e8f3ac/nova/db/sqlalchemy/migrate_repo/versions/314_add_resource_provider_tables.py#L71-L73

  Index('allocations_resource_provider_class_id_idx',
allocations.c.resource_provider_id,
allocations.c.resource_class_id)

  So we probably need a new migration to fix that index.
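
  A sketch of what such a follow-up migration could look like (untested;
  the migration number and any conditional handling are left to the real
  patch):

    from sqlalchemy import Index, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        allocations = Table('allocations', meta, autoload=True)
        # drop the narrower index created by migration 314 ...
        Index('allocations_resource_provider_class_id_idx',
              allocations.c.resource_provider_id,
              allocations.c.resource_class_id).drop()
        # ... and create the index the model actually declares
        Index('allocations_resource_provider_class_used_idx',
              allocations.c.resource_provider_id,
              allocations.c.resource_class_id,
              allocations.c.used).create()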

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1563006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563005] [NEW] pseudo translation tool creates uncompilable PO files

2016-03-28 Thread Doug Fish
Public bug reported:

Attempting to compile PO files generated by the pseudo translation tool
results in the following error

CommandError: Execution of msgfmt failed: 
/opt/stack/horizon/openstack_dashboard/locale/tr_TR/LC_MESSAGES/django.po:10595:
 a format specification for argument 'req' doesn't exist in 'msgstr[0]'
msgfmt: found 1 fatal error


The problem message appears to be 
#: 
openstack_dashboard/dashboards/project/instances/workflows/create_instance.py:213
#, python-format
msgid ""
"The requested instance cannot be launched as you only have %(avail)i of "
"your quota available. "
msgid_plural ""
"The requested %(req)i instances cannot be launched as you only have "
"%(avail)i of your quota available."
msgstr[0] ""
"[~0:The requested instance cannot be launched as you only have %(avail)i "
"of your quota available. ~您好яшçあ~~]"

It seems that Babel really wants the translated string to have the
substitution variables from msgid_plural, but the string is based on
msgid.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1563005

Title:
  pseudo translation tool creates uncompilable PO files

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Attempting to compile PO files generated by the pseudo translation
  tool results in the following error

  CommandError: Execution of msgfmt failed: 
/opt/stack/horizon/openstack_dashboard/locale/tr_TR/LC_MESSAGES/django.po:10595:
 a format specification for argument 'req' doesn't exist in 'msgstr[0]'
  msgfmt: found 1 fatal error

  
  The problem message appears to be 
  #: 
openstack_dashboard/dashboards/project/instances/workflows/create_instance.py:213
  #, python-format
  msgid ""
  "The requested instance cannot be launched as you only have %(avail)i of "
  "your quota available. "
  msgid_plural ""
  "The requested %(req)i instances cannot be launched as you only have "
  "%(avail)i of your quota available."
  msgstr[0] ""
  "[~0:The requested instance cannot be launched as you only have %(avail)i "
  "of your quota available. ~您好яшçあ~~]"

  It seems that Babel really wants the translated string to have
  the substitution variables from msgid_plural, but the string is based
  on msgid.
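
  One way to keep msgfmt happy, sketched with an invented helper: derive
  every pseudo-translated plural form from msgid_plural, so that each
  %(name)s placeholder msgfmt expects is present in every msgstr[n]:

    # -*- coding: utf-8 -*-
    def pseudo_plural_forms(msgid_plural, nplurals=2):
        # base the pseudo string on msgid_plural (not msgid) so that
        # placeholders such as %(req)i survive into every msgstr[n]
        return [u"[~{0}:{1}~您好яшçあ~~]".format(n, msgid_plural)
                for n in range(nplurals)]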

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1563005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562970] Re: Handling tox errors in networking-brocade

2016-03-28 Thread Henry Gessau
I believe this does not affect core neutron.

** Also affects: networking-brocade
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562970

Title:
  Handling tox errors in networking-brocade

Status in networking-brocade:
  New
Status in neutron:
  Invalid

Bug description:
  This will fix all the tox errors in networking-brocade specific to
  vyatta

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1562970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562970] [NEW] Handling tox errors in networking-brocade

2016-03-28 Thread bharath
Public bug reported:

This will fix all the tox errors in networking-brocade specific to
vyatta

** Affects: neutron
 Importance: Undecided
 Assignee: bharath (bharath-7)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => bharath (bharath-7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562970

Title:
  Handling tox errors in networking-brocade

Status in neutron:
  New

Bug description:
  This will fix all the tox errors in networking-brocade specific to
  vyatta

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562965] [NEW] liberty -> mitaka db migrate fails on postgresql 091 migration

2016-03-28 Thread Matthew Thode
Public bug reported:

It looks like this column does not exist in the result row:

https://github.com/openstack/keystone/blob/9.0.0.0rc1/keystone/common/sql/migrate_repo/versions/091_migrate_data_to_local_user_and_password_tables.py#L51

NoSuchColumnError: "Could not locate column in row for column
'user_password'"

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1562965

Title:
   liberty -> mitaka db migrate fails on postgresql 091 migration

Status in OpenStack Identity (keystone):
  New

Bug description:
  It looks like this column does not exist in the result row:

  
https://github.com/openstack/keystone/blob/9.0.0.0rc1/keystone/common/sql/migrate_repo/versions/091_migrate_data_to_local_user_and_password_tables.py#L51

  NoSuchColumnError: "Could not locate column in row for column
  'user_password'"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1562965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562934] Re: liberty -> mitaka db migrate fails on postgresql

2016-03-28 Thread Morgan Fainberg
This is a release blocker.

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
Milestone: None => newton-1

** Also affects: keystone/mitaka
   Importance: High
   Status: Triaged

** Also affects: keystone/newton
   Importance: Undecided
   Status: New

** Changed in: keystone/mitaka
Milestone: newton-1 => mitaka-rc1

** Changed in: keystone/newton
Milestone: None => newton-3

** Changed in: keystone/newton
   Status: New => Triaged

** Changed in: keystone/newton
   Importance: Undecided => High

** Changed in: keystone/mitaka
   Importance: High => Critical

** Changed in: keystone/newton
   Importance: High => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1562934

Title:
  liberty -> mitaka db migrate fails on postgresql

Status in OpenStack Identity (keystone):
  Triaged
Status in OpenStack Identity (keystone) mitaka series:
  Triaged
Status in OpenStack Identity (keystone) newton series:
  Triaged

Bug description:
  specifically 088_domain_specific_roles.py and
  091_migrate_data_to_local_user_and_password_tables.py

  I can't remember which section I had to comment out with the 091
  migration, but in the 088 one it was the following.

  migrate.UniqueConstraint(role_table.c.name,
   name=_ROLE_NAME_OLD_CONSTRAINT).drop()

  If it was easier to translate into actual sql statements I might be
  able to give more info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1562934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562945] [NEW] Change nova's devstack blacklist to use test uuids

2016-03-28 Thread Chuck Carmack
Public bug reported:

See https://bugs.launchpad.net/nova/+bug/1562323 for the background.  A
temporary workaround was made for that bug.  This bug will be used to
implement changing the nova devstack blacklist file to use the test
idempotent ids and not the test names.

** Affects: nova
 Importance: Undecided
 Assignee: Chuck Carmack (chuckcarmack75)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Chuck Carmack (chuckcarmack75)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562945

Title:
  Change nova's devstack blacklist to use test uuids

Status in OpenStack Compute (nova):
  New

Bug description:
  See https://bugs.launchpad.net/nova/+bug/1562323 for the background.
  A temporary workaround was made for that bug.  This bug will be used
  to implement changing the nova devstack blacklist file to use the test
  idempotent ids and not the test names.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548285] Re: l3 HA network management is racey

2016-03-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/282876
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7512d8aa26a945a695e889e0a97c6414cec6ac10
Submitter: Jenkins
Branch: master

commit 7512d8aa26a945a695e889e0a97c6414cec6ac10
Author: Kevin Benton 
Date:   Sun Feb 21 21:31:59 2016 -0800

Make L3 HA interface creation concurrency safe

This patch creates a function to handle the creation of the
L3HA interfaces for a router in a manner that handles the
HA network not existing or an existing one being  deleted
by another worker before the interfaces could be created.

Closes-Bug: #1548285
Change-Id: Ibac0c366362aa76615e448fbe11d6d6b031732fe


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548285

Title:
  l3 HA network management is racey

Status in neutron:
  Fix Released

Bug description:
  The logic surrounding the creation of the L3 HA network doesn't handle
  races where the network could be deleted after its existence is
  checked for. It also doesn't handle the case where the network doesn't
  exist but another creation happens before it gets to create the
  network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562934] [NEW] liberty -> mitaka db migrate fails on postgresql

2016-03-28 Thread Matthew Thode
Public bug reported:

specifically 088_domain_specific_roles.py and
091_migrate_data_to_local_user_and_password_tables.py

I can't remember which section I had to comment out with the 091
migration, but in the 088 one it was the following.

migrate.UniqueConstraint(role_table.c.name,
 name=_ROLE_NAME_OLD_CONSTRAINT).drop()

If it was easier to translate into actual sql statements I might be able
to give more info.

** Affects: keystone
 Importance: High
 Status: New


** Tags: mitaka-rc-potential postgres sql

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1562934

Title:
  liberty -> mitaka db migrate fails on postgresql

Status in OpenStack Identity (keystone):
  New

Bug description:
  specifically 088_domain_specific_roles.py and
  091_migrate_data_to_local_user_and_password_tables.py

  I can't remember which section I had to comment out with the 091
  migration, but in the 088 one it was the following.

  migrate.UniqueConstraint(role_table.c.name,
   name=_ROLE_NAME_OLD_CONSTRAINT).drop()

  If it was easier to translate into actual sql statements I might be
  able to give more info.
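
  Roughly, and assuming the old constraint name is 'ixu_role_name' (the
  usual value of _ROLE_NAME_OLD_CONSTRAINT), the failing drop translates
  to something like the following on PostgreSQL, where a UNIQUE
  constraint is a table constraint rather than a plain index:

    import sqlalchemy

    # placeholder connection URL
    engine = sqlalchemy.create_engine(
        'postgresql://keystone:secret@localhost/keystone')
    with engine.begin() as conn:
        conn.execute('ALTER TABLE role DROP CONSTRAINT ixu_role_name')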

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1562934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562914] [NEW] vendor-data not working with ConfigDrive in 0.7.5

2016-03-28 Thread Matt Dorn
Public bug reported:

vendor-data is not read with the ConfigDrive datasource in 0.7.5.

Mar 28 14:54:01 hi [CLOUDINIT] stages.py[DEBUG]: no vendordata from
datasource

Works properly with NoCloud and OpenStack datasources.

DistroRelease: Ubuntu 14.04
Package: 0.7.5-0ubuntu1.16
Uname: 3.13.0-79-generic x86_64

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1562914

Title:
  vendor-data not working with ConfigDrive in 0.7.5

Status in cloud-init:
  New

Bug description:
  vendor-data is not read with the ConfigDrive datasource in 0.7.5.

  Mar 28 14:54:01 hi [CLOUDINIT] stages.py[DEBUG]: no vendordata from
  datasource

  Works properly with NoCloud and OpenStack datasources.

  DistroRelease: Ubuntu 14.04
  Package: 0.7.5-0ubuntu1.16
  Uname: 3.13.0-79-generic x86_64

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1562914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562892] [NEW] L3 HA: HA networks remain existing after rally tests execution

2016-03-28 Thread Ann Kamyshnikova
Public bug reported:

After running rally tests, HA networks remain even though all HA routers
were deleted. For example, after running create_list_routers we have just
one router (which was not created by rally) and around 35 HA networks:
http://paste.openstack.org/show/491573/ There are no ports on these networks
and they can simply be deleted with the "neutron net-delete " command.

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562892

Title:
  L3 HA: HA networks remain existing after rally tests execution

Status in neutron:
  New

Bug description:
  After running rally tests, HA networks remain even though all HA routers
  were deleted. For example, after running create_list_routers we have just
  one router (which was not created by rally) and around 35 HA networks:
  http://paste.openstack.org/show/491573/ There are no ports on these
  networks and they can simply be deleted with the "neutron net-delete "
  command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374045] Re: Add v3 endpoint for identity in catalog

2016-03-28 Thread Steve Martinelli
Patch https://review.openstack.org/#/c/290897/ fixed this issue

** Changed in: keystone
 Assignee: Kenjiro Kosaka (k-kosaka) => Colleen Murphy (krinkle)

** Summary changed:

- Add v3 endpoint for identity in catalog
+ Add v3 endpoint for identity in catalog for sample data

** Changed in: keystone
Milestone: None => newton-1

** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1374045

Title:
  Add v3 endpoint for identity in catalog for sample data

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  This is a wish list.

  Since we are moving to v3, it is better to add a v3 endpoint in
  sample_data.sh. We still have only the v2.0 endpoint. I don't think
  keystoneclient will be affected since it doesn't use the endpoint from
  the catalog, but relies on version discovery.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1374045/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562887] [NEW] L3 HA: IpAddressGenerationFailure during rally test execution

2016-03-28 Thread Ann Kamyshnikova
Public bug reported:

Environment: 3 controllers, 46 computes, Liberty, L3 HA. During execution
of NeutronNetworks.create_and_delete_routers there are hundreds of traces
in the dhcp-agent logs: http://paste.openstack.org/show/491423/,
http://paste.openstack.org/show/491408/. The tests pass, but having these
traces in the logs is ugly.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562887

Title:
  L3 HA: IpAddressGenerationFailure during rally test execution

Status in neutron:
  New

Bug description:
  Environment: 3 controllers, 46 computes, Liberty, L3 HA. During
  execution of NeutronNetworks.create_and_delete_routers there are
  hundreds of traces in the dhcp-agent logs:
  http://paste.openstack.org/show/491423/,
  http://paste.openstack.org/show/491408/. The tests pass, but having
  these traces in the logs is ugly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561099] Re: keystone-manage looks for default_config_files in the wrong place

2016-03-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/296110
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=139f892fecf4ce645e3a7a6a7d1087a94f402f89
Submitter: Jenkins
Branch: master

commit 139f892fecf4ce645e3a7a6a7d1087a94f402f89
Author: Colleen Murphy 
Date:   Tue Mar 22 16:10:06 2016 -0700

Fix keystone-manage config file path

Without this patch, the keystone-manage command looks for a default
keystone.conf relative to the installed executable. In a developer's
case this is likely to be relative to <virtualenv>/bin/keystone-manage.
If installed via distro packages this will be something like
/usr/bin/keystone-manage. The keystone developer documentation
instructs the developer to copy the sample config file into the etc/
directory of the keystone source directory[1], which is not necessarily
related to where the keystone-manage executable is installed. This
patch causes the keystone-manage command to search for
etc/keystone.conf relative to the python source file,
keystone/cmd/manage.py, which will always be in the same place relative
to the keystone repo's etc/ directory. If installed via distro packages
this will cause keystone-manage to search for the config in something
like /usr/lib/python2.7/dist-packages, but since it falls back to
searching the standard oslo.cfg directories, the behavior won't change.

[1] 
http://docs.openstack.org/developer/keystone/developing.html#configuring-keystone

Closes-bug: #1561099

Change-Id: Icf9caac030e62deb17ce5df3a82737b408591ac0


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1561099

Title:
  keystone-manage looks for default_config_files in the wrong place

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Summary:

  The keystone-manage command searches for a default keystone.conf
  relative to the installed executable [1]. The result is that it will
  look in <bindir>/../etc/keystone.conf. Failing to find it there, it
  will search the standard oslo.cfg directories: ~/.keystone/, ~/,
  /etc/keystone/, /etc/.

  I can't find documentation stating keystone.conf should live at
  <bindir>/../etc/keystone.conf. I can find documentation saying it should
  live in the etc/ directory of the keystone source directory[2], and I
  can find documentation saying it should live in one of the oslo.cfg
  directories[3]. If keystone-manage searched for keystone.conf relative
  to the python source file keystone/cmd/manage.py rather than the
  installed binary, the instructions at [2] would work correctly and [3]
  would still work as a fallback.

  Steps to reproduce:

  1) Follow the "Developing with Keystone" instructions
  (http://docs.openstack.org/developer/keystone/developing.html),
  copying etc/keystone.conf.sample to etc/keystone.conf.

  2) Change the database connection string in etc/keystone.conf to
  sqlite:///keystone2.db

  3) Run a keystone-manage db_sync

  Expected result:

  A sqlite database is created in the current working directory called
  keystone2.db

  Actual result:

  A sqlite database is created in the current working directory called
  keystone.db.

  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/cmd/manage.py#n23
  [2] 
http://docs.openstack.org/developer/keystone/developing.html#configuring-keystone
  [3] 
http://docs.openstack.org/developer/keystone/configuration.html#configuration-files
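
  A sketch of the resolution described in the commit message above (the
  directory depth is assumed; the real patch may differ):

    import os

    # etc/keystone.conf relative to keystone/cmd/manage.py, i.e. the
    # repo checkout, not the installed executable
    DEVELOPMENT_ENVIRONMENT_DIR = os.path.normpath(
        os.path.join(os.path.dirname(__file__), '..', '..'))
    DEFAULT_CONFIG_FILE = os.path.join(
        DEVELOPMENT_ENVIRONMENT_DIR, 'etc', 'keystone.conf')

    config_files = None
    if os.path.exists(DEFAULT_CONFIG_FILE):
        config_files = [DEFAULT_CONFIG_FILE]
    # with config_files left as None, oslo.config falls back to its
    # standard search path (~/.keystone/, ~/, /etc/keystone/, /etc/)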

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1561099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562878] [NEW] L3 HA: Unable to complete operation on subnet

2016-03-28 Thread Ann Kamyshnikova
Public bug reported:

Environment: 3 controllers, 46 computes, Liberty, L3 HA. During execution
of NeutronNetworks.create_and_delete_routers the test failed several times
with "Unable to complete operation on subnet . One or more ports have an IP
allocation from this subnet." Trace in the neutron-server logs:
http://paste.openstack.org/show/491557/
Rally report attached.

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: l3-ha

** Attachment added: "create_delete_routers L3 HA report"
   
https://bugs.launchpad.net/bugs/1562878/+attachment/4614887/+files/create_delete_200_70.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562878

Title:
  L3 HA: Unable to complete operation on subnet

Status in neutron:
  New

Bug description:
  Environment: 3 controllers, 46 computes, Liberty, L3 HA. During execution
  of NeutronNetworks.create_and_delete_routers the test failed several
  times with "Unable to complete operation on subnet . One or more ports
  have an IP allocation from this subnet." Trace in the neutron-server
  logs: http://paste.openstack.org/show/491557/
  Rally report attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562876] [NEW] Deadlock 'DELETE FROM ipallocationpools' during rally test_create_delete_routers

2016-03-28 Thread Ann Kamyshnikova
Public bug reported:

Environment: 3 controllers, 46 computes, Liberty, L3 HA. During execution
of NeutronNetworks.create_and_delete_routers the test failed several times
with a deadlock error in the neutron-server logs:
http://paste.openstack.org/show/491572/.

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562876

Title:
  Deadlock 'DELETE FROM ipallocationpools' during rally
  test_create_delete_routers

Status in neutron:
  New

Bug description:
  Environment: 3 controllers, 46 computes, Liberty, L3 HA. During
  execution of NeutronNetworks.create_and_delete_routers the test failed
  several times with a deadlock error in the neutron-server logs:
  http://paste.openstack.org/show/491572/.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562863] Re: The command "nova-manage db archive_deleted_rows" doesn't move the entries to the shadow tables.

2016-03-28 Thread Matt Riedemann
*** This bug is a duplicate of bug 1183523 ***
https://bugs.launchpad.net/bugs/1183523

This is a known broken issue, see the mailing list thread here:

http://lists.openstack.org/pipermail/openstack-
dev/2015-November/079701.html

** This bug has been marked a duplicate of bug 1183523
   db-archiving fails to clear some deleted rows from instances table

-- 
You received this bug notification because you are a member of Nova,
which is a bug assignee.
https://bugs.launchpad.net/bugs/1562863

Title:
  The command "nova-manage db archive_deleted_rows" doesn't move the
  entries to the shadow tables.

Status in Mirantis OpenStack:
  New

Bug description:
  Once an instance is deleted from the cloud its entry in the database is still 
present. 
  When trying to archive some number of deleted rows with "nova-manage db 
archive_deleted_rows --max-rows 1", a constraint error is displayed in nova 
logs.

  Expected results:
  Specified deleted rows number should be moved from production tables to 
shadow tables.

  Actual result:
  db-archiving fails to archive specified number of deleted rows with 
"nova-manage db archive_deleted_rows --max-rows 1", a constraint error is 
displayed in nova logs:

  2016-03-28 12:31:43.751 14147 ERROR oslo_db.sqlalchemy.exc_filters 
[req-0fd152a6-b298-4c3c-9aa7-ebdadc752b89 - - - - -] DBAPIError exception 
wrapped from (IntegrityError) (1451, 'Cannot delete or update a parent row: a 
foreign key constraint fails (`nova`.`block_device_mapping`, CONSTRAINT 
`block_device_mapping_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) 
REFERENCES `instances` (`uuid`))') 'DELETE FROM instances WHERE instances.id in 
(SELECT T1.id FROM (SELECT instances.id \nFROM instances \nWHERE 
instances.deleted != %s ORDER BY instances.id \n LIMIT %s) as T1)' (0, 1)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 951, in 
_execute_context
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
context)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 436, in 
do_execute
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 205, in execute
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
IntegrityError: (1451, 'Cannot delete or update a parent row: a foreign key 
constraint fails (`nova`.`block_device_mapping`, CONSTRAINT 
`block_device_mapping_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) 
REFERENCES `instances` (`uuid`))')
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
  2016-03-28 12:31:43.753 14147 WARNING nova.db.sqlalchemy.api 
[req-0fd152a6-b298-4c3c-9aa7-ebdadc752b89 - - - - -] IntegrityError detected 
when archiving table instances

  
  Steps to reproduce:
  1) create instance
  2) delete instance
  3) run command 'nova-manage db archive_deleted_rows --max_rows 1'

  Description of the environment:
  MOS: 7.0
  OS: Ubuntu 14.04 
  Reference architecture: HA
  Network model: Neutron VLAN
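
  An illustrative sketch of the ordering issue (raw SQL, not the actual
  nova-manage code; real archiving also copies rows into the shadow
  tables before deleting them):

    import sqlalchemy

    # placeholder connection URL
    engine = sqlalchemy.create_engine('mysql://nova:secret@localhost/nova')
    with engine.begin() as conn:
        # child rows referencing instances.uuid must go first ...
        conn.execute("DELETE FROM block_device_mapping "
                     "WHERE instance_uuid IN "
                     "(SELECT uuid FROM instances WHERE deleted != 0)")
        # ... otherwise deleting the parent rows trips the FK constraint
        conn.execute("DELETE FROM instances WHERE deleted != 0")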

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1562863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562863] [NEW] The command "nova-manage db archive_deleted_rows" doesn't move the entries to the shadow tables.

2016-03-28 Thread Oleksandr Bryndzii
Public bug reported:

Once an instance is deleted from the cloud its entry in the database is still 
present. 
When trying to archive some number of deleted rows with "nova-manage db 
archive_deleted_rows --max-rows 1", a constraint error is displayed in nova 
logs.

Expected results:
Specified deleted rows number should be moved from production tables to shadow 
tables.

Actual result:
db archiving fails to archive the specified number of deleted rows with 
"nova-manage db archive_deleted_rows --max_rows 1"; a constraint error is 
displayed in the nova logs:

2016-03-28 12:31:43.751 14147 ERROR oslo_db.sqlalchemy.exc_filters 
[req-0fd152a6-b298-4c3c-9aa7-ebdadc752b89 - - - - -] DBAPIError exception 
wrapped from (IntegrityError) (1451, 'Cannot delete or update a parent row: a 
foreign key constraint fails (`nova`.`block_device_mapping`, CONSTRAINT 
`block_device_mapping_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) 
REFERENCES `instances` (`uuid`))') 'DELETE FROM instances WHERE instances.id in 
(SELECT T1.id FROM (SELECT instances.id \nFROM instances \nWHERE 
instances.deleted != %s ORDER BY instances.id \n LIMIT %s) as T1)' (0, 1)
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 951, in 
_execute_context
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters context)
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 436, in 
do_execute
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 205, in execute
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
IntegrityError: (1451, 'Cannot delete or update a parent row: a foreign key 
constraint fails (`nova`.`block_device_mapping`, CONSTRAINT 
`block_device_mapping_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) 
REFERENCES `instances` (`uuid`))')
2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
2016-03-28 12:31:43.753 14147 WARNING nova.db.sqlalchemy.api 
[req-0fd152a6-b298-4c3c-9aa7-ebdadc752b89 - - - - -] IntegrityError detected 
when archiving table instances


Steps to reproduce:
1) create instance
2) delete instance
3) run command 'nova-manage db archive_deleted_rows --max_rows 1'

Description of the environment:
MOS: 7.0
OS: Ubuntu 14.04 
Reference architecture: HA
Network model: Neutron VLAN
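
A minimal sketch of the ordering that avoids this IntegrityError, assuming
SQLAlchemy reflection and shadow_* counterparts of the production tables; the
helper names here are hypothetical, not nova's actual implementation. The
point is that referencing tables such as block_device_mapping must be
archived before the instances rows they point at:

    from sqlalchemy import MetaData, select

    def archive_one_table(conn, table, shadow, max_rows):
        # Copy up to max_rows soft-deleted rows into the shadow table,
        # then delete them from the production table.
        rows = conn.execute(
            select([table]).where(table.c.deleted != 0).limit(max_rows)
        ).fetchall()
        if rows:
            conn.execute(shadow.insert(), [dict(r) for r in rows])
            ids = [r['id'] for r in rows]
            conn.execute(table.delete().where(table.c.id.in_(ids)))
        return len(rows)

    def archive_deleted_rows(engine, max_rows):
        meta = MetaData(bind=engine)
        meta.reflect()
        # Children first: the FK block_device_mapping.instance_uuid ->
        # instances.uuid must never be left dangling.
        for name in ('block_device_mapping', 'instances'):
            with engine.begin() as conn:
                archive_one_table(conn, meta.tables[name],
                                  meta.tables['shadow_' + name], max_rows)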

** Affects: mos
 Importance: High
 Assignee: Nova (nova)
 Status: New


** Tags: customer-found support

-- 
You received this bug notification because you are a member of Nova,
which is a bug assignee.
https://bugs.launchpad.net/bugs/1562863

Title:
  The command "nova-manage db archive_deleted_rows" doesn't move the
  entries to the shadow tables.

Status in Mirantis OpenStack:
  New

Bug description:
  Once an instance is deleted from the cloud, its entry in the database is 
still present.
  When trying to archive some number of deleted rows with "nova-manage db 
archive_deleted_rows --max_rows 1", a constraint error is displayed in the 
nova logs.

  Expected results:
  The specified number of deleted rows should be moved from the production 
tables to the shadow tables.

  Actual result:
  db archiving fails to archive the specified number of deleted rows with 
"nova-manage db archive_deleted_rows --max_rows 1"; a constraint error is 
displayed in the nova logs:

  2016-03-28 12:31:43.751 14147 ERROR oslo_db.sqlalchemy.exc_filters 
[req-0fd152a6-b298-4c3c-9aa7-ebdadc752b89 - - - - -] DBAPIError exception 
wrapped from (IntegrityError) (1451, 'Cannot delete or update a parent row: a 
foreign key constraint fails (`nova`.`block_device_mapping`, CONSTRAINT 
`block_device_mapping_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) 
REFERENCES `instances` (`uuid`))') 'DELETE FROM instances WHERE instances.id in 
(SELECT T1.id FROM (SELECT instances.id \nFROM instances \nWHERE 
instances.deleted != %s ORDER BY instances.id \n LIMIT %s) as T1)' (0, 1)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 951, in 
_execute_context
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sql

[Yahoo-eng-team] [Bug 1562842] [NEW] instance going to scheduler when invalid v4-fixed-ip is provided in nova boot command

2016-03-28 Thread Abhilash Goyal
Public bug reported:

When an invalid v4-fixed-ip is provided in the nova boot command, the
instance is queued with the scheduler but fails to boot. Instead, there
should be a check on the client or server side and an error should be
returned, so that the VM is never queued with the scheduler.
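
A hedged sketch of the kind of early check being asked for, using netaddr;
the function name and call site are assumptions, not existing nova or
novaclient code:

    import netaddr

    def validate_fixed_ip(fixed_ip, subnet_cidrs):
        # Accept the address only if it lies inside some subnet on the
        # network and is not that subnet's network/broadcast address.
        ip = netaddr.IPAddress(fixed_ip)
        for cidr in subnet_cidrs:
            net = netaddr.IPNetwork(cidr)
            if ip in net and ip not in (net.network, net.broadcast):
                return
        raise ValueError('%s is not a usable fixed IP on this network'
                         % fixed_ip)

    # validate_fixed_ip('1.0.0.255', ['172.24.4.0/24']) raises before
    # the request is ever queued with the scheduler.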


neutron net-list
+--------------------------------------+---------+----------------------------------------------------------+
| id                                   | name    | subnets                                                  |
+--------------------------------------+---------+----------------------------------------------------------+
| 97adf977-8b62-4996-a800-bbbdaf9c0fd9 | public  | 35364205-3b7d-47db-8bf3-e1590416d9f1 2001:db8::/64       |
|                                      |         | ddacd805-5e07-4e63-a827-fd094c61e84a 172.24.4.0/24       |
| f9976ad3-33fb-44a4-b25f-e8356de9e7d2 | private | 6427f306-2646-479e-b47b-5a115f020d1c 10.0.0.0/24         |
|                                      |         | 4de2d8be-0c1a-4b9c-aabf-3a896a7c67c0 fd07:4910:1fa6::/64 |
+--------------------------------------+---------+----------------------------------------------------------+


nova boot abi_1 --image a258964a-4250-4b6d-9fe9-492ac2b3d8da --flavor m1.tiny 
--nic net-id=97adf977-8b62-4996-a800-bbbdaf9c0fd9,v4-fixed-ip=1.0.0.255
(note: 1.0.0.255 is a broadcast IP, so it should be invalid.)

+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          |                                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                              |
| OS-EXT-SRV-ATTR:hostname             | abi-1                                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-004e                                                  |
| OS-EXT-SRV-ATTR:kernel_id            | e2a5fdda-7422-4e41-b59a-fa2dad82583e                           |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
| OS-EXT-SRV-ATTR:ramdisk_id           | 4ea6bc99-2c01-459b-9ce1-b6bc6a51ad79                           |
| OS-EXT-SRV-ATTR:reservation_id       | r-wt958hij                                                     |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                              |
| OS-EXT-SRV-ATTR:user_data            | -                                                              |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | scheduling                                                     |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | FtPf4XB3exjh                                                   |
| config_drive                         |                                                                |
| created                              | 2016-03-28T12:33:13Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               |                                                                |
| id                                   | fb037e34-7372-45ee-88b6-7dc1d228fe46                           |
| image                                | cirros-0.3.4-x86_64-uec (a258964a-4250-4b6d-9fe9-492ac2b3d8da) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name                                 | abi_1                                                          |
| os-extended-volumes:volumes_attached | []                                                             |
| progress

[Yahoo-eng-team] [Bug 1562834] [NEW] There are some trivial errors in doc/tutorials/plugin.rst

2016-03-28 Thread Wang Bo
Public bug reported:

1. ADD_JS_FILES is not mentioned, only ADD_SCSS_FILES (see the sketch below).
http://docs.openstack.org/developer/horizon/tutorials/plugin.html#file-structure

http://docs.openstack.org/developer/horizon/tutorials/plugin.html#mypanel-scss
2. Static files under static/ can be discovered automatically.
3. "div" should not be in the scss file.
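
A short sketch of the enabled-file settings the tutorial should mention; the
file name and plugin paths are illustrative, not the tutorial's exact ones:

    # _90_project_mypanel.py (filename illustrative)
    ADD_SCSS_FILES = ['dashboard/identity/mypanel/mypanel.scss']
    # The setting item 1 says is missing from the doc:
    ADD_JS_FILES = ['dashboard/identity/mypanel/mypanel.js']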

** Affects: horizon
 Importance: Undecided
 Assignee: Wang Bo (chestack)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1562834

Title:
  There are some trivial errors in doc/tutorials/plugin.rst

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. ADD_JS_FILES is not mentioned, only ADD_SCSS_FILES
  http://docs.openstack.org/developer/horizon/tutorials/plugin.html#file-structure

  http://docs.openstack.org/developer/horizon/tutorials/plugin.html#mypanel-scss
  2. Static files under static/ can be discovered automatically.
  3. "div" should not be in the scss file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1562834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549370] Re: Existing connections are not dropped with ovs-firewall when rule is removed

2016-03-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/284259
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4f6aa3ffde2fd68b85bc5dfdaf6c2684931f3f61
Submitter: Jenkins
Branch:master

commit 4f6aa3ffde2fd68b85bc5dfdaf6c2684931f3f61
Author: Jakub Libosvar 
Date:   Wed Feb 24 16:34:07 2016 +

ovs-fw: Mark conntrack entries invalid if no rule is matched

This patch makes sure that an existing connection breaks once the
security group rule that allowed it is removed. In order to correctly
track connections on the same hypervisor, zones were changed from
per-port to per-network (based on the port's vlan tag). This information
is now stored in register 6. A check for RELATED connections was also
added to avoid REPLY rules marking such connections as invalid.

Closes-Bug: 1549370
Change-Id: Ibb5942a980ddd8f2dd7ac328e9559a80c05789bb
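
An illustrative sketch of the mechanism the commit message describes; the
table number, priorities, and flow fields below are assumptions for the
example, not copied from the patch:

    def install_invalidation_flows(br, vlan_tag):
        # br: an ovs_lib.OVSBridge; vlan_tag: the port's local VLAN,
        # which the firewall also carries in register 6 as the
        # per-network conntrack zone.
        # Drop reply traffic whose conntrack entry was marked bad, so
        # the connection breaks once its allowing rule is gone.
        br.add_flow(table=82, priority=50, reg6=vlan_tag,
                    ct_state='+est-rel-inv', ct_mark=1, actions='drop')
        # Established traffic that matched no remaining security group
        # rule falls through here and has its entry marked invalid.
        br.add_flow(table=82, priority=40, reg6=vlan_tag,
                    ct_state='+est',
                    actions='ct(commit,zone=NXM_NX_REG6[0..15],'
                            'exec(load:0x1->NXM_NX_CT_MARK[])),drop')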


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549370

Title:
  Existing connections are not dropped with ovs-firewall when rule is
  removed

Status in neutron:
  Fix Released

Bug description:
  When a rule that allows some traffic is removed from a security group, all
existing connections going to a port using this rule should be cut.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1549370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562777] [NEW] LiveMigration should check whether the destination has enough vcpus

2016-03-28 Thread javeme
Public bug reported:

During live migration we currently check whether the destination host has 
enough memory, but we do not check whether it has enough vcpus.
This can allow too many VMs to be migrated to one host, degrading the 
performance of the VMs on that host (such as io/vnc).
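
A hedged sketch of the missing check, mirroring the existing memory check on
the destination; the helper name and its inputs are hypothetical, though
nova.exception.MigrationPreCheckError itself exists:

    from nova import exception

    def _assert_destination_has_enough_vcpus(instance, dst_compute):
        # dst_compute is assumed to be the destination's ComputeNode
        # record, which already tracks vcpus and vcpus_used.
        free_vcpus = dst_compute.vcpus - dst_compute.vcpus_used
        if instance.vcpus > free_vcpus:
            reason = ('Unable to migrate %(uuid)s: destination has only '
                      '%(free)d free vcpus, instance needs %(need)d' %
                      {'uuid': instance.uuid, 'free': free_vcpus,
                       'need': instance.vcpus})
            raise exception.MigrationPreCheckError(reason=reason)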

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562777

Title:
  LiveMigration should check whether the destination has enough vcpus

Status in OpenStack Compute (nova):
  New

Bug description:
  During live migration we currently check whether the destination host has 
enough memory, but we do not check whether it has enough vcpus.
  This can allow too many VMs to be migrated to one host, degrading the 
performance of the VMs on that host (such as io/vnc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562758] [NEW] generate a uuid for image id

2016-03-28 Thread zwei
Public bug reported:

Why not use the `oslo_utils.uuidutils.generate_uuid` function?

    def new_image(self, image_id=None, name=None, visibility='private',
                  min_disk=0, min_ram=0, protected=False, owner=None,
                  disk_format=None, container_format=None,
                  extra_properties=None, tags=None, **other_args):
        extra_properties = extra_properties or {}
        self._check_readonly(other_args)
        self._check_unexpected(other_args)
        self._check_reserved(extra_properties)

        if image_id is None:
            image_id = str(uuid.uuid4())

        created_at = timeutils.utcnow()
        updated_at = created_at
        status = 'queued'

I think:
    image_id = uuidutils.generate_uuid()
instead of:
    image_id = str(uuid.uuid4())
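
A small sketch of the suggested change; both calls produce an equivalent
random UUID string, generate_uuid() is just the oslo_utils spelling:

    import uuid
    from oslo_utils import uuidutils

    image_id = uuidutils.generate_uuid()   # suggested helper
    legacy_id = str(uuid.uuid4())          # what glance does today
    assert uuidutils.is_uuid_like(image_id)
    assert uuidutils.is_uuid_like(legacy_id)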

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1562758

Title:
  generate a uuid for image id

Status in Glance:
  New

Bug description:
  Why not use the `oslo_utils.uuidutils.generate_uuid` function?

    def new_image(self, image_id=None, name=None, visibility='private',
                  min_disk=0, min_ram=0, protected=False, owner=None,
                  disk_format=None, container_format=None,
                  extra_properties=None, tags=None, **other_args):
        extra_properties = extra_properties or {}
        self._check_readonly(other_args)
        self._check_unexpected(other_args)
        self._check_reserved(extra_properties)

        if image_id is None:
            image_id = str(uuid.uuid4())

        created_at = timeutils.utcnow()
        updated_at = created_at
        status = 'queued'

  I think:
      image_id = uuidutils.generate_uuid()
  instead of:
      image_id = str(uuid.uuid4())

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1562758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562731] [NEW] Using LOG.warning replace LOG.warn

2016-03-28 Thread Nguyen Phuong An
Public bug reported:

Python 3 deprecated the logger.warn method (see
https://docs.python.org/3/library/logging.html#logging.warning),
so I prefer to use warning to avoid a DeprecationWarning in
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/keystone.py#L222
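
A minimal illustration of the change; logger.warning is the supported
spelling on both Python 2 and Python 3:

    import logging

    LOG = logging.getLogger(__name__)

    LOG.warn('deprecated alias; emits DeprecationWarning under Python 3')
    LOG.warning('preferred spelling; identical output, no warning')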

** Affects: horizon
 Importance: Undecided
 Assignee: Nguyen Phuong An (annp)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Nguyen Phuong An (annp)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1562731

Title:
  Using LOG.warning replace LOG.warn

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Python 3 deprecated the logger.warn method (see
  https://docs.python.org/3/library/logging.html#logging.warning),
  so I prefer to use warning to avoid a DeprecationWarning in
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/keystone.py#L222

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1562731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp