[Yahoo-eng-team] [Bug 1632768] Re: rootwrap daemon with libvirt/xen not working

2016-10-12 Thread Thomas Bechtold
Looks like this is an oslo.rootwrap bug; executing an unknown command
is not covered by the test suite.
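For context, the rootwrap daemon only executes commands that match a configured filter; a command with no matching filter, such as the `xend status` call in the log below, hits the unknown-command path that the test suite does not exercise. A hypothetical filter entry (illustrative only, not nova's actual compute.filters) looks like:

```
[Filters]
# Hypothetical entry: only commands matching a filter may run as root.
# "xend status" with no matching filter exercises the unknown-command
# error path in the daemon.
xend: CommandFilter, xend, root
```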

** Also affects: oslo.rootwrap
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632768

Title:
  rootwrap daemon with libvirt/xen not working

Status in OpenStack Compute (nova):
  Incomplete
Status in oslo.rootwrap:
  In Progress

Bug description:
  Using:
  - SLE12SP1
  - xen 4.7
  - nova 13.1.2.dev68 (stable-mitaka tarball)

  
  When configuring nova-compute to use the rootwrap daemon, with Xen and 
libvirt as the hypervisor, I get the following error when booting a VM:

  2016-10-12 15:54:34.216 17936 INFO nova.compute.claims 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] Claim successful
  2016-10-12 15:54:34.458 17936 INFO nova.virt.osinfo 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Cannot load Libosinfo: (No module named 
Libosinfo)
  2016-10-12 15:54:34.479 17936 WARNING oslo_config.cfg 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Option "username" from group "neutron" 
is deprecated. Use option "user-name" from group "neutron".
  2016-10-12 15:54:34.751 17936 INFO nova.virt.libvirt.driver 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] Creating image
  2016-10-12 15:54:34.758 17936 INFO nova.utils 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Executing RootwrapDaemonHelper.execute 
cmd=[u'touch -c 
/var/lib/nova/instances/_base/309100c6d00d13edba007a0dde00e9889ce0410a'] 
kwargs=[{'run_as_root': True}]
  2016-10-12 15:54:34.795 17936 INFO oslo_rootwrap.client 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Spawned new rootwrap daemon process 
with pid=17984
  2016-10-12 15:54:36.060 17936 INFO nova.utils 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Executing RootwrapDaemonHelper.execute 
cmd=[u'xend status'] kwargs=[{'run_as_root': True}]
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] Instance failed to spawn
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] Traceback (most recent call last):
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in 
_build_resources
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] yield resources
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in 
_build_and_run_instance
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] block_device_info=block_device_info)
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2790, in 
spawn
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] write_to_disk=True)
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4746, in 
_get_guest_xml
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] context)
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4605, in 
_get_guest_config
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] flavor, guest.os_type)
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3640, in 
_get_guest_storage_config
  2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] 

[Yahoo-eng-team] [Bug 1631319] Re: Can't deploy overcloud of Mitaka on CentOS

2016-10-12 Thread Steve Martinelli
Thanks for the quick analysis here, Ben. Looking at Newton and future
releases, if you are using the "keystone-manage bootstrap" option to
set up keystone, then the domain ID won't be "default"; it'll be some
UUID. Your best bet going forward is to use the domain name only; it'll
always be "Default" (capital D).

Marking this as Invalid for keystone.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1631319

Title:
  Can't deploy overcloud of Mitaka on CentOS

Status in OpenStack Identity (keystone):
  Invalid
Status in tripleo:
  Triaged

Bug description:
  CentOS 7.2
  Undercloud deployed normally by tripleo instructions.

  keystone -
  Version : 9.2.1
  Release : 0.20161007011449.012bc3d.el7.centos

  heat-api -
  Version : 6.0.1
  Release : 0.20160829124409.ed46562.el7.centos

  These version numbers show that the undercloud was installed from the Mitaka 
repository.

  But overcloud deploy fails with error -
  [stack@myhost ~]$ openstack overcloud deploy --templates 
--neutron-tunnel-types vxlan --neutron-network-type vxlan --ntp-server 
pool.ntp.org   --control-scale 1 --compute-scale 1 --block-storage-scale 3   
--control-flavor control --compute-flavor compute --block-storage-flavor 
block-storage   -e overcloud/scaleio-env.yaml
  Deploying templates in the directory 
/usr/share/openstack-tripleo-heat-templates
  ERROR: Authorization failed.

  
  The keystone logs contain the error: domain Default can not be found

  
  Workaround: change the domain from 'Default' to 'default' in heat-api.conf; 
the overcloud deploy can then be started.
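A hedged sketch of the ID-versus-name distinction behind the workaround (the section and option names below are illustrative and vary by service and release; only the value semantics matter):

```
[keystone_authtoken]
# Illustrative only. Keystone's default domain usually has ID "default"
# (lowercase) but name "Default" (capital D), and under
# "keystone-manage bootstrap" the ID may be a UUID. Configuring by
# *name* is therefore the stable choice.
user_domain_name = Default
project_domain_name = Default
```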

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1631319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274581] Re: keystone ldap identity backend will not work without TLS_CACERT path specified in an ldap.conf file

2016-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/379334
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=ac04a51db218215988a54e248b1ac14bc557e1c6
Submitter: Jenkins
Branch: master

commit ac04a51db218215988a54e248b1ac14bc557e1c6
Author: Annapoornima Koppad 
Date:   Thu Sep 29 15:27:34 2016 +0530

Updating the document regarding LDAP options

Closes-bug: #1274581

Change-Id: I3e334b7290745f3e0cdaaf05b07e942929acff04


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1274581

Title:
  keystone ldap identity backend will not work without TLS_CACERT path
  specified in an ldap.conf file

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  I'm on Ubuntu 12.04 using havana 2013.2.1. What I've found is that the
  LDAP identity backend for keystone will not talk to my LDAP server
  (using ldaps) unless I have an ldap.conf that contains a TLS_CACERT
  line. This line duplicates the setting of tls_cacertfile in my
  keystone conf and therefore I don't see why it should be required. The
  rest of my /etc/ldap/ldap.conf file is default/commented out. When I
  don't have this line set I get a SERVER_DOWN error. I am using LDAP
  from a FreeIPA server if that matters.

  Error message from the logs:
  2014-01-30 16:24:17.168 21174 TRACE keystone.common.wsgi SERVER_DOWN: 
{'info': '(unknown error code)', 'desc': "Can't contact LDAP server"}

  and from the CLI:
  Authorization Failed: An unexpected error prevented the server from 
fulfilling your request. {'info': '(unknown error code)', 'desc': "Can't 
contact LDAP server"} (HTTP 500)

  Below are relevant sections of my configs:

  /etc/ldap/ldap.conf:
  #
  # LDAP Defaults
  #

  # See ldap.conf(5) for details
  # This file should be world readable but not world writable.

  #BASE   dc=example,dc=com
  #URIldap://ldap.example.com ldap://ldap-master.example.com:666

  #SIZELIMIT  12
  #TIMELIMIT  15
  #DEREF  never

  # TLS certificates (needed for GnuTLS)
  TLS_CACERT  /etc/ssl/certs/ca-certificates.crt

  -

  keystone.conf:

  [identity]
  driver = keystone.identity.backends.ldap.Identity
  ...
  [ldap]
  url = ldaps://ldap.example.com:636
  user = uid=mfischer,cn=users,cn=accounts,dc=example,dc=com
  password = GoBroncos

  ...
  use_tls = False
  tls_cacertfile = /etc/ssl/certs/ca-certificates.crt
  # tls_cacertdir =
  tls_req_cert = demand

  -

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1274581/+subscriptions



[Yahoo-eng-team] [Bug 1632924] [NEW] Lingering sql backend role assignments after deletion of ldap user.

2016-10-12 Thread Alberto Laporte
Public bug reported:

Greetings all,


There is currently an issue in an OpenStack Liberty environment where the 
keystone configuration uses the LDAP driver for identity and the SQL driver for 
role assignments. When an LDAP user is removed, the ID for that user (actor_id) 
remains in the keystone.assignment table. This was discovered when attempting 
to list users on a specific project where a former LDAP user existed: the 
openstack client abruptly exits with an exception [1] because the user ID can 
no longer be resolved (it was deleted from LDAP) while its role assignments 
remain in the keystone.assignment table. A similar bug was found [2]; however, 
that one covers the case where both the identity and assignment drivers use 
LDAP, whereas in this case identity is LDAP and assignment is SQL.


Environment details:
Openstack Version: 12.2.0(Liberty)
Keystone Version: 8.1.2
identity driver: ldap
assignment driver: sql


[0]

MariaDB [keystone]> select * from assignment where 
actor_id='50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47';
+-------------+------------------------------------------------------------------+----------------------------------+----------------------------------+-----------+
| type        | actor_id                                                         | target_id                        | role_id                          | inherited |
+-------------+------------------------------------------------------------------+----------------------------------+----------------------------------+-----------+
| UserProject | 50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47 | 14b2bc91832e455491a9fd4a42c8b19c | 9fe2ff9ee4384b1894a90878d3e92bab |         0 |
| UserProject | 50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47 | 14b2bc91832e455491a9fd4a42c8b19c | bffeb621920e40feb18ce2c28b07d1a1 |         0 |
+-------------+------------------------------------------------------------------+----------------------------------+----------------------------------+-----------+
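The failure mode can be sketched with an in-memory sqlite3 database; only the columns shown in the query above are modeled, and all IDs here are invented. Because the identity backend (LDAP) and the assignment backend (SQL) are separate, deleting the user in LDAP leaves its assignment rows behind:

```python
# Minimal sketch of the lingering-assignment problem, using invented IDs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE assignment ("
    "type TEXT, actor_id TEXT, target_id TEXT, role_id TEXT, inherited INTEGER)"
)
# Two role assignments for a user whose identity lives only in LDAP.
conn.executemany(
    "INSERT INTO assignment VALUES ('UserProject', ?, ?, ?, 0)",
    [
        ("ldap-user-1", "project-a", "role-member"),
        ("ldap-user-1", "project-a", "role-admin"),
    ],
)
# Deleting the user happens in LDAP, so nothing touches this table.
# Listing users on the project then resolves actor_ids that no longer exist.
orphans = conn.execute(
    "SELECT role_id FROM assignment WHERE actor_id = 'ldap-user-1'"
).fetchall()
print(len(orphans))  # prints 2: both stale assignments remain
```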

[1]

Request returned failure status: 401
Could not find resource 
50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 374, in 
run_subcommand
result = cmd.run(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 92, in 
run
column_names, data = self.take_action(parsed_args)
  File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/common/utils.py", line 
45, in wrapper
return func(self, *args, **kwargs)
  File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/identity/v3/user.py", 
line 251, in take_action
user = utils.find_resource(identity_client.users, user_id)
  File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/common/utils.py", line 
141, in find_resource
raise exceptions.CommandError(msg)
CommandError: Could not find resource 
50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47
clean_up ListUser: Could not find resource 
50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/openstackclient/shell.py", line 
112, in run
ret_val = super(OpenStackShell, self).run(argv)
  File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 255, in run
result = self.run_subcommand(remainder)
  File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 374, in 
run_subcommand
result = cmd.run(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 92, in 
run
column_names, data = self.take_action(parsed_args)
  File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/common/utils.py", line 
45, in wrapper
return func(self, *args, **kwargs)
  File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/identity/v3/user.py", 
line 251, in take_action
user = utils.find_resource(identity_client.users, user_id)
  File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/common/utils.py", line 
141, in find_resource
raise exceptions.CommandError(msg)
CommandError: Could not find resource 
50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47

END return value: 1


[2]
https://bugs.launchpad.net/keystone/+bug/1366211

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1632924

Title:
  Lingering sql backend role assignments after deletion of ldap user.

Status in OpenStack Identity (keystone):
  New

Bug description:
  Greetings all,

  
  There is currently an issue in an Openstack Liberty environment where the 
keystone confi

[Yahoo-eng-team] [Bug 1630092] Re: Admin password reset should be exempt from password history validation

2016-10-12 Thread Dolph Mathews
** Also affects: keystone/newton
   Importance: Undecided
   Status: New

** Changed in: keystone/newton
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1630092

Title:
  Admin password reset should be exempt from password history validation

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  New

Bug description:
  In Newton, we added password history validation for all password
  changes. However, for administrative password resets, we shouldn't
  validate against the end-user's password history.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1630092/+subscriptions



[Yahoo-eng-team] [Bug 1632884] [NEW] Missing a slaac ipv6 address mode

2016-10-12 Thread Ying Zuo
Public bug reported:

Currently, Horizon only provides three IPv6 address mode combinations on
the create network/subnet modal: slaac/slaac,
DHCPv6-stateless/DHCPv6-stateless, and DHCPv6-stateful/DHCPv6-stateless.
What's missing is none/slaac, for using an external router for routing.

See the section "Using SLAAC for addressing" at
http://docs.openstack.org/newton/networking-guide/config-ipv6.html
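For reference, the missing none/slaac combination corresponds to leaving ipv6_ra_mode unset (an external router sends the router advertisements) while setting ipv6_address_mode to slaac; roughly, with the CLI (network name, subnet name, and prefix are invented):

```
openstack subnet create --network ext-net --ip-version 6 \
  --ipv6-address-mode slaac --subnet-range 2001:db8::/64 ipv6-slaac-subnet
```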

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yingzuo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yingzuo)

** Description changed:

  Currently, Horizon only provides three ipv6 address modes on create
  network/subnet modal. They are slaac/slaac,
  DHCPv6-stateless/DHCPv6-stateless, DHCPv6-stateful/DHCPv6-stateless.
  What's missing is none/slaac for using an external Router for routing.
  
+ See section Using SLAAC for addressing on
  http://docs.openstack.org/newton/networking-guide/config-ipv6.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1632884

Title:
  Missing a slaac ipv6 address mode

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently, Horizon only provides three IPv6 address mode combinations on
  the create network/subnet modal: slaac/slaac,
  DHCPv6-stateless/DHCPv6-stateless, and DHCPv6-stateful/DHCPv6-stateless.
  What's missing is none/slaac, for using an external router for routing.

  See the section "Using SLAAC for addressing" at
  http://docs.openstack.org/newton/networking-guide/config-ipv6.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1632884/+subscriptions



[Yahoo-eng-team] [Bug 1632723] Re: New WebOb minimum version requirement of >=1.6.1

2016-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/385499
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=41adb9f0394cd3115e620def803d18a1719791c9
Submitter: Jenkins
Branch: master

commit 41adb9f0394cd3115e620def803d18a1719791c9
Author: Matt Riedemann 
Date:   Wed Oct 12 10:25:09 2016 -0400

Require WebOb>=1.6.0

Nova change 4e923eb9a660593b8a7d2522992700182978a54c started
using the json_formatter kwarg which was introduced in WebOb
1.6.0:


https://github.com/Pylons/webob/commit/87c8749a57c1ff2442db2d74d9fb86935b7b201e

So we need to raise the minimum required version for nova to use.

Change-Id: Ia778a11afb03b6d4b57dbd55a801a5a28b10541d
Depends-On: I2bbad0c059cc514ba0be1d42c061056a342caadc
Closes-Bug: #1632723


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632723

Title:
  New WebOb minimum version requirement of >=1.6.1

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  Description
  ===
  I385c36e0af1a8a785c02e21ba4efa6046cde6366 introduced a new requirement of 
WebOb>=1.6.1 that has not been reflected in requirements.txt either globally or 
within Nova.
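In practice the fix is a one-line version-floor bump; an illustrative requirements.txt fragment (note that the merged commit above raised the floor to >=1.6.0, while the bug title says >=1.6.1):

```
# requirements.txt (illustrative): the json_formatter kwarg first
# appeared in WebOb 1.6.0
WebOb>=1.6.0
```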

  Steps to reproduce
  ==
  # tox -e py27 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
  [..]
  Slowest Tests
  Test id   
  Runtime (s)
  
--
  ---
  
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
  0.130

  ==
  Totals
  ==
  Ran: 1 tests in 15. sec.
   - Passed: 1
   - Skipped: 0
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 0
  Sum of execute time for each test: 0.1295 sec.

  ==
  Worker Balance
  ==
   - Worker 0 (1 tests) => 0:00:00.129534
  [..]
  # . .tox/py27/bin/activate
  (py27)# pip list | grep -i webob
  WebOb (1.6.1)
  (py27)# pip install WebOb==1.2.3
  Collecting WebOb==1.2.3
Downloading WebOb-1.2.3.tar.gz (191kB)
  100% || 194kB 319kB/s 
  Building wheels for collected packages: WebOb
Running setup.py bdist_wheel for WebOb ... done
Stored in directory: 
/home/lyarwood/.cache/pip/wheels/41/d1/c9/fd5b1a17465c81580c3b5c8876a4611c8c677b81a94dad8f72
  Successfully built WebOb
  Installing collected packages: WebOb
Found existing installation: WebOb 1.6.1
  Uninstalling WebOb-1.6.1:
Successfully uninstalled WebOb-1.6.1
  Successfully installed WebOb-1.2.3
  (py27)# deactivate 
  # tox -e py27 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
  [..]
  {0} 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
 [0.133657s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/api/openstack/placement/test_util.py", line 229, 
in test_fail_no_content_type
  self.handler, req)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 485, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 496, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 547, in _matchHelper
  mismatch = matcher.match(matchee)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
  mismatch = matcher.match(matchee)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 475, in match
  reraise(*matchee)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
  result = matchee()
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 1049, in __call__
  return self._callable_object(

[Yahoo-eng-team] [Bug 1632877] [NEW] [RFE] Limits and Counts for SecGroup and FIPs

2016-10-12 Thread Ankur
Public bug reported:

[Problem]

As stated in a recently submitted bug against the OpenStack Client:
openstack limits show --absolute shows the wrong count for 
'totalSecurityGroupsUsed'. Despite creating multiple security groups, the 
count still shows 1.

openstack security group create  uses the neutron API to create a security 
group, while openstack limits show --absolute fetches its information from the 
nova API. Since nova-network has been deprecated and a current devstack 
installation runs with neutron by default, it's better to change the way 
openstack limits show fetches its information.

[Proposal]

Similar to the Mitaka IP capacity feature, provide an admin-only feature
to return the number of security groups, security group rules, and floating
IPs used, along with their absolute limits.

[References]
Original OSC bug
https://bugs.launchpad.net/python-openstackclient/+bug/1632460

Paste of difference in nova limits support.
http://paste.openstack.org/show/585516/

Paste sample return of "openstack limits show --absolute"
http://paste.openstack.org/show/585518/

** Affects: neutron
 Importance: Undecided
 Assignee: Ankur (ankur-gupta-f)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Ankur (ankur-gupta-f)

** Summary changed:

- Limits and Counts for SecGroup and FIPs
+ [RFE] Limits and Counts for SecGroup and FIPs

** Tags added: rfe

** Description changed:

  [Problem]
  
  As stated in recently submitted bug to OpenStack Client:
  OpenStack limits --absolute shows wrong count for 'totalSecurityGroupsUsed'. 
Despite creating multiple security groups still, the count shows as 1.
  
  openstack security group create  uses neutron API to create security 
group
  openstack limits show --absolute fetches the information from nova api. Since 
nova-network has been deprecated and current devstack installation runs with 
neutron as default, it's better to change the way how how openstack limits show 
fetches its information
  
  [Proposal]
  
  Similar to Mitaka feature of IP Capacity. Provide an admin only feature
  to return the number of Security Groups, Security Group Rules, Floating
  IPs used and their absolute limits.
  
- 
  [References]
  Original OSC bug
- https://bugs.launchpad.net/python-openstackclient/+bug/1632460 
+ https://bugs.launchpad.net/python-openstackclient/+bug/1632460
  
- Paste of difference in nova limits support. 
+ Paste of difference in nova limits support.
  http://paste.openstack.org/show/585516/
  
  Paste sample return of "openstack limits show --absolute"
- http://paste.openstack.org/show/585517/
+ http://paste.openstack.org/show/585518/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632877

Title:
  [RFE] Limits and Counts for SecGroup and FIPs

Status in neutron:
  New

Bug description:
  [Problem]

  As stated in a recently submitted bug against the OpenStack Client:
  openstack limits show --absolute shows the wrong count for 
'totalSecurityGroupsUsed'. Despite creating multiple security groups, the 
count still shows 1.

  openstack security group create  uses the neutron API to create a security 
group, while openstack limits show --absolute fetches its information from the 
nova API. Since nova-network has been deprecated and a current devstack 
installation runs with neutron by default, it's better to change the way 
openstack limits show fetches its information.

  [Proposal]

  Similar to the Mitaka IP capacity feature, provide an admin-only
  feature to return the number of security groups, security group rules,
  and floating IPs used, along with their absolute limits.

  [References]
  Original OSC bug
  https://bugs.launchpad.net/python-openstackclient/+bug/1632460

  Paste of difference in nova limits support.
  http://paste.openstack.org/show/585516/

  Paste sample return of "openstack limits show --absolute"
  http://paste.openstack.org/show/585518/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1632877/+subscriptions



[Yahoo-eng-team] [Bug 1475652] Re: libvirt, rbd imagebackend, disk.rescue not deleted when unrescued

2016-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/314928
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c12d388070895e40be19f4f4e5fded736a5376be
Submitter: Jenkins
Branch: master

commit c12d388070895e40be19f4f4e5fded736a5376be
Author: Bartek Zurawski 
Date:   Tue May 10 17:31:19 2016 +0200

Fix issue with not removing rbd rescue disk

Currently when instance that use RBD as backend
is rescued and next unrescued, rescue image is
not removed, this cause issue when the same
instance is rescued again it's use old rescue
image not new one.

Change-Id: Idf4086303baa4b936c90be89552ad8deb45cef3a
Closes-Bug: #1475652


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475652

Title:
  libvirt, rbd imagebackend, disk.rescue not deleted when unrescued

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Reproduced on the Juno version (actually tested on a fork of 2014.2.3;
  apologies in advance if invalid, but I think the legacy version is also
  affected).

  Not tested on newer versions, but looking at the code they seem
  impacted too.

  For the Rbd image backend only, when unrescuing an instance the disk.rescue
  file is not actually deleted on remote storage (only the rbd session
  is destroyed).

  Consequence: when rescuing the instance again, it simply ignores the
  new rescue image and takes the old _disk.rescue image.

  Reproduce:

  1. nova rescue instance

  (Take care that you are booted into the vda rescue disk: when rescuing
  an instance from the same image it was spawned from (the default case),
  the filesystem UUID is the same, so depending on your image's fstab
  (UUID= entries) you can actually boot from the image you are trying to
  rescue. That is a separate matter concerning template building; see
  https://bugs.launchpad.net/nova/+bug/1460536)

  Edit the rescue image disk.

  2. nova unrescue instance

  3. nova rescue instance -> you get back the disk.rescue spawned in 1

  if confirmed, fix coming soon
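Between steps 2 and 3, the leftover image can be observed directly (the pool name is hypothetical; the rescue image is named after the instance UUID with a disk.rescue suffix):

```
# With the bug present, this still lists the rescue image after unrescue:
rbd --pool vms ls | grep 'disk.rescue'
```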

  Concerning the fix, there are several possibilities:
  - nova.virt.libvirt.driver:LibvirtDriver -> the unrescue method does not 
delete the correct files
  or
  - nova.virt.libvirt.imagebackend:Rbd -> erase disk.rescue in the create-image 
method if it already exists

  Rebuild is not affected by this issue; deleting an instance correctly
  deletes its files on remote storage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475652/+subscriptions



[Yahoo-eng-team] [Bug 1632856] [NEW] Incorrect datatype for Python 3 in api-samples functional test

2016-10-12 Thread Ed Leafe
Public bug reported:

The file nova/tests/functional/api_sample_tests/test_servers.py contains the
ServersSampleBase class, which in its class definition creates user data
by base64-encoding a string. However, this will not work in Python 3, as
the base64.b64encode() method requires bytes, not a string.

This can be seen by simply running 'tox -e functional' under Python 3,
which then emits a series of errors, most of which look like:

Failed to import test module: 
nova.tests.functional.api_sample_tests.test_servers
Traceback (most recent call last):
  File 
"/home/ed/projects/nova/.tox/functional/lib/python3.4/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/home/ed/projects/nova/.tox/functional/lib/python3.4/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File 
"/home/ed/projects/nova/nova/tests/functional/api_sample_tests/test_servers.py",
 line 24, in 
class ServersSampleBase(api_sample_base.ApiSampleTestBaseV21):
  File 
"/home/ed/projects/nova/nova/tests/functional/api_sample_tests/test_servers.py",
 line 29, in ServersSampleBase
user_data = base64.b64encode(user_data_contents)
  File "/home/ed/projects/nova/.tox/functional/lib/python3.4/base64.py", line 
62, in b64encode
encoded = binascii.b2a_base64(s)[:-1]
TypeError: 'str' does not support the buffer interface


This was reported in https://bugs.launchpad.net/nova/+bug/1632521, and a fix 
was issued that simply forced tox to use py27.
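The type error and the usual fix can be reproduced in a few lines (the sample user-data string below is invented):

```python
# Under Python 3, base64.b64encode() requires bytes, so a str must be
# encoded first; passing a str raises TypeError.
import base64

user_data_contents = "#!/bin/bash\necho hello\n"

try:
    base64.b64encode(user_data_contents)  # TypeError on Python 3: str, not bytes
except TypeError:
    pass

# Portable fix: encode to bytes first; decode the result if a str is needed.
user_data = base64.b64encode(user_data_contents.encode("utf-8"))
assert base64.b64decode(user_data) == user_data_contents.encode("utf-8")
```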

** Affects: nova
 Importance: Undecided
 Assignee: Ed Leafe (ed-leafe)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632856

Title:
  Incorrect datatype for Python 3 in api-samples functional test

Status in OpenStack Compute (nova):
  New

Bug description:
  The file nova/tests/functional/api_sample_tests/test_servers.py contains
  the ServersSampleBase class, which in its class definition creates user
  data by base64-encoding a string. However, this will not work in
  Python 3, as the base64.b64encode() method requires bytes, not a
  string.

  This can be seen by simply running 'tox -e functional' under Python 3,
  which then emits a series of errors, most of which look like:

  Failed to import test module: 
nova.tests.functional.api_sample_tests.test_servers
  Traceback (most recent call last):
File 
"/home/ed/projects/nova/.tox/functional/lib/python3.4/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/ed/projects/nova/.tox/functional/lib/python3.4/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File 
"/home/ed/projects/nova/nova/tests/functional/api_sample_tests/test_servers.py",
 line 24, in 
  class ServersSampleBase(api_sample_base.ApiSampleTestBaseV21):
File 
"/home/ed/projects/nova/nova/tests/functional/api_sample_tests/test_servers.py",
 line 29, in ServersSampleBase
  user_data = base64.b64encode(user_data_contents)
File "/home/ed/projects/nova/.tox/functional/lib/python3.4/base64.py", line 
62, in b64encode
  encoded = binascii.b2a_base64(s)[:-1]
  TypeError: 'str' does not support the buffer interface

  
  This was reported in https://bugs.launchpad.net/nova/+bug/1632521, and a fix 
was issued that simply forced tox to use py27.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632852] [NEW] placement api responses should not be cacheable

2016-10-12 Thread Chris Dent
Public bug reported:

In version 1.0 of the placement API, responses are sent without any
cache-busting headers, which means they may be cached by the user agent;
whether they are is not predictable.

Caching of resource provider responses is not desired, so it would be good
to send cache headers that ensure responses are not cached.

This old document remains the bizness for learning how to do such
things: https://www.mnot.net/cache_docs/
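
The fix could be as small as a middleware that stamps cache-busting headers
onto every response. The sketch below uses plain WSGI with header values
following the mnot.net guidance; it is illustrative, not the actual
placement API code, and the function names are hypothetical.

```python
# Headers that tell user agents and intermediaries not to cache responses.
NO_CACHE_HEADERS = [
    ("Cache-Control", "no-cache, no-store, must-revalidate"),
    ("Pragma", "no-cache"),  # honoured by HTTP/1.0 intermediaries
]

def no_cache_middleware(app):
    """Wrap a WSGI app so that no response it returns is cacheable."""
    def middleware(environ, start_response):
        def start_response_with_headers(status, headers, exc_info=None):
            # Append the cache-busting headers to whatever the app set.
            return start_response(
                status, list(headers) + NO_CACHE_HEADERS, exc_info)
        return app(environ, start_response_with_headers)
    return middleware
```

Wrapping the placement application once at startup would then cover every
route without touching individual handlers.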

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632852

Title:
  placement api responses should not be cacheable

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  In version 1.0 of the placement API, responses are sent without any
  cache-busting headers, which means they may be cached by the user
  agent; whether they are is not predictable.

  Caching of resource provider responses is not desired, so it would be
  good to send cache headers that ensure responses are not cached.

  This old document remains the bizness for learning how to do such
  things: https://www.mnot.net/cache_docs/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632820] [NEW] os-server-groups policy doesn't separate CRUD actions

2016-10-12 Thread Matthew Edmonds
Public bug reported:

nova.api.openstack.compute.server_groups.ServerGroupController uses the
same policy check (os_compute_api:os-server-groups) for show, delete,
index, and create, instead of separating these into per-action checks
(e.g. os_compute_api:os-server-groups:delete). This makes it impossible
to customize policy so that some roles may perform some but not all of
these operations, e.g. show/index server groups but not create/delete
them.

Found with Newton.
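
With per-action checks registered, an operator's policy.json could grant
read-only access like the fragment below. The rule names are hypothetical,
modeled on other nova APIs that already separate actions; they are not yet
what nova ships.

```json
{
    "os_compute_api:os-server-groups:index": "rule:admin_or_owner",
    "os_compute_api:os-server-groups:show": "rule:admin_or_owner",
    "os_compute_api:os-server-groups:create": "rule:admin_api",
    "os_compute_api:os-server-groups:delete": "rule:admin_api"
}
```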

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632820

Title:
  os-server-groups policy doesn't separate CRUD actions

Status in OpenStack Compute (nova):
  New

Bug description:
  nova.api.openstack.compute.server_groups.ServerGroupController uses
  the same policy check (os_compute_api:os-server-groups) for show,
  delete, index, and create, instead of separating these into per-action
  checks (e.g. os_compute_api:os-server-groups:delete). This makes it
  impossible to customize policy so that some roles may perform some but
  not all of these operations, e.g. show/index server groups but not
  create/delete them.

  Found with Newton.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495701] Re: Sometimes Cinder volumes fail to attach with error "The device is not writable: Permission denied"

2016-10-12 Thread Sean McGinnis
** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: os-brick
   Status: In Progress => Fix Released

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495701

Title:
  Sometimes Cinder volumes fail to attach with error "The device is not
  writable: Permission denied"

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  Fix Released

Bug description:
  This is happening on the latest master branch in CI systems. It
  happens very rarely in the gate:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImxpYnZpcnRFcnJvcjogb3BlcmF0aW9uIGZhaWxlZDogb3BlbiBkaXNrIGltYWdlIGZpbGUgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDIyNjY3MDU1NzZ9

  And on some third party CI systems (not included in the logstash
  results):

  http://ec2-52-8-200-217.us-
  west-1.compute.amazonaws.com/28/216728/5/check/PureFCDriver-tempest-
  dsvm-volume-
  multipath/bd3618d/logs/libvirt/libvirtd.txt.gz#_2015-09-14_09_00_44_829

  When the error occurs there is a stack trace in the n-cpu log like
  this:

  http://logs.openstack.org/22/222922/2/check/gate-tempest-dsvm-full-
  lio/550be5e/logs/screen-n-cpu.txt.gz?level=DEBUG#_2015-09-13_17_34_07_787

  2015-09-13 17:34:07.787 ERROR nova.virt.libvirt.driver 
[req-4ac04f97-f468-466a-9fb2-02d1df3a5633 
tempest-TestEncryptedCinderVolumes-1564844141 
tempest-TestEncryptedCinderVolumes-804461249] [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] Failed to attach volume at mountpoint: 
/dev/vdb
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] Traceback (most recent call last):
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1115, in attach_volume
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] guest.attach_device(conf, 
persistent=True, live=live)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 233, in attach_device
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] 
self._domain.attachDeviceFlags(conf.to_xml(), flags=flags)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] rv = execute(f, *args, **kwargs)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] six.reraise(c, e, tb)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] rv = meth(*args, **kwargs)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 517, in 
attachDeviceFlags
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] if ret == -1: raise libvirtError 
('virDomainAttachDeviceFlags() failed', dom=self)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] libvirtError: operation failed: open disk 
image file failed
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]

  and a corresponding error in the libvirt log such as this:

  http://logs.openstack.org/22/222922/2/check/gate-tempest-dsvm-full-
  lio/550be5e/logs/libvirt/libvirtd.txt.gz#_2015-09-13_17_34_07_499

  2015-09-13 17

[Yahoo-eng-team] [Bug 1592043] Re: os-brick 1.4.0 increases volume setup failure rates

2016-10-12 Thread Sean McGinnis
Cinder related change: https://review.openstack.org/#/c/331973/

** Changed in: os-brick
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592043

Title:
  os-brick 1.4.0 increases volume setup failure rates

Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Invalid
Status in oslo.privsep:
  New

Bug description:
  Since os-brick 1.4.0 was merged into upper-constraints, the multinode
  grenade jobs have been hitting a nearly 1/3 failure rate on
  boot-from-volume scenarios around volume setup. This is Newton code
  running with Mitaka configs.

  Representative failures are of the following form:
  http://logs.openstack.org/71/327971/5/gate/gate-grenade-dsvm-neutron-
  
multinode/f2690e3/logs/new/screen-n-cpu.txt.gz?level=WARNING#_2016-06-13_15_22_59_095

  The 1/3 failure rate is suspicious, and in the past has often hinted
  towards a race condition interacting between parallel API requests.

  The failure rate increase can be seen here -
  http://tinyurl.com/zrq35e8

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1592043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618430] Re: IptablesFwaasDriver could hang neutron l3-agent

2016-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/384943
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=b80de376d484e5cd1eb09df4c0720e52f3f29742
Submitter: Jenkins
Branch: master

commit b80de376d484e5cd1eb09df4c0720e52f3f29742
Author: Yann Morice 
Date:   Tue Oct 11 13:34:00 2016 +0200

Deal with the '-m protocol' flag in iptables FwAAS v1 and v2

Iptables automatically adds a '-m protocol' flag for rules containing a
source or a destination port. FWaaS does not add this flag, so on apply
the rules always differ from the iptables-save output. This induces a
very long loop in the neutron-l3-agent hosting the router, as each
compared line is just slightly different.

This patch adds the '-m protocol' flag before the port.

Closes-Bug: #1618430
Change-Id: Ia3fa3889dbf3ee10425e7e7fce8a3b8351f14e60


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618430

Title:
  IptablesFwaasDriver could hang neutron l3-agent

Status in neutron:
  Fix Released

Bug description:
  Hello!

  OVSHybridIptablesFirewallDriver completely hangs the neutron l3-agent
  for a very long time when a large number of firewall rules is used
  (400 in our case). After debugging, it appears that we are stuck in
  neutron/agent/linux/iptables_manager.py in the function
  _generate_chain_diff_iptables_commands, exactly at the line "for line
  in difflib.ndiff(old_chain_rules, new_chain_rules):".
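
  The pathology can be reproduced in isolation: when every applied rule
  differs from the iptables-save output only by a missing '-m tcp' module
  flag, every line pair lands in SequenceMatcher's expensive "fancy
  replace" path. The rule strings below are simplified illustrations, not
  real FWaaS output.

```python
import difflib

n = 50  # the report mentions ~400 rules; kept small here

# Rules as iptables-save reports them (with the module flag) ...
old_chain_rules = [
    "-A fw -p tcp -m tcp --dport %d -j ACCEPT" % p for p in range(n)
]
# ... versus the rules FWaaS generates (without the module flag).
new_chain_rules = [
    "-A fw -p tcp --dport %d -j ACCEPT" % p for p in range(n)
]

diff = list(difflib.ndiff(old_chain_rules, new_chain_rules))
# Every rule is reported as changed even though only the flag differs,
# so the agent rewrites the entire chain on every apply.
changed = [line for line in diff if line[:2] in ("- ", "+ ")]
```

  With the '-m protocol' flag added by the fix, the two rule lists become
  textually identical and ndiff degenerates to the cheap all-equal case.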

  Example trace:

  iptables_manager.py(784): for line in difflib.ndiff(old_chain_rules, 
new_chain_rules):
   --- modulename: difflib, funcname: compare
  difflib.py(922): for line in g:
   --- modulename: difflib, funcname: _dump
  difflib.py(927): for i in xrange(lo, hi):
  difflib.py(910): for tag, alo, ahi, blo, bhi in 
cruncher.get_opcodes():
  difflib.py(911): if tag == 'replace':
  difflib.py(912): g = self._fancy_replace(a, alo, ahi, b, blo, 
bhi)
  difflib.py(922): for line in g:
   --- modulename: difflib, funcname: _fancy_replace
  difflib.py(966): best_ratio, cutoff = 0.74, 0.75
  difflib.py(967): cruncher = SequenceMatcher(self.charjunk)
   --- modulename: difflib, funcname: __init__
  difflib.py(218): self.isjunk = isjunk
  difflib.py(219): self.a = self.b = None
  difflib.py(220): self.autojunk = autojunk
  difflib.py(221): self.set_seqs(a, b)
   --- modulename: difflib, funcname: set_seqs
  difflib.py(232): self.set_seq1(a)
   --- modulename: difflib, funcname: set_seq1
  difflib.py(256): if a is self.a:
  difflib.py(258): self.a = a
  difflib.py(259): self.matching_blocks = self.opcodes = None
  difflib.py(233): self.set_seq2(b)
   --- modulename: difflib, funcname: set_seq2
  difflib.py(282): if b is self.b:
  difflib.py(284): self.b = b
  difflib.py(285): self.matching_blocks = self.opcodes = None
  difflib.py(286): self.fullbcount = None
  difflib.py(287): self.__chain_b()
   --- modulename: difflib, funcname: __chain_b
  difflib.py(317): b = self.b
  difflib.py(318): self.b2j = b2j = {}
  difflib.py(320): for i, elt in enumerate(b):
  difflib.py(325): junk = set()
  difflib.py(326): isjunk = self.isjunk
  difflib.py(327): if isjunk:
  difflib.py(328): for elt in list(b2j.keys()):  # using list() 
since b2j is modified
  difflib.py(334): popular = set()
  difflib.py(335): n = len(b)
  difflib.py(336): if self.autojunk and n >= 200:
  difflib.py(347): self.isbjunk = junk.__contains__
  difflib.py(348): self.isbpopular = popular.__contains__
  difflib.py(968): eqi, eqj = None, None   # 1st indices of equal lines 
(if any)
  difflib.py(973): for j in xrange(blo, bhi):
  difflib.py(974): bj = b[j]
  difflib.py(975): cruncher.set_seq2(bj)
   --- modulename: difflib, funcname: set_seq2
  difflib.py(282): if b is self.b:
  difflib.py(284): self.b = b

  [...]

  difflib.py(418): for j in b2j.get(a[i], nothing):
  difflib.py(420): if j < blo:
  difflib.py(422): if j >= bhi:
  difflib.py(424): k = newj2len[j] = j2lenget(j-1, 0) + 1
  difflib.py(425): if k > bestsize:
  difflib.py(418): for j in b2j.get(a[i], nothing):
  difflib.py(420): if j < blo:
  difflib.py(422): if j >= bhi:
  difflib.py(424): k = newj2len[j] = j2lenget(j-1, 0) + 1
  difflib.py(425): if k > bestsize:
  difflib.py(418): for j in b2j.get(a[i], nothing):
  difflib.py(420): if j < blo:
  difflib.py(422):   

[Yahoo-eng-team] [Bug 1612192] Re: L3 DVR: Unable to complete operation on subnet

2016-10-12 Thread Brian Haley
I don't see this in logstash any more, unless it's broken.  I'll close
but please re-open if seen again.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612192

Title:
  L3 DVR: Unable to complete operation on subnet

Status in neutron:
  Invalid

Bug description:
  There is a new gate failure that can be found using the following
  logstash query:

  message:"One or more ports have an IP allocation from this subnet" &&
  filename:"console.html" && build_queue:"gate"

  This seems to be specific to DVR jobs and is separate from [1] (see
  comment #7 on that bug report).

  [1]: https://bugs.launchpad.net/neutron/+bug/1562878

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632768] [NEW] rootwrap daemon with libvirt/xen not working

2016-10-12 Thread Thomas Bechtold
Public bug reported:

Using:
- SLE12SP1
- xen 4.7
- nova 13.1.2.dev68 (stable-mitaka tarball)


When configuring nova-compute to use the rootwrap daemon, with Xen and 
libvirt as the hypervisor, I get the following error when booting a VM:

2016-10-12 15:54:34.216 17936 INFO nova.compute.claims 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] Claim successful
2016-10-12 15:54:34.458 17936 INFO nova.virt.osinfo 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Cannot load Libosinfo: (No module named 
Libosinfo)
2016-10-12 15:54:34.479 17936 WARNING oslo_config.cfg 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Option "username" from group "neutron" 
is deprecated. Use option "user-name" from group "neutron".
2016-10-12 15:54:34.751 17936 INFO nova.virt.libvirt.driver 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] Creating image
2016-10-12 15:54:34.758 17936 INFO nova.utils 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Executing RootwrapDaemonHelper.execute 
cmd=[u'touch -c 
/var/lib/nova/instances/_base/309100c6d00d13edba007a0dde00e9889ce0410a'] 
kwargs=[{'run_as_root': True}]
2016-10-12 15:54:34.795 17936 INFO oslo_rootwrap.client 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Spawned new rootwrap daemon process 
with pid=17984
2016-10-12 15:54:36.060 17936 INFO nova.utils 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] Executing RootwrapDaemonHelper.execute 
cmd=[u'xend status'] kwargs=[{'run_as_root': True}]
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager 
[req-5f1c8974-b449-41fa-806b-73f705ed1634 47640c082746419f87ae498f7bdab44e 
08f3d1224d1845cda767a2193594c3d7 - - -] [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] Instance failed to spawn
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] Traceback (most recent call last):
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in 
_build_resources
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] yield resources
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in 
_build_and_run_instance
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] block_device_info=block_device_info)
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2790, in 
spawn
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] write_to_disk=True)
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4746, in 
_get_guest_xml
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] context)
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4605, in 
_get_guest_config
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] flavor, guest.os_type)
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3640, in 
_get_guest_storage_config
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] inst_type)
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3593, in 
_get_guest_disk_config
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5] self._host.get_version())
2016-10-12 15:54:36.062 17936 ERROR nova.compute.manager [instance: 
98dbdd8b-ae17-4861-bffe-cc48042f93e5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 1
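
The failure occurs right after the daemon is asked to run `xend status`.
One thing worth checking (a sketch, not a confirmed fix; the thread head
suggests the real bug is oslo.rootwrap's handling of unknown commands in
daemon mode) is that the compute rootwrap filters actually allow the
command, e.g. in a file such as /etc/nova/rootwrap.d/compute.filters (the
file path and filter name below are assumptions):

```ini
[Filters]
# Illustrative CommandFilter entry allowing nova to run 'xend' as root.
xend: CommandFilter, xend, root
```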

[Yahoo-eng-team] [Bug 1279611] Re: urlparse is incompatible with Python 3

2016-10-12 Thread Sean McGinnis
** Changed in: python-cinderclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279611

Title:
  urlparse is incompatible with Python 3

Status in Astara:
  Fix Committed
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in gce-api:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-doc-tools:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in RACK:
  Fix Committed
Status in Sahara:
  Fix Released
Status in Solar:
  Invalid
Status in storyboard:
  Fix Committed
Status in surveil:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in swift-bench:
  Fix Committed
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in vmware-nsx:
  Fix Committed
Status in zaqar:
  Fix Released
Status in Zuul:
  Fix Committed

Bug description:
  import urlparse

  should be changed to:
  import six.moves.urllib.parse as urlparse

  for Python 3 compatibility.
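
  A sketch of the compatible pattern in use; a try/except fallback is
  shown here for environments where six is not installed (the fallback is
  an addition for illustration, not part of the prescribed fix):

```python
try:
    # Works on both Python 2 and 3 via six.
    import six.moves.urllib.parse as urlparse
except ImportError:
    # Python 3 stdlib location, for environments without six.
    import urllib.parse as urlparse

# The module exposes the same API either way.
parts = urlparse.urlparse("http://9.5.127.82:9292/v2/")
```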

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1279611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632742] [NEW] /v2 route doesn't exist

2016-10-12 Thread Matt Riedemann
Public bug reported:

Looking at the api-ref docs, I'm able to list versions of the image API
available in a cloud (I'm using a devstack created from master as of
last week):

http://developer.openstack.org/api-
ref/image/versions/index.html?expanded=id1-detail#id1

stack@osc:/opt/stack/glance$ git log -1
commit 9bd264cd034f996852372ae0ca988bd67b98cf9a
Merge: 2de3caf ce6cb2d
Author: Jenkins 
Date:   Tue Oct 4 02:28:10 2016 +

Merge "[api-ref] configure LogABug feature"
stack@osc:/opt/stack/glance$


I'm able to list versions for the image endpoint:

stack@osc:/opt/stack/glance$ curl -s -H "X-Auth-Token: $OS_TOKEN" 
http://9.5.127.82:9292 | json_pp
{
   "versions" : [
  {
 "id" : "v2.4",
 "status" : "CURRENT",
 "links" : [
{
   "rel" : "self",
   "href" : "http://9.5.127.82:9292/v2/";
}
 ]
  },
  {
 "links" : [
{
   "rel" : "self",
   "href" : "http://9.5.127.82:9292/v2/";
}
 ],
 "id" : "v2.3",
 "status" : "SUPPORTED"
  },
  {
 "links" : [
{
   "rel" : "self",
   "href" : "http://9.5.127.82:9292/v2/";
}
 ],
 "status" : "SUPPORTED",
 "id" : "v2.2"
  },
  {
 "id" : "v2.1",
 "status" : "SUPPORTED",
 "links" : [
{
   "rel" : "self",
   "href" : "http://9.5.127.82:9292/v2/";
}
 ]
  },
  {
 "status" : "SUPPORTED",
 "id" : "v2.0",
 "links" : [
{
   "href" : "http://9.5.127.82:9292/v2/";,
   "rel" : "self"
}
 ]
  },
  {
 "links" : [
{
   "href" : "http://9.5.127.82:9292/v1/";,
   "rel" : "self"
}
 ],
 "status" : "DEPRECATED",
 "id" : "v1.1"
  },
  {
 "links" : [
{
   "rel" : "self",
   "href" : "http://9.5.127.82:9292/v1/";
}
 ],
 "id" : "v1.0",
 "status" : "DEPRECATED"
  }
   ]
}
stack@osc:/opt/stack/glance$


I'm able to list the v1 route, which just returns a list of images:

stack@osc:/opt/stack/glance$ curl -s -H "X-Auth-Token: $OS_TOKEN" 
http://9.5.127.82:9292/v1/ | json_pp
   {
   "images" : [
  {
 "size" : 25165824,
 "name" : "cirros-0.3.4-x86_64-uec",
 "id" : "c8af19ff-cebc-4112-a237-78dcd19e588c",
 "disk_format" : "ami",
 "checksum" : "eb9139e4942121f22bbc2afc0400b2a4",
 "container_format" : "ami"
  },
  {
 "disk_format" : "ari",
 "container_format" : "ari",
 "checksum" : "be575a2b939972276ef675752936977f",
 "size" : 3740163,
 "name" : "cirros-0.3.4-x86_64-uec-ramdisk",
 "id" : "ff195fc4-c039-43b5-acca-501aba68aba2"
  },
  {
 "size" : 4979632,
 "name" : "cirros-0.3.4-x86_64-uec-kernel",
 "id" : "08463073-3460-4b5f-92cc-ade974936e96",
 "disk_format" : "aki",
 "container_format" : "aki",
 "checksum" : "8a40c862b5735975d82605c1dd395796"
  }
   ]
}


But I'm not able to list the v2 route:

stack@osc:/opt/stack/glance$ curl -s -H "X-Auth-Token: $OS_TOKEN" 
http://9.5.127.82:9292/v2/
404 Not Found

The resource could not be found.

stack@osc:/opt/stack/glance$

** Affects: glance
 Importance: Undecided
 Status: Incomplete


** Tags: api documentation

** Tags added: api documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1632742

Title:
  /v2 route doesn't exist

Status in Glance:
  Incomplete

Bug description:
  Looking at the api-ref docs, I'm able to list versions of the image
  API available in a cloud (I'm using a devstack created from master as
  of last week):

  http://developer.openstack.org/api-
  ref/image/versions/index.html?expanded=id1-detail#id1

  stack@osc:/opt/stack/glance$ git log -1
  commit 9bd264cd034f996852372ae0ca988bd67b98cf9a
  Merge: 2de3caf ce6cb2d
  Author: Jenkins 
  Date:   Tue Oct 4 02:28:10 2016 +

  Merge "[api-ref] configure LogABug feature"
  stack@osc:/opt/stack/glance$

  
  I'm able to list versions for the image endpoint:

  stack@osc:/opt/stack/glance$ curl -s -H "X-Auth-Token: $OS_TOKEN" 
http://9.5.127.82:9292 | json_pp
  {
 "versions" : [
{
   "id" : "v2.4",
   "status" : "CURRENT",
   "links" : [
  {
 "rel" : "self",
 "href" : "http://9.5.127.82:9292/v2/";
  }
   ]
},
{
   "links" : [
  {
 "rel" : "self",
 "href" : "http://9.5.127.82

[Yahoo-eng-team] [Bug 1628301] Re: SR-IOV not working in Mitaka and Intel X series NIC

2016-10-12 Thread Bjoern Teipel
Closing this out: after updating the ixgbe and ixgbevf drivers I was able
to "attach" VF ports to nova instances.

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628301

Title:
  SR-IOV not working in Mitaka and Intel X series NIC

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The SR-IOV functionality in Mitaka seems broken; all configuration
  options we evaluated lead to

   NovaException: Unexpected vif_type=binding_failed

  errors; the stack trace follows.
  We are currently using this code base, along with SRIOV configuration posted 
here

  Nova SHA 611efbe77c712d9ac35904f659d28dd0f0c1b3ff # HEAD of "stable/mitaka" 
as of 08.09.2016
  Neutron SHA c73269fa480a8a955f440570fc2fa6c347e3bb3c # HEAD of 
"stable/mitaka" as of 08.09.2016

  Stack :

  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] Traceback (most recent call last):
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa]   File 
"/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/compute/manager.py",
 line 2218, in _build_resources
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] yield resources
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa]   File 
"/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/compute/manager.py",
 line 2064, in _build_and_run_instance
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] block_device_info=block_device_info)
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa]   File 
"/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 2776, in spawn
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] write_to_disk=True)
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa]   File 
"/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 4729, in _get_guest_xml
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] context)
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa]   File 
"/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 4595, in _get_guest_config
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] flavor, virt_type, self._host)
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa]   File 
"/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/virt/libvirt/vif.py",
 line 447, in get_config
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] _("Unexpected vif_type=%s") % 
vif_type)
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] NovaException: Unexpected 
vif_type=binding_failed
  2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 
00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa]

  Interestingly, the nova resource tracker seems to be able to create a
  list of all available SR-IOV devices, and they show up correctly inside
  the database as pci_device table entries:

  2016-09-27 16:13:52.175 10248 INFO nova.compute.resource_tracker 
[req-284a7832-3794-4597-b939-273ea75d45f7 - - - - -] Total usable vcpus: 32, 
total allocated vcpus: 0
  2016-09-27 16:13:52.175 10248 INFO nova.compute.resource_tracker 
[req-284a7832-3794-4597-b939-273ea75d45f7 - - - - -] Final resource view: 
name=compute01 phys_ram=25
  MB used_ram=2048MB phys_disk=1935GB used_disk=2GB total_vcpus=32 used_vcpus=0 
pci_stats=[PciDevicePool(count=15,numa_node=None,product_id='10ed',tags={dev_type='type-VF',physical_network='physnet1'},vendor
  _id='8086'), 
PciDevicePool(count=2,numa_node=None,product_id='10fb',tags={dev_type='type-PF',physical_network='physnet1'},vendor_id='8086')]

  Available ports inside DB:
  
  +-----------------+----------+------------+-----------+----------+--------------+-----------+
  | compute_node_id | address  | product_id | vendor_id | dev_type | dev_id       | status    |
  +-----------------+----------+------------+-----------+----------+--------------+-----------+
  |               5 | :88:10.1 | 10ed       | 8086      | type-VF  | pci__88_10_1 | available |
  |

[Yahoo-eng-team] [Bug 1632723] Re: New WebOb minimum version requirement of >=1.6.1

2016-10-12 Thread Matt Riedemann
Yeah the change in newton that added this:

https://review.openstack.org/#/c/352573/6/nova/api/openstack/placement/util.py@41

And that json_formatter kwarg was in webob 1.6.0:

https://github.com/Pylons/webob/commit/87c8749a57c1ff2442db2d74d9fb86935b7b201e
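For context, older WebOb rejects unknown keyword arguments to Response.__init__ with exactly the TypeError quoted in the report; a stdlib-only sketch of that pattern (the Response class below is a hypothetical stand-in, not WebOb's actual code):

```python
# Hypothetical stand-in for webob.response.Response on WebOb < 1.6.0,
# where json_formatter was not yet a recognized keyword argument.
class Response(object):
    def __init__(self, **kw):
        for name, value in kw.items():
            # Older WebOb raised exactly this for unrecognized kwargs.
            raise TypeError("Unexpected keyword: %s=%r" % (name, value))

try:
    Response(json_formatter=lambda body, status, title, env: body)
except TypeError as exc:
    print(exc)  # the kwarg is rejected before the response is built
```

On WebOb >= 1.6.0 the keyword is consumed instead of rejected, which is why the minimum-version bump matters.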

** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632723

Title:
  New WebOb minimum version requirement of >=1.6.1

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  Description
  ===
  I385c36e0af1a8a785c02e21ba4efa6046cde6366 introduced a new requirement of 
WebOb>=1.6.1 that has not been reflected in requirements.txt either globally or 
within Nova.

  Steps to reproduce
  ==
  # tox -e py27 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
  [..]
  Slowest Tests
  Test id   
  Runtime (s)
  
--
  ---
  
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
  0.130

  ==
  Totals
  ==
  Ran: 1 tests in 15. sec.
   - Passed: 1
   - Skipped: 0
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 0
  Sum of execute time for each test: 0.1295 sec.

  ==
  Worker Balance
  ==
   - Worker 0 (1 tests) => 0:00:00.129534
  [..]
  # . .tox/py27/bin/activate
  (py27)# pip list | grep -i webob
  WebOb (1.6.1)
  (py27)# pip install WebOb==1.2.3
  Collecting WebOb==1.2.3
Downloading WebOb-1.2.3.tar.gz (191kB)
  100% || 194kB 319kB/s 
  Building wheels for collected packages: WebOb
Running setup.py bdist_wheel for WebOb ... done
Stored in directory: 
/home/lyarwood/.cache/pip/wheels/41/d1/c9/fd5b1a17465c81580c3b5c8876a4611c8c677b81a94dad8f72
  Successfully built WebOb
  Installing collected packages: WebOb
Found existing installation: WebOb 1.6.1
  Uninstalling WebOb-1.6.1:
Successfully uninstalled WebOb-1.6.1
  Successfully installed WebOb-1.2.3
  (py27)# deactivate 
  # tox -e py27 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
  [..]
  {0} 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
 [0.133657s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/api/openstack/placement/test_util.py", line 229, 
in test_fail_no_content_type
  self.handler, req)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 485, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 496, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 547, in _matchHelper
  mismatch = matcher.match(matchee)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
  mismatch = matcher.match(matchee)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 475, in match
  reraise(*matchee)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
  result = matchee()
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 1049, in __call__
  return self._callable_object(*self._args, **self._kwargs)
File "nova/api/openstack/placement/util.py", line 131, in 
decorated_function
  json_formatter=json_error_formatter)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/webob/exc.py",
 line 263, in __init__
  **kw)
File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/webob/response.py",
 line 155, in __init__
  "Unexpected keyword: %s=%r" % (name, value))
  TypeError: Unexp

[Yahoo-eng-team] [Bug 1632723] [NEW] New WebOb minimum version requirement of >=1.6.1

2016-10-12 Thread Lee Yarwood
Public bug reported:

Description
===
I385c36e0af1a8a785c02e21ba4efa6046cde6366 introduced a new requirement of 
WebOb>=1.6.1 that has not been reflected in requirements.txt either globally or 
within Nova.

Steps to reproduce
==
# tox -e py27 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
[..]
Slowest Tests
Test id 
Runtime (s)
--
  ---
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
  0.130

==
Totals
==
Ran: 1 tests in 15. sec.
 - Passed: 1
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 0.1295 sec.

==
Worker Balance
==
 - Worker 0 (1 tests) => 0:00:00.129534
[..]
# . .tox/py27/bin/activate
(py27)# pip list | grep -i webob
WebOb (1.6.1)
(py27)# pip install WebOb==1.2.3
Collecting WebOb==1.2.3
  Downloading WebOb-1.2.3.tar.gz (191kB)
100% || 194kB 319kB/s 
Building wheels for collected packages: WebOb
  Running setup.py bdist_wheel for WebOb ... done
  Stored in directory: 
/home/lyarwood/.cache/pip/wheels/41/d1/c9/fd5b1a17465c81580c3b5c8876a4611c8c677b81a94dad8f72
Successfully built WebOb
Installing collected packages: WebOb
  Found existing installation: WebOb 1.6.1
Uninstalling WebOb-1.6.1:
  Successfully uninstalled WebOb-1.6.1
Successfully installed WebOb-1.2.3
(py27)# deactivate 
# tox -e py27 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
[..]
{0} 
nova.tests.unit.api.openstack.placement.test_util.TestRequireContent.test_fail_no_content_type
 [0.133657s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/api/openstack/placement/test_util.py", line 229, in 
test_fail_no_content_type
self.handler, req)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 485, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 496, in assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 547, in _matchHelper
mismatch = matcher.match(matchee)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
mismatch = self.exception_matcher.match(exc_info)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
mismatch = matcher.match(matchee)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 475, in match
reraise(*matchee)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
result = matchee()
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 1049, in __call__
return self._callable_object(*self._args, **self._kwargs)
  File "nova/api/openstack/placement/util.py", line 131, in 
decorated_function
json_formatter=json_error_formatter)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/webob/exc.py",
 line 263, in __init__
**kw)
  File 
"/home/lyarwood/redhat/devel/src/openstack/nova/.tox/py27/lib/python2.7/site-packages/webob/response.py",
 line 155, in __init__
"Unexpected keyword: %s=%r" % (name, value))
TypeError: Unexpected keyword: json_formatter=
[..]

Expected result
===

Actual result
=


Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

# git rev-parse HEAD
2669f1c73b7dee923c399729d95eee4d83c7ea56

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?
N/A

2. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?
N/A

3. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)
N/A

Logs & Configs
==
N/A

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because y

[Yahoo-eng-team] [Bug 1632633] Re: fix some word spelling error

2016-10-12 Thread Darek Smigiel
It's not even a bug. If there are any spelling errors, they can be fixed
without submitting a bug report.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632633

Title:
  fix some word spelling error

Status in neutron:
  Invalid

Bug description:
  There are some spelling errors in .py files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1632633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632539] Re: Multiple nova schedulers for Ironic and nova conflict when they work together in one Region

2016-10-12 Thread Dmitry Tantsur
Hi! This fully belongs in Nova, so moving it back. I don't have a
precise answer to your question, but nova-schedulers do not depend on
the exact compute backends. It is the nova-compute instances that are
backend-specific.

I guess your problem is with choosing between BM and VM nodes. You can
use, for example, host aggregates to distinguish between these. I
suggest you reach out to the Nova team for a better explanation.

** Project changed: ironic => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632539

Title:
  Multiple nova schedulers for Ironic and nova conflict when they  work
  together in one Region

Status in OpenStack Compute (nova):
  New

Bug description:
  When Ironic and nova work in one region, we should deploy multiple
  nova-scheduler processes: some for Ironic node scheduling and others
  for virtual machine scheduling. When we then call the REST API to boot
  an Ironic node, nova-conductor polls the message to multiple
  nova-schedulers and does not distinguish nova-schedulers for Ironic
  from nova-schedulers for virtual machines, so it can cause an
  exception for not selecting a valid host to boot.

  I am eager to solve this problem because this scenario happened in our
  deployment project.

  Anyone has good ideas?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632539/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632539] [NEW] Multiple nova schedulers for Ironic and nova conflict when they work together in one Region

2016-10-12 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When Ironic and nova work in one region, we should deploy multiple nova-
scheduler processes: some for Ironic node scheduling and others for
virtual machine scheduling. When we then call the REST API to boot an
Ironic node, nova-conductor polls the message to multiple nova-schedulers
and does not distinguish nova-schedulers for Ironic from nova-schedulers
for virtual machines, so it can cause an exception for not selecting a
valid host to boot.

I am eager to solve this problem because this scenario happened in our
deployment project.

Anyone has good ideas?

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic nova-scheduler
-- 
Multiple nova schedulers for Ironic and nova conflict when they  work together 
in one Region
https://bugs.launchpad.net/bugs/1632539
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579982] Re: Go to admin info error

2016-10-12 Thread Sean McGinnis
Fixed in python-cinderclient with
https://review.openstack.org/#/c/331596/

** Changed in: python-cinderclient
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1579982

Title:
  Go to admin info error

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in python-cinderclient:
  Fix Released

Bug description:
  I am using the OpenStack M (Mitaka) release. When I go to the
  /admin/info/ path, it shows 'TemplateSyntaxError at /admin/info/'.

  Browser show info:

  TemplateSyntaxError at /admin/info/
  service
  Request Method:   GET
  Request URL:  http://192.168.22.1:/admin/info/
  Django Version:   1.8.7
  Exception Type:   TemplateSyntaxError
  Exception Value:  
  service
  Exception Location:   
/usr/lib/python2.7/site-packages/cinderclient/openstack/common/apiclient/base.py
 in __getattr__, line 505
  Python Executable:/usr/bin/python2
  Python Version:   2.7.5
  Python Path:  
  ['/mnt/horizon_new',
   '/usr/lib64/python27.zip',
   '/usr/lib64/python2.7',
   '/usr/lib64/python2.7/plat-linux2',
   '/usr/lib64/python2.7/lib-tk',
   '/usr/lib64/python2.7/lib-old',
   '/usr/lib64/python2.7/lib-dynload',
   '/usr/lib64/python2.7/site-packages',
   '/usr/lib64/python2.7/site-packages/gtk-2.0',
   '/usr/lib/python2.7/site-packages',
   '/mnt/horizon_new/openstack_dashboard']

  Error during template rendering

  
  Console show info:

  Error while rendering table rows.
  Traceback (most recent call last):
File "/mnt/horizon_new/horizon/tables/base.py", line 1781, in get_rows
  row = self._meta.row_class(self, datum)
File "/mnt/horizon_new/horizon/tables/base.py", line 534, in __init__
  self.load_cells()
File "/mnt/horizon_new/horizon/tables/base.py", line 560, in load_cells
  cell = table._meta.cell_class(datum, column, self)
File "/mnt/horizon_new/horizon/tables/base.py", line 666, in __init__
  self.data = self.get_data(datum, column, row)
File "/mnt/horizon_new/horizon/tables/base.py", line 710, in get_data
  data = column.get_data(datum)
File "/mnt/horizon_new/horizon/tables/base.py", line 381, in get_data
  data = self.get_raw_data(datum)
File "/mnt/horizon_new/horizon/tables/base.py", line 363, in get_raw_data
  "%(obj)s.") % {'attr': self.transform, 'obj': datum}
File "/usr/lib/python2.7/site-packages/django/utils/functional.py", line 
178, in __mod__
  return six.text_type(self) % rhs
File "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line 
25, in __repr__
  return "<Service: %s>" % self.service
File 
"/usr/lib/python2.7/site-packages/cinderclient/openstack/common/apiclient/base.py",
 line 505, in __getattr__
  raise AttributeError(k)
  AttributeError: service
  Internal Server Error: /admin/info/
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/mnt/horizon_new/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/mnt/horizon_new/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
File "/mnt/horizon_new/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
File "/mnt/horizon_new/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
71, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
89, in dispatch
  return handler(request, *args, **kwargs)
File "/mnt/horizon_new/horizon/tabs/views.py", line 147, in get
  return self.handle_tabbed_response(context["tab_group"], context)
File "/mnt/horizon_new/horizon/tabs/views.py", line 68, in 
handle_tabbed_response
  return self.render_to_response(context)
File "/mnt/horizon_new/horizon/tabs/views.py", line 81, in 
render_to_response
  response.render()
File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
158, in render
  self.content = self.rendered_content
File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
135, in rendered_content
  content = template.render(context, self._request)
File "/usr/lib/python2.7/site-packages/django/template/backends/django.py", 
line 74, in render
  return self.template.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 210, 
in render
  return self._render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 202, 
in _render
  return self.nodelist.render(context)
File "/usr/lib/python

[Yahoo-eng-team] [Bug 1631432] Re: port-update fails if allowed_address_pair is not a dict

2016-10-12 Thread John Davidge
Confirmed, I'm seeing this error too. Thanks for the well-written bug
report!

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => Medium

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
   Status: New => Triaged

** Changed in: python-neutronclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631432

Title:
  port-update fails if allowed_address_pair is not a dict

Status in neutron:
  Triaged
Status in python-neutronclient:
  Triaged

Bug description:
  CLI help is misleading. Neutron port-update called with parameters
  according to documentation returns an error.

  neutron help port-update
..
--allowed-address-pair ip_address=IP_ADDR[,mac_address=MAC_ADDR]
  Allowed address pair associated with the port. You can
  repeat this option.

  # neutron port-update 3f36328f-0629-4e41-afa8-e2992815bcd0 
--allowed-address-pairs ip_address=10.0.0.1
  The number of allowed address pair exceeds the maximum 10.
  Neutron server returns request_ids: 
['req-62e258cc-d47d-4ab7-8e69-a13c50865042']

  Work correctly when specific data type is enforced:
  # neutron port-update 3f36328f-0629-4e41-afa8-e2992815bcd0 
--allowed-address-pairs type=dict list=true ip_address=10.0.0.1
  Updated port: 3f36328f-0629-4e41-afa8-e2992815bcd0

  It always should be a list of dict, even when only one pair is given.

  CLI doc should be corrected.

  Furthermore, the input data in neutron-server does not seem to be validated 
correctly. The reason for the misleading exception about an exceeded number of 
address pairs is an implicit test of the length of the user data. For a list of 
dicts it is the number of elements of the list, i.e. the number of address 
pairs. When only one pair is given as a plain string, it returns the length of 
the string "ip_address=10.0.0.1" == 19, which is greater than 10. There is a 
try-except clause for a TypeError exception, but it is not thrown in this case.
  This bug is observed only if no other pairs are already defined on the given 
port. Otherwise the lists are merged and the TypeError is thrown.

  def _validate_allowed_address_pairs(address_pairs, valid_values=None):
  ..
  try:
  if len(address_pairs) > cfg.CONF.max_allowed_address_pair:
  raise AllowedAddressPairExhausted(
  quota=cfg.CONF.max_allowed_address_pair)
  except TypeError:
  raise webob.exc.HTTPBadRequest(
  _("Allowed address pairs must be a list."))
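The misleading quota message can be reproduced in isolation; a minimal, self-contained sketch of the length check above, with the quota hard-coded to 10 (standing in for cfg.CONF.max_allowed_address_pair):

```python
# Sketch of the validation flaw described above; the constant stands in
# for cfg.CONF.max_allowed_address_pair in the real neutron code.
MAX_ALLOWED_ADDRESS_PAIR = 10

def validate_allowed_address_pairs(address_pairs):
    # len() works on both lists and strings, so a bare string slips past
    # the TypeError guard and is measured character by character.
    try:
        if len(address_pairs) > MAX_ALLOWED_ADDRESS_PAIR:
            return "quota exceeded"
    except TypeError:
        return "must be a list"
    return "ok"

# A proper list of dicts with a single pair passes:
print(validate_allowed_address_pairs([{"ip_address": "10.0.0.1"}]))  # ok
# The raw CLI string has 19 characters, so it "exceeds" the quota of 10:
print(validate_allowed_address_pairs("ip_address=10.0.0.1"))  # quota exceeded
```

This is why the fix needs an explicit type check (list of dicts) rather than relying on len() alone.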

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632521] Re: tox -efunctional fails when tox picks python 3.x

2016-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/385207
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=32e533d59abcd2ec27eebf174b492a75a85e7554
Submitter: Jenkins
Branch:master

commit 32e533d59abcd2ec27eebf174b492a75a85e7554
Author: melanie witt 
Date:   Tue Oct 11 23:59:25 2016 +

Always use python2.7 for functional tests

The functional testenv doesn't work with python 3.x on our codebase.
If someone is on a platform that defaults to python => python3,
functional tests will fail for them.

Closes-Bug: #1632521

Change-Id: I7bf6653f55c10d0a4f75054e519edf7da19c5c09


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632521

Title:
  tox -efunctional fails when tox picks python 3.x

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently, the functional tests can't be run with python 3.x and fail
  with a trace like this:

  Failed to import test module: 
nova.tests.functional.api_sample_tests.test_volumes
  Traceback (most recent call last):
File 
"/home/ubuntu/nova/.tox/functional/lib/python3.5/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/ubuntu/nova/.tox/functional/lib/python3.5/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File 
"/home/ubuntu/nova/nova/tests/functional/api_sample_tests/test_volumes.py", 
line 21, in 
  from nova.tests.functional.api_sample_tests import test_servers
File 
"/home/ubuntu/nova/nova/tests/functional/api_sample_tests/test_servers.py", 
line 24, in 
  class ServersSampleBase(api_sample_base.ApiSampleTestBaseV21):
File 
"/home/ubuntu/nova/nova/tests/functional/api_sample_tests/test_servers.py", 
line 29, in ServersSampleBase
  user_data = base64.b64encode(user_data_contents)
File "/home/ubuntu/nova/.tox/functional/lib/python3.5/base64.py", line 59, 
in b64encode
  encoded = binascii.b2a_base64(s)[:-1]
  TypeError: a bytes-like object is required, not 'str'
  The test run didn't actually run any tests
  ERROR: InvocationError: '/bin/bash tools/pretty_tox.sh 
nova.tests.functional.db'
  

 summary 
_
  ERROR:   functional: commands failed
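The TypeError above is the standard Python 3 bytes/str mismatch in base64.b64encode; a small sketch of the failure and the usual fix (the user-data value is illustrative):

```python
import base64

user_data_contents = "#!/bin/sh\necho hello\n"  # illustrative value

# On Python 3, b64encode requires bytes, so passing a str raises TypeError:
try:
    base64.b64encode(user_data_contents)
except TypeError as exc:
    print(exc)

# The usual fix is to encode the str before base64-encoding it:
user_data = base64.b64encode(user_data_contents.encode("utf-8"))
assert base64.b64decode(user_data).decode("utf-8") == user_data_contents
```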

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632521/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485399] Re: detach volume just then starting instance success but not in the instance, rbd backend

2016-10-12 Thread Prateek Arora
I tried reproducing this

Here are my observations

[stack@controller devstack]$ nova suspend test
[stack@controller devstack]$ nova list
+--------------------------------------+------+-----------+------------+-------------+-------------------------------------------------------+
| ID                                   | Name | Status    | Task State | Power State | Networks                                              |
+--------------------------------------+------+-----------+------------+-------------+-------------------------------------------------------+
| a3e5e950-a51f-483a-ac48-85203bdb0bc9 | test | SUSPENDED | -          | Shutdown    | private=10.0.0.3, 2001:db8:8000:0:f816:3eff:fe0b:f50c |
+--------------------------------------+------+-----------+------------+-------------+-------------------------------------------------------+
[stack@controller devstack]$ nova resume test && nova volume-detach 
a3e5e950-a51f-483a-ac48-85203bdb0bc9 29dd9434-7559-404b-9c61-9d943936c2bf
ERROR (Conflict): Cannot 'detach_volume' instance 
a3e5e950-a51f-483a-ac48-85203bdb0bc9 while it is in vm_state suspended (HTTP 
409) (Request-ID: req-8049116f-3e4c-476e-a475-9e67c874e65f)


As you can see, in this case the detach_volume did not go through because of 
the suspended vm_state.

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova
 Assignee: Prateek Arora (parora) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485399

Title:
  detach volume just then starting instance success but not in the
  instance, rbd backend

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. Version
  root@controller:~# dpkg -l | grep nova
  ii  nova-api1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - API frontend
  ii  nova-cert   1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - certificate management
  ii  nova-common 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - common files
  ii  nova-conductor  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - conductor service
  ii  nova-consoleauth1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - virtual machine scheduler
  ii  python-nova 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute Python libraries
  ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API

  VMs, Glance images, and volume backend storage all use Ceph RBD

  
  2. Reproduce steps:
  a) Boot an instance
  b) Attach a volume to the instance
  c) Shut off the instance
  d) Start the instance and detach the volume quickly until it returns success
  e) Log in to the instance and find that the vdb volume still exists and can be used
  f) Show that the volume status has changed from in-use to available

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632658] [NEW] binding:profile update always trigger port rebind

2016-10-12 Thread Na Zhu
Public bug reported:

Currently, a binding:profile update always triggers a port rebind. The rebind
behavior causes problems if the MD handles the port status for some
special purpose, so I think it is more reasonable to ask the MD whether a
rebind is needed.

From the last section of the binding:profile bp
(https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile), it
is also mentioned that there should be no rebind if the MD does not need it.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632658

Title:
  binding:profile update always trigger port rebind

Status in neutron:
  New

Bug description:
  Currently, a binding:profile update always triggers a port rebind. The
  rebind behavior causes problems if the MD handles the port status for
  some special purpose, so I think it is more reasonable to ask the MD
  whether a rebind is needed.

  From the last section of the binding:profile bp
  (https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile),
  it is also mentioned that there should be no rebind if the MD does not
  need it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1632658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632633] [NEW] fix some word spelling error

2016-10-12 Thread huyupeng
Public bug reported:

There are some spelling errors in .py files.

** Affects: neutron
 Importance: Undecided
 Assignee: huyupeng (huyp)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => huyupeng (huyp)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632633

Title:
  fix some word spelling error

Status in neutron:
  New

Bug description:
  There are some spelling errors in .py files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1632633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621615] Re: network not configured when ipv6 netbooted into cloud-init

2016-10-12 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init -
0.7.8-1-g3705bb5-0ubuntu1~16.04.3

---
cloud-init (0.7.8-1-g3705bb5-0ubuntu1~16.04.3) xenial-proposed; urgency=medium

  * ntp: move to run after apt configuration (LP: #1628337).

cloud-init (0.7.8-1-g3705bb5-0ubuntu1~16.04.2) xenial; urgency=medium

  * Support IPv6 config coming from initramfs.  LP: #1621615.

 -- Scott Moser   Mon, 03 Oct 2016 12:22:26 -0400

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

** Changed in: cloud-initramfs-tools (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621615

Title:
  network not configured when ipv6 netbooted into cloud-init

Status in cloud-init:
  Fix Committed
Status in MAAS:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-initramfs-tools package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-initramfs-tools source package in Xenial:
  Fix Released

Bug description:
  https://bugs.launchpad.net/ubuntu/+source/klibc/+bug/1621507 talks of
  how IPv6 netboot with iscsi root disk doesn't work, blocking IPv6-only
  MAAS.

  After I hand-walked busybox through getting an IPv6 address,
  everything worked just fine until cloud-init couldn't fetch the
  instance data, because it insisted on bringing up the interface in
  IPv4, and there is no IPv4 DHCP on that vlan.

  Please work with initramfs and friends on getting IPv6 netboot to
  actually configure the interface.  This may be as simple as teaching
  it about "inet6 dhcp" interfaces, and bolting the pieces together.
  Note that "use radvd" is not really an option for our use case.

  Related bugs:
   * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 addresses

  [Impact]

  It is not possible to enlist, commission, or deploy with MAAS in an
  IPv6-only environment. Anyone wanting to netboot with a network root
  filesystem in an IPv6-only environment is affected.

  This upload addresses this by accepting, using, and forwarding any
  IPV6* variables from the initramfs boot.  (See
  https://launchpad.net/bugs/1621507)
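
  As a rough sketch of the "accepting, using, and forwarding" step: the
  variable names below (IPV6ADDR, IPV6NETMASK, IPV6GATEWAY) mirror what
  the patched klibc ipconfig is expected to write to /run/net-<iface>.conf,
  but treat those names, and the eth0 interface, as assumptions against
  your initramfs version:

  ```shell
  #!/bin/sh
  # Sketch: turn initramfs-provided IPV6* variables into an ifupdown stanza.
  # In the real flow these would be sourced from /run/net-eth0.conf; they
  # are hard-coded here so the sketch is self-contained.
  IPV6ADDR="fd00::10"
  IPV6NETMASK="64"
  IPV6GATEWAY="fd00::1"

  # Build the "inet6 static" stanza, adding the gateway only if one was set.
  stanza="iface eth0 inet6 static
      address ${IPV6ADDR}/${IPV6NETMASK}"
  if [ -n "${IPV6GATEWAY}" ]; then
      stanza="${stanza}
      gateway ${IPV6GATEWAY}"
  fi
  printf '%s\n' "$stanza"
  ```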

  [Test Case]

  See Bug 1229458. Configure radvd, dhcpd, and tftpd for your IPv6-only
  netbooting world. Pass the boot process an IPv6 address to fetch
  instance-data from, and see it fail to configure the network.

  [Regression Potential]

  1) If the booting host is in a dual-stack environment, and the
  instance-data URL uses a hostname that has both A and AAAA RRsets, the
  booting host may try to talk IPv6 to get instance data.  If the
  instance-data providing host only allows that over IPv4, it will fail.
  (That also represents a configuration issue on the providing host...)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621615/+subscriptions



[Yahoo-eng-team] [Bug 1628337] Re: cloud-init tries to install NTP before even configuring the archives

2016-10-12 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init -
0.7.8-1-g3705bb5-0ubuntu1~16.04.3

---
cloud-init (0.7.8-1-g3705bb5-0ubuntu1~16.04.3) xenial-proposed; urgency=medium

  * ntp: move to run after apt configuration (LP: #1628337).

cloud-init (0.7.8-1-g3705bb5-0ubuntu1~16.04.2) xenial; urgency=medium

  * Support IPv6 config coming from initramfs.  LP: #1621615.

 -- Scott Moser   Mon, 03 Oct 2016 12:22:26 -0400

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1628337

Title:
  cloud-init tries to install NTP before even configuring the archives

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released

Bug description:
  == Begin SRU Template ==
  [Impact]
  When told to configure ntp, and the ntp package is not installed
  in an image, cloud-init will attempt to install the package.

  The problem here is that it currently tries to install the package before
  it configures apt.  As a result, no apt proxy or mirror configuration is
  setup, and the stock image apt config is used.

  [Test Case]
  ## Failure can be shown like this:
  $ cat > user-data  "$p" &&
  apt-get update -q && apt-get -qy install cloud-init'
  $ lxc exec $name -- sh -c '
  cd /var/lib/cloud && for d in *; do [ "$d" = "seed" ] || rm -Rf "$d"; done
  rm -Rf /var/log/cloud-init*'

  $ lxc file pull $name/var/log/cloud-init-output.log - | egrep "^[EW]:" || echo "FIX WORKED."

  [Regression Potential]
  The 'ntp' function is fairly new, and is only used if a user specifies
  an ntp configuration as shown above.  Regression chance is low then
  and should be restricted to scenarios where users are providing
  the ntp configuration.
  == End SRU Template ==

  cloud-init tries to install NTP package before it actually configures
  /etc/apt/sources.list.

  In a closed MAAS environment where MAAS is limited to access to
  us.archive.ubuntu.com , cloud-init is trying to access to
  archive.ubuntu.com.

  In commissioning, however, cloud-init is doing this:

  1. cloud-init gets metadata from MAAS
  2. cloud-init tries to install NTP from archive.ubuntu.com
  3. cloud-init configures /etc/apt/sources.list with us.archive.ubuntu.com
  4. cloud-init installs other packages.
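
  The fix reorders cloud-init's config modules so apt configuration runs
  before anything that installs packages. Conceptually, the relevant
  cloud.cfg ordering after the fix looks like the excerpt below; treat
  the exact module names as assumptions to check against your cloud-init
  version:

  ```yaml
  # Sketch of the relevant /etc/cloud/cloud.cfg excerpt:
  # apt-configure must precede any module that may install packages.
  cloud_config_modules:
    - apt-configure   # writes sources.list, proxy, and mirror settings
    - ntp             # may now install ntp via the configured mirror
  ```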

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1628337/+subscriptions
