[Yahoo-eng-team] [Bug 1628819] [NEW] OVS firewall can generate too many flows

2016-09-29 Thread IWAMOTO Toshihiro
Public bug reported:

The firewall code generates O(n^2) flows when a security group rule uses a
remote_group_id.
See OVSFirewallDriver.create_rules_generator.

This can be problematic when a large number of addresses are in a
security group.
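A minimal sketch (not the actual neutron code) of why this blows up: if every local port protected by the group is paired with every address in the remote group, the flow count is the product of the two, which is quadratic when both grow together. All names here are illustrative.

```python
# Illustrative only: pairing every local port with every remote-group
# address yields len(local_ports) * len(remote_addrs) flows -- O(n^2)
# when the security group contains its own members.

def flows_for_remote_group(local_ports, remote_addrs):
    """Hypothetical flow generator mirroring the reported behaviour."""
    return [
        {"port": port, "remote_ip": addr}
        for port in local_ports
        for addr in remote_addrs
    ]

flows = flows_for_remote_group(["p1", "p2"],
                               ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(len(flows))  # 2 ports * 3 addresses = 6 flows
```
With 500 addresses in a self-referencing group, that is on the order of 250,000 flows, which is why large groups are problematic.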

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovs sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628819

Title:
  OVS firewall can generate too many flows

Status in neutron:
  New

Bug description:
  The firewall code generates O(n^2) flows when a security group rule uses a
  remote_group_id.
  See OVSFirewallDriver.create_rules_generator.

  This can be problematic when a large number of addresses are in a
  security group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625566] Re: Tutorial: Adding a complex action to a table

2016-09-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/373649
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=44afeac9d4f14c7c9b0b2685bbdedbf545101562
Submitter: Jenkins
Branch: master

commit 44afeac9d4f14c7c9b0b2685bbdedbf545101562
Author: Kenji Ishii 
Date:   Wed Sep 21 14:10:10 2016 +0900

[Trivial]remove unnecessary commna

Change-Id: Ie174e707adbc9ad668a2ad516e2cda11a8646b49
Closes-Bug: #1625566


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1625566

Title:
  Tutorial: Adding a complex action to a table

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  urlpatterns = [,
  url(r'^$',
  views.IndexView.as_view(), name='index'),
  url(r'^(?P[^/]+)/create_snapshot/$',
  views.CreateSnapshotView.as_view(),
  name='create_snapshot'),
  ]

  Still gives an error for me? I removed the comma as such:

  urlpatterns = [
  url(r'^$',
  views.IndexView.as_view(), name='index'),
  url(r'^(?P[^/]+)/create_snapshot/$',
  views.CreateSnapshotView.as_view(),
  name='create_snapshot'),
  ]

  And it worked!
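  The reporter's observation is easy to confirm: the first snippet starts
  with `urlpatterns = [,`, and a comma immediately after the opening
  bracket is a Python SyntaxError, so the urls module fails to import at
  all. A minimal check (the snippet string is a simplified stand-in):

```python
# The stray comma right after "[" is rejected by the Python parser, so
# Django never even gets to evaluate urlpatterns.
snippet = "urlpatterns = [, 'index']"

try:
    compile(snippet, "<urls.py>", "exec")
    failed = False
except SyntaxError:
    failed = True

print(failed)  # True: the leading comma is a SyntaxError
```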

  PS. This is a duplicate, though it may as well be its own report,
  because where I posted the duplicate wasn't precisely solid IMO; feel
  free to correct me otherwise. DS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1625566/+subscriptions



[Yahoo-eng-team] [Bug 1628200] Re: Remove unused column "scheduled_at" from instances from schema

2016-09-29 Thread Sylvain Bauza
That's even a pure tech debt cleanup, not really a bug IMHO.

** Changed in: nova
   Importance: Low => Wishlist

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1628200

Title:
  Remove unused column "scheduled_at" from instances from schema

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  "scheduled_at" attribute from Instance model is no longer used. It is removed 
from data model in newton as a part of this commit
  
https://github.com/openstack/nova/commit/5e9df4baf97fd1f495b81d3b8b612e24f344b325

  Hence, now we can remove it from db schema.
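  The cleanup itself is a routine drop-column migration. A minimal,
  illustrative sketch with sqlite3 using the portable recreate-and-copy
  pattern (table and column names are taken from the report; everything
  else, including the simplified schema, is assumed for illustration and
  is not nova's actual migration code):

```python
import sqlite3

# Drop the dead "scheduled_at" column by recreating the table without it
# and copying the surviving data across (works even on engines without
# ALTER TABLE ... DROP COLUMN).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances "
             "(id INTEGER PRIMARY KEY, uuid TEXT, scheduled_at TEXT)")
conn.execute("INSERT INTO instances (uuid, scheduled_at) "
             "VALUES ('abc', NULL)")

conn.execute("CREATE TABLE instances_new (id INTEGER PRIMARY KEY, uuid TEXT)")
conn.execute("INSERT INTO instances_new (id, uuid) "
             "SELECT id, uuid FROM instances")
conn.execute("DROP TABLE instances")
conn.execute("ALTER TABLE instances_new RENAME TO instances")

cols = [row[1] for row in conn.execute("PRAGMA table_info(instances)")]
print(cols)  # ['id', 'uuid'] -- scheduled_at is gone
```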

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1628200/+subscriptions



[Yahoo-eng-team] [Bug 1625619] Re: It is possible to download key pair for other user at the same project

2016-09-29 Thread Sylvain Bauza
Given the above comments, it doesn't seem related to Nova at all.
I'm marking it Invalid; if I'm wrong, feel free to set it back to New.

** Changed in: nova
   Status: New => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625619

Title:
  It is possible to download key pair for other user at the same project

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Identity (keystone):
  New
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Bug was reproduced in mitaka openstack release.

  Steps to reproduce:

  1. Login to horizon.
  2. Click Project-> Compute -> Access and Security
  3. Click "Key Pairs" tab
  4. Click "Create Key Pair" button, enter keypair name.
  5. On the next screen with download key dialog copy URL from browser URL field

  URL will be like
  http://server/horizon/project/access_and_security/keypairs//download

  6. Click cancel to close download window.
  7. Click Project->Compute->Instances.
  8. In the opened window, select another key pair name from the KEY PAIR 
column (it could be a key pair belonging to a different user)
  9. Open a new browser window and paste the URL string from step 5.
  10. Replace the key pair name in the URL with the name obtained from step 8 
and press enter

  You will be prompted to download the private key of another user.

  This isn't correct: a user should be able to download only their own keys.
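  The report implies the download view is missing an ownership check. A
  minimal sketch of the kind of check that would close the hole (all
  names here -- Forbidden, KEYPAIRS, download_keypair -- are hypothetical
  illustrations, not Horizon's or nova's actual API):

```python
class Forbidden(Exception):
    """Raised when a user requests a key pair they do not own."""

KEYPAIRS = {"alice-key": "alice", "bob-key": "bob"}  # name -> owner

def download_keypair(requesting_user, keypair_name):
    # Refuse any key pair that does not belong to the requesting user,
    # instead of serving it based on the name in the URL alone.
    owner = KEYPAIRS.get(keypair_name)
    if owner != requesting_user:
        raise Forbidden("key pair %s does not belong to %s"
                        % (keypair_name, requesting_user))
    return "PRIVATE KEY OF %s" % keypair_name

print(download_keypair("alice", "alice-key"))  # allowed: own key
try:
    download_keypair("alice", "bob-key")       # the reported attack
except Forbidden:
    print("denied")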

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions



[Yahoo-eng-team] [Bug 1628854] [NEW] Hyper-V VMs cannot spawn due to missing image property

2016-09-29 Thread Claudiu Belu
Public bug reported:

The HyperVDriver currently fails to spawn instances due to missing
"os_type" image property [1]. This image property is needed for VMs with
UEFI Secure Boot enabled, but not needed if this feature is not
required.

This issue has been seen in the Hyper-V CI.

[1] http://paste.openstack.org/show/583450/
[2] https://review.openstack.org/#/c/209581/

** Affects: nova
 Importance: High
 Assignee: Claudiu Belu (cbelu)
 Status: In Progress

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1628854

Title:
  Hyper-V VMs cannot spawn due to missing image property

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The HyperVDriver currently fails to spawn instances due to missing
  "os_type" image property [1]. This image property is needed for VMs
  with UEFI Secure Boot enabled, but not needed if this feature is not
  required.

  This issue has been seen in the Hyper-V CI.

  [1] http://paste.openstack.org/show/583450/
  [2] https://review.openstack.org/#/c/209581/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1628854/+subscriptions



[Yahoo-eng-team] [Bug 1628776] Re: Release request of networking-hyperv and creation of stable/newton

2016-09-29 Thread Claudiu Belu
Hello,

Thanks for bringing it up. The stable/newton branch has been cut,
3.0.0.0rc1 was released quite a while ago, and I've sent a request for a
final Newton release (3.0.0).

https://review.openstack.org/379331


** Changed in: neutron
   Status: New => Invalid

** Changed in: networking-hyperv
   Status: New => In Progress

** Changed in: networking-hyperv
 Assignee: (unassigned) => Claudiu Belu (cbelu)

** Changed in: networking-hyperv
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628776

Title:
  Release request of networking-hyperv and creation of stable/newton

Status in networking-hyperv:
  In Progress
Status in neutron:
  Invalid

Bug description:
  networking-hyperv has NOT yet branched a stable/newton branch and
  there is no tarball at http://tarballs.openstack.org/networking-hyperv/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1628776/+subscriptions



[Yahoo-eng-team] [Bug 1628661] Re: horizon tox -e cover not working

2016-09-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/379002
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f23b7d1fd19fd5076b1a8a417d97872c1e0b7b5b
Submitter: Jenkins
Branch: master

commit f23b7d1fd19fd5076b1a8a417d97872c1e0b7b5b
Author: eric 
Date:   Wed Sep 28 14:58:48 2016 -0600

Fix tox cover to not fail

This changes the run of tox with cover, to just ammend cover
output data the combine option seemed to be breaking the data
and then the xml / html options did not work as a result.

Change-Id: Ic600b55855cf74c1b6ed138fea5c4b7bb037de82
Closes-bug: #1628661


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1628661

Title:
  horizon tox -e cover not working

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  running "tox -e cover" is now yielding this failure:

  
  OK (SKIP=8)
  Destroying test database for alias 'default'...
  cover runtests: commands[3] | coverage combine
  cover runtests: commands[4] | coverage xml
  No data to report.
  ERROR: InvocationError: 
'/home/eric/work/public/horizon/.tox/cover/bin/coverage xml'
  ______________________________ summary ______________________________
  ERROR:   cover: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1628661/+subscriptions



[Yahoo-eng-team] [Bug 1628875] [NEW] Missing rollback logic when image creating for volume based instance

2016-09-29 Thread Alex Xu
Public bug reported:

In the code that creates an image for a volume-backed instance
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n2610,
there is no rollback code when creating a volume snapshot or creating the
image fails. For example, both cinder and glance have quotas (for the
number of volume snapshots and the amount of image metadata), so there is
a chance of failing on quota. But the code won't roll back the already
created volume snapshots.
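A minimal sketch of the rollback shape the report asks for: if the later
image step fails (e.g. over quota), delete the snapshots that were
already created before re-raising. All names here (QuotaError,
create_snapshot, create_image, snapshot_volume_backed) are illustrative
stand-ins, not the actual nova/cinder/glance APIs.

```python
class QuotaError(Exception):
    """Stand-in for a cinder/glance over-quota failure."""

created = []  # snapshots that exist and would be leaked without rollback

def create_snapshot(volume_id):
    created.append(volume_id)
    return "snap-%s" % volume_id

def create_image(snapshots, fail=False):
    if fail:
        raise QuotaError("image metadata quota exceeded")
    return "image-1"

def snapshot_volume_backed(volumes, fail_image=False):
    snaps = [create_snapshot(v) for v in volumes]
    try:
        return create_image(snaps, fail=fail_image)
    except QuotaError:
        # Rollback: remove every snapshot created so far, then re-raise
        # so the caller still sees the failure.
        while created:
            created.pop()
        raise

try:
    snapshot_volume_backed(["vol-1", "vol-2"], fail_image=True)
except QuotaError:
    pass
print(created)  # [] -- the snapshots were cleaned up
```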

** Affects: nova
 Importance: Medium
 Assignee: Alex Xu (xuhj)
 Status: Confirmed

** Description changed:

- In the code of create image for volume based instance, there isn't
- rollback code when create volume snaphost or create image failed. For
- example, both cinder and glance have quota for the number of volume
- snapshot and the number of image metadata, so there have a chance to
- failed on quota. But the code won't rollbacked already created volume
+ In the code of create image for volume based instance
+ http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n2610,
+ there isn't rollback code when create volume snaphost or create image
+ failed. For example, both cinder and glance have quota for the number of
+ volume snapshot and the number of image metadata, so there have a chance
+ to failed on quota. But the code won't rollbacked already created volume
  snapshot.

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1628875

Title:
  Missing rollback logic when image creating for volume based instance

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  In the code that creates an image for a volume-backed instance
  http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n2610,
  there is no rollback code when creating a volume snapshot or creating
  the image fails. For example, both cinder and glance have quotas (for
  the number of volume snapshots and the amount of image metadata), so
  there is a chance of failing on quota. But the code won't roll back the
  already created volume snapshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1628875/+subscriptions



[Yahoo-eng-team] [Bug 1625305] Re: neutron-openvswitch-agent is crashing due to KeyError in _restore_local_vlan_map()

2016-09-29 Thread Esha Seth
I tried one more scenario in which I used one (flat) network and created
2 ports off it (2 VMs) using the mitaka OVS agent. The ports created had
the same tag '2' and the same net uuid. Then I used the newton agent to
create another one and it worked fine. So I am not seeing the issue now
with the same tag.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625305

Title:
  neutron-openvswitch-agent is crashing due to KeyError in
  _restore_local_vlan_map()

Status in neutron:
  Invalid

Bug description:
  The neutron openvswitch agent is unable to restart because VMs with
  untagged/flat networks (tagged 3999) cause an issue with
  _restore_local_vlan_map

  Loaded agent extensions: []
  2016-09-06 07:57:39.682 70085 CRITICAL neutron 
[req-ef8eea4f-c1ed-47a0-8318-eb5473b7c667 - - - - -] KeyError: 3999
  2016-09-06 07:57:39.682 70085 ERROR neutron Traceback (most recent call last):
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/bin/neutron-openvswitch-agent", line 28, in 
  2016-09-06 07:57:39.682 70085 ERROR neutron sys.exit(main())
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 235, in __init__
  2016-09-06 07:57:39.682 70085 ERROR neutron self._restore_local_vlan_map()
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 356, in _restore_local_vlan_map
  2016-09-06 07:57:39.682 70085 ERROR neutron 
self.available_local_vlans.remove(local_vlan)
  2016-09-06 07:57:39.682 70085 ERROR neutron KeyError: 3999
  2016-09-06 07:57:39.682 70085 ERROR neutron
  2016-09-06 07:57:39.684 70085 INFO oslo_rootwrap.client 
[req-ef8eea4f-c1ed-47a0-8318-eb5473b7c667 - - - - -] Stopping rootwrap daemon 
process with pid=70197
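  The crash is set.remove() raising KeyError for a vlan id that is not in
  available_local_vlans (flat networks show up with tag 3999, outside the
  usable range). A minimal sketch of the failure and of the tolerant
  alternative -- set.discard() is a no-op for missing members, where
  set.remove() raises; the small vlan range here is illustrative:

```python
# Simulate available_local_vlans with a small range for illustration.
available_local_vlans = set(range(1, 10))

try:
    available_local_vlans.remove(3999)   # what the traceback shows
    raised = False
except KeyError:
    raised = True
print(raised)  # True: remove() raises for a vlan not in the set

available_local_vlans.discard(3999)      # tolerant: no error if absent
print(len(available_local_vlans))        # 9 -- set unchanged
```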

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1625305/+subscriptions



[Yahoo-eng-team] [Bug 1628883] [NEW] Minimum requirements too low on oslo.log for keystone

2016-09-29 Thread Dr. Jens Rosenboom
Public bug reported:

After upgrading keystone from mitaka to newton-rc1 on Xenial I am
getting this error:


$ keystone-manage db_sync
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 6, in 
from keystone.cmd.manage import main
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/manage.py", line 32, in 

from keystone.cmd import cli
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 28, in 

from keystone.cmd import doctor
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/__init__.py", line 
13, in 
from keystone.cmd.doctor import caching
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/caching.py", line 
13, in 
import keystone.conf
  File "/usr/lib/python2.7/dist-packages/keystone/conf/__init__.py", line 26, 
in 
from keystone.conf import default
  File "/usr/lib/python2.7/dist-packages/keystone/conf/default.py", line 180, 
in 
deprecated_since=versionutils.deprecated.NEWTON,
AttributeError: type object 'deprecated' has no attribute 'NEWTON'

This seems to be due to the fact that the installed version of oslo.log
was not updated properly:

python-oslo.log:
  Installed: 3.2.0-2
  Candidate: 3.16.0-0ubuntu1~cloud0
  Version table:
 3.16.0-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main amd64 Packages
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main i386 Packages
 *** 3.2.0-2 500
500 http://mirror/ubuntu xenial/main amd64 Packages
100 /var/lib/dpkg/status

But looking at the requirements.txt in stable/newton, even
oslo.log>=1.14.0 is claimed to work.
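The mismatch is easy to state as a version comparison: the installed
3.2.0 satisfies the declared minimum (>=1.14.0) but lacks
versionutils.deprecated.NEWTON, while the 3.16.0 candidate from the
report does work. A minimal pure-Python sketch (the simple dotted-tuple
compare is illustrative; treating 3.16.0 as the sufficient version is an
assumption based on it being the packaged fix):

```python
def ver(s):
    # Naive dotted-version parse, enough for these plain x.y.z strings.
    return tuple(int(p) for p in s.split("."))

installed = "3.2.0"        # version installed per the report
declared_min = "1.14.0"    # what requirements.txt claims is enough
candidate_ok = "3.16.0"    # candidate from the report, assumed sufficient

print(ver(installed) >= ver(declared_min))   # True: requirement satisfied
print(ver(installed) >= ver(candidate_ok))   # False: yet keystone breaks
```
So pip (or the package manager) sees a satisfied requirement even though the code needs a newer release; the declared minimum should be raised.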

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: keystone (Ubuntu)
 Importance: Undecided
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1628883

Title:
  Minimum requirements too low on oslo.log for keystone

Status in OpenStack Identity (keystone):
  New
Status in keystone package in Ubuntu:
  Triaged

Bug description:
  After upgrading keystone from mitaka to newton-rc1 on Xenial I am
  getting this error:

  
  $ keystone-manage db_sync
  Traceback (most recent call last):
File "/usr/bin/keystone-manage", line 6, in 
  from keystone.cmd.manage import main
File "/usr/lib/python2.7/dist-packages/keystone/cmd/manage.py", line 32, in 

  from keystone.cmd import cli
File "/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 28, in 

  from keystone.cmd import doctor
File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/__init__.py", 
line 13, in 
  from keystone.cmd.doctor import caching
File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/caching.py", 
line 13, in 
  import keystone.conf
File "/usr/lib/python2.7/dist-packages/keystone/conf/__init__.py", line 26, 
in 
  from keystone.conf import default
File "/usr/lib/python2.7/dist-packages/keystone/conf/default.py", line 180, 
in 
  deprecated_since=versionutils.deprecated.NEWTON,
  AttributeError: type object 'deprecated' has no attribute 'NEWTON'

  This seems to be due to the fact that the installed version of oslo.log
  was not updated properly:

  python-oslo.log:
Installed: 3.2.0-2
Candidate: 3.16.0-0ubuntu1~cloud0
Version table:
   3.16.0-0ubuntu1~cloud0 500
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main amd64 Packages
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main i386 Packages
   *** 3.2.0-2 500
  500 http://mirror/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  But looking at the requirements.txt in stable/newton, even
  oslo.log>=1.14.0 is claimed to work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1628883/+subscriptions



[Yahoo-eng-team] [Bug 1628886] [NEW] test_reprocess_port_when_ovs_restarts fails nondeterministicly

2016-09-29 Thread John Schwarz
Public bug reported:

Encountered in https://review.openstack.org/#/c/365326/8/, specifically
http://logs.openstack.org/26/365326/8/check/gate-neutron-dsvm-
functional-ubuntu-trusty/cc5f8eb/testr_results.html.gz

Stack trace from tempest (if the logs are deleted from the server):
http://paste.openstack.org/show/583476/

Stack trace from dsvm-functional log dir:
http://paste.openstack.org/show/583478/

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628886

Title:
  test_reprocess_port_when_ovs_restarts fails nondeterministicly

Status in neutron:
  Confirmed

Bug description:
  Encountered in https://review.openstack.org/#/c/365326/8/,
  specifically http://logs.openstack.org/26/365326/8/check/gate-neutron-
  dsvm-functional-ubuntu-trusty/cc5f8eb/testr_results.html.gz

  Stack trace from tempest (if the logs are deleted from the server):
  http://paste.openstack.org/show/583476/

  Stack trace from dsvm-functional log dir:
  http://paste.openstack.org/show/583478/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628886/+subscriptions



[Yahoo-eng-team] [Bug 1628883] Re: Minimum requirements too low on oslo.log for keystone

2016-09-29 Thread Corey Bryant
** Also affects: keystone (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: keystone (Ubuntu)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1628883

Title:
  Minimum requirements too low on oslo.log for keystone

Status in OpenStack Identity (keystone):
  New
Status in keystone package in Ubuntu:
  Triaged

Bug description:
  After upgrading keystone from mitaka to newton-rc1 on Xenial I am
  getting this error:

  
  $ keystone-manage db_sync
  Traceback (most recent call last):
File "/usr/bin/keystone-manage", line 6, in 
  from keystone.cmd.manage import main
File "/usr/lib/python2.7/dist-packages/keystone/cmd/manage.py", line 32, in 

  from keystone.cmd import cli
File "/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 28, in 

  from keystone.cmd import doctor
File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/__init__.py", 
line 13, in 
  from keystone.cmd.doctor import caching
File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/caching.py", 
line 13, in 
  import keystone.conf
File "/usr/lib/python2.7/dist-packages/keystone/conf/__init__.py", line 26, 
in 
  from keystone.conf import default
File "/usr/lib/python2.7/dist-packages/keystone/conf/default.py", line 180, 
in 
  deprecated_since=versionutils.deprecated.NEWTON,
  AttributeError: type object 'deprecated' has no attribute 'NEWTON'

  This seems to be due to the fact that the installed version of oslo.log
  was not updated properly:

  python-oslo.log:
Installed: 3.2.0-2
Candidate: 3.16.0-0ubuntu1~cloud0
Version table:
   3.16.0-0ubuntu1~cloud0 500
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main amd64 Packages
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main i386 Packages
   *** 3.2.0-2 500
  500 http://mirror/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  But looking at the requirements.txt in stable/newton, even
  oslo.log>=1.14.0 is claimed to work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1628883/+subscriptions



[Yahoo-eng-team] [Bug 1628892] [NEW] Admin network panel throws something went wrong page

2016-09-29 Thread Sudheer Kalla
Public bug reported:

When the neutron service is down and a user tries to visit the Networks
panel in the admin dashboard, it gives a "Something went wrong" page.

However, the Networks panel in the project view throws a pop-up instead
of an error page.
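A minimal sketch of the graceful-degradation behaviour the project-side
panel already has and the admin panel lacks: catch the service failure
and return an empty table plus a warning, instead of letting the
exception reach the error page. All names (ServiceUnavailable,
list_networks, networks_panel) are illustrative, not Horizon's actual
API.

```python
class ServiceUnavailable(Exception):
    """Stand-in for neutron being unreachable."""

def list_networks():
    raise ServiceUnavailable("neutron is down")

def networks_panel():
    # Degrade gracefully: empty list plus a warning message, rather than
    # propagating the exception to a "something went wrong" page.
    try:
        return list_networks(), None
    except ServiceUnavailable:
        return [], "Unable to retrieve network list."

data, warning = networks_panel()
print(data, warning)  # [] Unable to retrieve network list.
```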

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1628892

Title:
  Admin network panel throws something went wrong page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When neutron service is down, when user tries to visit the networks
  panel in the admin dashboard it give some thing went wrong page.

  How ever the network panel in project view will throw a pop-up but not
  error page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1628892/+subscriptions



[Yahoo-eng-team] [Bug 1498130] Re: LBaaSv2: Can't delete the Load balancer and also dependant entities if the load balancer provisioning_status is in PENDING_UPDATE

2016-09-29 Thread Spyros Trigazis
Any advice for that? I even modified the DB and it didn't work.

So for the use case:
In openstack/magnum we have an option to use LBaaS for our clusters. Two 
load balancers are created, one for etcd and one for the API. These load 
balancers are created with heat. If for any reason (unrelated to LBaaS) the 
heat creation fails, we want to delete the stack, but it is impossible 
because we can't delete the load balancers.

One more thing I tried that also failed:
I tried to use an even smaller flavor than m1.amphora, with 512MB RAM, for 
the LBaaS creation as a stack.

** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498130

Title:
  LBaaSv2: Can't delete the Load balancer and also dependant entities
  if the load balancer provisioning_status is in PENDING_UPDATE

Status in neutron:
  New

Bug description:
  If the load balancer's provisioning_status is PENDING_UPDATE, you
  cannot delete the load balancer or dependent entities like a listener
  or pool.

   neutron -v lbaas-listener-delete 6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://9.197.47.200:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] content-length: 338 vary: 
X-Auth-Token connection: keep-alive date: Mon, 21 Sep 2015 18:35:55 GMT 
content-type: application/json x-openstack-request-id: 
req-952f21b0-81bf-4e0f-a6c8-b3fc13ac4cd2
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://9.197.47.200:5000/v2.0/";, "rel": "self"}, {"href": 
"http://docs.openstack.org/";, "type": "text/html", "rel": "describedby"}]}}

  DEBUG: neutronclient.neutron.v2_0.lb.v2.listener.DeleteListener 
run(Namespace(id=u'6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6', 
request_format='json'))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://9.197.47.200:5000/v2.0/tokens
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://9.197.47.200:9696/v2.0/lbaas/listeners.json?fields=id&id=6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}9ea944020f06fa79f4b6db851dbd9e69aca65d58"
  DEBUG: keystoneclient.session RESP: [200] date: Mon, 21 Sep 2015 18:35:56 GMT 
connection: keep-alive content-type: application/json; charset=UTF-8 
content-length: 346 x-openstack-request-id: 
req-fd7ee22b-f776-4ebd-94c6-7548a5aff362
  RESP BODY: {"listeners": [{"protocol_port": 100, "protocol": "TCP", 
"description": "", "sni_container_ids": [], "admin_state_up": true, 
"loadbalancers": [{"id": "ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2"}], 
"default_tls_container_id": null, "connection_limit": 100, "default_pool_id": 
null, "id": "6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6", "name": "listener100"}]}

  DEBUG: keystoneclient.session REQ: curl -g -i -X DELETE 
http://9.197.47.200:9696/v2.0/lbaas/listeners/6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6.json
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}9ea944020f06fa79f4b6db851dbd9e69aca65d58"
  DEBUG: keystoneclient.session RESP:
  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"Invalid state PENDING_UPDATE of loadbalancer resource 
ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2", "type": "StateInvalid", "detail": ""}}
  ERROR: neutronclient.shell Invalid state PENDING_UPDATE of loadbalancer 
resource ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/neutronclient/shell.py", line 766, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/lib/python2.7/site-packages/neutronclient/shell.py", line 101, 
in run_command
  return cmd.run(known_args)
File 
"/usr/lib/python2.7/site-packages/neutronclient/neutron/v2_0/__init__.py", line 
581, in run
  obj_deleter(_id)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
932, in delete_listener
  return self.delete(self.lbaas_listener_path % (lbaas_listener))
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
289, in delete
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
270, in retry_request
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
211, in do_request
  self._handle_fault_response(status_code, replybody)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0

[Yahoo-eng-team] [Bug 1627673] Re: LANGUAGES list in settings.py needs to be updated before Newton release

2016-09-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/376357
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e01c54bb0fc9f9b677c5e8601fb27225852d21a6
Submitter: Jenkins
Branch: master

commit e01c54bb0fc9f9b677c5e8601fb27225852d21a6
Author: Akihiro Motoki 
Date:   Mon Sep 26 19:08:33 2016 +0900

i18n: Add Indonesian to the language list

Indonesian translation have made a significant progress
during Newton cycle. Let's add it to the language list
in openstack_dashboard.settings so that Indonesian is listed
by the language pull-down menu by default.

I am planning to make this process automated in near future
(hopefully in Ocata cycle).

Change-Id: Iacbf112df81ee1b4fc40c063eaaa7d0b1c92ca7a
Closes-Bug: #1627673


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1627673

Title:
  LANGUAGES list in settings.py needs to be updated before Newton
  release

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The LANGUAGES list in settings.py needs to be updated before the
  Newton release. During the Newton cycle the Indonesian translation
  made significant progress, but it is not included in the language list
  in openstack_dashboard.settings. That list controls the default
  language pull-down, and if a language is not listed there most users
  will not notice that the translation is available. This is worth
  addressing before the Newton release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1627673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391504] Re: Sample policies for Openstack

2016-09-29 Thread Sean McGinnis
** Changed in: cinder
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391504

Title:
  Sample policies for Openstack

Status in Cinder:
  Won't Fix
Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  Confirmed
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Regarding OpenStack policies in general, the described roles seem
  quite complicated, and it is not clear which roles are appropriate for
  each user. For example, many policies define just a global admin role.
  We would like to clarify the role organization: for example,
  cloud_admin is the role for cloud managers, domain_admin for domain
  managers, project_admin for project administrators, and project_member
  for a member with a role in a project but no admin permissions. This
  way, it is clear to the cloud manager which capability is being given
  to a user. The idea is to create a policy.cloudsample.json where roles
  such as cloud_admin, project_admin, and project_member are defined
  along with some default permissions, making policies closer to
  business reality.
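A sample of what such a policy.cloudsample.json could look like (purely illustrative; only the role names come from the proposal above, while the rule expressions and API targets below are hypothetical):

```json
{
    "cloud_admin": "role:cloud_admin",
    "domain_admin": "rule:cloud_admin or (role:domain_admin and domain_id:%(domain_id)s)",
    "project_admin": "rule:domain_admin or (role:project_admin and project_id:%(project_id)s)",
    "project_member": "rule:project_admin or (role:member and project_id:%(project_id)s)",

    "identity:create_project": "rule:domain_admin",
    "identity:list_user_projects": "rule:project_member"
}
```

Each entry delegates upward, so a cloud_admin implicitly passes every check a project_member would, while a project_member is scoped to its own project.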

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1391504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628692] Re: Password history constraints not enforced via /v3/users//password path

2016-09-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/379018
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=4be9164e53403b863f8c717b58227c9fcbd13f7c
Submitter: Jenkins
Branch:master

commit 4be9164e53403b863f8c717b58227c9fcbd13f7c
Author: Ronald De Rose 
Date:   Wed Sep 28 21:57:23 2016 +

Validate password history for self-service password changes

This patch adds password history validation to the change_password
(self-service) backend method.

backport: newton
Closes-Bug: #1628692
Change-Id: I6a21eb355a60b96da0615e64f57fa64289c0221e
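The check the patch adds can be sketched as follows (an illustrative sketch only, not keystone's implementation; keystone stores salted password hashes in SQL and reads the history window from configuration, whereas this sketch uses a plain sha256 hash for brevity):

```python
# Illustrative sketch of a password-history check (NOT keystone's code).
import hashlib


def _hash(password):
    # Stand-in hash for the sketch; a real implementation must use a
    # salted, slow hash (e.g. bcrypt), never bare sha256.
    return hashlib.sha256(password.encode('utf-8')).hexdigest()


def violates_password_history(new_password, previous_hashes, window=3):
    """True if new_password matches any of the last `window` passwords."""
    return _hash(new_password) in previous_hashes[-window:]


history = [_hash(p) for p in ('spring16', 'summer16', 'autumn16')]
print(violates_password_history('summer16', history))  # True: recent reuse
print(violates_password_history('winter16', history))  # False: allowed
```

The point of the fix is that this validation now runs on the self-service change_password path as well, not only on the admin update route.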


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1628692

Title:
  Password history constraints not enforced via
  /v3/users//password path

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Unlike the /v3/user/ route [1], the
  /v3/user//password route does not enforce the password history [2].

  At [3] we are able to change a password that breaks the password
  history constraints.

  [1] 
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql.py#L161
  [2] 
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql.py#L189
  [3] http://paste.openstack.org/show/583366/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1628692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612875] Re: FixedIPsTestJson fails server build with "was re-scheduled: operation failed: filter 'nova-no-nd-reflection' already exists with uuid"

2016-09-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/374975
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2ce193e16d7ca5b93671d7f0d2e3b35761f5d386
Submitter: Jenkins
Branch:master

commit 2ce193e16d7ca5b93671d7f0d2e3b35761f5d386
Author: Matt Riedemann 
Date:   Thu Sep 22 12:41:46 2016 -0400

libvirt: ignore conflict when defining network filters

We have a latent race in the libvirt firewall code when setting
up static filters which is now an error with libvirt>=1.2.7,
which is why we started seeing this in CI failures starting in
newton which run on xenial nodes that have libvirt 1.3.1 (but
didn't see it on trusty nodes with libvirt 1.2.2).

Libvirt commit 46a811db0731cedaea0153fc223faa6096cee5b5 checks
for an existing filter with the same name but a different uuid
when defining network filters and raises an error if found. That
was added in the 1.2.7 release.

This change simply handles the error and ignores it so we don't
fail to boot the instance.

Unfortunately we don't have a specific error code from libvirt
when this happens so the best we can do is compare the error
message from the libvirt error which is only going to work for
English locales because the error message from libvirt is
translated.

Change-Id: I161be26d605351f168e351d3ed3d308234346f6f
Closes-Bug: #1612875
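The approach the commit describes, catching the conflict and ignoring it, can be sketched roughly like this (a hedged illustration, not the actual Nova patch; the real change is in Nova's libvirt firewall code and catches libvirt.libvirtError, replaced here by a stand-in exception so the sketch is self-contained):

```python
# Rough sketch of "ignore the already-exists conflict" when defining a
# libvirt network filter. libvirt >= 1.2.7 errors out if a filter with
# the same name but a different uuid is re-defined; there is no
# dedicated error code, so the English message fragment is matched.

class FakeLibvirtError(Exception):
    """Stand-in for libvirt.libvirtError in this sketch."""


ALREADY_EXISTS_MSG = 'already exists with uuid'


def define_filter_ignoring_conflict(define_fn, filter_xml):
    try:
        define_fn(filter_xml)
    except FakeLibvirtError as exc:
        if ALREADY_EXISTS_MSG in str(exc):
            # An equivalent filter is already defined (e.g. by a racing
            # request); safe to continue booting the instance.
            return
        raise  # any other libvirt failure is still fatal
```

As the commit notes, matching on the message only works for English locales, since libvirt translates its error strings.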


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1612875

Title:
  FixedIPsTestJson fails server build with "was re-scheduled: operation
  failed: filter 'nova-no-nd-reflection' already exists with uuid"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/98/355098/1/check/gate-tempest-dsvm-full-
  ubuntu-
  xenial/3c301f3/logs/screen-n-cond.txt.gz?level=TRACE#_2016-08-13_00_53_27_621

  2016-08-13 00:53:27.621 15971 ERROR nova.scheduler.utils 
[req-e7b83619-01ae-43f2-b293-af02c5cb35a8 tempest-FixedIPsTestJson-2017152444 
tempest-FixedIPsTestJson-2017152444] [instance: 
ac2c9a4a-1b07-43e1-8f4c-b75541331307] Error from last host: 
ubuntu-xenial-rax-ord-3453779 (node ubuntu-xenial-rax-ord-3453779): 
[u'Traceback (most recent call last):\n', u'  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1778, in 
_do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1973, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u"RescheduledException: Build of instance 
ac2c9a4a-1b07-43e1-8f4c-b75541331307 was re-scheduled: operation failed: filter 
'nova-no-nd-reflection' already exists with uuid 
1f47eeb2-d473-481e-998a-c4d64a44ac5e\n"]
  2016-08-13 00:53:27.676 15971 WARNING nova.scheduler.utils 
[req-e7b83619-01ae-43f2-b293-af02c5cb35a8 tempest-FixedIPsTestJson-2017152444 
tempest-FixedIPsTestJson-2017152444] Failed to compute_task_build_instances: No 
valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 199, in inner
  return func(*args, **kwargs)

File "/opt/stack/new/nova/nova/scheduler/manager.py", line 104, in 
select_destinations
  dests = self.driver.select_destinations(ctxt, spec_obj)

File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 74, in 
select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  2016-08-13 00:53:27.676 15971 WARNING nova.scheduler.utils [req-
  e7b83619-01ae-43f2-b293-af02c5cb35a8 tempest-
  FixedIPsTestJson-2017152444 tempest-FixedIPsTestJson-2017152444]
  [instance: ac2c9a4a-1b07-43e1-8f4c-b75541331307] Setting instance to
  ERROR state.

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22was
  %20re-scheduled%3A%20operation%20failed%3A%20filter%20'nova-no-nd-
  
reflection'%20already%20exists%20with%20uuid%5C%22%20AND%20tags%3A%5C%22screen-n-cond.txt%5C%22&from=7d

  5 hits in 7 days, check queue only, but multiple changes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1612875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628776] Re: Release request of networking-hyperv and creation of stable/newton

2016-09-29 Thread Claudiu Belu
Done. Let us know if there is anything else.

** Changed in: networking-hyperv
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628776

Title:
  Release request of networking-hyperv and creation of stable/newton

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  networking-hyperv has NOT yet branched a stable/newton branch and
  there is no tarball at http://tarballs.openstack.org/networking-
  hyperv/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1628776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617268] Re: vpn-agent does not initialize FWaaS

2016-09-29 Thread Nate Johnston
This is no longer needed.  FWaaS no longer inherits from
L3NATAgentWithStateReport; it plugs in using the L3 agent extensions
mechanism.  https://review.openstack.org/#/c/355576/

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617268

Title:
  vpn-agent does not initialize FWaaS

Status in neutron:
  Invalid

Bug description:
  Currently, the main class relationships for L3, FWaaS and VPNaaS are
  shown in the attached file (l3_fw_vpn_class_relation.txt).

  * When launching l3-agent without FWaaS, the L3NATAgentWithStateReport
  class is initialized.
  * When launching l3-agent with FWaaS, the L3WithFWaaS class is initialized.
  This is achieved by the following commit.
  https://github.com/openstack/neutron-fwaas/commit/debc3595599ed6cd52caf6e04f083af9c93f6fa4

  * When launching vpn-agent with or without FWaaS, the VPNAgent class is
    initialized. In this case, the L3WithFWaaS class is not initialized
    even though FWaaS is enabled. Thus, FWaaS won't be available when
    using both FWaaS and VPNaaS.

  Here is the vpn-agent log from when the agent receives an RPC request
  about a firewall from neutron.
  ===
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server [-] Exception 
during message handling
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
133, in _process_incoming
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
155, in dispatch
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server raise 
NoSuchMethod(method)
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server NoSuchMethod: 
Endpoint does not support RPC method delete_firewall
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628996] [NEW] subnet service types not working with sqlite3 version 3.7.17

2016-09-29 Thread venkata anil
Public bug reported:

Subnet service types do not work with sqlite3 version 3.7.17, but they
work with sqlite3 version 3.8.0 and above.

Because of this, the subnet service type unit tests fail with sqlite3
version 3.7.17.

Captured traceback:
~~~
Traceback (most recent call last):
  File "neutron/tests/base.py", line 125, in func
return f(self, *args, **kwargs)
  File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
245, in test_create_port_no_device_owner_no_fallback
self.test_create_port_no_device_owner(fallback=False)
  File "neutron/tests/base.py", line 125, in func
return f(self, *args, **kwargs)
  File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
242, in test_create_port_no_device_owner
self._assert_port_res(port, '', subnet, fallback)
  File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
173, in _assert_port_res
self.assertEqual(error, res['NeutronError']['type'])
KeyError: 'NeutronError'

_query_filter_service_subnets [1] behaves differently under 3.7.17 and
3.8.0 for these tests.
[1] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L597

I have seen this on a CentOS 7 setup, which by default uses sqlite3
version 3.7.17.
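Until the query is reworked, one defensive option (an illustrative workaround, not the fix adopted by neutron) is to skip the affected tests when the interpreter is linked against an older SQLite:

```python
# Illustrative guard: skip tests whose SQL relies on behaviour that
# changed in SQLite 3.8.0 when the runtime library is older, e.g. the
# 3.7.17 shipped with CentOS 7. Not the fix adopted by neutron.
import sqlite3
import unittest


def requires_sqlite(minimum=(3, 8, 0)):
    return unittest.skipIf(
        sqlite3.sqlite_version_info < minimum,
        'needs SQLite >= %s, found %s'
        % ('.'.join(map(str, minimum)), sqlite3.sqlite_version))


@requires_sqlite()
class SubnetServiceTypeSketchTest(unittest.TestCase):
    def test_create_port_no_device_owner(self):
        self.assertTrue(True)  # placeholder body for the sketch
```

`sqlite3.sqlite_version_info` reports the linked library version (not the Python module version), which is exactly what differs between the two environments described above.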

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628996

Title:
  subnet service types not working with sqlite3 version 3.7.17

Status in neutron:
  New

Bug description:
  Subnet service types do not work with sqlite3 version 3.7.17, but they
  work with sqlite3 version 3.8.0 and above.

  Because of this, the subnet service type unit tests fail with sqlite3
  version 3.7.17.

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
245, in test_create_port_no_device_owner_no_fallback
  self.test_create_port_no_device_owner(fallback=False)
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
242, in test_create_port_no_device_owner
  self._assert_port_res(port, '', subnet, fallback)
File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
173, in _assert_port_res
  self.assertEqual(error, res['NeutronError']['type'])
  KeyError: 'NeutronError'

  _query_filter_service_subnets [1] behaves differently under 3.7.17 and
  3.8.0 for these tests.
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L597

  I have seen this on a CentOS 7 setup, which by default uses sqlite3
  version 3.7.17.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623168] Re: referencing versionutils.deprecated.NEWTON in oslo.log <3.4.0

2016-09-29 Thread Steve Martinelli
Marking this as fix-released for keystone on the ocata branch; we depend
on oslo.log>=3.11.0:
https://github.com/openstack/keystone/blob/master/requirements.txt

** Changed in: keystone
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1623168

Title:
  referencing versionutils.deprecated.NEWTON in oslo.log <3.4.0

Status in Cinder:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  cinder/keymgr/__init__.py contains:

  versionutils.deprecation_warning(deprecated, versionutils.NEWTON,
   in_favor_of=castellan, logger=LOG)

  versionutils.NEWTON does not exist until oslo.log 3.4.0, but Cinder
  Newton only requires oslo.log>=1.14.0.

  It is too late in Newton to bump the global requirements for a newer
  oslo.log.   ( https://review.openstack.org/#/c/366418/ )

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1623168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558772] Re: Magic-Search shouldn't exist inside of table structure

2016-09-29 Thread Travis Tripp
** Changed in: searchlight
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558772

Title:
  Magic-Search shouldn't exist inside of table structure

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum UI:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released

Bug description:
  Currently, the way the Angular Magic-Search directive works, it
  requires being placed in the context of a smart-table.  This is not
  ideal and causes trouble with formatting.

  A good solution would allow the search bar directive to be placed
  outside of the table structure in the markup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629040] [NEW] Incorrect hyper-v driver capability

2016-09-29 Thread Lucian Petrut
Public bug reported:

The Hyper-V driver incorrectly enables the
'supports_migrate_to_same_host' capability.

This capability seems to have been introduced with the VMware cluster
architecture in mind, but it leads to unintended behavior in the case
of the Hyper-V driver.

For this reason, the Hyper-V CI is failing on the test_cold_migration
tempest test, which asserts that the host has changed.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Lucian Petrut (petrutlucian94)
 Status: In Progress


** Tags: drivers hyper-v

** Description changed:

  The Hyper-V driver incorrectly enables the
- 'supports_migrate_to_same_host' capability. This capability seems to
- have been introduced having the VMWare cluster architecture in mind, but
- it leads to unintended behavior in case of the HyperV driver.
+ 'supports_migrate_to_same_host' capability.
+ 
+ This capability seems to have been introduced having the VMWare cluster
+ architecture in mind, but it leads to unintended behavior in case of the
+ HyperV driver.
  
  For this reason, the Hyper-V CI is failing on the test_cold_migration
  tempest test, which asserts that the host has changed.

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629040

Title:
  Incorrect hyper-v driver capability

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The Hyper-V driver incorrectly enables the
  'supports_migrate_to_same_host' capability.

  This capability seems to have been introduced with the VMware
  cluster architecture in mind, but it leads to unintended behavior in
  the case of the Hyper-V driver.

  For this reason, the Hyper-V CI is failing on the test_cold_migration
  tempest test, which asserts that the host has changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1629040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629047] [NEW] Warnings about parameters to load being deprecated pollute functional test output

2016-09-29 Thread Jay Pipes
Public bug reported:

I'm seeing one of these per testr process in the functional test output:

Captured stderr:


/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22:
 DeprecationWarning: Parameters to load are deprecated.  Call .resolve and 
.require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)

Would be great to have those cleaned up in paste.deploy.loadwsgi.

** Affects: nova
 Importance: Low
 Status: Confirmed


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629047

Title:
  Warnings about parameters to load being deprecated pollute functional
  test output

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  I'm seeing one of these per testr process in the functional test
  output:

  Captured stderr:
  
  
/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22:
 DeprecationWarning: Parameters to load are deprecated.  Call .resolve and 
.require separately.
return pkg_resources.EntryPoint.parse("x=" + s).load(False)

  Would be great to have those cleaned up in paste.deploy.loadwsgi.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1629047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629046] [NEW] Errors about multiple networks pollutes functional test output

2016-09-29 Thread Jay Pipes
Public bug reported:

Running the Nova functional tests locally, I consistently get around a
dozen or so of these in the output:

Captured stderr:

Traceback (most recent call last):
  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 457, in fire_timers
timer()
  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/hubs/timer.py",
 line 58, in __call__
cb(*args, **kw)
  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 214, in main
result = function(*args, **kwargs)
  File "nova/utils.py", line 1066, in context_wrapper
return func(*args, **kwargs)
  File "nova/compute/manager.py", line 1414, in _allocate_network_async
six.reraise(*exc_info)
  File "nova/compute/manager.py", line 1397, in _allocate_network_async
bind_host_id=bind_host_id)
  File "nova/network/neutronv2/api.py", line 844, in allocate_for_instance
context, instance, neutron, requested_networks, ordered_networks)
  File "nova/network/neutronv2/api.py", line 730, in 
_validate_requested_network_ids
raise exception.NetworkAmbiguous(msg)
NetworkAmbiguous: Multiple possible networks found, use a Network ID to be 
more specific.

They don't affect the result of the test, so it's nothing more than an
annoyance, but would be nice to fix.

** Affects: nova
 Importance: Low
 Status: Confirmed


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629046

Title:
  Errors about multiple networks pollutes functional test output

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Running the Nova functional tests locally, I consistently get around a
  dozen or so of these in the output:

  Captured stderr:
  
  Traceback (most recent call last):
File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 457, in fire_timers
  timer()
File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/hubs/timer.py",
 line 58, in __call__
  cb(*args, **kw)
File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 214, in main
  result = function(*args, **kwargs)
File "nova/utils.py", line 1066, in context_wrapper
  return func(*args, **kwargs)
File "nova/compute/manager.py", line 1414, in _allocate_network_async
  six.reraise(*exc_info)
File "nova/compute/manager.py", line 1397, in _allocate_network_async
  bind_host_id=bind_host_id)
File "nova/network/neutronv2/api.py", line 844, in allocate_for_instance
  context, instance, neutron, requested_networks, ordered_networks)
File "nova/network/neutronv2/api.py", line 730, in 
_validate_requested_network_ids
  raise exception.NetworkAmbiguous(msg)
  NetworkAmbiguous: Multiple possible networks found, use a Network ID to 
be more specific.

  They don't affect the result of the test, so it's nothing more than an
  annoyance, but would be nice to fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1629046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621615] Re: network not configured when ipv6 netbooted into cloud-init

2016-09-29 Thread Brian Murray
As far as I can tell this doesn't seem to be fixed in cloud-initramfs-
tools for Yakkety.  Am I missing something?

** Also affects: cloud-initramfs-tools (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621615

Title:
  network not configured when ipv6 netbooted into cloud-init

Status in cloud-init:
  In Progress
Status in MAAS:
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-initramfs-tools package in Ubuntu:
  New
Status in cloud-init source package in Xenial:
  Fix Committed

Bug description:
  https://bugs.launchpad.net/ubuntu/+source/klibc/+bug/1621507 talks of
  how IPv6 netboot with iscsi root disk doesn't work, blocking IPv6-only
  MAAS.

  After I hand-walked busybox through getting an IPv6 address,
  everything worked just fine until cloud-init couldn't fetch the
  instance data, because it insisted on bringing up the interface in
  IPv4, and there is no IPv4 DHCP on that vlan.

  Please work with initramfs and friends on getting IPv6 netboot to
  actually configure the interface.  This may be as simple as teaching
  it about "inet6 dhcp" interfaces, and bolting the pieces together.
  Note that "use radvd" is not really an option for our use case.

  Related bugs:
   * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 
addresses

  [Impact]

  It is not possible to enlist, commission, or deploy with MAAS in an
  IPv6-only environment. Anyone wanting to netboot with a network root
  filesystem in an IPv6-only environment is affected.

  This upload addresses this by accepting, using, and forwarding any
  IPV6* variables from the initramfs boot.  (See
  https://launchpad.net/bugs/1621507)

  [Test Case]

  See Bug 1229458. Configure radvd, dhcpd, and tftpd for your IPv6-only
  netbooting world. Pass the boot process an IPv6 address to fetch
  instance-data from, and see it fail to configure the network.

  [Regression Potential]

  1) If the booting host is in a dual-stack environment, and the
  instance-data URL uses a hostname that has both A and AAAA RRsets, the
  booting host may try to talk IPv6 to get instance data.  If the
  instance-data providing host only allows that to happen over IPv4, it
  will fail. (It also represents a configuration issue on the providing
  host...)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629047] Re: Warnings about parameters to load being deprecated pollute functional test output

2016-09-29 Thread Jay Pipes
Seems there's an issue logged on BitBucket:

https://bitbucket.org/ianb/pastedeploy/issues/20/loadwsgi-should-
account-for-entrypointload

I'd submit a pull request to fix this crap but BitBucket won't let me
sign in with my Google account properly...

-jay

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629047

Title:
  Warnings about parameters to load being deprecated pollute functional
  test output

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I'm seeing one of these per testr process in the functional test
  output:

  Captured stderr:
  
  
/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22:
 DeprecationWarning: Parameters to load are deprecated.  Call .resolve and 
.require separately.
return pkg_resources.EntryPoint.parse("x=" + s).load(False)

  Would be great to have those cleaned up in paste.deploy.loadwsgi.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1629047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629066] [NEW] RFE Optionally bind load balancer instance to multiple IPs to increase available (source IP, source port) space to support > 64k connections to a single backend

2016-09-29 Thread Dustin Lundquist
Public bug reported:

This limitation arose while testing Neutron LBaaS using the HAProxy
namespace driver, but applies to other proxying type backends including
Octavia. A single load balancer instance (network namespace, or amphora)
can only establish as many concurrent TCP connections to a single pool
member as there are available distinct source IP, source TCP port
combinations on the load balancing instance (network namespace or
amphora). The source TCP port range is limited by the configured
ephemeral port range, but this can be tuned to include all the
unprivileged TCP ports (1024 - 65535) via sysctl. The available source
addresses are limited to IP addresses bound to the instance, for the
load balancing instance must be able to receive the response from the
pool member.

In short the total number of concurrent TCP connections to any single
backend is limited to 64k times the number of available source IP
addresses. This is because each TCP connection is identified by the
4-tuple: (src-ip, src-port, dst-ip, dst-port) and (dst-ip, dst-port) is
used to define a specific pool member. TCP ports are limited by the
16bit field in the TCP protocol definition. In order to further increase
the number of possible connections from a load balancing instance to a
single backend we must increase this tuple space by increasing the
number of available source IP addresses.

Therefore, I propose we offer an option to attach multiple fixed-ips in
the same subnet to the Neutron port of the load balancing instance
facing the pool member. This would increase the tuple space allowing
more than 64k concurrent connections to a single backend.
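
The 4-tuple arithmetic above can be sketched in a few lines of Python
(illustrative numbers only; the real usable port range depends on the host's
ephemeral-port sysctl):

```python
# Rough capacity estimate for concurrent TCP connections from a load
# balancing instance to a single pool member (hypothetical numbers).
EPHEMERAL_PORTS = 65535 - 1024 + 1   # all unprivileged TCP ports, per sysctl

def max_connections(num_source_ips, ports_per_ip=EPHEMERAL_PORTS):
    """Each (src-ip, src-port) pair carries one connection to a fixed
    (dst-ip, dst-port), so capacity scales linearly with source IPs."""
    return num_source_ips * ports_per_ip

# One fixed IP caps out near 64k; attaching more fixed-ips multiplies it.
print(max_connections(1))   # 64512
print(max_connections(4))   # 258048
```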

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629066

Title:
  RFE Optionally bind load balancer instance to multiple IPs to increase
  available (source IP, source port) space to support > 64k connections
  to a single backend

Status in neutron:
  New

Bug description:
  This limitation arose while testing Neutron LBaaS using the HAProxy
  namespace driver, but applies to other proxying type backends
  including Octavia. A single load balancer instance (network namespace,
  or amphora) can only establish as many concurrent TCP connections to a
  single pool member as there are available distinct source IP, source
  TCP port combinations on the load balancing instance (network
  namespace or amphora). The source TCP port range is limited by the
  configured ephemeral port range, but this can be tuned to include all
  the unprivileged TCP ports (1024 - 65535) via sysctl. The available
  source addresses are limited to IP addresses bound to the instance,
  for the load balancing instance must be able to receive the response
  from the pool member.

  In short the total number of concurrent TCP connections to any single
  backend is limited to 64k times the number of available source IP
  addresses. This is because each TCP connection is identified by the
  4-tuple: (src-ip, src-port, dst-ip, dst-port) and (dst-ip, dst-port)
  is used to define a specific pool member. TCP ports are limited by the
  16bit field in the TCP protocol definition. In order to further
  increase the number of possible connections from a load balancing
  instance to a single backend we must increase this tuple space by
  increasing the number of available source IP addresses.

  Therefore, I propose we offer an option to attach multiple fixed-ips
  in the same subnet to the Neutron port of the load balancing instance
  facing the pool member. This would increase the tuple space allowing
  more than 64k concurrent connections to a single backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1629066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629066] Re: RFE Optionally bind load balancer instance to multiple IPs to increase available (source IP, source port) space to support > 64k connections to a single backend

2016-09-29 Thread Michael Johnson
** Also affects: octavia
   Importance: Undecided
   Status: New

** Changed in: octavia
   Status: New => Triaged

** Changed in: octavia
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629066

Title:
  RFE Optionally bind load balancer instance to multiple IPs to increase
  available (source IP, source port) space to support > 64k connections
  to a single backend

Status in neutron:
  New
Status in octavia:
  Triaged

Bug description:
  This limitation arose while testing Neutron LBaaS using the HAProxy
  namespace driver, but applies to other proxying type backends
  including Octavia. A single load balancer instance (network namespace,
  or amphora) can only establish as many concurrent TCP connections to a
  single pool member as there are available distinct source IP, source
  TCP port combinations on the load balancing instance (network
  namespace or amphora). The source TCP port range is limited by the
  configured ephemeral port range, but this can be tuned to include all
  the unprivileged TCP ports (1024 - 65535) via sysctl. The available
  source addresses are limited to IP addresses bound to the instance,
  for the load balancing instance must be able to receive the response
  from the pool member.

  In short the total number of concurrent TCP connections to any single
  backend is limited to 64k times the number of available source IP
  addresses. This is because each TCP connection is identified by the
  4-tuple: (src-ip, src-port, dst-ip, dst-port) and (dst-ip, dst-port)
  is used to define a specific pool member. TCP ports are limited by the
  16bit field in the TCP protocol definition. In order to further
  increase the number of possible connections from a load balancing
  instance to a single backend we must increase this tuple space by
  increasing the number of available source IP addresses.

  Therefore, I propose we offer an option to attach multiple fixed-ips
  in the same subnet to the Neutron port of the load balancing instance
  facing the pool member. This would increase the tuple space allowing
  more than 64k concurrent connections to a single backend.

  While this limitation could be addressed by increasing the number of
  listening TCP ports on the pool member and adding additional members
  with the same IP address and different TCP ports, not all applications
  are suitable to this modification.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1629066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413610] Re: Nova volume-update leaves volumes stuck in attaching/detaching

2016-09-29 Thread Sean McGinnis
** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413610

Title:
  Nova volume-update leaves volumes stuck in attaching/detaching

Status in OpenStack Compute (nova):
  Expired

Bug description:
  There is a problem with the nova command 'volume-update' that leaves
  cinder volumes in the states 'attaching' and 'detaching'.

  If the nova command 'volume-update' is used by a non-admin user, the
  command fails and the volumes referenced in the command are left in
  the states 'attaching' and 'detaching'.

  
  For example, if a non-admin user runs the command
   $ nova volume-update d39dc7f2-929d-49bb-b22f-56adb3f378c7 
f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b 59b0cf66-67c8-4041-a505-78000b9c71f6

   Will result in the two volumes stuck like this:

   $ cinder list
   
+--+---+--+--+-+--+--+
   |  ID  |   Status  | Display Name | Size | 
Volume Type | Bootable | Attached to  |
   
+--+---+--+--+-+--+--+
   | 59b0cf66-67c8-4041-a505-78000b9c71f6 | attaching | vol2 |  1   |   
  None|  false   |  |
   | f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b | detaching | vol1 |  1   |   
  None|  false   | d39dc7f2-929d-49bb-b22f-56adb3f378c7 |
   
+--+---+--+--+-+--+--+

  
  And the following in the cinder-api log:

  
  2015-01-21 11:00:03.969 13588 DEBUG keystonemiddleware.auth_token [-] 
Received request from user: user_id None, project_id None, roles None service: 
user_id None, project_id None, roles None __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:746
  2015-01-21 11:00:03.970 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Matched POST 
/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:100
  2015-01-21 11:00:03.971 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Route path: 
'/{project_id}/volumes/:(id)/action', defaults: {'action': u'action', 
'controller': } 
__call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:102
  2015-01-21 11:00:03.971 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Match dict: {'action': u'action', 
'controller': , 
'project_id': u'd40e3207e34a4b558bf2d58bd3fe268a', 'id': 
u'f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b'} __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:103
  2015-01-21 11:00:03.972 13588 INFO cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] POST 
http://192.0.2.24:8776/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
  2015-01-21 11:00:03.972 13588 DEBUG cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Action body: 
{"os-migrate_volume_completion": {"new_volume": 
"59b0cf66-67c8-4041-a505-78000b9c71f6", "error": false}} get_method 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:1010
  2015-01-21 11:00:03.973 13588 INFO cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] 
http://192.0.2.24:8776/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 returned with HTTP 403
  2015-01-21 11:00:03.975 13588 INFO eventlet.wsgi.server 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] 127.0.0.1 - - [21/Jan/2015 11:00:03] 
"POST 
/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 HTTP/1.1" 403 429 0.123613


  
  The problem is that the nova policy.json file allows a non-admin user to run 
the command 'volume-update', but the cinder policy.json file requires the admin 
role to run the action os-migrate_volume_completion, which is called by nova as 
part of the 'update-volume' process.

  The operation will complete successfully if it is performed

[Yahoo-eng-team] [Bug 1591240] Re: progress_watermark is not updated

2016-09-29 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => Fix Committed

** Changed in: nova/mitaka
   Importance: Undecided => Medium

** Changed in: nova/mitaka
 Assignee: (unassigned) => Gaudenz Steinlin (gaudenz-debian)

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/liberty
 Assignee: (unassigned) => Gaudenz Steinlin (gaudenz-debian)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591240

Title:
  progress_watermark is not updated

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed

Bug description:
  During the live migration process the progress_watermark/progress_time
  are not being (re)updated with the new progress made by the live
  migration at the "_live_migration_monitor" function
  (virt/libvirt/driver.py).

  More specifically, in these lines of code:
  if ((progress_watermark is None) or
  (progress_watermark > info.data_remaining)):
  progress_watermark = info.data_remaining
  progress_time = now

  
  It may happen that the first time the block is entered (progress_watermark is 
None), info.data_remaining is still 0, so progress_watermark is set to 0. This 
prevents entering the "if" block in future iterations (as progress_watermark=0 
is never greater than info.data_remaining), so neither progress_watermark nor 
progress_time is updated from that point on. 

  This may lead to (unneeded) abort migrations due to progress_time not
  being updated, making (now - progress_time) > progress_timeout.

  It can be fixed just by modifying the if clause to be like:
  if ((progress_watermark is None) or
  (progress_watermark == 0) or
  (progress_watermark > info.data_remaining)):
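
  The failure mode and the proposed fix can be simulated outside of nova (a
  minimal sketch, not the driver's actual code):

```python
# Minimal simulation of the stall-detection logic described above,
# showing why an initial data_remaining of 0 freezes progress_time.
def update(progress_watermark, progress_time, data_remaining, now,
           include_zero_fix=True):
    if (progress_watermark is None or
            (include_zero_fix and progress_watermark == 0) or
            progress_watermark > data_remaining):
        progress_watermark = data_remaining
        progress_time = now
    return progress_watermark, progress_time

# Without the fix: the watermark sticks at 0 and progress_time never advances.
wm, t = update(None, None, 0, 1, include_zero_fix=False)
wm, t = update(wm, t, 500, 2, include_zero_fix=False)
assert (wm, t) == (0, 1)

# With the fix: the zero watermark is re-seeded on the next iteration.
wm, t = update(None, None, 0, 1)
wm, t = update(wm, t, 500, 2)
assert (wm, t) == (500, 2)
```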

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538896] Re: nova flavor-show raises 500 error if flavor is deleted

2016-09-29 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/liberty
 Assignee: (unassigned) => Nicolas Simonds (nicolas.simonds)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538896

Title:
  nova flavor-show  raises 500 error if flavor is deleted

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress

Bug description:
  Reproducible on latest master.

  Trying to show a deleted flavor raises a 500 error.

  Steps to reproduce:

  1. Create flavor
  $ nova flavor-create m2.test 10 1200 12 8

  2. Delete flavor
  $ nova flavor-delete 10

  3. Show deleted flavor
  $ nova flavor-show 10

  ERROR (ClientException): Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.  (HTTP 500) (Request-ID:
  req-4f77f74c-2234-4beb-9875-157364bdd964)

  Nova api logs:
  -

   from (pid=31454) _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:254
  2016-01-27 15:28:14.067 DEBUG nova.api.openstack.wsgi 
[req-15b4f90e-332b-4e96-b839-8cd931605e42 admin demo] Calling method '>' from (pid=31454) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:699
  2016-01-27 15:28:14.082 ERROR nova.api.openstack.extensions 
[req-15b4f90e-332b-4e96-b839-8cd931605e42 admin demo] Unexpected exception in 
API method
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/flavors_extraspecs.py", line 60, in 
index
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions return 
self._get_extra_specs(context, flavor_id)
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/flavors_extraspecs.py", line 39, in 
_get_extra_specs
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions flavor = 
common.get_flavor(context, flavor_id)
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/common.py", line 543, in get_flavor
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions raise 
exc.HTTPNotFound(explanation=error.format_message())
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions HTTPNotFound: 
Flavor 122 could not be found.
  2016-01-27 15:28:14.082 TRACE nova.api.openstack.extensions
  2016-01-27 15:28:14.085 INFO nova.api.openstack.wsgi 
[req-15b4f90e-332b-4e96-b839-8cd931605e42 admin demo] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
  
  2016-01-27 15:28:14.086 DEBUG nova.api.openstack.wsgi 
[req-15b4f90e-332b-4e96-b839-8cd931605e42 admin demo] Returning 500 to user: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
   from (pid=31454) __call__ 
/opt/stack/nova/nova/api/openstack/wsgi.py:1070
  2016-01-27 15:28:14.097 INFO nova.osapi_compute.wsgi.server 
[req-15b4f90e-332b-4e96-b839-8cd931605e42 admin demo] 10.69.4.177 "GET 
/v2.1/982e2fc081364cf889d959b27d4bd510/flavors/122/os-extra_specs HTTP/1.1" 
status: 500 len: 499 time: 0.1559708

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612466] Re: aggregate object file does not define LOG error in Liberty

2016-09-29 Thread Matt Riedemann
This is not a bug in upstream code, at least what you're patching.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1612466

Title:
  aggregate object file does not define LOG error in Liberty

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  the error output in nova-api.log is :
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions 
[req-56dda11e-3041-4fac-8342-bb643643a1c7 e88120bc348c4f3ca37207ef4bcd3b90 
43b2137632ac4ad8b2
  df8c0d27f13fb8 - - -] Unexpected exception in API method
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrap
  ped
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrappe
  r
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/aggregates.py", 
line 169,
   in _remove_host
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in wrapped
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions 
six.reraise(self.type_, self.value, self.tb)
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in wrapped
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3908, in 
remove_host_from
  _aggregate
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 213, in 
wrapper
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions return 
fn(self, *args, **kwargs)
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/objects/aggregate.py", line 165, in 
delete_host
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/objects/aggregate.py", line 64, in 
update_aggre
  gate_for_instances
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions NameError: 
global name 'LOG' is not defined
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions
  2016-08-09 14:50:19.148 4532 INFO nova.api.openstack.wsgi 
[req-56dda11e-3041-4fac-8342-bb643643a1c7 e88120bc348c4f3ca37207ef4bcd3b90 
43b2137632ac4ad8b2df8c0d27f13fb8 - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.
  

  So I found that in nova/objects/aggregate.py the function
  update_aggregate_for_instances uses LOG.exception to log a message
  to nova-api.log when instance.save raises an exception, but this
  module does not define LOG, so it reports an Unexpected API Error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1612466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629097] Re: ovsdb-client processes not getting cleaned up

2016-09-29 Thread Corey Bryant
** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: openvswitch (Ubuntu)

** Summary changed:

- ovsdb-client processes not getting cleaned up
+ neutron-rootwrap processes not getting cleaned up

** Description changed:

- This can be recreated with the openstack charms using xenial-newton-staging.  
On newton deploys, neutron-gateway and nova-compute units will exhaust memory 
due to compounding ovsdb-client processes:
+ neutron-rootwrap processes aren't getting cleaned up on Newton.  I'm
+ testing with Newton rc3.
+ 
+ I was noticing memory exhaustion on my neutron gateway units, which turned 
out to be due to compounding neutron-rootwrap processes:
  sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client 
monitor Interface name,ofport,external_ids --format=json
  
  $ top -n1 -b -o VIRT
  http://paste.ubuntu.com/23252407/
  
  $ ps aux|grep ovsdb-client
  http://paste.ubuntu.com/23252658/
  
  Restarting openvswitch cleans up the processes but they just start piling up 
again soon after:
  sudo systemctl restart openvswitch-switch
+ 
+ At first I thought this was an openvswitch issue, however I reverted the
+ code in get_root_helper_child_pid() and neutron-rootwrap processes
+ started getting cleaned up. See corresponding commit at [1].
+ 
+ This can be recreated with the openstack charms using xenial-newton-
+ staging.  On newton deploys, neutron-gateway and nova-compute units will
+ exhaust memory due to compounding ovsdb-client processes.
+ 
+ [1]
+ commit fd93e19f2a415b3803700fc491749daba01a4390
+ Author: Assaf Muller 
+ Date:   Fri Mar 18 16:29:26 2016 -0400
+ 
+ Change get_root_helper_child_pid to stop when it finds cmd
+ 
+ get_root_helper_child_pid recursively finds the child of pid,
+ until it can no longer find a child. However, the intention is
+ not to find the deepest child, but to strip away root helpers.
+ For example 'sudo neutron-rootwrap x' is supposed to find the
+ pid of x. However, in cases 'x' spawned quick lived children of
+ its own (For example: ip / brctl / ovs invocations),
+ get_root_helper_child_pid returned those pids if called in
+ the wrong time.
+ 
+ Change-Id: I582aa5c931c8bfe57f49df6899445698270bb33e
+ Closes-Bug: #1558819
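
For illustration, the difference between the pre- and post-fix pid resolution
described in [1] can be sketched as follows (hypothetical helper names,
assuming a single child per process; not neutron's actual implementation):

```python
# Illustrative sketch of the two strategies for resolving the real pid
# of a process launched under root helpers like sudo/neutron-rootwrap.
def deepest_child(pid, children):
    """Pre-fix behavior: descend until no child exists.  If the target
    process has spawned a short-lived child of its own (an ip/brctl/ovs
    invocation), this returns that grandchild's pid by mistake."""
    while children.get(pid):
        pid = children[pid]
    return pid

def child_matching_cmd(pid, children, cmdline, cmd):
    """Post-fix behavior: stop as soon as the command line matches the
    process we actually launched, stripping only the helper layers."""
    while children.get(pid) and cmd not in cmdline.get(pid, ""):
        pid = children[pid]
    return pid

# sudo(100) -> neutron-rootwrap(101) -> ovsdb-client(102) -> helper(103)
children = {100: 101, 101: 102, 102: 103}
cmdline = {100: "sudo", 101: "neutron-rootwrap", 102: "ovsdb-client monitor"}
assert deepest_child(100, children) == 103          # wrong process
assert child_matching_cmd(100, children, cmdline, "ovsdb-client") == 102
```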

** Description changed:

  neutron-rootwrap processes aren't getting cleaned up on Newton.  I'm
  testing with Newton rc3.
  
  I was noticing memory exhaustion on my neutron gateway units, which turned 
out to be due to compounding neutron-rootwrap processes:
  sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client 
monitor Interface name,ofport,external_ids --format=json
  
  $ top -n1 -b -o VIRT
  http://paste.ubuntu.com/23252407/
  
  $ ps aux|grep ovsdb-client
  http://paste.ubuntu.com/23252658/
  
  Restarting openvswitch cleans up the processes but they just start piling up 
again soon after:
  sudo systemctl restart openvswitch-switch
  
  At first I thought this was an openvswitch issue, however I reverted the
  code in get_root_helper_child_pid() and neutron-rootwrap processes
- started getting cleaned up. See corresponding commit at [1].
+ started getting cleaned up. See corresponding commit for code that
+ possibly introduced this at [1].
  
  This can be recreated with the openstack charms using xenial-newton-
  staging.  On newton deploys, neutron-gateway and nova-compute units will
  exhaust memory due to compounding ovsdb-client processes.
  
  [1]
  commit fd93e19f2a415b3803700fc491749daba01a4390
  Author: Assaf Muller 
  Date:   Fri Mar 18 16:29:26 2016 -0400
  
- Change get_root_helper_child_pid to stop when it finds cmd
- 
- get_root_helper_child_pid recursively finds the child of pid,
- until it can no longer find a child. However, the intention is
- not to find the deepest child, but to strip away root helpers.
- For example 'sudo neutron-rootwrap x' is supposed to find the
- pid of x. However, in cases 'x' spawned quick lived children of
- its own (For example: ip / brctl / ovs invocations),
- get_root_helper_child_pid returned those pids if called in
- the wrong time.
- 
- Change-Id: I582aa5c931c8bfe57f49df6899445698270bb33e
- Closes-Bug: #1558819
+ Change get_root_helper_child_pid to stop when it finds cmd
+ 
+ get_root_helper_child_pid recursively finds the child of pid,
+ until it can no longer find a child. However, the intention is
+ not to find the deepest child, but to strip away root helpers.
+ For example 'sudo neutron-rootwrap x' is supposed to find the
+ pid of x. However, in cases 'x' spawned quick lived children of
+ its own (For example: ip / brctl / ovs invocations),
+ get_root_helper_child_pid returned those pids if called in
+ the wrong time.
+ 
+ Change-Id: I582aa5c931c8bfe57f49df6899445698270bb33e
+ Closes-Bug: #1558819

-- 
You received this bug notification because you are a member of Yaho

[Yahoo-eng-team] [Bug 1628437] Re: (hyper-v) resize instance failed if remove configdrive.iso

2016-09-29 Thread Matt Riedemann
*** This bug is a duplicate of bug 1619602 ***
https://bugs.launchpad.net/bugs/1619602

Sounds like a duplicate of bug 1619602 which was fixed with
https://review.openstack.org/#/c/364829/ - please confirm that fixes
your issue.

** This bug has been marked a duplicate of bug 1619602
   Hyper-V: vhd config drive images are not migrated

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1628437

Title:
  (hyper-v) resize instance failed if remove configdrive.iso

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  After using configdrive to inject the IP, hostname, and password of the
  instance, I removed the configdrive.iso file from the CD-ROM. When I
  resize the instance, it reports "config drive is required by
  instance: ***, but it does not exist".

  openstack use hypervisor which is hyper-V.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1628437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619602] Re: Hyper-V: vhd config drive images are not migrated

2016-09-29 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619602

Title:
  Hyper-V: vhd config drive images are not migrated

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress

Bug description:
  During cold migration, vhd config drive images are not copied over, on
  the wrong assumption that the instance is already configured and does
  not need the config drive.

  There is an explicit check at the following location:
  
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L75-L76

  For this reason, migrating instances using vhd config drives will fail, as 
there is a check ensuring that the config drive is present, if required:
  
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L153-L163

  The Hyper-V driver should not skip moving the config drive image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1619602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629110] [NEW] update-volume API should return BadRequest response if specified volume with request body is not existent

2016-09-29 Thread Ken'ichi Ohmichi
Public bug reported:

As the following part of the API-WG guideline[1] says,

  If a request contains a reference to a nonexistent resource in the
  body (not URI), the code should be 400 Bad Request. Do not use 404
  NotFound because :rfc:`7231#section-6.5.4` (section 6.5.4) mentions
  the origin server did not find a current representation for the
  target resource for 404 and representation for the target resource
  means a URI

Nova should return a BadRequest response if the resource specified in the 
request body does not exist.
However, the update-volume API returns a NotFound response instead.
That is a common mistake in REST APIs, and we need to change it to BadRequest.

[1]: https://github.com/openstack/api-wg/blob/master/guidelines/http.rst
#failure-code-clarifications
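
A minimal sketch of the guideline's distinction (a hypothetical helper, not
nova code):

```python
# Per the guideline: a resource that is missing from the URI path maps to
# 404 Not Found, while a resource referenced only in the request body maps
# to 400 Bad Request.
def status_for_missing(resource_location):
    """resource_location is 'uri' or 'body'."""
    return 404 if resource_location == "uri" else 400

# PUT /servers/{id}/os-volume_attachments/{vol_id} with a nonexistent
# new volume id in the body should therefore yield 400, not 404.
assert status_for_missing("uri") == 404
assert status_for_missing("body") == 400
```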

** Affects: nova
 Importance: Undecided
 Assignee: Ken'ichi Ohmichi (oomichi)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Ken'ichi Ohmichi (oomichi)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629110

Title:
  update-volume API should return BadRequest response if specified
  volume with request body is not existent

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  As the following part of the API-WG guideline[1] says,

If a request contains a reference to a nonexistent resource in the
body (not URI), the code should be 400 Bad Request. Do not use 404
NotFound because :rfc:`7231#section-6.5.4` (section 6.5.4) mentions
the origin server did not find a current representation for the
target resource for 404 and representation for the target resource
means a URI

  Nova should return a BadRequest response if a resource specified in the
request body does not exist.
  However, the update-volume API returns a NotFound response instead.
  That is a common mistake in REST APIs and we need to change it to BadRequest.

  [1]: https://github.com/openstack/api-
  wg/blob/master/guidelines/http.rst#failure-code-clarifications

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1629110/+subscriptions



[Yahoo-eng-team] [Bug 1629114] [NEW] Ceph RBD live-migration failure due to wrong rbd_user/rbd_secret

2016-09-29 Thread Bert JW Regeer
Public bug reported:

Description:

Ceph RBD live-migration fails to modify the rbd_user/libvirt secret UUID to
the receiving host's information, causing live migration to fail.

Steps to reproduce:

Compute node A:

/etc/nova/nova.conf:
rbd_user=compute_node_A
rbd_secret_uuid = secretA

Secret file:
/etc/libvirt/secrets/secretA.xml

Compute node B:

/etc/nova/nova.conf:
rbd_user=compute_node_B
rbd_secret_uuid = secretB

Secret file:
/etc/libvirt/secret/secretB.xml


Expected result:

Live migration completes

Current result:

Live migration fails because it sets the secret/key/id to the
information from compute_node_A instead of compute_node_B.

Sep 29 18:50:40 compute_node_A nova-compute[175448]: 2016-09-29 18:50:40.613 
175448 ERROR nova.virt.libvirt.driver [req-77ce1a5a-6588-420d-8c77-7b106e4ca3f0 
4c8a770be6c54c23bbf20e8a63803d63 2d98cd4d4fdf43f5b9db5e39846922d8 - - -] 
[instance: b4407d16-8946-45a0-8e58-3a1bf8b0edfc] Live Migration failure: 
internal error: process exited while connecting to monitor: 
2016-09-29T18:50:40.220091Z qemu-system-x86_64: -drive 
file=rbd:nova/b4407d16-8946-45a0-8e58-3a1bf8b0edfc_disk:id=nova-compute-c07:keysomecephkey:auth_supported=cephx\;none:mon_host=[fd2d\:dec4\:cf59\:3c12\:0\:1\:\:]\:6789\;[fd2d\:dec4\:cf59\:3c13\:0\:1\:\:]\:6789\;[fd2d\:dec4\:cf59\:3c14\:0\:1\:\:]\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=none:
 error connecting
Sep 29 18:50:40 compute_node_A nova-compute[175448]: 
2016-09-29T18:50:40.246712Z qemu-system-x86_64: network script /etc/qemu-ifdown 
failed with status 256
Sep 29 18:50:40 compute_node_A nova-compute[175448]: 
2016-09-29T18:50:40.274406Z qemu-system-x86_64: network script /etc/qemu-ifdown 
failed with status 256
Sep 29 18:50:40 compute_node_A nova-compute[175448]: Traceback (most recent 
call last):
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 457, in 
fire_timers
Sep 29 18:50:40 compute_node_A nova-compute[175448]: timer()
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
Sep 29 18:50:40 compute_node_A nova-compute[175448]: cb(*args, **kw)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 168, in _do_send
Sep 29 18:50:40 compute_node_A nova-compute[175448]: waiter.switch(result)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
Sep 29 18:50:40 compute_node_A nova-compute[175448]: result = 
function(*args, **kwargs)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 1145, in context_wrapper
Sep 29 18:50:40 compute_node_A nova-compute[175448]: return func(*args, 
**kwargs)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6104, in 
_live_migration_operation
Sep 29 18:50:40 compute_node_A nova-compute[175448]: instance=instance)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Sep 29 18:50:40 compute_node_A nova-compute[175448]: self.force_reraise()
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Sep 29 18:50:40 compute_node_A nova-compute[175448]: 
six.reraise(self.type_, self.value, self.tb)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6064, in 
_live_migration_operation
Sep 29 18:50:40 compute_node_A nova-compute[175448]: migration_flags)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
Sep 29 18:50:40 compute_node_A nova-compute[175448]: result = 
proxy_call(self._autowrap, f, *args, **kwargs)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
Sep 29 18:50:40 compute_node_A nova-compute[175448]: rv = execute(f, *args, 
**kwargs)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
Sep 29 18:50:40 compute_node_A nova-compute[175448]: six.reraise(c, e, tb)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
Sep 29 18:50:40 compute_node_A nova-compute[175448]: rv = meth(*args, 
**kwargs)
Sep 29 18:50:40 compute_node_A nova-compute[175448]:   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 1833, in migrateToURI3
Sep 29 18:50:40 compute_node_A nova-compute[175448]: if ret == -1: raise 
libvirtError ('virD

[Yahoo-eng-team] [Bug 1628337] Re: cloud-init tries to install NTP before even configuring the archives

2016-09-29 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.8-11-g02f6c4b-
0ubuntu1

---
cloud-init (0.7.8-11-g02f6c4b-0ubuntu1) yakkety; urgency=medium

  * New upstream snapshot.
- lxd: Update network config for LXD 2.3 [Stéphane Graber]
- DigitalOcean: use meta-data for network configuration [Ben Howard]
- ntp: move to run after apt configuration (LP: #1628337)

 -- Scott Moser   Thu, 29 Sep 2016 14:30:15 -0400

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1628337

Title:
  cloud-init tries to install NTP before even configuring the archives

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  cloud-init tries to install NTP package before it actually configures
  /etc/apt/sources.list.

  In a closed MAAS environment where MAAS is limited to accessing
  us.archive.ubuntu.com, cloud-init is trying to access
  archive.ubuntu.com.

  In commissioning, however, cloud-init is doing this:

  
  1. cloud-init gets metadata from MAAS
  2. cloud-init tries to install NTP from archive.ubuntu.com
  3. cloud-init configures /etc/apt/sources.list with us.archive.ubuntu.com
  4. cloud-init installs other packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1628337/+subscriptions



[Yahoo-eng-team] [Bug 1629133] [NEW] New neutron subnet pool support breaks multinode testing.

2016-09-29 Thread Clark Boylan
Public bug reported:

The new subnet pool support in devstack breaks multinode testing because it 
results in the route for 10.0.0.0/8 being pointed via br-ex even when the host 
has interfaces that are actually on 10.x networks, which is where those routes 
need to go out. Multinode testing is affected because it uses these 10.x 
addresses to run the vxlan overlays between hosts.
For many years devstack-gate has set FIXED_RANGE to ensure that the networks 
devstack uses do not interfere with the underlying test host's networking. 
However this setting was completely ignored when setting up the subnet pools.

I think the correct way to fix this is to use a much smaller subnet pool
range to avoid conflicting with every possible 10.0.0.0/8 network in the
wild, possibly by defaulting to the existing FIXED_RANGE information.
Using the existing information will help ensure that anyone with
networks in 10.0.0.0/8 will continue to work if they have specified a
range that doesn't conflict using this variable.

In addition to this we need to clearly document what this new stuff in
devstack does and how people can work around it should they conflict
with the new defaults we end up choosing.

I have proposed https://review.openstack.org/379543 which mostly works
except it fails one tempest test that apparently depends on this new
subnet pool support. Specifically, the v6 range isn't large enough, as I understand it.
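The overlap problem can be sketched with Python's standard `ipaddress` module. This is illustrative only (devstack's real logic is shell, and the function and variable names here are assumptions), but the overlap test is the heart of the proposal: a /8 default pool collides with any host interface on a 10.x network, while a FIXED_RANGE-sized pool need not.

```python
import ipaddress


def choose_subnet_pool(fixed_range, host_networks):
    """Use a FIXED_RANGE-sized pool only if it avoids the host's networks.

    Hypothetical helper for illustration, not devstack code.
    """
    pool = ipaddress.ip_network(fixed_range)
    for net in host_networks:
        if pool.overlaps(ipaddress.ip_network(net)):
            raise ValueError("pool %s conflicts with host network %s"
                             % (pool, net))
    return pool
```

With a host interface on 10.4.0.0/24, a default pool of 10.0.0.0/8 conflicts, while a pool sized like a typical FIXED_RANGE (e.g. 10.1.0.0/20) does not.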

** Affects: devstack
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629133

Title:
  New neutron subnet pool support breaks multinode testing.

Status in devstack:
  New
Status in neutron:
  New

Bug description:
  The new subnet pool support in devstack breaks multinode testing because it 
results in the route for 10.0.0.0/8 being pointed via br-ex even when the host 
has interfaces that are actually on 10.x networks, which is where those routes 
need to go out. Multinode testing is affected because it uses these 10.x 
addresses to run the vxlan overlays between hosts.
  For many years devstack-gate has set FIXED_RANGE to ensure that the networks 
devstack uses do not interfere with the underlying test host's networking. 
However this setting was completely ignored when setting up the subnet pools.

  I think the correct way to fix this is to use a much smaller subnet
  pool range to avoid conflicting with every possible 10.0.0.0/8 network
  in the wild, possibly by defaulting to the existing FIXED_RANGE
  information. Using the existing information will help ensure that
  anyone with networks in 10.0.0.0/8 will continue to work if they have
  specified a range that doesn't conflict using this variable.

  In addition to this we need to clearly document what this new stuff in
  devstack does and how people can work around it should they conflict
  with the new defaults we end up choosing.

  I have proposed https://review.openstack.org/379543 which mostly works
  except it fails one tempest test that apparently depends on this new
  subnet pool support. Specifically, the v6 range isn't large enough, as I understand it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1629133/+subscriptions



[Yahoo-eng-team] [Bug 1629133] Re: New neutron subnet pool support breaks multinode testing.

2016-09-29 Thread John L. Villalovos
** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629133

Title:
  New neutron subnet pool support breaks multinode testing.

Status in devstack:
  New
Status in Ironic:
  New
Status in neutron:
  New

Bug description:
  The new subnet pool support in devstack breaks multinode testing because it 
results in the route for 10.0.0.0/8 being pointed via br-ex even when the host 
has interfaces that are actually on 10.x networks, which is where those routes 
need to go out. Multinode testing is affected because it uses these 10.x 
addresses to run the vxlan overlays between hosts.
  For many years devstack-gate has set FIXED_RANGE to ensure that the networks 
devstack uses do not interfere with the underlying test host's networking. 
However this setting was completely ignored when setting up the subnet pools.

  I think the correct way to fix this is to use a much smaller subnet
  pool range to avoid conflicting with every possible 10.0.0.0/8 network
  in the wild, possibly by defaulting to the existing FIXED_RANGE
  information. Using the existing information will help ensure that
  anyone with networks in 10.0.0.0/8 will continue to work if they have
  specified a range that doesn't conflict using this variable.

  In addition to this we need to clearly document what this new stuff in
  devstack does and how people can work around it should they conflict
  with the new defaults we end up choosing.

  I have proposed https://review.openstack.org/379543 which mostly works
  except it fails one tempest test that apparently depends on this new
  subnet pool support. Specifically, the v6 range isn't large enough, as I understand it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1629133/+subscriptions



[Yahoo-eng-team] [Bug 1629149] [NEW] Unit tests fail when run on machines with domains in their hostnames

2016-09-29 Thread Garrett Holmstrom
Public bug reported:

We run cloud-init's unit tests as part of building packages for Fedora.
Since cc_apt_configure includes an extra entry in its list of mirror
URIs when the system's hostname includes a domain, one of
test_handler_apt_configure_sources_list_v3's tests fails because of that
extra entry:

==
FAIL: test_apt_v3_mirror_search_dns - Test searching dns patterns
--
Traceback (most recent call last):
  File 
"/builddir/build/BUILD/cloud-init-0.7.8/tests/unittests/test_handler/test_handler_apt_source_v3.py",
 line 986, in test_apt_v3_mirror_search_dns
mockse.assert_has_calls(calls)
  File "/usr/lib64/python3.5/unittest/mock.py", line 824, in assert_has_calls
) from cause
AssertionError: Calls not found.
Expected: [call(None), call(['http://ubuntu-mirror.localdomain/ubuntu', 
'http://ubuntu-mirror/ubuntu']), call(None), 
call(['http://ubuntu-security-mirror.localdomain/ubuntu', 
'http://ubuntu-security-mirror/ubuntu'])]
Actual: [call(None),
 call(['http://ubuntu-mirror.devzero.com/ubuntu', 
'http://ubuntu-mirror.localdomain/ubuntu', 'http://ubuntu-mirror/ubuntu']),
 call(None),
 call(['http://ubuntu-security-mirror.devzero.com/ubuntu', 
'http://ubuntu-security-mirror.localdomain/ubuntu', 
'http://ubuntu-security-mirror/ubuntu'])]
 >> begin captured logging << 
tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: DEBUG: 
got primary mirror: http://mocked/foo
tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: DEBUG: 
got security mirror: http://mocked/foo
tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: DEBUG: 
got primary mirror: http://mocked/foo
tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: DEBUG: 
got security mirror: http://mocked/foo
cloudinit.util: DEBUG: Reading from /tmp/tmpsfoz_fzy/etc/hosts (quiet=False)
cloudinit.util: DEBUG: Reading from /tmp/tmpsfoz_fzy/etc/hosts (quiet=False)
tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: DEBUG: 
got primary mirror: phit
cloudinit.util: DEBUG: Reading from /tmp/tmpsfoz_fzy/etc/hosts (quiet=False)
cloudinit.util: DEBUG: Reading from /tmp/tmpsfoz_fzy/etc/hosts (quiet=False)
tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: DEBUG: 
got security mirror: shit
- >> end captured logging << -

I can skip this test because Fedora can't use that module, but I suspect
it would be worthwhile to fix the problem so it doesn't affect distros
that can use it.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1629149

Title:
  Unit tests fail when run on machines with domains in their hostnames

Status in cloud-init:
  New

Bug description:
  We run cloud-init's unit tests as part of building packages for
  Fedora.  Since cc_apt_configure includes an extra entry in its list of
  mirror URIs when the system's hostname includes a domain, one of
  test_handler_apt_configure_sources_list_v3's tests fails because of
  that extra entry:

  ==
  FAIL: test_apt_v3_mirror_search_dns - Test searching dns patterns
  --
  Traceback (most recent call last):
File 
"/builddir/build/BUILD/cloud-init-0.7.8/tests/unittests/test_handler/test_handler_apt_source_v3.py",
 line 986, in test_apt_v3_mirror_search_dns
  mockse.assert_has_calls(calls)
File "/usr/lib64/python3.5/unittest/mock.py", line 824, in assert_has_calls
  ) from cause
  AssertionError: Calls not found.
  Expected: [call(None), call(['http://ubuntu-mirror.localdomain/ubuntu', 
'http://ubuntu-mirror/ubuntu']), call(None), 
call(['http://ubuntu-security-mirror.localdomain/ubuntu', 
'http://ubuntu-security-mirror/ubuntu'])]
  Actual: [call(None),
   call(['http://ubuntu-mirror.devzero.com/ubuntu', 
'http://ubuntu-mirror.localdomain/ubuntu', 'http://ubuntu-mirror/ubuntu']),
   call(None),
   call(['http://ubuntu-security-mirror.devzero.com/ubuntu', 
'http://ubuntu-security-mirror.localdomain/ubuntu', 
'http://ubuntu-security-mirror/ubuntu'])]
   >> begin captured logging << 
  tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: 
DEBUG: got primary mirror: http://mocked/foo
  tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: 
DEBUG: got security mirror: http://mocked/foo
  tests.unittests.test_handler.test_handler_apt_configure_sources_list_v3: 
DEBUG: got primary mirror: http://mocked/foo
  tests.unittests.test_handler.test_handler_apt_configure_sources_list

[Yahoo-eng-team] [Bug 1628883] Re: Minimum requirements too low on oslo.log for keystone

2016-09-29 Thread Steve Martinelli
** Changed in: keystone
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1628883

Title:
  Minimum requirements too low on oslo.log for keystone

Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Triaged

Bug description:
  After upgrading keystone from mitaka to newton-rc1 on Xenial I am
  getting this error:

  
  $ keystone-manage db_sync
  Traceback (most recent call last):
    File "/usr/bin/keystone-manage", line 6, in <module>
      from keystone.cmd.manage import main
    File "/usr/lib/python2.7/dist-packages/keystone/cmd/manage.py", line 32, in <module>
      from keystone.cmd import cli
    File "/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 28, in <module>
      from keystone.cmd import doctor
    File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/__init__.py", line 13, in <module>
      from keystone.cmd.doctor import caching
    File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/caching.py", line 13, in <module>
      import keystone.conf
    File "/usr/lib/python2.7/dist-packages/keystone/conf/__init__.py", line 26, in <module>
      from keystone.conf import default
    File "/usr/lib/python2.7/dist-packages/keystone/conf/default.py", line 180, in <module>
      deprecated_since=versionutils.deprecated.NEWTON,
  AttributeError: type object 'deprecated' has no attribute 'NEWTON'

  It seems due to the fact that the installed version of oslo.log is not
  updated properly:

  python-oslo.log:
Installed: 3.2.0-2
Candidate: 3.16.0-0ubuntu1~cloud0
Version table:
   3.16.0-0ubuntu1~cloud0 500
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main amd64 Packages
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main i386 Packages
   *** 3.2.0-2 500
  500 http://mirror/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  But looking at the requirements.txt in stable/newton, even
  oslo.log>=1.14.0 is claimed to work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1628883/+subscriptions



[Yahoo-eng-team] [Bug 1629158] [NEW] Set migration status to 'error' on live-migration failure

2016-09-29 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/353851
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 8825efa3b263f1334aa78c786b01c9dfdd3ad726
Author: Rajesh Tailor 
Date:   Thu Jul 2 03:22:01 2015 -0700

Set migration status to 'error' on live-migration failure

(A) In resize, confirm-resize and revert-resize operation, migration status
is marked as 'error' in case of failure for respective operation.

Migration object support is added in live-migration operation, which mark
migration status to 'failed' if live-migration operation fails in-between.

To make live-migration consistent with resize, confirm-resize and revert-
resize operation, it needs to mark migration status to 'error' instead of
'failed' in case of failure.

(B) Apart from consistency, proposed change fixes issue (similar to [1])
which might occur on live-migration failure as follows:
If live-migration fails (which sets migration status to 'failed') after
copying instance files from source to dest node and then user request for
instance deletion. In that case, delete api will only remove instance
files from instance.host and not from other host (which could be either
source or dest node but not instance.host). Since instance is already
deleted, instance files will remain on other host (not instance.host).

Set migration status to 'error' on live-migration failure, so that
periodic task _cleanup_incomplete_migrations [2] will remove orphaned
instance files from compute nodes after instance deletion in above case.

[1] https://bugs.launchpad.net/nova/+bug/1392527
[2] https://review.openstack.org/#/c/219299/

DocImpact: On live-migration failure, set migration status to 'error'
instead of 'failed'.

Change-Id: I7a0c5a32349b0d3604802d22e83a3c2dab4b1370
Closes-Bug: 1470420
(cherry picked from commit d61e15818c1d108275b3286a6665fa3e6540e7e7)

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629158

Title:
  Set migration status to 'error' on live-migration failure

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/353851
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 8825efa3b263f1334aa78c786b01c9dfdd3ad726
  Author: Rajesh Tailor 
  Date:   Thu Jul 2 03:22:01 2015 -0700

  Set migration status to 'error' on live-migration failure
  
  (A) In resize, confirm-resize and revert-resize operation, migration 
status
  is marked as 'error' in case of failure for respective operation.
  
  Migration object support is added in live-migration operation, which mark
  migration status to 'failed' if live-migration operation fails in-between.
  
  To make live-migration consistent with resize, confirm-resize and revert-
  resize operation, it needs to mark migration status to 'error' instead of
  'failed' in case of failure.
  
  (B) Apart from consistency, proposed change fixes issue (similar to [1])
  which might occur on live-migration failure as follows:
  If live-migration fails (which sets migration status to 'failed') after
  copying instance files from source to dest node and then user request for
  instance deletion. In that case, delete api will only remove instance
  files from instance.host and not from other host (which could be either
  source or dest node but not instance.host). Since instance is already
  deleted, instance files will remain on other host (not instance.host).
  
  Set migration status to 'error' on live-migration failure, so that
  periodic task _cleanup_incomplete_migrations [2] will remove orphaned
  instance files from compute nodes after instance deletion in above case.
  
  [1] https://bugs.launchpad.net/nova/+bug/1392527
  [2] https://review.openstack.org/#/c/219299/
  
  DocImpact: On live-migration failure, set migration status to 'error'
  instead of 'failed'.
  
  Change-Id: I7a0c5a32349b0d3604802d22e83a3c2dab4b1370
  Closes-Bug: 1470420
  (cherry p

[Yahoo-eng-team] [Bug 1629159] [NEW] delete router with error of failed unplugging ha interface

2016-09-29 Thread Perry
Public bug reported:

When deleting a router, there are ERROR logs about failing to unplug the HA
interface. This happens in an environment running stable/mitaka. Note that the
router is still deleted successfully after the ERROR.

Reproduce steps:
neutron router-create test
neutron router-delete test
monitor log in neutron-l3-agent.log

This problem is different from existing defects. Some existing defects
addressed the problem of repeatedly deleting a router; some addressed a race
between router sync and router deletion; and some have a similar symptom that
occurs in a different place, such as bug 1606801.

2016-09-29 06:57:11.744 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'delete', 
'qrouter-74c4a209-2f42-4f45-b409-082939df0962'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
2016-09-29 06:57:11.835 6287 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142
2016-09-29 06:57:11.836 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'kill', '-9', '10728'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
2016-09-29 06:57:11.897 6287 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142
2016-09-29 06:57:11.898 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-74c4a209-2f42-4f45-b409-082939df0962', 'ip', 'link', 'delete', 
'ha-e210e603-0c'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
2016-09-29 06:57:11.961 6287 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-74c4a209-2f42-4f45-b409-082939df0962": No such file or directory

2016-09-29 06:57:11.962 6287 ERROR neutron.agent.linux.interface [-] Failed 
unplugging interface 'ha-e210e603-0c'
2016-09-29 06:57:11.962 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'kill', '-15', '10910'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84

** Affects: neutron
 Importance: Undecided
 Assignee: Perry (panxia6679)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Perry (panxia6679)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629159

Title:
  delete router with error of failed unplugging ha interface

Status in neutron:
  New

Bug description:
  When deleting a router, there are ERROR logs about failing to unplug the HA
  interface. This happens in an environment running stable/mitaka. Note that
  the router is still deleted successfully after the ERROR.

  Reproduce steps:
  neutron router-create test
  neutron router-delete test
  monitor log in neutron-l3-agent.log

  This problem is different from existing defects. Some existing defects
  addressed the problem of repeatedly deleting a router; some addressed a
  race between router sync and router deletion; and some have a similar
  symptom that occurs in a different place, such as bug 1606801.


  2016-09-29 06:57:11.744 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'delete', 
'qrouter-74c4a209-2f42-4f45-b409-082939df0962'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
  2016-09-29 06:57:11.835 6287 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142
  2016-09-29 06:57:11.836 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'kill', '-9', '10728'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
  2016-09-29 06:57:11.897 6287 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142
  2016-09-29 06:57:11.898 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.c

[Yahoo-eng-team] [Bug 1629167] [NEW] HEAD request blocks because the response Content-Length is more than 0

2016-09-29 Thread 侯喆
Public bug reported:

version: keystone9.2.0
api: curl -i -X HEAD http://**:35357/v2.0/tokens/**   -H "X-Auth-Token:**"

result:

HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 5420
X-Openstack-Request-Id: req-c0db94a5-9078-4181-947c-924dfca65a7a
Date: Fri, 30 Sep 2016 03:31:51 GMT

and we found that the client blocks here.

I think that although the response body is set to b'', the Content-Length is
not set to 0.
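A minimal sketch of the fix this implies, as illustrative WSGI middleware rather than keystone's actual code: on HEAD requests, drop the body and rewrite an existing Content-Length header to 0, so a client that reads exactly Content-Length bytes does not block waiting for a body that never arrives.

```python
# Hypothetical middleware; keystone's real stack differs.

def head_fixup(app):
    def middleware(environ, start_response):
        is_head = environ.get("REQUEST_METHOD") == "HEAD"

        def capture(status, headers, exc_info=None):
            if is_head:
                # Rewrite an existing Content-Length to 0 for HEAD; this
                # sketch does not add the header when it is absent.
                headers = [(k, "0" if k.lower() == "content-length" else v)
                           for k, v in headers]
            return start_response(status, headers, exc_info)

        body = app(environ, capture)
        # HEAD responses must carry no body.
        return [b""] if is_head else body
    return middleware
```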

** Affects: keystone
 Importance: Undecided
 Assignee: 侯喆 (houzhe)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => 侯喆 (houzhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1629167

Title:
  HEAD request blocks because the response Content-Length is more than 0

Status in OpenStack Identity (keystone):
  New

Bug description:
  version: keystone9.2.0
  api: curl -i -X HEAD http://**:35357/v2.0/tokens/**   -H "X-Auth-Token:**"

  result:

  HTTP/1.1 200 OK
  Vary: X-Auth-Token
  Content-Type: application/json
  Content-Length: 5420
  X-Openstack-Request-Id: req-c0db94a5-9078-4181-947c-924dfca65a7a
  Date: Fri, 30 Sep 2016 03:31:51 GMT

  and we found that the request blocks here.

  I think the problem is that the response body has been set to b'' but the Content-Length has not been reset to 0.
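
  A minimal sketch of the reporter's point (the function and header dicts
  are hypothetical, not keystone's actual code): when a HEAD response is
  derived from a GET handler, the body is dropped, so the advertised
  Content-Length must be reset as well, or a client trusting the header
  may block waiting for 5420 body bytes that are never sent.

  ```python
  # Hypothetical illustration, not keystone's real code: copy GET
  # response headers for a HEAD reply, fixing Content-Length to match
  # the empty body.

  def headers_for_head(get_headers):
      """Copy GET response headers, zeroing Content-Length for HEAD."""
      headers = dict(get_headers)
      if 'Content-Length' in headers:
          headers['Content-Length'] = '0'   # body is b'', so length is 0
      return headers

  # The problematic response from this report:
  buggy = {'Content-Type': 'application/json', 'Content-Length': '5420'}
  fixed = headers_for_head(buggy)
  ```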

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1629167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602081] Re: Use oslo.context's policy dict

2016-09-29 Thread Jamie Lennox
** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602081

Title:
  Use oslo.context's policy dict

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  New
Status in OpenStack Identity (keystone):
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is a cross project goal to standardize the values available to
  policy writers and to improve the basic oslo.context object. It is
  part of the follow up work to bug #1577996 and bug #968696.

  There has been an ongoing problem with how we define the 'admin' role.
  Because tokens are project scoped, having the 'admin' role on any
  project granted you the 'admin' role on all of OpenStack. As a
  solution to this, keystone defined an is_admin_project field, so that
  keystone designates a single project to which your token must be
  scoped in order to perform admin operations. This has been implemented.

  The next phase of this is to make all the projects understand the
  X-Is-Admin-Project header from keystonemiddleware and pass it to
  oslo_policy. However, this pattern, where keystone changes something
  and then goes to every project to fix it, has been repeated a number
  of times now, and we would like to make it much more automatic.

  Ongoing work has enhanced the base oslo.context object to include both
  the load_from_environ and to_policy_values methods. The
  load_from_environ classmethod takes an environment dict with all the
  standard auth_token and oslo middleware headers and loads them into
  their standard place on the context object.

  The to_policy_values() method then creates a standard credentials
  with all the information that should be required to enforce policy
  from the context. The combination of these two methods means in future
  when authentication information needs to be passed to policy it can be
  handled entirely by oslo.context and does not require changes in each
  individual service.
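
  A standalone sketch of how these two methods fit together (simplified
  stand-in class, not the real oslo.context API surface; the WSGI keys
  follow the standard auth_token middleware header names):

  ```python
  # Stand-in for the enhanced base context described above:
  # load_from_environ reads the standard auth headers from a WSGI
  # environ, and to_policy_values builds the credentials dict that is
  # handed to oslo.policy for enforcement.

  class RequestContext:
      def __init__(self, user_id=None, project_id=None, roles=None,
                   is_admin_project=False):
          self.user_id = user_id
          self.project_id = project_id
          self.roles = roles or []
          self.is_admin_project = is_admin_project

      @classmethod
      def load_from_environ(cls, environ):
          # auth_token middleware sets these keys on each request
          return cls(
              user_id=environ.get('HTTP_X_USER_ID'),
              project_id=environ.get('HTTP_X_PROJECT_ID'),
              roles=environ.get('HTTP_X_ROLES', '').split(','),
              is_admin_project=(
                  environ.get('HTTP_X_IS_ADMIN_PROJECT') == 'True'),
          )

      def to_policy_values(self):
          # everything a policy rule should be able to reference
          return {'user_id': self.user_id,
                  'project_id': self.project_id,
                  'roles': self.roles,
                  'is_admin_project': self.is_admin_project}

  ctxt = RequestContext.load_from_environ({
      'HTTP_X_USER_ID': 'u1',
      'HTTP_X_PROJECT_ID': 'p1',
      'HTTP_X_ROLES': 'member,admin',
      'HTTP_X_IS_ADMIN_PROJECT': 'True',
  })
  creds = ctxt.to_policy_values()
  ```

  With this in the base class, a new policy-relevant header only has to
  be added in oslo.context, not in every consuming service.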

  Note that in future a similar pattern will hopefully be employed to
  simplify passing authentication information over RPC to solve the
  timeout issues. This is a prerequisite for that work.

  There are a few common problems in services that are required to make
  this work:

  1. Most service context.__init__ functions take and discard **kwargs.
  This is so that if context.from_dict receives arguments it doesn't
  know how to handle (possibly because new things have been added to the
  base to_dict), it ignores them. Unfortunately, to make the
  load_from_environ method work we need to pass parameters to __init__
  that are handled by the base class.

  To make this work we simply have to do a better job of using
  from_dict. Instead of passing everything to __init__ and ignoring what
  we don't know, we have from_dict extract only the parameters that the
  context knows how to use and call __init__ with those.
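
  The from_dict pattern described above can be sketched like this
  (simplified stand-in, with an illustrative field list rather than the
  real oslo.context one):

  ```python
  # Instead of __init__ swallowing arbitrary **kwargs, from_dict keeps
  # only the keys __init__ actually understands and drops the rest.

  class Context:
      KNOWN = ('user_id', 'project_id', 'request_id')

      def __init__(self, user_id=None, project_id=None, request_id=None):
          self.user_id = user_id
          self.project_id = project_id
          self.request_id = request_id

      @classmethod
      def from_dict(cls, values):
          # Extract only recognized keys; silently ignore newcomers.
          known = {k: v for k, v in values.items() if k in cls.KNOWN}
          return cls(**known)

  ctx = Context.from_dict({'user_id': 'u1', 'brand_new_field': 'x'})
  ```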

  2. The parameters passed to the base context.__init__ are outdated.
  Typically they are user and tenant, whereas most services expect
  user_id and project_id. There is ongoing work to improve this in
  oslo.context, but for now we have to ensure that the subclass
  correctly sets and uses the right variable names.

  3. Some services provide additional information to the policy
  enforcement method. To keep this working, we will simply override the
  to_policy_values method in the subclasses.
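
  A sketch of that override (standalone stand-in classes, and the extra
  field name is purely illustrative, not from any real service):

  ```python
  # A service subclass adds its own fields to the policy credentials by
  # extending to_policy_values(), keeping the base dict intact.

  class BaseContext:                    # stand-in for the oslo.context base
      def __init__(self, user_id=None, project_id=None):
          self.user_id = user_id
          self.project_id = project_id

      def to_policy_values(self):
          return {'user_id': self.user_id, 'project_id': self.project_id}

  class ServiceContext(BaseContext):
      """Service subclass exposing one extra field to policy rules."""

      def __init__(self, is_baremetal=False, **kwargs):
          super().__init__(**kwargs)
          self.is_baremetal = is_baremetal

      def to_policy_values(self):
          values = super().to_policy_values()
          values['is_baremetal'] = self.is_baremetal  # service-specific input
          return values

  creds = ServiceContext(user_id='u1', is_baremetal=True).to_policy_values()
  ```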

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1602081/+subscriptions
