[Yahoo-eng-team] [Bug 1635519] [NEW] There are some js files related with openstack component in horizon folder

2016-10-20 Thread Kenji Ishii
Public bug reported:

According to 
http://docs.openstack.org/developer/horizon/quickstart.html#horizon-s-structure,
the horizon folder is expected to be reusable by any Django project.
However, at the moment this folder contains some JS files specific to OpenStack,
such as volume.js and instance.js.
The OpenStack API integration actually lives in openstack_dashboard, so I
think such files should be moved into the openstack_dashboard folder.

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1635519

Title:
  There are some js files related with openstack component in horizon
  folder

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  According to 
http://docs.openstack.org/developer/horizon/quickstart.html#horizon-s-structure,
  the horizon folder is expected to be reusable by any Django project.
  However, at the moment this folder contains some JS files specific to
  OpenStack, such as volume.js and instance.js.
  The OpenStack API integration actually lives in openstack_dashboard, so I
  think such files should be moved into the openstack_dashboard folder.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1635519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1634849] Re: Public and private Image Lists not separating properly

2016-10-20 Thread Richard Jones
Hi, it's not clear from your report, but:

- which version of Horizon are you using?
- are you using the newer Angular interface in Newton?

The newer interface has no such restriction (though it does paginate at
20 images). The search box is interactive, and offers filtering on
public/private also.

Marking as Invalid unless further information clarifies things :-)

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1634849

Title:
  Public and private Image Lists not separating properly

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The Project/Compute/Images table brings back 20 items in total each time.

  If you have 4 private images (named, say, xyz) and 100 public images,
  you can't see your own images until the listing alphabetically reaches
  xyz.

  So depending on your image names, you can't see your private images
  until you press "next" enough times.


  Solution:
  Private images and public images should come as separate lists of up to
  20 each. Right now public and private images come in a combined total
  of 20, e.g. 4 private and 16 public.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1634849/+subscriptions



[Yahoo-eng-team] [Bug 1571878] Re: Add protocol to identity provider using nonexistent mapping

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/362397
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=de8fbcf9a0072c84adf4f3630088bc34f9e9782e
Submitter: Jenkins
Branch: master

commit de8fbcf9a0072c84adf4f3630088bc34f9e9782e
Author: Ronald De Rose 
Date:   Mon Aug 29 20:13:35 2016 +

Validate mapping exists when creating/updating a protocol

This patch validates that a mapping exists when adding or updating
a federation protocol.

Change-Id: I996f94d26eb0f2c679542ba13a03bbaa4442486a
Closes-Bug: #1571878


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1571878

Title:
  Add protocol to identity provider using nonexistent mapping

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Currently, it is possible to add a protocol to an identity provider [0]
  using a nonexistent mapping ID. We could add a mapping later using the
  ID from the previous step, but several errors can occur in between
  these steps.

  We might want to enforce the steps here:
  1 - create idp
  2 - create mapping
  3 - create protocol

  This would also be valid for the update case: only allow updating the
  protocol with a valid mapping ID.

  [0] https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-federation-ext.rst#add-a-protocol-and-attribute-mapping-to-an-identity-provider
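A minimal sketch of the ordering the description asks for; all names here are hypothetical stand-ins for keystone's federation backend, not its real API:

```python
class MappingNotFound(Exception):
    """Raised when a referenced attribute mapping does not exist."""


class FederationStore:
    """Toy in-memory stand-in for a federation backend (illustrative only)."""

    def __init__(self):
        self.mappings = {}   # mapping_id -> rules
        self.protocols = {}  # (idp_id, protocol_id) -> mapping_id

    def create_mapping(self, mapping_id, rules):
        self.mappings[mapping_id] = rules

    def create_protocol(self, idp_id, protocol_id, mapping_id):
        # Enforce the step ordering: the mapping must already exist
        # before a protocol may reference it.
        if mapping_id not in self.mappings:
            raise MappingNotFound(mapping_id)
        self.protocols[(idp_id, protocol_id)] = mapping_id
```

The same `mapping_id not in self.mappings` check would apply on protocol update, matching the second half of the request.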

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1571878/+subscriptions



[Yahoo-eng-team] [Bug 1635505] [NEW] Magic search acts weird when pressing arrow keys [firefox]

2016-10-20 Thread Eddie Ramirez
Public bug reported:

Tested on Firefox 47.x, master branch Horizon (Newton).

How to reproduce:
1. Go to any table with magicsearch (e.g. Project->Images)
2. Click on the search input
3. Press left, up, right or down arrow and see how the table below gets empty

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: filter magicsearch table

** Attachment added: "Video"
   https://bugs.launchpad.net/bugs/1635505/+attachment/4764646/+files/magisearch.mov

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1635505

Title:
  Magic search acts weird when pressing arrow keys [firefox]

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Tested on Firefox 47.x, master branch Horizon (Newton).

  How to reproduce:
  1. Go to any table with magicsearch (e.g. Project->Images)
  2. Click on the search input
  3. Press left, up, right or down arrow and see how the table below gets empty

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1635505/+subscriptions



[Yahoo-eng-team] [Bug 1626243] Re: Cloud-init fails to write ext4 filesystem to Azure Ephemeral Drive

2016-10-20 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1626243

Title:
  Cloud-init fails to write ext4 filesystem to Azure Ephemeral Drive

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  The symptom is similar to bug 1611074 but the cause is different. In
  this case it seems there is an error accessing /dev/sdb1 when lsblk is
  run, possibly because sgdisk isn't done creating the partition. The
  specific error message is "/dev/sdb1: not a block device." A simple
  wait and retry here may resolve the issue.
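A generic wait-and-retry wrapper of the kind suggested might look like this (a sketch only; `retry` and its parameters are hypothetical, not cloud-init code):

```python
import time


def retry(func, attempts=5, delay=0.2, exceptions=(Exception,)):
    """Call func(), retrying on the given exceptions with a fixed delay.

    Useful when a device node (e.g. /dev/sdb1) may not exist yet
    because the partitioner has not finished creating it.
    """
    for attempt in range(attempts):
        try:
            return func()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of attempts, re-raise the last error
            time.sleep(delay)
```

Wrapping the `lsblk` invocation in something like this would paper over the window between `sgdisk` finishing and the partition device node appearing.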

  util.py[DEBUG]: Running command ['/sbin/sgdisk', '-p', '/dev/sdb'] with allowed return codes [0] (shell=False, capture=True)
  cc_disk_setup.py[DEBUG]: Device partitioning layout matches
  util.py[DEBUG]: Creating partition on /dev/disk/cloud/azure_resource took 0.056 seconds
  cc_disk_setup.py[DEBUG]: setting up filesystems: [{'filesystem': 'ext4', 'device': 'ephemeral0.1', 'replace_fs': 'ntfs'}]
  cc_disk_setup.py[DEBUG]: ephemeral0.1 is mapped to disk=/dev/disk/cloud/azure_resource part=1
  cc_disk_setup.py[DEBUG]: Creating new filesystem.
  cc_disk_setup.py[DEBUG]: Checking /dev/sdb against default devices
  cc_disk_setup.py[DEBUG]: Manual request of partition 1 for /dev/sdb1
  cc_disk_setup.py[DEBUG]: Checking device /dev/sdb1
  util.py[DEBUG]: Running command ['/sbin/blkid', '-c', '/dev/null', '/dev/sdb1'] with allowed return codes [0, 2] (shell=False, capture=True)
  cc_disk_setup.py[DEBUG]: Device /dev/sdb1 has None None
  cc_disk_setup.py[DEBUG]: Device /dev/sdb1 is cleared for formating
  cc_disk_setup.py[DEBUG]: File system None will be created on /dev/sdb1
  util.py[DEBUG]: Running command ['/bin/lsblk', '--pairs', '--output', 'NAME,TYPE,FSTYPE,LABEL', '/dev/sdb1', '--nodeps'] with allowed return codes [0] (shell=False, capture=True)
  util.py[DEBUG]: Creating fs for /dev/disk/cloud/azure_resource took 0.008 seconds
  util.py[WARNING]: Failed during filesystem operation#012Failed during disk check for /dev/sdb1#012Unexpected error while running command.#012Command: ['/bin/lsblk', '--pairs', '--output', 'NAME,TYPE,FSTYPE,LABEL', '/dev/sdb1', '--nodeps']#012Exit code: 32#012Reason: -#012Stdout: ''#012Stderr: 'lsblk: /dev/sdb1: not a block device\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1626243/+subscriptions



[Yahoo-eng-team] [Bug 1621994] Re: Restarted instance after resize, fails init-local

2016-10-20 Thread Scott Moser
*** This bug is a duplicate of bug 1575055 ***
https://bugs.launchpad.net/bugs/1575055

** This bug has been marked a duplicate of bug 1575055
   check_instance_id() error on reboots when using config-drive

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621994

Title:
  Restarted instance after resize, fails init-local

Status in cloud-init:
  New

Bug description:
  [7.464278] cloud-init[601]: Cloud-init v. 0.7.7 running 'init-local' at Fri, 09 Sep 2016 19:51:20 +. Up 7.37 seconds.
  [7.466742] cloud-init[601]: 2016-09-09 19:51:20,144 - util.py[WARNING]: failed of stage init-local
  [7.471852] cloud-init[601]: failed run of stage init-local
  [7.473345] cloud-init[601]: 

  [7.475069] cloud-init[601]: Traceback (most recent call last):
  [7.476528] cloud-init[601]:   File "/usr/bin/cloud-init", line 520, in status_wrapper
  [7.478138] cloud-init[601]: ret = functor(name, args)
  [7.479436] cloud-init[601]:   File "/usr/bin/cloud-init", line 250, in main_init
  [7.481019] cloud-init[601]: init.fetch(existing=existing)
  [7.482307] cloud-init[601]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 322, in fetch
  [7.484147] cloud-init[601]: return self._get_data_source(existing=existing)
  [7.485736] cloud-init[601]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 229, in _get_data_source
  [7.487611] cloud-init[601]: ds.check_instance_id(self.cfg)):
  [7.488956] cloud-init[601]: TypeError: check_instance_id() takes 1 positional argument but 2 were given
  [7.490642] cloud-init[601]: 

  [FAILED] Failed to start Initial cloud-init job (pre-networking).
  See 'systemctl status cloud-init-local.service' for details.
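The TypeError in the traceback is a plain signature mismatch: the caller passes a config argument that the datasource's `check_instance_id()` does not accept. A self-contained illustration (hypothetical class names, not cloud-init's actual datasources):

```python
class OldDataSource:
    """Datasource whose check_instance_id() never grew the config parameter."""

    def check_instance_id(self):
        return True


class FixedDataSource:
    """Datasource updated to accept the config argument the caller passes."""

    def check_instance_id(self, sys_cfg):
        return True


def caller(ds, cfg):
    # stages.py calls ds.check_instance_id(self.cfg); against the old
    # one-argument signature this raises:
    # TypeError: check_instance_id() takes 1 positional argument but 2 were given
    return ds.check_instance_id(cfg)
```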

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621994/+subscriptions



[Yahoo-eng-team] [Bug 1616616] Re: shows only one ipv6 address per interface

2016-10-20 Thread Scott Moser
** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1616616

Title:
  shows only one ipv6 address per interface

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  With (self-reported) version 0.7.7, when cloud-init displays the "Net
  device info" table, it only reports one IPv4 and one IPv6 address per
  interface, even if a second subnet has been configured on the
  interface (via ens3:1, for example).  This can be confusing to someone
  working on network-configuration-related issues.

  At the very minimum, I expect it should show one IP per subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1616616/+subscriptions



[Yahoo-eng-team] [Bug 1635490] [NEW] resource names should be placed in a common place other than scattered in api/v2/attributes, extensions/ and callbacks/resources

2016-10-20 Thread Rui Zang
Public bug reported:

Currently, neutron resource name definitions are scattered across
api/v2/attributes, extensions/ and callbacks/resources.


In review
https://review.openstack.org/#/c/387194/1/neutron/callbacks/resources.py
it was brought up that the resource names should be placed in a common place
to avoid bugs from misspelled resource names, like
https://bugs.launchpad.net/neutron/+bug/167

This requires code refactoring.
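The requested refactoring boils down to defining each resource name exactly once and importing it everywhere else. A sketch with a few illustrative names (not neutron's full list):

```python
# One canonical definition per resource name, so api/v2, extensions/
# and callbacks/ cannot drift apart via typos.
NETWORK = 'network'
PORT = 'port'
SUBNET = 'subnet'
ROUTER = 'router'
SECURITY_GROUP = 'security_group'

VALID_RESOURCES = frozenset(
    (NETWORK, PORT, SUBNET, ROUTER, SECURITY_GROUP))


def validate_resource(name):
    """Fail fast on a misspelled resource name instead of silently
    registering callbacks that never fire."""
    if name not in VALID_RESOURCES:
        raise ValueError('unknown resource: %s' % name)
    return name
```

Callers would then pass `resources.PORT` rather than the bare string `'port'`, turning misspellings into import-time or registration-time errors.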

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635490

Title:
  resource names should be placed in a common place other than scattered
  in  api/v2/attributes, extensions/ and callbacks/resources

Status in neutron:
  New

Bug description:
  Currently, neutron resource name definitions are scattered across
  api/v2/attributes, extensions/ and callbacks/resources.


  In review
  https://review.openstack.org/#/c/387194/1/neutron/callbacks/resources.py
  it was brought up that the resource names should be placed in a common place
  to avoid bugs from misspelled resource names, like
  https://bugs.launchpad.net/neutron/+bug/167

  This requires code refactoring.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635490/+subscriptions



[Yahoo-eng-team] [Bug 1610173] Re: Sorting Horizon usage tables by disk space ignores units

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/347597
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=5ea8482ae42c85a080a75f1b9db46c04c158f896
Submitter: Jenkins
Branch: master

commit 5ea8482ae42c85a080a75f1b9db46c04c158f896
Author: Alex Monk 
Date:   Wed Jul 27 02:55:30 2016 +0100

Usage tables: Sort by disk properly

Change-Id: I05057ac399f6b4b0d8935a88684f7ed7099c367d
Closes-Bug: #1610173


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1610173

Title:
  Sorting Horizon usage tables by disk space ignores units

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Sorting Horizon usage tables by disk space ignores units
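The root cause is that sizes rendered as strings ('512MB', '20GB') sort lexicographically, so '20GB' sorts before '512MB'. A hedged sketch of a unit-aware sort key (illustrative only; the merged fix sorts on the underlying numeric value instead):

```python
import re

_UNITS = {'KB': 1 << 10, 'MB': 1 << 20, 'GB': 1 << 30, 'TB': 1 << 40}


def size_key(text):
    """Turn a human-readable size like '512MB' into bytes for sorting."""
    match = re.match(r'([\d.]+)\s*(KB|MB|GB|TB)', text)
    if not match:
        return 0  # unparseable values sort first
    value, unit = match.groups()
    return int(float(value) * _UNITS[unit])
```

With this key, `sorted(['512MB', '20GB', '1GB'], key=size_key)` yields `['512MB', '1GB', '20GB']` instead of the lexicographic order.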

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1610173/+subscriptions



[Yahoo-eng-team] [Bug 1634677] Re: POLICY_CHECK_FUNCTION is not disable-able

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/388244
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=2fdaf8550ba4c396cc52be37dae1324a8593
Submitter: Jenkins
Branch: master

commit 2fdaf8550ba4c396cc52be37dae1324a8593
Author: Richard Jones 
Date:   Wed Oct 19 09:36:28 2016 +1100

Allow POLICY_CHECK_FUNCTION to be disabled

Remove the current defaulting behaviour when it is set to None.

Change-Id: If8e018414f9f348980d0783f90afa039330e0e49
Closes-Bug: 1634677


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1634677

Title:
  POLICY_CHECK_FUNCTION is not disable-able

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The intention is that POLICY_CHECK_FUNCTION should be settable to None
  to disable policy checks, but the current structure of settings.py
  doesn't allow this.
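The desired behaviour can be summarised as: when the setting is explicitly None, skip policy enforcement rather than fall back to a default checker. A sketch of that lookup (hypothetical stand-ins for django.conf.settings and Horizon's policy module):

```python
class Settings:
    """Stand-in for django.conf.settings with policy checks disabled."""
    POLICY_CHECK_FUNCTION = None


def check(settings, actions, request):
    """Run the configured policy check function, or allow everything
    when it is explicitly set to None (the behaviour this bug asks for)."""
    func = getattr(settings, 'POLICY_CHECK_FUNCTION', None)
    if func is None:
        return True  # policy checks disabled by the deployer
    return func(actions, request)
```

The key point is that `None` is honoured as a deliberate choice instead of being silently replaced by a default checker during settings processing.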

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1634677/+subscriptions



[Yahoo-eng-team] [Bug 1635468] [NEW] iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs

2016-10-20 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/371523
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit e83a44b96a8e3cd81b7cc684ac90486b283a3507
Author: Ihar Hrachyshka 
Date:   Thu Sep 15 21:48:10 2016 +

iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs

For new kernels (3.18+), bridge module is split into two pieces: bridge
and br_netfilter. The latter provides firewall support for bridged
traffic, as well as the following sysctl knobs:

* net.bridge.bridge-nf-call-arptables
* net.bridge.bridge-nf-call-ip6tables
* net.bridge.bridge-nf-call-iptables

Before kernel 3.18, any brctl command was loading the 'bridge' module
with the knobs, so at the moment where we reached iptables setup, they
were always available.

With new 3.18+ kernels, brctl still loads 'bridge' module, but not
br_netfilter. So bridge existence no longer guarantees us knobs'
presence. If we reach _enable_netfilter_for_bridges before the new
module is loaded, then the code will fail, triggering agent resync. It
will also fail to enable bridge firewalling on systems where it's
disabled by default (examples of those systems are most if not all Red
Hat/Fedora based systems), making security groups completely
ineffective.

Systems that don't override default settings for those knobs would work
fine except for this exception in the log file and agent resync. This is
because the first attempt to add a iptables rule using 'physdev' module
(-m physdev) will trigger the kernel module loading. In theory, we could
silently swallow missing knobs, and still operate correctly. But on
second thought, it's quite fragile to rely on that implicit module
loading. In the case where we can't detect whether firewall is enabled,
it's better to fail than hope for the best.

An alternative to the proposed path could be trying
to fix broken deployment, meaning we would need to load the missing
kernel module on agent startup. It's not even clear whether we can
assume the operation would be available to us. Even with that, adding a
rootwrap filter to allow loading code in the kernel sounds quite scary.
If we would follow the path, we would also hit an issue of
distinguishing between cases of built-in kernel module vs. modular one.
A complexity that is probably beyond what Neutron should fix.

The patch introduces a sanity check that would fail on missing
configuration knobs.

DocImpact: document the new deployment requirement in operations guide
UpgradeImpact: deployers relying on agents fixing wrong sysctl defaults
   will need to make sure bridge firewalling is enabled.
   Also, the kernel module providing sysctl knobs must be
   loaded before starting the agent, otherwise it will fail
   to start.

Depends-On: Id6bfd9595f0772a63d1096ef83ebbb6cd630fafd
Change-Id: I9137ea017624ac92a05f73863b77f9ee4681bbe7
Related-Bug: #1622914
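The sanity check described amounts to verifying that the br_netfilter sysctl knobs are present before the agent starts. A sketch that probes /proc/sys (illustrative; neutron's real check lives in its sanity-check framework):

```python
import os

BRIDGE_NF_KNOBS = (
    'net.bridge.bridge-nf-call-arptables',
    'net.bridge.bridge-nf-call-ip6tables',
    'net.bridge.bridge-nf-call-iptables',
)


def missing_knobs(root='/proc/sys'):
    """Return the sysctl knobs that are absent, i.e. br_netfilter
    is not loaded (or built in) on this host."""
    missing = []
    for knob in BRIDGE_NF_KNOBS:
        # 'net.bridge.bridge-nf-call-iptables' maps to
        # /proc/sys/net/bridge/bridge-nf-call-iptables
        path = os.path.join(root, *knob.split('.'))
        if not os.path.exists(path):
            missing.append(knob)
    return missing
```

If `missing_knobs()` returns a non-empty list at agent startup, failing fast (as the patch does) is preferable to relying on implicit module loading later.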

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635468

Title:
  iptables: fail to start ovs/linuxbridge agents on missing sysctl
  knobs

Status in neutron:
  New

Bug description:
  https://review.openstack.org/371523
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit e83a44b96a8e3cd81b7cc684ac90486b283a3507
  Author: Ihar Hrachyshka 
  Date:   Thu Sep 15 21:48:10 2016 +

  iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs
  
  For new kernels (3.18+), bridge module is split into two pieces: bridge
  and br_netfilter. The latter provides firewall support for bridged
  traffic, as well as the following sysctl knobs:
  
  * net.bridge.bridge-nf-call-arptables
  * net.bridge.bridge-nf-call-ip6tables
  * net.bridge.bridge-nf-call-iptables
  
  Before kernel 3.18, any brctl command was loading the 'bridge' module
  with the knobs, so at the moment where we reached iptables setup, they
  were always available.
  
  With new 3.18

[Yahoo-eng-team] [Bug 1635467] [NEW] _emit_versioned_exception_notification fails with AttributeError: 'NoneType' object has no attribute '__name__'

2016-10-20 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/99/389399/1/check/gate-tempest-dsvm-full-
devstack-plugin-ceph-ubuntu-
xenial/9ef7a1b/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-10-21_00_55_46_872

2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server [req-a08665db-81d9-4114-9523-d7bf54576c3c tempest-ServerRescueNegativeTestJSON-1537801177 tempest-ServerRescueNegativeTestJSON-1537801177] Exception during message handling
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 155, in _process_incoming
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 225, in dispatch
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 195, in _do_dispatch
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 75, in wrapped
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 32, in _emit_exception_notification
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server _emit_versioned_exception_notification(context, ex, binary)
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 36, in _emit_versioned_exception_notification
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server versioned_exception_payload = exception.ExceptionPayload.from_exception(ex)
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/notifications/objects/exception.py", line 40, in from_exception
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server module_name=inspect.getmodule(trace[0]).__name__,
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server AttributeError: 'NoneType' object has no attribute '__name__'
2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server
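`inspect.getmodule()` returns None when a frame's module cannot be resolved (for example, code compiled from a string), so the payload builder needs to guard before touching `__name__`. A minimal illustration of the failure mode and a tolerant variant (`module_name_of` is a hypothetical helper, not nova's actual fix):

```python
import inspect


def module_name_of(frame_or_obj):
    """Resolve a module name, tolerating inspect.getmodule() -> None.

    The unguarded `inspect.getmodule(trace[0]).__name__` access is
    exactly what raises the AttributeError in the traceback above.
    """
    module = inspect.getmodule(frame_or_obj)
    return module.__name__ if module is not None else 'unknown'
```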

** Affects: nova
 Importance: Undecided
 Status: Confirmed


** Tags: notifications

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635467

Title:
  _emit_versioned_exception_notification fails with AttributeError:
  'NoneType' object has no attribute '__name__'

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/99/389399/1/check/gate-tempest-dsvm-full-
  devstack-plugin-ceph-ubuntu-
  xenial/9ef7a1b/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-10-21_00_55_46_872

  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server 
[req-a08665db-81d9-4114-9523-d7bf54576c3c 
tempest-ServerRescueNegativeTestJSON-1537801177 
tempest-ServerRescueNegativeTestJSON-1537801177] Exception during message 
handling
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
225, in dispatch
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
195, in _do_dispatch
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 75, in wrapped
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
  2016-10-21 00:55:46.872 27346 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 32, in 
_emit_exception

[Yahoo-eng-team] [Bug 1635449] [NEW] Too many pools created from heat template when both listeners and pools depend on a item

2016-10-20 Thread Daniel Russell
Public bug reported:

When you deploy a heat template that has both listeners and pools
depending on an item, then due to the order of locking you may get
additional pools created erroneously.

Excerpt of heat template showing the issue :

# LOADBALANCERS #

  test-loadbalancer:
type: OS::Neutron::LBaaS::LoadBalancer
properties:
  name: test
  description: test
  vip_subnet: { get_param: subnet }

# LISTENERS #

  http-listener:
type: OS::Neutron::LBaaS::Listener
depends_on: test-loadbalancer
properties:
  name: listener1
  description: listener1
  protocol_port: 80
  loadbalancer: { get_resource: test-loadbalancer } 
  protocol: HTTP

  https-listener:
type: OS::Neutron::LBaaS::Listener
depends_on: http-listener
properties:
  name: listener2
  description: listener2
  protocol_port: 443
  loadbalancer: { get_resource: test-loadbalancer }
  protocol: TERMINATED_HTTPS
  default_tls_container_ref: ''

# POOLS #

  http-pool:
type: OS::Neutron::LBaaS::Pool
depends_on: http-listener
properties:
  name: pool1
  description: pool1
  lb_algorithm: 'ROUND_ROBIN'
  listener: { get_resource: http-listener }
  protocol: HTTP

  https-pool:
type: OS::Neutron::LBaaS::Pool
depends_on: https-listener
properties:
  name: pool2
  description: pool2
  lb_algorithm: 'ROUND_ROBIN'
  listener: { get_resource: https-listener }
  protocol: HTTP

After the http-listener is created, both a pool and another listener
attempt to create but we end up with a number of pools (not always the
same number).

** Affects: neutron
 Importance: Critical
 Status: Triaged


** Tags: lbaas mitaka-backport-potential newton-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635449

Title:
  Too many pools created from heat template when both listeners and
  pools depend on a item

Status in neutron:
  Triaged

Bug description:
  When you deploy a heat template that has both listeners and pools
  depending on an item, then due to the order of locking you may get
  additional pools created erroneously.

  Excerpt of heat template showing the issue :

  # LOADBALANCERS #

test-loadbalancer:
  type: OS::Neutron::LBaaS::LoadBalancer
  properties:
name: test
description: test
vip_subnet: { get_param: subnet }

  # LISTENERS #

http-listener:
  type: OS::Neutron::LBaaS::Listener
  depends_on: test-loadbalancer
  properties:
name: listener1
description: listener1
protocol_port: 80
loadbalancer: { get_resource: test-loadbalancer } 
protocol: HTTP

https-listener:
  type: OS::Neutron::LBaaS::Listener
  depends_on: http-listener
  properties:
name: listener2
description: listener2
protocol_port: 443
loadbalancer: { get_resource: test-loadbalancer }
protocol: TERMINATED_HTTPS
default_tls_container_ref: ''

  # POOLS #

http-pool:
  type: OS::Neutron::LBaaS::Pool
  depends_on: http-listener
  properties:
name: pool1
description: pool1
lb_algorithm: 'ROUND_ROBIN'
listener: { get_resource: http-listener }
protocol: HTTP

https-pool:
  type: OS::Neutron::LBaaS::Pool
  depends_on: https-listener
  properties:
name: pool2
description: pool2
lb_algorithm: 'ROUND_ROBIN'
listener: { get_resource: https-listener }
protocol: HTTP

  After the http-listener is created, both a pool and another listener
  attempt to create but we end up with a number of pools (not always the
  same number).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635449/+subscriptions



[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/385955
Committed: 
https://git.openstack.org/cgit/openstack/solum/commit/?id=9f48b08da1117de3f73682a0ef7af56ca48f2a10
Submitter: Jenkins
Branch: master

commit 9f48b08da1117de3f73682a0ef7af56ca48f2a10
Author: Iswarya_Vakati 
Date:   Thu Oct 13 17:51:27 2016 +0530

Drop MANIFEST.in - it's not needed by pbr

Solum already uses PBR:
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)

This patch removes `MANIFEST.in` file as pbr generates a
sensible manifest from git files and some standard files
and it removes the need for an explicit `MANIFEST.in` file.

Change-Id: Id5702fee8da7ceb5d2a9dc90b584ab56973b7ea1
Closes-Bug:#1608980


** Changed in: solum
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in craton:
  Fix Released
Status in ec2-api:
  In Progress
Status in gce-api:
  Fix Released
Status in Karbor:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Kosmos:
  New
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Confirmed
Status in octavia:
  Fix Released
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress
Status in Solum:
  Fix Released
Status in Swift Authentication:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  Fix Released
Status in Zun:
  Fix Released

Bug description:
  PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file

To manage notifications about this bug go to:
https://bugs.launchpad.net/craton/+bug/1608980/+subscriptions



[Yahoo-eng-team] [Bug 1634925] Re: nova api error

2016-10-20 Thread Matt Riedemann
Looks like this is your problem:

2016-10-19 16:29:19.011 39516 ERROR oslo_db.sqlalchemy.exc_filters 
[req-328a55d5-b96d-46e7-9d77-90bd1c03eb3a 58931d6e7ca247a2be2dd7fac6ee9927 
869d2cede19d483e8565a6984202a37a - - -] DBAPIError exception wrapped from 
(pymysql.err.InternalError) (23, u"Out of resources when opening file 
'/tmp/#sql_87a4_0.MAI' (Errcode: 24)") [SQL: u'SELECT 
anon_1.instances_created_at AS anon_1_instances_created_at, 
anon_1.instances_updated_at AS anon_1_instances_updated_at, 
anon_1.instances_deleted_at AS anon_1_instances_deleted_at, 
anon_1.instances_deleted AS anon_1_instances_deleted, anon_1.instances_id AS 
anon_1_instances_id, anon_1.instances_user_id AS anon_1_instances_user_id, 
anon_1.instances_project_id AS anon_1_instances_project_id, 
anon_1.instances_image_ref AS anon_1_instances_image_ref, 
anon_1.instances_kernel_id AS anon_1_instances_kernel_id, 
anon_1.instances_ramdisk_id AS anon_1_instances_ramdisk_id, 
anon_1.instances_hostname AS anon_1_instances_hostname, 
anon_1.instances_launch_index AS a
 non_1_instances_launch_index, anon_1.instances_key_name AS 
anon_1_instances_key_name, anon_1.instances_key_data AS 
anon_1_instances_key_data, anon_1.instances_power_state AS 
anon_1_instances_power_state, anon_1.instances_vm_state AS 
anon_1_instances_vm_state, anon_1.instances_task_state AS 
anon_1_instances_task_state, anon_1.instances_memory_mb AS 
anon_1_instances_memory_mb, anon_1.instances_vcpus AS anon_1_instances_vcpus, 
anon_1.instances_root_gb AS anon_1_instances_root_gb, 
anon_1.instances_ephemeral_gb AS anon_1_instances_ephemeral_gb, 
anon_1.instances_ephemeral_key_uuid AS anon_1_instances_ephemeral_key_uuid, 
anon_1.instances_host AS anon_1_instances_host, anon_1.instances_node AS 
anon_1_instances_node, anon_1.instances_instance_type_id AS 
anon_1_instances_instance_type_id, anon_1.instances_user_data AS 
anon_1_instances_user_data, anon_1.instances_reservation_id AS 
anon_1_instances_reservation_id, anon_1.instances_launched_at AS 
anon_1_instances_launched_at, anon_1.instances_te
 rminated_at AS anon_1_instances_terminated_at, 
anon_1.instances_availability_zone AS anon_1_instances_availability_zone, 
anon_1.instances_display_name AS anon_1_instances_display_name, 
anon_1.instances_display_description AS anon_1_instances_display_description, 
anon_1.instances_launched_on AS anon_1_instances_launched_on, 
anon_1.instances_locked AS anon_1_instances_locked, anon_1.instances_locked_by 
AS anon_1_instances_locked_by, anon_1.instances_os_type AS 
anon_1_instances_os_type, anon_1.instances_architecture AS 
anon_1_instances_architecture, anon_1.instances_vm_mode AS 
anon_1_instances_vm_mode, anon_1.instances_uuid AS anon_1_instances_uuid, 
anon_1.instances_root_device_name AS anon_1_instances_root_device_name, 
anon_1.instances_default_ephemeral_device AS 
anon_1_instances_default_ephemeral_device, anon_1.instances_default_swap_device 
AS anon_1_instances_default_swap_device, anon_1.instances_config_drive AS 
anon_1_instances_config_drive, anon_1.instances_access_ip_v4 AS anon_1_
 instances_access_ip_v4, anon_1.instances_access_ip_v6 AS 
anon_1_instances_access_ip_v6, anon_1.instances_auto_disk_config AS 
anon_1_instances_auto_disk_config, anon_1.instances_progress AS 
anon_1_instances_progress, anon_1.instances_shutdown_terminate AS 
anon_1_instances_shutdown_terminate, anon_1.instances_disable_terminate AS 
anon_1_instances_disable_terminate, anon_1.instances_cell_name AS 
anon_1_instances_cell_name, anon_1.instances_internal_id AS 
anon_1_instances_internal_id, anon_1.instances_cleaned AS 
anon_1_instances_cleaned, security_groups_1.created_at AS 
security_groups_1_created_at, security_groups_1.updated_at AS 
security_groups_1_updated_at, security_groups_1.deleted_at AS 
security_groups_1_deleted_at, security_groups_1.deleted AS 
security_groups_1_deleted, security_groups_1.id AS security_groups_1_id, 
security_groups_1.name AS security_groups_1_name, security_groups_1.description 
AS security_groups_1_description, security_groups_1.user_id AS 
security_groups_1_user_id,
  security_groups_1.project_id AS security_groups_1_project_id, 
security_group_rules_1.created_at AS security_group_rules_1_created_at, 
security_group_rules_1.updated_at AS security_group_rules_1_updated_at, 
security_group_rules_1.deleted_at AS security_group_rules_1_deleted_at, 
security_group_rules_1.deleted AS security_group_rules_1_deleted, 
security_group_rules_1.id AS security_group_rules_1_id, 
security_group_rules_1.parent_group_id AS 
security_group_rules_1_parent_group_id, security_group_rules_1.protocol AS 
security_group_rules_1_protocol, security_group_rules_1.from_port AS 
security_group_rules_1_from_port, security_group_rules_1.to_port AS 
security_group_rules_1_to_port, security_group_rules_1.cidr AS 
security_group_rules_1_cidr, security_group_rules_1.group_id AS 
security_group_rules_1_group_id, instance_extra_1.numa_topology AS 
instance_extra_1_numa_topology, instance_extra_

[Yahoo-eng-team] [Bug 1635367] Re: Ram filter is broken since Mitaka

2016-10-20 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/mitaka
   Status: New => Confirmed

** Changed in: nova/mitaka
   Importance: Undecided => High

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova/newton
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635367

Title:
  Ram filter is broken since Mitaka

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  Confirmed
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  This patch, which was backported to Mitaka, made the assumption that 
free_ram_mb can never be less than 0.
  
https://github.com/openstack/nova/commit/016b810f675b20e8ce78f4c82dc9c679c0162b7a

  free_ram_mb can be negative when the ram_allocation_ratio is set higher than 
one. This is also supported by the following comment in the code for the 
resource tracker.
  > # free ram and disk may be negative, depending on policy:

  This breaks any scheduler filter that supports oversubscription (e.g. memory 
or disk); as an example, see this unit test for the RAM filter.
  
https://github.com/openstack/nova/blob/master/nova/tests/unit/scheduler/filters/test_ram_filters.py#L43
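
To make the failure mode concrete, here is a minimal sketch (illustrative names, not the actual nova code) of the check an oversubscription-aware RAM filter needs, and why assuming free_ram_mb >= 0 breaks it:

```python
def ram_filter_passes(total_mb, used_mb, requested_mb, allocation_ratio):
    """Can this host accept an instance needing requested_mb of RAM?"""
    # The resource tracker's free_ram_mb may legitimately be negative
    # once used_mb exceeds total_mb (allowed when allocation_ratio > 1),
    # so the filter must compare against the oversubscribed limit rather
    # than clamp free RAM at zero.
    oversubscribed_limit_mb = total_mb * allocation_ratio
    return used_mb + requested_mb <= oversubscribed_limit_mb

# 1024 MB host, 1536 MB already committed (free_ram_mb == -512): with a
# 2.0 ratio a further 512 MB instance still fits; with 1.0 it does not.
assert ram_filter_passes(1024, 1536, 512, 2.0)
assert not ram_filter_passes(1024, 1536, 512, 1.0)
```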

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1635367/+subscriptions



[Yahoo-eng-team] [Bug 1632383] Re: The current Horizon settings indicate no valid image creation methods are available

2016-10-20 Thread Kevin Carter
Bump. I too am seeing this issue.

** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1632383

Title:
  The current Horizon settings indicate no valid image creation methods
  are available

Status in OpenStack Dashboard (Horizon):
  New
Status in openstack-ansible:
  New

Bug description:
  This is an all new install on CentOS 7.

  The error message is:

  "The current Horizon settings indicate no valid image creation methods
  are available. Providing an image location and/or uploading from the
  local file system must be allowed to support image creation."

  This link gives you some additional information about the file the message is 
associated with.
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/images/images/forms.py

  I am mostly new to OpenStack, so any initial help is much appreciated.

  Thanks.
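
  For reference, this message is produced when Horizon's settings disable
  both image creation paths. A hedged sketch of the relevant
  local_settings.py knobs follows; the setting names are assumptions based
  on Newton-era documentation, so double-check them against your release:

```python
# local_settings.py (sketch; setting names assumed, not verified here).
# 'off' disables uploading from the local file system entirely;
# 'legacy' proxies uploads through Horizon, 'direct' uploads to Glance.
HORIZON_IMAGES_UPLOAD_MODE = 'legacy'

# Allow creating an image from an external URL ("image location").
IMAGES_ALLOW_LOCATION = True
```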

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1632383/+subscriptions



[Yahoo-eng-team] [Bug 1623871] Re: Nova hugepage support does not include aarch64

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/372304
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=50e0106d35ec1a3204c18f3912b0dc6cf6632305
Submitter: Jenkins
Branch: master

commit 50e0106d35ec1a3204c18f3912b0dc6cf6632305
Author: VeenaSL 
Date:   Mon Sep 19 13:36:53 2016 +0530

Adding hugepage and NUMA support check for aarch64

Nova ignores aarch64 while verifying hugepage and NUMA support.
AARCH64 also supports hugepages and NUMA on the same libvirt versions as
x86, hence adding this check for aarch64 as well.

Change-Id: I7b5ae1dbdca4fdd0aee2eefd4099c4c4953b609a
Closes-bug: #1623871


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623871

Title:
  Nova hugepage support does not include aarch64

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  In Progress

Bug description:
  [Impact]
  Although aarch64 supports spawning a VM with hugepages, the libvirt driver in 
nova considers only x86_64 and i686; aarch64 needs to be added to both the NUMA 
and hugepage support checks. Due to this bug, a VM cannot be launched with 
hugepages using OpenStack on aarch64 servers.

  Note: this depends on the fix for LP: #1627926.

  [Test Case]
  Steps to reproduce:
  On an openstack environment running on aarch64:
  1. Configure compute to use hugepages.
  2. Set mem_page_size="2048" for a flavor
  3. Launch a VM using the above flavor.

  Expected result:
  VM should be launched with hugepages and the libvirt xml should have

    <memoryBacking>
      <hugepages>
        <page size="2048" unit="KiB"/>
      </hugepages>
    </memoryBacking>

  Actual result:
  VM is launched without hugepages.

  There are no error logs in nova-scheduler.

  [Regression Risk]
  Risk is minimized by the fact that this change is just enabling the same code 
for arm64 that is already enabled for Ubuntu/x86.
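
The shape of the fix can be sketched as follows; the function name and the minimum-version tuple below are illustrative assumptions, not the literal nova source:

```python
# Sketch of the architecture gate the fix widens. The version constant
# is an assumption for illustration, not nova's actual minimum.
MIN_LIBVIRT_HUGEPAGES_VERSION = (1, 2, 8)

def host_supports_hugepages(arch, libvirt_version):
    # Before the fix only the x86 architectures passed this gate;
    # aarch64 supports hugepages on the same libvirt versions, so it
    # must be included too.
    supported_arches = ('x86_64', 'i686', 'aarch64')
    return (arch in supported_arches
            and libvirt_version >= MIN_LIBVIRT_HUGEPAGES_VERSION)

assert host_supports_hugepages('aarch64', (1, 3, 1))
assert not host_supports_hugepages('ppc64le', (1, 3, 1))
```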

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623871/+subscriptions



[Yahoo-eng-team] [Bug 1614063] Re: live migration doesn't use the correct interface to transfer the data

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/356558
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=94f89e96f86c1d1bb258da754d3d368856637a0a
Submitter: Jenkins
Branch: master

commit 94f89e96f86c1d1bb258da754d3d368856637a0a
Author: Alberto Planas 
Date:   Wed Aug 17 17:37:48 2016 +0200

Add migrate_uri for invoking the migration

Add migrate_uri parameter in Guest.migrate method, to indicate
the URI where we want to establish the connection in a non-tunneled
migration.

Change-Id: I6c2ad0170d90560d7d710b578c45287e78c682d1
Closes-Bug: #1614063


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614063

Title:
  live migration doesn't use the correct interface to transfer the data

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  My compute nodes are attached to several networks (storage, admin,
  etc). For each network I have a real or a virtual interface with an IP
  assigned. The DNS is properly configured, so I can `ping node1`, or
  `ping storage.node1`, and is resolving to the correct IP.

  I want to use the second network to transfer the data so:

  * Setup libvirtd to listen into the correct interface (checked with
  netstat)

  * Configure nova.conf live_migration_uri

  * Monitor interfaces and do nova live-migration

  The migration works correctly (it is doing what I think is a PEER2PEER
  migration type), but the data is transferred via the normal interface.

  I can replicate it doing a live migration via virsh.

  After more checks I discovered that if I do not use the --migrate-uri
  parameter, libvirt asks the other node for its hostname to build the
  migrate_uri parameter, and that hostname resolves via the slow
  interface.

  Using the --migrate-uri and the --listen-address (for the -incoming
  parameter) options works at the libvirt level, so we need to somehow
  inject this parameter into migrateToURIx in the nova libvirt driver.

  I have a patch (attached - WIP) that addresses this issue.
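
For illustration, the fix amounts to computing an explicit migration URI for the chosen interface and passing it down to libvirt, the equivalent of virsh's --migrate-uri. A minimal sketch, with an assumed helper name and port:

```python
# Hypothetical helper (not nova code): build the tcp:// migration
# endpoint for the interface we want the data to flow over, instead of
# letting libvirt resolve the destination's hostname (which picks the
# slow interface). The port default is an illustrative assumption.
def build_migrate_uri(data_ip, port=49152):
    return 'tcp://%s:%d' % (data_ip, port)

# The result is what the libvirt driver would hand to the migrateToURIx
# family as the migration (data) URI.
assert build_migrate_uri('192.0.2.10') == 'tcp://192.0.2.10:49152'
```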

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614063/+subscriptions



[Yahoo-eng-team] [Bug 1624712] Re: Filtering by visibility in image table does not work

2016-10-20 Thread Eddie Ramirez
*** This bug is a duplicate of bug 1634205 ***
https://bugs.launchpad.net/bugs/1634205

** This bug has been marked a duplicate of bug 1634205
   glance image list private images fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624712

Title:
  Filtering by visibility in image table does not work

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the filtering form of the Images table (in both the project and admin
  image panels), when the 'visibility' filter is selected, "Error: Unable to
  retrieve the images." is returned. This occurs regardless of the selected
  visibility value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624712/+subscriptions



[Yahoo-eng-team] [Bug 1635387] Re: Replace usage of 'retrying' with 'tenacity'

2016-10-20 Thread Ian Cordasco
This should start off as an email with [all] on the openstack-dev
mailing list rather than a bug on Glance (which I expect someone will
start targeting at all the projects in OpenStack).

Please let's start this discussion there.

I'm going to mark this as "Opinion", as the direct benefits of these
features to Glance are not apparent to me.

** Changed in: glance
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1635387

Title:
  Replace usage of 'retrying' with 'tenacity'

Status in Glance:
  Opinion

Bug description:
  Today a number of OpenStack projects use the 'retrying' library [1]
  for generic retry behavior. While retrying provides a functional API,
  its author no longer actively maintains the repo and hasn't responded
  to numerous PRs, emails, etc. (more discussion in [2]). As a result,
  we can't push fixes/features to retrying to support various
  initiatives such as [3].

  A fellow stacker graciously forked the retrying repo and revamped its API to 
provide greater functionality/pluggability; the fork is called tenacity [4]. 
While tenacity provides the same functionality as retrying, it has some notable 
differences such as:
  - Tenacity uses seconds rather than ms as retrying did.
  - Tenacity has different kwargs for the decorator and
  Retrying class itself.
  - Tenacity has a different approach for retrying args by
  using classes for its stop/wait/retry kwargs.
  - By default tenacity raises a RetryError if a retried callable
  times out; retrying raises the last exception from the callable.
  Tenacity provides backwards compatibility here by offering
  the 'reraise' kwarg.
  - Tenacity defines 'time.sleep' as a default value for a kwarg.
  That said consumers who need to mock patch time.sleep
  need to account for this via mocking of time.sleep before
  tenacity is imported.
  - For retries that check a result, tenacity will raise if the retried
  function raises, whereas retrying retried on all exceptions.

  We'd like to move from retrying to tenacity and eventually remove
  retrying from global requirements altogether.

  For projects using retrying, the move to tenacity (hopefully) isn't
  overly intrusive, but must take the above differences into
  consideration.

  While I'm working to move all affected projects [6] from retrying to
  tenacity, this effort is a work in progress (under [5]).

  [1] https://github.com/rholder/retrying
  [2] https://review.openstack.org/#/c/321867/
  [3] http://lists.openstack.org/pipermail/openstack-dev/2016-April/092914.html
  [4] https://github.com/jd/tenacity
  [5] 
https://review.openstack.org/#/q/message:%22Replace+retrying+with+tenacity%22
  [6] http://codesearch.openstack.org/?q=retrying&i=nope&files=.*.txt&repos=
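
  The behavioral differences above can be made concrete with a small
  stand-alone stand-in. This is neither library's code; the names are
  illustrative and it only mimics the two behaviors called out: wait units
  (tenacity uses seconds, retrying used milliseconds) and what is raised on
  exhaustion (tenacity raises RetryError unless reraise=True; retrying
  raised the callable's last exception):

```python
import time

class RetryError(Exception):
    """Raised when attempts are exhausted (tenacity's default)."""

def call_with_retry(fn, attempts=3, wait=0.0, wait_unit='seconds',
                    reraise=False):
    # tenacity expresses waits in seconds; retrying used milliseconds.
    delay = wait if wait_unit == 'seconds' else wait / 1000.0
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    if reraise:                      # retrying-compatible behavior
        raise last_exc
    raise RetryError(last_exc)       # tenacity's default behavior

calls = []
def always_fails():
    calls.append(1)
    raise ValueError('boom')

try:
    call_with_retry(always_fails, attempts=3)
except RetryError:
    pass
assert len(calls) == 3
```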

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1635387/+subscriptions



[Yahoo-eng-team] [Bug 1635395] [NEW] Replace usage of 'retrying' with 'tenacity'

2016-10-20 Thread Boden R
Public bug reported:

Today a number of OpenStack projects use the 'retrying' library [1] for
generic retry behavior. While retrying provides a functional API, its
author no longer actively maintains the repo and hasn't responded to
numerous PRs, emails, etc. (more discussion in [2]). As a result, we
can't push fixes/features to retrying to support various initiatives
such as [3].

A fellow stacker graciously forked the retrying repo and revamped its API to 
provide greater functionality/pluggability; the fork is called tenacity [4]. 
While tenacity provides the same functionality as retrying, it has some notable 
differences such as:
- Tenacity uses seconds rather than ms as retrying did.
- Tenacity has different kwargs for the decorator and
Retrying class itself.
- Tenacity has a different approach for retrying args by
using classes for its stop/wait/retry kwargs.
- By default tenacity raises a RetryError if a retried callable
times out; retrying raises the last exception from the callable.
Tenacity provides backwards compatibility here by offering
the 'reraise' kwarg.
- Tenacity defines 'time.sleep' as a default value for a kwarg.
That said consumers who need to mock patch time.sleep
need to account for this via mocking of time.sleep before
tenacity is imported.
- For retries that check a result, tenacity will raise if the retried
function raises, whereas retrying retried on all exceptions.

We'd like to move from retrying to tenacity and eventually remove
retrying from global requirements altogether.

For projects using retrying, the move to tenacity (hopefully) isn't
overly intrusive, but must take the above differences into
consideration.

While I'm working to move all affected projects [6] from retrying to
tenacity, this effort is a work in progress (under [5]).


[1] https://github.com/rholder/retrying
[2] https://review.openstack.org/#/c/321867/
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-April/092914.html
[4] https://github.com/jd/tenacity
[5] 
https://review.openstack.org/#/q/message:%22Replace+retrying+with+tenacity%22
[6] http://codesearch.openstack.org/?q=retrying&i=nope&files=.*.txt&repos=

** Affects: neutron
 Importance: Undecided
 Assignee: Boden R (boden)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635395

Title:
  Replace usage of 'retrying' with 'tenacity'

Status in neutron:
  In Progress

Bug description:
  Today a number of OpenStack projects use the 'retrying' library [1]
  for generic retry behavior. While retrying provides a functional API,
  its author no longer actively maintains the repo and hasn't responded
  to numerous PRs, emails, etc. (more discussion in [2]). As a result,
  we can't push fixes/features to retrying to support various
  initiatives such as [3].

  A fellow stacker graciously forked the retrying repo and revamped its API to 
provide greater functionality/pluggability; the fork is called tenacity [4]. 
While tenacity provides the same functionality as retrying, it has some notable 
differences such as:
  - Tenacity uses seconds rather than ms as retrying did.
  - Tenacity has different kwargs for the decorator and
  Retrying class itself.
  - Tenacity has a different approach for retrying args by
  using classes for its stop/wait/retry kwargs.
  - By default tenacity raises a RetryError if a retried callable
  times out; retrying raises the last exception from the callable.
  Tenacity provides backwards compatibility here by offering
  the 'reraise' kwarg.
  - Tenacity defines 'time.sleep' as a default value for a kwarg.
  That said consumers who need to mock patch time.sleep
  need to account for this via mocking of time.sleep before
  tenacity is imported.
  - For retries that check a result, tenacity will raise if the retried
  function raises, whereas retrying retried on all exceptions.

  We'd like to move from retrying to tenacity and eventually remove
  retrying from global requirements altogether.

  For projects using retrying, the move to tenacity (hopefully) isn't
  overly intrusive, but must take the above differences into
  consideration.

  While I'm working to move all affected projects [6] from retrying to
  tenacity, this effort is a work in progress (under [5]).


  [1] https://github.com/rholder/retrying
  [2] https://review.openstack.org/#/c/321867/
  [3] http://lists.openstack.org/pipermail/openstack-dev/2016-April/092914.html
  [4] https://github.com/jd/tenacity
  [5] 
https://review.openstack.org/#/q/message:%22Replace+retrying+with+tenacity%22
  [6] http://codesearch.openstack.org/?q=retrying&i=nope&files=.*.txt&repos=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635395/+subscriptions


[Yahoo-eng-team] [Bug 1635387] [NEW] Replace usage of 'retrying' with 'tenacity'

2016-10-20 Thread Boden R
Public bug reported:

Today a number of OpenStack projects use the 'retrying' library [1] for
generic retry behavior. While retrying provides a functional API, its
author no longer actively maintains the repo and hasn't responded to
numerous PRs, emails, etc. (more discussion in [2]). As a result, we
can't push fixes/features to retrying to support various initiatives
such as [3].

A fellow stacker graciously forked the retrying repo and revamped its API to 
provide greater functionality/pluggability; the fork is called tenacity [4]. 
While tenacity provides the same functionality as retrying, it has some notable 
differences such as:
- Tenacity uses seconds rather than ms as retrying did.
- Tenacity has different kwargs for the decorator and
Retrying class itself.
- Tenacity has a different approach for retrying args by
using classes for its stop/wait/retry kwargs.
- By default tenacity raises a RetryError if a retried callable
times out; retrying raises the last exception from the callable.
Tenacity provides backwards compatibility here by offering
the 'reraise' kwarg.
- Tenacity defines 'time.sleep' as a default value for a kwarg.
That said consumers who need to mock patch time.sleep
need to account for this via mocking of time.sleep before
tenacity is imported.
- For retries that check a result, tenacity will raise if the retried
function raises, whereas retrying retried on all exceptions.

We'd like to move from retrying to tenacity and eventually remove
retrying from global requirements altogether.

For projects using retrying, the move to tenacity (hopefully) isn't
overly intrusive, but must take the above differences into
consideration.

While I'm working to move all affected projects [6] from retrying to
tenacity, this effort is a work in progress (under [5]).

[1] https://github.com/rholder/retrying
[2] https://review.openstack.org/#/c/321867/
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-April/092914.html
[4] https://github.com/jd/tenacity
[5] 
https://review.openstack.org/#/q/message:%22Replace+retrying+with+tenacity%22
[6] http://codesearch.openstack.org/?q=retrying&i=nope&files=.*.txt&repos=

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  Today a number of OpenStack projects use the 'retrying' library [1] for
  generic retry behavior. While retrying provides a functional API, its
  author no longer actively maintains the repo and hasn't responded to
  numerous PRs, emails, etc. (more discussion in [2]). As a result, we
  can't push fixes/features to retrying to support various initiatives
  such as [3].
  
  A fellow stacker graciously forked the retrying repo and revamped it's API to 
provide greater functionality/pluggablility; called tenacity [4]. While 
tenacity provides the same functionality as retrying, it has some notable 
differences such as:
  - Tenacity uses seconds rather than ms as retrying did.
  - Tenacity has different kwargs for the decorator and
  Retrying class itself.
  - Tenacity has a different approach for retrying args by
  using classes for its stop/wait/retry kwargs.
  - By default tenacity raises a RetryError if a retried callable
  times out; retrying raises the last exception from the callable.
  Tenacity provides backwards compatibility here by offering
  the 'reraise' kwarg.
  - Tenacity defines 'time.sleep' as a default value for a kwarg.
  That said consumers who need to mock patch time.sleep
  need to account for this via mocking of time.sleep before
  tenacity is imported.
  - For retries that check a result, tenacity will raise if the retried
  function raises, whereas retrying retried on all exceptions.
  
  We'd like to move from retrying to tenacity and eventually remove
  retrying from global requirements all together.
  
  For projects using retrying, the move to tenacity (hopefully) isn't
  overly intrusive, but must take the above differences into
  consideration.
  
  While I'm working to move all affected projects [6] from retrying to
  tenacity, this effort is a work in progress (under [5]).
  
- 
  [1] https://github.com/rholder/retrying
  [2] https://review.openstack.org/#/c/321867/
  [3] http://lists.openstack.org/pipermail/openstack-dev/2016-April/092914.html
  [4] https://github.com/jd/tenacity
- [5] https://review.openstack.org/#/q/topic:retrying-to-tenacity
+ [5] 
https://review.openstack.org/#/q/message:%22Replace+retrying+with+tenacity%22
  [6] http://codesearch.openstack.org/?q=retrying&i=nope&files=.*.txt&repos=

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1635387

Title:
  Replace usage of 'retrying' with 'tenacity'

Status in Glance:
  New

Bug description:
  Today a number of OpenStack projects use the 'retrying' library [1]
  for generic retry behavior. While retrying provides a functional API,
  its author no longer actively maintains the repo and hasn't responded
  to 

[Yahoo-eng-team] [Bug 1635389] [NEW] keystone.contrib.ec2.controllers.Ec2Controller is untested

2016-10-20 Thread Lance Bragstad
Public bug reported:

The Ec2Controller controller is completely untested. I can break
interfaces in keystone that the controller relies on and all the tests
still pass.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: test-improvement

** Tags added: test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1635389

Title:
  keystone.contrib.ec2.controllers.Ec2Controller is untested

Status in OpenStack Identity (keystone):
  New

Bug description:
  The Ec2Controller controller is completely untested. I can break
  interfaces in keystone that the controller relies on and all the tests
  still pass.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1635389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635283] Re: OVS conntrack firewall doesn't work with OVS 2.6.0

2016-10-20 Thread Jakub Libosvar
*** This bug is a duplicate of bug 1634757 ***
https://bugs.launchpad.net/bugs/1634757

** This bug has been marked a duplicate of bug 1634757
   OVS firewall doesn't work with recent ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635283

Title:
  OVS conntrack firewall doesn't work with OVS 2.6.0

Status in neutron:
  Fix Released

Bug description:
  OVS introduced a check for inconsistent CT actions in the following
  commit:
  
https://github.com/openvswitch/ovs/commit/d86e03c57e295811533ed873602a3f2eadc85548

  
  If a flow doesn't meet the requirements for a CT action, the flow will be 
discarded. The CT firewall must specify the L3/L4 protocol (ip, ipv6, tcp, udp, 
sctp) whenever a CT action is used. For example:
 
Flow1="hard_timeout=0,idle_timeout=0,priority=90,ct_state=+new-est,reg5=6,cookie=13600354711851837061,table=73,actions=ct(commit,zone=NXM_NX_REG6[0..15]),normal"
  should be:
 Flow1="hard_timeout=0,idle_timeout=0,priority=90,  ip  
,ct_state=+new-est,reg5=6,cookie=13600354711851837061,table=73,actions=ct(commit,zone=NXM_NX_REG6[0..15]),normal"

  
  When the flows are added by the agent, an error appears:
  ...
  
hard_timeout=0,idle_timeout=0,priority=40,ct_state=+est,reg5=6,cookie=13600354711851837061,table=72,actions=ct(commit,zone=NXM_NX_REG6[0..15],exec(set_field:0x1->ct_mark))
  
hard_timeout=0,idle_timeout=0,priority=70,dl_type=0x0800,ct_state=+est-rel-rpl,reg5=6,nw_proto=17,cookie=13600354711851837061,table=82,udp_dst=0x1388,dl_dst=fa:16:3e:d3:28:85,actions=strip_vlan,output:6
  
hard_timeout=0,idle_timeout=0,priority=70,dl_type=0x0800,ct_state=+new-est,reg5=6,nw_proto=17,cookie=13600354711851837061,table=82,udp_dst=0x1388,dl_dst=fa:16:3e:d3:28:85,actions=ct(commit,zone=NXM_NX_REG6[0..15]),strip_vlan,output:6;
 Stdout: ; Stderr: ovs-ofctl: -:17: actions are invalid with specified match 
(OFPBAC_MATCH_INCONSISTENT)

  but the agent continues working without exiting.
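As a hedged illustration (this is not neutron's actual code, and the token matching is deliberately simplified), a pre-flight check along these lines could flag flows that combine a ct() action with no L3 match before handing them to ovs-ofctl:

```python
def flow_match_inconsistent(flow):
    """Return True if a flow string uses a ct() action without an L3 match.

    Recent OVS rejects such flows with OFPBAC_MATCH_INCONSISTENT, so the
    agent would need to add e.g. 'ip', 'ipv6' or 'dl_type=...' to the match.
    String matching here is simplified for illustration only.
    """
    match, _, actions = flow.partition("actions=")
    has_ct_action = "ct(" in actions
    has_l3_match = any(tok in match for tok in ("ip,", "ipv6,", "dl_type="))
    return has_ct_action and not has_l3_match
```

Run against the first flow quoted above, such a check reports the missing `ip` match; with the protocol token inserted, the flow passes.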

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635283/+subscriptions



[Yahoo-eng-team] [Bug 1635367] [NEW] Ram filter is broken since Mitaka

2016-10-20 Thread Erik Olof Gunnar Andersson
Public bug reported:

This patch that was backported to Mitaka made the assumption that free_ram_mb 
can never be less than 0.
https://github.com/openstack/nova/commit/016b810f675b20e8ce78f4c82dc9c679c0162b7a

free_ram_mb can be negative when the ram_allocation_ratio is set higher than 
one. This is also supported by the following comment in the code for the 
resource tracker.
> # free ram and disk may be negative, depending on policy:

This breaks any scheduler filter that supports oversubscription (e.g. memory or 
disk); as an example, see this unit test for the RAM filter.
https://github.com/openstack/nova/blob/master/nova/tests/unit/scheduler/filters/test_ram_filters.py#L43
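To illustrate the point (a sketch only, with made-up names rather than nova's actual filter code): an oversubscription-aware check must compare the request against total * ratio, not against free_ram_mb, precisely because the latter can legitimately be negative:

```python
def ram_filter_passes(total_ram_mb, used_ram_mb, requested_ram_mb,
                      ram_allocation_ratio):
    """Return True if the host can fit the request, allowing oversubscription.

    free_ram_mb (total - used) may already be negative when
    ram_allocation_ratio > 1.0, so the comparison must be made against the
    oversubscribed limit rather than against raw free memory.
    """
    memory_mb_limit = total_ram_mb * ram_allocation_ratio
    return used_ram_mb + requested_ram_mb <= memory_mb_limit
```

With a 1.5x ratio, a host with 1024 MB total and 1280 MB already used (free_ram_mb = -256) can still accept a 256 MB instance, which the assumption "free_ram_mb >= 0" would wrongly forbid.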

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635367

Title:
  Ram filter is broken since Mitaka

Status in OpenStack Compute (nova):
  New

Bug description:
  This patch that was backported to Mitaka made the assumption that free_ram_mb 
can never be less than 0.
  
https://github.com/openstack/nova/commit/016b810f675b20e8ce78f4c82dc9c679c0162b7a

  free_ram_mb can be negative when the ram_allocation_ratio is set higher than 
one. This is also supported by the following comment in the code for the 
resource tracker.
  > # free ram and disk may be negative, depending on policy:

  This breaks any scheduler filter that supports oversubscription (e.g. memory 
or disk); as an example, see this unit test for the RAM filter.
  
https://github.com/openstack/nova/blob/master/nova/tests/unit/scheduler/filters/test_ram_filters.py#L43

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1635367/+subscriptions



[Yahoo-eng-team] [Bug 1635358] [NEW] os-volume_attachments APIs requiring os-volumes permissions

2016-10-20 Thread Matthew Edmonds
Public bug reported:

The os-volume_attachments APIs (see http://developer.openstack.org/api-
ref/compute/#servers-with-volume-attachments-servers-os-volume-
attachments) have their own policy settings defined, yet are also
checking the policy settings which are defined for the os-volumes APIs
(see http://developer.openstack.org/api-ref/compute/#volume-extension-
os-volumes-os-snapshots-deprecated). This should never have been the
case, but especially not now that the os-volumes APIs have been
deprecated and have even stopped working with newer microversions.

seen in newton

** Affects: nova
 Importance: Undecided
 Assignee: Matthew Edmonds (edmondsw)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Matthew Edmonds (edmondsw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635358

Title:
  os-volume_attachments APIs requiring os-volumes permissions

Status in OpenStack Compute (nova):
  New

Bug description:
  The os-volume_attachments APIs (see http://developer.openstack.org
  /api-ref/compute/#servers-with-volume-attachments-servers-os-volume-
  attachments) have their own policy settings defined, yet are also
  checking the policy settings which are defined for the os-volumes APIs
  (see http://developer.openstack.org/api-ref/compute/#volume-extension-
  os-volumes-os-snapshots-deprecated). This should never have been the
  case, but especially not now that the os-volumes APIs have been
  deprecated and have even stopped working with newer microversions.

  seen in newton

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1635358/+subscriptions



[Yahoo-eng-team] [Bug 1635355] [NEW] subnet host routes are not added to the lbaasv2 namespace

2016-10-20 Thread Flávio Ramalho
Public bug reported:

Using mitaka@3cf4a7cffd5f76b9f8a3eed649d04075714dcd9c.

LBaaSv2 with HAProxy driver.

When creating a network whose subnet has host routes and using this network
in the load balancer, the host routes are not added to the
namespace created by LBaaS.

$ neutron subnet-show 6937f8fa-858d-4bc9-a3a5-18d2c957452a
+---+-+
| Field | Value   |
+---+-+
| allocation_pools  | {"start": "10.0.0.2", "end": "10.0.0.254"}  |
| cidr  | 10.0.0.0/16 |
| created_at| 2016-09-19T16:04:54 |
| description   | |
| dns_nameservers   | 10.20.6.1   |
| enable_dhcp   | True|
| gateway_ip| 10.0.0.1|
| host_routes   | {"destination": "10.0.0.0/8", "nexthop": "10.0.1.1"}|
| id| 6937f8fa-858d-4bc9-a3a5-18d2c957452a|
| ip_version| 4   |
| ipv6_address_mode | |
| ipv6_ra_mode  | |
| name  | provider|
| network_id| 64ee4355-4d7f-4170-80b4-5e2348af6a61|
| subnetpool_id | |
| tenant_id | 3f7a886d30404c22aee51f93b0d50e46|
| updated_at| 2016-09-19T16:10:15 |
+---+-+

# ip netns exec qlbaas-df0db86b-b3fd-4538-bfeb-f79f3a812059 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 qg-4cbe0b85-75
10.0.0.0        0.0.0.0         255.255.0.0     U     0      0        0 ns-e285ebec-71
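For illustration (a hypothetical helper, not the actual HAProxy namespace driver code), the fix amounts to translating each entry of the subnet's host_routes into a route installed inside the qlbaas namespace:

```python
def host_route_commands(namespace, host_routes):
    """Build the `ip route` commands needed so that a subnet's host_routes
    take effect inside a load balancer's network namespace.

    Each host route is a dict with 'destination' and 'nexthop' keys, as in
    the neutron subnet above.
    """
    return [
        "ip netns exec %s ip route replace %s via %s"
        % (namespace, route["destination"], route["nexthop"])
        for route in host_routes
    ]
```

For the subnet above this would yield `ip netns exec qlbaas-df0db86b-b3fd-4538-bfeb-f79f3a812059 ip route replace 10.0.0.0/8 via 10.0.1.1`.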

** Affects: neutron
 Importance: Undecided
 Assignee: Flávio Ramalho (flaviosr)
 Status: In Progress


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635355

Title:
  subnet host routes are not added to the lbaasv2 namespace

Status in neutron:
  In Progress

Bug description:
  Using mitaka@3cf4a7cffd5f76b9f8a3eed649d04075714dcd9c.

  LBaaSv2 with HAProxy driver.

  When creating a network whose subnet has host routes and using this
  network in the load balancer, the host routes are not added to the
  namespace created by LBaaS.

  $ neutron subnet-show 6937f8fa-858d-4bc9-a3a5-18d2c957452a
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | allocation_pools  | {"start": "10.0.0.2", "end": "10.0.0.254"}  
|
  | cidr  | 10.0.0.0/16 
|
  | created_at| 2016-09-19T16:04:54 
|
  | description   | 
|
  | dns_nameservers   | 10.20.6.1   
|
  | enable_dhcp   | True
|
  | gateway_ip| 10.0.0.1
|
  | host_routes   | {"destination": "10.0.0.0/8", "nexthop": "10.0.1.1"}
|
  | id| 6937f8fa-858d-4bc9-a3a5-18d2c957452a
|
  | ip_version| 4   
|
  | ipv6_address_mode | 
|
  | ipv6_ra_mode  | 
|
  | name  | provider
|
  | network_id| 64ee4355-4d7f-4170-80b4-5e2348af6a61
|
  | subnetpool_id | 
|
  | tenant_id | 3f7a886d30404c22aee51f93b0d50e46
|
  | updated_at| 2016-09-19T16:10:15 
|
  
+---+-+

  # ip netns exec qlbaas-df0db86b-b3fd-4538-bfeb-f79f3a812059 route -n
  Kernel IP routing table
  Destination Gateway Genmask   

[Yahoo-eng-team] [Bug 1635350] [NEW] bddeb does not work on a system deployed by maas

2016-10-20 Thread Joshua Powers
Public bug reported:

Observed Behavior:

On a system deployed by MAAS I checked out master and then tried to immediately 
build it:
> git clone https://git.launchpad.net/cloud-init
> cd cloud-init
> ./packages/bddeb

I get a number of permission errors for this file:
PermissionError: [Errno 13] Permission denied: 
'/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg'

See: https://paste.ubuntu.com/23354559/

If I run as root, however, it builds as expected.


Expected Behavior:
Running bddeb works as a non-root user.
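One plausible direction for a fix (a sketch under stated assumptions, not cloud-init's actual code): have the build tooling skip config files the current user cannot read instead of aborting on EACCES:

```python
import os


def readable_cfg_files(cfg_dir):
    """Return (readable, skipped) lists of config file paths under cfg_dir.

    Skipping unreadable entries lets a non-root build continue instead of
    dying with PermissionError on files such as 90_dpkg_maas.cfg.
    """
    readable, skipped = [], []
    for name in sorted(os.listdir(cfg_dir)):
        path = os.path.join(cfg_dir, name)
        if os.access(path, os.R_OK):
            readable.append(path)
        else:
            skipped.append(path)
    return readable, skipped
```

Whether skipping (versus warning and continuing, or only reading such files when packaging actually needs them) is the right behaviour is a design call for the cloud-init maintainers.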

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1635350

Title:
  bddeb does not work on a system deployed by maas

Status in cloud-init:
  New

Bug description:
  Observed Behavior:

  On a system deployed by MAAS I checked out master and then tried to 
immediately build it:
  > git clone https://git.launchpad.net/cloud-init
  > cd cloud-init
  > ./packages/bddeb

  I get a number of permission errors for this file:
  PermissionError: [Errno 13] Permission denied: 
'/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg'

  See: https://paste.ubuntu.com/23354559/

  If I run as root, however, it builds as expected.

  
  Expected Behavior:
  Running bddeb works as a non-root user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1635350/+subscriptions



[Yahoo-eng-team] [Bug 1633941] Re: VPNaaS: peer-cidr validation is invalid

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/387408
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=c1adb319f18feeffe02fd889c69347805f7e654c
Submitter: Jenkins
Branch:master

commit c1adb319f18feeffe02fd889c69347805f7e654c
Author: Dongcan Ye 
Date:   Mon Oct 17 20:32:36 2016 +0800

Validate peer_cidrs for ipsec_site_connections

When cidrs format of remote peer is incorrect, we should get
validate message from neutron-lib. Meanwhile this patch
add a validator for peer_cidrs in db.

Change-Id: Ia77208f9a6704b651929c35c85dbc227972014aa
Closes-Bug: #1633941


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1633941

Title:
  VPNaaS: peer-cidr validation is invalid

Status in neutron:
  Fix Released

Bug description:
  When creating an ipsec-site-connection in VPNaaS, it looks like the peer-cidr 
validation is broken.
  A CIDR like "10/8" should be rejected, as it is for subnet resources, but it 
is accepted, as shown below:

  $ neutron ipsec-site-connection-create --vpnservice-id service1 
--ikepolicy-id ike1 --ipsecpolicy-id ipsec1 --peer-id 192.168.7.1 
--peer-address 192.168.7.1 --peer-cidr 10/8 --psk pass
  Created a new ipsec_site_connection:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | auth_mode | psk|
  | description   ||
  | dpd   | {"action": "hold", "interval": 30, "timeout": 120} |
  | id| 2bed308f-5462-45bb-ae79-5cb9003424ef   |
  | ikepolicy_id  | be1f92ab-8064-4328-8862-777ae6878691   |
  | initiator | bi-directional |
  | ipsecpolicy_id| 09c67ae8-6ede-47ca-a15b-c52be1d7feaf   |
  | local_ep_group_id ||
  | local_id  ||
  | mtu   | 1500   |
  | name  ||
  | peer_address  | 192.168.7.1|
  | peer_cidrs| 10/8   |
  | peer_ep_group_id  ||
  | peer_id   | 192.168.7.1|
  | project_id| 068a47c758ae4b5d9fab059539e57740   |
  | psk   | pass   |
  | route_mode| static |
  | status| PENDING_CREATE |
  | tenant_id | 068a47c758ae4b5d9fab059539e57740   |
  | vpnservice_id | 4f82612c-5e3a-4699-aafa-bdfa5ede31fe   |
  +---++

  I think this is because the _validate_subnet_list_or_none method in
  neutron_vpnaas.extensions.vpnaas doesn't return its result.
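A minimal sketch of what a working validator looks like (using the stdlib ipaddress module for illustration; the real code goes through neutron's attribute validators): the crucial detail is that the error message must be returned, not merely computed.

```python
import ipaddress


def validate_peer_cidrs(cidrs):
    """Return an error message for the first malformed CIDR, or None."""
    for cidr in cidrs:
        try:
            ipaddress.ip_network(cidr)
        except ValueError:
            # Returning (not just building) the message is what lets the
            # API layer reject the request; dropping the return value is
            # exactly the bug described above.
            return "'%s' is not a valid CIDR" % cidr
    return None
```

Under this sketch, "10/8" produces an error message while "10.0.0.0/8" passes.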

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1633941/+subscriptions



[Yahoo-eng-team] [Bug 1635283] Re: OVS conntrack firewall doesn't work with OVS 2.6.0

2016-10-20 Thread Rodolfo Alonso
Sorry, I see it's covered by https://review.openstack.org/#/c/388467/

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635283

Title:
  OVS conntrack firewall doesn't work with OVS 2.6.0

Status in neutron:
  Fix Released

Bug description:
  OVS introduced a check for inconsistent CT actions in the following
  commit:
  
https://github.com/openvswitch/ovs/commit/d86e03c57e295811533ed873602a3f2eadc85548

  
  If a flow doesn't meet the requirements for a CT action, the flow will be 
discarded. The CT firewall must specify the L3/L4 protocol (ip, ipv6, tcp, udp, 
sctp) whenever a CT action is used. For example:
 
Flow1="hard_timeout=0,idle_timeout=0,priority=90,ct_state=+new-est,reg5=6,cookie=13600354711851837061,table=73,actions=ct(commit,zone=NXM_NX_REG6[0..15]),normal"
  should be:
 Flow1="hard_timeout=0,idle_timeout=0,priority=90,  ip  
,ct_state=+new-est,reg5=6,cookie=13600354711851837061,table=73,actions=ct(commit,zone=NXM_NX_REG6[0..15]),normal"

  
  When the flows are added by the agent, an error appears:
  ...
  
hard_timeout=0,idle_timeout=0,priority=40,ct_state=+est,reg5=6,cookie=13600354711851837061,table=72,actions=ct(commit,zone=NXM_NX_REG6[0..15],exec(set_field:0x1->ct_mark))
  
hard_timeout=0,idle_timeout=0,priority=70,dl_type=0x0800,ct_state=+est-rel-rpl,reg5=6,nw_proto=17,cookie=13600354711851837061,table=82,udp_dst=0x1388,dl_dst=fa:16:3e:d3:28:85,actions=strip_vlan,output:6
  
hard_timeout=0,idle_timeout=0,priority=70,dl_type=0x0800,ct_state=+new-est,reg5=6,nw_proto=17,cookie=13600354711851837061,table=82,udp_dst=0x1388,dl_dst=fa:16:3e:d3:28:85,actions=ct(commit,zone=NXM_NX_REG6[0..15]),strip_vlan,output:6;
 Stdout: ; Stderr: ovs-ofctl: -:17: actions are invalid with specified match 
(OFPBAC_MATCH_INCONSISTENT)

  but the agent continues working without exiting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635283/+subscriptions



[Yahoo-eng-team] [Bug 1634757] Re: OVS firewall doesn't work with recent ovs

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/388467
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=14ee32940c87186d36b253773e1d73b5274349bf
Submitter: Jenkins
Branch:master

commit 14ee32940c87186d36b253773e1d73b5274349bf
Author: IWAMOTO Toshihiro 
Date:   Wed Oct 19 15:28:32 2016 +0900

ovsfw: Add a dl_type match for action=ct flows

Recently ovs has been changed to require a dl_type match for
action=ct flows.

Change-Id: I9040d8c50ee30f5daef7ea931a28cd0cf7855f3e
Closes-bug: #1634757


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1634757

Title:
  OVS firewall doesn't work with recent ovs

Status in neutron:
  Fix Released

Bug description:
  On ovs branch-2.5 after commit fd6daf7, the OVS firewall fails to install
  flows with the following error.

  ovs-ofctl: -:17: actions are invalid with specified match
  (OFPBAC_MATCH_INCONSISTENT)

  A similar commit exists on master, so the firewall won't work on master
  either (not confirmed).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1634757/+subscriptions



[Yahoo-eng-team] [Bug 1630929] Re: Affinity instance slot reservation

2016-10-20 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/389216

** Changed in: nova
   Status: Opinion => In Progress

** Changed in: nova
 Assignee: (unassigned) => daniel.pawlik (daniel-pawlik)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630929

Title:
  Affinity instance slot reservation

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Currently ServerGroupAffinityFilter schedules all instances of a group
  on the same host, but it doesn't reserve any slots for future
  instances on the host.

  Example:
  max_instances_per_host=10
  quota_server_group_members=5

  When user_1 spawns 3 instances with the affinity server group policy, all 
the instances will be scheduled on the same host (host_A).
  If user_2 also spawns instances and they are placed on host_A, the 
max_instances_per_host limit will be reached, so user_1 cannot add 2 new 
instances to the same server group and a "No valid host found" error will be 
returned.

  My proposal is to add new parameters to nova.conf to configure 
ServerGroupAffinityFilter:
  - enable_slots_reservation (Boolean)
  - reserved_slots_per_instance (-1 will reserve the difference between 
max_instances_per_host and quota_server_group_members; values greater than 0 
will reserve the indicated number of slots per group)

  The Nova scheduler checks whether there are any instances with an affinity
  policy on a host and, based on that, counts the available slots.
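The arithmetic behind the proposed reservation can be sketched as follows (hypothetical names, not the actual filter code):

```python
def slots_for_other_groups(max_instances_per_host, instances_on_host,
                           group_size, group_quota):
    """Instances from outside an affinity group the host can still accept
    while guaranteeing the group can grow to its member quota.

    reserved_for_group models the reserved_slots_per_instance=-1 case:
    the gap between the group's quota and its current size is held back.
    """
    reserved_for_group = max(0, group_quota - group_size)
    return max(0, max_instances_per_host - instances_on_host
               - reserved_for_group)
```

With the numbers from the report (max_instances_per_host=10, quota_server_group_members=5, user_1 holding 3 group instances on host_A), the host would accept at most 5 more instances from other tenants, keeping 2 slots free so the group can still reach its quota.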

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630929/+subscriptions



[Yahoo-eng-team] [Bug 1362766] Re: ConnectionFailed: Connection to XXXXXX failed: 'HTTPSConnectionPool' object has no attribute 'insecure'

2016-10-20 Thread Chuck Short
This should be fixed in Ubuntu now.

** Changed in: python-glanceclient (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362766

Title:
  ConnectionFailed: Connection to XX failed: 'HTTPSConnectionPool'
  object has no attribute 'insecure'

Status in keystonemiddleware:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in python-glanceclient:
  Fix Released
Status in python-neutronclient:
  Invalid
Status in python-glanceclient package in Ubuntu:
  Fix Released

Bug description:
  While the compute manager was trying to authenticate with neutronclient,
  we see the following:

  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager Traceback 
(most recent call last):
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/powervc_nova/compute/manager.py", line 672, 
in _populate_admin_context
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
nclient.authenticate()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/neutronclient/client.py", line 231, in 
authenticate
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
self._authenticate_keystone()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/neutronclient/client.py", line 209, in 
_authenticate_keystone
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
allow_redirects=True)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/neutronclient/client.py", line 113, in 
_cs_request
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager raise 
exceptions.ConnectionFailed(reason=e)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object 
has no attribute 'insecure'

  Setting a pdb breakpoint and stepping into the code, I see that the
  requests library is getting a connection object from a pool.  The
  interesting thing is that the connection object is actually from
  glanceclient.common.https.HTTPSConnectionPool.  It seems odd to me
  that neutronclient is using a connection object from glanceclient
  pool, but I do not know this requests code.  Here is the stack just
  before failure:

/usr/lib/python2.7/site-packages/neutronclient/client.py(234)authenticate()
  -> self._authenticate_keystone()

/usr/lib/python2.7/site-packages/neutronclient/client.py(212)_authenticate_keystone()
  -> allow_redirects=True)
/usr/lib/python2.7/site-packages/neutronclient/client.py(106)_cs_request()
  -> resp, body = self.request(*args, **kargs)
/usr/lib/python2.7/site-packages/neutronclient/client.py(151)request()
  -> **kwargs)
/usr/lib/python2.7/site-packages/requests/api.py(44)request()
  -> return session.request(method=method, url=url, **kwargs)
/usr/lib/python2.7/site-packages/requests/sessions.py(335)request()
  -> resp = self.send(prep, **send_kwargs)
/usr/lib/python2.7/site-packages/requests/sessions.py(438)send()
  -> r = adapter.send(request, **kwargs)
/usr/lib/python2.7/site-packages/requests/adapters.py(292)send()
  -> timeout=timeout
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py(454)urlopen()
  -> conn = self._get_conn(timeout=pool_timeout)
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py(272)_get_conn()
  -> return conn or self._new_conn()
  > 
/usr/lib/python2.7/site-packages/glanceclient/common/https.py(100)_new_conn()
  -> return VerifiedHTTPSConnection(host=self.host,

  The code about to run there is this:

  class HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
  """
  HTTPSConnectionPool will be instantiated when a new
  connection is requested to the HTTPSAdapter.This
  implementation overwrites the _new_conn method and
  returns an instances of glanceclient's VerifiedHTTPSConnection
  which handles no compression.

  ssl_compression is hard-coded to False because this will
  be used just when the user sets --no-ssl-compression.
  """

  scheme = 'https'

  def _new_conn(self):
  self.num_connections += 1
  return VerifiedHTTPSConnection(host=self.host,
 port=self.port,
 key_file=self.key_file,
 cert_file=self.cert_file,
 cacert=self.ca_certs,
 insecure=self.insecure,
 ssl_compression=False)

  Note the self.insecure, which does not exist here.  I see the
  following fairly recent change in the glancecli

[Yahoo-eng-team] [Bug 1422046] Re: cinder backup-list is always listing all tenants's bug for admin

2016-10-20 Thread Chuck Short
** Changed in: python-cinderclient (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1422046

Title:
  cinder backup-list is always listing all tenants's bug for admin

Status in OpenStack Dashboard (Horizon):
  New
Status in ospurge:
  Fix Committed
Status in OpenStack Security Advisory:
  Won't Fix
Status in python-cinderclient:
  Fix Released
Status in python-cinderclient package in Ubuntu:
  Fix Released

Bug description:
  cinder backup-list doesn't support the '--all-tenants' argument for admin
  right now. This leads to admin always getting all tenants' backups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1422046/+subscriptions



[Yahoo-eng-team] [Bug 1635259] [NEW] Signing error: Unable to load certificate - ensure you have configured PKI with "keystone-manage pki_setup"

2016-10-20 Thread Jens Offenbach
Public bug reported:

I have a fresh installation of OpenStack Newton based on Ubuntu 16.04. I
am using Ceph Object Gateway as the object storage implementation, which
regularly makes the following call: "GET
http://controller:5000/v3/auth/tokens/OS-PKI/revoked".

This call causes the following exception in the log of Keystone:
2016-10-20 14:30:33.764 13934 INFO keystone.common.wsgi 
[req-fccd6064-2c29-4929-8a68-8b439db14957 924990606827451ca0599a5dcc8fb2ec 
76e3b8253287442bac2772138583cde9 - default default] GET 
http://os-identity:5000/v3/auth/tokens/OS-PKI/revoked
2016-10-20 14:30:33.889 13934 ERROR keystoneclient.common.cms 
[req-fccd6064-2c29-4929-8a68-8b439db14957 924990606827451ca0599a5dcc8fb2ec 
76e3b8253287442bac2772138583cde9 - default default] Signing error: Unable to 
load certificate - ensure you have configured PKI with "keystone-manage 
pki_setup"
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi 
[req-fccd6064-2c29-4929-8a68-8b439db14957 924990606827451ca0599a5dcc8fb2ec 
76e3b8253287442bac2772138583cde9 - default default] Command 'openssl' returned 
non-zero exit status 3
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi Traceback (most recent 
call last):
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 225, in 
__call__
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi result = 
method(req, **params)
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/controller.py", line 164, in 
inner
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi return f(self, 
request, *args, **kwargs)
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line 590, in 
revocation_list
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi 
CONF.signing.keyfile)
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/common/cms.py", line 325, in 
cms_sign_text
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi 
signing_key_file_name, message_digest=message_digest)
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/common/cms.py", line 373, in 
cms_sign_data
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi raise 
subprocess.CalledProcessError(retcode, 'openssl')
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi CalledProcessError: 
Command 'openssl' returned non-zero exit status 3
2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi

This is my keystone.conf:

[DEFAULT]
debug = false
# NOTE: log_dir alone does not work for Keystone
log_file = /var/log/keystone/keystone.log
transport_url = 
rabbit://keystone:XYZ@os-rabbit01:5672,keystone:XYZ@os-rabbit02:5672/openstack

[assignment]
driver = sql

[cache]
backend = oslo_cache.memcache_pool
enabled = true
memcache_servers = os-memcache:11211

[credential]
provider = fernet
key_repository = /etc/keystone/credential-keys

[database]
connection = mysql+pymysql://keystone:XYZ@os-controller/keystone
max_retries = -1

[memcache]
servers = os-memcache:11211

[oslo_messaging_notifications]
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = true
rabbit_ha_queues = true
rabbit_retry_backoff = 2
rabbit_retry_interval = 1

[oslo_middleware]
enable_proxy_headers_parsing = true

[token]
driver = sql
provider = uuid

[extra_headers]
Distribution = Ubuntu

I know that with the Newton release a lot of things have been changed
regarding signing and PKI. How can calls to Keystone's revocation list
be handled in the Newton release without a PKI setup?

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  I have a fresh installation of OpenStack Newton based on Ubuntu 16.04. I
  am using Ceph Object Gateway as object storage implementation which
  regularly makes the following call "GET
  http://controller:5000/v3/auth/tokens/OS-PKI/revoked".
  
  This call causes the following exception in the log of Keystone:
  2016-10-20 14:30:33.764 13934 INFO keystone.common.wsgi 
[req-fccd6064-2c29-4929-8a68-8b439db14957 924990606827451ca0599a5dcc8fb2ec 
76e3b8253287442bac2772138583cde9 - default default] GET 
http://os-identity:5000/v3/auth/tokens/OS-PKI/revoked
  2016-10-20 14:30:33.889 13934 ERROR keystoneclient.common.cms 
[req-fccd6064-2c29-4929-8a68-8b439db14957 924990606827451ca0599a5dcc8fb2ec 
76e3b8253287442bac2772138583cde9 - default default] Signing error: Unable to 
load certificate - ensure you have configured PKI with "keystone-manage 
pki_setup"
  2016-10-20 14:30:33.890 13934 ERROR keystone.common.wsgi 
[req-fccd6064-2c29-4929-8a68-8b439db14957 924990606827451ca0599a5dcc8fb2ec 
76e3b8253287442bac2772138583cde9 - default default] Command 'openssl' returned 
non-zero exit status 3
  2016-10-20 14:30:33.890 13934 ERROR keyst

[Yahoo-eng-team] [Bug 1610932] Re: XenServer 7.0 failed live-migrate with iSCSI SR volume

2016-10-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/359548
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=22b8ae0975187a847fc201a750f05be0af9818a4
Submitter: Jenkins
Branch:master

commit 22b8ae0975187a847fc201a750f05be0af9818a4
Author: Huan Xie 
Date:   Wed Aug 24 02:25:10 2016 +

XenAPI: Fix VM live-migrate with iSCSI SR volume

With XenServer 7.0 (XCP 2.1.0), VM live migration with iSCSI volume
will fail due to no relax_xsm_sr_check in /etc/xapi.conf in Dom0.
But since XS7.0, the default value of relax_xsm_sr_check is true if
not configured in /etc/xapi.conf in Dom0.
This fix is to add checks on XCP version, if it's equal or greater
than 2.1.0, will treat relax_xsm_sr_check=true if not found in
/etc/xapi.conf

Change-Id: I88d1d384ab7587c428e517d184258bb517dfb4ab
Closes-bug: #1610932
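The check described in the commit message boils down to a tuple comparison on the XCP platform version. A minimal Python sketch of the idea (the function names and version-string handling are illustrative assumptions, not Nova's actual code):

```python
# Illustrative sketch: decide whether relax_xsm_sr_check should be treated
# as true by default, based on the XCP platform version. The real logic
# lives in Nova's XenAPI driver; names here are assumptions.

def parse_version(version_str):
    """Turn a dotted version string like '2.1.0' into a comparable tuple."""
    return tuple(int(part) for part in version_str.split('.'))

def default_relax_xsm_sr_check(xcp_version, configured=None):
    """Return the effective relax_xsm_sr_check value.

    If the option is explicitly set in /etc/xapi.conf, that value wins.
    Otherwise, XCP >= 2.1.0 (XenServer 7.0) defaults it to true.
    """
    if configured is not None:
        return configured
    return parse_version(xcp_version) >= (2, 1, 0)

print(default_relax_xsm_sr_check('2.1.0'))   # XS 7.0, option unset -> True
print(default_relax_xsm_sr_check('2.0.0'))   # older XCP, option unset -> False
```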


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1610932

Title:
  XenServer 7.0 failed live-migrate with iSCSI SR volume

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I deployed a two-compute-node devstack environment with XenServer 7.0
  (XCP platform version 2.1.0).
  I booted a VM and tried to live-migrate it to another compute node, but
  it failed on both the source and destination compute nodes with:

  2016-08-08 02:29:46.042 DEBUG oslo_messaging.drivers.amqpdriver [-] received 
reply msg_id: 8022537717374357b576f5a11ec98719 from (pid=30939) __call_ 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:296
  2016-08-08 02:29:46.045 ERROR oslo_messaging.rpc.server 
[req-fc49665b-5361-4042-bcca-2992335f75a8 admin demo] Exception during message 
handling
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
133, in _process_incoming
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
150, in dispatch
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
121, in _do_dispatch
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server result = func(ctxt, 
**new_args)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/opt/stack/nova/nova/exception_wrapper.py", line 75, in wrapped
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server function_name, 
call_dict, binary)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
_exit_
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server self.force_reraise()
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/opt/stack/nova/nova/exception_wrapper.py", line 66, in wrapped
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server return f(self, 
context, *args, **kw)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/opt/stack/nova/nova/compute/utils.py", line 613, in decorated_function
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server return function(self, 
context, *args, **kwargs)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/opt/stack/nova/nova/compute/manager.py", line 216, in decorated_function
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server kwargs['instance'], 
e, sys.exc_info())
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
_exit_
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server self.force_reraise()
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/opt/stack/nova/nova/compute/manager.py", line 204, in decorated_function
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server return function(self, 
context, *args, **kwargs)
  2016-08-08 02:29:46.045 TRACE oslo_messaging.rpc.server File 
"/opt/stack/nova

[Yahoo-eng-team] [Bug 1635230] [NEW] Unable to specify hw_vif_model image metadata for Virtuozzo hypervisor virt_type=parallels

2016-10-20 Thread Maxim Nestratov
Public bug reported:

If you specify hw_vif_model in image metadata when using Virtuozzo
(libvirt virt_type=parallels), an UnsupportedVirtType exception is
raised.

2016-08-04 19:11:44.880 16260 ERROR nova.compute.manager 
[req-8b5cdf08-e3b3-4d0d-9d1c-a1ff10daee71 690eedf97d024da8974029784c26cff1 
4a7ea1a081334814b7f45b5950657564 - - -] [instance: 
63ae37d5-a377-4ff8-9ea8-2bfc86565415] Instance failed to spawn
[..]
2016-08-04 19:11:44.880 16260 ERROR nova.compute.manager [instance: 
63ae37d5-a377-4ff8-9ea8-2bfc86565415]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 81, in 
is_vif_model_valid_for_virt
2016-08-04 19:11:44.880 16260 ERROR nova.compute.manager [instance: 
63ae37d5-a377-4ff8-9ea8-2bfc86565415] raise 
exception.UnsupportedVirtType(virt=virt_type)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635230

Title:
  Unable to specify hw_vif_model image metadata for Virtuozzo hypervisor
  virt_type=parallels

Status in OpenStack Compute (nova):
  New

Bug description:
  If you specify hw_vif_model in image metadata when using Virtuozzo
  (libvirt virt_type=parallels), an UnsupportedVirtType exception is
  raised.

  2016-08-04 19:11:44.880 16260 ERROR nova.compute.manager 
[req-8b5cdf08-e3b3-4d0d-9d1c-a1ff10daee71 690eedf97d024da8974029784c26cff1 
4a7ea1a081334814b7f45b5950657564 - - -] [instance: 
63ae37d5-a377-4ff8-9ea8-2bfc86565415] Instance failed to spawn
  [..]
  2016-08-04 19:11:44.880 16260 ERROR nova.compute.manager [instance: 
63ae37d5-a377-4ff8-9ea8-2bfc86565415]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 81, in 
is_vif_model_valid_for_virt
  2016-08-04 19:11:44.880 16260 ERROR nova.compute.manager [instance: 
63ae37d5-a377-4ff8-9ea8-2bfc86565415] raise 
exception.UnsupportedVirtType(virt=virt_type)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1635230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635219] [NEW] Linux Bridge - Multiple Interfaces - Iptables Rules

2016-10-20 Thread Alexander Birkner
Public bug reported:

Hello,

we are trying to have multiple IPv4 and IPv6 addresses on the same Nova
instance.

We are using IPv6 router aggregation, so if we want multiple IPv6
addresses we need multiple interfaces. We also had the problem that DHCP
from NetworkManager only works with one IP, so each IPv4 address also
needs its own interface. That is why we decided to add a new network
interface for each IPv4/IPv6 pair.

But there is one problem: Linux adds multiple default routes with
different metrics, so a packet sent from a non-primary IP address leaves
the server on the main interface. As a result only the first IP works;
traffic from the other IPs is then blocked by OpenStack's iptables
rules.

Neutron creates the following rules for each interface:

Chain neutron-linuxbri-sdb819e32-1 (1 references)
 pkts bytes target prot opt in  out source      destination
    0     0 RETURN all  --  *   *   10.67.1.45  0.0.0.0/0    MAC FA:16:3E:EA:3D:EA /* Allow traffic from defined IP/MAC pairs. */
    0     0 DROP   all  --  *   *   0.0.0.0/0   0.0.0.0/0    /* Drop traffic without an IP/MAC allow rule. */

Each interface only allows its own IP, but to make this work each IP
from every interface would have to be whitelisted. Disabling iptables is
not an option either, because we need this kind of protection.

Is there any chance to configure this in Neutron? Or is there a better
solution to have multiple IPs with Linux Bridge?

Regards,
Alexander

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: iptables linuxbridge

** Tags added: linuxbridge

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635219

Title:
  Linux Bridge - Multiple Interfaces - Iptables Rules

Status in neutron:
  New

Bug description:
  Hello,

  we are trying to have multiple IPv4 and IPv6 addresses on the same
  Nova instance.

  We are using IPv6 router aggregation, so if we want multiple IPv6
  addresses we need multiple interfaces. We also had the problem that
  DHCP from NetworkManager only works with one IP, so each IPv4 address
  also needs its own interface. That is why we decided to add a new
  network interface for each IPv4/IPv6 pair.

  But there is one problem: Linux adds multiple default routes with
  different metrics, so a packet sent from a non-primary IP address
  leaves the server on the main interface. As a result only the first IP
  works; traffic from the other IPs is then blocked by OpenStack's
  iptables rules.

  Neutron creates the following rules for each interface:

  Chain neutron-linuxbri-sdb819e32-1 (1 references)
   pkts bytes target prot opt in  out source      destination
      0     0 RETURN all  --  *   *   10.67.1.45  0.0.0.0/0    MAC FA:16:3E:EA:3D:EA /* Allow traffic from defined IP/MAC pairs. */
      0     0 DROP   all  --  *   *   0.0.0.0/0   0.0.0.0/0    /* Drop traffic without an IP/MAC allow rule. */

  Each interface only allows its own IP, but to make this work each IP
  from every interface would have to be whitelisted. Disabling iptables
  is not an option either, because we need this kind of protection.

  Is there any chance to configure this in Neutron? Or is there a better
  solution to have multiple IPs with Linux Bridge?

  Regards,
  Alexander

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1634884] Re: Unknown service q-agent-notifier-network-delete

2016-10-20 Thread Oleg Bondarev
By default the neutron server sends a fanout notification for every
event in which a resource's state changes. Some are consumed by the
current agents (port update, etc.); some may or may not be consumed,
depending on the plugins/drivers/extensions/agents used in a given
installation. I guess that with your set of drivers
q-agent-notifier-network-delete is not consumed by anyone, but that does
not mean there are no other agents that might need this information.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1634884

Title:
  Unknown service q-agent-notifier-network-delete

Status in neutron:
  Invalid

Bug description:
  While testing the oslo.messaging zmq driver, we came across the
  following behavior of neutron: it sends fanout messages to the target
  , although there are no such services. The zmq driver uses
  redis as a matchmaker for coordination between RPC clients and
  servers, and each server registers itself in redis so that clients can
  find it. In our particular case redis has no records associated with
  the topic q-agent-notifier-network-delete, and our current behavior is
  to retry (20 seconds at most, 2 seconds between attempts) getting
  available hosts from redis and then drop the message. As a result we
  get failing tests. By the way, if we drop those messages immediately,
  everything works fine.

  For example, here is the link to tempest logs:
  http://logs.openstack.org/44/388044/1/check/gate-tempest-neutron-dsvm-
  src-oslo.messaging-
  zmq/925c9ee/logs/screen-q-svc.txt.gz#_2016-10-18_17_39_59_053

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1634884/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540555] Re: Nobody listens to network delete notifications (OVS specific)

2016-10-20 Thread Oleksii Zamiatin
My concern is that the correct behavior would be not to write to a
non-existent topic. It looks like some code that is currently common to
both backend drivers should be moved down into a specific backend and
removed from the others.

** Summary changed:

- Nobody listens to network delete notifications
+ Nobody listens to network delete notifications (OVS specific)

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540555

Title:
  Nobody listens to network delete notifications (OVS specific)

Status in neutron:
  New

Bug description:
  Here it can be seen that agents are notified of network delete event:
  
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/rpc.py#L304

  But on the agent side only network update events are being listened:
  
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L374

  This was uncovered during testing of the Pika driver, because unlike
  the current Rabbit (Kombu) driver it does not allow sending messages
  to queues that do not exist. That behaviour will probably be changed
  in the Pika driver, but it is still worthwhile to get rid of
  unnecessary notifications on the Neutron side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635180] [NEW] Firewall creation is stuck in "PENDING_UPDATE"

2016-10-20 Thread Jens Offenbach
Public bug reported:

I have a fresh installation of OpenStack Newton running on Ubuntu 16.04.
The setup contains DVR, VPNaaS, LBaaS_v2 and FWaaS. After firewall
creation (no router association) the firewall transitions to the state
"INACTIVE". When I associate a distributed router with the newly created
firewall, it transitions to the state "PENDING_UPDATE" and gets stuck
there.

$ neutron firewall-rule-create --protocol tcp --destination-port 22 --action deny
Created a new firewall_rule:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| action                 | deny                                 |
| description            |                                      |
| destination_ip_address |                                      |
| destination_port       | 22                                   |
| enabled                | True                                 |
| firewall_policy_id     |                                      |
| id                     | 1c25c04c-ec29-4d5d-8069-f1510a7ebbfe |
| ip_version             | 4                                    |
| name                   |                                      |
| position               |                                      |
| project_id             | d332c49688364651a7fca7c866a3f933     |
| protocol               | tcp                                  |
| shared                 | False                                |
| source_ip_address      |                                      |
| source_port            |                                      |
| tenant_id              | d332c49688364651a7fca7c866a3f933     |
+------------------------+--------------------------------------+

$ neutron firewall-policy-create --firewall-rules 1c25c04c-ec29-4d5d-8069-f1510a7ebbfe test-policy
Created a new firewall_policy:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| audited        | False                                |
| description    |                                      |
| firewall_rules | 1c25c04c-ec29-4d5d-8069-f1510a7ebbfe |
| id             | 2daf47c9-00fd-44dd-a6a5-329bfa58c76a |
| name           | test-policy                          |
| project_id     | d332c49688364651a7fca7c866a3f933     |
| shared         | False                                |
| tenant_id      | d332c49688364651a7fca7c866a3f933     |
+----------------+--------------------------------------+

$ neutron firewall-create 2daf47c9-00fd-44dd-a6a5-329bfa58c76a --name test-firewall
Created a new firewall:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| description        |                                      |
| firewall_policy_id | 2daf47c9-00fd-44dd-a6a5-329bfa58c76a |
| id                 | 2fe9749d-b8e5-4ed8-8fd8-701fb3fbb571 |
| name               | test-firewall                        |
| project_id         | d332c49688364651a7fca7c866a3f933     |
| router_ids         |                                      |
| status             | INACTIVE                             |
| tenant_id          | d332c49688364651a7fca7c866a3f933     |
+--------------------+--------------------------------------+

$ neutron firewall-show test-firewall
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| description        |                                      |
| firewall_policy_id | 2daf47c9-00fd-44dd-a6a5-329bfa58c76a |
| id                 | 2fe9749d-b8e5-4ed8-8fd8-701fb3fbb571 |
| name               | test-firewall                        |
| project_id         | d332c49688364651a7fca7c866a3f933     |
| router_ids         |                                      |
| status             | INACTIVE                             |
| tenant_id          | d332c49688364651a7fca7c866a3f933     |
+--------------------+--------------------------------------+

$ neutron firewall-update --router demo-router test-firewall
Updated firewall: test-firewall

$ neutron firewall-show test-firewall
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| description        |                                      |
| firewall_policy_id | 2daf47c9-00fd-44dd-a6a5-329bfa58c76a |
| id                 | 2fe9749d-b8e5-4ed8-8fd8-701fb3fbb571 |
| name               | test

[Yahoo-eng-team] [Bug 1635182] [NEW] [placement] in api code repeated need to restate json_error_formatter is error prone

2016-10-20 Thread Chris Dent
Public bug reported:

Any time placement API (v1.0) code wants to raise a webob.exc exception
of some kind, it needs to pass a json_formatter argument in the call.
This is easy to forget. We can probably figure out a way to make this a
default, perhaps by making subclasses of the existing exceptions or
setting some defaults somewhere. Not a huge deal but worth looking into.
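The subclassing idea could work roughly as sketched below. To keep the sketch self-contained it uses a stand-in base class instead of the real webob.exc hierarchy, so every name here is an illustrative assumption rather than Nova's actual code:

```python
# Illustrative pattern: bake a default json_formatter into an exception
# subclass so individual call sites cannot forget to pass it.

def json_error_formatter(body):
    """Stand-in for the real placement error formatter."""
    return {'errors': [body]}

class FakeHTTPError(Exception):
    """Stand-in base class for webob.exc.WSGIHTTPException."""
    def __init__(self, detail, json_formatter=None):
        super().__init__(detail)
        self.json_formatter = json_formatter

class PlacementHTTPError(FakeHTTPError):
    """Subclass that supplies json_formatter when the caller omits it."""
    def __init__(self, detail, json_formatter=None):
        super().__init__(
            detail,
            json_formatter=json_formatter or json_error_formatter)

err = PlacementHTTPError('bad request')
print(err.json_formatter is json_error_formatter)  # True: default applied
```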

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635182

Title:
  [placement] in api code repeated need to restate json_error_formatter
  is error prone

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Any time placement API (v1.0) code wants to raise a webob.exc
  exception of some kind, it needs to pass a json_formatter argument in
  the call. This is easy to forget. We can probably figure out a way to
  make this a default, perhaps by making subclasses of the existing
  exceptions or setting some defaults somewhere. Not a huge deal but
  worth looking into.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1635182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632538] Re: Using generate_service_certificate and undercloud_public_vip in undercloud.conf breaks nova

2016-10-20 Thread James Page
Dropped UCA targets; this package comes directly from Ubuntu, so we
should fix it there.

** No longer affects: cloud-archive

** No longer affects: cloud-archive/newton

** Package changed: nova (Ubuntu Yakkety) => python-rfc3986 (Ubuntu
Yakkety)

** Also affects: python-rfc3986 (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: python-rfc3986 (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: python-rfc3986 (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: python-rfc3986 (Ubuntu Yakkety)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632538

Title:
  Using generate_service_certificate and undercloud_public_vip in
  undercloud.conf breaks nova

Status in OpenStack Compute (nova):
  Incomplete
Status in OpenStack Compute (nova) newton series:
  Incomplete
Status in tripleo:
  Triaged
Status in python-rfc3986 package in Ubuntu:
  Triaged
Status in python-rfc3986 source package in Xenial:
  Triaged
Status in python-rfc3986 source package in Yakkety:
  Triaged
Status in python-rfc3986 source package in Zesty:
  Triaged

Bug description:
  Enabling SSL on the Undercloud using generate_service_certificate
  results in all Nova services on the undercloud (api, cert, compute,
  conductor, scheduler) failing with errors similar to the
  following:

  2016-10-11 22:28:27.327 66082 CRITICAL nova 
[req-b5f37af3-96fc-42e2-aaa6-52815aca07fe - - - - -] ConfigFileValueError: 
Value for option url is not valid: invalid URI: 
'https://rdo-ci-fx2-06-s5.v103.rdoci.lab.eng.rdu.redhat.com:13696'
  2016-10-11 22:28:27.327 66082 ERROR nova Traceback (most recent call last):
  2016-10-11 22:28:27.327 66082 ERROR nova   File "/usr/bin/nova-cert", line 
10, in 
  2016-10-11 22:28:27.327 66082 ERROR nova sys.exit(main())
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/cert.py", line 49, in main
  2016-10-11 22:28:27.327 66082 ERROR nova service.wait()
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 415, in wait
  2016-10-11 22:28:27.327 66082 ERROR nova _launcher.wait()
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 328, in wait
  2016-10-11 22:28:27.327 66082 ERROR nova status, signo = 
self._wait_for_exit_or_signal()
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 303, in 
_wait_for_exit_or_signal
  2016-10-11 22:28:27.327 66082 ERROR nova self.conf.log_opt_values(LOG, 
logging.DEBUG)
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2630, in 
log_opt_values
  2016-10-11 22:28:27.327 66082 ERROR nova _sanitize(opt, 
getattr(group_attr, opt_name)))
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3061, in __getattr__
  2016-10-11 22:28:27.327 66082 ERROR nova return self._conf._get(name, 
self._group)
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2672, in _get
  2016-10-11 22:28:27.327 66082 ERROR nova value = self._do_get(name, 
group, namespace)
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2715, in _do_get
  2016-10-11 22:28:27.327 66082 ERROR nova % (opt.name, str(ve)))
  2016-10-11 22:28:27.327 66082 ERROR nova ConfigFileValueError: Value for 
option url is not valid: invalid URI: 
'https://rdo-ci-fx2-06-s5.v103.rdoci.lab.eng.rdu.redhat.com:13696'
  2016-10-11 22:28:27.327 66082 ERROR nova 

  I believe the failure happens inside the [neutron] section of
  /etc/nova/nova.conf.

  This does not look related to the scheme (https) being used as the
  result of enabling SSL, because a one-off test with the
  openstack-nova-conductor service after changing the scheme to http
  results in the same startup failure.

  Another one-off test, substituting an IP address for the FQDN in
  nova.conf with the openstack-nova-conductor service as before, results
  in openstack-nova-conductor starting properly but eventually failing
  with a connection-related error due to the one-off data used (an IP
  address of 1.2.3.4).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632538] Re: Using generate_service_certificate and undercloud_public_vip in undercloud.conf breaks nova

2016-10-20 Thread James Page
** Also affects: nova (Ubuntu Zesty)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu Yakkety)
   Status: New => Incomplete

** Changed in: nova (Ubuntu Yakkety)
   Status: Incomplete => Triaged

** Changed in: nova (Ubuntu Zesty)
   Status: New => Triaged

** Changed in: nova (Ubuntu Zesty)
   Importance: Undecided => Medium

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/newton
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/newton
   Status: New => Triaged

** Changed in: cloud-archive/newton
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632538

Title:
  Using generate_service_certificate and undercloud_public_vip in
  undercloud.conf breaks nova

Status in OpenStack Compute (nova):
  Incomplete
Status in OpenStack Compute (nova) newton series:
  Incomplete
Status in tripleo:
  Triaged
Status in nova package in Ubuntu:
  Triaged
Status in nova source package in Yakkety:
  Triaged
Status in nova source package in Zesty:
  Triaged

Bug description:
  Enabling SSL on the Undercloud using generate_service_certificate
  results in all Nova services on the undercloud (api, cert, compute,
  conductor, scheduler) failing with errors similar to the
  following:

  2016-10-11 22:28:27.327 66082 CRITICAL nova 
[req-b5f37af3-96fc-42e2-aaa6-52815aca07fe - - - - -] ConfigFileValueError: 
Value for option url is not valid: invalid URI: 
'https://rdo-ci-fx2-06-s5.v103.rdoci.lab.eng.rdu.redhat.com:13696'
  2016-10-11 22:28:27.327 66082 ERROR nova Traceback (most recent call last):
  2016-10-11 22:28:27.327 66082 ERROR nova   File "/usr/bin/nova-cert", line 
10, in 
  2016-10-11 22:28:27.327 66082 ERROR nova sys.exit(main())
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/cert.py", line 49, in main
  2016-10-11 22:28:27.327 66082 ERROR nova service.wait()
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 415, in wait
  2016-10-11 22:28:27.327 66082 ERROR nova _launcher.wait()
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 328, in wait
  2016-10-11 22:28:27.327 66082 ERROR nova status, signo = 
self._wait_for_exit_or_signal()
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 303, in 
_wait_for_exit_or_signal
  2016-10-11 22:28:27.327 66082 ERROR nova self.conf.log_opt_values(LOG, 
logging.DEBUG)
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2630, in 
log_opt_values
  2016-10-11 22:28:27.327 66082 ERROR nova _sanitize(opt, 
getattr(group_attr, opt_name)))
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3061, in __getattr__
  2016-10-11 22:28:27.327 66082 ERROR nova return self._conf._get(name, 
self._group)
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2672, in _get
  2016-10-11 22:28:27.327 66082 ERROR nova value = self._do_get(name, 
group, namespace)
  2016-10-11 22:28:27.327 66082 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2715, in _do_get
  2016-10-11 22:28:27.327 66082 ERROR nova % (opt.name, str(ve)))
  2016-10-11 22:28:27.327 66082 ERROR nova ConfigFileValueError: Value for 
option url is not valid: invalid URI: 
'https://rdo-ci-fx2-06-s5.v103.rdoci.lab.eng.rdu.redhat.com:13696'
  2016-10-11 22:28:27.327 66082 ERROR nova 

  I believe the failure happens inside the [neutron] section of
  /etc/nova/nova.conf.

  This does not look related to the scheme (https) being used as the
  result of enabling SSL, because a one-off test with the
  openstack-nova-conductor service after changing the scheme to http
  results in the same startup failure.

  Another one-off test, substituting an IP address for the FQDN in
  nova.conf with the openstack-nova-conductor service as before, results
  in openstack-nova-conductor starting properly but eventually failing
  with a connection-related error due to the one-off data used (an IP
  address of 1.2.3.4).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635131] Re: Value for option url is not valid: invalid URI (inappropriate validation of rfc3986)

2016-10-20 Thread György Szombathelyi
Yepp, please upgrade python-rfc3986

** Also affects: python-rfc3986 (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635131

Title:
  Value for option url is not valid: invalid URI (inappropriate
  validation of rfc3986)

Status in OpenStack Compute (nova):
  New
Status in python-rfc3986 package in Ubuntu:
  New

Bug description:
  I have a freshly installed OpenStack Newton environment based on
  Ubuntu 16.04. I am facing a regression in the Nova module regarding
  URI parsing.

  $ apt list --installed | grep -E 'nova|rfc'
  nova-api/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
  nova-common/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed,automatic]
  nova-conductor/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
  nova-consoleauth/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 
all [installed]
  nova-novncproxy/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 
all [installed]
  nova-scheduler/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
  python-nova/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed,automatic]
  python-novaclient/xenial-updates,xenial-updates,now 2:6.0.0-0ubuntu1~cloud0 
all [installed,automatic]
  python-rfc3986/xenial,xenial,now 0.2.0-2 all [installed,automatic]

  This is an excerpt from my nova.conf:
  [neutron]
  auth_type = password
  auth_uri = http://os-identity:5000
  auth_url = http://os-identity:35357
  metadata_proxy_shared_secret = XYZ
  password = XYZ
  project_domain_name = default
  project_name = service
  region_name = RegionOne
  service_metadata_proxy = true
  url = http://os-network:9696
  user_domain_name = default
  username = neutron

  The following error is stated in the log:
  2016-10-20 08:28:40.251 19479 CRITICAL nova 
[req-26ab01e7-72b0-4792-a8a4-56454a4c390c - - - - -] ConfigFileValueError: 
Value for option url is not valid: invalid URI: 'http://os-network:9696'
  2016-10-20 08:28:40.251 19479 ERROR nova Traceback (most recent call last):
  2016-10-20 08:28:40.251 19479 ERROR nova   File "/usr/bin/nova-api", line 10, 
in 
  2016-10-20 08:28:40.251 19479 ERROR nova sys.exit(main())
  2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api.py", line 73, in main
  2016-10-20 08:28:40.251 19479 ERROR nova launcher.wait()
  2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
  2016-10-20 08:28:40.251 19479 ERROR nova self.conf.log_opt_values(LOG, 
logging.DEBUG)
  2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
  2016-10-20 08:28:40.251 19479 ERROR nova _sanitize(opt, 
getattr(group_attr, opt_name)))
  2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  2016-10-20 08:28:40.251 19479 ERROR nova return self._conf._get(name, 
self._group)
  2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  2016-10-20 08:28:40.251 19479 ERROR nova value = self._do_get(name, 
group, namespace)
  2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  2016-10-20 08:28:40.251 19479 ERROR nova % (opt.name, str(ve)))
  2016-10-20 08:28:40.251 19479 ERROR nova ConfigFileValueError: Value for 
option url is not valid: invalid URI: 'http://os-network:9696'

  The same problem was reported here:
  https://bugs.launchpad.net/kolla/+bug/1629729

  As stated there, python-rfc3986 versions older than 0.3.0 cannot handle
  URIs whose host names contain '-' characters.

  A current workaround is to use e.g. the IP address instead of a
  hostname or alias that has a hyphen in its name.

  The same problem exists in nova-compute on my compute node using:
  [vnc]
  enabled = true
  novncproxy_base_url = https://os-cloud.mycompany.com:6080/vnc_auto.html
  vncserver_listen = 0.0.0.0

  The setting causes the same invalid uri exception as stated above.

  We need a new version of 'python-rfc3986'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1635131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635160] [NEW] No bootable device when evacuate a instance on shared_storage_storage ceph

2016-10-20 Thread sampsonhuo
Public bug reported:

Nova Version: nova-kilo-2015.1.1
Ceph Version: ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

When I tested the nova evacuate function, I found that the instance could not 
boot normally after being evacuated.
In the VNC console I saw a "No bootable device" message.

Through some tests I found the reason: when shared storage is used, the
rebuild task flow does not fetch the image metadata again. So if you have
set metadata on the image, the problem occurs.

The code:
 nova/compute/manager.py

 @object_compat
 @messaging.expected_exceptions(exception.PreserveEphemeralNotSupported)
 @wrap_exception()
 @reverts_task_state
 @wrap_instance_event
 @wrap_instance_fault
 def rebuild_instance(self, context, instance, orig_image_ref, image_ref,
                      injected_files, new_pass, orig_sys_metadata,
                      bdms, recreate, on_shared_storage,
                      preserve_ephemeral=False):
     ..

     if on_shared_storage != self.driver.instance_on_disk(instance):
         raise exception.InvalidSharedStorage(
             _("Invalid state of instance files on shared storage"))

     if on_shared_storage:
         LOG.info(_LI('disk on shared storage, recreating using'
                      ' existing disk'))
     else:
         image_ref = orig_image_ref = instance.image_ref
         LOG.info(_LI("disk not on shared storage, rebuilding from:"
                      " '%s'"), str(image_ref))

     # NOTE(mriedem): On a recreate (evacuate), we need to update
     # the instance's host and node properties to reflect it's
     # destination node for the recreate.
     node_name = None
     try:
         compute_node = self._get_compute_info(context, self.host)
         node_name = compute_node.hypervisor_hostname
     except exception.ComputeHostNotFound:
         LOG.exception(_LE('Failed to get compute_info for %s'), self.host)
     finally:
         instance.host = self.host
         instance.node = node_name
         instance.save()

     if image_ref:
         image_meta = self.image_api.get(context, image_ref)
     else:
         image_meta = {}
     ..

Below is my image info:
+------------------------------+--------------------------------------+
| Property                     | Value                                |
+------------------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size         | 53687091200                          |
| created                      | 2016-09-20T08:15:21Z                 |
| id                           | 8b218b4d-74ff-44af-bc4c-c37fb1106b03 |
| metadata hw_disk_bus         | scsi                                 |
| metadata hw_qemu_guest_agent | yes                                  |
| metadata hw_scsi_model       | virtio-scsi                          |
| minDisk                      | 0                                    |
| minRam                       | 0                                    |
| name                         | zptest-20160920                      |
| progress                     | 100                                  |
| status                       | ACTIVE                               |
| updated                      | 2016-10-20T07:38:54Z                 |
+------------------------------+--------------------------------------+

For now, I fixed the problem by updating the code:

nova/compute/api.py

@check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED,
                                vm_states.ERROR])
def evacuate(self, context, instance, host, on_shared_storage,
             admin_password=None):
    """Running evacuate to target host.

    Checking vm compute host state, if the host not in expected_state,
    raising an exception.

    :param instance: The instance to evacuate
    :param host: Target host. if not set, the scheduler will pick up one
    :param on_shared_storage: True if instance files on shared storage
    :param admin_password: password to set on rebuilt instance

    """
    LOG.debug('vm evacuation scheduled', instance=instance)
    inst_host = instance.host
    service = objects.Service.get_by_compute_host(context, inst_host)
    if self.servicegroup_api.service_is_up(service):
        LOG.error(_LE('Instance compute service state on %s '
                      'expected to be down, but it was up.'), inst_host)
        raise exception.ComputeServiceInUse(host=inst_host)

    instance.task_state = task_states.REBUILDING
    instance.save(expected_task_state=[None])
    self._record_action_start(context, instance, instance_actions.EVACUATE)

    return self.compute_task_api.rebuild_instance(context,
                                                  instance=instance,
                                                  new_pass=admin_password,
                                                  injected_files=None,
                                                  image_ref=None,
                                                  orig_image_ref=None,
                                                  orig_sys_metadata=None,
                                                  bdms=None,
                                                  recreate=True,
                                                  on_shared_storage=on_shared_storage,
                                                  host=host)

That is, I pass the instance's actual image ref for image_ref instead of None.
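The mechanism can be sketched like this (a stand-alone illustration of the branch in rebuild_instance quoted above; `FakeImageAPI` is a test double here, not the real nova image client):

```python
def rebuild_image_meta(image_api, context, image_ref):
    # Mirrors the branch in rebuild_instance: with a real image_ref the
    # image meta (including properties such as hw_disk_bus=scsi) is
    # fetched; with image_ref=None it silently degenerates to {} and the
    # rebuilt guest loses its disk-bus hints -- hence "No bootable device".
    if image_ref:
        return image_api.get(context, image_ref)
    return {}


class FakeImageAPI:
    """Fake image service keyed by image id (illustration only)."""
    def __init__(self, images):
        self.images = images

    def get(self, context, image_ref):
        return self.images[image_ref]


api = FakeImageAPI({'img-1': {'properties': {'hw_disk_bus': 'scsi'}}})
with_ref = rebuild_image_meta(api, None, 'img-1')   # metadata preserved
without_ref = rebuild_image_meta(api, None, None)   # empty meta -> lost hints
```

With the proposed fix, evacuate passes the instance's stored image ref down to rebuild, so the first branch is taken and the scsi/virtio-scsi properties survive the evacuation.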

** Affects: nova
 Importance: Undecided
 

[Yahoo-eng-team] [Bug 1635162] [NEW] log.error use _LE of i18n

2016-10-20 Thread guoshan
Public bug reported:

log.error should be translated with _LE
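For illustration, the expected pattern looks like this (a minimal sketch with a stand-in for keystone's i18n marker; real keystone code would use "from keystone.i18n import _LE", and the role-lookup function here is hypothetical):

```python
import logging


def _LE(msg):
    # Stand-in for keystone.i18n._LE (assumption: the real marker returns
    # a lazily-translated Message object; a plain passthrough is enough
    # to show the calling convention)
    return msg


LOG = logging.getLogger(__name__)


def report_missing_role(role_id):
    # Before: LOG.error('Could not find role: %s', role_id)
    # After: the message is wrapped with _LE so it is collected into the
    # error-level translation catalog
    msg = _LE('Could not find role: %s')
    LOG.error(msg, role_id)
    return msg % role_id
```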

** Affects: keystone
 Importance: Undecided
 Assignee: guoshan (guoshan)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => guoshan (guoshan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1635162

Title:
  log.error use _LE of i18n

Status in OpenStack Identity (keystone):
  New

Bug description:
  log.error should be translated with _LE

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1635162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635156] [NEW] fwaas v2 hit driver internal error when restart l3 agent

2016-10-20 Thread zhaobo
Public bug reported:

There is a router serviced by an l3 agent, and a firewall group is located
on a port serviced by that router, such as an interface of an internal
subnet. When I restart the l3 agent, it syncs the server-side router info
and refreshes all related router info, including the firewall iptables
rules. During this period it hits a driver internal error.

The trace looks like:
2016-10-20 15:05:11.587 DEBUG neutron.agent.linux.iptables_manager [-] 
IPTablesManager.apply completed with success. 16 iptables commands were issued 
from (pid=81933) _apply_synchronized 
/opt/stack/neutron/neutron/agent/linux/iptables_manager.py:533
2016-10-20 15:05:11.588 DEBUG oslo_concurrency.lockutils [-] Releasing 
semaphore "iptables-qrouter-cc5ab5a3-ef25-4496-87c7-5063cd167ce6" from 
(pid=81933) lock 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
2016-10-20 15:05:11.589 DEBUG 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 [-] 
Process router update, router_id: cc5ab5a3-ef25-4496-87c7-5063cd167ce6  tenant: 
488da3aab0ff45df9e85e17e7f89fedd. from (pid=81933) _process_router_update 
/opt/stack/neutron-fwaas/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent_v2.py:239
2016-10-20 15:05:11.589 DEBUG neutron.agent.linux.utils [-] Running command 
(rootwrap daemon): ['ip', 'netns', 'list'] from (pid=81933) 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
2016-10-20 15:05:11.593 DEBUG neutron.agent.linux.utils [-] Exit code: 0 from 
(pid=81933) execute /opt/stack/neutron/neutron/agent/linux/utils.py:141
2016-10-20 15:05:14.358 DEBUG oslo_policy._cache_handler 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Reloading cached file 
/etc/neutron/policy.json from (pid=81933) read_cached_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/_cache_handler.py:38
2016-10-20 15:05:14.470 DEBUG oslo_policy.policy 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Reloaded policy file: 
/etc/neutron/policy.json from (pid=81933) _load_policy_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:584
2016-10-20 15:05:14.471 DEBUG oslo_policy._cache_handler 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Reloading cached file 
/etc/neutron/policy.d/bgpvpn.conf from (pid=81933) read_cached_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/_cache_handler.py:38
2016-10-20 15:05:14.484 DEBUG oslo_policy.policy 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Reloaded policy file: 
/etc/neutron/policy.d/bgpvpn.conf from (pid=81933) _load_policy_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:584
2016-10-20 15:05:14.484 DEBUG oslo_policy._cache_handler 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Reloading cached file 
/etc/neutron/policy.d/dynamic_routing.conf from (pid=81933) read_cached_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/_cache_handler.py:38
2016-10-20 15:05:14.491 DEBUG oslo_policy.policy 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Reloaded policy file: 
/etc/neutron/policy.d/dynamic_routing.conf from (pid=81933) _load_policy_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:584
2016-10-20 15:05:14.491 DEBUG oslo_policy._cache_handler 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Reloading cached file 
/etc/neutron/policy.d/neutron-fwaas.json from (pid=81933) read_cached_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/_cache_handler.py:38
2016-10-20 15:05:14.502 DEBUG oslo_policy.policy 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Reloaded policy file: 
/etc/neutron/policy.d/neutron-fwaas.json from (pid=81933) _load_policy_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:584
2016-10-20 15:05:14.503 DEBUG 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] Fetch firewall groups from plugin from 
(pid=81933) get_firewall_groups_for_project 
/opt/stack/neutron-fwaas/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent_v2.py:43
2016-10-20 15:05:14.504 DEBUG oslo_messaging._drivers.amqpdriver 
[req-646cbdc6-89d9-4af2-9779-3ed1042154f9 None 
488da3aab0ff45df9e85e17e7f89fedd] CALL msg_id: 2d47fe084441434db22bbc5e21861abc 
exchange 'neutron' topic 'q-firewall-plugin' from (pid=81933) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:448
2016-10-20 15:05:14.505 DEBUG neutron.agent.linux.utils [-] Exit code: 0 from 
(pid=81933) execute /opt/stack/neutron/neutron/agent/linux/utils.py:141
2016-10-20 15:05:14.505 DEBUG neutron.agent.linux.utils [-] Exit code: 0 from 
(pid=81933) execute /opt/stack/neutron/neutron/agent/linux/ut

[Yahoo-eng-team] [Bug 1629396] Re: create images requires admin role ignoring policy.json

2016-10-20 Thread wangxiyuan
Setting this to Invalid due to no response for a long time and because it
can't be reproduced. Please restart if there is any update.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1629396

Title:
  create images requires admin role ignoring policy.json

Status in Glance:
  Invalid

Bug description:
  Setup a default OpenStack environment using keystone's sample_data.sh
  This gives user "glance" the "_member_" role for project "service".
  Couple this with a policy.json containing the following:

{
  "context_is_admin":  "role:admin",
  "default": "",

  "add_image": "",
  "delete_image": "",
  .
  .
}

  
  If you attempt to create a new image as "glance" user it fails with following 
error:

 403 Forbidden: You are not authorized to complete this action.
  (HTTP 403)

  Delving into the code you can see is_admin is enforced:

   api/authorization.py:new_image():

       if not self.context.is_admin:
           if owner is None or owner != self.context.owner:
               message = _("You are not permitted to create images "
                           "owned by '%s'.")
               raise exception.Forbidden(message % owner)

  
  Thus indicating that the user creating images must have "admin" role for this 
project.

  However this same user can successfully delete images, as delete uses
  policy enforcement only and adheres to whatever is defined within
  policy.json:

    api/policy.py:delete():

        def delete(self):
            self.policy.enforce(self.context, 'delete_image', self.target)
            return self.image.delete()

  
  This seems inconsistent: image creation should probably use policy
enforcement rather than a hard-coded requirement for the admin role.
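A sketch of what a policy-driven check could look like, mirroring how delete() already works (a hypothetical variant with a toy enforcer, not the real glance/oslo.policy API):

```python
class Forbidden(Exception):
    """Toy stand-in for glance's Forbidden exception."""


class ToyEnforcer:
    """Toy policy enforcer: each rule maps to True (allow) or False (deny);
    unknown rules default to allow, loosely mimicking an empty policy rule."""
    def __init__(self, rules):
        self.rules = rules

    def enforce(self, context, action, target):
        if not self.rules.get(action, True):
            raise Forbidden(action)


def new_image(policy, context, target):
    # Defer to the 'add_image' rule from policy.json instead of the
    # hard-coded context.is_admin check, just as delete() already defers
    # to 'delete_image'.
    policy.enforce(context, 'add_image', target)
    return 'image created'
```

With `"add_image": ""` in policy.json this would allow any authenticated user, which matches the behaviour the reporter expects.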

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1629396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635131] [NEW] Value for option url is not valid: invalid URI (inappropriate validation of rfc3986)

2016-10-20 Thread Jens Offenbach
Public bug reported:

I have a freshly installed OpenStack Newton environment based on Ubuntu
16.04. I am facing a regression in the Nova module regarding URI
parsing.

$ apt list --installed | grep -E 'nova|rfc'
nova-api/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
nova-common/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed,automatic]
nova-conductor/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
nova-consoleauth/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
nova-novncproxy/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
nova-scheduler/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
python-nova/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed,automatic]
python-novaclient/xenial-updates,xenial-updates,now 2:6.0.0-0ubuntu1~cloud0 all 
[installed,automatic]
python-rfc3986/xenial,xenial,now 0.2.0-2 all [installed,automatic]

This is an excerpt from my nova.conf:
[neutron]
auth_type = password
auth_uri = http://os-identity:5000
auth_url = http://os-identity:35357
metadata_proxy_shared_secret = XYZ
password = XYZ
project_domain_name = default
project_name = service
region_name = RegionOne
service_metadata_proxy = true
url = http://os-network:9696
user_domain_name = default
username = neutron

The following error is stated in the log:
2016-10-20 08:28:40.251 19479 CRITICAL nova 
[req-26ab01e7-72b0-4792-a8a4-56454a4c390c - - - - -] ConfigFileValueError: 
Value for option url is not valid: invalid URI: 'http://os-network:9696'
2016-10-20 08:28:40.251 19479 ERROR nova Traceback (most recent call last):
2016-10-20 08:28:40.251 19479 ERROR nova   File "/usr/bin/nova-api", line 10, 
in <module>
2016-10-20 08:28:40.251 19479 ERROR nova sys.exit(main())
2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api.py", line 73, in main
2016-10-20 08:28:40.251 19479 ERROR nova launcher.wait()
2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
2016-10-20 08:28:40.251 19479 ERROR nova self.conf.log_opt_values(LOG, 
logging.DEBUG)
2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
2016-10-20 08:28:40.251 19479 ERROR nova _sanitize(opt, getattr(group_attr, 
opt_name)))
2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
2016-10-20 08:28:40.251 19479 ERROR nova return self._conf._get(name, 
self._group)
2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
2016-10-20 08:28:40.251 19479 ERROR nova value = self._do_get(name, group, 
namespace)
2016-10-20 08:28:40.251 19479 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
2016-10-20 08:28:40.251 19479 ERROR nova % (opt.name, str(ve)))
2016-10-20 08:28:40.251 19479 ERROR nova ConfigFileValueError: Value for option 
url is not valid: invalid URI: 'http://os-network:9696'

The same problem was reported here:
https://bugs.launchpad.net/kolla/+bug/1629729

As stated there, python-rfc3986 versions older than 0.3.0 cannot handle
URIs whose host names contain '-' characters.

A current workaround is to use e.g. the IP address instead of a hostname
or alias that has a hyphen in its name.

The same problem exists in nova-compute on my compute node using:
[vnc]
enabled = true
novncproxy_base_url = https://os-cloud.mycompany.com:6080/vnc_auto.html
vncserver_listen = 0.0.0.0

The setting causes the same invalid uri exception as stated above.

We need a new version of 'python-rfc3986'.
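Until the packaged python-rfc3986 is new enough, the hostname can be rewritten to its resolved address before it lands in nova.conf (a sketch only; the resolver is injectable purely so the example is self-contained, and the addresses shown are made up):

```python
import socket
from urllib.parse import urlsplit, urlunsplit


def dehyphenate_host(url, resolver=socket.gethostbyname):
    # Replace the host part of the URL with its IP address, preserving
    # scheme, port, and path, so python-rfc3986 < 0.3.0 no longer sees a
    # '-' character in the host name.
    parts = urlsplit(url)
    ip = resolver(parts.hostname)
    netloc = ip if parts.port is None else '%s:%d' % (ip, parts.port)
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query,
                       parts.fragment))


# Usage with a stubbed resolver (10.0.0.5 is a made-up address):
fixed = dehyphenate_host('http://os-network:9696',
                         resolver=lambda h: '10.0.0.5')
# -> 'http://10.0.0.5:9696'
```

The same rewrite applies to novncproxy_base_url; the real fix remains upgrading python-rfc3986 to >= 0.3.0.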

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635131

Title:
  Value for option url is not valid: invalid URI (inappropriate
  validation of rfc3986)

Status in OpenStack Compute (nova):
  New

Bug description:
  I have a freshly installed OpenStack Newton environment based on Ubuntu
  16.04. I am facing a regression in the Nova module regarding URI
  parsing.

  $ apt list --installed | grep -E 'nova|rfc'
  nova-api/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
  nova-common/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed,automatic]
  nova-conductor/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 all 
[installed]
  nova-consoleauth/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 
all [installed]
  nova-novncproxy/xenial-updates,xenial-updates,now 2:14.0.1-0ubuntu1~cloud0 
all [installed]
  nova-scheduler/xenial-updates,xenial-upd