[Yahoo-eng-team] [Bug 1649488] [NEW] Duplicated revises_on_change in qos models

2016-12-12 Thread Hong Hui Xiao
Public bug reported:

Both 0e51574b2fb299eb42d6f5333e68f70244b08d50 and
3b610a1debdfb99def758406b1604aa3273edeea add revises_on_change to the qos db
models, resulting in a duplicate definition in

https://github.com/openstack/neutron/blob/09bc8a724e42fed0f527b56d38c5720167031764/neutron/db/qos/models.py#L49-L75
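
For illustration, a minimal standalone sketch of the problem (a hypothetical
class, not the actual neutron model): Python silently accepts a class body
that assigns the same attribute twice, so the duplication raises no error and
only the last assignment survives.

    class QosPolicy(object):
        # added by the first patch
        revises_on_change = ('standardattributes',)
        # re-added by the second patch; silently shadows the first
        revises_on_change = ('standardattributes',)

    print(QosPolicy.revises_on_change)  # only one definition takes effect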

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649488

Title:
  Duplicated revises_on_change in qos models

Status in neutron:
  New

Bug description:
  Both 0e51574b2fb299eb42d6f5333e68f70244b08d50 and
  3b610a1debdfb99def758406b1604aa3273edeea add revises_on_change to the qos
  db models, resulting in a duplicate definition in

  https://github.com/openstack/neutron/blob/09bc8a724e42fed0f527b56d38c5720167031764/neutron/db/qos/models.py#L49-L75

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649466] [NEW] contrail analytics api status stuck in "contrail-analytics-api initializing (UvePartitions:UVE-Aggregation[None] connection down)"

2016-12-12 Thread Soumil Kulkarni
Public bug reported:

Sorry

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649466

Title:
  contrail analytics api status stuck in "contrail-analytics-api
  initializing (UvePartitions:UVE-Aggregation[None] connection down)"

Status in OpenStack Identity (keystone):
  New

Bug description:
  Sorry

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1649466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649092] Re: Cleaning snat namespace didn't unplug external device

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409528
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fd3eebbec4ae95019a7135679248b753c391504e
Submitter: Jenkins
Branch: master

commit fd3eebbec4ae95019a7135679248b753c391504e
Author: Quan Tian 
Date:   Sun Dec 11 02:14:44 2016 +0800

Unplug external device when delete snat namespace

[1] allows us to identify a stale snat namespace and delete it when the
gateway is cleared while the agent restarts. But SnatNamespace.delete
unplugs only the 'sg-XXX' devices, leaving a stale port in the OVS bridge.

This patch identifies the stale external device and unplugs it.

[1] https://review.openstack.org/#/c/326729/

Change-Id: I27fff32aeeecdc599a578637f390dc1d73f0171b
Closes-Bug: #1649092


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649092

Title:
  Cleaning snat namespace didn't unplug external device

Status in neutron:
  Fix Released

Bug description:
  [1] allows us to identify a stale snat namespace and delete it when the
  gateway is cleared while the agent restarts.

  But SnatNamespace.delete unplugs only the 'sg-XXX' devices, leaving a
  stale port in the OVS bridge.

  [1] https://review.openstack.org/#/c/326729/

  
  How to reproduce:

  - create a distributed router, set its router gateway, bind a subnet to it
  - stop the l3 agent hosting the router
  - clear the router’s gateway
  - start the stopped l3 agent; the device "sg-" and the snat namespace will
  be cleaned up, but the stale device "qg-" will remain.

  $ sudo ovs-vsctl list port qg-749cbaab-13
  _uuid   : 7f116611-4885-4813-a938-a9aebf2723ac
  bond_downdelay  : 0
  bond_fake_iface : false
  bond_mode   : []
  bond_updelay: 0
  external_ids: {}
  fake_bridge : false
  interfaces  : [81e93ffd-ce76-4c75-8522-5ff5e9a4c1c0]
  lacp: []
  mac : []
  name: "qg-749cbaab-13"
  other_config: {net_uuid="eaad0784-47dd-4263-82e1-419e7f3d8e3f", network_type=vxlan, physical_network=None, segmentation_id="101", tag="7"}
  qos : []
  statistics  : {}
  status  : {}
  tag : 4095
  trunks  : []
  vlan_mode   : []
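
  Until the fix lands, the stale port can be removed by hand (a sketch;
  assumes the integration bridge br-int and the port name from the listing
  above):

    $ sudo ovs-vsctl del-port br-int qg-749cbaab-13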

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512666] Re: [RFE] Allow for per-subnet/network dhcp options

2016-12-12 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512666

Title:
  [RFE] Allow for per-subnet/network dhcp options

Status in neutron:
  Expired

Bug description:
  [Existing Problem]
  Neutron currently does not allow DHCP options to be set so that they affect
  any/all MAC addresses in a subnet/network; DHCP options can only be set per
  port, i.e. per MAC address. Achieving this functionality today requires
  manually setting up a DHCP server outside of neutron's control.

  This is currently a factor complicating the setup of the Ironic Inspector,
  which needs DHCP options that are not tied to specific MAC addresses in
  order to inspect hardware whose MAC addresses are not yet known; we are
  running our own dnsmasq instance to provide the required functionality.

  [Solution]
  Provide the ability to set extra-dhcp-opt on a subnet or network in addition
  to ports. Options set on a network would apply to every machine that uses
  DHCP inside that network; if a port also has extra-dhcp-opt set, its
  conflicting options would take priority over the network/subnet-level
  options for that specific MAC address.
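
  For comparison, the port-level option exists today; the subnet/network
  forms below are purely hypothetical sketches of the proposed extension:

    # works today (per port):
    neutron port-update PORT --extra-dhcp-opt opt_name=bootfile-name,opt_value=pxelinux.0
    # proposed, not implemented:
    # neutron subnet-update SUBNET --extra-dhcp-opt opt_name=bootfile-name,opt_value=pxelinux.0
    # neutron net-update NET --extra-dhcp-opt opt_name=bootfile-name,opt_value=pxelinux.0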

  [Related]
  https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1512666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594529] Re: VM creation failure due to Nova hugepage assumptions

2016-12-12 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1594529

Title:
  VM creation failure due to Nova hugepage assumptions

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description:

  In Liberty and Mitaka, Nova assumes that it has exclusive access to
  the huge pages on the compute node. It keeps track of the total pages
  per NUMA node and of the number of pages used by Nova VMs on each NUMA
  node. This is done for the three supported huge page sizes.

  However, if other third-party processes consume huge pages, there will
  be a discrepancy between the pages actually available and what Nova
  thinks is available. As a result, it is possible (depending on the
  number of pages and the VM size) for Nova to believe it has enough
  pages when it does not. The create then fails, with QEMU reporting
  insufficient memory, for example.

  
  Steps to reproduce:

  1. Compute with 32768 2MB pages available, giving 16384 per NUMA node with 
two nodes.
  2. Third party process that consumes 256 pages per NUMA node.
  3. Create 15 small flavor (2GB = 1024 pages) VMs.
  4. Create another small flavor VM.

  Expected Result:

  That the 16th VM would be created without an error, using huge pages
  on the second NUMA node (and that more VMs could follow).

  Actual Result:

  After step 3, Nova thinks there are 1024 pages available on NUMA node
  0, but the compute host shows only 768 pages available. The scheduler
  thinks there is space for one more VM, so it passes the filter and the
  creation commences. QEMU then fails, indicating that there is not
  enough memory.

  In addition, there are 16128 pages available on NUMA node 1, but Nova
  will not attempt using them, as it thinks there is still memory
  available on NUMA node 0.

  In my case, I had multiple compute hosts and ended up with a "No hosts
  available" error, as it fails on each host when trying NUMA node 0.
  If, at step 4, one creates a medium flavor VM, it will succeed, as
  Nova will not see enough pages on NUMA node 0, and will try NUMA node
  1, which has ample space.

  Commentary: Nova checks total huge pages, but not available huge
  pages.
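
  A worked check of the arithmetic (assuming all fifteen VMs land on NUMA
  node 0):

    total = 16384            # 2MB pages on NUMA node 0
    third_party = 256        # pages consumed outside Nova's accounting
    used_by_vms = 15 * 1024  # fifteen small-flavor (2GB) VMs

    print(total - used_by_vms)                # 1024: Nova's view, filter passes
    print(total - third_party - used_by_vms)  #  768: reality, QEMU needs 1024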

  Note: A feature was added to master (for Newton) under bug 1543149 that
  provides a config-based mechanism to reserve huge pages for third-party
  applications. However, the Nova team indicated that this change cannot
  be backported to Liberty.

  Environment:

  Liberty release (12.0.3), with LB, neutron networking, libvirt 1.2.17,
  API QEMU 1.2.17, QEMU 2.3.0.

  Config:

  nova flavor-key m1.small set hw:numa_nodes=1
  nova flavor-key m1.small set hw:mem_page_size=2048

  network, subnet, and standard VM create commands.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1594529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624005] Re: The Update Flavor Metadata does not work properly

2016-12-12 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624005

Title:
  The Update Flavor Metadata does not work properly

Status in OpenStack Dashboard (Horizon):
  Expired
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The flavor metadata changes are not saved correctly.

  Steps:
  1) Create a new flavor - e.g.: in console: '$openstack flavor create 
new_flavor --id 99 --ram 512 --disk 2 --vcpus 4' or in dashboard
  2) Update Metadata  - e.g.: CIM Processor Allocation Setting -> Instruction 
Set Extension -> select: ARM:DSP and ARM:DSP and ARM:NEON
  3) Save
  4) Update Metadata - remove the CIM_PASD_InstructionSetExtensionName from 
Existing Metadata box
  5) Save

  Result:
  The CIM_PASD_InstructionSetExtensionName metadata is still present in
  extra_specs (see: $ nova flavor-show 99, or the dashboard)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630141] Re: The flavor metadata changes are not saved correctly

2016-12-12 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1630141

Title:
  The flavor metadata changes are not saved correctly

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  I created a new flavor and changed its metadata twice in the dashboard
  (Update Metadata, e.g.: CIM Processor Allocation Setting -> Instruction Set
  Extension -> ARM:DSP and ARM:DSP and ARM:NEON; add and remove).
  When I try to change these settings again, the changes are not saved
  correctly.

  Steps:
  1) Create a new flavor
  2) Update Metadata
  3) Add new Existing Metadata (e.g.: CIM Processor Allocation Setting -> 
Instruction Set Extension -> select: ARM:DSP and ARM:DSP and ARM:NEON) and save 
it
  4) Update Metadata again
  5) Remove the CIM Processor Allocation Setting Existing Metadata and save it

  Results: the Existing Metadata is not changed (ARM:DSP, ARM:DSP and
  ARM:NEON remain "active" in the Existing Metadata box)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1630141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518430] Re: liberty: ~busy loop on epoll_wait being called with zero timeout

2016-12-12 Thread Mathew Hodson
** Project changed: nova => ubuntu-translations

** No longer affects: ubuntu-translations

** No longer affects: nova (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518430

Title:
  liberty: ~busy loop on epoll_wait being called with zero timeout

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive kilo series:
  Fix Committed
Status in Ubuntu Cloud Archive liberty series:
  Fix Committed
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in oslo.messaging:
  Fix Released
Status in python-oslo.messaging package in Ubuntu:
  Fix Released
Status in python-oslo.messaging source package in Xenial:
  New
Status in python-oslo.messaging source package in Yakkety:
  New
Status in python-oslo.messaging source package in Zesty:
  Fix Released

Bug description:
  Context: openstack juju/maas deploy using the 1510 charms release
  on trusty, with:
    openstack-origin: "cloud:trusty-liberty"
    source: "cloud:trusty-updates/liberty"

  * Several openstack nova- and neutron- services (at least nova-compute,
  neutron-server, nova-conductor, neutron-openvswitch-agent and
  neutron-vpn-agent) show near-busy looping on epoll_wait() calls, most
  frequently with a zero timeout set.
  - nova-compute (chosen because it is single-process) strace and ltrace
    captures: http://paste.ubuntu.com/13371248/ (ltrace, strace)

  As comparison, this is how it looks on a kilo deploy:
  - http://paste.ubuntu.com/13371635/

  * 'top' sample from a nova-cloud-controller unit from
     this completely idle stack:
    http://paste.ubuntu.com/13371809/

  FYI *not* seeing this behavior on keystone, glance, cinder,
  ceilometer-api.

  As this issue is present in several components, it likely comes from a
  common library (oslo concurrency?); fyi I filed the bug against nova
  itself as a starting point for debugging.

  Note: The description in the following bug gives a good overview of
  the issue and points to a possible fix for oslo.messaging:
  https://bugs.launchpad.net/mos/+bug/1380220
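
  A quick way to observe the busy loop (a sketch; assumes a single
  nova-compute process):

    # count epoll_wait calls over ~10 seconds
    sudo timeout 10 strace -c -e trace=epoll_wait -p $(pgrep -f nova-compute)
    # watch for the zero-timeout calls
    sudo strace -e trace=epoll_wait -p $(pgrep -f nova-compute) 2>&1 | grep ', 0)'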

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1518430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649440] [NEW] [vmware] vSphere 6.5 has added some new OS types

2016-12-12 Thread xhzhf
Public bug reported:

Description
===
vSphere 6.5 has added some new OS types. Nova should support vSphere 6.5 and
add the corresponding constants.

Steps to reproduce
==
None

Expected result
===
None

Actual result
=
None

Environment
===
vsphere 6.5

Logs & Configs
==
None

** Affects: nova
 Importance: Undecided
 Assignee: xhzhf (guoyongxhzhf)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => xhzhf (guoyongxhzhf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649440

Title:
  [vmware] vSphere 6.5 has added some new OS types

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  vSphere 6.5 has added some new OS types. Nova should support vSphere 6.5
  and add the corresponding constants.

  Steps to reproduce
  ==
  None

  Expected result
  ===
  None

  Actual result
  =
  None

  Environment
  ===
  vsphere 6.5

  Logs & Configs
  ==
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614537] Re: Neutron Objects do not override the UUIDField to actually validate UUIDs

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/393150
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bb78621a72bb0f54205fac57e19d4679143bc3f3
Submitter: Jenkins
Branch: master

commit bb78621a72bb0f54205fac57e19d4679143bc3f3
Author: Zainub Wahid 
Date:   Thu Nov 3 12:57:25 2016 +0500

Make UUIDField actually validate UUIDs

Instead of using oslo.versionedobjects UUID type, use a custom UUIDField
class located in common_types that will actually validate passed values
for UUID-ness.

Closes-Bug: #1614537
Change-Id: I20b24ee57c521b1c68977c2ff7ae56b56875dd64


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614537

Title:
  Neutron Objects do not override the UUIDField to actually validate
  UUIDs

Status in neutron:
  Fix Released

Bug description:
  Oslo Versioned Objects' implementation of the UUID Field does not
  actually validate anything.

  It is a wrapper around a string type.

  Projects are advised that to actually validate UUIDs, they need to
  override the field in their custom fields. [1]

  Leaving it non-validating can cause issues when the field later becomes
  validating, or when callers assume that it is being validated.

  [1] http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField
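
  For reference, a minimal sketch of a validating field built on
  oslo.versionedobjects (illustrative; the merged neutron code lives in
  common_types, as described in the commit above):

    import uuid

    from oslo_versionedobjects import fields as obj_fields

    class UUID(obj_fields.FieldType):
        def coerce(self, obj, attr, value):
            uuid.UUID(str(value))  # raises ValueError on non-UUID input
            return str(value)

    class UUIDField(obj_fields.AutoTypedField):
        AUTO_TYPE = UUID()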

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649079] Re: TypeError seen on gate-neutron-lib-api-ref

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409515
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=322ae30246d21d154b44b90e12ecf5506ae4ed68
Submitter: Jenkins
Branch: master

commit 322ae30246d21d154b44b90e12ecf5506ae4ed68
Author: YAMAMOTO Takashi 
Date:   Sun Dec 11 21:29:21 2016 +0900

Use constraints for api-ref target

It should be ok to use zuul-cloner these days. [1]
This would avoid docutils 0.13.1, which seems to cause the following
TypeError.

Exception occurred:
  File "/home/jenkins/workspace/gate-neutron-lib-api-ref/.tox/api-ref/local/lib/python2.7/site-packages/docutils/writers/html4css1/__init__.py", line 288, in write_colspecs
    width += node['colwidth']
TypeError: unsupported operand type(s) for +=: 'int' and 'str'

[1] I8f45a53429b9fcbf3689a268f096afdf5f32f461

Closes-Bug: #1649079
Change-Id: Id75e88f5031aeab21b2158c721881bf2da4a0d28


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649079

Title:
  TypeError seen on gate-neutron-lib-api-ref

Status in neutron:
  Fix Released

Bug description:
  e.g.
  http://logs.openstack.org/74/407974/4/check/gate-neutron-lib-api-ref/fd1da17/console.html.gz

  2016-12-10 02:56:27.259327 | Running Sphinx v1.3.6
  2016-12-10 02:56:27.259419 | making output directory...
  2016-12-10 02:56:27.313299 | loading pickled environment... not yet created
  2016-12-10 02:56:27.567368 | building [mo]: targets for 0 po files that are 
out of date
  2016-12-10 02:56:27.568804 | building [html]: targets for 2 source files that 
are out of date
  2016-12-10 02:56:27.576707 | updating environment: 2 added, 0 changed, 0 
removed
  2016-12-10 02:56:27.576844 | reading sources... [ 50%] index
  2016-12-10 02:56:27.627771 | reading sources... [100%] v2/index
  2016-12-10 02:56:44.174666 | 
  2016-12-10 02:56:44.175238 | looking for now-outdated files... none found
  2016-12-10 02:56:44.450451 | pickling environment... done
  2016-12-10 02:56:44.450539 | checking consistency... done
  2016-12-10 02:56:44.453109 | preparing documents... done
  2016-12-10 02:56:44.453170 | writing output... [ 50%] index
  2016-12-10 02:56:44.602761 | writing output... [100%] v2/index
  2016-12-10 02:56:46.870245 | 
  2016-12-10 02:56:46.870340 | Exception occurred:
  2016-12-10 02:56:46.872196 |   File "/home/jenkins/workspace/gate-neutron-lib-api-ref/.tox/api-ref/local/lib/python2.7/site-packages/docutils/writers/html4css1/__init__.py", line 288, in write_colspecs
  2016-12-10 02:56:46.872238 |     width += node['colwidth']
  2016-12-10 02:56:46.872272 | TypeError: unsupported operand type(s) for +=: 'int' and 'str'
  2016-12-10 02:56:46.875121 | The full traceback has been saved in /tmp/tmp.MGddJBZpRU/sphinx-err-cwC4qg.log, if you want to report the issue to the developers.
  2016-12-10 02:56:46.875177 | Please also report this if it was a user error, so that a better error message can be provided next time.
  2016-12-10 02:56:46.875219 | A bug report can be filed in the tracker at . Thanks!
  2016-12-10 02:56:47.306947 | ERROR: InvocationError: '/home/jenkins/workspace/gate-neutron-lib-api-ref/.tox/api-ref/bin/sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html'
  2016-12-10 02:56:47.307094 | ___ summary
  2016-12-10 02:56:47.307128 | ERROR:   api-ref: commands failed
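
  A local workaround before the fix landed was simply to keep docutils below
  0.13 in the api-ref environment (a sketch; the merged fix instead applies
  the project's upper-constraints via zuul-cloner):

    pip install 'docutils<0.13'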

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615114] Re: Volume size should change when user selects a different image on create volume modal

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/365154
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=cd2a7907be0f2762ea5dca4a13171d66429519c0
Submitter: Jenkins
Branch: master

commit cd2a7907be0f2762ea5dca4a13171d66429519c0
Author: Ying Zuo 
Date:   Fri Sep 2 14:49:41 2016 -0700

The minimum volume size should be pre-populated

On the create volume modal, the minimum volume size should be
pre-populated when user selects a different image.

Closes-bug: #1615114

Change-Id: I6a0feb376250aacb00270aa5db879dd4a98d6977


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615114

Title:
  Volume size should change when user selects a different image on
  create volume modal

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps to reproduce:
  1. Create two images with different sizes
  2. Go to project -> volumes panel
  3. Click create volume
  4. Select "image" as the volume source and choose the image with bigger size
  5. Check that the pre-populated value in the Size field is the minimum
  required size of the volume, based on the selected image
  6. Change the image to the one with smaller size
  7. Note that the Size field does not change

  
  The size field should pre-populate the minimum required size of the volume 
based on the selected image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648656] Re: Angular template cache preloading makes developers cry

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/408858
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=d219f6078b315048b770e1f0c420dce6ad4f6ec9
Submitter: Jenkins
Branch: master

commit d219f6078b315048b770e1f0c420dce6ad4f6ec9
Author: Richard Jones 
Date:   Fri Dec 9 11:12:45 2016 +1100

Turn off angular template cache preloading when DEBUG=True

This allows easier reloading of changed HTML templates when
developing them.

Change-Id: If1acaf2e03c21e13652e6c8ff0e4984b77ea8716
Fixes-Bug: 1648656


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648656

Title:
  Angular template cache preloading makes developers cry

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  It is difficult to convince angular to reload a changed HTML file with
  preloaded template caching turned on. It should be turned off when
  DEBUG is on.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649417] [NEW] RFE: Security group rule using address set

2016-12-12 Thread Han Zhou
Public bug reported:

Today if we want to create a rule in security group to allow access
to/from a set of remote IPs, there are 2 ways:

1. If the set of remote IPs belongs to a group of Neutron ports, we can
attach those remote Neutron ports to a Neutron security group and use
the "remote group" field in security group rule.

2. If the set of remote IPs can't be mapped to Neutron ports (they can
be IPs from an external or legacy networking system), we will have to
white-list each individual IP (if they cannot be summarized into CIDRs)
in every rule that references that set of IPs in the remote_ip_prefix
field.

For 2, if the number of remote IPs is huge, the Neutron security group
implementation becomes inefficient and runs into scaling issues. Now that
some back-end SDN systems (e.g. OVN) support the concept of an "address
set", it would be good to have the same model in Neutron security groups,
so that the "address set" capability can be used directly for external
IPs.

It could be a simple extension to Neutron's security group extension:
support an "Address Set" object and reference it in Neutron security
group rules.
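
A hypothetical shape for such an extension (illustrative only; none of these
commands exist today):

  neutron address-set-create monitoring-hosts \
    --addresses 203.0.113.10,203.0.113.11
  neutron security-group-rule-create SG --direction ingress --protocol tcp \
    --port-range-min 9100 --port-range-max 9100 \
    --remote-address-set monitoring-hosts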

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649417

Title:
  RFE: Security group rule using address set

Status in neutron:
  New

Bug description:
  Today if we want to create a rule in security group to allow access
  to/from a set of remote IPs, there are 2 ways:

  1. If the set of remote IPs belongs to a group of Neutron ports, we
  can attach those remote Neutron ports to a Neutron security group and
  use the "remote group" field in security group rule.

  2. If the set of remote IPs can't be mapped to Neutron ports (they can
  be IPs from an external or legacy networking system), we will have to
  white-list each individual IP (if they cannot be summarized into CIDRs)
  in every rule that references that set of IPs in the remote_ip_prefix
  field.

  For 2, if the number of remote IPs is huge, the Neutron security group
  implementation becomes inefficient and runs into scaling issues. Now
  that some back-end SDN systems (e.g. OVN) support the concept of an
  "address set", it would be good to have the same model in Neutron
  security groups, so that the "address set" capability can be used
  directly for external IPs.

  It could be a simple extension to Neutron's security group extension:
  support an "Address Set" object and reference it in Neutron security
  group rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649412] [NEW] user to nonlocal_user should be a 1 to 1 table relationship

2016-12-12 Thread Ron De Rose
Public bug reported:

The 'nonlocal_user' table shadows LDAP or custom identity driver users.
Currently, the 'user' to 'nonlocal_user' table relationship is 1 to
many. However, this is inaccurate. For example, there shouldn't be a
user with multiple usernames from a single domain; keystone doesn't
support that. A user belongs to a domain and has a single username.
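
A minimal sketch of the intended one-to-one mapping (illustrative, not
keystone's actual model code): a unique foreign key plus uselist=False turns
the relationship into 1 to 1.

  from sqlalchemy import Column, ForeignKey, String
  from sqlalchemy.orm import declarative_base, relationship

  Base = declarative_base()

  class User(Base):
      __tablename__ = 'user'
      id = Column(String(64), primary_key=True)
      nonlocal_user = relationship('NonLocalUser', uselist=False,
                                   back_populates='user')

  class NonLocalUser(Base):
      __tablename__ = 'nonlocal_user'
      domain_id = Column(String(64), primary_key=True)
      name = Column(String(255), primary_key=True)
      user_id = Column(String(64), ForeignKey('user.id'), unique=True)
      user = relationship('User', back_populates='nonlocal_user')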

** Affects: keystone
 Importance: Low
 Assignee: Ron De Rose (ronald-de-rose)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Ron De Rose (ronald-de-rose)

** Changed in: keystone
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649412

Title:
  user to nonlocal_user should be a 1 to 1 table relationship

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  The 'nonlocal_user' table shadows LDAP or custom identity driver
  users. Currently, the 'user' to 'nonlocal_user' table relationship is
  1 to many. However, this is inaccurate. For example, there shouldn't
  be a user with multiple usernames from a single domain; keystone
  doesn't support that. A user belongs to a domain and has a single
  username.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1649412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649341] Re: Undercloud upgrade fails with "Cell mappings are not created, but required for Ocata"

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409876
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6764ff0db2aaa4dda0d804c7db886d3b64226674
Submitter: Jenkins
Branch: master

commit 6764ff0db2aaa4dda0d804c7db886d3b64226674
Author: Matt Riedemann 
Date:   Mon Dec 12 12:52:27 2016 -0500

Fix instructions for running simple_cell_setup

Change ff6b9998bb977421a5cbc94878ced8542d910c9e enforces in
a database migration that you've run the simple_cell_setup
command for cells v2 but the instructions in the error and
in the release note said to use 'nova-manage db' when it should
be 'nova-manage cell_v2'.

Change-Id: I8e71d1c7022d1000f26b7c16ed1c56f6e87ab8ac
Closes-Bug: #1649341
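
For anyone hitting the misleading error text, the corrected invocation uses
the cell_v2 command group (a sketch; --transport-url may be needed if it
cannot be read from nova.conf):

  $ nova-manage cell_v2 simple_cell_setup --transport-url <url>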


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649341

Title:
  Undercloud upgrade fails with "Cell mappings are not created, but
  required for Ocata"

Status in OpenStack Compute (nova):
  Fix Released
Status in puppet-nova:
  New
Status in tripleo:
  Triaged

Bug description:
  Trying to upgrade with recent trunk nova and puppet-nova gives this
  error:

  Notice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: 
error: Cell mappings are not created, but required for Ocata. Please run 
nova-manage db simple_cell_setup before continuing.
  Error: /usr/bin/nova-manage  api_db sync returned 1 instead of one of [0]
  Error: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: change 
from notrun to 0 failed: /usr/bin/nova-manage  api_db sync returned 1 instead 
of one of [0]

  
  Debugging manually gives:

  $ sudo /usr/bin/nova-manage  api_db sync
  error: Cell mappings are not created, but required for Ocata. Please run 
nova-manage db simple_cell_setup before continuing.

  
  but...

  $ sudo nova-manage db simple_cell_setup
  usage: nova-manage db [-h]
                        {archive_deleted_rows,null_instance_uuid_scan,online_data_migrations,sync,version}
                        ...
  nova-manage db: error: argument action: invalid choice: 'simple_cell_setup' (choose from 'archive_deleted_rows', 'null_instance_uuid_scan', 'online_data_migrations', 'sync', 'version')

  
  I tried adding openstack-nova* to the delorean-current whitelist, but with 
the latest nova packages there still appears to be this mismatch.

  [stack@instack /]$ rpm -qa | grep nova
  openstack-nova-conductor-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  python-nova-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-scheduler-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  puppet-nova-10.0.0-0.20161211003757.09b9f7b.el7.centos.noarch
  python2-novaclient-6.0.0-0.20161003181629.25117fa.el7.centos.noarch
  openstack-nova-api-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-cert-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-common-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-compute-15.0.0-0.20161212155146.909410c.el7.centos.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649403] [NEW] nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSample.test_create_delete_server_with_instance_update randomly fails with ip_addresses not set in notifications

2016-12-12 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/90/409890/1/check/gate-nova-tox-db-functional-ubuntu-xenial/17015ce/console.html#_2016-12-12_19_24_33_892626

The difference between the expected instance.update notifications and what
we actually get is that u'ip_addresses' is empty ([]) in the actual
results. There is probably a race where the fake virt driver isn't waiting
for the (stubbed-out) network allocation to complete.

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: functional notifications testing

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** Tags added: functional testing

** Tags added: notifications

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649403

Title:
  nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSample.test_create_delete_server_with_instance_update
  randomly fails with ip_addresses not set in notifications

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/90/409890/1/check/gate-nova-tox-db-functional-ubuntu-xenial/17015ce/console.html#_2016-12-12_19_24_33_892626

  The difference between the expected instance.update notifications and
  what we actually get is that u'ip_addresses' is empty ([]) in the actual
  results. There is probably a race where the fake virt driver isn't
  waiting for the (stubbed-out) network allocation to complete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648332] Re: hzResourceProperty cannot handle 'priority' attribute

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/397132
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=8f58f66c79536ce4b07236ab6a1c012e7529e1c3
Submitter: Jenkins
Branch: master

commit 8f58f66c79536ce4b07236ab6a1c012e7529e1c3
Author: Kenji Ishii 
Date:   Mon Nov 14 20:00:23 2016 +0900

hzResourceProperty can handle 'priority' attribute

Like table columns, each property displayed in the drawer should have a
priority that controls whether it is shown, depending on the screen
width. This patch addresses that.

Change-Id: I1f45aa64735e81f96dbc3635b1db68f12e258b20
Closes-Bug: #1648332


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648332

Title:
   hzResourceProperty cannot handle 'priority' attribute

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  hzResourceProperty can be used to display additional attributes in the
  expansion area of an Angular table.
  Currently, columns in an Angular table have a 'priority' attribute, which
  shows or hides their values depending on the screen width.
  However, hzResourceProperty doesn't support that attribute.
  As with columns, it would be better to be able to show or hide properties
  depending on the screen width.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546396] Re: Nova api throws 500 error when invalid name passed to servers

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/282190
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=117fad897d5310d66cc2e690f3cd32e72614d8fd
Submitter: Jenkins
Branch:master

commit 117fad897d5310d66cc2e690f3cd32e72614d8fd
Author: jichenjc 
Date:   Fri Feb 26 12:52:07 2016 +0800

Refactor REGEX filters to eliminate 500 errors

You can currently create a 500 error on mysql by passing | as the name
filter because mysql assumes regex values are well crafted by the
application layer.

This puts in facilities to provide a safe regex filter per db engine.

It also refactors some of the inline code from _regex_instance_filter
into slightly more logical blocks, which will make it a little more
straight forward about where we need to do something smarter about
determining the dbtype in a cellsv2 world.

Change-Id: Ice2e21666905fdb76c001195e8fca21b427ea737
Closes-Bug: 1546396
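
To see why a bare '|' blows up only at the database layer (a quick
illustration; Python's own regex engine is more forgiving than MySQL's):

  import re

  re.compile('|')  # accepted by Python: alternation with an empty branch
  # MySQL instead rejects it with "Got error 'empty (sub)expression'",
  # which nova surfaced as an unhandled DBAPIError and an HTTP 500.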


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546396

Title:
  Nova api throws 500 error when invalid name passed to servers

Status in OpenStack Compute (nova):
  Fix Released

Bug description:

  The system was running 2k cirrOS VMs on 100 KVM hypervisors, and we saw
  the DB exception below while trying to delete servers via the nova api.

  
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-379addbc-c4e5-43b4-bf37-f64436e13750)

  stack@controller:/opt/stack/nova$ git log -1
  commit 5aee67a80a30725a7d2b95533baf8bfb73476ef1
  Merge: 2e28de7 0ecc870
  Author: Jenkins 
  Date:   Mon Feb 15 21:56:09 2016 +

  Merge "Move Disk allocation ratio to ResourceTracker"
  stack@controller:/opt/stack/nova$ 

  Have attached nova-api logs to bug.

  Logs:

  2016-02-16 20:47:29.186 DEBUG nova.api.openstack.wsgi 
[req-eff95987-035d-48fe-8c3b-5b947167e72c admin admin] Calling method '>' from (pid=29444) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:699
  2016-02-16 20:47:29.187 DEBUG nova.compute.api 
[req-eff95987-035d-48fe-8c3b-5b947167e72c admin admin] Searching by: 
{'deleted': False, 'project_id': u'3122784921764f0c8e2ca9feb5fc7424', u'name': 
u'|'} fro
  m (pid=29444) get_all /opt/stack/nova/nova/compute/api.py:2001
  2016-02-16 20:47:29.225 ERROR oslo_db.sqlalchemy.exc_filters 
[req-eff95987-035d-48fe-8c3b-5b947167e72c admin admin] DBAPIError exception 
wrapped from (pymysql.err.InternalError) (1139, u"Got error 'empty 
(sub)expression' from regexp") [SQL: u'SELECT anon_1.instances_created_at AS 
anon_1_instances_created_at, anon_1.instances_updated_at AS 
anon_1_instances_updated_at, anon_1.instances_deleted_at AS anon_1_
  instances_deleted_at, anon_1.instances_deleted AS anon_1_instances_deleted, 
anon_1.instances_id AS anon_1_instances_id, anon_1.instances_user_id AS 
anon_1_instances_user_id, anon_1.instances_project_id AS
   anon_1_instances_project_id, anon_1.instances_image_ref AS 
anon_1_instances_image_ref, anon_1.instances_kernel_id AS 
anon_1_instances_kernel_id, anon_1.instances_ramdisk_id AS 
anon_1_instances_ramdisk_id
  , anon_1.instances_hostname AS anon_1_instances_hostname, 
anon_1.instances_launch_index AS anon_1_instances_launch_index, 
anon_1.instances_key_name AS anon_1_instances_key_name, 
anon_1.instances_key_data 
  AS anon_1_instances_key_data, anon_1.instances_power_state AS 
anon_1_instances_power_state, anon_1.instances_vm_state AS 
anon_1_instances_vm_state, anon_1.instances_task_state AS 
anon_1_instances_task_sta
  te, anon_1.instances_memory_mb AS anon_1_instances_memory_mb, 
anon_1.instances_vcpus AS anon_1_instances_vcpus, anon_1.instances_root_gb AS 
anon_1_instances_root_gb, anon_1.instances_ephemeral_gb AS anon_
  1_instances_ephemeral_gb, anon_1.instances_ephemeral_key_uuid AS 
anon_1_instances_ephemeral_key_uuid, anon_1.instances_host AS 
anon_1_instances_host, anon_1.instances_node AS anon_1_instances_node, anon_1
  .instances_instance_type_id AS anon_1_instances_instance_type_id, 
anon_1.instances_user_data AS anon_1_instances_user_data, 
anon_1.instances_reservation_id AS anon_1_instances_reservation_id, anon_1.insta
  nces_launched_at AS anon_1_instances_launched_at, 
anon_1.instances_terminated_at AS anon_1_instances_terminated_at, 
anon_1.instances_availability_zone AS anon_1_instances_availability_zone, 
anon_1.instanc
  es_display_name AS anon_1_instances_display_name, 
anon_1.instances_display_description AS anon_1_instances_display_description, 
anon_1.instances_launched_on AS anon_1_instances_launched_on, anon_1.instanc
  es_locked AS anon_1_instances_locked, anon_1.instances_locked_by AS 
anon_1_instances_locked_by, anon_1.instances_os_type AS 
anon_1_instances_os_type, anon_1.instances_architecture AS anon_1_instances_a

[Yahoo-eng-team] [Bug 1632103] Re: UX: Bullets in Launch Instance Wizard don't have left-padding

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/406879
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=964719507882dada12d0fe63b9e29bf403f06ad8
Submitter: Jenkins
Branch: master

commit 964719507882dada12d0fe63b9e29bf403f06ad8
Author: anu 
Date:   Mon Dec 5 15:55:58 2016 +0530

UX: Bullets in Launch Instance Wizard don't have left-padding

I have fixed the above bug; the bullets now render with left padding.
Closes-Bug: #1632103

Change-Id: I67338348fe160f634d19439a810d9d6853a59cc3


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1632103

Title:
  UX: Bullets in Launch Instance Wizard don't have left-padding

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  How to reproduce:
  1. Go to Project->Instances
  2. Click on the "Launch Instance" button
  3. Click on the "Question mark icon" in the upper right corner (shows help)
  4. There are two <ul> elements, each with several <li> items. The bullets
  are not aligned with the other elements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1632103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649234] Re: neutron-server not available after a neutron-server start via systemd

2016-12-12 Thread Brian Haley
You will need to file a bug against systemd instead of neutron for this,
as it isn't per se a neutron issue: from the description, the server did
start.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649234

Title:
  neutron-server not available after a neutron-server start via systemd

Status in neutron:
  Invalid

Bug description:
  When starting neutron-server via systemd, "systemctl start
  openstack-neutron" returns before the server is actually ready.
  So when using the server directly afterwards (e.g. via "neutron
  net-list") you get:

  Unable to establish connection to
  http://192.168.122.96:9696/v2.0/networks.json

  
  This could be avoided if systemd's SD_NOTIFY interface were used when
  starting the server, and the .service file then relied on it.
  oslo.service already supports SD_NOTIFY, and nova uses it for some of
  its services.
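
  A sketch of the unit-file side of this (hypothetical; it assumes the
  service actually calls sd_notify, e.g. via oslo.service's systemd helper):

    [Service]
    Type=notify
    NotifyAccess=all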

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649384] [NEW] in placement service capacity exceeded LOG.warning call is in wrong place

2016-12-12 Thread Chris Dent
Public bug reported:

As of master 2016-12-12, there are four LOG.warning calls in
nova.objects.resource_provider, all associated in some way with
inventory capacity being violated in some fashion. Two of these log and
then raise an exception, two are simply warnings.

For the two that log and then raise, instead of logging in the objects it
would be more correct to log where the exception is caught (the placement
API layer).

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649384

Title:
  in placement service capacity exceeded LOG.warning call is in wrong
  place

Status in OpenStack Compute (nova):
  New

Bug description:
  As of master 2016-12-12, there are four LOG.warning calls in
  nova.objects.resource_provider, all associated in some way with
  inventory capacity being violated in some fashion. Two of these log
  and then raise an exception, two are simply warnings.

  For the two that log and then raise, instead of logging in the objects
  it would be more correct to log where the exception is caught (the
  placement API layer).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648047] Re: Compute API in Compute API Reference

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/408270
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a35de0cd7b8ad3fb79e07282e40fcfc11a6b409a
Submitter: Jenkins
Branch: master

commit a35de0cd7b8ad3fb79e07282e40fcfc11a6b409a
Author: Matt Riedemann 
Date:   Wed Dec 7 14:36:24 2016 -0500

api-ref: note that os-virtual-interfaces is nova-network only

Let's avoid confusion over errors from the os-virtual-interfaces
exception when using Neutron and make a note that it's only
implemented for nova-network.

Change-Id: I7a136eecbeb5f89dfe98f51abf1188213bdca9fd
Closes-Bug: #1648047


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648047

Title:
  Compute API in Compute API Reference

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  List Virtual Interfaces:
  the link suffix isn't os-virtual-interfaces; in fact, the link is
  os-interface in the Newton version.

  ---
  Release: 15.0.0.0b2.dev306 on 'Wed Dec 7 02:22:20 2016, commit 8f24088'
  SHA: 
  Source: Can't derive source file URL
  URL: http://developer.openstack.org/api-ref/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623327] Re: openstack orchestration service list fails to return endpoint

2016-12-12 Thread Billy Olsen
Based on Brad's comment in #9, there were actions missing for the
openstack orchestration service. I believe this is no longer a valid
bug, so I'm marking the remaining tasks as invalid.

** Changed in: python-openstackclient
   Status: New => Invalid

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1623327

Title:
  openstack orchestration service list fails to return endpoint

Status in OpenStack Identity (keystone):
  Invalid
Status in python-heatclient:
  Invalid
Status in python-openstackclient:
  Invalid

Bug description:
  OpenStack service endpoints are created for the heat service, but the
  openstack client cannot find the endpoints to issue the query against.
  I suspect this is because the domain auth tokens included in the
  initial authentication don't include any endpoints containing
  $(tenant_id)s.

  I'm not sure whether this should be a bug against the openstack client
  or against keystone. I believe it is intentional to exclude endpoints
  with a tenant_id substitution, but it doesn't make sense to me: the
  openstack catalog list command appears to use this catalog query to
  list endpoints and services, and it only gets the service but not the
  endpoints.

  Here's some output collected:

  > openstack catalog list
  +----------+----------------+-----------------------------------------+
  | Name     | Type           | Endpoints                               |
  +----------+----------------+-----------------------------------------+
  | heat     | orchestration  |                                         |
  | heat-cfn | cloudformation | RegionOne                               |
  |          |                |   public: http://10.5.20.176:8000/v1   |
  |          |                | RegionOne                               |
  |          |                |   admin: http://10.5.20.176:8000/v1    |
  |          |                | RegionOne                               |
  |          |                |   internal: http://10.5.20.176:8000/v1 |
  |          |                |                                         |

  ...

  > openstack endpoint list | grep heat
  | 85ee6b6e8f814856a3a547982f6b2835 | RegionOne  | heat | 
orchestration   | True| internal  | 
http://10.5.20.176:8004/v1/$(tenant_id)s  |
  | 895cb2e4e5d1492e9e40c205f6b0c508 | RegionOne  | heat | 
orchestration   | True| public| 
http://10.5.20.176:8004/v1/$(tenant_id)s  |
  | ad63a139c90749ff9d98a704200d2e49 | RegionOne  | heat | 
orchestration   | True| admin | 
http://10.5.20.176:8004/v1/$(tenant_id)s  |


  > openstack orchestration service list
  public endpoint for orchestration service not found

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1623327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649341] Re: Undercloud upgrade fails with "Cell mappings are not created, but required for Ocata"

2016-12-12 Thread Steven Hardy
[stack@instack ~]$ sudo grep transport_url /etc/nova/nova.conf
transport_url=rabbit://55f7b1c2b4ee0e8a4f8311de334c6b71d13c1b45:1cf85a15b3fb0d86ec3bda2dedd3b8952ad6d72a@192.0.2.1//

[stack@instack ~]$ sudo nova-manage cell_v2 simple_cell_setup --transport-url 
"rabbit://55f7b1c2b4ee0e8a4f8311de334c6b71d13c1b45:1cf85a15b3fb0d86ec3bda2dedd3b8952ad6d72a@192.0.2.1//"
Traceback (most recent call last):
  File "/bin/nova-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1561, in main
    config.parse_args(sys.argv)
  File "/usr/lib/python2.7/site-packages/nova/config.py", line 50, in parse_args
    rpc.init(CONF)
  File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 74, in init
    TRANSPORT = create_transport(get_transport_url())
  File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 154, in get_transport_url
    return messaging.TransportURL.parse(CONF, url_str, TRANSPORT_ALIASES)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 398, in parse
    url = url or conf.transport_url
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2320, in __getattr__
    raise NoSuchOptError(name)
oslo_config.cfg.NoSuchOptError: no such option transport_url in group [DEFAULT]

It may be that this is partly a nova and partly a puppet-nova bug: the
nova release notes and api_db sync help text are wrong, and puppet-nova
isn't driving the cell_v2 simple_cell_setup at the appropriate time with
the URL it expects (which isn't all that clear, as evidently it doesn't
match the nova.conf setting of the same name).

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: puppet-nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649341

Title:
  Undercloud upgrade fails with "Cell mappings are not created, but
  required for Ocata"

Status in OpenStack Compute (nova):
  New
Status in puppet-nova:
  New
Status in tripleo:
  Triaged

Bug description:
  Trying to upgrade with recent trunk nova and puppet-nova gives this
  error:

  Notice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: 
error: Cell mappings are not created, but required for Ocata. Please run 
nova-manage db simple_cell_setup before continuing.
  Error: /usr/bin/nova-manage  api_db sync returned 1 instead of one of [0]
  Error: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: change 
from notrun to 0 failed: /usr/bin/nova-manage  api_db sync returned 1 instead 
of one of [0]

  
  Debugging manually gives:

  $ sudo /usr/bin/nova-manage  api_db sync
  error: Cell mappings are not created, but required for Ocata. Please run 
nova-manage db simple_cell_setup before continuing.

  
  but...

  $ sudo nova-manage db simple_cell_setup
  usage: nova-manage db [-h]


{archive_deleted_rows,null_instance_uuid_scan,online_data_migrations,sync,version}
...
  nova-manage db: error: argument action: invalid choice: 'simple_cell_setup' 
(choose from 'archive_deleted_rows', 'null_instance_uuid_scan', 
'online_data_migrations', 'sync', 'version')

  
  I tried adding openstack-nova* to the delorean-current whitelist, but with 
the latest nova packages there still appears to be this mismatch.

  [stack@instack /]$ rpm -qa | grep nova
  openstack-nova-conductor-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  python-nova-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-scheduler-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  puppet-nova-10.0.0-0.20161211003757.09b9f7b.el7.centos.noarch
  python2-novaclient-6.0.0-0.20161003181629.25117fa.el7.centos.noarch
  openstack-nova-api-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-cert-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-common-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-compute-15.0.0-0.20161212155146.909410c.el7.centos.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611458] Re: Migration incorrectly compares None as greater than any time

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/353123
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ba27ae7a4b5435cf95d42df818903a23d8133783
Submitter: Jenkins
Branch:master

commit ba27ae7a4b5435cf95d42df818903a23d8133783
Author: EdLeafe 
Date:   Tue Aug 9 21:48:30 2016 +

Correct the sorting of datetimes for migrations

In commit e5269b3a8f95c41283a9e6109835142586fe62a6, the code to compare
the updated_at times for different migration objects was changed to make
it Python3 compatible. However, there was a logical error introduced,
whereby migrations with a value of None for their updated_at attribute
were considered as more recent than those with actual values. This fixes
that logic so that None values always sort as older than actual values.

Closes-Bug: #1611458

Change-Id: If4feceb9e385f962fdf690f3ed62f63a19c61d7d
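
As a rough illustration of the fixed ordering (a minimal sketch, not Nova's
actual code), a sort key can rank a None updated_at below any real timestamp:

    import datetime

    def updated_at_key(migration):
        # Tuples compare element-wise, so (False, ...) sorts before
        # (True, ...): a None updated_at always ranks as older than any
        # real datetime value.
        updated_at = migration.get('updated_at')
        return (updated_at is not None,
                updated_at or datetime.datetime.min)

    migrations = [
        {'id': 1, 'updated_at': None},
        {'id': 2, 'updated_at': datetime.datetime(2016, 8, 9, 21, 48)},
    ]
    latest = max(migrations, key=updated_at_key)
    assert latest['id'] == 2  # the real timestamp wins over None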


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611458

Title:
  Migration incorrectly compares None as greater than any time

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  The code in nova/compute/resource_tracker.py was updated in commit 
e5269b3a8f95c41283a9e6109835142586fe62a6 to better handle the comparison of 
potential None values in order to make the code Python 3 compatible. 
Unfortunately, the logic is incorrect, and will consider a migration with a 
None value for updated_at as more recent than a migration with a non-None 
datetime value.

  Steps to reproduce
  ==
  The easiest way to reproduce is to run the unit test here:

  
https://review.openstack.org/#/c/350319/8/nova/tests/unit/compute/test_tracker.py@1827

  Note that it now has to *expect* the None value to be preferred over
  an actual value

  Expected result
  ===
  Any migration that has been updated is always more recent than one that 
hasn't, so I would expect that any None-valued migration would not be selected 
over an actual date.

  Actual result
  =
  The None value is selected over one that has been updated.

  Environment
  ===
  Nova master

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1611458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649317] [NEW] Combinatorial blow-up with the Alchemy strategy lazy='joined'

2016-12-12 Thread Pierre Crégut
Public bug reported:

A regular tenant can create objects that will require a lot of time to
enumerate because of the strategy used by the ORM to rebuild the object
from the different tables in the database.

The attached script can be used to reproduce the problem. It creates a
network with several subnetworks (each with several routes and DNS
servers), several tags, and several RBAC policies, but it does not exceed
any typical quota. Because the network is retrieved after each
modification, it is hard to run this script to completion in a typical
setting.

Using the strategy lazy='joined' means that a single request is
performed to retrieve an object and all its parts that may be expressed
in several tables. For example when one asks for the network list, a
complex query will be issued that also retrieves subnets, subnetpools,
dns agents, etc. The exact query is visible at
http://paste.openstack.org/show/592120/

Unfortunately, using the strategy lazy='joined' has another impact when the
relation between the parent object and the sub-object has a 1-n arity.
Rather than giving back exactly the rows needed, the single query builds
a kind of cross-product of the answers sharing the join keys. For
example, if we have a network with 4 tags and 4 subnetworks, we will get
at least 16 rows, one for each combination of tag and subnetwork. Other
fields like RBAC rules, special routes and DNS servers can amplify the
problem.
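
As a self-contained illustration (hypothetical models, not Neutron's; the
manual outer joins emulate the single SELECT the joined eager loader emits),
one network with 4 tags and 4 subnets already costs 16 rows:

    from sqlalchemy import (Column, ForeignKey, Integer, create_engine,
                            func, select)
    from sqlalchemy.orm import Session, declarative_base, relationship

    Base = declarative_base()

    class Network(Base):
        __tablename__ = 'networks'
        id = Column(Integer, primary_key=True)
        tags = relationship('Tag', lazy='joined')        # eager-joined
        subnets = relationship('Subnet', lazy='joined')  # eager-joined

    class Tag(Base):
        __tablename__ = 'tags'
        id = Column(Integer, primary_key=True)
        network_id = Column(Integer, ForeignKey('networks.id'))

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = Column(Integer, primary_key=True)
        network_id = Column(Integer, ForeignKey('networks.id'))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        session.add(Network(id=1))
        session.add_all([Tag(network_id=1) for _ in range(4)])
        session.add_all([Subnet(network_id=1) for _ in range(4)])
        session.commit()
        # Both collections are LEFT OUTER JOINed into one SELECT, so the
        # database computes 4 x 4 = 16 rows for a single network; the ORM
        # deduplicates the entities afterwards, but the server and the
        # wire already paid for the cross-product.
        joined = (Network.__table__
                  .outerjoin(Tag.__table__)
                  .outerjoin(Subnet.__table__))
        print(session.execute(
            select(func.count()).select_from(joined)).scalar())  # 16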

It is not clear whether the heavy usage of the database server and neutron
server could lead to a real denial of service for other users.

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "Script reproducing the problem"
   
https://bugs.launchpad.net/bugs/1649317/+attachment/4790746/+files/combinatorial_blowup.sh

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649317

Title:
  Combinatorial blow-up with the Alchemy strategy lazy='joined'

Status in neutron:
  New

Bug description:
  A regular tenant can create objects that will require a lot of time to
  enumerate because of the strategy used by the ORM to rebuild the
  object from the different tables in the database.

  The attached script can be used to reproduce the problem. It creates a
  network with several subnetworks (each with several routes and DNS
  servers), several tags, and several RBAC policies, but it does not
  exceed any typical quota. Because the network is retrieved after each
  modification, it is hard to run this script to completion in a typical
  setting.

  Using the strategy lazy='joined' means that a single request is
  performed to retrieve an object and all its parts that may be
  expressed in several tables. For example when one asks for the network
  list, a complex query will be issued that also retrieves subnets,
  subnetpools, dns agents, etc. The exact query is visible at
  http://paste.openstack.org/show/592120/

  Unfortunately, using the strategy lazy='joined' has another impact when
  the relation between the parent object and the sub-object has a 1-n
  arity. Rather than giving back exactly the rows needed, the single
  query builds a kind of cross-product of the answers sharing the join
  keys. For example, if we have a network with 4 tags and 4 subnetworks,
  we will get at least 16 rows, one for each combination of tag and
  subnetwork. Other fields like RBAC rules, special routes and DNS
  servers can amplify the problem.

  It is not clear whether the heavy usage of the database server and
  neutron server could lead to a real denial of service for other users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648887] Re: nova host-servers-migrate [hostname] fails

2016-12-12 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => In Progress

** Changed in: nova/newton
   Importance: Undecided => High

** Changed in: nova/newton
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648887

Title:
  nova host-servers-migrate [hostname] fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress

Bug description:
  Tried to migrate all vms as part of upgrading controllers going from OSP9 to 
OSP10.
  Successfully migrated all vms from computes of the same version.
  Upgraded one compute node.
  Then tried to migrate all vms from the other machine to the upgraded one with:
  nova host-servers-migrate overcloud-compute-0.localdomain

  This resulted in an error:
  Grepping for errors in the /var/log/nova/nova-compute.log file on a target 
compute:

  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
[req-5bbd4371-dd07-443c-884f-02937064bb8f 929ce5e2752e4705bcc6d32e303230a9 
2a99364bbb3c496290fcf71066bdfbfb - - -] Exception during message handling
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 150, 
in dispatch
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 121, 
in _do_dispatch
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 75, in 
wrapped
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 66, in 
wrapped
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 188, in 
decorated_function
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
LOG.warning(msg, e, instance=instance)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 157, in 
decorated_function
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 613, in 
decorated_function
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 145, in 
decorated_function
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
migration.instance_uuid, exc_info=True)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/

[Yahoo-eng-team] [Bug 1649300] [NEW] ocata - webob.exc.HTTPBadRequest: The Store URI was malformed.

2016-12-12 Thread Corey Bryant
Public bug reported:

We're hitting the following 2 errors in Ocata Ubuntu packages when
running unit tests:


glance.tests.unit.v2.test_images_resource.TestImagesController.test_add_location_possible_on_queued
---

Captured traceback:
~~~
Traceback (most recent call last):
  File "glance/tests/unit/v2/test_images_resource.py", line 1607, in 
test_add_location_possible_on_queued
output = self.controller.update(request, '1', changes)
  File "glance/common/utils.py", line 363, in wrapped
return func(self, req, *args, **kwargs)
  File "glance/api/v2/images.py", line 148, in update
change_method(req, image, change)
  File "glance/api/v2/images.py", line 201, in _do_add
self._do_add_locations(image, path[1], value)
  File "glance/api/v2/images.py", line 314, in _do_add_locations
raise webob.exc.HTTPBadRequest(explanation=e.msg)
webob.exc.HTTPBadRequest: The Store URI was malformed.


glance.tests.unit.v2.test_images_resource.TestImagesController.test_replace_location_possible_on_queued
---

Captured traceback:
~~~
Traceback (most recent call last):
  File "glance/tests/unit/v2/test_images_resource.py", line 1587, in 
test_replace_location_possible_on_queued
output = self.controller.update(request, '1', changes)
  File "glance/common/utils.py", line 363, in wrapped
return func(self, req, *args, **kwargs)
  File "glance/api/v2/images.py", line 148, in update
change_method(req, image, change)
  File "glance/api/v2/images.py", line 182, in _do_replace
self._do_replace_locations(image, value)
  File "glance/api/v2/images.py", line 288, in _do_replace_locations
raise webob.exc.HTTPBadRequest(explanation=e.msg)
webob.exc.HTTPBadRequest: The Store URI was malformed.


This started with the introduction of the following commit:

commit 4ac8adbccc2b7bbef3d53f9141079c48f9c768f4
Author: Nikhil Komawar 
Date:   Wed Sep 7 17:29:41 2016 -0400

   Restrict location updates to active, queued images

** Affects: glance
 Importance: Undecided
 Status: New

** Summary changed:

- webob.exc.HTTPBadRequest: The Store URI was malformed.
+ ocata - webob.exc.HTTPBadRequest: The Store URI was malformed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1649300

Title:
  ocata - webob.exc.HTTPBadRequest: The Store URI was malformed.

Status in Glance:
  New

Bug description:
  We're hitting the following 2 errors in Ocata Ubuntu packages when
  running unit tests:

  
  
glance.tests.unit.v2.test_images_resource.TestImagesController.test_add_location_possible_on_queued
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "glance/tests/unit/v2/test_images_resource.py", line 1607, in 
test_add_location_possible_on_queued
  output = self.controller.update(request, '1', changes)
File "glance/common/utils.py", line 363, in wrapped
  return func(self, req, *args, **kwargs)
File "glance/api/v2/images.py", line 148, in update
  change_method(req, image, change)
File "glance/api/v2/images.py", line 201, in _do_add
  self._do_add_locations(image, path[1], value)
File "glance/api/v2/images.py", line 314, in _do_add_locations
  raise webob.exc.HTTPBadRequest(explanation=e.msg)
  webob.exc.HTTPBadRequest: The Store URI was malformed.
  

  
glance.tests.unit.v2.test_images_resource.TestImagesController.test_replace_location_possible_on_queued
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "glance/tests/unit/v2/test_images_resource.py", line 1587, in 
test_replace_location_possible_on_queued
  output = self.controller.update(request, '1', changes)
File "glance/common/utils.py", line 363, in wrapped
  return func(self, req, *args, **kwargs)
File "glance/api/v2/images.py", line 148, in update
  change_method(req, image, change)
File "glance/api/v2/images.py", line 182, in _do_replace
  self._do_replace_locations(image, value)
File "glance/api/v2/images.py", line 288, in _do_replace_locations
  raise webob.exc.HTTPBadRequest(explanation=e.msg)
  webob.exc.HTTPBadRequest: The Store URI was malformed.

  
  This started with the introduction of the following commit:

  commit 4ac8adbccc2b7bbef3d53f9141079c48f9c768f4
  Author: Nikhil

[Yahoo-eng-team] [Bug 1649297] [NEW] N313 hacking check is not being run

2016-12-12 Thread Maciej Szankin
Public bug reported:

Description
===
The N313 hacking check has a regex that prevents the check from ever being run.

Steps to reproduce
==
Change any configuration option to start with a lower-case letter and run 
``tox -e pep8``.

Expected result
===
The ``tox -e pep8`` command should fail due to a violation of the N313 hacking 
check.

Actual result
=
``tox -e pep8`` passes.

Environment
===
Current master branch (4728c3e4fde5b5b7b068f60ea410d663deea7db2)

Logs & Configs
==
None. This check also does not have any UT coverage.
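
For reference, a hacking check is just a flake8 plugin driven by a regex; the
sketch below (the rule and pattern are illustrative, not nova's actual N313)
shows how a pattern that never matches turns a check into a silent no-op:

    import re

    # If this pattern is over-anchored or otherwise never matches the
    # logical lines flake8 hands in, the generator below yields nothing
    # and the check never fires -- pep8 then passes even on violations.
    OPT_HELP_RE = re.compile(r'help\s*=\s*[\'"][a-z]')

    def check_opt_help_capitalized(logical_line):
        """N313 - (illustrative) option help text should be capitalized."""
        if OPT_HELP_RE.search(logical_line):
            yield 0, 'N313: option help text should start with a capital'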

** Affects: nova
 Importance: Low
 Assignee: Maciej Szankin (mszankin)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Maciej Szankin (mszankin)

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649297

Title:
  N313 hacking check is not being run

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The N313 hacking check has a regex that prevents the check from ever being 
run.

  Steps to reproduce
  ==
  Change any configuration option to start with a lower-case letter and run 
``tox -e pep8``.

  Expected result
  ===
  The ``tox -e pep8`` command should fail due to a violation of the N313 
hacking check.

  Actual result
  =
  ``tox -e pep8`` passes.

  Environment
  ===
  Current master branch (4728c3e4fde5b5b7b068f60ea410d663deea7db2)

  Logs & Configs
  ==
  None. This check also does not have any UT coverage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648887] Re: nova host-servers-migrate [hostname] fails

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409338
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=15564eb355657fcf2594d5060cb36f25e28bc0c9
Submitter: Jenkins
Branch:master

commit 15564eb355657fcf2594d5060cb36f25e28bc0c9
Author: Dan Smith 
Date:   Fri Dec 9 13:23:44 2016 -0800

Fix crashing during guest config with pci_devices=None

The Instance.pci_devices field is nullable, but the get_instance_pci_devs()
function does not account for that. If it is passed an instance (from an
older deployment) with no pci_devices, it will crash trying to iterate
None as a list.

Change-Id: I3d535e01ac31db7804347c3938c0d88a28ba67f5
Closes-Bug: #1648887
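
The shape of the fix is a simple None guard; a minimal sketch (simplified,
not Nova's exact helper):

    def get_instance_pci_devs(instance, request_id=None):
        # pci_devices is a nullable field on older instance records, so
        # treat None as "no devices" instead of iterating it directly.
        pci_devices = instance.get('pci_devices') or []
        return [dev for dev in pci_devices
                if request_id is None
                or dev.get('request_id') == request_id]

    # Iterating None used to raise TypeError; now we simply get [].
    assert get_instance_pci_devs({'pci_devices': None}) == []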


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648887

Title:
  nova host-servers-migrate [hostname] fails

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Tried to migrate all vms as part of upgrading controllers going from OSP9 to 
OSP10.
  Successfully migrated all vms from computes of the same version.
  Upgraded one compute node.
  Then tried to migrate all vms from the other machine to the upgraded one with:
  nova host-servers-migrate overcloud-compute-0.localdomain

  This resulted in an error:
  Grepping for errors in the /var/log/nova/nova-compute.log file on a target 
compute:

  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
[req-5bbd4371-dd07-443c-884f-02937064bb8f 929ce5e2752e4705bcc6d32e303230a9 
2a99364bbb3c496290fcf71066bdfbfb - - -] Exception during message handling
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 150, 
in dispatch
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 121, 
in _do_dispatch
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 75, in 
wrapped
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 66, in 
wrapped
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 188, in 
decorated_function
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
LOG.warning(msg, e, instance=instance)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 157, in 
decorated_function
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 613, in 
decorated_function
  2016-12-09 18:44:21.693 31237 ERROR oslo_messaging.rpc.server re

[Yahoo-eng-team] [Bug 1641535] Re: FIP failed to remove in router's standby node

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/397092
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b45fd35e3f4a2aaacc7a22faafd00b7350e4f398
Submitter: Jenkins
Branch:master

commit b45fd35e3f4a2aaacc7a22faafd00b7350e4f398
Author: Dongcan Ye 
Date:   Mon Nov 14 17:35:20 2016 +0800

Remove floatingip address ignores ha_state

We enable both router_distributed and l3_ha on the server side,
and configure the L3 agents on compute nodes as dvr_snat.
The HA router removes the floating IP address only on the master
node, and the DVR local router only removes the FIP rule. This
causes an RTNETLINK error if we operate on a floating IP as
"associate --> disassociate --> reassociate".

This patch removes the floating IP address whether the router's
ha_state is master or backup.
Another solution would be to add remove_floating_ip in dvr_edge_router.

Change-Id: I2fab45cff786c475d69c5f0cf4e9b71e6bbbe653
Closes-Bug: #1641535


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641535

Title:
  FIP failed to remove in router's standby node

Status in neutron:
  Fix Released

Bug description:
  ENV
  
  1. Server side:
 enable router_distributed and l3_ha

  2. Agent side:
 all L3 agents are in dvr_snat mode (including network nodes and compute nodes)

  
  How to reproduce:
  =
  associate floatingip  -->  disassociate floatingip  --> reassociate floatingip

  We hit the following trace in the l3 agent:
  http://paste.openstack.org/show/589071/

  
  Analysis
  ==
  When we process a floating IP (in the situation where the router is ha + 
dvr), ha_router only removes the floating IP if the HA state is 'master'[1], 
while dvr_local_router removes its related IP rule.
  When we then reassociate the floating IP, it hits an RTNETLINK error, because 
the related IP rule has already been deleted.

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L273
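
  A minimal sketch of the fix's shape (class and helper names are simplified,
  not neutron's exact API): the HA override stops gating the removal on
  ha_state, so master and backup stay consistent:

      class RouterInfo(object):
          def remove_floating_ip(self, device, ip_cidr):
              device.delete_addr(ip_cidr)  # assumed device helper

      class HaRouter(RouterInfo):
          def remove_floating_ip(self, device, ip_cidr):
              # Previously guarded by: if self.ha_state == 'master'.
              # Removing unconditionally avoids leaving a stale address
              # on the backup after its IP rule is already gone.
              super(HaRouter, self).remove_floating_ip(device, ip_cidr)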

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1641535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649171] Re: qos tempest tests should check the extension before using it

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409574
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=60dc1a0ce04968b39100ce7ab252fd1e07c0
Submitter: Jenkins
Branch:master

commit 60dc1a0ce04968b39100ce7ab252fd1e07c0
Author: YAMAMOTO Takashi 
Date:   Mon Dec 12 11:14:58 2016 +0900

tempest: Fix qos extension check

Fix issues introduced by the recent change. [1]

[1] I88e59cdbd79afb5337052ba3e5aecb96c7c8ea1c

Closes-Bug: #1649171
Change-Id: I2a2b627fd30ec564d8c8566fd3e46eb889e15dc9


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649171

Title:
  qos tempest tests should check the extension before using it

Status in neutron:
  Fix Released

Bug description:
  After I88e59cdbd79afb5337052ba3e5aecb96c7c8ea1c, the tests seem to query the 
available qos rule types even when the qos extension is not enabled.
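
  A hedged sketch of the intended guard (client method names assumed, not
  tempest's exact helpers):

      def get_supported_qos_rule_types(client):
          aliases = {ext['alias']
                     for ext in client.list_extensions()['extensions']}
          if 'qos' not in aliases:
              # Extension disabled: skip instead of hitting qos endpoints.
              return []
          return [t['type']
                  for t in client.list_qos_rule_types()['rule_types']]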

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648879] Re: l3 rpc handler not checking for UNBOUND with host_id

2016-12-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409314
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a5a8a37344f6d193886ae8d047664aef76920ed8
Submitter: Jenkins
Branch:master

commit a5a8a37344f6d193886ae8d047664aef76920ed8
Author: Kevin Benton 
Date:   Fri Dec 9 11:49:09 2016 -0800

Check for unbound ports in L3 RPC handler

The handler was making the incorrect assumption that once a
host_id was set, a port which failed to bind could only be in
the 'binding_failed' state. So it would not try to rebind ports
that encountered an exception during the port binding commit that left
them in the unbound state.

Change-Id: I28bbeda5fed4275ea38e27308518f89df9ab4eff
Closes-Bug: #1648879


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648879

Title:
  l3 rpc handler not checking for UNBOUND with host_id

Status in neutron:
  Fix Released

Bug description:
  If a binding fails to commit after a host_id is set on a port, the
  port will have a host_id with a vif type of 'unbound'. In this case
  the L3 RPC handler should be trying to update the port again as it
  would when the binding had changed the vif type to binding_failed.
  However, it currently doesn't due to a bad conditional[1].

  
  1. 
https://github.com/openstack/neutron/blob/5254f0ab60d46cabdace6d59ce50ca05568a87d5/neutron/api/rpc/handlers/l3_rpc.py#L141-L142
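
  A hedged sketch of the corrected condition (constant names assumed): a port
  that already has a host but is still unbound should be retried, not only one
  whose binding explicitly failed:

      VIF_TYPE_BINDING_FAILED = 'binding_failed'
      VIF_TYPE_UNBOUND = 'unbound'

      def port_needs_rebind(port):
          # Retry both ports whose binding failed outright and ports left
          # unbound by an exception during the binding commit.
          return (bool(port.get('binding:host_id')) and
                  port.get('binding:vif_type') in (VIF_TYPE_BINDING_FAILED,
                                                   VIF_TYPE_UNBOUND))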

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1648879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649275] [NEW] Launch instance wizard

2016-12-12 Thread Mounika
Public bug reported:

When I launch a new instance through the launch instance wizard, the page 
redirects me to the instances page, but the status is not automatically updated 
in the table; it only gets updated when I refresh the page. Strangely, this 
behavior is observed only in Firefox, where the row doesn't get automatically 
updated.
Firefox version: 48
Openstack version: Mitaka
OS version: Ubuntu 14.04

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1649275

Title:
  Launch instance wizard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I launch a new instance through the launch instance wizard, the page 
redirects me to the instances page, but the status is not automatically updated 
in the table; it only gets updated when I refresh the page. Strangely, this 
behavior is observed only in Firefox, where the row doesn't get automatically 
updated.
  Firefox version: 48
  Openstack version: Mitaka
  OS version: Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1649275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625075] Re: Shared & public images not working with multi-tenant swift backend

2016-12-12 Thread Ian Cordasco
** Also affects: glance/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1625075

Title:
  Shared & public images not working with multi-tenant swift backend

Status in Glance:
  Triaged
Status in Glance newton series:
  New

Bug description:
  Hi,

  We are seeing issues when trying to use public and shared images
  when Glance is configured to use a multi-tenant Swift backend.

  Here's what we see :

  1. Create a public image in project cf8fc081a9954cef81befb67b4002ce8 
  2. Attempt to create instance from image in project 
67e22ed6876d432d9e48f9bd2a20a527
  3. The instance creation fails, with the following log line in the Glance API 

  Object GET failed:
  https://objectstore.domain.corp:443/v1/AUTH_67e22ed6876d432d9e48f9bd2a20a527
  /glance_6e84cb8d-7f09-4f78-8363-a6005e0c51d2/6e84cb8d-
  7f09-4f78-8363-a6005e0c51d2 404 Not Found

  The issue appears to be that the storage URL in the Swift store driver
  is determined from the catalog in the context of the current request
  (which is scoped to the project we are creating the instance in), not
  the project where the image was created.
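
  To make the mismatch concrete, a minimal sketch (the endpoint format is
  taken from the log line above; the helper is hypothetical):

      def swift_storage_url(endpoint, project_id):
          return '%s/AUTH_%s' % (endpoint, project_id)

      ENDPOINT = 'https://objectstore.domain.corp:443/v1'
      owner = 'cf8fc081a9954cef81befb67b4002ce8'      # image owner project
      requester = '67e22ed6876d432d9e48f9bd2a20a527'  # project booting it

      # The object lives under the owner's account...
      print(swift_storage_url(ENDPOINT, owner))
      # ...but the URL is built from the requester's context -> 404.
      print(swift_storage_url(ENDPOINT, requester))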

  Looking at the changes introduced here
  
https://git.openstack.org/cgit/openstack/glance_store/commit/?id=68762058cc5d063f3a846b495af03150e648224f
  it seems to us that storage_url can only contain the account
  AUTH_[current_context_project_id], and in this case it is not clear how a
  public or shared image from another project can be retrieved from
  Swift.

  Since this is pretty fundamental for the use case, we can only assume we are 
missing some configuration option. The direct URL in Glance is stored as 
direct_url='swift+config://swift-global/glance_c7396e07-484c-4ef3-b54c-9b6ea0cb367e/c7396e07-484c-4ef3-b54c-9b6ea0cb367e',
  and since the driver and location seem to have no information on the image 
other than the image id, it is not clear how it could distinguish between 
public/shared images and private ones, or determine the project id of the 
shared image.

  The only way we can get this to work is to first create an instance on
  each hypervisor in the project of the shared image. When we do this,
  creating instances in a second project works because the image is
  cached on the hypervisor - obviously this is not a viable workaround.

  Any information on how to get this scenario working would be much
  appreciated.

  Thanks

  Andrew

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1625075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649263] [NEW] Nova APi failed after upgrade mitaka->newton

2016-12-12 Thread Jack Ivanov
Public bug reported:

Hello everybody!

I have 2 controllers and 2 compute nodes. I disabled all the services on the 
first controller and compute node (nova service-disable), then I upgraded those 
services to newton, executed `db sync` and started all the services on both 
nodes.
After starting the FIRST controller and compute node, I noticed that nova-api 
on my SECOND controller was down and the services don't work anymore.

nova-api log on the second controller:
ServiceTooOld: This service is older (v9) than the minimum (v15) version of the 
rest of the deployment. Unable to continue.

And now I have a question:

How did the upgrade of the first half of my cluster affect the second
half of my cluster?

(my goal: upgrade from mitaka to newton with minimal downtime; because
I have 2 controllers, I want to upgrade the first one and only then the
second one)

both server:
3.10.0-327.36.3.el7.x86_64

The first one (already newton):
openstack-nova-console-14.0.0-1.el7.noarch
openstack-nova-api-14.0.0-1.el7.noarch
openstack-nova-scheduler-14.0.0-1.el7.noarch
openstack-nova-common-14.0.0-1.el7.noarch
openstack-nova-novncproxy-14.0.0-1.el7.noarch
openstack-nova-conductor-14.0.0-1.el7.noarch

The second one (still mitaka):
openstack-nova-common-13.1.2-1.el7.noarch
openstack-nova-scheduler-13.1.2-1.el7.noarch
openstack-nova-conductor-13.1.2-1.el7.noarch
openstack-nova-api-13.1.2-1.el7.noarch
openstack-nova-console-13.1.2-1.el7.noarch
openstack-nova-novncproxy-13.1.2-1.el7.noarch

Thanks!

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Hello everybody!
  
- I have 2 controllers and 2 compute nodes. I disabled all the services on the 
first controller and compute (nova service-disable), then I upgraded those 
services to newton, executed `db sync` and start all the services on both 
nodes. 
+ I have 2 controllers and 2 compute nodes. I disabled all the services on the 
first controller and compute (nova service-disable), then I upgraded those 
services to newton, executed `db sync` and started all the services on both 
nodes.
  After start the FIRST controller and compute node, I noticed, that nova-api 
on my SECOND controller is down and the services don't work anymore.
  
  nova-api log on the second controller:
  ServiceTooOld: This service is older (v9) than the minimum (v15) version of 
the rest of the deployment. Unable to continue.
  
  And now i have a question:
  
  How that upgrading process of the first half of my cluster affected on
  the second half of my cluster?
  
  (my goals: upgrade from mitaka to newton with minimal downtime, because
  I have 2 controllers, I want to upgrade the first one and only then the
  second one.)
  
  both server:
  3.10.0-327.36.3.el7.x86_64
  
  The first one (already newton):
  openstack-nova-console-14.0.0-1.el7.noarch
  openstack-nova-api-14.0.0-1.el7.noarch
  openstack-nova-scheduler-14.0.0-1.el7.noarch
  openstack-nova-common-14.0.0-1.el7.noarch
  openstack-nova-novncproxy-14.0.0-1.el7.noarch
  openstack-nova-conductor-14.0.0-1.el7.noarch
  
  The second one (still mitaka):
  openstack-nova-common-13.1.2-1.el7.noarch
  openstack-nova-scheduler-13.1.2-1.el7.noarch
  openstack-nova-conductor-13.1.2-1.el7.noarch
  openstack-nova-api-13.1.2-1.el7.noarch
  openstack-nova-console-13.1.2-1.el7.noarch
  openstack-nova-novncproxy-13.1.2-1.el7.noarch
  
- 
  Thanks!

** Description changed:

  Hello everybody!
  
  I have 2 controllers and 2 compute nodes. I disabled all the services on the 
first controller and compute (nova service-disable), then I upgraded those 
services to newton, executed `db sync` and started all the services on both 
nodes.
- After start the FIRST controller and compute node, I noticed, that nova-api 
on my SECOND controller is down and the services don't work anymore.
+ After start the FIRST controller and compute nodes, I noticed, that nova-api 
on my SECOND controller is down and the services don't work anymore.
  
  nova-api log on the second controller:
  ServiceTooOld: This service is older (v9) than the minimum (v15) version of 
the rest of the deployment. Unable to continue.
  
  And now i have a question:
  
  How that upgrading process of the first half of my cluster affected on
  the second half of my cluster?
  
  (my goals: upgrade from mitaka to newton with minimal downtime, because
  I have 2 controllers, I want to upgrade the first one and only then the
  second one.)
  
  both server:
  3.10.0-327.36.3.el7.x86_64
  
  The first one (already newton):
  openstack-nova-console-14.0.0-1.el7.noarch
  openstack-nova-api-14.0.0-1.el7.noarch
  openstack-nova-scheduler-14.0.0-1.el7.noarch
  openstack-nova-common-14.0.0-1.el7.noarch
  openstack-nova-novncproxy-14.0.0-1.el7.noarch
  openstack-nova-conductor-14.0.0-1.el7.noarch
  
  The second one (still mitaka):
  openstack-nova-common-13.1.2-1.el7.noarch
  openstack-nova-scheduler-13.1.2-1.el7.noarch
  openstack-nova-conductor-13.1.2-1.el7.n

[Yahoo-eng-team] [Bug 1649245] [NEW] Identity Liberty version does not return 'description' if not passed while creating a domain

2016-12-12 Thread Ghanshyam Mann
Public bug reported:

While creating a domain, we can pass an optional description field, and
the same will be returned in the response. If it is not passed, an empty
string is returned.

But the behaviour is different in the Liberty version: if description is
not passed when creating a domain, it is not returned in the response.

I think the issue is here -
https://github.com/openstack/keystone/blob/stable/liberty/keystone/resource/backends/sql.py#L235-L242

There, the Domain table does not have a description field.

Higher versions use the Project table, which has a description column, and
at least return an empty description.
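
An illustrative comparison of the two response shapes (payloads sketched
from this report, not captured from a live cloud):

    liberty = {'domain': {'id': 'd1', 'name': 'demo', 'enabled': True}}
    mitaka_and_later = {'domain': {'id': 'd1', 'name': 'demo',
                                   'enabled': True, 'description': ''}}

    assert 'description' not in liberty['domain']           # field omitted
    assert mitaka_and_later['domain']['description'] == ''  # empty string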

Failure can be seen here: http://logs.openstack.org/79/349379/8/check/gate-tempest-dsvm-full-ubuntu-trusty-liberty/fbc5b76/logs/testr_results.html.gz

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649245

Title:
  Identity Liberty version does not return 'description' if not passed
  while creating a domain

Status in OpenStack Identity (keystone):
  New

Bug description:
  While creating a domain, we can pass an optional description field, and
  the same will be returned in the response. If it is not passed, an
  empty string is returned.

  But the behaviour is different in the Liberty version: if description
  is not passed when creating a domain, it is not returned in the
  response.

  I think the issue is here -
  
https://github.com/openstack/keystone/blob/stable/liberty/keystone/resource/backends/sql.py#L235-L242

  There, the Domain table does not have a description field.

  Higher versions use the Project table, which has a description column,
  and at least return an empty description.

  Failure can be seen here: http://logs.openstack.org/79/349379/8/check/gate-tempest-dsvm-full-ubuntu-trusty-liberty/fbc5b76/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1649245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649234] [NEW] neutron-server not available after a neutron-server start via systemd

2016-12-12 Thread Thomas Bechtold
Public bug reported:

When starting neutron-server via systemd, "systemctl start openstack-neutron" 
returns before the server is actually ready.
So when using the server directly afterwards (e.g. via "neutron net-list") you 
get:

Unable to establish connection to
http://192.168.122.96:9696/v2.0/networks.json


This could be avoided if systemd's SD_NOTIFY interface were used when starting 
the server, together with a matching service type in the .service file. 
oslo.service already has support for SD_NOTIFY, and nova is using it for some 
of its services.
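
A minimal sketch of that approach, assuming a hypothetical make_server() setup
helper (the unit file would also need Type=notify):

    from oslo_service import systemd

    def main():
        server = make_server()  # hypothetical: build the WSGI/RPC server
        server.start()          # socket is bound and accepting from here
        # Send READY=1 over $NOTIFY_SOCKET so "systemctl start
        # openstack-neutron" only returns once the API is usable.
        systemd.notify_once()
        server.wait()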

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649234

Title:
  neutron-server not available after a neutron-server start via systemd

Status in neutron:
  New

Bug description:
  When starting neutron-server via systemd, "systemctl start openstack-neutron" 
returns before the server is actually ready.
  So when using the server directly afterwards (e.g. via "neutron net-list") 
you get:

  Unable to establish connection to
  http://192.168.122.96:9696/v2.0/networks.json

  
  This could be avoided if systemd's SD_NOTIFY interface were used when 
starting the server, together with a matching service type in the .service 
file.
  oslo.service already has support for SD_NOTIFY, and nova is using it for some 
of its services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649232] [NEW] Broken OVA Import on VMwareVCDriver

2016-12-12 Thread Fabian Wiesel
Public bug reported:

If a user places an OVA in Glance that has its own storage independent of
the VCenter, the import fails in nova because the file is not found in the
vmware_temp directory.


I am using mitaka on a VCenter 6.0, but looking at the code, I would say it 
also affects liberty and later versions whenever Glance isn't storing the 
images directly in the VCenter (e.g. a Swift store).

I created an OVA image in Glance (Swift backed), and then started a VM with the 
image.
The import fails with a missing file.

Essentially, the code is broken here:
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L620-L629

`image_prepare` creates some temporary directory, which is never used in
`image_fetch` aka `_fetch_image_as_ova`, and `image_cache` expects the
imported image there.

However, the function `_fetch_image_as_ova` imports the OVA as a VM,
which places the root disk in a folder named after the imported image.


Attached is a patch, which makes the `_fetch_image_as_ova` function move the 
image to the cache directory, and changes the `image_prepare` and `image_cache` 
functions to no-ops.
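
A hedged sketch of the patch's shape (all names are assumed, not the actual 
nova.virt.vmwareapi API): fetch the import result straight into the cache 
location so the prepare/cache steps have nothing left to do:

    def fetch_image_as_ova(session, imported_vm, cache_image_path):
        # The OVA import materializes the root disk under a folder named
        # after the imported image; move it where the image cache expects.
        session.move_datastore_file(imported_vm.root_disk_path,
                                    cache_image_path)
        imported_vm.unregister()

    def image_prepare(*args, **kwargs):
        pass  # no vmware_temp directory is needed any more

    def image_cache(*args, **kwargs):
        pass  # the fetched file is already at the cache location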

** Affects: nova
 Importance: Undecided
 Status: New

** Patch added: "0001-Fix-import-of-OVAs-with-VMwareVCDriver.patch"
   
https://bugs.launchpad.net/bugs/1649232/+attachment/4790617/+files/0001-Fix-import-of-OVAs-with-VMwareVCDriver.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649232

Title:
  Broken OVA Import on VMwareVCDriver

Status in OpenStack Compute (nova):
  New

Bug description:
  If a user places an OVA in Glance that has its own storage independent of
  the VCenter, the import fails in nova because the file is not found in
  the vmware_temp directory.

  
  I am using mitaka on a VCenter 6.0, but looking at the code, I would say it 
also affects liberty and later versions whenever Glance isn't storing the 
images directly in the VCenter (e.g. a Swift store).

  I created an OVA image in Glance (Swift backed), and then started a VM with 
the image.
  The import fails with a missing file.

  Essentially, the code is broken here:
  
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L620-L629

  `image_prepare` creates some temporary directory, which is never used
  in `image_fetch` aka `_fetch_image_as_ova`, and `image_cache` expects
  the imported image there.

  However, the function `_fetch_image_as_ova` imports the OVA as a VM,
  which places the root disk in a folder named after the imported image.

  
  Attached is a patch, which makes the `_fetch_image_as_ova` function move the 
image to the cache directory, and changes the `image_prepare` and `image_cache` 
functions to no-ops.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp