[Yahoo-eng-team] [Bug 1585893] [NEW] Launching an instance gets libvirtError for unsupported IDE bus with QEMU on AArch64

2016-05-25 Thread Kevin Zhao
Public bug reported:

Description
===========
After setting up the nova development environment with devstack on an
aarch64 machine, upload the image with glance, then launch an instance
with nova. Launching fails with the error "libvirtError: unsupported
configuration: IDE controllers are unsupported for this QEMU binary or
machine type".
 
Steps to reproduce
==================
1. Deploy OpenStack with devstack, using the default local.conf.

2. Upload the aarch64 image with glance.
$ source ~/devstack/openrc admin admin
$ glance image-create --name image-arm64.img --disk-format qcow2 
--container-format bare --visibility public --file 
images/image-arm64-wily.qcow2 --progress
$ glance image-create --name image-arm64.vmlinuz --disk-format aki 
--container-format aki --visibility public --file 
images/image-arm64-wily.vmlinuz --progress
$ glance image-create --name image-arm64.initrd --disk-format ari 
--container-format ari --visibility public --file 
images/image-arm64-wily.initrd --progress
$ IMAGE_UUID=$(glance image-list | grep image-arm64.img | awk '{ print $2 }')
$ IMAGE_KERNEL_UUID=$(glance image-list | grep image-arm64.vmlinuz | awk '{ 
print $2 }')
$ IMAGE_INITRD_UUID=$(glance image-list | grep image-arm64.initrd | awk '{ 
print $2 }')
$ glance image-update --kernel-id ${IMAGE_KERNEL_UUID} --ramdisk-id 
${IMAGE_INITRD_UUID} ${IMAGE_UUID}

3. Add a keypair with nova:
$ nova keypair-add default --pub-key ~/.ssh/id_rsa.pub

4. Launch the instance:
$ image=$(nova image-list | egrep "image-arm64.img"'[^-]' | awk '{ print $2 }')
$ nova boot --flavor m1.medium --image ${image} --key-name default test-arm64

5. Run "screen -x" and select the n-cpu session to see the output.
The error below will appear.

Expected result
===============
After spawning the instance, run:
$ nova list
The instance should be listed as active.

Actual result
=============
Got the error: 
libvirtError: unsupported configuration: IDE controllers are unsupported for 
this QEMU binary or machine type

The detailed traceback is:
 ERROR nova.compute.manager [req-75325207-6c1b-481d-b188-a66c0a64eb89 admin 
admin] [instance: 188aa5bc-173c-46ec-b872-6bacb512911e] Instance failed to spawn
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e] 
Traceback (most recent call last):
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/opt/stack/nova/nova/compute/manager.py", line 2041, in _build_resources
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 yield resources
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/opt/stack/nova/nova/compute/manager.py", line 1887, in 
_build_and_run_instance
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 block_device_info=block_device_info)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2569, in spawn
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 block_device_info=block_device_info)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4713, in 
_create_domain_and_network
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 xml, pause=pause, power_on=power_on)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4644, in _create_domain
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 guest.launch(pause=pause)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 142, in launch
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 self._encoded_xml, errors='ignore')
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, 
in __exit__
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 self.force_reraise()
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, 
in force_reraise
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 six.reraise(self.type_, self.value, self.tb)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 137, in launch
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 return self._domain.createWithFlags(flags)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in 
doit
 TRACE nova.compute.manager [instance: 
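
A possible workaround (an assumption, not something verified in this
report) is to steer nova's libvirt driver away from the IDE bus with the
standard hw_disk_bus/hw_cdrom_bus glance image properties:

$ # hypothetical workaround: request a virtio disk bus for the image
$ glance image-update --property hw_disk_bus=virtio \
  --property hw_cdrom_bus=virtio ${IMAGE_UUID}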

[Yahoo-eng-team] [Bug 1585890] [NEW] No check whether the member address is in the member subnet

2016-05-25 Thread dongjuan
Public bug reported:

The issue is in the kilo branch.

The member subnet CIDR is 20.0.0.0/24, but the member address is
30.0.0.11, and the member was nevertheless created successfully.

[root@opencos2 v2(keystone_admin)]# neutron subnet-show 
502be3ac-f8d8-43b3-af5b-f0feada72aed
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "20.0.0.2", "end": "20.0.0.254"}   |
| cidr              | 20.0.0.0/24                                  |
| dns_nameservers   |                                              |
| enable_dhcp       | True                                         |
| gateway_ip        | 20.0.0.1                                     |
| host_routes       |                                              |
| id                | 502be3ac-f8d8-43b3-af5b-f0feada72aed         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              |                                              |
| network_id        | 2e424980-14f0-4405-92dc-e4c57c32235a         |
| subnetpool_id     |                                              |
| tenant_id         | be58eaec789d44f296a65f96b944a9f5             |
+-------------------+----------------------------------------------+
[root@opencos2 v2(keystone_admin)]# neutron lbaas-member-create pool101 
--subnet 502be3ac-f8d8-43b3-af5b-f0feada72aed --address 30.0.0.11 
--protocol-port 80
Created a new member:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| address        | 30.0.0.11                            |
| admin_state_up | True                                 |
| id             | 1dcc-2f00-4fd7-9a68-6031a96a172b     |
| protocol_port  | 80                                   |
| subnet_id      | 502be3ac-f8d8-43b3-af5b-f0feada72aed |
| tenant_id      | be58eaec789d44f296a65f96b944a9f5     |
| weight         | 1                                    |
+----------------+--------------------------------------+
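
The missing validation is straightforward; a minimal sketch (not the actual
neutron-lbaas code) of the kind of check that should reject this request,
using the netaddr library that neutron already depends on:

import netaddr

def validate_member_subnet(address, subnet_cidr):
    # e.g. address="30.0.0.11", subnet_cidr="20.0.0.0/24" -> raises
    if netaddr.IPAddress(address) not in netaddr.IPNetwork(subnet_cidr):
        raise ValueError("member address %s is not in subnet %s"
                         % (address, subnet_cidr))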

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585890

Title:
  No check whether the member address is in the member subnet

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net

[Yahoo-eng-team] [Bug 1570259] Re: Pecan: 'fields' query parameter not handled anywhere

2016-05-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/305707
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=80426cf6201374588f0118365992905eddce268e
Submitter: Jenkins
Branch: master

commit 80426cf6201374588f0118365992905eddce268e
Author: salvatore 
Date:   Thu Apr 14 11:34:04 2016 +0200

Pecan: tell the plugin about field selection

Neutron plugins are able to do field selection on responses (for
instance, only returning id & name for a resource). However,
Pecan does not leverage this capability and always tells the
plugin to fetch all fields, doing field selection while
processing the response.

This patch ensures that Pecan sends the field list down to the
plugin.

As a part of this patch TestRequestProcessing has been updated
to inherit from TestRootController rather than
TestResourceController. Inheriting from the latter was causing
tests to be executed twice for no reason, beyond using
TestResourceController's 'port' attribute, which was however
unnecessary, as this change proves.

Closes-Bug: #1570259

Change-Id: Iac930cd3bb14dfdda78e6a94d2c8bef2b5c4b9a5
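
For illustration, field selection means a GET like the following (attribute
names are examples only) should let the plugin itself fetch just the
requested columns rather than whole objects:

GET /v2.0/networks?fields=id&fields=name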


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570259

Title:
  Pecan: 'fields' query parameter not handled anywhere

Status in neutron:
  Fix Released

Bug description:
  The pecan framework currently does not handle the 'fields' query
  parameter properly: when specified, it is not sent down to the plugin.
  Instead, field selection happens while processing the response.

  This is not entirely wrong, but since plugins have the capability of
  doing field selection, they should be allowed to use it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570259/+subscriptions



[Yahoo-eng-team] [Bug 1585859] [NEW] SRIOV port can't get the IP which is allocated by dhcp agent on the same node

2016-05-25 Thread dongwenshuai
Public bug reported:

I create a VM with an SR-IOV port. Nova services and neutron services are
on the same physical node. I find that the NIC in the VM can't
automatically get an IP address. The neutron dhcp-agent service is active,
and I can see the network namespace that the SR-IOV port belongs to with
the command "ip netns".

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585859

Title:
  SRIOV port can't get the IP which is allocated by dhcp agent on the
  same node

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585859/+subscriptions



[Yahoo-eng-team] [Bug 1585860] [NEW] SRIOV port can't get the IP which is allocated by dhcp agent on the same node

2016-05-25 Thread dongwenshuai
Public bug reported:

I create a VM with a sriov port. Nova services and neutron services are
on the same physical node. I find that the nic in VM can't auto get IP
address. The neutron dhcp-agent service is active and I can see the
network namespace which the sriov port belongs to by command "ip netns"

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  I create a VM with a sriov port. Nova services and neutron services are
  on the same physical node. I find that the nic in VM can't auto get IP
  address. The neutron dhcp-agent service is active and I can see the
- network namespace which the siov port belongs to by command "ip netns"
+ network namespace which the sriov port belongs to by command "ip netns"

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585860

Title:
  SRIOV port can't get the IP which is allocated by dhcp agent on the
  same node

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585860/+subscriptions



[Yahoo-eng-team] [Bug 1584737] Re: Incorrect objects comparison in unit test

2016-05-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/320220
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b1973fc300b2cb2d476cd9b63184368e44a8fba9
Submitter: Jenkins
Branch: master

commit b1973fc300b2cb2d476cd9b63184368e44a8fba9
Author: Takashi NATSUME 
Date:   Tue May 24 13:52:21 2016 +0900

Add length check in comparing object lists

Add length check when comparing a test result object list
and an expected object list
in nova/tests/unit/compute/test_host_api.py

Change-Id: I27c094d84a9ec17250d3e8046b0138080d404e3a
Closes-Bug: #1584737


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1584737

Title:
  Incorrect objects comparison in unit test

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In nova/tests/unit/compute/test_host_api.py (commit 
bb50389bb6dcf891ae1f1ec7bd037efc462ce517),
  there is the '_compare_objs' method for comparing test result objects and 
expected objects.

  ---
  class ComputeHostAPITestCase(test.TestCase):
  (snipped...)
  def _compare_obj(self, obj, db_obj):
  test_objects.compare_obj(self, obj, db_obj,
   allow_missing=test_service.OPTIONAL)

  def _compare_objs(self, obj_list, db_obj_list):
  for index, obj in enumerate(obj_list):
  self._compare_obj(obj, db_obj_list[index])
  ---

  In the '_compare_objs' method, the body of the 'for' statement is never
  executed if obj_list (the test result) is an empty list ([]). In that
  case, a difference between the test result objects and the expected
  objects can be overlooked. It is a potential bug, so it should be fixed.
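
  A minimal sketch of the kind of length check the fix adds (illustrative
  only; the actual patch is in the change linked above):

  ---
  def _compare_objs(self, obj_list, db_obj_list):
      # fail fast when one list is empty or shorter than the other
      self.assertEqual(len(db_obj_list), len(obj_list))
      for index, obj in enumerate(obj_list):
          self._compare_obj(obj, db_obj_list[index])
  ---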

  * This bug was found in the following patch.

  
https://review.openstack.org/#/c/308213/1/nova/tests/unit/compute/test_host_api.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1584737/+subscriptions



[Yahoo-eng-team] [Bug 1585831] [NEW] Horizon dashboard leaks internal information through cookies

2016-05-25 Thread Dave McCowan
Public bug reported:

When horizon is configured where:
1) internalURL and publicURL are on different networks
2) horizon uses the internalURL endpoint for authentication

The cookie "login_region" will be set to the value configured as
OPENSTACK_KEYSTONE_URL.

This URL contains the IP address of the internalURL of keystone.

In the case of a deployment where the internal network is different from
the public network, the IP address of the internal network is considered
sensitive information.  By putting the OPENSTACK_KEYSTONE_URL in the
cookie that is sent to the public network, horizon leaks the values of
the internal network IP addresses.
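
A hypothetical illustration of the leak (addresses invented): a browser on
the public network would receive a response header along the lines of

Set-Cookie: login_region="http://192.0.2.10:5000/v2.0"

exposing the internal keystone endpoint.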

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: ossn
 Importance: Undecided
 Status: New

** Also affects: ossn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1585831

Title:
  Horizon dashboard leaks internal information through cookies

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Security Notes:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1585831/+subscriptions



[Yahoo-eng-team] [Bug 1585826] [NEW] nova hypervisor-show/stats reports incorrect values

2016-05-25 Thread Ben Nemec
Public bug reported:

When looking at hypervisor resources through either nova hypervisor-
stats or nova hypervisor-show with compute nodes of differing sizes, I
am getting incorrect/inconsistent values back for one of the
hypervisors.  For example, in an environment with one 32 GB compute node
and one 16 GB node, I see the following when running nova hypervisor-
show multiple times on the 32 GB node:

$ nova hypervisor-show 1 | grep memory_mb
| memory_mb | 15934|
| memory_mb_used| 20992|
$ nova hypervisor-show 1 | grep memory_mb
| memory_mb | 31906|
| memory_mb_used| 20992|

hypervisor-stats shows similar incorrect behavior:

$ nova hypervisor-stats | grep memory_mb
| memory_mb| 31868 |
| memory_mb_used   | 34304 |
$ nova hypervisor-stats | grep memory_mb
| memory_mb| 63812 |
| memory_mb_used   | 34304 |

From what I can tell, the same stats are being returned for both
hypervisors, but which node's stats are being used randomly changes.

This particular environment is a two node devstack setup built today,
but I've seen similar behavior in a three compute TripleO deployment
using recent builds of Nova for at least a couple of weeks.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585826

Title:
  nova hypervisor-show/stats reports incorrect values

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585826/+subscriptions



[Yahoo-eng-team] [Bug 1585816] [NEW] qos-bandwidth-limit-rule-create failed with internal server error

2016-05-25 Thread wuwoo
Public bug reported:

When using the following command to create a bandwidth rule:
# neutron qos-bandwidth-limit-rule-create --max-kbps 1000 --max-burst-kbps 100 
test-policy

the following error is returned:
Request Failed: internal server error while processing your request.

In /var/log/neutron/server.log, the error message contains:
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters 
[req-ecefbd10-e988-43e1-a556-0f7b8a2b58a7 2eaf7ddac8b94a94ab40fad216341232 
e91adc92dfea433f9432857edb8af8cb - - -] DBAPIError exception wrapped from 
(_mysql_exceptions.ProgrammingError) (1064, "You have an error in your SQL 
syntax; check the manual that corresponds to your MySQL server version for the 
right syntax to use near ')' at line 3") [SQL: u'SELECT qos_policies.tenant_id 
AS qos_policies_tenant_id, qos_policies.id AS qos_policies_id, 
qos_policies.name AS qos_policies_name, qos_policies.description AS 
qos_policies_description, qos_policies.shared AS qos_policies_shared \nFROM 
qos_policies \nWHERE qos_policies.name = %s'] [parameters: ([u'test-policy'],)]
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in 
_execute_context
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters context)
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in 
do_execute
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters 
ProgrammingError: (1064, "You have an error in your SQL syntax; check the 
manual that corresponds to your MySQL server version for the right syntax to 
use near ')' at line 3")
2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource 
[req-ecefbd10-e988-43e1-a556-0f7b8a2b58a7 2eaf7ddac8b94a94ab40fad216341232 
e91adc92dfea433f9432857edb8af8cb - - -] index failed
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in 
resource
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 340, in index
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource return 
self._items(request, True, parent_id)
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 267, in _items
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource obj_list = 
obj_getter(request.context, **kwargs)
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py", line 
49, in inner_filter
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource result = 
f(*args, **kwargs)
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py", line 
35, in inner
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource result = 
f(*args, **kwargs)
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/services/qos/qos_plugin.py", line 84, 
in get_policies
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource return 
policy_object.QosPolicy.get_objects(context, **filters)
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/objects/qos/policy.py", line 108, in 
get_objects
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource **kwargs)
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/objects/base.py", line 122, in 
get_objects
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource db_objs = 
db_api.get_objects(context, cls.db_model, **kwargs)
2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/ap

[Yahoo-eng-team] [Bug 1580440] Re: neutron purge - executing command on non existing tenant print wrong command

2016-05-25 Thread Assaf Muller
@John - Done. Thank you!

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580440

Title:
  neutron purge - executing command on non existing tenant print wrong
  command

Status in neutron:
  Invalid
Status in openstack-manuals:
  In Progress

Bug description:
  I executed the "neutron purge" command with a non-existing tenant ID and
  received the following:

  neutron purge 25a1c11e26354d7dbb5b204eb1310f33
  Purging resources: 100% complete.
  The following resources could not be deleted: 1 network

  
  We do not have that tenant ID, so the message should be:

  There is no tenant with the "SPECIFIED ID" id found.


  python-neutron-8.0.0-1.el7ost.noarch
  openstack-neutron-8.0.0-1.el7ost.noarch
  python-neutron-lib-0.0.2-1.el7ost.noarch
  openstack-neutron-metering-agent-8.0.0-1.el7ost.noarch
  openstack-neutron-ml2-8.0.0-1.el7ost.noarch
  openstack-neutron-openvswitch-8.0.0-1.el7ost.noarch
  python-neutronclient-4.1.1-2.el7ost.noarch
  openstack-neutron-common-8.0.0-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580440/+subscriptions



[Yahoo-eng-team] [Bug 1576000] Re: Deprecate advertise_mtu option

2016-05-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/310448
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4955746bbff317d5976a1130d695b928561afc00
Submitter: Jenkins
Branch: master

commit 4955746bbff317d5976a1130d695b928561afc00
Author: Ihar Hrachyshka 
Date:   Wed Apr 27 07:43:56 2016 -0500

Deprecate advertise_mtu option

Now that we advertise MTU via DHCP and RA by default, there is no reason
to keep the option available for configuration. Other agents/plugins are
also encouraged to advertise MTU values to instances by their own means.

DocImpact: mark the advertise_mtu option as deprecated as of Newton.

Closes-Bug: 1576000
Change-Id: Ibf7d60dfc57bec090f16d909c050c09e7cfd9352
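
For reference, a minimal sketch of the now-deprecated setting deployers can
drop from neutron.conf (MTU is already advertised by default):

[DEFAULT]
# Deprecated as of Newton; MTU is advertised via DHCP and RA by default.
# advertise_mtu = true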


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1576000

Title:
  Deprecate advertise_mtu option

Status in neutron:
  Fix Released


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1576000/+subscriptions



[Yahoo-eng-team] [Bug 1585789] [NEW] Deprecate advertise_mtu option

2016-05-25 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/310448
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 4955746bbff317d5976a1130d695b928561afc00
Author: Ihar Hrachyshka 
Date:   Wed Apr 27 07:43:56 2016 -0500

Deprecate advertise_mtu option

Now that we advertise MTU via DHCP and RA by default, there is no reason
to keep the option available for configuration. Other agents/plugins are
also encouraged to advertise MTU values to instances by their own means.

DocImpact: mark the advertise_mtu option as deprecated as of Newton.

Closes-Bug: 1576000
Change-Id: Ibf7d60dfc57bec090f16d909c050c09e7cfd9352

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585789

Title:
  Deprecate advertise_mtu option

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585789/+subscriptions



[Yahoo-eng-team] [Bug 1585770] [NEW] [RFE] DVR-aware fixed IP announcements for with BGP

2016-05-25 Thread Ryan Tidwell
Public bug reported:

Enable BGP to announce the next-hop for fixed IP host routes when using
DVR. The next-hop when using DVR is the IP address of the FIP agent
gateway. This would allow an operator to toggle whether to enable
announcement of host routes for each fixed IP, or just rely on the prefix
announcement for the subnet that sends traffic through the central
router.

Depends on https://bugs.launchpad.net/neutron/+bug/1557290. Fast-exit
DVR not required, but would be a nice companion feature.

** Affects: neutron
 Importance: Wishlist
 Status: Triaged


** Tags: l3-bgp rfe

** Summary changed:

- DVR-aware fixed IP announcements for with BGP
+ DVR-aware fixed IP announcements with BGP

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585770

Title:
  [RFE] DVR-aware fixed IP announcements for with BGP

Status in neutron:
  Triaged


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585770/+subscriptions



[Yahoo-eng-team] [Bug 1585761] [NEW] failed to boot instance from image without cinder volume

2016-05-25 Thread sean mooney
Public bug reported:

Using master horizon it is currently not possible
to boot an instance from a glance image without
allocating a cinder volume.

This prevents horizon from booting an instance if cinder is not deployed.

To reproduce: on the launch instance screen, go to the source tab
and select image (the default) in the select boot source drop-down.

Below this item, a volume size element and a yes/no radio button are
displayed, asking whether the volume should be deleted on instance delete:

http://picpaste.com/horizon-IsnSMR2S.PNG

A volume is required, and the minimum size is 1 GB.

If cinder is not deployed, this results in a failure to boot with the
message "aborted: block device mapping is invalid":

http://picpaste.com/horizon-error-om6qqHST.PNG

Opening the developer console, the only message that appears is the warning
"JQMIGRATE: jQuery.fn.attr('selected') may use property instead of attribute",
from this section of (minified) code:

jQuery.migrateReset = function() {
    warnedAbout = {};
    jQuery.migrateWarnings.length = 0;
};
function migrateWarn(msg) {
    var console = window.console;
    if (!warnedAbout[msg]) {
        warnedAbout[msg] = true;
        jQuery.migrateWarnings.push(msg);
        if (console && console.warn && !jQuery.migrateMute) {
            console.warn("JQMIGRATE: " + msg);
            if (jQuery.migrateTrace && console.trace) {
                console.trace();

Expected behavior:
allow selection of "boot instance from image (create new volume)" in the
drop-down, or add a checkbox to control creation of the cinder volume.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs

** Summary changed:

- failded to boot instance from image without cinder volume
+ failed to boot instance from image without cinder volume

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1585761

Title:
  failed to boot instance from image without cinder volume

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1585761/+subscriptions



[Yahoo-eng-team] [Bug 1585738] [NEW] ML2 doesn't return fixed_ips on a port update with binding

2016-05-25 Thread Carl Baldwin
Public bug reported:

I found this yesterday while working on deferred IP allocation for
routed networks.  However, it isn't unique to deferred port binding.
With my deferred IP allocation patch [2], I need to be able to make a
port create call [1] without binding information that doesn't allocate
an IP address.  Then, I need to follow it up with a port update which
sends host binding information and allocates an IP address.  But, when I
do that, the response doesn't contain the IP addresses that were
allocated [3].  However, immediately following it with a GET on the same
port shows the allocation [4].

This doesn't happen in other plugins besides ML2.  Only with ML2.  I've
put up a patch to run unit tests with ML2 that expose this problem [5].
The problem can be reproduced on master [6].  I can get it to happen by
creating a network without a subnet, creating a port on the network
(with no IP address), and then calling port update to allocate an IP
address.
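
A rough CLI sketch of that reproduction (the client flags here are an
assumption, written from memory):

$ neutron net-create net1
$ neutron port-create net1              # no subnet yet -> no IP allocated
$ neutron subnet-create net1 10.0.0.0/24
$ neutron port-update <port-id> --fixed-ip subnet_id=<subnet-id>
                                        # response is missing fixed_ips ...
$ neutron port-show <port-id>           # ... but a GET shows the allocation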

If this goes unaddressed, Nova will have to make a GET call after doing
a port update with binding information when working with a port with
deferred IP allocation.

[1] http://paste.openstack.org/show/505419/
[2] https://review.openstack.org/#/c/320631/
[3] http://paste.openstack.org/show/505420/
[4] http://paste.openstack.org/show/505421/
[5] 
http://logs.openstack.org/57/320657/2/check/gate-neutron-python27/153a619/testr_results.html.gz
[6] https://review.openstack.org/321152

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585738

Title:
  ML2 doesn't return fixed_ips on a port update with binding

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585738/+subscriptions



[Yahoo-eng-team] [Bug 1585706] [NEW] neutron-lbaas test failure webob.exc.HTTPClientError: Unexpected error code: 400

2016-05-25 Thread Corey Bryant
Public bug reported:

I have several tests failing similar to the following ever since commit
b0b6a0aa8566bae552f6a7607cf254ee0cbc76ae.  If I revert that commit (e.g.
s/n_constants.ATTR_NOT_SPECIFIED/attributes.ATTR_NOT_SPECIFIED) then
this error goes away.

======================================================================
FAIL: 
neutron_lbaas.tests.unit.test_agent_scheduler.LBaaSAgentSchedulerTestCase.test_schedule_loadbalancer_with_down_agent
neutron_lbaas.tests.unit.test_agent_scheduler.LBaaSAgentSchedulerTestCase.test_schedule_loadbalancer_with_down_agent
----------------------------------------------------------------------

Traceback (most recent call last):
  File 
"/build/neutron-lbaas-8.1.1~dev34/neutron_lbaas/tests/unit/test_agent_scheduler.py",
 line 199, in test_schedule_loadbalancer_with_down_agent
with self.loadbalancer() as loadbalancer:
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
  File 
"/build/neutron-lbaas-8.1.1~dev34/neutron_lbaas/tests/unit/db/loadbalancer/test_db_loadbalancerv2.py",
 line 257, in loadbalancer
res.status_int
webob.exc.HTTPClientError: Unexpected error code: 400

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- webob.exc.HTTPClientError: Unexpected error code: 400
+ neutron-lbaas test failure webob.exc.HTTPClientError: Unexpected error code: 
400

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: neutron-lbaas-dashboard

** Description changed:

  I have several tests failing similar to the following ever since commit
  b0b6a0aa8566bae552f6a7607cf254ee0cbc76ae.  If I revert that commit (e.g.
- s/attributes.ATTR_NOT_SPECIFIED/attributes.ATTR_NOT_SPECIFIED) then this
- error goes away.
+ s/n_constants.ATTR_NOT_SPECIFIED/attributes.ATTR_NOT_SPECIFIED) then
+ this error goes away.
  
  ==
  FAIL: 
neutron_lbaas.tests.unit.test_agent_scheduler.LBaaSAgentSchedulerTestCase.test_schedule_loadbalancer_with_down_agent
  
neutron_lbaas.tests.unit.test_agent_scheduler.LBaaSAgentSchedulerTestCase.test_schedule_loadbalancer_with_down_agent
  --
  
  Traceback (most recent call last):
-   File 
"/build/neutron-lbaas-8.1.1~dev34/neutron_lbaas/tests/unit/test_agent_scheduler.py",
 line 199, in test_schedule_loadbalancer_with_down_agent
- with self.loadbalancer() as loadbalancer:
-   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
- return self.gen.next()
-   File 
"/build/neutron-lbaas-8.1.1~dev34/neutron_lbaas/tests/unit/db/loadbalancer/test_db_loadbalancerv2.py",
 line 257, in loadbalancer
- res.status_int
+   File 
"/build/neutron-lbaas-8.1.1~dev34/neutron_lbaas/tests/unit/test_agent_scheduler.py",
 line 199, in test_schedule_loadbalancer_with_down_agent
+ with self.loadbalancer() as loadbalancer:
+   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
+ return self.gen.next()
+   File 
"/build/neutron-lbaas-8.1.1~dev34/neutron_lbaas/tests/unit/db/loadbalancer/test_db_loadbalancerv2.py",
 line 257, in loadbalancer
+ res.status_int
  webob.exc.HTTPClientError: Unexpected error code: 400

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585706

Title:
  neutron-lbaas test failure webob.exc.HTTPClientError: Unexpected error
  code: 400

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585706/+subscriptions


[Yahoo-eng-team] [Bug 1585699] [NEW] Neutron Metadata Agent Configuration - nova_metadata_ip

2016-05-25 Thread Ross Martyn
Public bug reported:

I am not sure if this constitutes the tag 'bug'. However, it has led us
to some confusion and I feel it should be updated.

This option in neutron metadata configuration (and install docs) is
misleading.

{{{
# IP address used by Nova metadata server. (string value)
#nova_metadata_ip = 127.0.0.1
}}}

It implies the need to present an IP address for the nova metadata API,
whereas in fact it can be a hostname or an IP address.

When using TLS-encrypted sessions, this 'has' to be a hostname, or else
it ends in an SSL error, as the hostname is embedded in the
certificates.
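
For example (hostname invented; nova_metadata_protocol is the companion
option in the same file), a TLS-friendly configuration would look like:

{{{
[DEFAULT]
# A hostname that matches the metadata server's certificate, not an IP
nova_metadata_ip = nova-metadata.example.com
nova_metadata_protocol = https
}}}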

I am seeing this issue with OpenStack Liberty; however, it appears to be
in the configuration reference for Mitaka too, so I guess this is
across the board.

If this needs to be listed in a different forum, please let me know!

Thanks

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585699

Title:
  Neutron Metadata Agent Configuration - nova_metadata_ip

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585699/+subscriptions



[Yahoo-eng-team] [Bug 1190149] Re: Token auth fails when token is larger than 8k

2016-05-25 Thread Thomas Herve
** No longer affects: heat/havana

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1190149

Title:
  Token auth fails when token is larger than 8k

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance havana series:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  The following tests fail when there are 8 or more endpoints registered with 
keystone:
  tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token 
  tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token

  Steps to reproduce:
  - run devstack with the following services (the heat h-* apis push the 
endpoint count over the threshold):

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,sysstat,tempest,s-proxy,s-account,s-container,s-object,cinder,c-api,c-vol,c-sch,n-cond,heat,h-api,h-api-cfn,h-api-cw,h-eng,n-net
  - run the failing tempest tests, eg
testr run test_v3_token
  - results in the following errors:
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token
  tags: worker-0
  ----------------------------------------------------------------------
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/json/servers_client.py", line 138, in 
list_servers
  resp, body = self.get(url)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 327, in _parse_resp
  return json.loads(body)
File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
  return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
  raise ValueError("No JSON object could be decoded")
  ValueError: No JSON object could be decoded
  ======================================================================
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token
  tags: worker-0
  ----------------------------------------------------------------------
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/xml/servers_client.py", line 181, in 
list_servers
  resp, body = self.get(url, self.headers)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 519, in _parse_resp
  return xml_to_json(etree.fromstring(body))
File "lxml.etree.pyx", line 2993, in lxml.etree.fromstring 
(src/lxml/lxml.etree.c:63285)
File "parser.pxi", line 1617, in lxml.etree._parseMemoryDocument 
(src/lxml/lxml.etree.c:93571)
File "parser.pxi", line 1495, in lxml.etree._parseDoc 
(src/lxml/lxml.etree.c:92370)
File "parser.pxi", line 1011, in lxml.etree._BaseParser._parseDoc 
(src/lxml/lxml.etree.c:89010)
File "parser.pxi", line 577, in 
lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:84711)
File "parser.pxi", line 676, in lxml.etree._handleParseResult 
(src/lxml/lxml.etree.c:85816)
File "parser.pxi", line 627, in lxml.etree._raiseParseError 
(src/lxml/lxml.etree.c:85308)
  XMLSyntaxError: None
  Ran 2 tests in 2.497s (+0.278s)
  FAILED (id=214, failures=2)

  - run keystone endpoint-delete on endpoints until there is 7 endpoints
  - failing tests should now pass
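
  One mitigation used at the time (an assumption here, not something stated
  in this report) was to raise the WSGI servers' maximum header size, e.g.:

  max_header_line = 16384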

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1190149/+subscriptions



[Yahoo-eng-team] [Bug 1340596] Re: Tests fail due to novaclient 2.18 update

2016-05-25 Thread Thomas Herve
** No longer affects: heat/havana

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340596

Title:
  Tests fail due to novaclient 2.18 update

Status in heat:
  Invalid
Status in heat icehouse series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in python-novaclient:
  Fix Released

Bug description:
  tests currently fail on stable branches:
  2014-07-11 07:14:28.737 | 
======================================================================
  2014-07-11 07:14:28.738 | ERROR: test_index 
(openstack_dashboard.dashboards.admin.aggregates.tests.AggregatesViewTests)
  2014-07-11 07:14:28.774 | 
----------------------------------------------------------------------
  2014-07-11 07:14:28.775 | Traceback (most recent call last):
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/helpers.py",
 line 124, in setUp
  2014-07-11 07:14:28.775 | test_utils.load_test_data(self)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/utils.py",
 line 43, in load_test_data
  2014-07-11 07:14:28.775 | data_func(load_onto)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 60, in data
  2014-07-11 07:14:28.776 | TEST.exceptions.nova_unauthorized = 
create_stubbed_exception(nova_unauth)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 44, in create_stubbed_exception
  2014-07-11 07:14:28.776 | return cls(status_code, msg)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 31, in fake_init_exception
  2014-07-11 07:14:28.776 | self.code = code
  2014-07-11 07:14:28.776 | AttributeError: can't set attribute
  2014-07-11 07:14:28.777 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1340596/+subscriptions



[Yahoo-eng-team] [Bug 1582323] Re: Commissioning fails when competing cloud metadata resides on disk

2016-05-25 Thread Blake Rouse
** No longer affects: cloud-init

** Changed in: maas
   Status: Triaged => In Progress

** Changed in: maas
   Importance: High => Critical

** Changed in: maas
 Assignee: (unassigned) => Blake Rouse (blake-rouse)

** Changed in: maas
Milestone: None => 2.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1582323

Title:
  Commissioning fails when competing cloud metadata resides on disk

Status in MAAS:
  In Progress

Bug description:
  A customer reused hardware that had previously deployed a RHEL
  Overcloud-controller, which places metadata on the disk as a legitimate
  source that cloud-init looks at by default.  When the newly enlisted
  node appeared it had the name of "overcloud-controller-0" instead of
  maas-enlist, pulled from the disk metadata which had overridden MAAS'
  metadata.  Commissioning continually failed on all of the nodes until
  the disk metadata was manually removed (KVM-boot an Ubuntu ISO, then
  rm -f the data or dd zeros to the disk).

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1582323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339273] Re: Sphinx documentation build failed in stable/havana: source_dir is not a directory

2016-05-25 Thread Thomas Herve
** Changed in: heat/havana
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1339273

Title:
  Sphinx documentation build failed in stable/havana: source_dir is not
  a directory

Status in Glance:
  Invalid
Status in Glance havana series:
  New
Status in heat:
  Invalid
Status in heat havana series:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Documentation is not building in stable/havana:

  $ tox -evenv -- python setup.py build_sphinx
  venv inst: /opt/stack/horizon/.tox/dist/horizon-2013.2.4.dev9.g19634d6.zip
  venv runtests: PYTHONHASHSEED='1422458638'
  venv runtests: commands[0] | python setup.py build_sphinx
  running build_sphinx
  error: 'source_dir' must be a directory name (got 
`/opt/stack/horizon/doc/source`)
  ERROR: InvocationError: '/opt/stack/horizon/.tox/venv/bin/python setup.py 
build_sphinx'
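
  For reference, a minimal sketch of the setup.cfg stanza that build_sphinx
  reads (the paths match the error output above; the exact stanza in
  stable/havana may differ):

    [build_sphinx]
    source-dir = doc/source
    build-dir = doc/build
    all_files = 1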

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2016-05-25 Thread Thomas Herve
** Changed in: heat/liberty
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in heat liberty series:
  Fix Released
Status in Ironic:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Released
Status in Sahara liberty series:
  Fix Committed
Status in Sahara mitaka series:
  Fix Released

Bug description:
  As of https://review.openstack.org/#/c/217347/ oslo.db no longer has
  testresources or testscenarios in its requirements, so the next release
  of oslo.db will break several projects. Projects that use fixtures from
  oslo.db should add these to their own requirements if they need them.

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in 
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in <module>
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423
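
  A minimal sketch of the corresponding test-requirements.txt addition
  (version pins are illustrative, following the global requirements of the
  time):

    testresources>=0.2.4  # Apache-2.0/BSD
    testscenarios>=0.4    # Apache-2.0/BSD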

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582323] Re: Commissioning fails when competing cloud metadata resides on disk

2016-05-25 Thread Blake Rouse
It was recommended by smoser to add this to MAAS:
http://paste.ubuntu.com/16683033/

But then it was discovered that this will not handle all cases. A kernel
parameter would be needed to force only one datasource.
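
For the configuration half of that, a minimal sketch (hypothetical file
name; datasource_list is the stock cloud-init key) that pins cloud-init to
a single datasource:

  # /etc/cloud/cloud.cfg.d/90_maas.cfg (hypothetical path)
  datasource_list: [ MAAS ]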


** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: maas
   Status: In Progress => Triaged

** Changed in: maas
 Assignee: Blake Rouse (blake-rouse) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1582323

Title:
  Commissioning fails when competing cloud metadata resides on disk

Status in cloud-init:
  Confirmed
Status in MAAS:
  Triaged

Bug description:
  A customer reused hardware that had previously deployed a RHEL
  Overcloud controller, which places metadata on the disk as a legitimate
  source that cloud-init looks at by default.  When the newly enlisted
  node appeared, it had the name "overcloud-controller-0" instead of
  maas-enlist, pulled from the disk metadata, which had overridden MAAS's
  metadata.  Commissioning continually failed on all of the nodes until
  the disk metadata was manually removed (KVM boot an Ubuntu ISO, rm -f
  the data or dd zeros to the disk).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1582323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585682] [NEW] Horizon gating on dsvm-integration job is broken due to recent changes in devstack/keystone

2016-05-25 Thread Timur Sufiev
Public bug reported:

More importantly, Horizon in devstack is broken too due to the inability
to get the list of projects / switch the current project for a user; see

DEBUG:keystoneauth.session:Request returned failure status: 404
Unable to retrieve project list.
Traceback (most recent call last):
  File "/home/tsufiev/develop/django_openstack_auth/openstack_auth/user.py", 
line 314, in authorized_tenants
is_federated=self.is_federated)
  File "/home/tsufiev/develop/django_openstack_auth/openstack_auth/utils.py", 
line 325, in get_project_list
projects = client.projects.list(user=kwargs.get('user_id'))
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/positional/__init__.py",
 line 101, in inner
return wrapped(*args, **kwargs)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/v3/projects.py",
 line 107, in list
**kwargs)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 75, in func
return f(*args, **new_kwargs)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 383, in list
self.collection_key)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 124, in _list
resp, body = self.client.get(url, **kwargs)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneauth1/adapter.py",
 line 173, in get
return self.request(url, 'GET', **kwargs)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneauth1/adapter.py",
 line 330, in request
resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneauth1/adapter.py",
 line 98, in request
return self.session.request(url, method, **kwargs)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/positional/__init__.py",
 line 101, in inner
return wrapped(*args, **kwargs)
  File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneauth1/session.py",
 line 468, in request
raise exceptions.from_response(resp, method, url)
NotFound: The resource could not be found. (HTTP 404)

** Affects: horizon
 Importance: Critical
 Status: New

** Affects: keystone
 Importance: Undecided
 Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1585682

Title:
  Horizon gating on dsvm-integration job is broken due to recent changes
  in devstack/keystone

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Identity (keystone):
  New

Bug description:
  More importantly, Horizon in devstack is broken too due to the
  inability to get the list of projects / switch the current project for
  a user; see

  DEBUG:keystoneauth.session:Request returned failure status: 404
  Unable to retrieve project list.
  Traceback (most recent call last):
File "/home/tsufiev/develop/django_openstack_auth/openstack_auth/user.py", 
line 314, in authorized_tenants
  is_federated=self.is_federated)
File "/home/tsufiev/develop/django_openstack_auth/openstack_auth/utils.py", 
line 325, in get_project_list
  projects = client.projects.list(user=kwargs.get('user_id'))
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/positional/__init__.py",
 line 101, in inner
  return wrapped(*args, **kwargs)
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/v3/projects.py",
 line 107, in list
  **kwargs)
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 75, in func
  return f(*args, **new_kwargs)
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 383, in list
  self.collection_key)
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 124, in _list
  resp, body = self.client.get(url, **kwargs)
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneauth1/adapter.py",
 line 173, in get
  return self.request(url, 'GET', **kwargs)
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneauth1/adapter.py",
 line 330, in request
  resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-packages/keystoneauth1/adapter.py",
 line 98, in request
  return self.session.request(url, method, **kwargs)
File 
"/home/tsufiev/develop/horizon/.venv/local/lib/python2.7/site-pa

[Yahoo-eng-team] [Bug 1568197] Re: Problems with neutron.common.constants import break HA and DVRHA functionality

2016-05-25 Thread Adolfo Duarte
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1568197

Title:
  Problems with neutron.common.constants import break HA and DVRHA
  functionality

Status in neutron:
  Fix Released

Bug description:
  Many files in the neutron project use this import:

  from neutron.common import constants as l3_const

  or some variation of it.

  For some reason this import is not working correctly.
  In particular, the following lines from the file neutron.common.constants
  seem not to take effect (the value of the constant gets reverted):

  L24 - L31:
  # TODO(anilvenkata) Below constants should be added to neutron-lib
  DEVICE_OWNER_HA_REPLICATED_INT = (lib_constants.DEVICE_OWNER_NETWORK_PREFIX +
"ha_router_replicated_interface")
  ROUTER_INTERFACE_OWNERS = lib_constants.ROUTER_INTERFACE_OWNERS + \
  (DEVICE_OWNER_HA_REPLICATED_INT,)
  ROUTER_INTERFACE_OWNERS_SNAT = lib_constants.ROUTER_INTERFACE_OWNERS_SNAT + \
  (DEVICE_OWNER_HA_REPLICATED_INT,)


  The ROUTER_INTERFACE_OWNERS and ROUTER_INTERFACE_OWNERS_SNAT constants
  do not seem to take the new values assigned to them: original-tuple +
  DEVICE_OWNER_HA_REPLICATED_INT

  In files which use the import mentioned above, "from neutron.common
  import constants" or a variation of it, the values of
  ROUTER_INTERFACE_OWNERS and ROUTER_INTERFACE_OWNERS_SNAT do not contain
  the value "ha_router_replicated_interface".

  This is causing problems with HA router and DVRHA routers because the
  files neutron/db/l3_dvr_db.py and neutron/db/l3_hamode_db.py make use
  of the constants neutron.common.constants.ROUTER_INTERFACE_OWNERS and
  neutron.common.constants.ROUTER_INTERFACE_OWNERS_SNAT to figure out
  what ports belong to a router.

  Since the "ha_router_replicated_interface" is not listed in either one
  of those variables, the neutron server (q-svc) does not include any
  ports which are owned by ha_router_replicated_interface as part of
  router updates.
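
  A minimal, runnable sketch (stand-in module names, not the neutron code)
  of the symptom: a consumer holding the un-extended module sees a tuple
  without the HA owner:

    import types

    base = types.ModuleType('base')          # stands in for neutron_lib constants
    base.ROUTER_INTERFACE_OWNERS = ('network:router_interface',)

    extended = types.ModuleType('extended')  # stands in for neutron.common.constants
    extended.ROUTER_INTERFACE_OWNERS = base.ROUTER_INTERFACE_OWNERS + (
        'network:ha_router_replicated_interface',)

    print(base.ROUTER_INTERFACE_OWNERS)      # 1 entry: the "reverted" value
    print(extended.ROUTER_INTERFACE_OWNERS)  # 2 entries: the intended value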

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1568197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585680] [NEW] neutron-lbaas doesn't have tempest plugin

2016-05-25 Thread Emilien Macchi
Public bug reported:

Puppet OpenStack CI is interested in running the neutron-lbaas Tempest
tests, but this currently does not work because neutron-lbaas is missing a
Tempest plugin and its entry point, so test discovery fails.

Right now, to run Tempest we need to go into the neutron-lbaas directory and
run tox there, etc.
That's not the way to go; other projects (Neutron itself, for one) already
provide Tempest plugins.

This is an official RFE to have one in neutron-lbaas so we can run the
tests in a way consistent with other projects.
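
A minimal sketch of the setup.cfg entry point such a plugin would register
(the module path is illustrative; tempest.test_plugins is the group that
Tempest's discovery keys on):

  [entry_points]
  tempest.test_plugins =
      neutron_lbaas = neutron_lbaas.tests.tempest.plugin:NeutronLBaaSPlugin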

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585680

Title:
  neutron-lbaas doesn't have tempest plugin

Status in neutron:
  New

Bug description:
  Puppet OpenStack CI is interested in running the neutron-lbaas Tempest
  tests, but this currently does not work because neutron-lbaas is missing
  a Tempest plugin and its entry point, so test discovery fails.

  Right now, to run Tempest we need to go into the neutron-lbaas directory
  and run tox there, etc.
  That's not the way to go; other projects (Neutron itself, for one)
  already provide Tempest plugins.

  This is an official RFE to have one in neutron-lbaas so we can run the
  tests in a way consistent with other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580440] Re: neutron purge - executing command on non existing tenant print wrong command

2016-05-25 Thread John Davidge
I've posted a patch to improve the documentation here:

https://review.openstack.org/#/c/321012

Please review and let me know if those changes help to clarify the
expected behavior.

@Assaf Could you expand on your thoughts about shared resources? Perhaps
on the doc review. Thanks

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580440

Title:
  neutron purge - executing command on non existing tenant print wrong
  command

Status in neutron:
  Invalid

Bug description:
  I executed the "neutron purge" command with a non-existing tenant ID and
  received the following:

  neutron purge 25a1c11e26354d7dbb5b204eb1310f33
  Purging resources: 100% complete.
  The following resources could not be deleted: 1 network

  
  We do not have that tenant ID, so the message should be:

  No tenant with the specified ID was found.


  python-neutron-8.0.0-1.el7ost.noarch
  openstack-neutron-8.0.0-1.el7ost.noarch
  python-neutron-lib-0.0.2-1.el7ost.noarch
  openstack-neutron-metering-agent-8.0.0-1.el7ost.noarch
  openstack-neutron-ml2-8.0.0-1.el7ost.noarch
  openstack-neutron-openvswitch-8.0.0-1.el7ost.noarch
  python-neutronclient-4.1.1-2.el7ost.noarch
  openstack-neutron-common-8.0.0-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585652] [NEW] EmptyCatalog not treated during cinderclient creation

2016-05-25 Thread Rodrigo Duarte
Public bug reported:

Steps to reproduce
==
1 - Get a keystone v3 token using the ?nocatalog param. Example:

export TOKEN=`curl -i -k -v -H "Content-type: application/json" -d
'{"auth": {"identity": {"methods": ["password"], "password": {"user":
{"domain": {"name": "Default"}, "name": "test", "password":
"password"}}}, "scope": {"project": {"name": "test-project", "domain":
{"name": "Default"}}}}}' http://localhost:5000/v3/auth/tokens |
grep X-Subject-Token | awk '{print $2}' | sed -e 's,\r,,'`

2 - Try to create a server using a cinder volume. Example:

curl -k -v -H  "X-Auth-Token:$TOKEN" -H "Content-type: application/json"
-d '{"server": {"name": "test_CSDPU_1", "imageRef": "",
"block_device_mapping_v2": [{"source_type": "volume",
"destination_type": "volume", "boot_index": 0, "delete_on_termination":
false, "uuid": "85397498-850f-406f-806a-25cf93cd94dc"}], "flavorRef":
"790959df-f79b-4b87-8389-a160a3b6e606", "max_count": 1, "min_count":
1}}' http://localhost:8774/v2/07564c39740f405b92f4722090cd745b/servers

Actual result
=

{"badRequest": {"message": "Block Device Mapping is Invalid: failed to
get volume 85397498-850f-406f-806a-25cf93cd94dc.", "code": 400}}

Expected result
===

Server is created without issues or a meaningful error message is
displayed.

Details
===

- During cinderclient creation, nova tries to get cinder's endpoint
using the auth object obtained from the token without the catalog [1].
keystoneauth will raise an EmptyCatalog exception [2] that is not
treated and will result in the error seen above.

[1] https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L82
[2] 
https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/access/service_catalog.py#L190

- This issue might happen in other areas of the code; it is not necessarily
exclusive to the cinderclient creation.
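
A minimal sketch (hypothetical wrapper, not the nova code) of treating the
exception so callers get a meaningful error instead of the opaque 400 above:

  from keystoneauth1 import exceptions as ks_exc

  def get_cinder_endpoint(auth, session, service_type='volumev2'):
      try:
          return auth.get_endpoint(session, service_type=service_type)
      except ks_exc.EmptyCatalog:
          raise RuntimeError(
              'Token carries no service catalog (was it issued with '
              '?nocatalog?); cannot locate the %s endpoint' % service_type)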

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585652

Title:
  EmptyCatalog not treated during cinderclient creation

Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce
  ==
  1 - Get a keystone v3 token using the ?nocatalog param. Example:

  export TOKEN=`curl -i -k -v -H "Content-type: application/json" -d
  '{"auth": {"identity": {"methods": ["password"], "password": {"user":
  {"domain": {"name": "Default"}, "name": "test", "password":
  "password"}}}, "scope": {"project": {"name": "test-project", "domain":
  {"name": "Default"}}}}}' http://localhost:5000/v3/auth/tokens |
  grep X-Subject-Token | awk '{print $2}' | sed -e 's,\r,,'`

  2 - Try to create a server using a cinder volume. Example:

  curl -k -v -H  "X-Auth-Token:$TOKEN" -H "Content-type:
  application/json" -d '{"server": {"name": "test_CSDPU_1", "imageRef":
  "", "block_device_mapping_v2": [{"source_type": "volume",
  "destination_type": "volume", "boot_index": 0,
  "delete_on_termination": false, "uuid": "85397498-850f-406f-806a-
  25cf93cd94dc"}], "flavorRef": "790959df-f79b-4b87-8389-a160a3b6e606",
  "max_count": 1, "min_count": 1}}'
  http://localhost:8774/v2/07564c39740f405b92f4722090cd745b/servers

  Actual result
  =

  {"badRequest": {"message": "Block Device Mapping is Invalid: failed to
  get volume 85397498-850f-406f-806a-25cf93cd94dc.", "code": 400}}

  Expected result
  ===

  Server is created without issues or a meaningful error message is
  displayed.

  Details
  ===

  - During cinderclient creation, nova tries to get cinder's endpoint
  using the auth object obtained from the token without the catalog [1].
  keystoneauth will raise an EmptyCatalog exception [2] that is not
  treated and will result in the error seen above.

  [1] https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L82
  [2] 
https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/access/service_catalog.py#L190

  - This issue might happen in other areas of the code; it is not
  necessarily exclusive to the cinderclient creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585632] [NEW] test_revoke_by_audit_chain_id_chained_token() fails

2016-05-25 Thread Corey Bryant
Public bug reported:

This test started failing in Ubuntu package builds as of commit
79952ffbd4c1ffd2dab32c04581b3a7f71a05e28.

==
Failed 1 tests - output below:
==

keystone.tests.unit.test_auth.FernetAuthWithToken.test_revoke_by_audit_chain_id_chained_token
-

Captured traceback:
~~~
Traceback (most recent call last):
  File "/��PKGBUILDDIR��/keystone/tests/unit/test_auth.py", line 595, in 
test_revoke_by_audit_chain_id_chained_token
token_id=token_id)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 422, 
in assertRaises
self.assertThat(our_callable, matcher)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 435, 
in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: > returned {'access': {'token': {'issued_at': 
'2016-05-25T04:29:17.00Z', 'expires': '2016-05-25T05:29:14.00Z', 'id': 
u'gABXRSodRmWvyRUcgKFIelwt9imwkJYBsdVmL3DSNEN7rntAzfi0bpFSB4fWfSsZ61G7ETkEjDoFGOfV_lycQpUhTSZg1JFKK4Tc_cCk04aDq6vLVMnW7ZEjSZ4KTNsyKrbKtZVNbVi254Rg3vNizFUysVsG-w',
 'audit_ids': [u'DIO9TonYRu6riZal9j7keg']}, 'serviceCatalog': [], 'user': 
{'username': u'FOO', 'roles_links': [], 'id': 
u'cad5986016664bb3927a3a6195581a0c', 'roles': [], 'name': u'FOO'}, 'metadata': 
{'is_admin': 0, 'roles': []}}}

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1585632

Title:
  test_revoke_by_audit_chain_id_chained_token() fails

Status in OpenStack Identity (keystone):
  New

Bug description:
  This test started failing in Ubuntu package builds as of commit
  79952ffbd4c1ffd2dab32c04581b3a7f71a05e28.

  ==
  Failed 1 tests - output below:
  ==

  
keystone.tests.unit.test_auth.FernetAuthWithToken.test_revoke_by_audit_chain_id_chained_token
  
-

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "/��PKGBUILDDIR��/keystone/tests/unit/test_auth.py", line 595, in 
test_revoke_by_audit_chain_id_chained_token
  token_id=token_id)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 
422, in assertRaises
  self.assertThat(our_callable, matcher)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 
435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: > returned {'access': {'token': {'issued_at': 
'2016-05-25T04:29:17.00Z', 'expires': '2016-05-25T05:29:14.00Z', 'id': 
u'gABXRSodRmWvyRUcgKFIelwt9imwkJYBsdVmL3DSNEN7rntAzfi0bpFSB4fWfSsZ61G7ETkEjDoFGOfV_lycQpUhTSZg1JFKK4Tc_cCk04aDq6vLVMnW7ZEjSZ4KTNsyKrbKtZVNbVi254Rg3vNizFUysVsG-w',
 'audit_ids': [u'DIO9TonYRu6riZal9j7keg']}, 'serviceCatalog': [], 'user': 
{'username': u'FOO', 'roles_links': [], 'id': 
u'cad5986016664bb3927a3a6195581a0c', 'roles': [], 'name': u'FOO'}, 'metadata': 
{'is_admin': 0, 'roles': []}}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1585632/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585623] [NEW] A vm's port is in down state after compute node reboot

2016-05-25 Thread Oleg Bondarev
Public bug reported:

After compute node reboot some ports may end up in DOWN state and
corresponding VMs lose net access.

** Affects: neutron
 Importance: High
 Assignee: Oleg Bondarev (obondarev)
 Status: Confirmed


** Tags: mitaka-backport-potential ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585623

Title:
  A vm's port is in down state after compute node reboot

Status in neutron:
  Confirmed

Bug description:
  After compute node reboot some ports may end up in DOWN state and
  corresponding VMs lose net access.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585494] Re: instances recovered

2016-05-25 Thread Béla Vancsics
** Project changed: devstack => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585494

Title:
  instances recovered

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Nova currently does not rebuild instances if they were removed from
  the disk.

  The instances should be recovered (after OpenStack is restarted) if the
  compute host is replaced or the disk is erased, so that a board
  replacement can be performed in case of hardware failure.

  
  Steps:
  0) OpenStack is running
  1) Create a new instance
  2) Stop OpenStack
  3) Erase the instance from the disk
  4) Destroy the instance with virsh
  5) Start OpenStack

  Result: The (new) instance's status is Shutoff and the power state is
  Shut Down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585608] [NEW] theme switcher broken

2016-05-25 Thread Matthias Runge
Public bug reported:

While adding a new theme via local_settings.d, all themes are removed
from the theme switcher.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1585608

Title:
  theme switcher broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While adding a new theme via local_settings.d, all themes are removed
  from the theme switcher.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1585608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585494] [NEW] instances recovered

2016-05-25 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Nova currently does not rebuild instances if they were removed from the
disk.

The instances should be recovered (after OpenStack is restarted) if the
compute host is replaced or the disk is erased, so that a board replacement
can be performed in case of hardware failure.


Steps:
0) OpenStack is running
1) Create a new instance
2) Stop OpenStack
3) Erase the instance from the disk
4) Destroy the instance with virsh
5) Start OpenStack

Result: The (new) instance's status is Shutoff and the power state is
Shut Down.

** Affects: nova
 Importance: Undecided
 Assignee: Béla Vancsics (vancsics)
 Status: In Progress

-- 
instances recovered
https://bugs.launchpad.net/bugs/1585494
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585606] [NEW] theming via local_settings.d broken

2016-05-25 Thread Matthias Runge
Public bug reported:

While adding a theme via settings.py works, the same theme does NOT
work when added via local_settings.d.

Compressing... CommandError: An error occurred during rendering 
/home/mrunge/work/jeff-theme/horizon/openstack_dashboard/templates/_stylesheets.html:
 Couldn't find anything to import: /themes/default/variables
Extensions: , , 
Search path:
  
on line 3 of themes/rcue/_variables.scss
imported from line 1 of u'string:fecafbf5d3584816:\n// My Themes\n@import 
"/themes/rcue/variables";\n\n// Horizon\n@import "/dashboard/scss/horizon.scs'

Not to mention, there is *NO* file named 'horizon.scs' (without the last
's') in the whole file tree.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1585606

Title:
  theming via local_settings.d broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While adding a theme via settings.py works, the same theme does NOT
  work when added via local_settings.d.

  Compressing... CommandError: An error occurred during rendering 
/home/mrunge/work/jeff-theme/horizon/openstack_dashboard/templates/_stylesheets.html:
 Couldn't find anything to import: /themes/default/variables
  Extensions: , , 
  Search path:

  on line 3 of themes/rcue/_variables.scss
  imported from line 1 of u'string:fecafbf5d3584816:\n// My Themes\n@import 
"/themes/rcue/variables";\n\n// Horizon\n@import "/dashboard/scss/horizon.scs'

  Not to mention, there is *NO* file named 'horizon.scs' (without the
  last 's') in the whole file tree.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1585606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585601] [NEW] Deleting a live-migrated instance causes its fixed IP to remain reserved

2016-05-25 Thread Artom Lifshitz
Public bug reported:

When using nova-network, an attempt to boot an instance with the fixed
IP of an instance that has been live-migrated and then deleted will fail
with 'Fixed IP address is already in use on instance.'

To reproduce:

1. Boot an instance
2. Live-migrate it
3. Delete it
4. Boot a new instance with the same fixed IP.

This has been reported against Icehouse and has been reproduced in
master, and is therefore presumably present in all versions in-between.

** Affects: nova
 Importance: Undecided
 Assignee: Artom Lifshitz (notartom)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Artom Lifshitz (notartom)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585601

Title:
  Deleting a live-migrated instance causes its fixed IP to remain
  reserved

Status in OpenStack Compute (nova):
  New

Bug description:
  When using nova-network, an attempt to boot an instance with the fixed
  IP of an instance that has been live-migrated and then deleted will
  fail with 'Fixed IP address is already in use on instance.'

  To reproduce:

  1. Boot an instance
  2. Live-migrate it
  3. Delete it
  4. Boot a new instance with the same fixed IP.

  This has been reported against Icehouse and has been reproduced in
  master, and is therefore presumably present in all versions in-
  between.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585515] Re: Paramiko doesn't work with Nova

2016-05-25 Thread Markus Zoeller (markus_z)
Paramiko 2.0 got released 26 days ago with [1]. Nova put a workaround in place 
to work with paramiko 2.x 21 days ago [2]. After that, we bumped the version in 
the global requirements to 2.0 [3]. I tested it locally with commit 9d99081 
(Newton master) and it works for me. I also didn't find the error message in 
logstash. I could reproduce this issue when I used stable/Mitaka and upgraded 
from the pinned version paramiko 1.16 to paramiko 2.0 manually.
All of this makes me believe that your setup could be in a weird state. I'm 
closing this for now. If you can reproduce it, feel free to reopen.

References:
[1] 
https://github.com/paramiko/paramiko/commit/258cc64ab36b58c681aa974151288fc7ddc1bb31
[2] 
https://github.com/openstack/nova/commit/c05b338f163e0bafbe564c6c7c593b819f2f2eac
[3] 
https://github.com/openstack/requirements/commit/e379813e9ccd41138af969f4c8e57abd062af527

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585515

Title:
  Paramiko doesn't work with Nova

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  It looks like Paramiko 2.0.0 again breaks nova which currently has a
  requirement for 'paramiko>=1.16.0 # LGPL'.

  
nova.tests.unit.api.openstack.compute.test_keypairs.KeypairsTestV210.test_keypair_create_duplicate
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/api/openstack/compute/test_keypairs.py", line 
237, in test_keypair_create_duplicate
  self.controller.create, self.req, body=body)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 485, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 496, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 547, in _matchHelper
  mismatch = matcher.match(matchee)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
  mismatch = matcher.match(matchee)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 475, in match
  reraise(*matchee)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
  result = matchee()
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 1049, in __call__
  return self._callable_object(*self._args, **self._kwargs)
File "nova/api/openstack/wsgi.py", line 961, in version_select
  return func.func(self, *args, **kwargs)
File "nova/api/openstack/extensions.py", line 504, in wrapped
  raise webob.exc.HTTPInternalServerError(explanation=msg)
  webob.exc.HTTPInternalServerError: Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  
  

  Captured pythonlogging:
  ~~~
  2016-05-25 09:55:14,571 INFO [nova.api.openstack] Loaded extensions: 
['os-keypairs', 'servers']
  2016-05-25 09:55:16,314 ERROR [nova.api.openstack.extensions] Unexpected 
exception in API method
  Traceback (most recent call last):
File "nova/api/openstack/extensions.py", line 478, in wrapped
  return f(*args, **kwargs)
File "nova/api/validation/__init__.py", line 73, in wrapper
  return func(*args, **kwargs)
File "nova/api/openstack/compute/keypairs.py", line 72, in create
  return self._create(req, body, type=True, user_id=user_id)
File "nova/api/openstack/compute/keypairs.py", line 132, in _create
  context, user_id, name, key_type)
File "nova/exception.py", line 110, in wrapped
  payload)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
  self.force_reraise()
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 197, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "nova/exception.py", line 89, in wrapped
  return f(self, context, *args, **kw)
File "nova/co

[Yahoo-eng-team] [Bug 1585584] [NEW] [Glare] Glare v0.1 is unable to create a public artifact draft

2016-05-25 Thread Alexander Tivelkov
Public bug reported:

For some reason the visibility field gets excluded from the valid
input when creating an artifact draft. Thus it is impossible to create a
public artifact in a single call: a follow-up PATCH call is required instead.
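
A minimal sketch of the follow-up PATCH body implied above (standard
RFC 6902 JSON Patch; the Glare-specific routing and headers are omitted):

  import json

  patch = [{'op': 'replace', 'path': '/visibility', 'value': 'public'}]
  print(json.dumps(patch))  # body of the PATCH that must follow the create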

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1585584

Title:
  [Glare] Glare v0.1 is unable to create a public artifact draft

Status in Glance:
  New

Bug description:
  For some reason the visibility field gets excluded from the valid
  input when creating an artifact draft. Thus it is impossible to create
  a public artifact in a single call: a follow-up PATCH call is required
  instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1585584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558683] Re: Versions endpoint does not support X-Forwarded-Proto

2016-05-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/305152
Committed: 
https://git.openstack.org/cgit/openstack/cinder/commit/?id=d7e7e7bdf0f112c8315ae38f04b4849338173d51
Submitter: Jenkins
Branch: master

commit d7e7e7bdf0f112c8315ae38f04b4849338173d51
Author: yuriy_n 
Date:   Mon May 23 11:28:25 2016 +0300

Handle SSL termination proxies for version list

Cinder list with pagination contains wrong scheme for
'next' link in case of SSL endpoints. This patch fixes
it and returns the correct scheme in version URLs if
service is behind an SSL termination proxy.

Change-Id: If5aab9cc25a2e7c66a0bb13b5f7488a667b30309
Closes-Bug: #1558683


** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1558683

Title:
  Versions endpoint does not support X-Forwarded-Proto

Status in Cinder:
  Fix Released
Status in Glance:
  In Progress

Bug description:
  When a project is deployed behind an SSL-terminating proxy, the version
  endpoint returns the wrong URLs.  The returned protocol in the response
  URLs is http:// instead of the expected https://.

  This is because the response built by versions.py gets the host
  information only from the incoming req.  If SSL has been terminated by
  a proxy, then the information in the req indicates http://.  Other
  projects have addressed this by adding the config parameter
  secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO.  This will tell the
  project to use the value in X-Forwarded-Proto (https or http) when
  building the URLs in the response.  Nova and Keystone support this
  configuration option.

  One workaround is to set the public_endpoint parameter. However, the
  value set for public_endpoint, is also returned when the internal and
  admin version endpoints are queried, which breaks other things.
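
  A minimal WSGI-style sketch (hypothetical helper) of the behavior that
  secure_proxy_ssl_header enables: prefer the proxy-supplied protocol when
  present, falling back to the scheme of the incoming request:

    def effective_scheme(environ, header='HTTP_X_FORWARDED_PROTO'):
        # X-Forwarded-Proto arrives in the WSGI environ under an HTTP_* key
        return environ.get(header) or environ['wsgi.url_scheme']

    env = {'wsgi.url_scheme': 'http', 'HTTP_X_FORWARDED_PROTO': 'https'}
    print(effective_scheme(env))  # https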

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1558683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585524] Re: neutron server Error: TooManyExternalNetworks

2016-05-25 Thread Hong Hui Xiao
According to [1], this should be invalid. I have verified that, with the
configuration from [1], no error is reported.

[1]
https://github.com/openstack/neutron/blob/f60291820599804e8bfdaafa0cd0565549daa193/neutron/agent/l3/config.py#L64-L66
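
For reference, a minimal sketch of the agent option named in the traceback
(the value is illustrative):

  # l3_agent.ini
  [DEFAULT]
  gateway_external_network_id = <UUID of this agent's external network>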

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585524

Title:
  neutron server Error:  TooManyExternalNetworks

Status in neutron:
  Invalid

Bug description:
  Main steps:
  1. Create 2 external networks, each with a different subnet, with neutron
  CLI commands; there is no error info from the CLI.
  e.g. neutron net-create --router:external=True --provider:physical_network
  provider100 --provider:network_type flat provider100
  2. Create 2 routers, each connected to one of the external nets; there is
  no error info from the CLI.
  3. Create 1 floating IP from one of the external networks; no error info
  from the CLI.
  4. Create 1 private network, and try creating a VM connected to the
  private network.
  There is no response to the command: nova boot xxx.
  We can see errors on the screen; it seems the neutron CLI needs more
  checking when creating multiple external networks.
  q-svc:
  2016-05-25 00:55:39.756 ERROR oslo_messaging.rpc.server 
[req-8ff829a5-2241-4ad0-896e-136b1de3efe7 None None] Exception during handling 
message
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 153, 
in dispatch
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 214, in 
get_external_network_id
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server net_id = 
self.plugin.get_external_network_id(context)
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/db/external_net_db.py", line 199, in 
get_external_network_id
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server raise 
n_exc.TooManyExternalNetworks()
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server 
TooManyExternalNetworks: More than one external network exists.
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server
  neutron l3-agent:
  2016-05-24 22:28:22.418 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router '69b7ca3c-3aa5-44eb-bec8-8e53accbde64'
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 485, in 
_process_router_update
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 417, in 
_process_router_if_compatible
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent if ex_net_id != 
self._fetch_external_net_id(force=True):
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 297, in 
_fetch_external_net_id
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent raise Exception(msg)
  2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent Exception: The 
'gateway_external_network_id' option must be configured for this agent as 
Neutron has more than one external network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585250] Re: Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

2016-05-25 Thread Elena Ezhova
** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585250

Title:
  Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

Status in neutron:
  Confirmed

Bug description:
  There is no indication on the CLI that creating an LBaaSv2 object
  (other than a "loadbalancer") has failed...

  stack@openstack:~$ neutron lbaas-listener-create --name MyListener1 
--loadbalancer MyLB1 --protocol HTTP --protocol-port 80
  Created a new listener:
  +---------------------------+------------------------------------------------+
  | Field                     | Value                                          |
  +---------------------------+------------------------------------------------+
  | admin_state_up            | True                                           |
  | connection_limit          | -1                                             |
  | default_pool_id           |                                                |
  | default_tls_container_ref |                                                |
  | description               |                                                |
  | id                        | 5ca664d6-3a3a-4369-821d-e36c87ff5dc2           |
  | loadbalancers             | {"id": "549982d9-7f52-48ac-a4fe-a905c872d71d"} |
  | name                      | MyListener1                                    |
  | protocol                  | HTTP                                           |
  | protocol_port             | 80                                             |
  | sni_container_refs        |                                                |
  | tenant_id                 | 22000d943c5341cd88d27bd39a4ee9cd               |
  +---------------------------+------------------------------------------------+

  There is no indication of any issue here, and lbaas-listener-show
  produces the same output.  However, in reality, the listener is in an
  error state...

  mysql> select * from lbaas_listeners\G
  *************************** 1. row ***************************
                 tenant_id: 22000d943c5341cd88d27bd39a4ee9cd
                        id: 5ca664d6-3a3a-4369-821d-e36c87ff5dc2
                      name: MyListener1
               description: 
                  protocol: HTTP
             protocol_port: 80
          connection_limit: -1
           loadbalancer_id: 549982d9-7f52-48ac-a4fe-a905c872d71d
           default_pool_id: NULL
            admin_state_up: 1
       provisioning_status: ERROR
          operating_status: OFFLINE
  default_tls_container_id: NULL
  1 row in set (0.00 sec)

  
  How is a CLI user who doesn't have access to the Neutron DB supposed to know 
an error has occurred (other than "it doesn't work", obviously)?
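
  One option that does not require DB access is the LBaaS v2 statuses
  sub-resource. A minimal sketch (placeholder endpoint and token; the URL
  layout assumes the standard /v2.0/lbaas path):

    import requests

    NEUTRON = 'http://localhost:9696/v2.0'          # placeholder endpoint
    LB_ID = '549982d9-7f52-48ac-a4fe-a905c872d71d'  # the LB from the output above
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}       # placeholder token

    resp = requests.get('%s/lbaas/loadbalancers/%s/statuses' % (NEUTRON, LB_ID),
                        headers=HEADERS)
    # The returned tree carries provisioning/operating statuses for the
    # listeners, pools and members hanging off the load balancer.
    print(resp.json())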

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585250] [NEW] Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

2016-05-25 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

There is no indication on the CLI that creating an LBaaSv2 object (other
than a "loadbalancer") has failed...

stack@openstack:~$ neutron lbaas-listener-create --name MyListener1 
--loadbalancer MyLB1 --protocol HTTP --protocol-port 80
Created a new listener:
+---------------------------+------------------------------------------------+
| Field                     | Value                                          |
+---------------------------+------------------------------------------------+
| admin_state_up            | True                                           |
| connection_limit          | -1                                             |
| default_pool_id           |                                                |
| default_tls_container_ref |                                                |
| description               |                                                |
| id                        | 5ca664d6-3a3a-4369-821d-e36c87ff5dc2           |
| loadbalancers             | {"id": "549982d9-7f52-48ac-a4fe-a905c872d71d"} |
| name                      | MyListener1                                    |
| protocol                  | HTTP                                           |
| protocol_port             | 80                                             |
| sni_container_refs        |                                                |
| tenant_id                 | 22000d943c5341cd88d27bd39a4ee9cd               |
+---------------------------+------------------------------------------------+

There is no indication of any issue here, and lbaas-listener-show
produces the same output.  However, in reality, the listener is in an
error state...

mysql> select * from lbaas_listeners\G
*************************** 1. row ***************************
               tenant_id: 22000d943c5341cd88d27bd39a4ee9cd
                      id: 5ca664d6-3a3a-4369-821d-e36c87ff5dc2
                    name: MyListener1
             description: 
                protocol: HTTP
           protocol_port: 80
        connection_limit: -1
         loadbalancer_id: 549982d9-7f52-48ac-a4fe-a905c872d71d
         default_pool_id: NULL
          admin_state_up: 1
     provisioning_status: ERROR
        operating_status: OFFLINE
default_tls_container_id: NULL
1 row in set (0.00 sec)


How is a CLI user who doesn't have access to the Neutron DB supposed to know an 
error has occurred (other than "it doesn't work", obviously)?

** Affects: neutron
 Importance: Undecided
 Assignee: Brandon Logan (brandon-logan)
 Status: Confirmed


** Tags: lbaas
-- 
Statuses not shown for non-"loadbalancer" LBaaS objects on CLI
https://bugs.launchpad.net/bugs/1585250
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585538] [NEW] Creating LBaaS pool related to LB (not listener) does not reflect the new pool to provider's driver

2016-05-25 Thread Evgeny Fedoruk
Public bug reported:

1. Create LB
2. Create pool related to that LB (with --loadbalancer argument).
3. The LB object passed as an argument to the provider's driver handling
does not include the new pool in the LB's pools parameter.

The problem occurs because the context is not refreshed for the pool's
LB object.
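
A minimal, generic SQLAlchemy sketch (toy models, not the neutron-lbaas
ones) of the stale-collection effect the description points at, and the
refresh that fixes it:

  from sqlalchemy import Column, ForeignKey, String, create_engine
  from sqlalchemy.orm import Session, declarative_base, relationship

  Base = declarative_base()

  class LoadBalancer(Base):
      __tablename__ = 'lbs'
      id = Column(String(36), primary_key=True)
      pools = relationship('Pool')

  class Pool(Base):
      __tablename__ = 'pools'
      id = Column(String(36), primary_key=True)
      lb_id = Column(String(36), ForeignKey('lbs.id'))

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  session = Session(engine)
  lb = LoadBalancer(id='lb1')
  session.add(lb)
  session.flush()
  _ = lb.pools                             # collection loaded while empty
  session.add(Pool(id='p1', lb_id='lb1'))  # pool added behind the relation
  session.flush()
  print(len(lb.pools))  # 0 -- the stale view a driver would get
  session.refresh(lb)   # the refresh the description says is missing
  print(len(lb.pools))  # 1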

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585538

Title:
  Creating LBaaS pool related to LB (not listener) does not reflect the
  new pool to provider's driver

Status in neutron:
  New

Bug description:
  1. Create LB
  2. Create pool related to that LB (with --loadbalancer argument).
  3. The LB object passed as an argument to the provider's driver handling
  does not include the new pool in the LB's pools parameter.

  The problem occurs because the context is not refreshed for the pool's
  LB object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585537] [NEW] Healthmonitor is not deleted from DB when its pool is deleted

2016-05-25 Thread Evgeny Fedoruk
Public bug reported:

When an LBaaS pool that has a health monitor is deleted, its health
monitor remains in the DB.

Recreate:
1. Create LB, Listener, Pool with HM.
2. Delete Pool
3. See HM is still in lbaas_healthmonitors table in DB
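
A minimal, generic SQLAlchemy sketch (toy models, not the neutron-lbaas
schema) of one way to make the monitor row follow its pool:

  from sqlalchemy import Column, ForeignKey, String, create_engine
  from sqlalchemy.orm import Session, declarative_base, relationship

  Base = declarative_base()

  class Pool(Base):
      __tablename__ = 'pools'
      id = Column(String(36), primary_key=True)
      # delete-orphan removes the monitor row together with its pool
      healthmonitor = relationship('HealthMonitor', uselist=False,
                                   cascade='all, delete-orphan')

  class HealthMonitor(Base):
      __tablename__ = 'healthmonitors'
      id = Column(String(36), primary_key=True)
      pool_id = Column(String(36), ForeignKey('pools.id'))

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  session = Session(engine)
  session.add(Pool(id='p1', healthmonitor=HealthMonitor(id='hm1')))
  session.commit()
  session.delete(session.get(Pool, 'p1'))
  session.commit()
  print(session.query(HealthMonitor).count())  # 0 -- no orphaned monitor left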

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585537

Title:
  Healthmonitor is not deleted from DB when its pool is deleted

Status in neutron:
  New

Bug description:
  When an LBaaS pool that has a health monitor is deleted, its health
  monitor remains in the DB.

  Recreate:
  1. Create LB, Listener, Pool with HM.
  2. Delete Pool
  3. See HM is still in lbaas_healthmonitors table in DB

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585524] [NEW] neutron server Error: TooManyExternalNetworks

2016-05-25 Thread flynnmmm
Public bug reported:

Main steps:
1. Create 2 external networks, each with a different subnet, with neutron CLI
commands; there is no error info from the CLI.
e.g. neutron net-create --router:external=True --provider:physical_network
provider100 --provider:network_type flat provider100
2. Create 2 routers, each connected to one of the external nets; there is no
error info from the CLI.
3. Create 1 floating IP from one of the external networks; no error info from
the CLI.
4. Create 1 private network, and try creating a VM connected to the private
network.
There is no response to the command: nova boot xxx.
We can see errors on the screen; it seems the neutron CLI needs more checking
when creating multiple external networks.
q-svc:
2016-05-25 00:55:39.756 ERROR oslo_messaging.rpc.server 
[req-8ff829a5-2241-4ad0-896e-136b1de3efe7 None None] Exception during handling 
message
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server Traceback (most recent 
call last):
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 153, 
in dispatch
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server result = func(ctxt, 
**new_args)
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 214, in 
get_external_network_id
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server net_id = 
self.plugin.get_external_network_id(context)
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/db/external_net_db.py", line 199, in 
get_external_network_id
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server raise 
n_exc.TooManyExternalNetworks()
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server 
TooManyExternalNetworks: More than one external network exists.
2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server
neutron l3-agent:
2016-05-24 22:28:22.418 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router '69b7ca3c-3aa5-44eb-bec8-8e53accbde64'
2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 485, in 
_process_router_update
2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 417, in 
_process_router_if_compatible
2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent if ex_net_id != 
self._fetch_external_net_id(force=True):
2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 297, in 
_fetch_external_net_id
2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent raise Exception(msg)
2016-05-24 22:28:22.418 TRACE neutron.agent.l3.agent Exception: The 
'gateway_external_network_id' option must be configured for this agent as 
Neutron has more than one external network.
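
A configuration workaround sketch: the get_external_network_id RPC shown above
only succeeds when a single external network exists, so with several networks
each L3 agent has to be pinned to one of them in l3_agent.ini (the UUID is a
placeholder):

[DEFAULT]
# UUID of the one external network this agent should serve
gateway_external_network_id = <external-net-uuid>

Restart the l3 agent after setting the option.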

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585524

Title:
  neutron server Error:  TooManyExternalNetworks

Status in neutron:
  New

Bug description:
  Main steps:
  1. Create 2 external networks, each with a different subnet, using neutron
  CLI commands; the CLI reports no errors.
  e.g. neutron net-create --router:external=True --provider:physical_network
  provider100 --provider:network_type flat provider100
  2. Create 2 routers, each connected to one of the external networks; the
  CLI reports no errors.
  3. Create 1 floating IP from one of the external networks; no errors from
  the CLI.
  4. Create 1 private network, and try creating a VM connected to the private
  network.
  There is no response from the command: nova boot xxx.
  We can see errors in the screen session; it seems neutron needs more
  validation when more than one external network is created.
  q-svc:
  2016-05-25 00:55:39.756 ERROR oslo_messaging.rpc.server 
[req-8ff829a5-2241-4ad0-896e-136b1de3efe7 None None] Exception during handling 
message
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-05-25 00:55:39.756 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/

[Yahoo-eng-team] [Bug 1183523] Re: db-archiving fails to clear some deleted rows from instances table

2016-05-25 Thread Markus Zoeller (markus_z)
As we use the "direct-release" model in Nova we don't use the
"Fix Comitted" status for merged bug fixes anymore. I'm setting
this manually to "Fix Released" to be consistent.

[1] "[openstack-dev] [release][all] bugs will now close automatically
when patches merge"; Doug Hellmann; 2015-12-07;
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html


** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1183523

Title:
  db-archiving fails to clear some deleted rows from instances table

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Downstream bug report from Red Hat Bugzilla against Grizzly:
  https://bugzilla.redhat.com/show_bug.cgi?id=960644

  In unit tests, db-archiving moves all 'deleted' rows to the shadow
  tables.  However, in the real-world test, some deleted rows got stuck
  in the instances table.

  I suspect a bug in the way we deal with foreign key constraints.
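
  One hypothetical way this can happen (the table and constraint names
  below are examples, not taken from the report): if rows in a child table
  are not archived first, deleting the parent rows fails and they stay in
  the instances table.

  $ mysql nova -e "DELETE FROM instances WHERE deleted != 0;"
  ERROR 1451 (23000) at line 1: Cannot delete or update a parent row: a
  foreign key constraint fails (`nova`.`instance_info_caches`, CONSTRAINT
  ... FOREIGN KEY (`instance_uuid`) REFERENCES `instances` (`uuid`))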

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1183523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582278] Re: [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from one NUMA node and PCI device from another NUMA node.

2016-05-25 Thread Markus Zoeller (markus_z)
As we use the "direct-release" model in Nova we don't use the
"Fix Comitted" status for merged bug fixes anymore. I'm setting
this manually to "Fix Released" to be consistent.

[1] "[openstack-dev] [release][all] bugs will now close automatically
when patches merge"; Doug Hellmann; 2015-12-07;
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html


** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582278

Title:
  [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from
  one NUMA node and PCI device from another NUMA node.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Environment:
  Two NUMA nodes on compute host (node-0 and node-1).
  One SR-IOV PCI device associated with NUMA node-1.
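
  How the PCI device's NUMA association can be verified on the compute
  host (the PCI address below is only an example):

  $ cat /sys/bus/pci/devices/0000:05:00.0/numa_node
  1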

  Steps to reproduce:
   1) Deploy env with SR-IOV and CPU pinning enabled
   2) Create new flavor with cpu pinning:
  nova flavor-show m1.small.performance
  +----------------------------+------------------------------------------------------+
  | Property                   | Value                                                |
  +----------------------------+------------------------------------------------------+
  | OS-FLV-DISABLED:disabled   | False                                                |
  | OS-FLV-EXT-DATA:ephemeral  | 0                                                    |
  | disk                       | 20                                                   |
  | extra_specs                | {"hw:cpu_policy": "dedicated", "hw:numa_nodes": "1"} |
  | id                         | 7b0e5ee0-0bf7-4a46-9653-9279a947c650                 |
  | name                       | m1.small.performance                                 |
  | os-flavor-access:is_public | True                                                 |
  | ram                        | 2048                                                 |
  | rxtx_factor                | 1.0                                                  |
  | swap                       |                                                      |
  | vcpus                      | 1                                                    |
  +----------------------------+------------------------------------------------------+
   3) Download the ubuntu image
   4) Create an SR-IOV port and boot a VM on this port with the
  m1.small.performance flavor:
  NODE_1='node-4.test.domain.local'
  NODE_2='node-5.test.domain.local'
  NET_ID_1=$(neutron net-list | grep net_EW_2 | awk '{print$2}')
  neutron port-create $NET_ID_1 --binding:vnic-type direct --device_owner 
nova-compute --name sriov_23
  port_id=$(neutron port-list | grep 'sriov_23' | awk '{print$2}')
  nova boot vm23 --flavor m1.small.performance --image ubuntu_image 
--availability-zone nova:$NODE_1 --nic port-id=$port_id --key-name vm_key

  Expected results:
   VM is in an ACTIVE state
  Actual result:
   In most cases the state is ERROR, with the following logs:

  2016-05-13 08:25:56.598 29097 ERROR nova.pci.stats 
[req-26138c0b-fa55-4ff8-8f3a-aad980e3c815 d864c4308b104454b7b46fb652f4f377 
9322dead0b5d440986b12596d9cbff5b - - -] Failed to allocate PCI devices for 
instance. Unassigning devices back to pools. This should not happen, since the 
scheduler should have accurate information, and allocation during claims is 
controlled via a hold on the compute node semaphore
  2016-05-13 08:25:57.502 29097 INFO nova.virt.libvirt.driver 
[req-26138c0b-fa55-4ff8-8f3a-aad980e3c815 d864c4308b104454b7b46fb652f4f377 
9322dead0b5d440986b12596d9cbff5b - - -] [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] Creating image
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
[req-26138c0b-fa55-4ff8-8f3a-aad980e3c815 d864c4308b104454b7b46fb652f4f377 
9322dead0b5d440986b12596d9cbff5b - - -] Instance failed network setup after 1 
attempt(s)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1570, in 
_allocate_network_async
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
bind_host_id=bind_host_id)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 666, in 
allocate_for_instance
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
self._delete_ports(neutron, instance, created_port_ids)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
self.force_reraise()
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 645, in 
allocate_for_instance
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
bind_host_id=bind_host_id)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.

[Yahoo-eng-team] [Bug 1575335] Re: Out-of-tree compute drivers no longer loading

2016-05-25 Thread Markus Zoeller (markus_z)
As we use the "direct-release" model in Nova we don't use the
"Fix Comitted" status for merged bug fixes anymore. I'm setting
this manually to "Fix Released" to be consistent.

[1] "[openstack-dev] [release][all] bugs will now close automatically
when patches merge"; Doug Hellmann; 2015-12-07;
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575335

Title:
  Out-of-tree compute drivers no longer loading

Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Committed

Bug description:
  Commit
  
https://github.com/openstack/nova/commit/8eb03de1eb83a6cd2d4d41804e1b8253f94e5400
  removed the mechanism by which nova-powervm was loading its Compute
  driver from out of tree, resulting in the following failure to start
  up n-cpu:

  2016-04-25 23:53:46.581 32459 INFO nova.virt.driver [-] Loading compute 
driver 'nova_powervm.virt.powervm.driver.PowerVMDriver'
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver [-] Unable to load the 
virtualization driver
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver Traceback (most recent 
call last):
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver   File 
"/opt/stack/nova/nova/virt/driver.py", line 1623, in load_compute_driver
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver virtapi)
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 44, in 
import_object
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver return 
import_class(import_str)(*args, **kwargs)
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in 
import_class
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver __import__(mod_str)
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver ImportError: No module 
named nova_powervm.virt.powervm.driver
  2016-04-25 23:53:46.582 32459 ERROR nova.virt.driver 
  n-cpu failed to start
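
  For context, an out-of-tree driver is selected by its full import path
  in nova.conf, roughly as below (a sketch; the option value matches the
  log above):

  [DEFAULT]
  compute_driver = nova_powervm.virt.powervm.driver.PowerVMDriver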

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1575335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585520] [NEW] Only some specific router or ovs ports which SRI-OV port need to reach but now can not on the same compute

2016-05-25 Thread dongwenshuai
Public bug reported:

In some cases, we need the SR-IOV ports to be able to reach some specific
OVS ports, but not all the OVS ports on the same compute node, or to reach
specific router internal ports, without adding the MACs of all the router
ports into the FDB tables on the same compute node.

I have seen some code changes and bugs about this problem before. All the
solutions add the MACs of all the OVS ports or router internal ports into
the NIC FDB tables. I think that approach is flawed, as it takes up and
wastes the NIC's limited register resources. If there are a lot of OVS
ports on one compute node, the L2 agent would add the MACs of all the OVS
ports into the FDB table. But because the number of physical NIC registers
is limited, some MACs we want to reach can't be added into the FDB table.
In that case, the SR-IOV port can reach some ports we did not want, and
can't reach the specific ports we hope to.

In my opinion, adding port extensions such as fdb_enable and sriov_enable
would be better. CRUD operations are easy for a tenant, and the tenant
does not need to know the Linux fdb commands.
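
For reference, the FDB population those earlier changes perform boils down to
iproute2 calls such as the following (the MAC and device names are made up);
the proposed fdb_enable/sriov_enable extension would let a tenant opt
individual ports in or out of this instead of mirroring every MAC:

$ bridge fdb add 52:54:00:12:34:56 dev eth2
$ bridge fdb show dev eth2 | grep 52:54:00:12:34:56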

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585520

Title:
  Only some specific router or ovs ports which SRI-OV port need to reach
  but now can not on the same compute

Status in neutron:
  New

Bug description:
  In some cases, we need the SR-IOV ports to be able to reach some specific
  OVS ports, but not all the OVS ports on the same compute node, or to
  reach specific router internal ports, without adding the MACs of all the
  router ports into the FDB tables on the same compute node.

  I have seen some code changes and bugs about this problem before. All the
  solutions add the MACs of all the OVS ports or router internal ports into
  the NIC FDB tables. I think that approach is flawed, as it takes up and
  wastes the NIC's limited register resources. If there are a lot of OVS
  ports on one compute node, the L2 agent would add the MACs of all the OVS
  ports into the FDB table. But because the number of physical NIC
  registers is limited, some MACs we want to reach can't be added into the
  FDB table. In that case, the SR-IOV port can reach some ports we did not
  want, and can't reach the specific ports we hope to.

  In my opinion, adding port extensions such as fdb_enable and sriov_enable
  would be better. CRUD operations are easy for a tenant, and the tenant
  does not need to know the Linux fdb commands.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585515] [NEW] Paramiko doesn't work with Nova

2016-05-25 Thread Bartek Żurawski
Public bug reported:

It looks like Paramiko 2.0.0 again breaks nova, which currently has a
requirement for 'paramiko>=1.16.0 # LGPL'.

nova.tests.unit.api.openstack.compute.test_keypairs.KeypairsTestV210.test_keypair_create_duplicate
--

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/api/openstack/compute/test_keypairs.py", line 237, 
in test_keypair_create_duplicate
self.controller.create, self.req, body=body)
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 485, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 496, in assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 547, in _matchHelper
mismatch = matcher.match(matchee)
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
mismatch = self.exception_matcher.match(exc_info)
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
mismatch = matcher.match(matchee)
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 475, in match
reraise(*matchee)
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
result = matchee()
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 1049, in __call__
return self._callable_object(*self._args, **self._kwargs)
  File "nova/api/openstack/wsgi.py", line 961, in version_select
return func.func(self, *args, **kwargs)
  File "nova/api/openstack/extensions.py", line 504, in wrapped
raise webob.exc.HTTPInternalServerError(explanation=msg)
webob.exc.HTTPInternalServerError: Unexpected API Error. Please report this 
at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.



Captured pythonlogging:
~~~
2016-05-25 09:55:14,571 INFO [nova.api.openstack] Loaded extensions: 
['os-keypairs', 'servers']
2016-05-25 09:55:16,314 ERROR [nova.api.openstack.extensions] Unexpected 
exception in API method
Traceback (most recent call last):
  File "nova/api/openstack/extensions.py", line 478, in wrapped
return f(*args, **kwargs)
  File "nova/api/validation/__init__.py", line 73, in wrapper
return func(*args, **kwargs)
  File "nova/api/openstack/compute/keypairs.py", line 72, in create
return self._create(req, body, type=True, user_id=user_id)
  File "nova/api/openstack/compute/keypairs.py", line 132, in _create
context, user_id, name, key_type)
  File "nova/exception.py", line 110, in wrapped
payload)
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
self.force_reraise()
  File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 197, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "nova/exception.py", line 89, in wrapped
return f(self, context, *args, **kw)
  File "nova/compute/api.py", line 4040, in create_key_pair
user_id, key_type)
  File "nova/compute/api.py", line 4062, in _generate_key_pair
return crypto.generate_key_pair()
  File "nova/crypto.py", line 152, in generate_key_pair
key = generate_key(bits)
  File "nova/crypto.py", line 144, in generate_key
key = paramiko.RSAKey(vals=(rsa.e, rsa.n))
TypeError: __init__() got an unexpected keyword argument 'vals'
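
The regression is straightforward to reproduce outside of nova (the version
pin and the RSA values below are only illustrative):

$ pip install 'paramiko==2.0.0'
$ python -c "import paramiko; paramiko.RSAKey(vals=(65537, 35))"
...
TypeError: __init__() got an unexpected keyword argument 'vals'

Paramiko 2.0 rebuilt its key classes on top of the cryptography library and
dropped the 'vals' keyword that nova/crypto.py's generate_key() passes to
RSAKey().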

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585515

Title:
  Paramiko doesn't work with Nova

Status in OpenStack Compute (nova):
  New

Bug description:
  It looks like Paramiko 2.0.0 again breaks nova, which currently has a
  requirement for 'paramiko>=1.16.0 # LGPL'.

  
nova.tests.unit.api.openstack.compute.test_keypairs.KeypairsTestV210.test_keypair_create_duplicate
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/api/openstack/compute/test_keypairs.py", line 
237, in test_keypair_create_duplic

[Yahoo-eng-team] [Bug 1585510] [NEW] [RFE] openvswitch-agent support rootwrap daemon when hypervisor is XenServer

2016-05-25 Thread huan
Public bug reported:

As titled, when XenServer is the hypervisor we want to implement rootwrap
daemon mode in the neutron-openvswitch-agent that runs on the compute node.

The neutron-openvswitch-agent running on the compute node (DomU) cannot
support rootwrap daemon mode today. This is because XenServer separates
Dom0 (the privileged domain) from DomU (the user domain): the br-int bridge
used by the neutron-openvswitch-agent on the compute node resides in Dom0,
so all the ovs-vsctl/ovs-ofctl/iptables/ipset commands the agent executes
need to run in Dom0, not DomU, which is different from other hypervisors.

https://github.com/openstack/neutron/blob/master/bin/neutron-rootwrap-
xen-dom0 is the current implementation, but it cannot support the rootwrap
daemon.

We noticed rootwrap produces significant performance overhead, and we want
to implement rootwrap daemon mode when XenServer is the hypervisor to
improve performance.

Proposal: subclass and override some classes/functions from oslo.rootwrap
to achieve the goal. I have already done a POC that works well.
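
For context, these are the neutron [agent] options involved; the daemon helper
named below does not exist yet and is exactly what this RFE proposes:

[agent]
# today: a new rootwrap process per command, forwarded to Dom0 via XenAPI
root_helper = neutron-rootwrap-xen-dom0 /etc/neutron/rootwrap.conf
# desired: a long-lived daemon doing the same forwarding (hypothetical name)
root_helper_daemon = neutron-rootwrap-xen-dom0-daemon /etc/neutron/rootwrap.conf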

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585510

Title:
  [RFE] openvswitch-agent support rootwrap daemon when hypervisor is
  XenServer

Status in neutron:
  New

Bug description:
  As titled, when XenServer is the hypervisor we want to implement rootwrap
  daemon mode in the neutron-openvswitch-agent that runs on the compute
  node.

  The neutron-openvswitch-agent running on the compute node (DomU) cannot
  support rootwrap daemon mode today. This is because XenServer separates
  Dom0 (the privileged domain) from DomU (the user domain): the br-int
  bridge used by the neutron-openvswitch-agent on the compute node resides
  in Dom0, so all the ovs-vsctl/ovs-ofctl/iptables/ipset commands the agent
  executes need to run in Dom0, not DomU, which is different from other
  hypervisors.

  https://github.com/openstack/neutron/blob/master/bin/neutron-rootwrap-
  xen-dom0 is the current implementation, but it cannot support the
  rootwrap daemon.

  We noticed rootwrap produces significant performance overhead, and we
  want to implement rootwrap daemon mode when XenServer is the hypervisor
  to improve performance.

  Proposal: subclass and override some classes/functions from
  oslo.rootwrap to achieve the goal. I have already done a POC that works
  well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp