[Yahoo-eng-team] [Bug 1478961] Re: db sync on federation failed if there is existing data

2015-10-01 Thread Thierry Carrez
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
Milestone: None => liberty-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478961

Title:
  db sync on federation failed if there is existing data

Status in Keystone:
  Fix Released

Bug description:
  If you have an existing entry in the identity_provider table, when
  updating from juno to kilo, it fails with the following error

  OperationalError: (OperationalError) (1048, "Column 'remote_id' cannot
  be null") 'INSERT INTO idp_remote_ids (idp_id, remote_id) VALUES (%s,
  %s)' ('https://cern.ch/login', None)

  The migrate_repo goes from 2 to 6 and it fails in step 7.

  The issue is linked to a new field on the identity_provider table called
'remote_id', which is created in step 3 and left empty.
  Then in step 7 the migration reads that field and inserts it into
idp_remote_ids, which does not accept null values.
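
  A minimal sketch (not the actual Keystone fix) of a NULL-tolerant version of
the step-007 data copy; the table and column names are taken from the error
above, everything else is assumed:

    import sqlalchemy as sql

    def upgrade(migrate_engine):
        meta = sql.MetaData(bind=migrate_engine)
        idp_table = sql.Table('identity_provider', meta, autoload=True)
        remote_id_table = sql.Table('idp_remote_ids', meta, autoload=True)
        for idp in idp_table.select().execute():
            if idp.remote_id is None:
                # Providers created before the remote_id column existed have
                # nothing to copy into idp_remote_ids.
                continue
            remote_id_table.insert({'idp_id': idp.id,
                                    'remote_id': idp.remote_id}).execute()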

  2015-07-28 15:04:00.247 18168 TRACE keystone Traceback (most recent call 
last):
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/bin/keystone-manage", line 44, in 
  2015-07-28 15:04:00.247 18168 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cli.py", line 585, in main
  2015-07-28 15:04:00.247 18168 TRACE keystone CONF.command.cmd_class.main()
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cli.py", line 76, in main
  2015-07-28 15:04:00.247 18168 TRACE keystone 
migration_helpers.sync_database_to_version(extension, version)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/keystone/common/sql/migration_helpers.py", 
line 249, in sync_database_to_version
  2015-07-28 15:04:00.247 18168 TRACE keystone 
_sync_extension_repo(extension, version)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/keystone/common/sql/migration_helpers.py", 
line 219, in _sync_extension_repo
  2015-07-28 15:04:00.247 18168 TRACE keystone init_version=init_version)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py", line 79, in 
db_sync
  2015-07-28 15:04:00.247 18168 TRACE keystone return 
versioning_api.upgrade(engine, repository, version)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 186, in 
upgrade
  2015-07-28 15:04:00.247 18168 TRACE keystone return _migrate(url, 
repository, version, upgrade=True, err=err, **opts)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File "", line 2, in 
_migrate
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/util/__init__.py", line 
160, in with_engine
  2015-07-28 15:04:00.247 18168 TRACE keystone return f(*a, **kw)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 366, in 
_migrate
  2015-07-28 15:04:00.247 18168 TRACE keystone schema.runchange(ver, 
change, changeset.step)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/schema.py", line 93, in 
runchange
  2015-07-28 15:04:00.247 18168 TRACE keystone change.run(self.engine, step)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/migrate/versioning/script/py.py", line 148, 
in run
  2015-07-28 15:04:00.247 18168 TRACE keystone script_func(engine)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib/python2.7/site-packages/keystone/contrib/federation/migrate_repo/versions/007_add_remote_id_table.py",
 line 39, in upgrade
  2015-07-28 15:04:00.247 18168 TRACE keystone 
remote_id_table.insert(remote_idp_entry).execute()
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/base.py", line 386, in 
execute
  2015-07-28 15:04:00.247 18168 TRACE keystone return 
e._execute_clauseelement(self, multiparams, params)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1759, in 
_execute_clauseelement
  2015-07-28 15:04:00.247 18168 TRACE keystone return 
connection._execute_clauseelement(elem, multiparams, params)
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in 
_execute_clauseelement
  2015-07-28 15:04:00.247 18168 TRACE keystone compiled_sql, 
distilled_params
  2015-07-28 15:04:00.247 18168 TRACE keystone   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in 
_execute_context
  2015-07-28 

[Yahoo-eng-team] [Bug 1501698] [NEW] LDAP identity backend does not honor list_limit

2015-10-01 Thread Alexander Makarov
Public bug reported:

list_limit set in the [identity] section is not used to limit the user list:
the result contains the entire set of users returned by the query.
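
A hypothetical keystone.conf excerpt showing the option the LDAP identity
backend is ignoring; 100 is an arbitrary value, meant to cap the number of
entries returned by user list calls:

  [identity]
  list_limit = 100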

** Affects: keystone
 Importance: Undecided
 Assignee: Alexander Makarov (amakarov)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Alexander Makarov (amakarov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1501698

Title:
  LDAP identity backend does not honor list_limit

Status in Keystone:
  New

Bug description:
  list_limit set in the [identity] section is not used to limit the user list:
  the result contains the entire set of users returned by the query.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1501698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501440] Re: Ironic driver uses node's UUID instead of name

2015-10-01 Thread Sylvain Bauza
I feel that this feature request would need a huge effort to change
the current situation and would be better proposed as a blueprint; see
this wiki page for more explanation:

https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#How_do_I_get_my_code_merged.3F

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501440

Title:
  Ironic driver uses node's UUID instead of name

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When nova creates a hypervisor from an Ironic node, the hypervisor is
  created with hypervisor_hostname set to the UUID of the Ironic node.
  This is inconvenient, as it's not very human-friendly. It would be
  nice if the hypervisor_hostname attribute could be set to the node's
  name, or at least some combination, such as `node.name + '-' +
  node.uuid`. The relevant line is here:

  
https://github.com/openstack/nova/blob/stable/kilo/nova/virt/ironic/driver.py#L290

  This is on CentOS 7, and yum shows me as running version
  2015.1.1.dev18 for all nova packages.

  I tried just changing the line above to read `'hypervisor_hostname':
  str(node.name),`, but this caused no hypervisors to get created,
  although nothing crashed, which makes it seem like there's more that
  needs to be done than just changing that line.
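
  A minimal standalone sketch (not Nova code) of the naming scheme suggested
above, falling back to the bare UUID when the Ironic node has no name:

    def hypervisor_hostname(node_name, node_uuid):
        if node_name:
            return '%s-%s' % (node_name, node_uuid)
        return node_uuid

    print(hypervisor_hostname('ironic-node-1', '6a1e4f3c'))  # ironic-node-1-6a1e4f3c
    print(hypervisor_hostname(None, '6a1e4f3c'))             # 6a1e4f3c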

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501703] [NEW] unit test failures on 32 bit architectures

2015-10-01 Thread James Page
Public bug reported:

Tests all pass fine in Ubuntu on 64 bit archs; however, on a 32 bit
architecture (which is how we build packages in 14.04), two unit tests
fail - this is an int/long type problem.
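
A minimal Python 2 illustration of the underlying difference: on a 32 bit
build a mask such as 0xffffffff is a long, and hex() appends an 'L' suffix,
while format() renders the same value identically on both architectures (the
actual mask used by the test is assumed here):

  mark, mask = 0x400, 0xffffffff
  print('%s/%s' % (hex(mark), hex(mask)))                    # 0x400/0xffffffffL on 32 bit
  print('%s/%s' % (format(mark, '#x'), format(mask, '#x')))  # 0x400/0xffffffff everywhere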

==
FAIL: 
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark
--
_StringException: Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
633, in test__make_canonical_fwmark
'type': 'unicast'}, actual)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
raise mismatch_error
MismatchError: !=:
reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
633, in test__make_canonical_fwmark
'type': 'unicast'}, actual)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
raise mismatch_error
MismatchError: !=:
reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}


==
FAIL: 
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark_integer
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark_integer
--
_StringException: Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
642, in test__make_canonical_fwmark_integer
'type': 'unicast'}, actual)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
raise mismatch_error
MismatchError: !=:
reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
642, in test__make_canonical_fwmark_integer
'type': 'unicast'}, actual)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
raise mismatch_error
MismatchError: !=:
reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501703

Title:
  unit test failures on 32 bit architectures

Status in neutron:
  New

Bug description:
  Tests all pass fine in Ubuntu on 64 bit archs; however, on a 32 bit
  architecture (which is how we build packages in 14.04), two unit tests
  fail - this is an int/long type problem.

  ==
  FAIL: 
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark
  
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark
  --
  _StringException: Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
633, in 

[Yahoo-eng-team] [Bug 1501740] [NEW] Creating a region without request parameters failed.

2015-10-01 Thread Kouichi Katano
Public bug reported:

Use Identity API v3 (CURRENT)

URL: http://developer.openstack.org/api-ref-
identity-v3.html#createRegion

Issue: Creating a region fails when optional parameters are not
specified.

"POST /v3/regions" has 3 parameters, which are "region", "description" and
"parent_region_id".
"description" and "parent_region_id" are optional.
Although a call to this API with only the "region" parameter should succeed, it fails.

I confirmed this issue by the following command:

curl -s -X POST -H "X-Auth-Token: $TOKEN" -H "Content-type: application/json" 
http://localhost:5000/v3/regions -d '{ "region": {} }' | python -m json.tool
{
"error": {
"code": 400,
"message": "Expecting to find region in request body - the server could 
not comply with the request since it is either malformed or otherwise 
incorrect. The client is assumed to be in error.",
"title": "Bad Request"
}
}
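
For comparison, the same request with one of the documented optional fields
supplied (implied to succeed by the report above, not re-verified here):

curl -s -X POST -H "X-Auth-Token: $TOKEN" -H "Content-type: application/json" \
  http://localhost:5000/v3/regions \
  -d '{ "region": { "description": "example region" } }' | python -m json.tool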

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1501740

Title:
  Creating a region without request parameters failed.

Status in Keystone:
  New

Bug description:
  Use Identity API v3 (CURRENT)

  URL: http://developer.openstack.org/api-ref-
  identity-v3.html#createRegion

  Issue: Creating a region fails when optional parameters are not
  specified.

  "POST /v3/regions" has 3 parameters, which are "region", "description" and
  "parent_region_id".
  "description" and "parent_region_id" are optional.
  Although a call to this API with only the "region" parameter should succeed, it fails.

  I confirmed this issue by the following command:

  curl -s -X POST -H "X-Auth-Token: $TOKEN" -H "Content-type: application/json" 
http://localhost:5000/v3/regions -d '{ "region": {} }' | python -m json.tool
  {
  "error": {
  "code": 400,
  "message": "Expecting to find region in request body - the server 
could not comply with the request since it is either malformed or otherwise 
incorrect. The client is assumed to be in error.",
  "title": "Bad Request"
  }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1501740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499535] Re: InvalidBDMFormat when running exercise/boot_from_volume.sh

2015-10-01 Thread Davanum Srinivas (DIMS)
*** This bug is a duplicate of bug 1501435 ***
https://bugs.launchpad.net/bugs/1501435

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** This bug has been marked a duplicate of bug 1501435
   osc 1.7 no longer can boot a server from volume

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499535

Title:
  InvalidBDMFormat when running exercise/boot_from_volume.sh

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  New

Bug description:
  1. Exact version of Nova/OpenStack you are running:

  $ git log -1
  commit b6249dc2ad630ecb9a231d0ce65d0f14f2116598
  Merge: 5090142 91b7fa1
  Author: Jenkins 
  Date:   Wed Sep 23 10:28:14 2015 +

  Merge "Add manila to devstack plugin registry"

  
  2. Relevant log files:

  2015-09-24 08:58:14.736 DEBUG nova.api.openstack.wsgi 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Action: 'create', calling 
method: >, 
body: {"server": {"name": "ex-bfv-inst", "imageRef":
   "a2b158b9-a7e0-4c51-9bf3-98196d6cd9e9", "block_device_mapping": 
[{"device_name": "vda"}], "key_name": "test_key", "flavorRef": "1", 
"max_count": 1, "min_co
  unt": 1, "security_groups": [{"name": "boot_secgroup"}]}} from (pid=18220) 
_process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:789
  2015-09-24 08:58:14.737 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from 
(pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/serv
  ers.py:700
  2015-09-24 08:58:14.738 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) 
_create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.738 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  
from (pid=18220) _create_extension_point /opt/stack/nova/nova/api/openstack/com
  pute/servers.py:700
  2015-09-24 08:58:14.738 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.738 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) 
_create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.738 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.739 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.739 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.739 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.739 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.739 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.740 DEBUG nova.api.openstack.compute.servers 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Running 
_create_extension_point for  from (pid=18220) _create_extension_point 
/opt/stack/nova/nova/api/openstack/compute/servers.py:700
  2015-09-24 08:58:14.906 ERROR nova.api.openstack.extensions 
[req-88fcff38-652c-46bc-83a5-b8bdde7c22ac demo demo] Unexpected exception in 
API method
  2015-09-24 08:58:14.906 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2015-09-24 08:58:14.906 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-09-24 08:58:14.906 TRACE 

[Yahoo-eng-team] [Bug 1501722] [NEW] make security group optional in new launch instance

2015-10-01 Thread Masco Kaliyamoorthy
Public bug reported:

The available security group list is a mess in the new launch instance form.

According to the API doc and the legacy launch instance form, the security group is
optional, so it should also be optional in the new launch instance form, but it is not.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501722

Title:
  make security group optional in new launch instance

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The available security group list is a mess in the new launch instance form.

  According to the API doc and the legacy launch instance form, the security group is
optional, so it should also be optional in the new launch instance form, but it is not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501686] [NEW] Incorrect exception handling in DB code involving rollbacked transactions.

2015-10-01 Thread Ann Kamyshnikova
Public bug reported:

I found out that some methods like _create_ha_interfaces
https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L329-L345
contain the following logic:

def create():
    create_something()
    try:
        _do_other_thing()
    except Exception:
        with excutils.save_and_reraise_exception():
            delete_something()

def _do_other_thing():
    with context.session.begin(subtransactions=True):
        ...


The problem is that when an exception is raised in _do_other_thing it is caught
in the except block, but the object cannot be deleted in the except section because
the internal transaction has already been rolled back. We have tests on these methods,
but they are also not correct (for example
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/db/test_l3_hamode_db.py#L360-L377)
because the _do_other_thing() methods are mocked, so the inner transaction is never
created and aborted.

The possible solution is to use nested transaction in such places like
this:

def create():
    with context.session.begin(subtransactions=True):
        create_something()
        try:
            _do_other_thing()
        except Exception:
            with excutils.save_and_reraise_exception():
                delete_something()

def _do_other_thing():
    with context.session.begin(nested=True):
        ...

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501686

Title:
  Incorrect exception handling in DB code involving rollbacked
  transactions.

Status in neutron:
  New

Bug description:
  I found out that some methods like _create_ha_interfaces
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L329-L345
  contain the following logic:

  def create():
      create_something()
      try:
          _do_other_thing()
      except Exception:
          with excutils.save_and_reraise_exception():
              delete_something()

  def _do_other_thing():
      with context.session.begin(subtransactions=True):
          ...

  
  The problem is that when an exception is raised in _do_other_thing it is caught
in the except block, but the object cannot be deleted in the except section because
the internal transaction has already been rolled back. We have tests on these methods,
but they are also not correct (for example
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/db/test_l3_hamode_db.py#L360-L377)
because the _do_other_thing() methods are mocked, so the inner transaction is never
created and aborted.

  The possible solution is to use nested transaction in such places like
  this:

  def create():
      with context.session.begin(subtransactions=True):
          create_something()
          try:
              _do_other_thing()
          except Exception:
              with excutils.save_and_reraise_exception():
                  delete_something()

  def _do_other_thing():
      with context.session.begin(nested=True):
          ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501686/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501729] [NEW] Launching an instance on devstack triggers to an error

2015-10-01 Thread Sergii Turivnyi
Public bug reported:

Preconditions:
3.13.0-61-generic #100-Ubuntu
Devstack

Steps to reproduce:

Login to Horizon as an Admin
Navigate to Project -> Instance
Hit Launch Instance button
In opened window select:
Availability Zone == Nova
Instance Name == test_instance
Flavor == m1.nano
Instance Count ==1
Instance Boot Source == Boot from image
Image Name == cirros-0.3.4-x86_64-uec
Hit launch button

Expected result:
Instance status is Active

Actual result:
Instance status is Error
Error: Failed to perform requested operation on instance "aa", the instance has 
an error status: Please try again later [Error: Build of instance 
68718ad6-73b1-4ddb-a48d-da4265e336fa aborted: Failed to allocate the 
network(s), not rescheduling.].

http://paste.openstack.org/show/474856/

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "devstack_all.log"
   
https://bugs.launchpad.net/bugs/1501729/+attachment/4480822/+files/devstack_all.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501729

Title:
  Launching an instance on devstack triggers to an error

Status in neutron:
  New

Bug description:
  Preconditions:
  3.13.0-61-generic #100-Ubuntu
  Devstack

  Steps to reproduce:

  Login to Horizon as an Admin
  Navigate to Project -> Instance
  Hit Launch Instance button
  In opened window select:
  Availability Zone == Nova
  Instance Name == test_instance
  Flavor == m1.nano
  Instance Count ==1
  Instance Boot Source == Boot from image
  Image Name == cirros-0.3.4-x86_64-uec
  Hit launch button

  Expected result:
  Instance status is Active

  Actual result:
  Instance status is Error
  Error: Failed to perform requested operation on instance "aa", the instance 
has an error status: Please try again later [Error: Build of instance 
68718ad6-73b1-4ddb-a48d-da4265e336fa aborted: Failed to allocate the 
network(s), not rescheduling.].

  http://paste.openstack.org/show/474856/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501735] [NEW] Network interface allocation corrupts instance info cache

2015-10-01 Thread Mark Goddard
Public bug reported:

Allocation of network interfaces for an instance can result in
corruption of the instance info cache in Nova. The result is that the
cache may contain duplicate entries for network interfaces. This can
cause failure to boot nodes, as seen with the Libvirt driver.

Seen on Ubuntu / devstack / commit
b0013d93ffeaed53bc28d9558def26bdb7041ed7.

The issue can be reproduced using an instance with a large number of
interfaces, for example using the heat stack in the attached YAML file
heat-stack-many-interfaces.yaml. For improved reproducibility, add a
short sleep in nova.network.neutronv2.api.API.allocate_for_instance,
just before the call to self.get_instance_nw_info.

This issue was found by SecurityFun23 when testing the fix for bug
#1467581.

The problem appears to be that in
nova.network.neutronv2.api.API.allocate_for_instance, after the Neutron
API calls to create/update ports, but before the instance info cache is
updated in get_instance_nw_info, it is possible for another request to
refresh the instance info cache. This will cause the new/updated ports
to be added to the cache as they are discovered in Neutron. Then, the
original request resumes, and unconditionally adds the new interfaces to
the cache. This results in duplicate entries. The most likely candidate
for another request is probably Neutron network-change notifications,
which are triggered by the port update/create operation. The allocation
of multiple interfaces is more likely to make the problem occur, as
Neutron API requests are made serially for each of the ports, allowing
time for the notifications to arrive.

The perceived problem in a more visual form:

Request:
- Allocate interfaces for an instance 
(nova.network.neutronv2.api.API.allocate_for_instance)
- n x Neutron API port create/updates
--
Notification:
- External event notification from Neutron - network-changed 
(nova.compute.manager.ComputeManager.external_instance_event)
- Refresh instance network cache (network_api.get_instance_nw_info)
- Query ports for device in Neutron
- Add new ports to instance info cache
---
Request:
- Refresh instance network cache with new interfaces (get_instance_nw_info)
- Unconditionally add duplicate interfaces to cache.
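
A minimal standalone sketch (not Nova's actual code) of the kind of guard that
would keep the final cache refresh from appending interfaces that a concurrent
network-changed notification already added; VIFs are represented here as plain
dicts keyed by port id:

  def merge_new_vifs(cached_vifs, new_vifs):
      seen = set(vif['id'] for vif in cached_vifs)
      merged = list(cached_vifs)
      for vif in new_vifs:
          if vif['id'] not in seen:  # skip ports already present in the cache
              merged.append(vif)
              seen.add(vif['id'])
      return merged

  cache = [{'id': 'port-1'}]
  print(merge_new_vifs(cache, [{'id': 'port-1'}, {'id': 'port-2'}]))
  # [{'id': 'port-1'}, {'id': 'port-2'}]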

** Affects: nova
 Importance: Undecided
 Assignee: Mark Goddard (mgoddard)
 Status: New

** Attachment added: "Heat stack with many network interfaces"
   
https://bugs.launchpad.net/bugs/1501735/+attachment/4480839/+files/heat-stack-many-interfaces.yaml

** Changed in: nova
 Assignee: (unassigned) => Mark Goddard (mgoddard)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501735

Title:
  Network interface allocation corrupts instance info cache

Status in OpenStack Compute (nova):
  New

Bug description:
  Allocation of network interfaces for an instance can result in
  corruption of the instance info cache in Nova. The result is that the
  cache may contain duplicate entries for network interfaces. This can
  cause failure to boot nodes, as seen with the Libvirt driver.

  Seen on Ubuntu / devstack / commit
  b0013d93ffeaed53bc28d9558def26bdb7041ed7.

  The issue can be reproduced using an instance with a large number of
  interfaces, for example using the heat stack in the attached YAML file
  heat-stack-many-interfaces.yaml. For improved reproducibility, add a
  short sleep in nova.network.neutronv2.api.API.allocate_for_instance,
  just before the call to self.get_instance_nw_info.

  This issue was found by SecurityFun23 when testing the fix for bug
  #1467581.

  The problem appears to be that in
  nova.network.neutronv2.api.API.allocate_for_instance, after the
  Neutron API calls to create/update ports, but before the instance info
  cache is  updated in get_instance_nw_info, it is possible for another
  request to refresh the instance info cache. This will cause the
  new/updated ports to be added to the cache as they are discovered in
  Neutron. Then, the original request resumes, and unconditionally adds
  the new interfaces to the cache. This results in duplicate entries.
  The most likely candidate for another request is probably Neutron
  network-change notifications, which are triggered by the port
  update/create operation. The allocation of multiple interfaces is more
  likely to make the problem occur, as Neutron API requests are made
  serially for each of the ports, allowing time for the notifications to
  arrive.

  The perceived problem in a more visual form:

  Request:
  - Allocate interfaces for an instance 
(nova.network.neutronv2.api.API.allocate_for_instance)
  - n x Neutron API port create/updates
  --
  Notification:
  - External event notification from Neutron - network-changed 
(nova.compute.manager.ComputeManager.external_instance_event)
  - 

[Yahoo-eng-team] [Bug 1477461] Re: UnicodeDecodeError deleting instance with Spanish as browser language

2015-10-01 Thread Matthias Runge
*** This bug is a duplicate of bug 1488443 ***
https://bugs.launchpad.net/bugs/1488443

** This bug has been marked a duplicate of bug 1488443
   Any action always cause error ( in kilo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1477461

Title:
  UnicodeDecodeError deleting instance with Spanish as browser language

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi all,

  When I try to delete an instance from Horizon, I get an error message
  if my browser is using Spanish as its primary language (and thus
  Horizon uses it too):

  
  ¡Algo ha ido mal!
  Ha ocurrido un error inesperado. Pruebe a refrescar la página. Si esto no lo
soluciona, contacte con su administrador local.
  (In English: "Something has gone wrong! An unexpected error has occurred. Try
refreshing the page. If this does not fix it, contact your local administrator.")

  
  Also, /var/log/horizon/horizon.log shows this trace:

  
  2015-07-23 08:59:47,767 23435 ERROR django.request Internal Server Error: 
/dashboard/project/instances/
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
71, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
89, in dispatch
  return handler(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 223, 
in post
  return self.get(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 159, 
in get
  handled = self.construct_tables()
File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 150, 
in construct_tables
  handled = self.handle_table(table)
File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 125, 
in handle_table
  handled = self._tables[name].maybe_handle()
File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1640, 
in maybe_handle
  return self.take_action(action_name, obj_id)
File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1482, 
in take_action
  response = action.multiple(self, self.request, obj_ids)
File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 
302, in multiple
  return self.handle(data_table, request, object_ids)
File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 
827, in handle
  exceptions.handle(request, ignore=ignore)
File "/usr/lib/python2.7/site-packages/horizon/exceptions.py", line 364, in 
handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 
817, in handle
  (self._get_action_name(past=True), datum_display))
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 9: 
ordinal not in range(128)


  The instance is deleted anyway. Then I change the browser language (in 
Firefox, about:config and set intl.accept_languages to “en”), try to delete 
another instance and everything runs smoothly.
  I suspect there is a problem with the translation of the message “Success: Scheduled
termination of Instance:”, which is shown when using English, but not in Spanish
(the error above appears instead).
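
  A minimal Python 2 sketch of the suspected failure mode; the strings below
are made up, not Horizon's actual messages (the translated action name is a
UTF-8 byte string while the instance name is unicode):

    # -*- coding: utf-8 -*-
    past_action = 'Terminación programada de Instancia:'  # str with non-ASCII bytes
    datum_display = u'mi-instancia'

    try:
        "%s %s" % (past_action, datum_display)  # the str operand is decoded with the ascii codec
    except UnicodeDecodeError as exc:
        print(exc)  # 'ascii' codec can't decode byte 0xc3 ...

    # Decoding the byte string explicitly (or keeping translations as unicode)
    # avoids the implicit ascii decode:
    print(u"%s %s" % (past_action.decode('utf-8'), datum_display))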

  You can try it by yourselves setting your browser to language “es”. My
  versions:

  CentOS Linux release 7.1.1503 (Core)
  openstack-dashboard.noarch   2015.1.0-7.el7  @openstack-kilo

  Regards.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1477461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501703] Re: unit test failures on 32 bit architectures

2015-10-01 Thread James Page
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => James Page (james-page)

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron (Ubuntu)
 Assignee: (unassigned) => James Page (james-page)

** Changed in: neutron (Ubuntu)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501703

Title:
  unit test failures on 32 bit architectures

Status in neutron:
  In Progress
Status in neutron package in Ubuntu:
  In Progress

Bug description:
  Tests all pass fine in Ubuntu on 64 bit archs; however, on a 32 bit
  architecture (which is how we build packages in 14.04), two unit tests
  fail - this is an int/long type problem.

  ==
  FAIL: 
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark
  
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark
  --
  _StringException: Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
633, in test__make_canonical_fwmark
  'type': 'unicast'}, actual)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
  actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
633, in test__make_canonical_fwmark
  'type': 'unicast'}, actual)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
  actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

  
  ==
  FAIL: 
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark_integer
  
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark_integer
  --
  _StringException: Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
642, in test__make_canonical_fwmark_integer
  'type': 'unicast'}, actual)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
  actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
642, in test__make_canonical_fwmark_integer
  'type': 'unicast'}, actual)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
  actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1500688] Re: VNC URL of instance unavailable in CLI

2015-10-01 Thread Sylvain Bauza
** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => Confirmed

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: python-novaclient
   Status: New => Confirmed

** Changed in: python-novaclient
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500688

Title:
  VNC URL of instance unavailable in CLI

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Confirmed

Bug description:
  I use a heat template to build an autoscaling group with
  'OS::Heat::AutoScalingGroup' and 'OS::Nova::Server', and it works fine.
  I can see the instance running both via the CLI and the dashboard. However, I can
  only get to the console through the dashboard. When using the command
  'nova get-vnc-console instance_ID novnc', I get an error: 'ERROR
  (NotFound): The resource could not be found. (HTTP 404) (Request-ID:
  req-6f260624-56ad-45fd-aa21-f86fb2c541d1)' instead of its URL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1500688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500361] Re: Generated config files are completely wrong

2015-10-01 Thread Thierry Carrez
** Changed in: glance/liberty
   Status: Fix Committed => Fix Released

** No longer affects: glance/mitaka

** No longer affects: glance/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1500361

Title:
  Generated config files are completely wrong

Status in Glance:
  Fix Released

Bug description:
  The files generated using oslo-config-generator are completely wrong.
  For example, they are missing [keystone_authtoken] and many more sections.
  This shows in the example config in git (i.e. etc/glance-api.conf in
  Glance's git repo).

  I believe the generator's config file is missing --namespace
  keystonemiddleware.auth_token (perhaps it should be used instead of
  keystoneclient.middleware.auth_token).
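
  A hypothetical oslo-config-generator input file for glance-api showing where
the missing namespace would be declared (the other entries are illustrative,
not taken from Glance's repository):

    [DEFAULT]
    output_file = etc/glance-api.conf.sample
    wrap_width = 80
    namespace = glance.api
    namespace = keystonemiddleware.auth_token
    namespace = oslo.log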

  IMO, this is a critical issue, which should be addressed with highest
  priority. This blocks me from testing Liberty rc1 in Debian.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1500361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501772] [NEW] Metadata proxy process errors with binary user_data

2015-10-01 Thread James Page
Public bug reported:

Booting instances with binary user data content (rather than simple text)
does not work right now:

2015-10-01 13:19:39.109 10854 DEBUG neutron.agent.metadata.namespace_proxy [-] 
{'date': 'Thu, 01 Oct 2015 13:19:39 GMT', 'status': '200', 'content-length': 
'979', 'content-type': 'text/plain; charset=UTF-8', 'content-location': 
u'http://169.254.169.254/openstack/2013-10-17/user_data'} _proxy_request 
/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py:90
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy [-] 
Unexpected error.
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
Traceback (most recent call last):
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py", 
line 55, in __call__
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
req.body)
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py", 
line 91, in _proxy_request
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
LOG.debug(content)
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/logging/__init__.py", line 1437, in debug
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
msg, kwargs = self.process(msg, kwargs)
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/oslo_log/log.py", line 139, in process
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
msg = _ensure_unicode(msg)
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/oslo_log/log.py", line 113, in 
_ensure_unicode
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
errors='xmlcharrefreplace',
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/oslo_utils/encodeutils.py", line 43, in 
safe_decode
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
return text.decode(incoming, errors)
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
return codecs.utf_8_decode(input, errors, True)
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
TypeError: don't know how to handle UnicodeDecodeError in error callback
2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
2015-10-01 13:19:39.112 10854 INFO neutron.wsgi [-] 192.168.21.15 - - 
[01/Oct/2015 13:19:39] "GET /openstack/2013-10-17/user_data HTTP/1.1" 500 343 
0.014536

This is thrown by the log call just prior to the content being served back to the
instance.
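
A minimal Python 2 reproduction of the TypeError above: the
'xmlcharrefreplace' error handler is only defined for encoding, so decoding
binary (non-UTF-8) user_data with it fails inside the error callback instead
of degrading gracefully:

  binary_user_data = b'\x1f\x8b\x08\x00'  # e.g. gzip-compressed user_data
  try:
      binary_user_data.decode('utf-8', 'xmlcharrefreplace')
  except TypeError as exc:
      print(exc)  # don't know how to handle UnicodeDecodeError in error callback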

ProblemType: Bug
DistroRelease: Ubuntu 15.10
Package: neutron-metadata-agent 2:7.0.0~b3-0ubuntu3
ProcVersionSignature: Ubuntu 4.2.0-11.13-generic 4.2.1
Uname: Linux 4.2.0-11-generic x86_64
ApportVersion: 2.19-0ubuntu1
Architecture: amd64
Date: Thu Oct  1 13:38:21 2015
Ec2AMI: ami-05ce
Ec2AMIManifest: FIXME
Ec2AvailabilityZone: nova
Ec2InstanceType: m1.small.osci
Ec2Kernel: None
Ec2Ramdisk: None
JournalErrors: -- No entries --
PackageArchitecture: all
SourcePackage: neutron
UpgradeStatus: No upgrade log present (probably fresh install)
mtime.conffile..etc.neutron.metadata.agent.ini: 2015-10-01T13:18:25.075633

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: neutron (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug ec2-images wily

** Summary changed:

- Metadata proxy process fails to provide user_data
+ Metadata proxy process errors with binary user_data

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501772

Title:
  Metadata proxy process errors with binary user_data

Status in neutron:
  New
Status in neutron package in Ubuntu:
  New

Bug description:
  Booting instances with binary user data content (rather than simple text)
  does not work right now:

  2015-10-01 13:19:39.109 10854 DEBUG neutron.agent.metadata.namespace_proxy 
[-] {'date': 'Thu, 01 Oct 2015 13:19:39 GMT', 'status': '200', 
'content-length': '979', 'content-type': 'text/plain; charset=UTF-8', 
'content-location': u'http://169.254.169.254/openstack/2013-10-17/user_data'} 
_proxy_request 
/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py:90
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
[-] 

[Yahoo-eng-team] [Bug 1501779] [NEW] Failing to delete an ML2 linux bridge b/c it does not exist should not be an ERROR in the logs

2015-10-01 Thread Matt Riedemann
Public bug reported:

I saw this in some ansible jobs in the gate:

2015-09-30 22:37:21.805 26634 ERROR
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
[req-23466df3-f59e-4897-9a22-1abb7c99dfd9
9a365636c1b44c41a9770a26ead28701 cbddab88045d45eeb3d2027a3e265b78 - - -]
Cannot delete bridge brq33213e3f-2b, does not exist

http://logs.openstack.org/57/227957/3/gate/gate-openstack-ansible-dsvm-
commit/de3daa3/logs/aio1-neutron/neutron-linuxbridge-agent.log

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L533

That should not be an ERROR message, it could be INFO at best.  If
you're racing with RPC and a thing is already gone, which you were going
to delete anyway, it's not an error.

** Affects: neutron
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: linuxbridge logging ml2

** Tags added: linuxbridge ml2

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501779

Title:
  Failing to delete an ML2 linux bridge b/c it does not exist should not
  be an ERROR in the logs

Status in neutron:
  In Progress

Bug description:
  I saw this in some ansible jobs in the gate:

  2015-09-30 22:37:21.805 26634 ERROR
  neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
  [req-23466df3-f59e-4897-9a22-1abb7c99dfd9
  9a365636c1b44c41a9770a26ead28701 cbddab88045d45eeb3d2027a3e265b78 - -
  -] Cannot delete bridge brq33213e3f-2b, does not exist

  http://logs.openstack.org/57/227957/3/gate/gate-openstack-ansible-
  dsvm-commit/de3daa3/logs/aio1-neutron/neutron-linuxbridge-agent.log

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L533

  That should not be an ERROR message, it could be INFO at best.  If
  you're racing with RPC and a thing is already gone, which you were
  going to delete anyway, it's not an error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392527] Re: [OSSA 2015-017] Deleting instance while resize instance is running leads to unuseable compute nodes (CVE-2015-3280)

2015-10-01 Thread Matthew Booth
** Changed in: nova
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392527

Title:
  [OSSA 2015-017] Deleting instance while resize instance is running
  leads to unuseable compute nodes (CVE-2015-3280)

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) juno series:
  In Progress
Status in OpenStack Compute (nova) kilo series:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Committed

Bug description:
  Steps to reproduce:
  1) Create a new instance, waiting until its status goes to ACTIVE state
  2) Call resize API
  3) Delete the instance immediately after the task_state is “resize_migrated” 
or vm_state is “resized”
  4) Repeat 1 through 3 in a loop

  I have kept the attached program running for 4 hours; all instances
  created are deleted (nova list returns an empty list), but I noticed
  instance directories with the name “_resize> are not
  deleted from the instance path of the compute nodes (mainly from the
  source compute nodes where the instance was running before resize). If
  I keep this program running for a couple more hours (depending on the
  number of compute nodes), then it completely uses the entire disk of
  the compute nodes (based on the disk_allocation_ratio parameter
  value). Later, the nova scheduler doesn't select these compute nodes for
  launching new vms and starts reporting the error "No valid hosts found".

  Note: Even the periodic tasks don't clean up these orphan instance
  directories from the instance path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497343] Re: Need to consolidate duplicated volume detach code between compute manager and block_device

2015-10-01 Thread Mark Doffman
I believe that for now this is invalid. There is code that is superficially
similar between the 'detach' code in manager.py and the block_device
attach function, but there are subtle differences. The code in
manager.py calls roll_detach on failure, which I believe is
inappropriate for the block_device.py attach function. There isn't an
easy way to re-use this code without a much larger re-factor.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497343

Title:
  Need to consolidate duplicated volume detach code between compute
  manager and block_device

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In this change:

  https://review.openstack.org/#/c/186742/11/nova/virt/block_device.py

  It was pointed out that the change is adding volume detach code that
  is duplicated with what's also in the _shutdown_instance method in
  nova.compute.manager.

  We wanted to get that bug fix into liberty before rc1 but we should
  consolidate this duplicate volume detach code into the
  nova.virt.block_device module and then have the compute manager call
  that.

  This bug is just tracking the reminder to clean this up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501831] [NEW] Evacuate libvirt instance failed with error 'Cannot load 'disk_format' in the base class'

2015-10-01 Thread Christine Wang
Public bug reported:

openstack-nova-12.0.0-201509202117

When evacuating a libvirt instance, it fails with the following error:
NotImplementedError: Cannot load 'disk_format' in the base class

2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2431, in 
spawn
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
block_device_info)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 630, in 
get_disk_info
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher rescue)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 537, in 
get_disk_mapping
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher disk_bus, 
cdrom_bus, root_device_name)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 432, in 
get_root_info
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher if 
image_meta.disk_format == 'iso':
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 66, in 
getter
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
self.obj_load_attr(name)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 555, in 
obj_load_attr
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher _("Cannot 
load '%s' in the base class") % attrname)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
NotImplementedError: Cannot load 'disk_format' in the base class
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher

When a libvirt instance is evacuated, the image_meta is passed in as {}.
So the disk_format field is not populated on the ImageMeta object.

It's unclear to me what the right way to fix this issue is. Should we
change ImageMeta's from_dict to make sure 'disk_format' is always
populated, or should we add an obj_load_attr method to ImageMeta?
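
One possible direction, sketched here purely as an illustration and not as
the agreed fix, is to guard the attribute access on the evacuate path using
obj_attr_is_set(), the standard oslo.versionedobjects check, instead of
assuming the field is always set:

def _root_device_is_iso(image_meta):
    # Sketch only (not the agreed fix): treat a missing disk_format as
    # "not an ISO" instead of letting obj_load_attr() raise
    # NotImplementedError when image_meta arrives empty on evacuate.
    return (image_meta.obj_attr_is_set('disk_format')
            and image_meta.disk_format == 'iso')

The alternative mentioned above, defaulting 'disk_format' in
ImageMeta.from_dict, would address the same failure from the other side.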

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501831

Title:
  Evacuate libvirt instance failed with error 'Cannot load 'disk_format'
  in the base class'

Status in OpenStack Compute (nova):
  New

Bug description:
  openstack-nova-12.0.0-201509202117

  When evacuating a libvirt instance, it fails with the following error:
  NotImplementedError: Cannot load 'disk_format' in the base class

  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2431, in 
spawn
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
block_device_info)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 630, in 
get_disk_info
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher rescue)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 537, in 
get_disk_mapping
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
disk_bus, cdrom_bus, root_device_name)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 432, in 
get_root_info
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher if 
image_meta.disk_format == 'iso':
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 66, in 
getter
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
self.obj_load_attr(name)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 555, in 
obj_load_attr
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
_("Cannot load '%s' in the base class") % attrname)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
NotImplementedError: Cannot load 'disk_format' in the base class
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher

  When a libvirt instance is evacuated, the image_meta is passed in as
  {}. So the disk_format field is not populated on the ImageMeta object.

  It's unclear to me what the right way to fix this issue is. Should we
  change ImageMeta's from_dict to make sure 'disk_format' is always
  populated, or should we add an obj_load_attr method to ImageMeta?

[Yahoo-eng-team] [Bug 1457517] Re: Unable to boot from volume when flavor disk too small

2015-10-01 Thread Serge Hallyn
This fix is in the current wily package, so marking fix released there.

** Changed in: nova (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457517

Title:
  Unable to boot from volume when flavor disk too small

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Vivid:
  New

Bug description:
  [Impact]

   * Without the backport, booting from volume requires a flavor disk size
  larger than the volume size, which is wrong. This patch skips the flavor
  disk size check when booting from volume.

  [Test Case]

   * 1. create a bootable volume
 2. boot from this bootable volume with a flavor that has disk size smaller 
than the volume size
 3. error should be reported complaining disk size too small
 4. apply this patch
 5. boot from that bootable volume with a flavor that has disk size smaller 
than the volume size again
 6. boot should succeed

  [Regression Potential]

   * none

  
  Version: 1:2015.1.0-0ubuntu1~cloud0 on Ubuntu 14.04

  I attempt to boot an instance from a volume:

  nova boot --nic net-id=[NET ID] --flavor v.512mb --block-device
  source=volume,dest=volume,id=[VOLUME
  ID],bus=virtio,device=vda,bootindex=0,shutdown=preserve vm

  This results in nova-api raising a FlavorDiskTooSmall exception in the
  "_check_requested_image" function in compute/api.py. However,
  according to [1], the root disk limit should not apply to volumes.

  [1] http://docs.openstack.org/admin-guide-cloud/content/customize-
  flavors.html
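
  A minimal sketch of the kind of check change described above (the names
  here are illustrative simplifications, not the exact nova code):

    def check_requested_image(flavor_root_gb, image_size_gb, is_volume_backed):
        # Sketch: only enforce the root disk limit for image-backed boots;
        # a volume-backed boot sizes its root disk from the volume instead.
        if is_volume_backed:
            return
        if flavor_root_gb and image_size_gb > flavor_root_gb:
            raise ValueError("Flavor's disk is too small for requested image.")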

  Log (first line is debug output I added showing that it's looking at
  the image that the volume was created from):

  2015-05-21 10:28:00.586 25835 INFO nova.compute.api 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] image: {'min_disk': 0, 'status': 
'active', 'min_ram': 0, 'properties': {u'container_format': u'bare', 
u'min_ram': u'0', u'disk_format': u'qcow2', u'image_name': u'Ubuntu 14.04 
64-bit', u'image_id': u'cf0dffef-30ef-4032-add0-516e88048d85', 
u'libvirt_cpu_mode': u'host-passthrough', u'checksum': 
u'76a965427d2866f006ddd2aac66ed5b9', u'min_disk': u'0', u'size': u'255524864'}, 
'size': 21474836480}
  2015-05-21 10:28:00.587 25835 INFO nova.api.openstack.wsgi 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] HTTP exception thrown: Flavor's disk is 
too small for requested image.

  Temporary solution: I have a special flavor for volume-backed instances so I 
just set the root disk on those to 0, but this doesn't work if volumes are used 
with other flavors.
  Reproduce: create a flavor with a 1 GB root disk size, then try to boot an 
instance from a volume created from an image that is larger than 1 GB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501835] [NEW] Enable IPv6 Prefix Delegation using a new configuration option

2015-10-01 Thread John Davidge
Public bug reported:

With the existing default subnetpool configuration being deprecated[1],
a new method for enabling IPv6 PD will be needed.

A new boolean option in neutron.conf called "ipv6_pd_enabled" will do
the job.

[1]https://bugs.launchpad.net/neutron/+bug/1501328
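
As a sketch of what such an option could look like when registered with
oslo.config (the default value and help text here are assumptions, not a
merged definition):

from oslo_config import cfg

# Hypothetical registration of the proposed boolean option.
ipv6_pd_opts = [
    cfg.BoolOpt('ipv6_pd_enabled',
                default=False,
                help='Enables IPv6 Prefix Delegation for automatic subnet '
                     'CIDR allocation.'),
]
cfg.CONF.register_opts(ipv6_pd_opts)

Operators would then turn it on by setting ipv6_pd_enabled = True in the
DEFAULT section of neutron.conf.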

** Affects: neutron
 Importance: Undecided
 Assignee: John Davidge (john-davidge)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => John Davidge (john-davidge)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501835

Title:
  Enable IPv6 Prefix Delegation using a new configuration option

Status in neutron:
  New

Bug description:
  With the existing default subnetpool configuration being
  deprecated[1], a new method for enabling IPv6 PD will be needed.

  A new boolean option in neutron.conf called "ipv6_pd_enabled" will do
  the job.

  [1]https://bugs.launchpad.net/neutron/+bug/1501328

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500361] Re: Generated config files are completely wrong

2015-10-01 Thread nikhil komawar
RC2 is yet to be out, so I updated the status to Fix Committed from Fix
Released. This will be released when RC2 is out; that is either tomorrow
(Friday Oct 2) or next week.

** Changed in: glance
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1500361

Title:
  Generated config files are completely wrong

Status in Glance:
  Fix Committed

Bug description:
  The files generated using oslo-config-generator are completely wrong.
  For example, they are missing [keystone_authtoken] and many other
  sections. This shows in the example config in git (i.e. etc/glance-api.conf
  in Glance's git repo).

  I believe the generator's config file is missing --namespace
  keystonemiddleware.auth_token (perhaps it should be there instead of
  keystoneclient.middleware.auth_token).

  IMO, this is a critical issue, which should be addressed with highest
  priority. This blocks me from testing Liberty rc1 in Debian.
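
  For illustration, a generator invocation along these lines (the exact
  namespace list is an assumption and would need to match Glance's
  oslo.config.opts entry points) should pull the keystonemiddleware options
  back in:

    oslo-config-generator --output-file etc/glance-api.conf.sample \
      --namespace glance.api \
      --namespace glance.store \
      --namespace keystonemiddleware.auth_token \
      --namespace oslo.log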

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1500361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501873] [NEW] FIP Namespace add/delete race condition seen in DVR router log

2015-10-01 Thread Swaminathan Vasudevan
Public bug reported:

FIP Namespace add/delete race condition seen in the DVR router log. This might 
cause the FIP functionality to fail.
From the trace log it seems that when this happens, a bunch of tests related to 
FIP functionality fail with an SSH timeout waiting for a reply.


Here is the output of the trace that kind of shows the race condition.

Exit code: 0
 execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:156
2015-09-29 21:10:33.433 7884 DEBUG neutron.agent.l3.dvr_local_router [-] 
Removed last floatingip, so requesting the server to delete Floatingip Agent 
Gateway port:{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], 
u'device_owner': u'network:floatingip_agent_gateway', u'port_security_enabled': 
False, u'binding:profile': {}, u'fixed_ips': [{u'subnet_id': 
u'362e9033-db93-4193-9413-1073215ab326', u'prefixlen': 24, u'ip_address': 
u'172.24.5.9'}, {u'subnet_id': u'feb3aa76-53b1-4d4e-b136-412c747ffd30', 
u'prefixlen': 64, u'ip_address': u'2001:db8::a'}], u'id': 
u'044a8e2f-00eb-4231-b526-13cb46dcc42f', u'security_groups': [], 
u'binding:vif_details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 
u'binding:vif_type': u'ovs', u'mac_address': u'fa:16:3e:7a:a6:85', u'status': 
u'DOWN', u'subnets': [{u'ipv6_ra_mode': None, u'cidr': u'2001:db8::/64', 
u'gateway_ip': u'2001:db8::2', u'id': u'feb3aa76-53b1-4d4e-b136-412c747ffd30', 
u'subnetpool_id': None}, {u'ipv6_ra_mode': None, u'cidr': u'172.24.5.0/24', 
u'gateway_ip': u'172.24.5.1', u'id': 
u'362e9033-db93-4193-9413-1073215ab326', u'subnetpool_id': None}], 
u'binding:host_id': u'devstack-trusty-hpcloud-b5-5153724', u'dns_assignment': 
[{u'hostname': u'host-172-24-5-9', u'ip_address': u'172.24.5.9', u'fqdn': 
u'host-172-24-5-9.openstacklocal.'}, {u'hostname': u'host-2001-db8--a', 
u'ip_address': u'2001:db8::a', u'fqdn': u'host-2001-db8--a.openstacklocal.'}], 
u'device_id': u'646bb18b-da52-4ead-a635-012c72c1ccf1', u'name': u'', 
u'admin_state_up': True, u'network_id': 
u'31689320-95d7-44f9-932a-cc82c1bca2b4', u'dns_name': u'', 
u'binding:vnic_type': u'normal', u'tenant_id': u'', u'extra_subnets': []} 
floating_ip_removed_dist 
/opt/stack/new/neutron/neutron/agent/l3/dvr_local_router.py:148

2015-09-29 21:10:34.031 7884 DEBUG neutron.agent.linux.utils [-] Running
command (rootwrap daemon): ['ip', 'netns', 'delete',
'fip-31689320-95d7-44f9-932a-cc82c1bca2b4'] execute_rootwrap_daemon
/opt/stack/new/neutron/neutron/agent/linux/utils.py:101


2015-09-29 21:10:34.043 DEBUG neutron.agent.l3.dvr_local_router 
[req-33413b07-784c-469e-8a35-0e20312a157e None None] FloatingIP agent gateway 
port received from the plugin: {u'allowed_address_pairs': [], 
u'extra_dhcp_opts': [], u'device_owner': u'network:floatingip_agent_gateway', 
u'port_security_enabled': False, u'binding:profile': {}, u'fixed_ips': 
[{u'subnet_id': u'362e9033-db93-4193-9413-1073215ab326', u'prefixlen': 24, 
u'ip_address': u'172.24.5.9'}, {u'subnet_id': 
u'feb3aa76-53b1-4d4e-b136-412c747ffd30', u'prefixlen': 64, u'ip_address': 
u'2001:db8::a'}], u'id': u'044a8e2f-00eb-4231-b526-13cb46dcc42f', 
u'security_groups': [], u'binding:vif_details': {u'port_filter': True, 
u'ovs_hybrid_plug': True}, u'binding:vif_type': u'ovs', u'mac_address': 
u'fa:16:3e:7a:a6:85', u'status': u'ACTIVE', u'subnets': [{u'ipv6_ra_mode': 
None, u'cidr': u'172.24.5.0/24', u'gateway_ip': u'172.24.5.1', u'id': 
u'362e9033-db93-4193-9413-1073215ab326', u'subnetpool_id': None}, 
{u'ipv6_ra_mode': None, u'cidr': u'2001:db8::/64', u'gateway_ip': 
u'2001:db8::2', u'id': 
u'feb3aa76-53b1-4d4e-b136-412c747ffd30', u'subnetpool_id': None}], 
u'binding:host_id': u'devstack-trusty-hpcloud-b5-5153724', u'dns_assignment': 
[{u'hostname': u'host-172-24-5-9', u'ip_address': u'172.24.5.9', u'fqdn': 
u'host-172-24-5-9.openstacklocal.'}, {u'hostname': u'host-2001-db8--a', 
u'ip_address': u'2001:db8::a', u'fqdn': u'host-2001-db8--a.openstacklocal.'}], 
u'device_id': u'646bb18b-da52-4ead-a635-012c72c1ccf1', u'name': u'', 
u'admin_state_up': True, u'network_id': 
u'31689320-95d7-44f9-932a-cc82c1bca2b4', u'dns_name': u'', 
u'binding:vnic_type': u'normal', u'tenant_id': u'', u'extra_subnets': []} 
create_dvr_fip_interfaces 
/opt/stack/new/neutron/neutron/agent/l3/dvr_local_router.py:427


2015-09-29 21:10:34.043 DEBUG neutron.agent.l3.dvr_fip_ns 
[req-33413b07-784c-469e-8a35-0e20312a157e None None] add 
fip-namespace(fip-31689320-95d7-44f9-932a-cc82c1bca2b4) create 
/opt/stack/new/neutron/neutron/agent/l3/dvr_fip_ns.py:133

Exit code: 0
 execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:156
2015-09-29 21:10:34.053 DEBUG neutron.agent.linux.utils 
[req-33413b07-784c-469e-8a35-0e20312a157e None None] Running command (rootwrap 
daemon): ['ip', 'netns', 'exec', 'fip-31689320-95d7-44f9-932a-cc82c1bca2b4', 
'sysctl', '-w', 'net.ipv4.ip_forward=1'] execute_rootwrap_daemon 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:101


2015-09-29 21:10:34.084 ERROR neutron.agent.linux.utils 
[req-33413b07-784c-469e-8a35-0e20312a157e None 

[Yahoo-eng-team] [Bug 1366682] Re: ScrubberFileQueue is never called

2015-10-01 Thread Hemanth Makkapati
*** This bug is a duplicate of bug 1427929 ***
https://bugs.launchpad.net/bugs/1427929

This is fixed with https://review.openstack.org/#/c/161051/

** Changed in: glance
   Importance: Undecided => Low

** This bug has been marked a duplicate of bug 1427929
   Purge dead file-backed scrubber queue code

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1366682

Title:
  ScrubberFileQueue is never called

Status in Glance:
  Incomplete

Bug description:
  It looks like the change I6910b55de8c3b203560d75ff3d1675eda31ae786
  might have broken the `test_scrubber_with_metadata_enc` test since now
  `ScrubberFileQueue` is never called.

  Adding a FIXME

  
https://github.com/openstack/glance/blob/master/glance/tests/functional/test_scrubber.py#L199

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1366682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501851] [NEW] Nova can incorrectly think an instance is volume backed

2015-10-01 Thread Andrew Laski
Public bug reported:

If an instance is booted with "nova boot --block-device
source=image,dest=local..." the instance ends up with no image_ref set
and an entry in the block_device_mappings table.  This confuses the
compute/api.py is_volume_backed_instance method which assumes that if
image_ref isn't set then the instance is volume backed.
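
A sketch of the kind of check that would avoid the confusion (simplified,
using assumed dict-style block device mappings rather than the real BDM
objects):

def is_volume_backed_instance(image_ref, bdms):
    # Sketch: don't infer "volume backed" purely from a missing image_ref;
    # look for a root (boot_index 0) mapping and check where it lands.
    if image_ref:
        return False
    for bdm in bdms:
        if bdm.get('boot_index') == 0:
            return bdm.get('destination_type') == 'volume'
    # No root mapping found: fall back to the old behaviour.
    return True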

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: volumes

** Tags added: volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501851

Title:
  Nova can incorrectly think an instance is volume backed

Status in OpenStack Compute (nova):
  New

Bug description:
  If an instance is booted with "nova boot --block-device
  source=image,dest=local..." the instance ends up with no image_ref set
  and an entry in the block_device_mappings table.  This confuses the
  compute/api.py is_volume_backed_instance method which assumes that if
  image_ref isn't set then the instance is volume backed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501914] [NEW] Liberty devstack failed to launch instance w/ NetApp eSeries.

2015-10-01 Thread Hong
Public bug reported:

1. Exact version of Nova/OpenStack you are running:

Liberty Devstack

commit f4485bae9c719ee6b0c243cf5a69a6461df0bf23
Merge: ace1e8f e5a6f82
Author: Jenkins 
Date:   Thu Oct 1 07:14:41 2015 +

Merge "Cleanup nova v2.1 API testing options"


2. Relevant log files:  n-cpu.log file is in the attachment.

3. Reproduce steps:
- Setup is running with Liberty devstack version on Ubuntu 14.04.
- Connected to a NetApp eSeries (iSCSI) for storage.  (Using multipath to 
manage the array)
- Launch an instance from Horizon, by selecting "launch instance", input an 
Instance Name, m1.small, Instance count: 1, select "Boot from image (creates a 
new volume)", select "cirros..." image, default size(20G for small), then click 
on "Launch"

- The launch instance failed with the following error:

Error: Failed to perform requested operation on instance "testvm", the
instance has an error status: Please try again later [Error: Build of
instance 1304643b-f8f2-4894-89d8-26c1b8d3e438 aborted: Block Device
Mapping is Invalid.].

It seems the host failed to get the new disk from the eSeries storage.

Did some more tests with the following observation:

When I create a new (1st) volume with a certain image (cirros), the host 
created a host entry on the array, started the iSCSI sessions, and mapped the 
volume.  But then the iSCSI sessions disconnected and the host failed to 
discover the volume; "sudo multipath -ll" did not list any volume.
 
Then I tried to create a 2nd instance; the host restarted the iSCSI sessions, 
and created and mapped a new (2nd) volume.  This time the host discovered the 
first volume, but not the newly created (2nd) volume.  Also, the iSCSI sessions 
stayed up this time; they didn't get disconnected.
 
It seems like there might be a problem with the order in which the newly added 
volume is discovered; the discover/rescan command is being used too early.

Also, I tried the same with the Kilo Devstack version, and that version is 
working fine.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "n-cpu.log"
   
https://bugs.launchpad.net/bugs/1501914/+attachment/4481340/+files/n-cpu.log.recreate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501914

Title:
  Liberty devstack failed to launch instance w/ NetApp eSeries.

Status in OpenStack Compute (nova):
  New

Bug description:
  1. Exact version of Nova/OpenStack you are running:

  Liberty Devstack

  commit f4485bae9c719ee6b0c243cf5a69a6461df0bf23
  Merge: ace1e8f e5a6f82
  Author: Jenkins 
  Date:   Thu Oct 1 07:14:41 2015 +

  Merge "Cleanup nova v2.1 API testing options"

  
  2. Relevant log files:  n-cpu.log file is in the attachment.

  3. Reproduce steps:
  - Setup is running with Liberty devstack version on Ubuntu 14.04.
  - Connected to a NetApp eSeries (iSCSI) for storage.  (Using multipath to 
manage the array)
  - Launch an instance from Horizon, by selecting "launch instance", input an 
Instance Name, m1.small, Instance count: 1, select "Boot from image (creates a 
new volume)", select "cirros..." image, default size(20G for small), then click 
on "Launch"

  - The launch instance failed with the following error:

  Error: Failed to perform requested operation on instance "testvm", the
  instance has an error status: Please try again later [Error: Build of
  instance 1304643b-f8f2-4894-89d8-26c1b8d3e438 aborted: Block Device
  Mapping is Invalid.].

  It seems the host failed to get the new disk from the eSeries storage.

  Did some more tests with the following observation:

  When I create a new (1st) volume with a certain image (cirros), the host 
created a host entry on the array, started the iSCSI sessions, and mapped the 
volume.  But then the iSCSI sessions disconnected and the host failed to 
discover the volume; "sudo multipath -ll" did not list any volume.
   
  Then I tried to create a 2nd instance; the host restarted the iSCSI 
sessions, and created and mapped a new (2nd) volume.  This time the host 
discovered the first volume, but not the newly created (2nd) volume.  Also, 
the iSCSI sessions stayed up this time; they didn't get disconnected.
   
  It seems like there might be a problem with the order in which the newly 
added volume is discovered; the discover/rescan command is being used too 
early.

  Also, I tried the same with the Kilo Devstack version, and that version
  is working fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180950] Re: Openstack operators need to be able to move instances between tenants without modifying the database directly

2015-10-01 Thread Matt Riedemann
This is a big change so I'm going to invalidate it as a bug.  There is a
blueprint for the same thing here:

https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership

And there was a spec proposed at one point but it's abandoned:

https://review.openstack.org/#/c/105367/

We could probably use that as the basis for a backlog spec in nova-specs
now, although it would probably actually be a cross-project impact spec
and so would need to live in oslo, most likely.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180950

Title:
  Openstack operators need to be able to move instances between tenants
  without modifying the database directly

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Periodically openstack operators have the need to move instances to
  another tenant for whatever reason. In those cases they have to log in
  to the database and update the instance by hand. It would be better if
  it could be done via the nova CLI/API/Horizon.

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1180950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501921] [NEW] Heat Stacks Details: Events missing unit tests

2015-10-01 Thread Cindy Lu
Public bug reported:

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/stacks/tests.py

There are no tests for the events_list api call.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501921

Title:
  Heat Stacks Details: Events missing unit tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/stacks/tests.py

  There are no tests for the events_list api call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501937] [NEW] Clicking on the same link triggers the modal and causes modal to hang

2015-10-01 Thread Kyle Olivo
Public bug reported:

Observed behavior: A user clicks on a link that points to the same route
that the user just requested. The user sees a modal pop up with a
spinner, indicating that something is loading.

Expected behavior: A user clicks on a link that points to the same route
as the route the user just requested. The user remains on the route and
does not navigate to another route. No modal pops up indicating that
something is loading.

** Affects: horizon
 Importance: Undecided
 Assignee: Kyle Olivo (kyleolivo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kyle Olivo (kyleolivo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501937

Title:
  Clicking on the same link triggers the modal and causes modal to hang

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Observed behavior: A user clicks on a link that points to the same
  route that the user just requested. The user sees a modal pop up with
  a spinner, indicating that something is loading.

  Expected behavior: A user clicks on a link that points to the same
  route as the route the user just requested. The user remains on the
  route and does not navigate to another route. No modal pops up
  indicating that something is loading.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501948] [NEW] Quota enforcement does not work in Pecan

2015-10-01 Thread Salvatore Orlando
Public bug reported:

Pecan still uses old-style, reservation-less quota enforcement [1]

Unfortunately this just does not work.
There are two independent issues:
- only extension resources are being registered with the quota engine, because 
resource registration for core resources used to happen in the "API router" 
[2]. This is clear from the following message in the logs:

DEBUG neutron.pecan_wsgi.hooks.quota_enforcement [req-6643e848-0cec-
45d9-88d8-35f49a60b8b5 demo 3f3039040f0e434d8e10d7f43dabfe75] Unknown
quota resources ['network']

- the enforcement hook still passes the plural to the resource's count
method. The plural resource name parameter was removed during liberty
[3] as it was not necessary, and this causes a non-negligible issue:
the plural is being interpreted as the tenant_id. [4]


[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/quota_enforcement.py
[2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/api/v2/router.py
[3] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota/resource.py#n134
[4] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/quota_enforcement.py#n48
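
A rough sketch of the two fixes (the helper name below is an assumption based
on the current neutron.quota code, not a tested patch):

from neutron.quota import resource_registry

# 1) Register the core resources with the quota engine at startup, which the
#    legacy API router used to do before the Pecan switch (names assumed).
for name in ('network', 'subnet', 'port'):
    resource_registry.register_resource_by_name(name)

# 2) In the enforcement hook, call count() without the plural name, matching
#    the signature change referenced in [3], roughly:
#        count = resource.count(context, plugin, tenant_id)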

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501948

Title:
  Quota enforcement does not work in Pecan

Status in neutron:
  In Progress

Bug description:
  Pecan still uses old-style, reservation-less quota enforcement [1]

  Unfortunately this just does not work.
  There are two independent issues:
  - only extension resources are being registered with the quota engine, 
because resource registration for core resources used to happen in the "API 
router" [2]. This is clear from the following message in the logs:

  DEBUG neutron.pecan_wsgi.hooks.quota_enforcement [req-6643e848-0cec-
  45d9-88d8-35f49a60b8b5 demo 3f3039040f0e434d8e10d7f43dabfe75] Unknown
  quota resources ['network']

  - the enforcement hook still passes the plural to the resource's count
  method. The plural resource name parameter was removed during liberty
  [3] as it was not necessary, and this causes a non-negligible issue:
  the plural is being interpreted as the tenant_id. [4]

  
  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/quota_enforcement.py
  [2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/api/v2/router.py
  [3] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota/resource.py#n134
  [4] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/hooks/quota_enforcement.py#n48

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501969] [NEW] Instance does not get IP from dhcp ipv6 subnet (slaac/slaac) with DVR, when router interface is added after VM creation.

2015-10-01 Thread Ritesh Anand
Public bug reported:

Instance does not get IP from dhcp ipv6 subnet (slaac/slaac) with DVR,
when router interface is added after VM creation.

The instance does get an IP when it is booted after the interface to the 
subnet has already been added to the DVR.
This ordering issue is not observed with a centralized router.

Easy to recreate.

On compute:
--

NOT getting IP, when router interface is added after VM has been created:
$ ifconfig
eth0  Link encap:Ethernet  HWaddr FA:16:3E:9C:15:B7
  inet6 addr: fe80::f816:3eff:fe9c:15b7/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:14 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:1116 (1.0 KiB)  TX bytes:1138 (1.1 KiB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:1020 (1020.0 B)  TX bytes:1020 (1020.0 B)

Gets IP when router interface is added before VM is booted.
$
$ ifconfig
eth0  Link encap:Ethernet  HWaddr FA:16:3E:9C:15:B7
  inet6 addr: 4001:db8::f816:3eff:fe9c:15b7/64 Scope:Global
  inet6 addr: fe80::f816:3eff:fe9c:15b7/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:15 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:1226 (1.1 KiB)  TX bytes:1138 (1.1 KiB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:1020 (1020.0 B)  TX bytes:1020 (1020.0 B)

$

Subnet:
stack@osctrlr:~/devstack$ neutron subnet-show ipv62s1
+-------------------+---------------------------------------------------+
| Field             | Value                                             |
+-------------------+---------------------------------------------------+
| allocation_pools  | {"start": "4001:db8::2", "end": "4001:db8:::::"}  |
| cidr              | 4001:db8::/64                                     |
| dns_nameservers   |                                                   |
| enable_dhcp       | True                                              |
| gateway_ip        | 4001:db8::1                                       |
| host_routes       |                                                   |
| id                | 2b24b126-f618-4daa-a3a8-24ea8720a0db              |
| ip_version        | 6                                                 |
| ipv6_address_mode | slaac                                             |
| ipv6_ra_mode      | slaac                                             |
| name              | ipv62s1                                           |
| network_id        | d9a71eed-0768-46b7-be0e-74664211f28b              |
| subnetpool_id     |                                                   |
| tenant_id         | 9fbdd2326fe34e949ece2bef8c8f8c8c                  |
+-------------------+---------------------------------------------------+
stack@osctrlr:~/devstack$

Router:
stack@osctrlr:~/devstack$ neutron router-show dvr
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| distributed   | True |
| external_gateway_info |  |
| ha| False|
| id| 3512b48b-a1a8-4923-9a4b-0720dfd71baf |
| name  | dvr  |
| routes|  |
| status| ACTIVE   |
| tenant_id | 9fbdd2326fe34e949ece2bef8c8f8c8c |
+---+--+
stack@osctrlr:~/devstack$

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, 

[Yahoo-eng-team] [Bug 1501672] [NEW] v2 image download returns 403 when 'get_image_locations' policy set

2015-10-01 Thread Stuart McLaren
Public bug reported:

when get_image_location is set to role:admin, a regular user sees:

 $ glance --os-image-api-version 2 image-download 
33fd3f1a-4924-4078-9d57-d7f6db4d015b
 403 Forbidden: You are not authorized to complete this action. (HTTP 403)

v1 works ok.
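
For reference, the policy setting that triggers this is a single entry in
glance's policy.json (shown here as a minimal excerpt; the rest of the file
is omitted):

{
    "get_image_location": "role:admin"
}

With that in place, a v2 image-download by a non-admin user returns the 403
above, while the v1 image-download still succeeds.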

** Affects: glance
 Importance: Undecided
 Assignee: Stuart McLaren (stuart-mclaren)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Stuart McLaren (stuart-mclaren)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1501672

Title:
  v2 image download returns 403 when 'get_image_locations' policy set

Status in Glance:
  New

Bug description:
  when get_image_location is set to role:admin, a regular user sees:

   $ glance --os-image-api-version 2 image-download 
33fd3f1a-4924-4078-9d57-d7f6db4d015b
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

  v1 works ok.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1501672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367542] Re: OpenStack Dashboard couldn't displayed nested stack very well

2015-10-01 Thread Rob Cresswell
*** This bug is a duplicate of bug 1335032 ***
https://bugs.launchpad.net/bugs/1335032

** This bug has been marked a duplicate of bug 1335032
   Resource Id link on 'Resource Detail' page is broken

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367542

Title:
  OpenStack Dashboard couldn't displayed nested stack very well

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
OpenStack Dashboard couldn't display nested stacks very well.

  For example, in the Icehouse OpenStack Dashboard:

  In the "Project" ---> "Orchestration" ---> "Stacks" panel, when the 
"Resources" tab is clicked, the "Stack Resources" table is displayed correctly. 
Clicking the text in the "Stack Resource" column, which is a link in the table, 
displays the "Overview" tab correctly. Then, clicking the "Resource ID" text, 
which is a link in the "Overview" tab, displays the following content in the 
HTML:
  "The page you were looking for doesn't exist

  You may have mistyped the address or the page may have moved."

  
  Root cause:

  In the template
  project/stacks/templates/stacks/_resource_overview.html, the
  "resource_url" needs to be used.

  <dt>{% trans "Resource ID" %}</dt>
  <dd>
    <a href="{{ resource_url }}">
      {{ resource.physical_resource_id }}
    </a>
  </dd>

  the value of resource_url is set in the class "ResourceOverviewTab" in
  project/stacks/tabs.py; please see the code below:

  class ResourceOverviewTab(tabs.Tab):
  name = _("Overview")
  slug = "resource_overview"
  template_name = "project/stacks/_resource_overview.html"

  def get_context_data(self, request):
  resource = self.tab_group.kwargs['resource']
  resource_url = mappings.resource_to_url(resource)
  return {
  "resource": resource,
  "resource_url": resource_url,
  "metadata": self.tab_group.kwargs['metadata']}

  
  the 'resource_urls' dictionary in project/stacks/mappings.py is out of date. 
It hasn't been updated in the Icehouse version, or even earlier; some newer 
resource types, such as Neutron ones, cannot be found in it.

  resource_urls = {
  "AWS::EC2::Instance": {
  'link': 'horizon:project:instances:detail'},
  "AWS::EC2::NetworkInterface": {
  'link': 'horizon:project:networks:ports:detail'},
  "AWS::EC2::RouteTable": {
  'link': 'horizon:project:routers:detail'},
  "AWS::EC2::Subnet": {
  'link': 'horizon:project:networks:subnets:detail'},
  "AWS::EC2::Volume": {
  'link': 'horizon:project:volumes:volumes:detail'},
  "AWS::EC2::VPC": {
  'link': 'horizon:project:networks:detail'},
  "AWS::S3::Bucket": {
  'link': 'horizon:project:containers:index'},
  "OS::Quantum::Net": {
  'link': 'horizon:project:networks:detail'},
  "OS::Quantum::Port": {
  'link': 'horizon:project:networks:ports:detail'},
  "OS::Quantum::Router": {
  'link': 'horizon:project:routers:detail'},
  "OS::Quantum::Subnet": {
  'link': 'horizon:project:networks:subnets:detail'},
  "OS::Swift::Container": {
  'link': 'horizon:project:containers:index',
  'format_pattern': '%s' + swift.FOLDER_DELIMITER},
  } 

  Since the "resource_type" does NOT match any type in "resource_urls", the 
value of "resource_url" in the template is "None", so the correct page cannot 
be found. 
  For example, the URL looks like 
"http://10.10.0.3/dashboard/project/stacks/stack/[outer stack 
id]/[resource_name]/None". 
  Note: we can get the resource by "resource_name"; the resource_type of the 
resource is user-customized, and is actually a nested stack.

  
  What's more, if we add a new resource_type (in fact, this is quite frequent 
in real projects), we must update the code for "resource_urls", which is 
tedious and error prone.
  Since heat templates already support defining a new resource_type based on 
the customer's requirements, the dashboard should stay consistent with that. 
  Always having to update this dictionary manually is not good behavior. Shall 
we do an enhancement on this point?
  Please help to check it. Thanks very much.
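
  One small, illustrative mitigation (a sketch only, not a proposed patch,
  reusing the imports from the snippet above) would be to stop emitting a
  literal "None" href when the mapping has no entry for a resource type:

  class ResourceOverviewTab(tabs.Tab):
      name = _("Overview")
      slug = "resource_overview"
      template_name = "project/stacks/_resource_overview.html"

      def get_context_data(self, request):
          resource = self.tab_group.kwargs['resource']
          # Sketch: leave resource_url unset for unknown resource types so
          # the template can render plain text instead of a broken link.
          resource_url = mappings.resource_to_url(resource)
          context = {"resource": resource,
                     "metadata": self.tab_group.kwargs['metadata']}
          if resource_url:
              context["resource_url"] = resource_url
          return context

  The broader problem of new, user-defined resource types would still need
  the more general mapping rework suggested above.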

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340167] Re: VMware: Horizon reports incorrect message for PAUSE instance

2015-10-01 Thread Rob Cresswell
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340167

Title:
  VMware: Horizon reports incorrect message for PAUSE instance

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When I pause an instance hosted on a VMware cluster, it shows 
  SUCCESS: Paused Instance: ; 
  in the Horizon portal and nothing happens (the instance does not go to the 
Paused state)

  In nova-compute log it shows: pause not supported for vmwareapi

  2014-07-10 06:53:37.212 ERROR oslo.messaging.rpc.dispatcher 
[req-f8159224-a1e2-4271-84d8-eea2edeaaee1 admin demo] Exception during message 
handling: pause not supported for vmwareapi
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 285, in decorated_function
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher pass
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 271, in decorated_function
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 335, in decorated_function
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 313, in decorated_function
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 301, in decorated_function
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 3680, in pause_instance
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
self.driver.pause(instance)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 678, in pause
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
_vmops.pause(instance)
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 938, in pause
  2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher raise 
NotImplementedError(msg)
  2014-07-10 06:53:37.212 TRACE 

[Yahoo-eng-team] [Bug 1501662] [NEW] filesystem_store_datadir doesn't have the default it pretends

2015-10-01 Thread Thomas Goirand
Public bug reported:

Testing Glance Liberty RC1 on top of Jessie, I have found that if I don't
write a value for filesystem_store_datadir in both glance-api.conf and
glance-registry.conf, Glance simply doesn't work, despite what is written
here:

http://docs.openstack.org/developer/glance/configuring.html#configuring-
the-filesystem-storage-backend

So I would suggest either fixing the doc or, better, making
filesystem_store_datadir really default to a sane value, which is
/var/lib/glance/images in the case of a distribution deployment. It is
my understanding that devstack sets a correct value there anyway, so we
won't break the gate by fixing the default value to something that works.

Hoping this helps.
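
For reference, the workaround until the default is fixed is simply to set the
option explicitly in both files (the section name below follows the Liberty
glance_store options and should be treated as an assumption to verify against
the packaged sample config):

[glance_store]
filesystem_store_datadir = /var/lib/glance/images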

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1501662

Title:
  filesystem_store_datadir doesn't have the default it pretends

Status in Glance:
  New

Bug description:
  Testing Glance Liberty RC1 on top of Jessie, I have found that if I
  don't write a value for filesystem_store_datadir in both
  glance-api.conf and glance-registry.conf, Glance simply doesn't work,
  despite what is written here:

  http://docs.openstack.org/developer/glance/configuring.html
  #configuring-the-filesystem-storage-backend

  So I would suggest either fixing the doc or, better, making
  filesystem_store_datadir really default to a sane value, which is
  /var/lib/glance/images in the case of a distribution deployment. It is
  my understanding that devstack sets a correct value there anyway, so
  we won't break the gate by fixing the default value to something that
  works.

  Hoping this helps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1501662/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501198] Re: When network is rescheduled from one DHCP agent to another, DHCP port binding (host) doesn't change

2015-10-01 Thread Eugene Nikanorov
*** This bug is a duplicate of bug 1411163 ***
https://bugs.launchpad.net/bugs/1411163

** This bug has been marked a duplicate of bug 1411163
   No fdb entries added when failover dhcp and l3 agent together

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501198

Title:
  When network is rescheduled from one DHCP agent to another, DHCP port
  binding (host) doesn't change

Status in neutron:
  In Progress

Bug description:
  During network failover the DHCP port doesn't change its port binding 
information, the host in particular.
  This prevents external SDNs like Cisco from configuring the port properly, 
because they need correct binding information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476213] Re: Adding users from different domain to a group

2015-10-01 Thread Bajarang Jadhav
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1476213

Title:
  Adding users from different domain to a group

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in Keystone:
  New

Bug description:
  In Horizon, I found that users from one domain are not allowed to be
  part of a group in another domain.

  Steps followed:
  1. Created 2 domains, domain1 and domain2
  2. Created users, user1 in domain1 and user2 in domain2.
  3. Created groups, group1 in domain1 and group2 in domain2.
  4. In the UI, tried to add user1 to group2. When "Add users" is clicked in 
the "Group Management" page of group2, it shows only user2. I have attached a 
screenshot of the same.
  5. Same behavior is observed while adding user2 to group1.

  As per the discussion above, users from one domain are allowed to be
  part of a group in another domain. In the CLI, the same behavior is
  observed; however, in the UI the behavior is different, as mentioned in
  the above steps.

  Can you please let me know if the UI is behaving as designed?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp