[Yahoo-eng-team] [Bug 1495815] [NEW] Hard to translate "Displaying %s of %s items" (cannot control the order of substitutions)

2015-09-14 Thread Akihiro Motoki
Public bug reported:

horizon/locale/djangojs.pot has the following string.

#: static/framework/util/filters/filters.js:177
#, python-format
msgid "Displaying %s of %s items"
msgstr ""

In some languages the two values need to appear in the opposite order, but
translators cannot reorder positional %s placeholders. Each %s should be
replaced by %(keyword)s (keyword substitution).

The current horizon/static/framework/util/filters/filters.js is as
follows:

176 var total = ensureNonNegative(totalInput);
177 var format = gettext('Displaying %s of %s items');
178 return interpolate(format, [count, total]);

L.177 should be:

 var format = gettext('Displaying %(count)s of %(total)s items');
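
Note that switching to named placeholders also means updating the
interpolate() call on L.178, since named substitution needs a mapping and
the named flag. A sketch, assuming the Django javascript catalog's
interpolate(fmt, obj, named) helper that horizon loads:

 var total = ensureNonNegative(totalInput);
 var format = gettext('Displaying %(count)s of %(total)s items');
 // Pass an object and named=true so translators can reorder the values.
 return interpolate(format, { count: count, total: total }, true);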

** Affects: horizon
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495815

Title:
  Hard to translate "Displaying %s of %s items" (cannot control the
  order of substitutions)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  horizon/locale/djangojs.pot has the following string.

  #: static/framework/util/filters/filters.js:177
  #, python-format
  msgid "Displaying %s of %s items"
  msgstr ""

  In some languages the two values need to appear in the opposite order, but
  translators cannot reorder positional %s placeholders. Each %s should be
  replaced by %(keyword)s (keyword substitution).

  The current horizon/static/framework/util/filters/filters.js is as
  follows:

  176 var total = ensureNonNegative(totalInput);
  177 var format = gettext('Displaying %s of %s items');
  178 return interpolate(format, [count, total]);

  L.177 should be:

   var format = gettext('Displaying %(count)s of %(total)s items');

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472727] Re: Subnet pools and the quota on subnets

2015-09-14 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472727

Title:
  Subnet pools and the quota on subnets

Status in neutron:
  Expired

Bug description:
  Here is the use case I have in mind:

  I want to have a quota on subnets, but with all subnets created from a
  single subnet pool counted as 1 against the quota. (The newly added quota
  mechanism for subnet pools, or something similar to it, can coexist and be
  enforced alongside the quota on subnets.)

  For example, if the Neutron quota on subnets is 1 and several subnets are
  being created from a single subnet pool, I want to allow that. Currently
  the Neutron quota limit will prevent creation of subnets beyond the quota
  even though they come from, say, a single subnet pool (or a number of
  pools smaller than the quota).
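
  A minimal sketch of the proposed counting rule (illustrative Python only;
  the subnetpool_id field name is assumed to follow Neutron's API):

    # Every subnet allocated from the same pool counts once against the
    # quota; standalone subnets keep counting individually.
    def subnet_quota_usage(subnets):
        pools = {s["subnetpool_id"] for s in subnets if s.get("subnetpool_id")}
        standalone = [s for s in subnets if not s.get("subnetpool_id")]
        return len(pools) + len(standalone)

    subnets = [{"subnetpool_id": "pool-1"}, {"subnetpool_id": "pool-1"},
               {"subnetpool_id": None}]
    assert subnet_quota_usage(subnets) == 2  # pool-1 once, plus one standalone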

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450435] Re: resource usage calendar is not translated

2015-09-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450435

Title:
  resource usage calendar is not translated

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  On Admin->System->Resource Usage->Stats, when selecting Period/Other,
  the calendars are displayed in English instead of the selected language.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474760] Re: Unit test failures with sqlalchemy 1.0.6

2015-09-14 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474760

Title:
  Unit test failures with sqlalchemy 1.0.6

Status in Keystone:
  Expired

Bug description:
  Hi,

  Building Keystone in Jessie poses no problem, but it looks like in
  Sid, Keystone doesn't like SQLAlchemy 1.0.6. Here's a full build log:

  http://sid.gplhost.com/keystone_8.0.0~b1-1_amd64.build

  Just in case if that file wasn't available, here's an example crash.
  There's one single occurence of the first failure, and 26 of the 2nd
  one with migrate.exceptions.DatabaseAlreadyControlledError as error.

  FAIL: 
keystone.tests.unit.test_sql_upgrade.SqlUpgradeTests.test_add_actor_id_index
  --
  Traceback (most recent call last):
  testtools.testresult.real._StringException: Empty attachments:
pythonlogging:''-1
stderr
stdout

  pythonlogging:'': {{{
  Adding cache-proxy 'keystone.tests.unit.test_cache.CacheIsolatingProxy' to 
backend.
  Loading repository 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo...
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/044_icehouse.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/044_icehouse.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/045_placeholder.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/045_placeholder.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/046_placeholder.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/046_placeholder.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/047_placeholder.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/047_placeholder.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/048_placeholder.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/048_placeholder.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/049_placeholder.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/049_placeholder.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/050_fk_consistent_indexes.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/050_fk_consistent_indexes.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/051_add_id_mapping.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/051_add_id_mapping.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/052_add_auth_url_to_region.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/052_add_auth_url_to_region.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/053_endpoint_to_region_association.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/053_endpoint_to_region_association.py
 loaded successfully
  Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/054_add_actor_id_index.py...
  Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/054_add_actor_id_

[Yahoo-eng-team] [Bug 1495755] [NEW] test_show_policy_failed fails (depending on another test to create db?)

2015-09-14 Thread Jesse J. Cook
Public bug reported:

The test fails sporadically or consistently depending on your setup. I could
make it pass or fail consistently depending on the level at which the test
was executed. I expect the test depends on something done outside the test
that can occur out of order (i.e. db setup / table creation):

(dev)[~/src/rackspace/openstack/nova] ./run_tests.sh -d 
nova.tests.unit.api.openstack.compute.test_quota_classes.QuotaClassesPolicyEnforcementV21.test_show_policy_failed
Tests running...
nova/db/sqlalchemy/api.py:156: OsloDBDeprecationWarning: EngineFacade is 
deprecated; please use oslo.db.sqlalchemy.enginefacade
  retry_interval=conf_group.retry_interval)
==
ERROR: 
nova.tests.unit.api.openstack.compute.test_quota_classes.QuotaClassesPolicyEnforcementV21.test_show_policy_failed
--
Empty attachments:
  pythonlogging:''

Traceback (most recent call last):
  File "nova/tests/unit/api/openstack/compute/test_quota_classes.py", line 170, 
in setUp
extension_info=ext_info)
  File "nova/api/openstack/compute/quota_classes.py", line 45, in __init__
self.supported_quotas = QUOTAS.resources
  File "nova/quota.py", line 1473, in resources
self._register_resources_by_flavor(ctxt)
  File "nova/quota.py", line 1183, in _register_resources_by_flavor
flavors = db.flavor_get_all(ctxt, inactive=True)
  File "nova/db/api.py", line 1455, in flavor_get_all
sort_dir=sort_dir, limit=limit, marker=marker)
  File "nova/db/sqlalchemy/api.py", line 230, in wrapper
return f(*args, **kwargs)
  File "nova/db/sqlalchemy/api.py", line 4835, in flavor_get_all
inst_types = query.all()
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2399, in all
return list(self)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2516, in __iter__
return self._execute_and_instances(context)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2531, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
return meth(self, multiparams, params)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1146, in _execute_context
context)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1337, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1139, in _execute_context
context)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 450, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: 
instance_types [SQL: u'SELECT instance_types.created_at AS 
instance_types_created_at, instance_types.updated_at AS 
instance_types_updated_at, instance_types.deleted_at AS 
instance_types_deleted_at, instance_types.deleted AS instance_types_deleted, 
instance_types.id AS instance_types_id, instance_types.name AS 
instance_types_name, instance_types.memory_mb AS instance_types_memory_mb, 
instance_types.vcpus AS instance_types_vcpus, instance_types.root_gb AS 
instance_types_root_gb, instance_types.ephemeral_gb AS 
instance_types_ephemeral_gb, instance_types.flavorid AS 
instance_types_flavorid, instance_types.swap AS instance_types_swap, 
instance_types.rxtx_factor AS instance_types_rxtx_factor, 
instance_types.vcpu_weight AS instance_types_vcpu_weight, 
instance_types.disabled AS instance_types_disabled, instance_types.is_public AS 
instance_types_is_public, instance_type_extra_specs_1.created_at AS 
instance_type_extra_s
 pecs_1_created_at, instance_type_extra_specs_1.updated_at AS 
instance_type_extra_specs_1_updated_at, instance_type_extra_specs_1.deleted_at 
AS instance_type_extra_sp

[Yahoo-eng-team] [Bug 1481872] Re: [neutron]admin_auth_url does not support keystone v3 API

2015-09-14 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481872

Title:
  [neutron]admin_auth_url does not support keystone v3 API

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Existing code uses v2 auth plugin from python-keystoneclient
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/neutronv2/api.py#n159

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484335] Re: Update hypervisor support matrix document: Hyper-V already supports VLAN networking

2015-09-14 Thread Lily Xing
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484335

Title:
  Update hypervisor support matrix document: Hyper-V already supports
  VLAN networking

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In http://docs.openstack.org/developer/nova/support-matrix.html, 'VLAN
  networking' support for Hyper-V is marked as not supported, which is
  misleading since we can use VLAN networking with Hyper-V now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495742] Re: [Neutron][Improvement]Neutron can ask user to see the help file in case the user passes wrong arguments in CLI

2015-09-14 Thread Hong Hui Xiao
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495742

Title:
  [Neutron][Improvement]Neutron can ask user to see the help file in
  case the user passes wrong arguments in CLI

Status in python-neutronclient:
  New

Bug description:
  OpenStack CLIs can display detailed help for all components, neutron
  included. However, a new user/developer first needs to be pointed at that
  help.

  When we pass incorrect arguments to the nova client, we get the following
  output:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ nova agent-delete
  usage: nova agent-delete <id>
  error: too few arguments
  Try 'nova help agent-delete' for more information.
  ##

  The last line tells the new user/developer exactly how to find more
  information. Something like this can be added to the neutron client as
  well.

  Current output of neutron client:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ neutron firewall-create
  usage: neutron firewall-create [-h] [-f {html,json,shell,table,value,yaml}]
                                 [-c COLUMN] [--max-width <integer>]
                                 [--prefix PREFIX] [--request-format {json,xml}]
                                 [--tenant-id TENANT_ID] [--name NAME]
                                 [--description DESCRIPTION] [--shared]
                                 [--admin-state-down] [--router ROUTER]
                                 POLICY
  neutron firewall-create: error: too few arguments
  ##

  Expected output:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ neutron firewall-create
  usage: neutron firewall-create [-h] [-f {html,json,shell,table,value,yaml}]
                                 [-c COLUMN] [--max-width <integer>]
                                 [--prefix PREFIX] [--request-format {json,xml}]
                                 [--tenant-id TENANT_ID] [--name NAME]
                                 [--description DESCRIPTION] [--shared]
                                 [--admin-state-down] [--router ROUTER]
                                 POLICY
  neutron firewall-create: error: too few arguments
  Try 'neutron help firewall-create' for more information.
  ##

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1495742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495742] [NEW] [Neutron][Improvement]Neutron can ask user to see the help file in case the user passes wrong arguments in CLI

2015-09-14 Thread Reedip
Public bug reported:

OpenStack CLIs can display detailed help for all components, neutron
included. However, a new user/developer first needs to be pointed at that
help.

When we pass incorrect arguments to the nova client, we get the following
output:
##
reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ nova agent-delete
usage: nova agent-delete <id>
error: too few arguments
Try 'nova help agent-delete' for more information.
##

The last line tells the new user/developer exactly how to find more
information. Something like this can be added to the neutron client as well.

Current output of neutron client:
##
reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ neutron firewall-create
usage: neutron firewall-create [-h] [-f {html,json,shell,table,value,yaml}]
                               [-c COLUMN] [--max-width <integer>]
                               [--prefix PREFIX] [--request-format {json,xml}]
                               [--tenant-id TENANT_ID] [--name NAME]
                               [--description DESCRIPTION] [--shared]
                               [--admin-state-down] [--router ROUTER]
                               POLICY
neutron firewall-create: error: too few arguments
##

Expected output:
##
reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ neutron firewall-create
usage: neutron firewall-create [-h] [-f {html,json,shell,table,value,yaml}]
                               [-c COLUMN] [--max-width <integer>]
                               [--prefix PREFIX] [--request-format {json,xml}]
                               [--tenant-id TENANT_ID] [--name NAME]
                               [--description DESCRIPTION] [--shared]
                               [--admin-state-down] [--router ROUTER]
                               POLICY
neutron firewall-create: error: too few arguments
Try 'neutron help firewall-create' for more information.
##
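
A minimal sketch of how such a hint could be emitted, using plain argparse
rather than the real neutronclient code (the subclass name and message
wording are assumptions):

import argparse
import sys

# Override argparse's error() so every usage error also points the user at
# the per-command help, the way the nova client does.
class HintingParser(argparse.ArgumentParser):
    def error(self, message):
        self.print_usage(sys.stderr)
        sys.stderr.write("%s: error: %s\n" % (self.prog, message))
        sys.stderr.write("Try 'neutron help %s' for more information.\n"
                         % self.prog.split()[-1])
        sys.exit(2)

parser = HintingParser(prog="neutron firewall-create")
parser.add_argument("policy", metavar="POLICY")
parser.parse_args([])  # no arguments: prints the error plus the hint, exits 2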

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495742

Title:
  [Neutron][Improvement]Neutron can ask user to see the help file in
  case the user passes wrong arguments in CLI

Status in neutron:
  New

Bug description:
  OpenStack CLIs can display detailed help for all components, neutron
  included. However, a new user/developer first needs to be pointed at that
  help.

  When we pass incorrect arguments to the nova client, we get the following
  output:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ nova agent-delete
  usage: nova agent-delete <id>
  error: too few arguments
  Try 'nova help agent-delete' for more information.
  ##

  The last line tells the new user/developer exactly how to find more
  information. Something like this can be added to the neutron client as
  well.

  Current output of neutron client:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ neutron firewall-create
  usage: neutron firewall-create [-h] [-f {html,json,shell,table,value,yaml}]
                                 [-c COLUMN] [--max-width <integer>]
                                 [--prefix PREFIX] [--request-format {json,xml}]
                                 [--tenant-id TENANT_ID] [--name NAME]
                                 [--description DESCRIPTION] [--shared]
                                 [--admin-state-down] [--router ROUTER]
                                 POLICY
  neutron firewall-create: error: too few arguments
  ##

  Expected output:
  ##
  reedip@reedip-VirtualBox:/opt/stack/sqlalchemy$ neutron firewall-create
  usage: neutron firewall-create [-h] [-f {html,json,shell,table,value,yaml}]
                                 [-c COLUMN] [--max-width <integer>]
                                 [--prefix PREFIX] [--request-format {json,xml}]
                                 [--tenant-id TENANT_ID] [--name NAME]
                                 [--description DESCRIPTION] [--shared]
                                 [--admin-state-down] [--router ROUTER]
                                 POLICY
  neutron firewall-create: error: too few arguments
  Try 'neutron help firewall-create' for more information.
  ##

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495523] Re: router-interface-add fails with error 500 on PostgreSQL

2015-09-14 Thread Jim Rollenhagen
This doesn't require Ironic changes, only affects Ironic... I'm going to
close this in Ironic as invalid so it doesn't show up in the milestone.
(and yes, it's fixed now)

** Changed in: ironic
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495523

Title:
  router-interface-add fails with error 500 on PostgreSQL

Status in Ironic:
  Invalid
Status in neutron:
  Fix Committed

Bug description:
  If PostgreSQL is used as the DB backend, then Neutron fails with error
  code 500 on the CLI command "router-interface-add":

  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters 
context)
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters 
ProgrammingError: column "agents.id" must appear in the GROUP BY clause or be 
used in an aggregate function
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters LINE 1: 
SELECT agents.id AS agents_id, agents.agent_type AS agents_a...
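
  For illustration, PostgreSQL enforces this rule strictly, while MySQL
  (with its historical default sql_mode) does not, which is why the query
  only fails on the PostgreSQL job. Schematic SQL, not the actual Neutron
  query:

    -- Rejected by PostgreSQL: agents.id is neither grouped nor aggregated.
    SELECT agents.id, count(*) FROM agents GROUP BY agents.agent_type;
    -- Accepted everywhere: each selected column is grouped or aggregated.
    SELECT agents.agent_type, count(*) FROM agents GROUP BY agents.agent_type;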

  Manila CI Tempest job with PostgreSQL errors:

  http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-
  neutron-
  postgres/76739da/logs/devstacklog.txt.gz#_2015-09-14_11_37_13_009

  http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-
  neutron-
  postgres/76739da/logs/screen-q-svc.txt.gz?level=TRACE#_2015-09-14_11_37_13_976

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1495523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495701] [NEW] Sometimes Cinder volumes fail to attach with error "The device is not writable: Permission denied"

2015-09-14 Thread Patrick East
Public bug reported:

This is happening on the latest master branch in CI systems. It happens
very rarely in the gate:

http://logstash.openstack.org/#eyJzZWFyY2giOiJcImxpYnZpcnRFcnJvcjogb3BlcmF0aW9uIGZhaWxlZDogb3BlbiBkaXNrIGltYWdlIGZpbGUgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDIyNjY3MDU1NzZ9

And on some third party CI systems (not included in the logstash
results):

http://ec2-54-67-51-189.us-
west-1.compute.amazonaws.com/28/216728/5/check/PureFCDriver-tempest-
dsvm-volume-
multipath/bd3618d/logs/libvirt/libvirtd.txt.gz#_2015-09-14_09_00_44_829

When the error occurs there is a stack trace in the n-cpu log like this:

http://logs.openstack.org/22/222922/2/check/gate-tempest-dsvm-full-
lio/550be5e/logs/screen-n-cpu.txt.gz?level=DEBUG#_2015-09-13_17_34_07_787

2015-09-13 17:34:07.787 ERROR nova.virt.libvirt.driver 
[req-4ac04f97-f468-466a-9fb2-02d1df3a5633 
tempest-TestEncryptedCinderVolumes-1564844141 
tempest-TestEncryptedCinderVolumes-804461249] [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] Failed to attach volume at mountpoint: 
/dev/vdb
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] Traceback (most recent call last):
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1115, in attach_volume
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] guest.attach_device(conf, 
persistent=True, live=live)
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 233, in attach_device
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] 
self._domain.attachDeviceFlags(conf.to_xml(), flags=flags)
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] rv = execute(f, *args, **kwargs)
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] six.reraise(c, e, tb)
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] rv = meth(*args, **kwargs)
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 517, in 
attachDeviceFlags
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] if ret == -1: raise libvirtError 
('virDomainAttachDeviceFlags() failed', dom=self)
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] libvirtError: operation failed: open disk 
image file failed
2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] 

and a corresponding error in the libvirt log such as this:

http://logs.openstack.org/22/222922/2/check/gate-tempest-dsvm-full-
lio/550be5e/logs/libvirt/libvirtd.txt.gz#_2015-09-13_17_34_07_499

2015-09-13 17:34:07.496+: 16871: debug : qemuMonitorJSONCommandWithFd:264 : 
Send command 
'{"execute":"human-monitor-command","arguments":{"command-line":"drive_add 
dummy 
file=/dev/disk/by-path/ip-172.99.112.13:3260-iscsi-iqn.2010-10.org.openstack:volume-561640e9-081a-430b-a7f8-9cadd63d2d00-lun-0,if=none,id=drive-virtio-disk1,format=raw,serial=561640e9-081a-430b-a7f8-9cadd63d2d00,cache=none"},"id":"libvirt-16"}'
 for write with FD -1
2015-09-13 17:34:07.496+: 16871: debug : qemuMonitorSend:959 : 
QEMU_MONITOR_SEND_MSG: mon=0x7f50dc008db0 
msg={"execute":"human-monitor-command","arguments":{"command-line":"drive_add 
dummy 
file=/dev/disk/by-

[Yahoo-eng-team] [Bug 1491325] Re: nova api v2.1 does not allow to use autodetection of volume device path

2015-09-14 Thread Matt Riedemann
Since the v2.1 API was still experimental in Juno I don't think this is
worth fixing there.

** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Won't Fix

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova/kilo
   Importance: Undecided => High

** Tags removed: juno-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491325

Title:
  nova api v2.1 does not allow to use autodetection of volume device
  path

Status in OpenStack Compute (nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Won't Fix
Status in OpenStack Compute (nova) kilo series:
  Fix Committed
Status in python-novaclient:
  Fix Released

Bug description:
  Using API v2.1 we are forced to provide a device path when attaching a
  volume to an instance.

  Using API v2.0 it was allowed to provide 'auto', in which case Nova
  calculated the device path by itself.

  It is very useful when we do not care about the exact device path.

  Using API v2.1, Nova first validates the request body [1] and only then
  reaches the logic that autodetects the device path. So either the
  autodetection is dead code now, or the request validation should be
  changed.

  For the moment, this bug is a blocker for the Manila project.

  We get one of two errors:

  Returning 400 to user: Invalid input for field/attribute device.
  Value: None. None is not of type 'string' __call__

  or

  Returning 400 to user: Invalid input for field/attribute device.
  Value: auto. u'auto' does not match
  '(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$'

  Where Nova client says explicitly:

  $ nova help volume-attach
  usage: nova volume-attach <server> <volume> [<device>]

  Attach a volume to a server.

  Positional arguments:
    <server>  Name or ID of server.
    <volume>  ID of the volume to attach.
    <device>  Name of the device e.g. /dev/vdb. Use "auto" for autoassign
              (if supported)

  That <device> is optional and can be set to 'auto'.

  [1]
  
https://github.com/openstack/nova/blob/b7c8a73824211db9627962abd31b8801cc2c2880/nova/api/openstack/compute/volumes.py#L270
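
  For illustration, one way to restore the v2.0 behaviour would be to relax
  the request schema so the device field also accepts 'auto' or null. A
  hypothetical sketch only, not the committed fix; the original pattern is
  the one quoted in the errors above:

    # Hypothetical relaxed JSON schema fragment for the 'device' attribute.
    device_schema = {
        'type': ['string', 'null'],
        'pattern': '(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$|^auto$',
    }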

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444841] Re: Resize instance fails after creating host aggregate

2015-09-14 Thread Sylvain Bauza
Okay, so I feel the bug should be marked as Invalid. Why? Let me
explain:

While any instance can be shown with an AZ, that doesn't mean the
instance.az field is set to that value; if the field is left blank, what is
shown is the value of CONF.default_availability_zone.
How is the instance.az field set? It is populated once in the instance
lifetime, at the Compute API level, here:
https://github.com/openstack/nova/blob/79fe4d8e076c9c7bb76f0afb1b2787d51b2c5037/nova/compute/api.py#L1147-L1161

As you can see, it calls _handle_availability_zone, which reads what the API
received and defaults to CONF.default_schedule_zone:
https://github.com/openstack/nova/blob/79fe4d8e076c9c7bb76f0afb1b2787d51b2c5037/nova/compute/api.py#L596-L597

Since CONF.default_schedule_zone itself defaults to None (
https://github.com/openstack/nova/blob/79fe4d8e076c9c7bb76f0afb1b2787d51b2c5037/nova/compute/api.py#L92-L93
), a default nova boot command (without the --availability_zone flag) will
create an Instance entry in the table with an AZ field equal to NULL.

When it comes to the AZ filter, if the instance.az field is set to None,
the filter always returns True (which makes sense, because the user didn't
specify an AZ to stick with).

Now that I have explained how it works, let me explain the error here:
specifying an AZ in the boot command does the exact opposite: it pins the
instance to be created to the AZ provided. Since the bug reporter provided
a value (even though it was the default value, "nova"), the instance.az
field became "nova".

For the original boot, the AZ filter checked whether the host belonged to
an aggregate. Since that was not the case, it checked whether the instance
AZ (here "nova") was equal to CONF.default_availability_zone (which
defaults to "nova"):
https://github.com/openstack/nova/blob/3aff2d7bff7f6e9edb5fa8b688287265722c27fb/nova/scheduler/filters/availability_zone_filter.py#L54
Yay, it worked.

Now, what happened once the host was part of the aggregate? The instance.az
field didn't change, since that field stays fixed for the whole lifetime of
the instance (kept as a record of what the user requested). But the AZ
filter now sees that the host belongs to an aggregate and consequently
compares the host's AZ with the instance.az, and this time the comparison
is False:
https://github.com/openstack/nova/blob/3aff2d7bff7f6e9edb5fa8b688287265722c27fb/nova/scheduler/filters/availability_zone_filter.py#L51

Rule of thumb: never explicitly call an AZ "nova", either when booting an
instance or when assigning an AZ to an aggregate; that just prevents the
default behaviour from working, unless you modify
CONF.default_schedule_zone
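
A simplified paraphrase of the filter logic described above (the function
name and default value are illustrative, not Nova's actual code):

def host_passes(host_azs, instance_az, default_az="nova"):
    if instance_az is None:
        return True  # the user didn't pin the instance to any AZ
    if not host_azs:
        # host is in no AZ-tagged aggregate: compare against the default
        return instance_az == default_az
    return instance_az in host_azs

# Original boot: host in no aggregate, instance.az == "nova" -> passes.
assert host_passes(set(), "nova") is True
# After step 2 of the report: host now in "zhaoqin-az" -> no valid host.
assert host_passes({"zhaoqin-az"}, "nova") is False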


** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444841

Title:
  Resize instance fails after creating host aggregate

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Latest Kilo code

  
  Reproduce steps:

  1. Do not define any host aggregate. AZ of host is 'nova'. Boot one
  instance named 'zhaoqin-nova' whose AZ is 'nova'

  2. Create host aggregate 'zhaoqin' whose AZ is 'zhaoqin-az'. Add host
  to 'zhaoqin' aggregate.  Now AZ of instance 'zhaoqin-nova' in db is
  still 'nova'; and 'nova list' displays AZ of 'zhaoqin-nova' is
  'zhaoqin-az'.

  3. Resize 'zhaoqin-nova' fails, no valid host.

  4. Boot another instance 'zhaoqin-my-az' whose AZ is 'zhaoqin-az'.
  Resize 'zhaoqin-my-az' succeed.

  5. Remove host from aggregate 'zhaoqin'.

  6. Resize 'zhaoqin-nova' succeed.  Resize 'zhaoqin-my-az' fails, no
  valid host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314526] Re: revert resize removes rbd shared image

2015-09-14 Thread Jon Bernard
*** This bug is a duplicate of bug 1399244 ***
https://bugs.launchpad.net/bugs/1399244

** This bug has been marked a duplicate of bug 1399244
   rbd resize revert fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314526

Title:
  revert resize removes rbd shared image

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  We run multi-host nova-compute with

  libvirt_images_type=rbd
  libvirt_images_rbd_pool=compute

  Resize-confirm works just fine.
  Resize-revert removes the rbd image shared by both instances during the
  revert.

  nova.conf options I've tried changing, with no luck:

  allow_resize_to_same_host=True/False
  resize_fs_using_block_device=True/False
  block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, 
VIR_MIGRATE_NON_SHARED_INC
  live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER

  The errors can be found at the bottom of the page.
  1. The first error was fixed by adding image_cache_manager_interval = 0.
  2. The second error is still active.

  During the revert process, for both types of migration, driver.destroy()
  is called at the destination, which removes the original image from rbd
  storage.

  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3164
  _
  def revert_resize(self, context, instance, migration, reservations):
      ...
      self.driver.destroy(context, instance, network_info,
                          block_device_info)
      ...
  _
  that calls

  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L956
  _
  def destroy(self, context, instance, network_info, block_device_info=None,
              destroy_disks=True):
      self._destroy(instance)
      self.cleanup(context, instance, network_info, block_device_info,
                   destroy_disks)
  _

  that calls

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1069
  _
  def cleanup(self, context, instance, network_info, block_device_info=None,
              destroy_disks=True):
      ...
      if destroy_disks:
          self._delete_instance_files(instance)
          self._cleanup_lvm(instance)
          # NOTE(haomai): destroy volumes if needed
          if CONF.libvirt.images_type == 'rbd':
              self._cleanup_rbd(instance)
      ...
  _

  revert_resize calls the destroy function without the destroy_disks
  argument, which makes the cleanup function delete the SHARED image.

  Here is an approximate solution (I am not a developer):

  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3199

  change from:
  _
  self.driver.destroy(context, instance, network_info,
                      block_device_info)
  _
  to:
  _
  destroy_disks = not self._is_instance_storage_shared(context, instance)
  self.driver.destroy(context, instance, network_info,
                      block_device_info,
                      destroy_disks=destroy_disks)
  _

  ERROR1
  <179>Apr 28 14:14:00 [compute] node-39 
nova-nova.virt.libvirt.imagebackend ERROR: error opening rbd image 
/var/lib/
  nova/instances/_base/103bc0322b21e499ecea1c360abc6843ab829d06
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 467, in __init__
  read_only=read_only)
    File "/usr/lib/python2.7/dist-packages/rbd.py", line 351, in __init__
  raise make_ex(ret, 'error opening image %s at snapshot %s' % (name, 
snapshot))
  ImageNotFound: error opening image 
/var/lib/nova/instances/_base/103bc0322b21e499ecea1c360abc6843ab829d06 at 
snapshot None
  <179>Apr 28 14:14:00 [compute] node-39 nova-nova.compute.manager 
ERROR: Setting instance vm_state to ERROR
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3160, 
in finish_resize
  disk_info, image)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3128, 
in _finish_resize
  block_device_info, power_on)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4627, in finish_migration
  block_device_info=None, inject_files=False)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2395, in _create_image
  project_id=instance['project_id'])
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 177, in cache
  *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 638, in create_image
  self.verify_base_size(base, size)
    File "/usr/lib/python2.7/dist-packages/nova/vir

[Yahoo-eng-team] [Bug 1495669] [NEW] domain-specific drivers does not honor the list_limit set in domain-specific conf file

2015-09-14 Thread Guang Yee
Public bug reported:

Step to reproduce:

1. enable domain_specific drivers in keystone.conf

  domain_specific_drivers_enabled = true
  domain_configurations_from_database = false
  domain_config_dir = /etc/keystone/domains

2. set the global list_limit to 2 in keystone.conf

  [default]
  list_limit = 2

3. create a new domain, along with the corresponding domain-specific
conf in /etc/keystone/domains/, and set the list_limit to 5 at the driver
level:

[identity]
driver = ldap
list_limit = 5

[ldap]

url = ldap://localhost
...

4. restart Keystone and do user list for the specific domain and notice
that only 2 users are returned


Interestingly, the list_limit set in the [identity] section in keystone.conf 
works.  i.e.

  [default]
  list_limit = 2

  [identity]
  list_limit = 5

We just can't override it in the domain-specific conf file.
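
A minimal sketch of the precedence the reporter expects (hypothetical
helper over plain dicts, not Keystone code):

# Expected lookup order: domain-specific conf first, then [identity] in
# keystone.conf, then [default]. The report shows the first step is skipped.
def effective_list_limit(domain_conf, keystone_conf):
    for conf, section in ((domain_conf, "identity"),
                          (keystone_conf, "identity"),
                          (keystone_conf, "default")):
        limit = conf.get(section, {}).get("list_limit")
        if limit is not None:
            return limit
    return None  # no limit configured anywhere

keystone_conf = {"default": {"list_limit": 2}}
domain_conf = {"identity": {"list_limit": 5}}
assert effective_list_limit(domain_conf, keystone_conf) == 5  # observed: 2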

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1495669

Title:
  domain-specific drivers does not honor the list_limit set in domain-
  specific conf file

Status in Keystone:
  New

Bug description:
  Step to reproduce:

  1. enable domain_specific drivers in keystone.conf

domain_specific_drivers_enabled = true
domain_configurations_from_database = false
domain_config_dir = /etc/keystone/domains

  2. set the global list_limit to 2 in keystone.conf

[default]
list_limit = 2

  3. create a new domain, along with the corresponding domain-specific
  conf in /etc/keystone/domains/, and set the list_limit to 5 at the
  driver level:

  [identity]
  driver = ldap
  list_limit = 5

  [ldap]

  url = ldap://localhost
  ...

  4. restart Keystone and do user list for the specific domain and
  notice that only 2 users are returned

  
  Interestingly, the list_limit set in the [identity] section in keystone.conf 
works.  i.e.

[default]
list_limit = 2

[identity]
list_limit = 5

  We just can't override it in the domain-specific conf file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1495669/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495523] Re: router-interface-add fails with error 500 on PostgreSQL

2015-09-14 Thread John L. Villalovos
** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: ironic
   Importance: Undecided => Critical

** Changed in: ironic
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495523

Title:
  router-interface-add fails with error 500 on PostgreSQL

Status in Ironic:
  In Progress
Status in neutron:
  In Progress

Bug description:
  If PostgreSQL is used as the DB backend, then Neutron fails with error
  code 500 on the CLI command "router-interface-add":

  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters 
context)
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters 
ProgrammingError: column "agents.id" must appear in the GROUP BY clause or be 
used in an aggregate function
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters LINE 1: 
SELECT agents.id AS agents_id, agents.agent_type AS agents_a...

  Manila CI Tempest job with PostgreSQL errors:

  http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-
  neutron-
  postgres/76739da/logs/devstacklog.txt.gz#_2015-09-14_11_37_13_009

  http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-
  neutron-
  postgres/76739da/logs/screen-q-svc.txt.gz?level=TRACE#_2015-09-14_11_37_13_976

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1495523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495664] [NEW] public base URL is returned in the links even though request is coming from admin URL

2015-09-14 Thread Guang Yee
Public bug reported:

The public base URL is returned in the links even though the request is
coming in on the admin URL. Set both admin_endpoint and public_endpoint in
keystone.conf and notice that public_endpoint is always used as the base
URL in the links, i.e.

$ curl -k -s -H 'X-Auth-Token: d5363c1fe9524972b89192242087' \
    http://localhost:5000/v3/policies | python -mjson.tool
{
    "links": {
        "next": null,
        "previous": null,
        "self": "https://public:5000/v3/policies"
    },
    "policies": []
}

$ curl -k -s -H 'X-Auth-Token: d5363c1fe9524972b89192242087' \
    http://localhost:35357/v3/policies | python -mjson.tool
{
    "links": {
        "next": null,
        "previous": null,
        "self": "https://public:5000/v3/policies"
    },
    "policies": []
}

This is related to https://bugs.launchpad.net/keystone/+bug/1381961

See

https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L419
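
A minimal sketch of the expected behaviour (hypothetical function, not
Keystone's controller code):

# Choose the configured base URL matching the port the request actually
# arrived on, instead of unconditionally using public_endpoint.
def base_url(public_endpoint, admin_endpoint, admin_port, request_port):
    if admin_endpoint and request_port == admin_port:
        return admin_endpoint
    return public_endpoint

assert base_url("https://public:5000", "https://admin:35357",
                35357, 35357) == "https://admin:35357"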

** Affects: keystone
 Importance: Low
 Status: New

** Changed in: keystone
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1495664

Title:
  public base URL is returned in the links even though request is coming
  from admin URL

Status in Keystone:
  New

Bug description:
  The public base URL is returned in the links even though the request is
  coming in on the admin URL. Set both admin_endpoint and public_endpoint in
  keystone.conf and notice that public_endpoint is always used as the
  base URL in the links, i.e.

  $ curl -k -s -H 'X-Auth-Token: d5363c1fe9524972b89192242087' \
      http://localhost:5000/v3/policies | python -mjson.tool
  {
      "links": {
          "next": null,
          "previous": null,
          "self": "https://public:5000/v3/policies"
      },
      "policies": []
  }

  $ curl -k -s -H 'X-Auth-Token: d5363c1fe9524972b89192242087' \
      http://localhost:35357/v3/policies | python -mjson.tool
  {
      "links": {
          "next": null,
          "previous": null,
          "self": "https://public:5000/v3/policies"
      },
      "policies": []
  }

  This is related to https://bugs.launchpad.net/keystone/+bug/1381961

  See

  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L419

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1495664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 794730] Re: API doesn't specify what limit=0 means

2015-09-14 Thread Mark Doffman
Sorry to try and bring this one back to life, but I'm just not sure that
it's really invalid. I marked https://bugs.launchpad.net/nova/+bug/1494617
as a duplicate.

It seems that the images API now returns the empty list. However, I think
that for the flavors and servers APIs the behavior is still to use limit=0
as max_limit. These should one day be consistent.

At the very least we should change the comments in
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
to reflect the new behavior.
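
A minimal sketch of the two behaviours in question (illustrative only, not
the actual Nova code paths):

def limited_images(items, limit, max_limit=1000):
    # images API / the DB layer cited in bug 1494617: limit=0 -> empty list
    return [] if limit == 0 else items[:min(limit, max_limit)]

def limited_servers(items, limit, max_limit=1000):
    # comment in api/openstack/common.py: limit=0 means "use max_limit"
    effective = max_limit if limit == 0 else min(limit, max_limit)
    return items[:effective]

items = list(range(5))
assert limited_images(items, 0) == []
assert limited_servers(items, 0) == items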

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/794730

Title:
  API doesn't specify what limit=0 means

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  An http request like /v1.1/images?limit=0 returns all images
  available. It should return an empty container.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/794730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494617] Re: Different behavior in API and DB when Nova list with limit set to 0

2015-09-14 Thread Mark Doffman
*** This bug is a duplicate of bug 794730 ***
https://bugs.launchpad.net/bugs/794730

** This bug has been marked a duplicate of bug 794730
   API doesn't specify what limit=0 means

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1494617

Title:
  Different behavior in API and DB when Nova list with limit set to 0

Status in OpenStack Compute (nova):
  New

Bug description:
  According to the code:
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206

  when limit = 0, max_limit should be applied, but currently, in:
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930

  we directly return [], which contradicts the comment in the API code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1494617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494574] Re: Logging missing value types

2015-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/222550
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=367c6935247c0f24c096f7b63a2e5128e5773153
Submitter: Jenkins
Branch: master

commit 367c6935247c0f24c096f7b63a2e5128e5773153
Author: ghanshyam 
Date:   Fri Sep 11 18:51:02 2015 +0900

Fix missing value types for log message

This commit fix the missing value type in log message of exception.

Change-Id: Ib052964e49ded2eb427d5b448241e25cbd066906
Closes-Bug: #1494574


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494574

Title:
  Logging missing value types

Status in Cinder:
  Fix Committed
Status in heat:
  In Progress
Status in Ironic:
  In Progress
Status in Magnum:
  Fix Committed
Status in Manila:
  In Progress
Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Committed
Status in os-brick:
  In Progress
Status in oslo.versionedobjects:
  In Progress
Status in python-neutronclient:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  There are a few locations in the code where the log string is missing
  the formatting type, causing log messages to fail.

  
  FILE: ../OpenStack/cinder/cinder/volume/drivers/emc/emc_vnx_cli.py
  
  LOG.debug('EMC: Command Exception: %(rc) %(result)s. 
'  
  FILE: ../OpenStack/cinder/cinder/consistencygroup/api.py  
  
  LOG.error(_LE("CG snapshot %(cgsnap) not found 
when "
  LOG.error(_LE("Source CG %(source_cg) not found 
when "
  FILE: ../OpenStack/cinder/cinder/volume/drivers/emc/emc_vmax_masking.py   
  
  "Storage group %(sgGroupName) "   
  
  FILE: ../OpenStack/cinder/cinder/volume/manager.py
  
  '%(image_id) will not create cache 
entry.'),

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1494574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495653] [NEW] Clean out security group tests from tempest-dsvm-cells-rc

2015-09-14 Thread Matt Riedemann
Public bug reported:

There are several tests in
http://git.openstack.org/cgit/openstack/nova/tree/devstack/tempest-dsvm-
cells-rc which are skipped in devstack + tempest + cells runs because
cells doesn't support security groups.

Some of these are obvious, like:

# skip security group tests
r="$r(?:tempest\.api\.compute\.security_groups.*)"

There are others, like scenario tests (test_stamp_pattern) which don't
work with cells because they create a security group for the server
instance being tested:

http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/test_stamp_pattern.py#n155

Since security group usage in nova is optional (openstack-infra doesn't
use them since RAX doesn't support them - due to cells), Tempest should
have a compute-feature-enabled.security_groups config option in here:

http://git.openstack.org/cgit/openstack/tempest/tree/tempest/config.py#n315

That would default to True for backwards compatibility with Tempest.

Then in devstack/lib/tempest if we're running with cells, we can set
compute-feature-enabled.security_groups=False so that jobs running
tempest + devstack + cells don't run those tests.

Once we have that devstack change, we can remove the tests from tempest-
dsvm-cells-rc which are only skipped because of security groups.

Note that the Tempest change which adds the compute-feature-
enabled.security_groups config option will also have to go through and
add skip checks for any tests that are creating and using security
groups on server instances.

So the chain of changes would be:

1. Tempest
2. devstack
3. nova

The nova change would be similar to how this was done:

https://review.openstack.org/#/c/220158/
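
A sketch of what the proposed option could look like in tempest/config.py
(the exact name, group placement, and help text are assumptions):

# Hypothetical option under the compute-feature-enabled group, built with
# the oslo.config BoolOpt that tempest already relies on.
from oslo_config import cfg

security_groups_opt = cfg.BoolOpt(
    'security_groups',
    default=True,  # preserve backwards compatibility with existing jobs
    help='Does the test environment support security groups?')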

** Affects: devstack
 Importance: Undecided
 Status: Confirmed

** Affects: nova
 Importance: Low
 Status: Confirmed

** Affects: tempest
 Importance: Undecided
 Status: Confirmed


** Tags: cells testing

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Low

** Also affects: tempest
   Importance: Undecided
   Status: New

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
   Status: New => Confirmed

** Changed in: tempest
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495653

Title:
  Clean out security group tests from tempest-dsvm-cells-rc

Status in devstack:
  Confirmed
Status in OpenStack Compute (nova):
  Confirmed
Status in tempest:
  Confirmed

Bug description:
  There are several tests in
  http://git.openstack.org/cgit/openstack/nova/tree/devstack/tempest-
  dsvm-cells-rc which are skipped in devstack + tempest + cells runs
  because cells doesn't support security groups.

  Some of these are obvious, like:

  # skip security group tests
  r="$r(?:tempest\.api\.compute\.security_groups.*)"

  There are others, like scenario tests (test_stamp_pattern) which don't
  work with cells because they create a security group for the server
  instance being tested:

  
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/test_stamp_pattern.py#n155

  Since security group usage in nova is optional (openstack-infra
  doesn't use them since RAX doesn't support them - due to cells),
  Tempest should have a compute-feature-enabled.security_groups config
  option in here:

  http://git.openstack.org/cgit/openstack/tempest/tree/tempest/config.py#n315

  That would default to True for backwards compatibility with Tempest.

  Then in devstack/lib/tempest if we're running with cells, we can set
  compute-feature-enabled.security_groups=False so that jobs running
  tempest + devstack + cells don't run those tests.

  Once we have that devstack change, we can remove the tests from
  tempest-dsvm-cells-rc which are only skipped because of security
  groups.

  Note that the Tempest change which adds the compute-feature-
  enabled.security_groups config option will also have to go through and
  add skip checks for any tests that are creating and using security
  groups on server instances.

  So the chain of changes would be:

  1. Tempest
  2. devstack
  3. nova

  The nova change would be similar to how this was done:

  https://review.openstack.org/#/c/220158/

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1495653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495645] [NEW] keystone-manage and keystone-all man pages incorrect versions/dates

2015-09-14 Thread Eric Brown
Public bug reported:

The manpages for keystone-manage and keystone-all still refer to older
versions and release dates.

The keystone-manage man page lists version as 2015.1 and date of 2015-10-15.
The keystone-all man page lists version as 2014.2 and date of 2014-10-16.  This 
is a Juno date, so there should probably also be a cherry-pick fix in Kilo.

** Affects: keystone
 Importance: Low
 Assignee: Eric Brown (ericwb)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Eric Brown (ericwb)

** Changed in: keystone
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1495645

Title:
  keystone-manage and keystone-all man pages incorrect versions/dates

Status in Keystone:
  In Progress

Bug description:
  The manpages for keystone-manage and keystone-all still refer to older
  versions and release dates.

  The keystone-manage man page lists version as 2015.1 and date of 2015-10-15.
  The keystone-all man page lists version as 2014.2 and date of 2014-10-16.  
This is a Juno date, so there should probably also be a cherry-pick fix in Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1495645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495642] [NEW] Missing values from VPN Service details

2015-09-14 Thread Rob Cresswell
Public bug reported:

VPNaaS details page is missing a couple of values: 'external_v4_ip' and
'external_v6_ip'.

** Affects: horizon
 Importance: Low
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
Milestone: None => liberty-rc1

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495642

Title:
  Missing values from VPN Service details

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  VPNaaS details page is missing a couple of values: 'external_v4_ip'
  and 'external_v6_ip'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495634] [NEW] [sahara] Cluster creation fails

2015-09-14 Thread Chad Roberts
Public bug reported:

When attempting to create a cluster via Sahara under Data Processing, the 
cluster creation fails.
In the log, the following can be seen:  Recoverable error: 'cluster_count'

I suspect that the recent addition of cluster_count may not be in the
currently used version of python-saharaclient.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495634

Title:
  [sahara] Cluster creation fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When attempting to create a cluster via Sahara under Data Processing, the 
cluster creation fails.
  In the log, the following can be seen:  Recoverable error: 'cluster_count'

  I suspect that the recent addition of cluster_count may not be in the
  currently used version of python-saharaclient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495628] [NEW] In DHCP agent's enable_dhcp_helper, its good to call safe_configure_dhcp_for_network

2015-09-14 Thread Sudhakar Gariganti
Public bug reported:

def enable_dhcp_helper(self, network_id):
"""Enable DHCP for a network that meets enabling criteria."""
network = self.safe_get_network_info(network_id)
if network:
self.configure_dhcp_for_network(network)

It's quite possible to get exceptions from configure_dhcp_for_network,
so it's better to call its safer counterpart, which takes care of
handling any exceptions.
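
A minimal sketch of the suggested change (safe_configure_dhcp_for_network
already exists in the DHCP agent; this wiring is illustrative):

    def enable_dhcp_helper(self, network_id):
        """Enable DHCP for a network that meets enabling criteria."""
        network = self.safe_get_network_info(network_id)
        if network:
            # The 'safe' variant wraps configure_dhcp_for_network and
            # handles any exception (scheduling a resync for the network)
            # instead of letting it propagate.
            self.safe_configure_dhcp_for_network(network)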

** Affects: neutron
 Importance: Undecided
 Assignee: Sudhakar Gariganti (sudhakar-gariganti)
 Status: In Progress


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Sudhakar Gariganti (sudhakar-gariganti)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495628

Title:
  In DHCP agent's enable_dhcp_helper, its good to call
  safe_configure_dhcp_for_network

Status in neutron:
  In Progress

Bug description:
  def enable_dhcp_helper(self, network_id):
  """Enable DHCP for a network that meets enabling criteria."""
  network = self.safe_get_network_info(network_id)
  if network:
  self.configure_dhcp_for_network(network)

  It's quite possible to get exceptions from configure_dhcp_for_network,
  so it's better to call its safer counterpart, which takes care of
  handling any exceptions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493980] Re: The NullHandler and StandardLogging test fixtures don't appear to be detecting formatting errors

2015-09-14 Thread Matt Riedemann
** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => (unassigned)

** Changed in: nova
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493980

Title:
  The NullHandler and StandardLogging test fixtures don't appear to be
  detecting formatting errors

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  We have the NullHandler and StandardLogging fixtures here to detect
  formatting errors in the nova logs:

  http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/fixtures.py#n61

  We've seen a few bugs recently where we weren't substituting variables
  in the log messages but didn't notice until after the fact:

  https://review.openstack.org/#/c/220253/

  https://review.openstack.org/#/c/221910/

  It also appears we should be using the logging_error fixture from
  oslo.log:

  
http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/fixture/logging_error.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1493980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495592] [NEW] In DHCP agent, resync only for the required networks when sync_state fails

2015-09-14 Thread Sudhakar Gariganti
Public bug reported:

In the sync_state method of the DHCP agent, if we are trying to resync
one failed network and an exception occurs (say, in
get_active_networks_info), we end up resyncing all the networks, which
is not desired.

** Affects: neutron
 Importance: Undecided
 Assignee: Sudhakar Gariganti (sudhakar-gariganti)
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Sudhakar Gariganti (sudhakar-gariganti)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495592

Title:
  In DHCP agent, resync only for the required networks when sync_state
  fails

Status in neutron:
  New

Bug description:
  In the sync_state method of the DHCP agent, if we are trying to resync
  one failed network and an exception occurs (say, in
  get_active_networks_info), we end up resyncing all the networks, which
  is not desired.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2015-09-14 Thread Eric Harney
** Changed in: cinder
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Confirmed
Status in Cinder icehouse series:
  Fix Committed
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  In Progress
Status in Ironic:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone icehouse series:
  Fix Committed
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-neutronclient:
  Fix Committed
Status in Sahara:
  Fix Released
Status in Trove:
  Fix Released
Status in WSME:
  Fix Released

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495584] [NEW] VPNaaS: Help reduce cross project breakage

2015-09-14 Thread Paul Michali
Public bug reported:

One issue that has been occurring is that a neutron project commit may
change a method/attribute that the neutron-vpnaas project uses,
resulting in breakage in the neutron-vpnaas project (which may not be
detected for days).

To help reduce this probability, there is a desire to have neutron
commits run VPN unit and functional tests. This bug is to document the
need for running VPN functional tests on neutron commits.

Code review 203201 upstreamed  (before this bug was created) a pair of
neutron jobs that will run VPN functional tests in the experimental
queue. These jobs need to move to check, and eventually gate queues.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495584

Title:
  VPNaaS: Help reduce cross project breakage

Status in neutron:
  In Progress

Bug description:
  One issue that has been occurring is that a neutron project commit may
  change a method/attribute that the neutron-vpnaas project uses,
  resulting in breakage in the neutron-vpnaas project (which may not be
  detected for days).

  To help reduce this probability, there is a desire to have neutron
  commits run VPN unit and functional tests. This bug is to document the
  need for running VPN functional tests on neutron commits.

  Code review 203201 upstreamed  (before this bug was created) a pair of
  neutron jobs that will run VPN functional tests in the experimental
  queue. These jobs need to move to check, and eventually gate queues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387244] Re: Increasing number of InstancePCIRequests.get_by_instance_uuid RPC calls during compute host auditing

2015-09-14 Thread Dan Smith
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387244

Title:
  Increasing number of InstancePCIRequests.get_by_instance_uuid RPC
  calls during compute host auditing

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Triaged

Bug description:
  Environment: Ubuntu 14.04/OpenStack Juno Release

  The periodic auditing on compute node becomes very RPC call intensive
  when a large number of instances are running on a cloud; the
  InstancePCIRequests.get_by_instance_uuid call is made on all instances
  running on the hypervisor - when this is multiplied across a large
  number of hypervisors, this impacts back onto the conductor processes
  as they try to service an increasing amount of RPC calls over time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495561] [NEW] Total Instances chart in ng Launch Instance has incorrect style

2015-09-14 Thread Justin Pomeroy
Public bug reported:

On the Select Source page of the ng Launch Instance wizard, it looks
like the Total Instances chart has broken styling.  The width, and
possibly the height, is wrong, causing the legend to wrap under the
chart when it should be able to fit to the right.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "total_instances_chart.png"
   
https://bugs.launchpad.net/bugs/1495561/+attachment/4464118/+files/total_instances_chart.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495561

Title:
  Total Instances chart in ng Launch Instance has incorrect style

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the Select Source page of the ng Launch Instance wizard, it looks
  like the Total Instances chart has broken styling.  The width, and
  possibly the height, is wrong, causing the legend to wrap under the
  chart when it should be able to fit to the right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495561/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495547] [NEW] nova.tests.unit.test_crypto.RevokeCertsTest.test_revoke_cert_project_not_found_chdir_fails fails locally since ~9/11

2015-09-14 Thread Matt Riedemann
Public bug reported:

I rebased my local master branch around 9/11 and I've been seeing this
failing consistently locally:

nova.tests.unit.test_crypto.RevokeCertsTest.test_revoke_cert_project_not_found_chdir_fails
--

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/test_crypto.py", line 168, in test_revoke_cert_project_not_found_chdir_fails
    2, 'test_file')
  File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 422, in assertRaises
    self.assertThat(our_callable, matcher)
  File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 433, in assertThat
    mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 483, in _matchHelper
    mismatch = matcher.match(matchee)
  File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
    mismatch = self.exception_matcher.match(exc_info)
  File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
    mismatch = matcher.match(matchee)
  File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 414, in match
    reraise(*matchee)
  File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
    result = matchee()
  File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 969, in __call__
    return self._callable_object(*self._args, **self._kwargs)
  File "nova/crypto.py", line 227, in revoke_cert
    raise exception.RevokeCertFailure(project_id=project_id)
nova.exception.RevokeCertFailure: Failed to revoke certificate for 2

The last changes to test_crypto were made on 9/4:

https://review.openstack.org/#/c/191604/

Given that's related to processutils in oslo.concurrency, I'm wondering
if there was a regression in 2.6.0, released on 9/8:

https://pypi.python.org/pypi/oslo.concurrency/2.6.0

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: crypto oslo testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495547

Title:
  
nova.tests.unit.test_crypto.RevokeCertsTest.test_revoke_cert_project_not_found_chdir_fails
  fails locally since ~9/11

Status in OpenStack Compute (nova):
  New

Bug description:
  I rebased my local master branch around 9/11 and I've been seeing this
  failing consistently locally:

  
nova.tests.unit.test_crypto.RevokeCertsTest.test_revoke_cert_project_not_found_chdir_fails
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_crypto.py", line 168, in test_revoke_cert_project_not_found_chdir_fails
      2, 'test_file')
    File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 422, in assertRaises
      self.assertThat(our_callable, matcher)
    File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 433, in assertThat
      mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
    File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 483, in _matchHelper
      mismatch = matcher.match(matchee)
    File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
      mismatch = self.exception_matcher.match(exc_info)
    File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
      mismatch = matcher.match(matchee)
    File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 414, in match
      reraise(*matchee)
    File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
      result = matchee()
    File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 969, in __call__
      return self._callable_object(*self._args, **self._kwargs)
    File "nova/crypto.py", line 227, in revoke_cert
      raise exception.RevokeCertFailure(project_id=project_id)
  nova.exception.RevokeCertFailure: Failed to revoke certificate for 2

[Yahoo-eng-team] [Bug 1489111] Re: [OSSA 2015-018] IP, MAC, and DHCP spoofing rules can by bypassed by changing device_owner (CVE-2015-5240)

2015-09-14 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489111

Title:
  [OSSA 2015-018] IP, MAC, and DHCP spoofing rules can by bypassed by
  changing device_owner (CVE-2015-5240)

Status in neutron:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to the
  bug as attachments.

  --

  The anti-IP spoofing rules, anti-MAC spoofing rules, and anti-DHCP
  spoofing rules can be bypassed by changing the device_owner field of a
  compute node's port to something that starts with 'network:'.

  Steps to reproduce:

  Create a port on the target network:

  neutron port-create some_network

  Start a repeated update of the device_owner field to immediately
  change it back after nova sets it to 'compute:' on VM
  attachment. (This has to be done quickly because the owner has to be
  set to 'network:something' before the L2 agent wires up the security
  group rules.)

  watch neutron port-update  --device-owner
  network:hello

  Then boot the VM with the port UUID:

  nova boot test --nic port-id= --flavor m1.tiny
  --image cirros-0.3.4-x86_64-uec

  This VM will now have no iptables rules applied because it will be
  treated as a network owned port (e.g. router interface, DHCP
  interface, etc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495523] [NEW] router-interface-add fails with error 500 on PostgreSQL

2015-09-14 Thread Valeriy Ponomaryov
Public bug reported:

If PostgreSQL is used as DB backend then Neutron fails with error code
500 using CLI "router-interface-add":

2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     context)
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters ProgrammingError: column "agents.id" must appear in the GROUP BY clause or be used in an aggregate function
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...

Manila CI Tempest job with PostgreSQL errors:

http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/devstacklog.txt.gz#_2015-09-14_11_37_13_009

http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/screen-q-svc.txt.gz?level=TRACE#_2015-09-14_11_37_13_976
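
An illustrative SQLAlchemy sketch of the two query shapes (the table and
columns below are assumptions, not the actual neutron schema): PostgreSQL
requires every selected, non-aggregated column to appear in GROUP BY,
while MySQL's default mode silently accepts the failing form:

    from sqlalchemy import Column, MetaData, String, Table, select

    metadata = MetaData()
    agents = Table('agents', metadata,
                   Column('id', String(36)),
                   Column('agent_type', String(255)),
                   Column('host', String(255)))

    # Rejected by PostgreSQL: agents.id is selected but not grouped.
    # select([agents.c.id, agents.c.agent_type]).group_by(agents.c.host)

    # Accepted: group by every non-aggregated column in the SELECT list.
    query = select([agents.c.id, agents.c.agent_type]).group_by(
        agents.c.id, agents.c.agent_type)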

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495523

Title:
  router-interface-add fails with error 500 on PostgreSQL

Status in neutron:
  New

Bug description:
  If PostgreSQL is used as DB backend then Neutron fails with error code
  500 using CLI "router-interface-add":

  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     context)
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in do_execute
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters ProgrammingError: column "agents.id" must appear in the GROUP BY clause or be used in an aggregate function
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...

  Manila CI Tempest job with PostgreSQL errors:

  http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/devstacklog.txt.gz#_2015-09-14_11_37_13_009

  http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/screen-q-svc.txt.gz?level=TRACE#_2015-09-14_11_37_13_976

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494963] Re: router_info process_floating_ip_addresses execution time is O(n)

2015-09-14 Thread Ryan Moats
Using 3.19 kernel appears to address this, so marking invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494963

Title:
  router_info process_floating_ip_addresses execution time is O(n)

Status in neutron:
  Invalid

Bug description:
  router_info's process_floating_ip_addresses execution time increases
  as the number of routers scheduled to a network node increases.
  Ideally, this execution time should be O(1) if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494963/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495519] [NEW] import flow doesn't raise error on forbidden location

2015-09-14 Thread Flavio Percoco
Public bug reported:

The following command correctly raises an exception server-side, but the
error is not reported back to the user. This leaves the user "thinking"
the task was created correctly:

$ glance --os-image-api-version 2  --os-image-url
http://localhost:9292/v2 --os-tenant-id a1875f8a27f74708b6fb9281e7430a98
task-create --type import --input '{"import_from_format": "qcow2",
"import_from": "swift://127.0.0.1:8000/test.qcow2", "image_properties":
{"disk_format": "qcow2", "container_format": "bare"}}'

** Affects: glance
 Importance: High
 Status: New

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1495519

Title:
  import flow doesn't raise error on forbidden location

Status in Glance:
  New

Bug description:
  The following command correctly raises an exception server-side, but
  the error is not reported back to the user. This leaves the user
  "thinking" the task was created correctly:

  $ glance --os-image-api-version 2  --os-image-url
  http://localhost:9292/v2 --os-tenant-id
  a1875f8a27f74708b6fb9281e7430a98 task-create --type import --input
  '{"import_from_format": "qcow2", "import_from":
  "swift://127.0.0.1:8000/test.qcow2", "image_properties":
  {"disk_format": "qcow2", "container_format": "bare"}}'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1495519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495505] Re: VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Already have bug under 1454772.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495505

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Currently, 'testr' does not support the --coverage-package-name
  option, which is needed for coverage testing of VPN. Need to update
  tox.ini to use 'test'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495508] [NEW] l2population driver registers duplicate topics

2015-09-14 Thread Ilya Shakhat
Public bug reported:

When l2population driver is enabled it registers the following topics:

developer@ubuntu:~$ sudo rabbitmqctl list_queues | grep l2population
q-agent-notifier-l2population-update    0
q-agent-notifier-l2population-update.ubuntu    0
q-agent-notifier-l2population-update.ubuntu.ubuntu    0
q-agent-notifier-l2population-update.ubuntu_fanout_0feb7221cb4f4c0d98870c8bea8739ba    0
q-agent-notifier-l2population-update_fanout_6af4eb0c2790410da307ae985b984b2d    0

Topics 1, 2 and 5 are expected, while 3 (with double host name) and 4
are not.

** Affects: neutron
 Importance: Undecided
 Assignee: Ilya Shakhat (shakhat)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ilya Shakhat (shakhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495508

Title:
  l2population driver registers duplicate topics

Status in neutron:
  New

Bug description:
  When l2population driver is enabled it registers the following topics:

  developer@ubuntu:~$ sudo rabbitmqctl list_queues | grep l2population
  q-agent-notifier-l2population-update    0
  q-agent-notifier-l2population-update.ubuntu    0
  q-agent-notifier-l2population-update.ubuntu.ubuntu    0
  q-agent-notifier-l2population-update.ubuntu_fanout_0feb7221cb4f4c0d98870c8bea8739ba    0
  q-agent-notifier-l2population-update_fanout_6af4eb0c2790410da307ae985b984b2d    0

  Topics 1, 2 and 5 are expected, while 3 (with double host name) and 4
  are not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495499] [NEW] VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Public bug reported:

Currently, 'testr' does not support the --coverage-package-name option,
which is needed for coverage testing of VPN. Need to update tox.ini to
use 'test'.

** Affects: neutron
 Importance: Undecided
 Status: Invalid


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495499

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Currently, 'testr' does not support the --coverage-package-name
  option, which is needed for coverage testing of VPN. Need to update
  tox.ini to use 'test'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495499] Re: VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Duplicate bug created due to launchpad issue.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495499

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Currently, 'testr' does not support the --coverage-package-name
  option, which is needed for coverage testing of VPN. Need to update
  tox.ini to use 'test'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495502] [NEW] VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Public bug reported:

Currently, 'testr' does not support the --coverage-package-name option,
which is needed for coverage testing of VPN. Need to update tox.ini to
use 'test'.

** Affects: neutron
 Importance: Undecided
 Status: Invalid


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495502

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Currently, 'testr' does not support the --coverage-package-name
  option, which is needed for coverage testing of VPN. Need to update
  tox.ini to use 'test'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495502] Re: VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Duplicate bug created due to launchpad issue.

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495502

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Currently, 'testr' does not support the --coverage-package-name
  option, which is needed for coverage testing of VPN. Need to update
  tox.ini to use 'test'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495504] Re: VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Duplicate bug created due to launchpad issue.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495504

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Coverage tests for VPNaaS are no longer working, as testr does not
  support --coverage-package-name argument any more. Need to switch to
  using 'test' instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495503] Re: VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Duplicate bug created due to launchpad issue.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495503

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Coverage tests for VPNaaS are no longer working, as testr does not
  support --coverage-package-name argument any more. Need to switch to
  using 'test' instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495497] [NEW] VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Public bug reported:

Currently, 'testr' does not support the --coverage-package-name option,
which is needed for coverage testing of VPN. Need to update tox.ini to
use 'test'.

** Affects: neutron
 Importance: Undecided
 Status: Invalid


** Tags: vpnaas

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495497

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Currently, 'testr' does not support the --coverage-package-name
  option, which is needed for coverage testing of VPN. Need to update
  tox.ini to use 'test'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495504] [NEW] VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Public bug reported:

Coverage tests for VPNaaS are no longer working, as testr does not
support --coverage-package-name argument any more. Need to switch to
using 'test' instead.

** Affects: neutron
 Importance: Undecided
 Status: Invalid


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495504

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Coverage tests for VPNaaS are no longer working, as testr does not
  support --coverage-package-name argument any more. Need to switch to
  using 'test' instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495503] [NEW] VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Public bug reported:

Coverage tests for VPNaaS are no longer working, as testr does not
support --coverage-package-name argument any more. Need to switch to
using 'test' instead.

** Affects: neutron
 Importance: Undecided
 Status: Invalid


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495503

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  Invalid

Bug description:
  Coverage tests for VPNaaS are no longer working, as testr does not
  support --coverage-package-name argument any more. Need to switch to
  using 'test' instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495505] [NEW] VPNaaS: Coverage tests broken

2015-09-14 Thread Paul Michali
Public bug reported:

Currently, 'testr' does not support the --coverage-package-name option,
which is needed for coverage testing of VPN. Need to update tox.ini to
use 'test'.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495505

Title:
  VPNaaS: Coverage tests broken

Status in neutron:
  In Progress

Bug description:
  Currently, 'testr' does not support the --coverage-package-name
  option, which is needed for coverage testing of VPN. Need to update
  tox.ini to use 'test'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495472] [NEW] Horizon forbids user access to identity users/groups with OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT=True

2015-09-14 Thread Paul Karikh
Public bug reported:

When Horizon is set up with OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT =
True, the user will not be able to access identity/users and
identity/groups (an Unauthorized error is raised), which in turn makes
Horizon log the user out.

Horizon fills in the domain name before sending the request to Keystone
the following way:

domain_context = self.request.session.get('domain_context', None)

But there is no `domain_context` variable in the session, so it will be
set to None. And domain=None will be sent to Keystone by the line

users = api.keystone.user_list(self.request, domain=domain_context)

which is present in all identity dashboard views (users, projects, groups, 
domains and NOT roles).
For example: 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/users/views.py#L50

It looks like if we change the code to

users = api.keystone.user_list(self.request, domain=self.request.user.user_domain_name)

everything will be OK.

It is strange that identity/users does not work without a correct domain
while identity/projects does, because they both send requests to Keystone
without a correctly set domain.
And it looks like this problem only occurs with Keystone v3 (there are no
domains in v2 Keystone, so no domain - no problems).

After pushing the "SetDomainContext" button in identity/domains,
everything works fine.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495472

Title:
  Horizon forbids user access to identity users/groups with
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT=True

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When Horizon is set up with OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT =
  True, the user will not be able to access identity/users and
  identity/groups (an Unauthorized error is raised), which in turn makes
  Horizon log the user out.

  Horizon fills in the domain name before sending the request to Keystone
  the following way:

  domain_context = self.request.session.get('domain_context', None)

  But there is no `domain_context` variable in the session, so it will be
  set to None. And domain=None will be sent to Keystone by the line

  users = api.keystone.user_list(self.request, domain=domain_context)

  which is present in all identity dashboard views (users, projects, groups, 
  domains and NOT roles).
  For example: 
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/users/views.py#L50

  It looks like if we change the code to

  users = api.keystone.user_list(self.request, domain=self.request.user.user_domain_name)

  everything will be OK.

  It is strange that identity/users does not work without a correct domain
  while identity/projects does, because they both send requests to Keystone
  without a correctly set domain.
  And it looks like this problem only occurs with Keystone v3 (there are no
  domains in v2 Keystone, so no domain - no problems).

  After pushing the "SetDomainContext" button in identity/domains,
  everything works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495465] [NEW] RDNSS Option should be included in ICMPv6 Router Advertisements

2015-09-14 Thread Tore Anderson
Public bug reported:

The ICMPv6 Router Advertisements on an IPv6 subnet handled by Neutron
do not contain the Recursive DNS Server Option, even though the subnet
has been created with an appropriate "dns_nameservers" parameter. This
means that instances on a subnet using SLAAC do not learn any DNS
servers, and thus cannot resolve any hostnames after being provisioned.
That is likely to break lots of things, such as further provisioning of
applications to the instance.

The RDNSS option is documented in RFC 6106. It can be configured in
radvd.conf using the following syntax:

interface qr-foo {
  RDNSS server1 [server2 ...] {
# this is optional, but prevents problems noted in the second bullet of
# https://tools.ietf.org/html/draft-ietf-6man-rdnss-rfc6106bis-02#appendix-B
AdvRDNSSLifetime infinity;
  };
};

Observed on OpenStack Kilo.

Note: It might be that using DHCPv6 in some capacity would work around
this issue. I have not yet tested this, though.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495465

Title:
  RDNSS Option should be included in ICMPv6 Router Advertisements

Status in neutron:
  New

Bug description:
  The ICMPv6 Router Advertisements on an IPv6 subnet handled by Neutron
  do not contain the Recursive DNS Server Option, even though the
  subnet has been created with an appropriate "dns_nameservers"
  parameter. This means that instances on a subnet using SLAAC do not
  learn any DNS servers, and thus cannot resolve any hostnames after
  being provisioned. That is likely to break lots of things, such as
  further provisioning of applications to the instance.

  The RDNSS option is documented in RFC 6106. It can be configured in
  radvd.conf using the following syntax:

  interface qr-foo {
RDNSS server1 [server2 ...] {
  # this is optional, but prevents problems noted in the second bullet of
  # 
https://tools.ietf.org/html/draft-ietf-6man-rdnss-rfc6106bis-02#appendix-B
  AdvRDNSSLifetime infinity;
};
  };

  Observed on OpenStack Kilo.

  Note: It might be that using DHCPv6 in some capacity would work around
  this issue. I have not yet tested this, though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495463] [NEW] While creating firewall for another tenant which does not have router, firewall policy , firewall gets created and it comes into active state.

2015-09-14 Thread ranjitray
Public bug reported:

While creating a firewall for another tenant that does not have a router
or a firewall policy, the firewall gets created and comes into the active
state.

Steps Followed:

i.   There are two tenants, admin and demo. Create a router and a network, 
and add a router interface for admin (nothing for the demo user).
ii.  Source the rc file for the admin user; create a firewall rule and a 
firewall policy.
iii. Try to create a firewall for the demo user by passing the demo 
tenant's --tenant-id.

Observation: the firewall gets created and its status is active even
though the tenant does not have any router, network, firewall rule, or
firewall policy. Attaching file “firewall created for dem user.txt” for
your reference. I think we should not be able to create a firewall when
there is no firewall policy or rule for the given tenant.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: client fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495463

Title:
  While creating firewall for another tenant which does  not have
  router, firewall policy , firewall gets created and it comes into
  active state.

Status in neutron:
  New

Bug description:
  While creating a firewall for another tenant that does not have a
  router or a firewall policy, the firewall gets created and comes into
  the active state.

  Steps Followed:

  i.   There are two tenants, admin and demo. Create a router and a network, 
  and add a router interface for admin (nothing for the demo user).
  ii.  Source the rc file for the admin user; create a firewall rule and a 
  firewall policy.
  iii. Try to create a firewall for the demo user by passing the demo 
  tenant's --tenant-id.

  Observation: the firewall gets created and its status is active even
  though the tenant does not have any router, network, firewall rule, or
  firewall policy. Attaching file “firewall created for dem user.txt”
  for your reference. I think we should not be able to create a firewall
  when there is no firewall policy or rule for the given tenant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495444] [NEW] MTU Option should be included in ICMPv6 Router Advertisements

2015-09-14 Thread Tore Anderson
Public bug reported:

When using an overlay network on a physical network with standard
Ethernet MTU (1500 octets), the instances' effective MTU is reduced.

The Neutron Router should inform the nodes about this fact, by including
the MTU Option in the ICMPv6 Router Advertisements it sends. The current
situation leads to blackholing of traffic, as the absence of the MTU
Option causes the instance to believe it will be able to successfully
transmit 1500-octet frames to the network. However, these will be
silently discarded. The symptom is usually that the TCP three-way
handshake succeeds, but the connection appears to hang the moment
payload starts being transmitted.

The MTU Option is documented here:
https://tools.ietf.org/html/rfc4861#section-4.6.4. The corresponding
radvd.conf option is called AdvLinkMTU. Note that the Neutron router is
clearly aware of the reduced effective MTU, as it does use the
corresponding DHCPv4 option to advertise it to instances/subnets using
IPv4.

I observe this problem on OpenStack Kilo.
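
For illustration, the radvd.conf form might be (the 1450-octet value here
is an assumption for a VXLAN overlay, which adds 50 octets of
encapsulation overhead on a 1500-octet physical MTU):

interface qr-foo {
  AdvLinkMTU 1450;
};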

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495444

Title:
  MTU Option should be included in ICMPv6 Router Advertisements

Status in neutron:
  New

Bug description:
  When using an overlay network on a physical network with standard
  Ethernet MTU (1500 octets), the instances' effective MTU is reduced.

  The Neutron Router should inform the nodes about this fact, by
  including the MTU Option in the ICMPv6 Router Advertisements it sends.
  The current situation leads to blackholing of traffic, as the absence
  of the MTU Option causes the instance to believe it will be able to
  successfully transmit 1500 octets large frames to the network.
  However, these will be silently discarded. The symptom is usually
  that the TCP three-way handshake succeeds, but the connection
  appears to hang the moment payload starts being transmitted.

  The MTU Option is documented here:
  https://tools.ietf.org/html/rfc4861#section-4.6.4. The corresponding
  radvd.conf option is called AdvLinkMTU. Note that the Neutron router
  is clearly aware of the reduced effective MTU, as it does use the
  corresponding DHCPv4 option to advertise it to instances/subnets using
  IPv4.

  I observe this problem on OpenStack Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495440] [NEW] Fwaas/CLI: Can not delete multiple firewall rule by passing multiple firewall rule id

2015-09-14 Thread ranjitray
Public bug reported:

While trying to delete multiple firewall rule using CLI by passing
firewall rule multiple times, it deletes only the first firewall Rule id

stack@hdp-001:~$ neutron
(neutron) firewall-rule-list
+--------------------------------------+-----------------+--------------------+-----------------------------+---------+
| id                                   | name            | firewall_policy_id | summary                     | enabled |
+--------------------------------------+-----------------+--------------------+-----------------------------+---------+
| 8c4ea5c6-a6e4-43ab-a503-0a2265119238 | test1491637     |                    | TCP,                        | True    |
|                                      |                 |                    |  source: none(none),        |         |
|                                      |                 |                    |  dest: none(none),          |         |
|                                      |                 |                    |  allow                      |         |
| b8c1c061-8f92-482d-94d3-678f42c7ccd7 | rayrafw2        |                    | ICMP,                       | True    |
|                                      |                 |                    |  source: none(none),        |         |
|                                      |                 |                    |  dest: none(none),          |         |
|                                      |                 |                    |  allow                      |         |
| ba35dde7-8b07-4ba1-8338-496962c83dbc | testrule1491637 |                    | UDP,                        | True    |
|                                      |                 |                    |  source: 10.25.10.2/32(80), |         |
|                                      |                 |                    |  dest: none(none),          |         |
|                                      |                 |                    |  deny                       |         |
+--------------------------------------+-----------------+--------------------+-----------------------------+---------+
(neutron) firewall-rule-delete 8c4ea5c6-a6e4-43ab-a503-0a2265119238 b8c1c061-8f92-482d-94d3-678f42c7ccd7
Deleted firewall_rule: 8c4ea5c6-a6e4-43ab-a503-0a2265119238
(neutron) firewall-rule-list
+--------------------------------------+-----------------+--------------------+-----------------------------+---------+
| id                                   | name            | firewall_policy_id | summary                     | enabled |
+--------------------------------------+-----------------+--------------------+-----------------------------+---------+
| b8c1c061-8f92-482d-94d3-678f42c7ccd7 | rayrafw2        |                    | ICMP,                       | True    |
|                                      |                 |                    |  source: none(none),        |         |
|                                      |                 |                    |  dest: none(none),          |         |
|                                      |                 |                    |  allow                      |         |
| ba35dde7-8b07-4ba1-8338-496962c83dbc | testrule1491637 |                    | UDP,                        | True    |
|                                      |                 |                    |  source: 10.25.10.2/32(80), |         |
|                                      |                 |                    |  dest: none(none),          |         |
|                                      |                 |                    |  deny                       |         |
+--------------------------------------+-----------------+--------------------+-----------------------------+---------+
(neutron)

It would be better if we could delete multiple firewall rules by passing
multiple firewall rule IDs.
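
A rough sketch of how the client command could accept several IDs, assuming
an argparse/cliff-style command class like the ones in python-neutronclient
(the class and attribute names are illustrative):

    # Hypothetical sketch (Python): accept one or more rule IDs and delete
    # them one by one.
    def get_parser(self, prog_name):
        parser = super(DeleteFirewallRule, self).get_parser(prog_name)
        parser.add_argument('firewall_rules', metavar='FIREWALL_RULE',
                            nargs='+',
                            help='ID(s) of the firewall rule(s) to delete')
        return parser

    def run(self, parsed_args):
        client = self.get_client()
        for rule_id in parsed_args.firewall_rules:
            client.delete_firewall_rule(rule_id)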

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: client fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495440

Title:
  Fwaas/CLI:  Can not delete multiple firewall rule by passing multiple
  firewall rule id

Status in neutron:
  New

Bug description:
  While trying to delete multiple firewall rule using CLI by passing
  firewall rule multiple times, it deletes only the first firewall Rule
  id

  stack@hdp-001:~$ neutron
  (neutron) firewall-rule-list
  +--------------------------------------+-----------------+--------------------+-----------------------------+---------+
  | id                                   | name            | firewall_policy_id | summary                     | enabled |
  +--------------------------------------+-----------------+--------------------+-----------------------------+---------+
  | 8c4ea5c6-a6e4-43ab-a503-0a2265119238 | test1491637     |                    | TCP,                        | True    |

[Yahoo-eng-team] [Bug 1470666] Re: auto-file-discovery-test

2015-09-14 Thread Rob Cresswell
*** This bug is a duplicate of bug 1473138 ***
https://bugs.launchpad.net/bugs/1473138

Done, thanks for the feedback Sean, much appreciated!

** This bug has been marked a duplicate of bug 1473138
   Autodiscovery tests needed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1470666

Title:
  auto-file-discovery-test

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  We need to have tests to make sure the auto-file-discovery works as
  expected.

  Related patch: https://review.openstack.org/#/c/191592/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1470666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470665] Re: auto-file-discovery-test

2015-09-14 Thread Rob Cresswell
*** This bug is a duplicate of bug 1473138 ***
https://bugs.launchpad.net/bugs/1473138

** This bug is no longer a duplicate of bug 1470666
   auto-file-discovery-test
** This bug has been marked a duplicate of bug 1473138
   Autodiscovery tests needed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1470665

Title:
  auto-file-discovery-test

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We need to have tests to make sure the auto-file-discovery works as
  expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1470665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495433] [NEW] IPv6 packets from non-EUI-64 addresses dropped on SLAAC subnets

2015-09-14 Thread Tore Anderson
Public bug reported:

A newly created instance on an IPv6 subnet is provisioned with the
following ip6tables chain on the hypervisor:

tore@node-a3-02:~$ sudo ip6tables -vL neutron-openvswi-sb6a851ba-e
Chain neutron-openvswi-sb6a851ba-e (1 references)
 pkts bytes target  prot opt in   out  source                                 destination
 8869 1298K RETURN  all      any  any  2001:db8:200:f020:f816:3eff:fea7:b27d  anywhere     MAC FA:16:3E:A7:B2:7D /* Allow traffic from defined IP/MAC pairs. */
   23  1820 DROP    all      any  any  anywhere                               anywhere     /* Drop traffic without an IP/MAC allow rule. */

This blocks outbound traffic from the instance sourced from addresses
other than the one mentioned in the RETURN rule. (Inbound traffic does
work fine though, and can be observed in a tcpdump session on the
VM. Also, ICMPv6 appears to be allowed elsewhere, so ping6 works.)

The logic appears to be based on a faulty assumption, namely that an
IPv6 node on a SLAAC subnet will only have a single IPv6 address, and
that this address has an EUI-64 based Interface ID.

That is, however, quite simply not how IPv6 works; when a host receives
an ICMPv6 Router Advertisement containing a /64 Prefix Information
Option with the Autonomous flag set, this essentially informs the host
that it can freely use *any* arbitrary address inside that /64 (provided
that it performs the Duplicate Address Detection algorithm). The host is
under no obligation to use the EUI-64 algorithm in order to construct
its Interface ID. Even if it does use EUI-64, there is no requirement
that prevents it from at the same time self-configuring other addresses
using a different Interface ID generation algorithm.

This bug breaks the algorithms defined in RFC4941, RFC6877, RFC7217,
I-D.ietf-v6ops-siit-dc-2xlat, as well as any other application or use case
that requires multiple IPv6 addresses. See also
I-D.ietf-v6ops-host-addr-availability.

The fix should be trivial, namely to mask the source address in the
RETURN rule with a /64. In the example above, it should have been
2001:db8:200:f020::/64 rather than
2001:db8:200:f020:f816:3eff:fea7:b27d. (Note that the ip6tables binary
will do so automatically if it is given an address such as
"2001:db8:200:f020:f816:3eff:fea7:b27d/64".)

Tore

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495433

Title:
  IPv6 packets from non-EUI-64 addresses dropped on SLAAC subnets

Status in neutron:
  New

Bug description:
  A newly created instance on an IPv6 subnet is provisioned with the
  following ip6tables chain on the hypervisor:

  tore@node-a3-02:~$ sudo ip6tables -vL neutron-openvswi-sb6a851ba-e
  Chain neutron-openvswi-sb6a851ba-e (1 references)
   pkts bytes target  prot opt in   out  source                                 destination
   8869 1298K RETURN  all      any  any  2001:db8:200:f020:f816:3eff:fea7:b27d  anywhere     MAC FA:16:3E:A7:B2:7D /* Allow traffic from defined IP/MAC pairs. */
     23  1820 DROP    all      any  any  anywhere                               anywhere     /* Drop traffic without an IP/MAC allow rule. */

  This blocks outbound traffic from the instance sourced from addresses
  other than the one mentioned in the RETURN rule. (Inbound traffic does
  work fine though, and can be observed in a tcpdump session on the
  VM. Also, ICMPv6 appears to be allowed elsewhere, so ping6 works.)

  The logic appears to be based on a faulty assumption, namely that an
  IPv6 node on a SLAAC subnet will only have a single IPv6 address, and
  that this address has an EUI-64 based Interface ID.

  That is, however, quite simply not how IPv6 works; when a host
  receives an ICMPv6 Router Advertisement containing a /64 Prefix
  Information Option with the Autonomous flag set, this essentially
  informs the host that it can freely use *any* arbitrary address inside
  that /64 (provided that it performs the Duplicate Address Detection
  algorithm). The host is under no obligation to use the EUI-64
  algorithm in order to constructs its Interface ID. Even if it does use
  EUI-64, there is no requirement that prevents it from at the same time
  self-configuring other addresses using a different Interface ID
  generation algorithm.

  This bug breaks the algorithms defined in RFC4941, RFC6877, RFC7217,
  I-D.ietf-v6ops-siit-dc-2xlat, as well as any other application or use
  case that requires multiple IPv6 addresses. See also I-D.ietf-v6ops-
  host-addr-availability.

  The fix should be trivial, namely to mask the source address in the
  RETURN rule with a /64. In the example above, it should have been
  2001:db8:200:f020::/64 rather than
  2001:db8:200:f020:f816:3eff:fea7:b27d. (Note that the ip6tables binary
  will do so automatically if it is given an address such as
  "2001:db8:200:f020:f816:3eff:fea7:b27d/64".)

[Yahoo-eng-team] [Bug 1495430] [NEW] delete lbaasv2 can't delete lbaas namespace automatically.

2015-09-14 Thread Hong Hui Xiao
Public bug reported:

Tried lbaas v2 in my env and found lots of orphan lbaas namespaces. Looking
back at the code, the lbaas instance is undeployed when a listener is
deleted, and everything is deleted except the namespace.
However, in the method that deletes a loadbalancer, the namespace is
deleted automatically.
The behavior is not consistent; the namespace should be deleted when
deleting a listener too.
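
A rough sketch of the cleanup the listener-delete path could add, assuming
neutron's ip_lib and the haproxy namespace driver (the function name is
illustrative):

    # Hypothetical sketch (Python): drop the namespace once the listener is
    # undeployed, mirroring what the loadbalancer delete path already does.
    from neutron.agent.linux import ip_lib

    def _cleanup_namespace(ns_name):
        ip_wrapper = ip_lib.IPWrapper(namespace=ns_name)
        if ip_wrapper.netns.exists(ns_name):
            # Removes the namespace only if no devices are left in it.
            ip_wrapper.garbage_collect_namespace()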

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Description changed:

  Tried lbaas v2 in my env and found lots of orphan lbaas namespaces. Looking
  back at the code, the lbaas instance is undeployed when a listener is
  deleted, and everything is deleted except the namespace.
  However, in the method that deletes a loadbalancer, the namespace is
  deleted automatically.
+ The behavior is not consistent; the namespace should be deleted when
+ deleting a listener too.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495430

Title:
  delete lbaasv2 can't delete lbaas namespace automatically.

Status in neutron:
  New

Bug description:
  Tried lbaas v2 in my env and found lots of orphan lbaas namespaces. Looking
  back at the code, the lbaas instance is undeployed when a listener is
  deleted, and everything is deleted except the namespace.
  However, in the method that deletes a loadbalancer, the namespace is
  deleted automatically.
  The behavior is not consistent; the namespace should be deleted when
  deleting a listener too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285389] Re: Add Shelving command to Horizon

2015-09-14 Thread Rob Cresswell
This is a blueprint, not a bug, and the relevant blueprint has been
marked as Implemented.

** Changed in: horizon
   Status: Fix Committed => Invalid

** Changed in: horizon
Milestone: liberty-rc1 => None

** Changed in: horizon
 Assignee: Timur Sufiev (tsufiev-x) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1285389

Title:
  Add Shelving command to Horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Hi, we have a team of 2 from Facebook's Open Academy that would like
  to add a VM Shelving command to Horizon.

  This is currently our proposed wireframe of the Shelving command with the 
instance page:
  
https://github.com/OpenAcademy-OpenStack/vm-hibernation/blob/master/docs/wireframe.pdf

  Any feedback is welcomed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1285389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495429] [NEW] Vmware: Failed to snapshot an instance with a big root disk.

2015-09-14 Thread Kevin Tibi
Public bug reported:

python-nova-2015.1.1-1.el7.noarch
openstack-nova-common-2015.1.1-1.el7.noarch
python-novaclient-2.23.0-1.el7.noarch
openstack-nova-compute-2015.1.1-1.el7.noarch
python-oslo-vmware-0.11.1-1.el7.noarch

I can't snapshot an instance if the root disk is too large (>8GB).

The snapshot in vCenter works, the OVF export works, and the image download
on the glance node works, but after the download the compute node raises a
trace and deletes the glance image.

Trace in compute.log ==>

2015-09-14 10:46:00.003 10248 DEBUG oslo_vmware.api [-] Fault list: 
[ManagedObjectNotFound] _invoke_api 
/usr/lib/python2.7/site-packages/oslo_vmware/api.py:326
2015-09-14 10:46:00.004 10248 DEBUG oslo_vmware.exceptions [-] Fault 
ManagedObjectNotFound not matched. get_fault_class 
/usr/lib/python2.7/site-packages/oslo_vmware/exceptions.py:250
2015-09-14 10:46:00.004 10248 DEBUG nova.virt.vmwareapi.vm_util 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] [instance: 
5da44d78-b0cf-44f6-9789-b0fd78906b4e] Destroying the VM destroy_vm 
/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vm_util.py:1304
2015-09-14 10:46:00.004 10248 DEBUG oslo_vmware.api 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] Waiting for function _invoke_api to 
return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:00.028 10248 DEBUG oslo_vmware.api 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] Waiting for the task: (returnval){
2015-09-14 10:46:00.028 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to 
read info of task: (returnval){
2015-09-14 10:46:00.029 10248 DEBUG oslo_vmware.api [-] Waiting for function 
_invoke_api to return. func 
/usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:05.029 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to 
read info of task: (returnval){
2015-09-14 10:46:05.030 10248 DEBUG oslo_vmware.api [-] Waiting for function 
_invoke_api to return. func 
/usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:05.056 10248 DEBUG oslo_vmware.api [-] Task: (returnval){
2015-09-14 10:46:05.056 10248 INFO nova.virt.vmwareapi.vm_util 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] [instance: 
5da44d78-b0cf-44f6-9789-b0fd78906b4e] Destroyed the VM
2015-09-14 10:46:05.056 10248 DEBUG nova.virt.vmwareapi.vmops 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] [instance: 
5da44d78-b0cf-44f6-9789-b0fd78906b4e] Deleting Snapshot of the VM instance 
_delete_vm_snapshot 
/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py:759
2015-09-14 10:46:05.057 10248 DEBUG oslo_vmware.api 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] Waiting for function _invoke_api to 
return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:05.084 10248 DEBUG oslo_vmware.api 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] Waiting for the task: (returnval){
2015-09-14 10:46:05.084 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to 
read info of task: (returnval){
2015-09-14 10:46:05.085 10248 DEBUG oslo_vmware.api [-] Waiting for function 
_invoke_api to return. func 
/usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:10.085 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to 
read info of task: (returnval){
2015-09-14 10:46:10.086 10248 DEBUG oslo_vmware.api [-] Waiting for function 
_invoke_api to return. func 
/usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:10.105 10248 DEBUG oslo_vmware.api [-] Task: (returnval){
2015-09-14 10:46:10.106 10248 DEBUG nova.virt.vmwareapi.vmops 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] [instance: 
5da44d78-b0cf-44f6-9789-b0fd78906b4e] Deleted Snapshot of the VM instance 
_delete_vm_snapshot 
/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py:765
2015-09-14 10:46:10.106 10248 DEBUG nova.compute.manager 
[req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c 
eb151dcad08b434ab919a47392da4c95 - - -] [instance: 
5da44d78-b0cf-44f6-9789-b0fd78906b4e] Cleaning up image 
f17662ad-8627-4323-ab57-b2240ed45b61 decorated_function 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:397
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 
5da44d78-b0cf-44f6-9789-b0fd78906b4e] Traceback (most recent call last):
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 
5da44d78-b0cf-44f6-9789-b0fd78906b4e]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in 
decorated_function
2015-09-14 10

[Yahoo-eng-team] [Bug 1495423] [NEW] Cannot get dom0's ovsdb monitor result

2015-09-14 Thread huan
Public bug reported:

With XenServer+Neutron, ML2 plugin, OVS driver, VLAN type

When launching a new instance, the q-agt that runs on the compute node cannot
see the new instance's port changes in dom0's ovsdb.
This means the q-agt cannot add the corresponding tag for this port, so the
instance cannot get an IP from DHCP: without the correct tag, OVS flow rules
will drop all the packets from this instance to the DHCP agent.

The dom0 ovsdb monitor output is obtained via the netwrap plugin, which
resides in dom0:

neutron\neutron\plugins\ml2\drivers\openvswitch\agent\xenapi\etc\xapi.d\plugins\netwrap

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495423

Title:
  Cannot get dom0's ovsdb monitor result

Status in neutron:
  New

Bug description:
  With XenServer+Neutron, ML2 plugin, OVS driver, VLAN type

  When launching a new instance, the q-agt that runs on the compute node
  cannot see the new instance's port changes in dom0's ovsdb.
  This means the q-agt cannot add the corresponding tag for this port, so
  the instance cannot get an IP from DHCP: without the correct tag, OVS
  flow rules will drop all the packets from this instance to the DHCP agent.

  The dom0 ovsdb monitor output is obtained via the netwrap plugin, which
  resides in dom0:

  
neutron\neutron\plugins\ml2\drivers\openvswitch\agent\xenapi\etc\xapi.d\plugins\netwrap

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495400] [NEW] list instance details caused host CPU overload if each one instance had lots of instance faults

2015-09-14 Thread Rui Chen
Public bug reported:

1. code base

$ git log -1
commit e70ff282f8ed92133942f4d878c2b5f8564f83a8
Merge: 9e0e7a3 222085d
Author: Jenkins 
Date:   Mon Sep 14 05:44:39 2015 +

Merge "Invalidate AZ cache when the instance AZ information is
different"

2. Reproduce steps:

* background data: 1000 instances, 1 faults record in each instance
* list all instance details

Expected result:
* nova-api Host CPU load in a reasonable range, like: 85%

Actual result:
* nova-api Host CPU load persist 100%

NOTE: we collect all faults of each instance in the db api (a loop over all
results), and we only return the latest one in the InstanceList object
(another loop over all results). I think we should return the latest fault
of each instance in the db api, which would avoid the extra loop over all
results in the InstanceList object.
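
A rough sketch of the kind of query suggested above, pushing the "latest
fault per instance" selection into the db api; it assumes nova's sqlalchemy
models and model_query helper, and is not the actual implementation:

    # Hypothetical sketch (Python/SQLAlchemy): one GROUP BY in SQL instead
    # of a Python loop over every fault row of every instance.
    from sqlalchemy import func

    def instance_fault_get_latest_by_instance_uuids(context, instance_uuids):
        latest = (model_query(context, models.InstanceFault).
                  with_entities(models.InstanceFault.instance_uuid,
                                func.max(models.InstanceFault.id).
                                label('max_id')).
                  filter(models.InstanceFault.instance_uuid.
                         in_(instance_uuids)).
                  group_by(models.InstanceFault.instance_uuid).
                  subquery())
        return (model_query(context, models.InstanceFault).
                join(latest, models.InstanceFault.id == latest.c.max_id).
                all())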

** Affects: nova
 Importance: Undecided
 Assignee: Rui Chen (kiwik-chenrui)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Rui Chen (kiwik-chenrui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495400

Title:
  list instance details caused host CPU overload if each one instance
  had lots of instance faults

Status in OpenStack Compute (nova):
  New

Bug description:
  1. code base

  $ git log -1
  commit e70ff282f8ed92133942f4d878c2b5f8564f83a8
  Merge: 9e0e7a3 222085d
  Author: Jenkins 
  Date:   Mon Sep 14 05:44:39 2015 +

  Merge "Invalidate AZ cache when the instance AZ information is
  different"

  2. Reproduce steps:

  * background data: 1000 instances, 1 faults record in each instance
  * list all instance details

  Expected result:
  * nova-api Host CPU load in a reasonable range, like: 85%

  Actual result:
  * nova-api Host CPU load persist 100%

  NOTE: we collect all faults of each instance in the db api (a loop over
  all results), and we only return the latest one in the InstanceList
  object (another loop over all results). I think we should return the
  latest fault of each instance in the db api, which would avoid the extra
  loop over all results in the InstanceList object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495388] [NEW] The instance hostname didn't match the RFC 952 and 1123's definition

2015-09-14 Thread Alex Xu
Public bug reported:

The instance hostname is converted from the instance's name. The method used
to do that is
https://github.com/openstack/nova/blob/master/nova/utils.py#L774

But it looks like this method doesn't match all the cases described in the
RFCs.

For example, if the hostname is just one character, like 'A', this method
also returns 'A', which isn't allowed by the RFC.

Also, the hostname is updated in the wrong place:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L641
It just updates the instance db entry again after the instance entry is
created. We could populate the hostname before instance creation and save
one db operation.
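
For reference, a minimal sketch of an RFC 952/1123-style sanitizer; this is
illustrative only, not the method linked above:

    import re

    def sanitize_hostname_sketch(name, default='host'):
        # RFC 1123 relaxes RFC 952: a label may start with a digit, but may
        # only contain letters, digits and hyphens, must not begin or end
        # with a hyphen, and is limited to 63 octets.
        name = re.sub(r'[ _]', '-', name)          # common substitutions
        name = re.sub(r'[^-a-zA-Z0-9]', '', name)  # drop anything else
        name = name.strip('-')[:63].rstrip('-')    # trim hyphens and length
        return name or default                     # never return empty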

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495388

Title:
  The instance hostname didn't match the RFC 952 and 1123's definition

Status in OpenStack Compute (nova):
  New

Bug description:
  The instance hostname is converted from the instance's name. The method
  used to do that is
  https://github.com/openstack/nova/blob/master/nova/utils.py#L774

  But it looks like this method doesn't match all the cases described in
  the RFCs.

  For example, if the hostname is just one character, like 'A', this
  method also returns 'A', which isn't allowed by the RFC.

  Also, the hostname is updated in the wrong place:
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L641
  It just updates the instance db entry again after the instance entry is
  created. We could populate the hostname before instance creation and save
  one db operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp