[Yahoo-eng-team] [Bug 1561337] [NEW] Unable to launch instance

2016-03-23 Thread Arun V
Public bug reported:

I installed OpenStack Liberty using the official installation guide for Ubuntu 14.04.
I am unable to launch an instance.

Here is the relevant log from nova-api.log:


2016-03-24 10:12:53.412 14413 INFO nova.osapi_compute.wsgi.server 
[req-ec45686b-ad24-4949-83bb-42b3ed336b94 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/os-quota-sets/b3338b63521d4fb7a87011108e9b1107
 HTTP/1.1" status: 200 len: 568 time: 0.0969541

2016-03-24 10:12:57.869 14412 INFO nova.osapi_compute.wsgi.server 
[req-dcc90aa0-618f-4328-ace0-0e50d3a7bb53 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/servers/detail?all_tenants=True_id=b3338b63521d4fb7a87011108e9b1107
 HTTP/1.1" status: 200 len: 211 time: 3.3184321
2016-03-24 10:12:59.651 14412 INFO nova.osapi_compute.wsgi.server 
[req-95cb7922-c703-4036-ba13-005dff79741e 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/os-keypairs HTTP/1.1" status: 200 len: 212 
time: 0.0333679
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
[req-2efac7ae-b1ae-475c-bb03-ab7f28b8ac3d 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] Unexpected exception in API method
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
611, in create
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 149, in inner
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1581, in create
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1181, in 
_create_instance
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 955, in 
_validate_and_build_base_options
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
pci_request_info, requested_networks)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1059, in 
create_pci_requests_for_sriov_ports
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions neutron = 
get_client(context, admin=True)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 237, in 
get_client
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
auth_token = _ADMIN_AUTH.get_token(_SESSION)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/base.py", line 
200, in get_token
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
self.get_access(session).auth_token
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/base.py", line 
240, in get_access
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
self.auth_ref = self.get_auth_ref(session)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/v2.py", line 88, 
in get_auth_ref
2016-03-24 10:13:14.307 14413 ERROR 
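The traceback breaks off inside keystoneclient's v2 get_auth_ref while nova-api builds an admin token to call neutron (get_client(context, admin=True)). In Liberty this path reads admin credentials from the [neutron] section of nova.conf, so bad or unreachable values there are a common cause. A hedged example of the settings to double-check; every value below is a placeholder, none is taken from this report:

```ini
[neutron]
url = http://controller:9696
# Deprecated v2-style admin credentials (this matches the
# keystoneclient v2 plugin shown in the traceback); placeholders only.
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS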

[Yahoo-eng-team] [Bug 1552487] Re: Add tag mechanism for network resources

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/290342
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=02cf1684a34e76ccfd8032fb2e8e260d3d2179f7
Submitter: Jenkins
Branch: master

commit 02cf1684a34e76ccfd8032fb2e8e260d3d2179f7
Author: Hirofumi Ichihara 
Date:   Wed Mar 16 01:09:33 2016 +0900

Add advanced Tag section to networking guide

Change-Id: I9857d97e2c7b31d752a84f8c86d4ebf80caa9f84
Closes-Bug: #1552487


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552487

Title:
  Add tag mechanism for network resources

Status in neutron:
  Invalid
Status in openstack-api-site:
  In Progress
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/273881
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit ec1457dd7503626c917031ce4a16a366fe70c7bb
  Author: Hirofumi Ichihara 
  Date:   Tue Mar 1 11:05:56 2016 +0900

  Add tag mechanism for network resources
  
  Introduce a generic mechanism to allow the user to set tags
  on Neutron resources. This patch adds the function for "network"
  resource with tags.
  
  APIImpact
  DocImpact: allow users to set tags on network resources
  
  Partial-Implements: blueprint add-tags-to-core-resources
  Related-Bug: #1489291
  Change-Id: I4d9e80d2c46d07fc22de8015eac4bd3dacf4c03a

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561310] [NEW] dashboard displays wrong quotas

2016-03-23 Thread yj
Public bug reported:

1. The user tries to boot a new VM by clicking the "+ Launch Instance" button 
in the dashboard (Project -> Compute -> Instances).
2. The "+ Launch Instance" button is disabled and shows "quota exceeded".
3. The user then goes to Project -> Compute -> Overview, but finds that 
Instances, VCPUs, and RAM have not exceeded their quotas. The overview shows:
Instances
Used 8 of 10
VCPUs
Used 18 of 20
RAM
Used 512MB of 50.0GB
This confuses the user: is anything displayed incorrectly?

Root cause:
I think one root cause is that the current project has other work in progress, 
such as a VM rebuild, which reserves some quota and sets the 'reserved' field 
(nova quota_usages table) to a non-zero value.
Running select * from nova.quota_usages where project_id = 'current project id' 
shows that in the 'cores' row, 'reserved' = 2. Because of this, VCPUs are 
actually at 20 of 20.
Expected result:
The 'reserved' quota should be counted as used quota, so the dashboard 
(Project -> Compute -> Overview) should show:
VCPUs
Used 20 of 20.
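If the overview is to match the launch-button check, the reserved amount has to be folded into the displayed figure. A minimal Python sketch of the expected behaviour; the dict keys mirror nova's quota_usages columns, but the function itself is illustrative, not Horizon's actual code:

```python
# Illustrative only: fold 'reserved' into the usage shown on the
# overview page. Keys mirror nova's quota_usages columns (in_use,
# reserved); 'limit' stands in for the project quota.

def displayed_usage(row, limit):
    """Return (used, limit) where 'used' also counts reserved quota."""
    return row["in_use"] + row["reserved"], limit

# The 'cores' row from this report: 18 in use, 2 reserved, quota 20.
used, limit = displayed_usage({"in_use": 18, "reserved": 2}, 20)
print("Used %d of %d" % (used, limit))  # Used 20 of 20
```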

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561310

Title:
  dashboard displays wrong quotas

Status in OpenStack Compute (nova):
  New

Bug description:
  1. The user tries to boot a new VM by clicking the "+ Launch Instance" button 
in the dashboard (Project -> Compute -> Instances).
  2. The "+ Launch Instance" button is disabled and shows "quota exceeded".
  3. The user then goes to Project -> Compute -> Overview, but finds that 
Instances, VCPUs, and RAM have not exceeded their quotas. The overview shows:
  Instances
  Used 8 of 10
  VCPUs
  Used 18 of 20
  RAM
  Used 512MB of 50.0GB
  This confuses the user: is anything displayed incorrectly?

  Root cause:
  I think one root cause is that the current project has other work in 
progress, such as a VM rebuild, which reserves some quota and sets the 
'reserved' field (nova quota_usages table) to a non-zero value.
  Running select * from nova.quota_usages where project_id = 'current project 
id' shows that in the 'cores' row, 'reserved' = 2. Because of this, VCPUs are 
actually at 20 of 20.
  Expected result:
  The 'reserved' quota should be counted as used quota, so the dashboard 
(Project -> Compute -> Overview) should show:
  VCPUs
  Used 20 of 20.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1561310/+subscriptions



[Yahoo-eng-team] [Bug 1561022] Re: Server group policies are not honored during live migration

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/296596
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=00efcf3a4171d270fc3a7b9b4a9230332aea81e6
Submitter: Jenkins
Branch: master

commit 00efcf3a4171d270fc3a7b9b4a9230332aea81e6
Author: Pawel Koniszewski 
Date:   Wed Mar 23 18:44:10 2016 +0100

Try to repopulate instance_group if it is None

There are cases where we need to create a new RequestSpec object and
pass None for the instance_group argument. This can cause server group
policies to be omitted during the filtering phase. Therefore, if
instance_group is None, we should try to repopulate instance_group
based on the data in filter_properties.

Change-Id: Id7e669e0a6db1ff1052c42006f1a141bdb8cdf29
Closes-Bug: #1561022


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561022

Title:
  Server group policies are not honored during live migration

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Commit
  
https://github.com/openstack/nova/commit/111a852e79f0d9e54228d8e2724dc4183f737397
  introduced a regression that causes affinity/anti-affinity policies to
  be omitted while live migrating an instance.

  This is because we don't pass instance_group here:

  
https://github.com/openstack/nova/blob/111a852e79f0d9e54228d8e2724dc4183f737397/nova/conductor/tasks/live_migrate.py#L183

  However, filters are expecting this information:

  
https://github.com/openstack/nova/blob/111a852e79f0d9e54228d8e2724dc4183f737397/nova/scheduler/filters/affinity_filter.py#L86

  Basically, we should pass the instance group so that the filters can
  read this information later.
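The idea can be sketched with plain dicts standing in for nova's RequestSpec and InstanceGroup objects; the group_* keys match the scheduler hints nova stores in filter_properties, but the function name and shapes here are hypothetical:

```python
# Hedged sketch: if a request spec was built without its server group,
# repopulate it from the group_* scheduler hints that nova keeps in
# filter_properties. Plain dicts stand in for nova's real objects.

def repopulate_instance_group(request_spec, filter_properties):
    if request_spec.get("instance_group") is None and \
            filter_properties.get("group_updated"):
        request_spec["instance_group"] = {
            "hosts": filter_properties.get("group_hosts", []),
            "policies": filter_properties.get("group_policies", []),
        }
    return request_spec

spec = repopulate_instance_group(
    {"instance_group": None},
    {"group_updated": True,
     "group_hosts": ["compute1"],
     "group_policies": ["anti-affinity"]})
print(spec["instance_group"]["policies"])  # ['anti-affinity']
```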

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1561022/+subscriptions



[Yahoo-eng-team] [Bug 1560449] Re: Link of Django logging directive in deployment.rst is out of date

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/295730
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f645d8521816502b5e3a9658a3c1221b65ba2db5
Submitter: Jenkins
Branch: master

commit f645d8521816502b5e3a9658a3c1221b65ba2db5
Author: Bo Wang 
Date:   Tue Mar 22 19:21:39 2016 +0800

Fix the link of Django logging directive

Current link is out of date. It's 404 Page not found now.
Change it to an address regardless of specific django version.

Remove the warning content since the bug has been fixed.

Change-Id: Idd0060313d538e25f9b89d7197adca739d2f6782
Closes-Bug: #1560449


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1560449

Title:
  Link of Django logging directive in deployment.rst is out of date

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The current link is out of date; it now returns a 404 Page Not Found.

  Change it to a link that does not depend on a specific Django version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1560449/+subscriptions



[Yahoo-eng-team] [Bug 1560469] Re: Migration in-use volume from lvm to ceph failed

2016-03-23 Thread Xuepeng Ji
** Project changed: nova-project => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560469

Title:
  Migration in-use volume from lvm to ceph failed

Status in OpenStack Compute (nova):
  New

Bug description:
  Migration an in-use volume from lvm to ceph failed:

  2016-03-22 17:28:38.610 28985 ERROR oslo_messaging.rpc.dispatcher 
[req-f35c694e-3d7e-42c2-9135-691b36e84eaa 5650304261d74dbe8a4f7848661f95a6 
55bbe36a87af48c0af04c9204a49a854 - - -] Exception during message handling: Swap 
only supports host devices
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 8840, in 
swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher payload)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 379, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 350, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 407, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 395, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6010, in 
swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5969, in 
_swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
self.volume_api.unreserve_volume(context, new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 

[Yahoo-eng-team] [Bug 1560469] [NEW] Migration in-use volume from lvm to ceph failed

2016-03-23 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Migration an in-use volume from lvm to ceph failed:

2016-03-22 17:28:38.610 28985 ERROR oslo_messaging.rpc.dispatcher 
[req-f35c694e-3d7e-42c2-9135-691b36e84eaa 5650304261d74dbe8a4f7848661f95a6 
55bbe36a87af48c0af04c9204a49a854 - - -] Exception during message handling: Swap 
only supports host devices
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 8840, in 
swap_volume
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher payload)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 379, in 
decorated_function
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 350, in 
decorated_function
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 407, in 
decorated_function
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 395, in 
decorated_function
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6010, in 
swap_volume
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5969, in 
_swap_volume
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
self.volume_api.unreserve_volume(context, new_volume_id)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5950, in 
_swap_volume
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher resize_to)
2016-03-22 17:28:38.610 

[Yahoo-eng-team] [Bug 1561252] Re: Removing 'force_gateway_on_subnet' option

2016-03-23 Thread Henry Gessau
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561252

Title:
  Removing 'force_gateway_on_subnet' option

Status in neutron:
  Invalid
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/295843
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 7215168b119c11a973fbdff56c007f6eb157d257
  Author: Sreekumar S 
  Date:   Tue Mar 22 19:17:54 2016 +0530

  Removing 'force_gateway_on_subnet' option
  
  With this fix 'force_gateway_on_subnet' configuration
  option is removed, and gateway outside the subnet is
  always allowed. Gateway cannot be forced onto the
  subnet range.
  
  DocImpact: All references of 'force_gateway_on_subnet'
  configuration option and its description should be
  removed from the docs.
  
  Change-Id: I1a676f35828e46fcedf339235ef7be388341f91e
  Closes-Bug: #1548193

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561252/+subscriptions



[Yahoo-eng-team] [Bug 1561233] Re: "Failed to format sample" warning in neutron.conf.sample file

2016-03-23 Thread Henry Gessau
I think this is a problem in oslo.config

** Also affects: oslo.config
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561233

Title:
  "Failed to format sample" warning in neutron.conf.sample file

Status in neutron:
  New
Status in oslo.config:
  New

Bug description:
  After generating the neutron configuration files, the following
  warnings appear in the [nova] section of the neutron.conf.sample file:

  #
  # From nova.auth
  #

  # Warning: Failed to format sample for auth_url
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for default_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for default_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for password
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for tenant_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for tenant_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for trust_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for username
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561233/+subscriptions



[Yahoo-eng-team] [Bug 1560469] Re: Migration in-use volume from lvm to ceph failed

2016-03-23 Thread weiweigu@zte
** Project changed: nova => nova-project

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560469

Title:
  Migration in-use volume from lvm to ceph failed

Status in Nova:
  New

Bug description:
  Migration an in-use volume from lvm to ceph failed:

  2016-03-22 17:28:38.610 28985 ERROR oslo_messaging.rpc.dispatcher 
[req-f35c694e-3d7e-42c2-9135-691b36e84eaa 5650304261d74dbe8a4f7848661f95a6 
55bbe36a87af48c0af04c9204a49a854 - - -] Exception during message handling: Swap 
only supports host devices
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 8840, in 
swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher payload)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 379, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 350, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 407, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 395, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6010, in 
swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5969, in 
_swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
self.volume_api.unreserve_volume(context, new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE 

[Yahoo-eng-team] [Bug 1560469] Re: Migration in-use volume from lvm to ceph failed

2016-03-23 Thread Xuepeng Ji
** Changed in: cinder
   Status: Invalid => New

** Project changed: cinder => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560469

Title:
  Migration in-use volume from lvm to ceph failed

Status in Nova:
  New

Bug description:
  Migrating an in-use volume from LVM to Ceph failed:

  2016-03-22 17:28:38.610 28985 ERROR oslo_messaging.rpc.dispatcher 
[req-f35c694e-3d7e-42c2-9135-691b36e84eaa 5650304261d74dbe8a4f7848661f95a6 
55bbe36a87af48c0af04c9204a49a854 - - -] Exception during message handling: Swap 
only supports host devices
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 8840, in 
swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher payload)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 379, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 350, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 407, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 395, in 
decorated_function
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6010, in 
swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5969, in 
_swap_volume
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
self.volume_api.unreserve_volume(context, new_volume_id)
  2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in 

[Yahoo-eng-team] [Bug 1560469] [NEW] Migration in-use volume from lvm to ceph failed

2016-03-23 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Migrating an in-use volume from LVM to Ceph failed:

2016-03-22 17:28:38.610 28985 ERROR oslo_messaging.rpc.dispatcher 
[req-f35c694e-3d7e-42c2-9135-691b36e84eaa 5650304261d74dbe8a4f7848661f95a6 
55bbe36a87af48c0af04c9204a49a854 - - -] Exception during message handling: Swap 
only supports host devices
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 8840, in 
swap_volume
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher payload)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 379, in 
decorated_function
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 350, in 
decorated_function
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 407, in 
decorated_function
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 395, in 
decorated_function
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6010, in 
swap_volume
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
new_volume_id)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5969, in 
_swap_volume
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
self.volume_api.unreserve_volume(context, new_volume_id)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5950, in 
_swap_volume
2016-03-22 17:28:38.610 28985 TRACE oslo_messaging.rpc.dispatcher resize_to)
2016-03-22 17:28:38.610 
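
The "Swap only supports host devices" message comes from a guard in Nova's libvirt driver: the blockRebase-based volume swap requires the replacement volume to be exposed as a host block device, which an rbd-backed (Ceph) volume is not. A minimal sketch of that guard (illustrative only; the name and signature are simplified from the real driver):

```python
def check_swap_supported(new_disk_source_type):
    # Illustrative guard: libvirt's blockRebase-based swap only works when
    # the replacement volume is exposed as a host block device ('block').
    # Network-backed disks such as Ceph rbd report a different source type,
    # so migrating an attached volume from LVM to Ceph trips this check.
    if new_disk_source_type != 'block':
        raise ValueError('Swap only supports host devices')

check_swap_supported('block')      # LVM volume: passes
# check_swap_supported('network')  # Ceph rbd volume: raises ValueError
```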

[Yahoo-eng-team] [Bug 1561266] [NEW] cirros img cannot be launched

2016-03-23 Thread hanjiabao
Public bug reported:

First, create an image from cirros-0.3.1-x86_64-disk.img.
Second, launch an instance with the transferred cirros.img.
The instance cannot be launched; the following error occurs:
Error: Unable to create the server.
RESP: [500] {'Content-Length': '194', 'X-Compute-Request-Id': 
'req-1997ec57-9e4c-43e1-8c41-625e5062657b', 'Vary': 
'X-OpenStack-Nova-API-Version', 'Connection': 'keep-alive', 
'X-Openstack-Nova-Api-Version': '2.1', 'Date': 'Thu, 24 Mar 2016 00:35:25 GMT', 
'Content-Type': 'application/json; charset=UTF-8'}
RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.\n", "code": 500}}
"POST /api/nova/servers/ HTTP/1.1" 500 216

However, an instance can be launched from cirros-0.3.2-x86_64-disk.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

- first,create image with  cirros-0.3.1-x86_64-disk.img 
+ first,create image with  cirros-0.3.1-x86_64-disk.img
  second, launch instance with transfered ciros.img
  can't launch an instance,occur an error:
  Error: Unable to create the server.
  RESP: [500] {'Content-Length': '194', 'X-Compute-Request-Id': 
'req-1997ec57-9e4c-43e1-8c41-625e5062657b', 'Vary': 
'X-OpenStack-Nova-API-Version', 'Connection': 'keep-alive', 
'X-Openstack-Nova-Api-Version': '2.1', 'Date': 'Thu, 24 Mar 2016 00:35:25 GMT', 
'Content-Type': 'application/json; charset=UTF-8'}
  RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.\n", "code": 500}}
  "POST /api/nova/servers/ HTTP/1.1" 500 216
+ 
+ however that cirros-0.3.2-x86_64-disk could be with launched

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561266

Title:
  cirros img cannot be launched

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  First, create an image from cirros-0.3.1-x86_64-disk.img.
  Second, launch an instance with the transferred cirros.img.
  The instance cannot be launched; the following error occurs:
  Error: Unable to create the server.
  RESP: [500] {'Content-Length': '194', 'X-Compute-Request-Id': 
'req-1997ec57-9e4c-43e1-8c41-625e5062657b', 'Vary': 
'X-OpenStack-Nova-API-Version', 'Connection': 'keep-alive', 
'X-Openstack-Nova-Api-Version': '2.1', 'Date': 'Thu, 24 Mar 2016 00:35:25 GMT', 
'Content-Type': 'application/json; charset=UTF-8'}
  RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.\n", "code": 500}}
  "POST /api/nova/servers/ HTTP/1.1" 500 216

  However, an instance can be launched from cirros-0.3.2-x86_64-disk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561252] [NEW] Removing 'force_gateway_on_subnet' option

2016-03-23 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/295843
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 7215168b119c11a973fbdff56c007f6eb157d257
Author: Sreekumar S 
Date:   Tue Mar 22 19:17:54 2016 +0530

Removing 'force_gateway_on_subnet' option

With this fix 'force_gateway_on_subnet' configuration
option is removed, and gateway outside the subnet is
always allowed. Gateway cannot be forced onto to the
subnet range.

DocImpact: All references of 'force_gateway_on_subnet'
configuration option and its description should be
removed from the docs.

Change-Id: I1a676f35828e46fcedf339235ef7be388341f91e
Closes-Bug: #1548193
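
The behavior change is easy to state as a check: with the option removed, Neutron no longer requires the gateway IP to fall inside the subnet's CIDR. A hedged sketch of the now-unenforced validation, using the stdlib ipaddress module (helper name is illustrative, not Neutron's actual code):

```python
import ipaddress

def gateway_on_subnet(gateway_ip, subnet_cidr):
    # The condition force_gateway_on_subnet used to enforce: the gateway
    # address must lie inside the subnet's CIDR. After this change a
    # False result no longer blocks subnet creation or update.
    return ipaddress.ip_address(gateway_ip) in ipaddress.ip_network(subnet_cidr)

# gateway_on_subnet('10.0.0.1', '10.0.0.0/24') is True;
# gateway_on_subnet('10.0.1.1', '10.0.0.0/24') is False, and such a
# gateway is now accepted anyway.
```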

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561252

Title:
  Removing 'force_gateway_on_subnet' option

Status in neutron:
  New

Bug description:
  https://review.openstack.org/295843
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 7215168b119c11a973fbdff56c007f6eb157d257
  Author: Sreekumar S 
  Date:   Tue Mar 22 19:17:54 2016 +0530

  Removing 'force_gateway_on_subnet' option
  
  With this fix 'force_gateway_on_subnet' configuration
  option is removed, and gateway outside the subnet is
  always allowed. Gateway cannot be forced onto to the
  subnet range.
  
  DocImpact: All references of 'force_gateway_on_subnet'
  configuration option and its description should be
  removed from the docs.
  
  Change-Id: I1a676f35828e46fcedf339235ef7be388341f91e
  Closes-Bug: #1548193

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561252/+subscriptions



[Yahoo-eng-team] [Bug 1548193] Re: Remove 'force_gateway_on_subnet' option

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/295843
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7215168b119c11a973fbdff56c007f6eb157d257
Submitter: Jenkins
Branch:master

commit 7215168b119c11a973fbdff56c007f6eb157d257
Author: Sreekumar S 
Date:   Tue Mar 22 19:17:54 2016 +0530

Removing 'force_gateway_on_subnet' option

With this fix 'force_gateway_on_subnet' configuration
option is removed, and gateway outside the subnet is
always allowed. Gateway cannot be forced onto to the
subnet range.

DocImpact: All references of 'force_gateway_on_subnet'
configuration option and its description should be
removed from the docs.

Change-Id: I1a676f35828e46fcedf339235ef7be388341f91e
Closes-Bug: #1548193


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548193

Title:
  Remove 'force_gateway_on_subnet' option

Status in neutron:
  Fix Released

Bug description:
  The 'force_gateway_on_subnet' option is deprecated and should be removed in 
the 'Newton' cycle.
  This is raised for tracking the removal of the option in Newton.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548193/+subscriptions



[Yahoo-eng-team] [Bug 1561248] [NEW] Fullstack linux bridge tests sometimes fail because an agent wouldn't come up as it cannot connect to RabbitMQ

2016-03-23 Thread Assaf Muller
Public bug reported:

Here are a couple of examples:

http://logs.openstack.org/78/292178/10/check/gate-neutron-dsvm-
fullstack/b54b61b/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VXLAN_
/neutron-linuxbridge-agent--2016-03-23--13-54-07-458571.log.txt.gz

http://logs.openstack.org/07/296507/2/check/gate-neutron-dsvm-
fullstack/1a24251/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VXLAN_
/neutron-linuxbridge-agent--2016-03-23--21-05-41-093902.log.txt.gz

Note that in both cases the other two agents in the same test were able
to connect successfully. The commonality between agents that cannot
connect to rabbit is that they use a local_ip that is the *broadcast
address* of the subnet they belong to.
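
The suspect condition can be checked directly with the stdlib ipaddress module; a small sketch (the CIDR and address values below are illustrative, not taken from the failing jobs):

```python
import ipaddress

def is_subnet_broadcast(ip, cidr):
    # True when `ip` is the broadcast address of `cidr` -- the local_ip
    # pattern shared by the agents that failed to reach RabbitMQ.
    net = ipaddress.ip_network(cidr, strict=False)
    return ipaddress.ip_address(ip) == net.broadcast_address

# is_subnet_broadcast('10.0.0.255', '10.0.0.0/24') -> True
# is_subnet_broadcast('10.0.0.5', '10.0.0.0/24')   -> False
```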

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561248

Title:
  Fullstack linux bridge tests sometimes fail because an agent wouldn't
  come up as it cannot connect to RabbitMQ

Status in neutron:
  New

Bug description:
  Here are a couple of examples:

  http://logs.openstack.org/78/292178/10/check/gate-neutron-dsvm-
  
fullstack/b54b61b/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VXLAN_
  /neutron-linuxbridge-agent--2016-03-23--13-54-07-458571.log.txt.gz

  http://logs.openstack.org/07/296507/2/check/gate-neutron-dsvm-
  
fullstack/1a24251/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VXLAN_
  /neutron-linuxbridge-agent--2016-03-23--21-05-41-093902.log.txt.gz

  Note that in both cases the other two agents in the same test were
  able to connect successfully. The commonality between agents that
  cannot connect to rabbit is that they use a local_ip that is the
  *broadcast address* of the subnet they belong to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561248/+subscriptions



[Yahoo-eng-team] [Bug 1561246] [NEW] user cannot choose disk bus when attaching volume to instance

2016-03-23 Thread Favyen Bastani
Public bug reported:

After https://review.openstack.org/#/c/189632/, since the device name
specified by the user is ignored when attaching volumes to a VM, the
user now has no way to configure the disk bus that the volume is
attached to.

Before the change, if the user specified a device name like "/dev/hda", the
volume would be attached with the ide driver, or with the virtio driver for
"/dev/vda" (on KVM).
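
The pre-change mapping described above can be sketched as a small helper (a hypothetical function, not the actual Nova code path, which lives in the libvirt block-device handling):

```python
def disk_bus_from_device_name(device_name):
    # Infer the disk bus from the device name prefix, mirroring the
    # behavior the bug report describes: /dev/hdX implied ide and
    # /dev/vdX implied virtio; anything else yields no preference.
    if not device_name or not device_name.startswith('/dev/'):
        return None
    return {'v': 'virtio', 'h': 'ide'}.get(device_name[5:6])
```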

Now, the device name is ignored. Instead, the disk bus is determined
first based on the image metadata of the VM instance, and then based on
the device type (e.g. "virtio" for a KVM disk). Note that the former is
based on the VM image metadata, not the volume image metadata. So, if
the VM is booted from an image that uses ide disk bus, then the volume
will also be attached with the ide disk bus instead of virtio. Basing the 
decision on the volume image metadata or volume metadata instead would solve 
the bug.

We added a temporary hack to mitigate the issue, so that the device name
is not completely ignored:

--- novaa/virt/libvirt/driver.py2016-03-23 18:40:52.0 -0400
+++ novab/virt/libvirt/driver.py2016-03-23 18:41:40.800635279 -0400
@@ -7278,7 +7278,14 @@

 # NOTE(ndipanov): get_info_from_bdm will generate the new device name
 # only when it's actually not set on the bd object
-block_device_obj.device_name = None
+if block_device_obj.device_name is not None:
+if len(block_device_obj.device_name) >= 6 and 
block_device_obj.device_name[0:5] == '/dev/' and 
block_device_obj.get('disk_bus') is None:
+if block_device_obj.device_name[5] == 'v':
+block_device_obj.disk_bus = 'virtio'
+elif block_device_obj.device_name[5] == 'h':
+block_device_obj.disk_bus = 'ide'
+block_device_obj.device_name = None
+
 disk_info = blockinfo.get_info_from_bdm(
 instance, CONF.libvirt.virt_type, image_meta,
 block_device_obj, mapping=instance_info['mapping'])

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  After https://review.openstack.org/#/c/189632/, since the device name
  specified by the user is ignored when attaching volumes to a VM, the
  user now has no way to configure the disk bus that the volume is
  attached to.
  
  Before the change, if user specifies device name like "/dev/hda" then it
  will be attached with ide driver, or "/dev/vda" with virtio driver (for
  KVM).
  
  Now, the device name is ignored. Instead, the disk bus is determined
  first based on the image metadata of the VM instance, and then based on
  the device type (e.g. "virtio" for a KVM disk). Note that the former is
  based on the VM image metadata, not the volume image metadata. So, if
  the VM is booted from an image that uses ide disk bus, then the volume
  will also be attached with ide disk bus instead of virtio. If it is
  based on the volume image metadata or volume metadata instead, then that
  would solve the bug.
  
  We added a temporary hack to mitigate the issue, so that the device name
  is not completely ignored:
  
  --- novaa/virt/libvirt/driver.py  2016-03-23 18:40:52.0 -0400
  +++ novab/virt/libvirt/driver.py  2016-03-23 18:41:40.800635279 -0400
  @@ -7278,7 +7278,14 @@
-  
-  # NOTE(ndipanov): get_info_from_bdm will generate the new device name
-  # only when it's actually not set on the bd object
+ 
+  # NOTE(ndipanov): get_info_from_bdm will generate the new device name
+  # only when it's actually not set on the bd object
  -block_device_obj.device_name = None
- +if block_device_obj.device_name is not None:
+ +if block_device_obj.get('device_name') is not None:
  +if len(block_device_obj.device_name) >= 6 and 
block_device_obj.device_name[0:5] == '/dev/' and 
block_device_obj.get('disk_bus') is None:
  +if block_device_obj.device_name[5] == 'v':
  +block_device_obj.disk_bus = 'virtio'
  +elif block_device_obj.device_name[5] == 'h':
  +block_device_obj.disk_bus = 'ide'
  +block_device_obj.device_name = None
  +
-  disk_info = blockinfo.get_info_from_bdm(
-  instance, CONF.libvirt.virt_type, image_meta,
-  block_device_obj, mapping=instance_info['mapping'])
+  disk_info = blockinfo.get_info_from_bdm(
+  instance, CONF.libvirt.virt_type, image_meta,
+  block_device_obj, mapping=instance_info['mapping'])

** Description changed:

  After https://review.openstack.org/#/c/189632/, since the device name
  specified by the user is ignored when attaching volumes to a VM, the
  user now has no way to configure the disk bus that the volume is
  attached to.
  
  Before the change, if user specifies device name like "/dev/hda" then it
  will be 

[Yahoo-eng-team] [Bug 1549311] Re: Unexpected SNAT behavior between instances with DVR+floating ip

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/285982
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1cea77b0aafbada6cad89a6fe0f5450004aef4e1
Submitter: Jenkins
Branch:master

commit 1cea77b0aafbada6cad89a6fe0f5450004aef4e1
Author: Hong Hui Xiao 
Date:   Mon Feb 29 11:07:15 2016 +

DVR: Fix issue of SNAT rule for DVR with floating ip

With current code, there are 2 issues.

1) The prevent snat rule that is added for floating ip will be
cleaned, when restarting the l3 agent. Without this rule, the fixed
ip will be SNATed to floating ip, even if the network request is to
an internal IP.

2) The prevent snat rule will not be cleaned, even if the external
device(rfp device) is deleted. So, when the floating ips are removed
from DVR router, there are still dump rules in iptables. Restarting
the l3 agent can clean these dump rules.

The fix in this patch will handle DVR floating ip nat rules at the
same step to handle nat rules for other routers(legacy router, dvr
edge router)

After the change in [1], the fip nat rules for external port have
been extracted together into a method. Add all rules in that method
in the same step can fix the issue of ping floating ip, but reply
with fixed ip.

[1] https://review.openstack.org/#/c/286392/

Change-Id: I018232c03f5df2237a11b48ac877793d1cb5c1bf
Closes-Bug: #1549311
Related-Bug: #1462154


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549311

Title:
  Unexpected SNAT behavior between instances with DVR+floating ip

Status in neutron:
  Fix Released

Bug description:
  This might be related with [1]. The fix in [1] should be applied to
  dvr_local_router.

  = Scenario =

  • Latest code
  • Single Neutron DVR router, multiple hosts
  • two instances in two tenant networks attached to DVR router, the two 
instances are in two different hosts
  • Instance A has a floatingip

  INSTANCE A: TestNet1=100.0.0.4, 172.168.1.53
  INSTANCE B: TestNet2=100.0.1.4

  Pinging from INSTANCE A to INSTANCE B:
  tcpdump from Instance B
  [root@dvr-compute2 fedora]# tcpdump -ni qr-ca45d1e3-5d icmp
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on qr-ca45d1e3-5d, link-type EN10MB (Ethernet), capture size 262144 
bytes
  14:31:54.054629 IP 100.0.1.4 > 172.168.1.53: ICMP echo reply, id 18433, seq 
0, length 64

  The problem here is that it should be an internal communication, but
  the reply go through external network.

  [1] https://bugs.launchpad.net/neutron/+bug/1505781
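
The intended policy can be sketched as a predicate (illustrative only, not Neutron's iptables rules; the /24 prefixes for TestNet1 and TestNet2 are assumed):

```python
import ipaddress

def should_snat_to_floating_ip(dst_ip, internal_cidrs):
    # Replies to destinations on the router's attached tenant networks
    # (east-west traffic) must keep the fixed IP; only other destinations
    # (north-south traffic) should be SNATed to the floating IP.
    dst = ipaddress.ip_address(dst_ip)
    return not any(dst in ipaddress.ip_network(c) for c in internal_cidrs)

# In the scenario above, Instance B replying to 100.0.0.4 should not be
# SNATed: should_snat_to_floating_ip('100.0.0.4',
#     ['100.0.0.0/24', '100.0.1.0/24']) -> False
```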

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1549311/+subscriptions



[Yahoo-eng-team] [Bug 1561243] [NEW] Integration bridge parameter leaking into Linux bridge agent

2016-03-23 Thread Matt Kassawara
Public bug reported:

The following warning appears when starting the Linux bridge agent:

2016-03-23 23:09:10.987 3322 WARNING neutron.agent.securitygroups_rpc
[req-4760542b-15dc-4f1f-ac26-b211d8a78e91 - - - - -] Firewall driver
neutron.agent.linux.iptables_firewall.IptablesFirewallDriver doesn't
accept integration_bridge parameter in __init__(): __init__() got an
unexpected keyword argument 'integration_bridge'

The integration bridge parameter only applies to Open vSwitch.
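
The warning arises because the loader passes integration_bridge to every firewall driver unconditionally; a defensive loader can filter keyword arguments against the target's signature instead. A sketch (hypothetical loader and simplified driver class, not Neutron's actual code):

```python
import inspect

class IptablesFirewallDriver:
    # Simplified stand-in: like the Linux bridge firewall driver, its
    # __init__ does not accept an integration_bridge parameter.
    def __init__(self):
        self.rules = []

def load_driver(cls, **kwargs):
    # Keep only the keyword arguments the driver's __init__ actually
    # declares, and report the rest instead of raising TypeError.
    accepted = inspect.signature(cls.__init__).parameters
    usable = {k: v for k, v in kwargs.items() if k in accepted}
    dropped = sorted(set(kwargs) - set(usable))
    return cls(**usable), dropped

driver, dropped = load_driver(IptablesFirewallDriver,
                              integration_bridge='br-int')
# dropped == ['integration_bridge']
```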

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561243

Title:
  Integration bridge parameter leaking into Linux bridge agent

Status in neutron:
  New

Bug description:
  The following warning appears when starting the Linux bridge agent:

  2016-03-23 23:09:10.987 3322 WARNING neutron.agent.securitygroups_rpc
  [req-4760542b-15dc-4f1f-ac26-b211d8a78e91 - - - - -] Firewall driver
  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver doesn't
  accept integration_bridge parameter in __init__(): __init__() got an
  unexpected keyword argument 'integration_bridge'

  The integration bridge parameter only applies to Open vSwitch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561243/+subscriptions



[Yahoo-eng-team] [Bug 1561233] [NEW] "Failed to format sample" warning in neutron.conf.sample file

2016-03-23 Thread Matt Kassawara
Public bug reported:

After generating the neutron configuration files, the following warnings
appear in the [nova] section of the neutron.conf.sample file:

#
# From nova.auth
#

# Warning: Failed to format sample for auth_url
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for default_domain_id
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for default_domain_name
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for domain_id
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for domain_name
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for password
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for project_domain_id
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for project_domain_name
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for project_id
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for project_name
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for tenant_id
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for tenant_name
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for trust_id
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for user_domain_id
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for user_domain_name
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for user_id
# isinstance() arg 2 must be a class, type, or tuple of classes and types

# Warning: Failed to format sample for username
# isinstance() arg 2 must be a class, type, or tuple of classes and types
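The second comment line of each warning is the text of the TypeError the sample generator caught while formatting the option: it is what Python raises when something that is not a class or type is passed as isinstance()'s second argument. The exact wording varies by Python version; the sample shows the Python 2 message.

```python
# Reproduce the kind of TypeError quoted in the warning comments.
try:
    isinstance("auth_url", None)   # second argument is not a class/type
    message = None
except TypeError as exc:
    message = str(exc)
```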

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561233

Title:
  "Failed to format sample" warning in neutron.conf.sample file

Status in neutron:
  New

Bug description:
  After generating the neutron configuration files, the following
  warnings appear in the [nova] section of the neutron.conf.sample file:

  #
  # From nova.auth
  #

  # Warning: Failed to format sample for auth_url
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for default_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for default_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for password
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for project_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for tenant_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for tenant_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for trust_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_domain_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_domain_name
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for user_id
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  # Warning: Failed to format sample for 

[Yahoo-eng-team] [Bug 1551907] Re: Add API extension for reporting IP availability usage statistics

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280953
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=d5054b4cc1ad892feb63751e3881ce746c2bd975
Submitter: Jenkins
Branch: master

commit d5054b4cc1ad892feb63751e3881ce746c2bd975
Author: Ankur Gupta 
Date:   Tue Feb 16 16:05:12 2016 -0600

Add Network IP Availability API Extension

There is a new feature added to neutron for Network IP Availability [1].
The CLI for this is also merged in neutronclient [2]. This patch adds the
new APIs to that extension's documentation in api-site.

[1]. https://review.openstack.org/#/c/212955/
[2]. https://review.openstack.org/#/c/269926/

Change-Id: I63dde5cfe7699ec25caed8eb5bd8d19be7720117
Co-Authored-By: Manjeet Singh Bhatia 
Closes-Bug: #1551907


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551907

Title:
  Add API extension for reporting IP availability usage statistics

Status in OpenStack Dashboard (Horizon):
  New
Status in neutron:
  Fix Released
Status in openstack-api-site:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/212955
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2f741ca5f9545c388270ddab774e9e030b006d8a
  Author: Mike Dorman 
  Date:   Thu Aug 13 21:24:58 2015 -0600

  Add API extension for reporting IP availability usage statistics
  
  Implements an API extension for reporting availability of IP
  addresses on Neutron networks/subnets based on the blueprint
  proposed at https://review.openstack.org/#/c/180803/
  
  This provides an easy way for operators to count the number of
  used and total IP addresses on any or all networks and/or
  subnets.
  
  Co-Authored-By: David Bingham 
  Co-Authored-By: Craig Jellick 
  
  APIImpact
  DocImpact: As a new API, will need all new docs. See devref for details.
  
  Implements: blueprint network-ip-usage-api
  Closes-Bug: 1457986
  Change-Id: I81406054d46b2c0e0ffcd56e898e329f943ba46f

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551907/+subscriptions



[Yahoo-eng-team] [Bug 1561230] [NEW] ng launch instance modal second time is weird

2016-03-23 Thread Cindy Lu
Public bug reported:

For Mitaka, we've replaced the original Django Launch Instance with the
new ng Launch Instance.

I successfully filled out the required steps and created an instance.
However, if I don't refresh the page manually and click "Launch
Instance" again, it shows a strange/incomplete modal.

It shows: Details, Source, Flavor, Security Groups, Metadata.
It *should* show: Details, Source, Flavor, Network Ports, Key Pair, 
Configuration, Metadata

See attached image.

Issues:
- It doesn't show all the workflow steps.
- Slide-out help panel doesn't work; it only toggles the button.
- Need to click on cancel button TWICE to get it to close. First click shows a 
faint modal overlay sliding up.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561230

Title:
  ng launch instance modal second time is weird

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  For Mitaka, we've replaced the original Django Launch Instance with
  the new ng Launch Instance.

  I successfully filled out the required steps and created an instance.
  However, if I don't refresh the page manually and click "Launch
  Instance" again, it shows a strange/incomplete modal.

  It shows: Details, Source, Flavor, Security Groups, Metadata.
  It *should* show: Details, Source, Flavor, Network Ports, Key Pair, 
Configuration, Metadata

  See attached image.

  Issues:
  - It doesn't show all the workflow steps.
  - Slide-out help panel doesn't work; it only toggles the button.
  - Need to click on cancel button TWICE to get it to close. First click shows 
a faint modal overlay sliding up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561230/+subscriptions



[Yahoo-eng-team] [Bug 1561118] Re: Agents: remove deprecated methods

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/296354
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=db4a981ed6f7c1e7aaf539a289c1daab767a3413
Submitter: Jenkins
Branch: master

commit db4a981ed6f7c1e7aaf539a289c1daab767a3413
Author: Gary Kotton 
Date:   Wed Mar 23 02:57:28 2016 -0700

AGENTS: remove deprecated methods

The following method has been removed from the agent code:
  - pullup_route (commit 23b907bc6e87be153c13b1bf3e069467cdda0d27)

TrivialFix

Closes-bug: #1561118

Change-Id: Ia10af5b9d270ed73e28ade152821d82a0d285c94


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561118

Title:
  Agents: remove deprecated methods

Status in neutron:
  Fix Released

Bug description:
  Tracker for removing the deprecated method in the agents directory. The
commit where this was added is:
   - pullup_route (commit 23b907bc6e87be153c13b1bf3e069467cdda0d27)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561118/+subscriptions



[Yahoo-eng-team] [Bug 1561208] [NEW] Identity users and project panels have unneeded "Domain Name" column

2016-03-23 Thread Doug Fish
Public bug reported:

Identity->Users and Identity->Projects both have Domain Name columns
(filled with the default domain name) even though
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False

I think this column should be omitted unless multi-domain support is
being used.

same on create/edit user and create/edit project

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  Identity->Users and Identity->Projects both have Domain Name columns
  (filled with the default domain name) even though
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False
  
  I think this column should be omitted unless multi-domain support is
  being used.
+ 
+ same on create/edit user and create/edit project

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561208

Title:
  Identity users and project panels have unneeded "Domain Name" column

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Identity->Users and Identity->Projects both have Domain Name columns
  (filled with the default domain name) even though
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False

  I think this column should be omitted unless multi-domain support is
  being used.

  same on create/edit user and create/edit project

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561208/+subscriptions



[Yahoo-eng-team] [Bug 1561200] [NEW] created_at and updated_at times don't include timezone

2016-03-23 Thread Steve McLellan
Public bug reported:

created_at and updated_at were recently added to the API calls and
notifications for many neutron resources (networks, subnets, ports,
possibly more), which is awesome! I've noticed that the times don't
include a timezone (compare to nova servers and glance images, for
instance).

Even if there's an assumption a user can make, this can create problems
with some display tools (I noticed this because a JavaScript date
formatting filter does local timezone conversions when a timezone is
present, which meant times for resources created seconds apart looked as
though they were several hours adrift).

Tested on neutron mitaka RC1.
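The difference is easy to see with Python's own datetime: a naive timestamp serializes without an offset, while a timezone-aware one carries it, which is what lets clients convert unambiguously. This is only a sketch of the two formats, not neutron's serialization code.

```python
from datetime import datetime, timezone

naive = datetime(2016, 3, 23, 23, 9, 10)      # no timezone attached
aware = naive.replace(tzinfo=timezone.utc)    # explicit UTC offset

# Without the offset, a client (e.g. a JavaScript date filter) has to
# guess the zone; with it, the instant is unambiguous.
print(naive.isoformat())   # 2016-03-23T23:09:10
print(aware.isoformat())   # 2016-03-23T23:09:10+00:00
```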

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561200

Title:
  created_at and updated_at times don't include timezone

Status in neutron:
  New

Bug description:
  created_at and updated_at were recently added to the API calls and
  notifications for many neutron resources (networks, subnets, ports,
  possibly more), which is awesome! I've noticed that the times don't
  include a timezone (compare to nova servers and glance images, for
  instance).

  Even if there's an assumption a user can make, this can create
  problems with some display tools (I noticed this because a JavaScript
  date formatting filter does local timezone conversions when a timezone
  is present, which meant times for resources created seconds apart
  looked as though they were several hours adrift).

  Tested on neutron mitaka RC1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561200/+subscriptions



[Yahoo-eng-team] [Bug 1561196] [NEW] breadcrumb on subnet page has improper navigation

2016-03-23 Thread Doug Fish
Public bug reported:

On Admin/Networks/[network detail]/[subnet detail] when clicking the
breadcrumb element with the network name, navigation goes to
project/networks. This is unexpected - it should remain on
Admin/Networks.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "subnet-nav.png"
   
https://bugs.launchpad.net/bugs/1561196/+attachment/4608975/+files/subnet-nav.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561196

Title:
  breadcrumb on subnet page has improper navigation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On Admin/Networks/[network detail]/[subnet detail] when clicking the
  breadcrumb element with the network name, navigation goes to
  project/networks. This is unexpected - it should remain on
  Admin/Networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561196/+subscriptions



[Yahoo-eng-team] [Bug 1561121] Re: Keystone unit test failure with oslo.* from master

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291207
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=8556437ee02de028ec5de3b867abaab82533cb91
Submitter: Jenkins
Branch: master

commit 8556437ee02de028ec5de3b867abaab82533cb91
Author: Brant Knudson 
Date:   Thu Mar 10 08:35:13 2016 -0600

Correct test to support changing N release name

oslo.log is going to change to use Newton rather than N so this test
should not make an assumption about the way that
versionutils.deprecated is calling report_deprecated_feature.

Change-Id: I06aa6d085232376811f73597b2d84b5174bc7a8d
Closes-Bug: 1561121


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1561121

Title:
  Keystone unit test failure with oslo.* from master

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  from http://logs.openstack.org/periodic/periodic-keystone-py27-with-
  oslo-master/0665198/console.html#_2016-03-23_06_21_06_074

  2016-03-23 06:21:06.073 | ==
  2016-03-23 06:21:06.073 | Failed 1 tests - output below:
  2016-03-23 06:21:06.073 | ==
  2016-03-23 06:21:06.073 | 
  2016-03-23 06:21:06.073 | 
keystone.tests.unit.common.test_manager.TestCreateLegacyDriver.test_class_is_properly_deprecated
  2016-03-23 06:21:06.073 | 

  2016-03-23 06:21:06.074 | 
  2016-03-23 06:21:06.074 | Captured traceback:
  2016-03-23 06:21:06.074 | ~~~
  2016-03-23 06:21:06.074 | Traceback (most recent call last):
  2016-03-23 06:21:06.074 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  2016-03-23 06:21:06.074 | return func(*args, **keywargs)
  2016-03-23 06:21:06.074 |   File 
"keystone/tests/unit/common/test_manager.py", line 37, in 
test_class_is_properly_deprecated
  2016-03-23 06:21:06.075 | mock_reporter.assert_called_with(mock.ANY, 
mock.ANY, details)
  2016-03-23 06:21:06.075 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/mock/mock.py",
 line 937, in assert_called_with
  2016-03-23 06:21:06.075 | 
six.raise_from(AssertionError(_error_message(cause)), cause)
  2016-03-23 06:21:06.075 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/six.py",
 line 718, in raise_from
  2016-03-23 06:21:06.075 | raise value
  2016-03-23 06:21:06.075 | AssertionError: Expected call: 
report_deprecated_feature(, , {'in_favor_of': 
'keystone.catalog.core.CatalogDriverV8', 'as_of': 'Liberty', 'what': 
'keystone.catalog.core.Driver', 'remove_in': 'N'})
  2016-03-23 06:21:06.076 | Actual call: 
report_deprecated_feature(, u'%(what)s 
is deprecated as of %(as_of)s in favor of %(in_favor_of)s and may be removed in 
%(remove_in)s.', {'in_favor_of': 'keystone.catalog.core.CatalogDriverV8', 
'as_of': 'Liberty', 'what': 'keystone.catalog.core.Driver', 'remove_in': 
'Newton'})
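The shape of the merged fix can be sketched with mock.ANY: matching the release name loosely keeps the assertion valid whether versionutils passes 'N' or 'Newton'. This is a simplified illustration, not the actual keystone test.

```python
from unittest import mock

report = mock.Mock()
# Simulate oslo.log reporting the deprecation with the new release name.
report(mock.sentinel.logger,
       "%(what)s is deprecated as of %(as_of)s...",
       {"what": "keystone.catalog.core.Driver", "remove_in": "Newton"})

# mock.ANY matches any value, so the assertion no longer depends on
# whether oslo.log says 'N' or 'Newton' for the removal release.
report.assert_called_with(mock.ANY, mock.ANY,
                          {"what": "keystone.catalog.core.Driver",
                           "remove_in": mock.ANY})
```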

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1561121/+subscriptions



[Yahoo-eng-team] [Bug 1561151] Re: Neutron unit tests fail against oslo.* master

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/296690
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a94e1b410b0ae5193aabe7fdde6f4948334464f8
Submitter: Jenkins
Branch: master

commit a94e1b410b0ae5193aabe7fdde6f4948334464f8
Author: Davanum Srinivas 
Date:   Wed Mar 23 14:52:58 2016 -0400

Fix test failure against latest oslo.* from master

Looks like there's a lot of places in Neutron tests we are
using res.json['NeutronError']['message'] to look at the
exact message and we missed a spot

Closes-Bug: #1561151
Change-Id: I8e62ae9f16a2b239520f79ac53401e596f781b64


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561151

Title:
  Neutron unit tests fail against oslo.* master

Status in neutron:
  Fix Released

Bug description:
  from http://logs.openstack.org/periodic/periodic-neutron-py27-with-
  oslo-master/b093812/console.html#_2016-03-23_06_21_11_099 :

  2016-03-23 06:21:11.099 | Captured traceback:
  2016-03-23 06:21:11.099 | ~~~
  2016-03-23 06:21:11.099 | Traceback (most recent call last):
  2016-03-23 06:21:11.099 |   File 
"neutron/tests/unit/extensions/test_dns.py", line 455, in 
test_api_extension_validation_with_bad_dns_names
  2016-03-23 06:21:11.100 | 'cannot be converted to lowercase string' 
in res.text or
  2016-03-23 06:21:11.191 |   File 
"/home/jenkins/workspace/periodic-neutron-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/webob/response.py",
 line 420, in _text__get
  2016-03-23 06:21:11.191 | "You cannot access Response.text unless 
charset is set")
  2016-03-23 06:21:11.191 | AttributeError: You cannot access Response.text 
unless charset is set
  2016-03-23 06:21:11.191 |
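webob only decodes `Response.text` when a charset is known, which is why the test blew up instead of failing cleanly. A minimal stand-in (not webob itself; the property just mirrors the message from the traceback) showing the behaviour and the safe alternative of decoding the raw body explicitly:

```python
class ResponseSketch:
    """Stand-in mimicking webob.Response's text/charset behaviour."""
    def __init__(self, body, charset=None):
        self.body = body              # raw bytes, always accessible
        self.charset = charset

    @property
    def text(self):
        if self.charset is None:
            raise AttributeError(
                "You cannot access Response.text unless charset is set")
        return self.body.decode(self.charset)

res = ResponseSketch(b'{"NeutronError": {}}', charset=None)
safe = res.body.decode("utf-8")       # decode explicitly instead of .text
```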

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561151/+subscriptions



[Yahoo-eng-team] [Bug 1560945] Re: Unable to create DVR+HA routers

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/296394
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a8b60671150ac383c6ed24c26e773a97a476f7d2
Submitter: Jenkins
Branch: master

commit a8b60671150ac383c6ed24c26e773a97a476f7d2
Author: John Schwarz 
Date:   Wed Mar 23 14:05:37 2016 +0200

Fix reference to uninitialized iptables manager

DvrEdgeRouter.process_address_scope() currently assumes that
snat_iptables_manager was initialized, however this is only done when an
external gateway is added. In case a new DVR+HA router was created
without an external gateway, the l3 agent will raise an exception and
will not create the router correctly. This patch adds a simple check to
make sure that it is defined before it's actually used.

Closes-Bug: #1560945
Change-Id: I677e0837956a6d008a3935d961f078987a07d0c4


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560945

Title:
  Unable to create DVR+HA routers

Status in neutron:
  Fix Released

Bug description:
  When creating a new DVR+HA, the router is created (the API returns
  successfully) but the l3 agent enters an endless loop:

  2016-03-23 13:57:37.340 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router 'a04b3fd7-d46c-4520-82af-18d16835469d'
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 497, in 
_process_router_update
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 436, in 
_process_router_if_compatible
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
self._process_updated_router(router)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 450, in 
_process_updated_router
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent ri.process(self)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/dvr_edge_ha_router.py", line 92, in 
process
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
super(DvrEdgeHaRouter, self).process(agent)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/dvr_local_router.py", line 486, in 
process
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
super(DvrLocalRouter, self).process(agent)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/dvr_router_base.py", line 30, in 
process
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent super(DvrRouterBase, 
self).process(agent)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/ha_router.py", line 386, in process
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent super(HaRouter, 
self).process(agent)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/common/utils.py", line 377, in call
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent self.logger(e)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent self.force_reraise()
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/common/utils.py", line 374, in call
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent return func(*args, 
**kwargs)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/router_info.py", line 963, in process
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
self.process_address_scope()
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/dvr_edge_router.py", line 235, in 
process_address_scope
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent with 
snat_iptables_manager.defer_apply():
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent AttributeError: 
'NoneType' object has no attribute 'defer_apply'
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 

  This happens in upstream master.
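The fix referenced above adds a guard before the manager is used. Roughly, with illustrative names rather than the actual neutron classes:

```python
import contextlib

class StubIptablesManager:
    """Illustrative stand-in for the snat iptables manager."""
    @contextlib.contextmanager
    def defer_apply(self):
        yield                          # batch rule changes, then apply

class DvrEdgeRouterSketch:
    def __init__(self, snat_iptables_manager=None):
        # Only initialized once an external gateway is added.
        self.snat_iptables_manager = snat_iptables_manager

    def process_address_scope(self):
        # The fix: skip scope processing while the router has no
        # external gateway, instead of calling defer_apply() on None.
        if self.snat_iptables_manager is None:
            return "skipped"
        with self.snat_iptables_manager.defer_apply():
            return "applied"
```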

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1370335] Re: Keystone should support HEAD requests for all GET /v3/* actions

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/295641
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=1d087af001da5eedd38513b31ab4dd2b8f35014a
Submitter: Jenkins
Branch: master

commit 1d087af001da5eedd38513b31ab4dd2b8f35014a
Author: Colleen Murphy 
Date:   Mon Mar 21 14:15:52 2016 -0700

Implement HEAD method for all v3 GET actions

Implement the HEAD method for all get-one and list-all operations in the
v3 API (non-extended). While this may never be used by
python-openstackclient, it is useful to operators and application
developers for quickly obtaining metainformation about API resources,
and for "testing hypertext links for validity, accessibility, and
recent modification"[1].

[1] https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.4

Closes-bug: #1370335

Change-Id: Iae26ebea1aa40d3b5c6c676dabe4f60a86a4f99f


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1370335

Title:
  Keystone should support HEAD requests for all GET /v3/* actions

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In all places keystone supports a GET request, a similar HEAD request
  should be supported.  This should only affect cases where a HEAD
  request did not already exist.
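One common way to satisfy this is to route HEAD through the matching GET handler and drop the body, so the two can never diverge. A sketch under that assumption, not keystone's actual dispatch code; `list_users` is a hypothetical handler:

```python
def dispatch(method, get_handler):
    # HEAD must return the same status and headers as GET, minus the body.
    status, headers, body = get_handler()
    if method == "HEAD":
        return status, headers, b""
    return status, headers, body

def list_users():
    """Hypothetical GET /v3/users handler."""
    return 200, {"Content-Type": "application/json"}, b'{"users": []}'
```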

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1370335/+subscriptions



[Yahoo-eng-team] [Bug 1561192] [NEW] create subnet wizard has weird styling

2016-03-23 Thread Doug Fish
Public bug reported:

on Admin/Network/[detail]/Create Subnet the create subnet dialog/wizard
has button-like styling on the tabs at top. I don't think they should be
buttons. They are sort of like tab headers (not exactly). They previously
had a custom style.

I'm not sure what the right appearance for this is, but I don't think
we've captured it yet.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "create-subnet.png"
   
https://bugs.launchpad.net/bugs/1561192/+attachment/4608969/+files/create-subnet.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561192

Title:
  create subnet wizard has weird styling

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  on Admin/Network/[detail]/Create Subnet the create subnet
  dialog/wizard has button-like styling on the tabs at top. I don't
  think they should be buttons. They are sort of like tab headers (not
  exactly). They previously had a custom style.

  I'm not sure what the right appearance for this is, but I don't think
  we've captured it yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561192/+subscriptions



[Yahoo-eng-team] [Bug 1561193] [NEW] Modal windows don't close with ESC key in at least one scenario

2016-03-23 Thread Eddie Ramirez
Public bug reported:

When the user opens a modal window, clicks on an element that is NOT a
form input, and then tries to close the modal window by pressing the ESC
key, the modal window will not close.

How to reproduce:
1. Open a modal window, e.g. Try to create a new volume, instance or image.
2. Click on an element that is NOT a form input, e.g. "Description Text", a
white space, or any other element that makes the inputs lose focus.
3. Press the ESC key and the modal window won't close.

The modal window DOES close if a form input has focus, but once any of
those elements loses focus, pressing the ESC key will not close the
window.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561193

Title:
  Modal windows don't close with ESC key in at least one scenario

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the user opens a modal window, clicks on an element that is NOT
  a form input, and then tries to close the modal window by pressing
  the ESC key, the modal window will not close.

  How to reproduce:
  1. Open a modal window, e.g. try to create a new volume, instance or image.
  2. Click on an element that is NOT a form input, e.g. "Description Text", white space, or any other element that causes the inputs to lose focus.
  3. Press the ESC key; the modal window won't close.

  The modal window DOES close if a form input has focus, but if any of
  those elements loses focus, pressing the ESC key will not close the
  window.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561193/+subscriptions



[Yahoo-eng-team] [Bug 1561188] [NEW] DB api: remove deprecated methods

2016-03-23 Thread Brian Haley
Public bug reported:

Tracker for removing the deprecated methods in neutron/db/api.py. The commit 
where these were added is
4b227c3771eba1cbaa27c6c33829108981cd9b69 :

 * get_object
 * get_objects
 * create_object
 * _safe_get_object
 * update_object
 * delete_object

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561188

Title:
  DB api: remove deprecated methods

Status in neutron:
  In Progress

Bug description:
  Tracker for removing the deprecated methods in neutron/db/api.py. The commit 
where these were added is
  4b227c3771eba1cbaa27c6c33829108981cd9b69 :

   * get_object
   * get_objects
   * create_object
   * _safe_get_object
   * update_object
   * delete_object

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561188/+subscriptions



[Yahoo-eng-team] [Bug 1561184] [NEW] Common utils: remove deprecated methods

2016-03-23 Thread Brian Haley
Public bug reported:

Tracker for removing the deprecated methods in neutron/common/utils.py. The 
commit where these were added is
8022adb7342b09886f53c91c12d0b37986fbf35c :

 * read_cached_file
 * find_config_file
 * get_keystone_url

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561184

Title:
  Common utils: remove deprecated methods

Status in neutron:
  In Progress

Bug description:
  Tracker for removing the deprecated methods in neutron/common/utils.py. The 
commit where these were added is
  8022adb7342b09886f53c91c12d0b37986fbf35c :

   * read_cached_file
   * find_config_file
   * get_keystone_url

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561184/+subscriptions



[Yahoo-eng-team] [Bug 1425747] Re: User-created flavors do not enforce flavor id uniqueness

2016-03-23 Thread Matt Riedemann
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425747

Title:
  User-created flavors do not enforce flavor id uniqueness

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  The nova API makes an implicit assumption that the "flavorid" field is
  unique, but under certain circumstances this is not enforced. This
  results in incorrect behavior in the situation where a deleted
  flavor's flavorid is re-used for a new flavor. Any instances that are
  associated to the deleted flavor will now appear to be associated to
  the new flavor that re-used the id.

  Steps to reproduce:

1. Create a flavor named FOO with a flavorid of 5
2. Create an instance using flavor FOO
3. Delete the flavor FOO
4. Create a flavor named BAR with a flavorid of 5

  Look at the instance detail for the launched instance

  Expected Behavior:

  The instance detail says the instance is launched against FOO

  Actual Behavior:

  The instance detail says the instance is launched against BAR
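
The mix-up above can be sketched with a toy model (illustrative only, not
nova's actual code): instances record only the flavorid, so a lookup
resolves to whichever flavor currently owns that id.

```python
# Toy model of the flavorid-reuse pitfall (not nova code): instances
# store only the flavorid, so lookups resolve to the *current* owner
# of that id, not the flavor the instance was booted against.
flavors = {}  # flavorid -> flavor name

def create_flavor(flavorid, name):
    flavors[flavorid] = name

def delete_flavor(flavorid):
    del flavors[flavorid]

create_flavor(5, "FOO")
instance = {"name": "vm1", "flavorid": 5}   # booted against FOO
delete_flavor(5)
create_flavor(5, "BAR")                     # flavorid 5 is re-used

# The instance detail now reports BAR, even though it was booted on FOO.
print(flavors[instance["flavorid"]])        # BAR
```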

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425747/+subscriptions



[Yahoo-eng-team] [Bug 1561176] [NEW] Edit instance information tab is not properly styled

2016-03-23 Thread Doug Fish
Public bug reported:

On Admin/Instances/Edit Instance, the Information tab is shown and is
mis-styled.

Maybe it shouldn't be shown at all (since it's the only tab?), but if we
are going to show it, it needs to be styled correctly. It looks like a
button now.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "edit-instances.png"
   
https://bugs.launchpad.net/bugs/1561176/+attachment/4608942/+files/edit-instances.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561176

Title:
  Edit instance information tab is not properly styled

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On Admin/Instances/Edit Instance, the Information tab is shown and is
  mis-styled.

  Maybe it shouldn't be shown at all (since it's the only tab?), but if
  we are going to show it, it needs to be styled correctly. It looks
  like a button now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561176/+subscriptions



[Yahoo-eng-team] [Bug 1560993] Re: keystone_service returns ignore_other_regions error in liberty

2016-03-23 Thread Steve Martinelli
It looks like "keystone_service" comes from Ansible, and swiftacular uses
Ansible... I'm not sure what the keystone server project itself can do
about this; I'll open this bug against openstack-ansible too.

** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1560993

Title:
  keystone_service returns ignore_other_regions error in liberty

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-ansible:
  New

Bug description:
  I am trying to port swiftacular from Havana to Liberty.

  The following task, which creates the service endpoint using keystone_service, returns an error:

  - name: create keystone identity point
    keystone_service: insecure=yes name=keystone type=identity description="Keystone Identity Service" publicurl="https://{{ keystone_server }}:5000/v2.0" internalurl="https://{{ keystone_server }}:5000/v2.0" adminurl="https://{{ keystone_server }}:35357/v2.0" region={{ keystone_region }} token={{ keystone_admin_token }} endpoint="https://127.0.0.1:35357/v2.0"

  It returns the following error:

  TASK [authentication : create keystone identity point] *
  fatal: [swift-keystone-01]: FAILED! => {"changed": false, "failed": true, "msg": "value of ignore_other_regions must be one of: yes,on,1,true,1,True,no,off,0,false,0,False, got: False"}
  to retry, use: --limit @site.retry

  The same task worked without a hitch with Havana.
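
A minimal sketch of the suspected failure mode (the accepted-values list is
simplified and this is not the actual module code): an Ansible-style boolean
check that accepts only strings will reject a genuine Python `False`,
producing exactly the kind of message reported above.

```python
# Sketch of a string-only boolean validator (illustrative, not the real
# keystone_service module). A Python bool such as False is not equal to
# any of the accepted strings, so validation fails with the message seen
# in the bug report.
BOOLEANS = ['yes', 'on', '1', 'true', 'no', 'off', '0', 'false']

def check_boolean(name, value):
    if value not in BOOLEANS:
        raise ValueError(
            "value of %s must be one of: %s, got: %s"
            % (name, ",".join(BOOLEANS), value))

check_boolean("ignore_other_regions", "no")       # a string: accepted
try:
    check_boolean("ignore_other_regions", False)  # a bool: rejected
except ValueError as e:
    print(e)  # value of ignore_other_regions must be one of: ..., got: False
```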

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1560993/+subscriptions



[Yahoo-eng-team] [Bug 1561143] Re: Add option for nova endpoint type

2016-03-23 Thread Armando Migliaccio
*** This bug is a duplicate of bug 1526245 ***
https://bugs.launchpad.net/bugs/1526245

This was triggered because the change was cherry-picked to stable... the
workflow doesn't seem quite right, and probably we should have
cherry-picked the change in the first place.

** This bug has been marked a duplicate of bug 1526245
   Add option for nova endpoint type

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561143

Title:
  Add option for nova endpoint type

Status in neutron:
  Invalid
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/291810
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2be884aefe40faab305c9995dbe853e853cd8bba
  Author: Jeremy McDermond 
  Date:   Tue Dec 8 10:14:09 2015 -0800

  Add option for nova endpoint type
  
  When the neutron notification to nova was updated to use novaclient the
  nova_url parameter was disabled.  This prevents administrators from
  using anything but the publicURL as the proper endpoint to notify nova.
  This patch adds an option to pass on to novaclient for the
  endpoint_type so that the administrator can set the notification url to
  public, internal or admin.
  
  Change-Id: I405f76199cab6b8c8895f98419f79cd74cad
  Closes-Bug: #1478471
  DocImpact: Need to add a new option to the neutron configuration
  reference.
  (cherry picked from commit 7dad96deb4ae66509d968465bcd1c852c6743bc1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561143/+subscriptions



[Yahoo-eng-team] [Bug 1561169] [NEW] make rpm fails on CentOS with KeyError: u'd'

2016-03-23 Thread Ahmet Alp Balkan
Public bug reported:

Running `make rpm` after installing dependencies on a clean CentOS 7
machine fails:

# cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)
# python --version
Python 2.7.5
# pip --version
pip 8.1.1 from /usr/lib/python2.7/site-packages (python 2.7)
# pip install -r requirements.txt
...
# pip install -r test-requirements.txt
...
# make rpm
./packages/brpm --distro redhat
Archived the code in '/root/rpmbuild/SOURCES/cloud-init-0.7.7~bzr1188.tar.gz'
Traceback (most recent call last):
  File "./packages/brpm", line 277, in 
sys.exit(main())
  File "./packages/brpm", line 245, in main
os.path.basename(archive_fn))
  File "./packages/brpm", line 186, in generate_spec_contents
return templater.render_from_file(tmpl_fn, params=subs)
  File "/home/azureuser/cloud-init/cloudinit/templater.py", line 137, in 
render_from_file
return renderer(content, params)
  File "/home/azureuser/cloud-init/cloudinit/templater.py", line 83, in 
basic_render
return BASIC_MATCHER.sub(replacer, content)
  File "/home/azureuser/cloud-init/cloudinit/templater.py", line 81, in replacer
return str(selected_params[key])
KeyError: u'd'
make: *** [rpm] Error 1
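
The traceback suggests a simple token-substitution templater that raises
KeyError for any `$name` token missing from its parameter dict. A minimal
sketch (the regex is an assumption reconstructed from the traceback, not
cloud-init's exact matcher) shows how a literal shell variable such as `$d`
in the spec template would trigger this:

```python
import re

# Assumed shape of cloud-init's basic_render: substitute $name / ${name}
# tokens from a params dict. Any token absent from params raises KeyError,
# just like the str(selected_params[key]) line in the traceback.
MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)')

def basic_render(content, params):
    def replacer(match):
        key = match.group(1) or match.group(2)
        return str(params[key])  # KeyError for unknown tokens
    return MATCHER.sub(replacer, content)

print(basic_render("Version: $version", {"version": "0.7.7"}))
# Version: 0.7.7

try:
    # An unescaped shell variable like "$d" in the spec template is read
    # as a substitution token, reproducing the reported KeyError: u'd'
    basic_render("rm -rf $d", {"version": "0.7.7"})
except KeyError as e:
    print("KeyError:", e)  # KeyError: 'd'
```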

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1561169

Title:
  make rpm fails on CentOS with KeyError: u'd'

Status in cloud-init:
  New

Bug description:
  Running `make rpm` after installing dependencies on a clean CentOS 7
  machine fails:

  # cat /etc/centos-release
  CentOS Linux release 7.2.1511 (Core)
  # python --version
  Python 2.7.5
  # pip --version
  pip 8.1.1 from /usr/lib/python2.7/site-packages (python 2.7)
  # pip install -r requirements.txt
  ...
  # pip install -r test-requirements.txt
  ...
  # make rpm
  ./packages/brpm --distro redhat
  Archived the code in '/root/rpmbuild/SOURCES/cloud-init-0.7.7~bzr1188.tar.gz'
  Traceback (most recent call last):
File "./packages/brpm", line 277, in 
  sys.exit(main())
File "./packages/brpm", line 245, in main
  os.path.basename(archive_fn))
File "./packages/brpm", line 186, in generate_spec_contents
  return templater.render_from_file(tmpl_fn, params=subs)
File "/home/azureuser/cloud-init/cloudinit/templater.py", line 137, in 
render_from_file
  return renderer(content, params)
File "/home/azureuser/cloud-init/cloudinit/templater.py", line 83, in 
basic_render
  return BASIC_MATCHER.sub(replacer, content)
File "/home/azureuser/cloud-init/cloudinit/templater.py", line 81, in 
replacer
  return str(selected_params[key])
  KeyError: u'd'
  make: *** [rpm] Error 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1561169/+subscriptions



[Yahoo-eng-team] [Bug 1272623] Re: nova refuses to start if there are baremetal instances with no associated node

2016-03-23 Thread Ben Nemec
It appears this has been fixed in Nova for a long time.

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272623

Title:
  nova refuses to start if there are baremetal instances with no
  associated node

Status in OpenStack Compute (nova):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  This can happen if a deployment is interrupted at just the wrong time.

  2014-01-25 06:53:38,781.781 14556 DEBUG nova.compute.manager 
[req-e1958f79-b0c0-4c80-b284-85bb56f1541d None None] [instance: 
e21e6bca-b528-4922-9f59-7a1a6534ec8d] Current state is 1, state in DB is 1. 
_init_instance 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py:720
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 
346, in fire_timers
  timer()
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
56, in __call__
  cb(*args, **kw)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
194, in main
  result = function(*args, **kwargs)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/service.py",
 line 480, in run_service
  service.start()
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/service.py", line 
172, in start
  self.manager.init_host()
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 805, in init_host
  self._init_instance(context, instance)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 684, in _init_instance
  self.driver.plug_vifs(instance, net_info)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 538, in plug_vifs
  self._plug_vifs(instance, network_info)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 543, in _plug_vifs
  node = _get_baremetal_node_by_instance_uuid(instance['uuid'])
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 85, in _get_baremetal_node_by_instance_uuid
  node = db.bm_node_get_by_instance_uuid(ctx, instance_uuid)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/db/api.py",
 line 101, in bm_node_get_by_instance_uuid
  instance_uuid)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py",
 line 112, in wrapper
  return f(*args, **kwargs)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/db/sqlalchemy/api.py",
 line 152, in bm_node_get_by_instance_uuid
  raise exception.InstanceNotFound(instance_id=instance_uuid)
  InstanceNotFound: Instance 84c6090b-bf42-4c6a-b2ff-afb22b5ff156 could not be 
found.

  If there is no allocated node, we can just skip that part of the delete.
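
The suggested skip could look roughly like this (names and structure are
illustrative, not nova's actual code): catch the not-found error during
init and carry on, instead of letting the whole service crash.

```python
# Hedged sketch: tolerate instances whose baremetal node is gone during
# startup by catching the not-found error and skipping VIF plugging.
class InstanceNotFound(Exception):
    pass

def get_node_by_instance_uuid(nodes, uuid):
    # Stand-in for the DB lookup that raised in the traceback.
    if uuid not in nodes:
        raise InstanceNotFound(uuid)
    return nodes[uuid]

def plug_vifs_safely(nodes, instance_uuid, log):
    try:
        node = get_node_by_instance_uuid(nodes, instance_uuid)
    except InstanceNotFound:
        # No allocated node: log and skip rather than abort init_host.
        log.append("no node for %s, skipping VIF plug" % instance_uuid)
        return None
    return node

log = []
print(plug_vifs_safely({}, "84c6090b", log))  # None; startup continues
```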

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1272623/+subscriptions



[Yahoo-eng-team] [Bug 1560221] Re: No port create notifications received for DHCP subnet creation nor router interface attach

2016-03-23 Thread Steve McLellan
** Also affects: searchlight
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560221

Title:
  No port create notifications received for DHCP subnet creation nor
  router interface attach

Status in neutron:
  New
Status in OpenStack Search (Searchlight):
  New

Bug description:
  Creating a subnet with DHCP enabled either creates or updates a port
  with device_owner network:dhcp matching the network id to which the
  subnet belongs. While there is a notification received for the subnet
  creation, the port creation or update is implicit and has not
  necessarily taken place when the subnet creation event is received
  (and similarly we don't get a notification that the port has changed
  or been deleted when the subnet has DHCP disabled).

  My specific use case is that we're trying to index resource
  create/update/delete events for searchlight and we cannot track the
  network DHCP ports in the same way as we can ports created explicitly
  or as part of nova instance boots.

  The same problem exists for router interface:attach events, though
  with a difference that we do at least get a notification indicating
  the port id created. It would be nice if the ports created when
  attaching a router to a network also sent port.create notifications.

  Tested under mitaka RC-1 (or very close to) with 'messaging' as the
  notification driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560221/+subscriptions



[Yahoo-eng-team] [Bug 1560226] Re: No notifications on tag operations

2016-03-23 Thread Steve McLellan
Yeah, there are events for update operations like renames
(network.update.end, subnet.update.end, etc.). I know very little about
neutron's codebase, unfortunately, though I may take a look once the
Mitaka release is out of the way.

** Also affects: searchlight
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560226

Title:
  No notifications on tag operations

Status in neutron:
  New
Status in OpenStack Search (Searchlight):
  New

Bug description:
  When a tag's added to (or removed from) a resource, no notification is
  generated indicating that the network (or port or whatever) has
  changed, although tags *are* included in notification and API data for
  those resources. It'd be more consistent if attaching a tag to a
  network generated a notification in the same way as if it were
  renamed.

  My use case is that Searchlight would really like to index tags
  attached to networks, routers, etc since it's a very powerful feature
  but we can't provide up to date information unless a notification's
  sent.

  Tested on neutron mitaka rc1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560226/+subscriptions



[Yahoo-eng-team] [Bug 1561143] Re: Add option for nova endpoint type

2016-03-23 Thread Henry Gessau
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561143

Title:
  Add option for nova endpoint type

Status in neutron:
  Invalid
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/291810
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2be884aefe40faab305c9995dbe853e853cd8bba
  Author: Jeremy McDermond 
  Date:   Tue Dec 8 10:14:09 2015 -0800

  Add option for nova endpoint type
  
  When the neutron notification to nova was updated to use novaclient the
  nova_url parameter was disabled.  This prevents administrators from
  using anything but the publicURL as the proper endpoint to notify nova.
  This patch adds an option to pass on to novaclient for the
  endpoint_type so that the administrator can set the notification url to
  public, internal or admin.
  
  Change-Id: I405f76199cab6b8c8895f98419f79cd74cad
  Closes-Bug: #1478471
  DocImpact: Need to add a new option to the neutron configuration
  reference.
  (cherry picked from commit 7dad96deb4ae66509d968465bcd1c852c6743bc1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561143/+subscriptions



[Yahoo-eng-team] [Bug 1561152] [NEW] neutron-sanity-check generates invalid bridge names

2016-03-23 Thread Terry Wilson
Public bug reported:

Instead of using neutron.tests.base.get_rand_device_name(), sanity check
tests have been generating their own prefix name and appending a random
string with utils.get_rand_name(). Many of the strings generated were
too long to be device names, so ovs-vswitchd would fail to create the
devices.

For example:

2016-03-18T05:40:41.950Z|07166|dpif|WARN|system@ovs-system: failed to query 
port patchtest-b76adc: Invalid argument
2016-03-18T05:40:41.950Z|07167|dpif|WARN|system@ovs-system: failed to add 
patchtest-b76adc as port: Invalid argument
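
Linux limits network device names to 15 characters (IFNAMSIZ minus the NUL
terminator), which is why a name like `patchtest-b76adc` (16 characters) is
rejected. A hedged sketch of the kind of helper the fix points at, trimming
the random suffix so the full name fits (illustrative, not neutron's actual
implementation):

```python
import random
import string

# Linux network device names may be at most 15 characters (IFNAMSIZ - 1).
MAX_DEV_LEN = 15

def get_rand_device_name(prefix):
    # Assumes the prefix itself is shorter than the limit; the random
    # suffix is sized so prefix + suffix never exceeds MAX_DEV_LEN.
    suffix_len = MAX_DEV_LEN - len(prefix)
    suffix = ''.join(random.choice(string.ascii_lowercase)
                     for _ in range(suffix_len))
    return prefix + suffix

name = get_rand_device_name('patchtest-')
print(name, len(name))  # always exactly 15 characters
```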

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561152

Title:
  neutron-sanity-check generates invalid bridge names

Status in neutron:
  In Progress

Bug description:
  Instead of using neutron.tests.base.get_rand_device_name(), sanity
  check tests have been generating their own prefix name and appending a
  random string with utils.get_rand_name(). Many of the strings
  generated were too long to be device names, so ovs-vswitchd would fail
  to create the devices.

  For example:

  2016-03-18T05:40:41.950Z|07166|dpif|WARN|system@ovs-system: failed to query 
port patchtest-b76adc: Invalid argument
  2016-03-18T05:40:41.950Z|07167|dpif|WARN|system@ovs-system: failed to add 
patchtest-b76adc as port: Invalid argument

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561152/+subscriptions



[Yahoo-eng-team] [Bug 1514424] Re: neutron metadata ns proxy does not support ssl

2016-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/245945
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7a306e2918775ebb94d9e1408aaa2b7c3ed26fc6
Submitter: Jenkins
Branch:master

commit 7a306e2918775ebb94d9e1408aaa2b7c3ed26fc6
Author: Vincent Untz 
Date:   Tue Nov 17 17:47:56 2015 +0100

Ensure metadata agent doesn't use SSL for UNIX socket

The communication between the ns metadata proxy and the metadata agent
is pure HTTP, and should not switch to HTTPS when neutron is using SSL.

We're therefore telling wsgi.Server to forcefully disable SSL in that
case.

Change-Id: I2cb9fa231193bcd5c721c4d5cf0eb9c16e842349
Closes-Bug: #1514424


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514424

Title:
  neutron metadata ns proxy does not support ssl

Status in neutron:
  Fix Released

Bug description:
  When SSL is enabled in the neutron metadata agent the neutron metadata
  ns proxy isn't able to communicate to the neutron metadata agent via
  the unix domain socket and every request results in a BadStatusLine
  error:

  2015-11-06 16:30:44.060 269669 INFO neutron.wsgi [-] 192.168.0.2 - - 
[06/Nov/2015 16:30:44] "GET /2009-04-04/meta-data/instance-id HTTP/1.1" 500 343 
12.021586
  2015-11-06 16:30:56.064 269669 INFO neutron.wsgi [-] (269669) accepted 
('192.168.0.2', 50879)
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy 
[-] Unexpected error.
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy 
Traceback (most recent call last):
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py", 
line 56, in __call__
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
  req.body)
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py", 
line 88, in _proxy_request
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
  connection_type=agent_utils.UnixDomainHTTPConnection)
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1569, in 
request
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
  (response, content) = self._request(conn, authority, uri, request_uri, 
method, body, headers, redirections, cachekey)
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1316, in 
_request
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
  (response, content) = self._conn_request(conn, request_uri, method, body, 
headers)
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1285, in 
_conn_request
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
  response = conn.getresponse()
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/httplib.py", line 1051, in getresponse
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
  response.begin()
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/httplib.py", line 415, in begin
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
  version, status, reason = self._read_status()
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/httplib.py", line 379, in _read_status
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy   
  raise BadStatusLine(line)
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy 
BadStatusLine: ''
  2015-11-06 16:30:56.071 269669 ERROR neutron.agent.metadata.namespace_proxy 

  It seems that the neutron metadata ns proxy does not support SSL for
  the communication.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514424/+subscriptions



[Yahoo-eng-team] [Bug 1561151] [NEW] Neutron unit tests fail against oslo.* master

2016-03-23 Thread Davanum Srinivas (DIMS)
Public bug reported:

from http://logs.openstack.org/periodic/periodic-neutron-py27-with-oslo-
master/b093812/console.html#_2016-03-23_06_21_11_099 :

2016-03-23 06:21:11.099 | Captured traceback:
2016-03-23 06:21:11.099 | ~~~
2016-03-23 06:21:11.099 | Traceback (most recent call last):
2016-03-23 06:21:11.099 |   File 
"neutron/tests/unit/extensions/test_dns.py", line 455, in 
test_api_extension_validation_with_bad_dns_names
2016-03-23 06:21:11.100 | 'cannot be converted to lowercase string' in 
res.text or
2016-03-23 06:21:11.191 |   File 
"/home/jenkins/workspace/periodic-neutron-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/webob/response.py",
 line 420, in _text__get
2016-03-23 06:21:11.191 | "You cannot access Response.text unless 
charset is set")
2016-03-23 06:21:11.191 | AttributeError: You cannot access Response.text 
unless charset is set
2016-03-23 06:21:11.191 |
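
A minimal model of the failing property (an assumption reconstructed from
the traceback, not webob's real class) shows why the test blew up, and the
usual workaround of decoding the raw body explicitly:

```python
# Minimal stand-in for webob.Response's text property, as implied by the
# traceback: .text decodes .body using the response charset and refuses
# to guess when no charset is set.
class Response:
    def __init__(self, body=b"", charset=None):
        self.body = body
        self.charset = charset

    @property
    def text(self):
        if self.charset is None:
            raise AttributeError(
                "You cannot access Response.text unless charset is set")
        return self.body.decode(self.charset)

res = Response(b"bad dns name", charset=None)
# res.text would raise AttributeError here; decoding the raw body
# explicitly sidesteps the charset requirement.
print(res.body.decode('utf-8'))  # bad dns name
```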

** Affects: neutron
 Importance: Undecided
 Assignee: Davanum Srinivas (DIMS) (dims-v)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561151

Title:
  Neutron unit tests fail against oslo.* master

Status in neutron:
  In Progress

Bug description:
  from http://logs.openstack.org/periodic/periodic-neutron-py27-with-
  oslo-master/b093812/console.html#_2016-03-23_06_21_11_099 :

  2016-03-23 06:21:11.099 | Captured traceback:
  2016-03-23 06:21:11.099 | ~~~
  2016-03-23 06:21:11.099 | Traceback (most recent call last):
  2016-03-23 06:21:11.099 |   File 
"neutron/tests/unit/extensions/test_dns.py", line 455, in 
test_api_extension_validation_with_bad_dns_names
  2016-03-23 06:21:11.100 | 'cannot be converted to lowercase string' 
in res.text or
  2016-03-23 06:21:11.191 |   File 
"/home/jenkins/workspace/periodic-neutron-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/webob/response.py",
 line 420, in _text__get
  2016-03-23 06:21:11.191 | "You cannot access Response.text unless 
charset is set")
  2016-03-23 06:21:11.191 | AttributeError: You cannot access Response.text 
unless charset is set
  2016-03-23 06:21:11.191 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306177] Re: wrong event expectation in _attachInputHandlers

2016-03-23 Thread Rob Cresswell
This no longer appears to be a bug.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1306177

Title:
  wrong event expectation in _attachInputHandlers

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  file horizon/static/horizon/js/horizon.quota.js

  Usage of 'data-progress-indicator-for' depends on the 'keyup' event.
  In Google Chromium the value of an input can be changed with the
  arrows that increment and decrement integer values. That fires only a
  'change' event, without a 'keyup'.

  The place with the error is the gigabyte quotas in the volume creation
  dialog for cinder. (see attachment)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1306177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490628] Re: Dashboard Panels should not include custom/styles

2016-03-23 Thread Rob Cresswell
** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Rajat Vig (rajatv) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490628

Title:
  Dashboard Panels should not include custom/styles

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Dashboard Panels should not include custom/styles

  Currently the '/custom/styles' SCSS is included in
  Dashboard Panels
  1. project.scss
  2. identity.scss
  This causes the same styles to be included multiple times.
  Instead, this should be done only in app.scss.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561121] [NEW] Keystone unit test failure with oslo.* from master

2016-03-23 Thread Davanum Srinivas (DIMS)
Public bug reported:

from http://logs.openstack.org/periodic/periodic-keystone-py27-with-
oslo-master/0665198/console.html#_2016-03-23_06_21_06_074

2016-03-23 06:21:06.073 | ==
2016-03-23 06:21:06.073 | Failed 1 tests - output below:
2016-03-23 06:21:06.073 | ==
2016-03-23 06:21:06.073 | 
2016-03-23 06:21:06.073 | 
keystone.tests.unit.common.test_manager.TestCreateLegacyDriver.test_class_is_properly_deprecated
2016-03-23 06:21:06.073 | 

2016-03-23 06:21:06.074 | 
2016-03-23 06:21:06.074 | Captured traceback:
2016-03-23 06:21:06.074 | ~~~
2016-03-23 06:21:06.074 | Traceback (most recent call last):
2016-03-23 06:21:06.074 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
2016-03-23 06:21:06.074 | return func(*args, **keywargs)
2016-03-23 06:21:06.074 |   File 
"keystone/tests/unit/common/test_manager.py", line 37, in 
test_class_is_properly_deprecated
2016-03-23 06:21:06.075 | mock_reporter.assert_called_with(mock.ANY, 
mock.ANY, details)
2016-03-23 06:21:06.075 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/mock/mock.py",
 line 937, in assert_called_with
2016-03-23 06:21:06.075 | 
six.raise_from(AssertionError(_error_message(cause)), cause)
2016-03-23 06:21:06.075 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/six.py",
 line 718, in raise_from
2016-03-23 06:21:06.075 | raise value
2016-03-23 06:21:06.075 | AssertionError: Expected call: 
report_deprecated_feature(, , {'in_favor_of': 
'keystone.catalog.core.CatalogDriverV8', 'as_of': 'Liberty', 'what': 
'keystone.catalog.core.Driver', 'remove_in': 'N'})
2016-03-23 06:21:06.076 | Actual call: 
report_deprecated_feature(, u'%(what)s 
is deprecated as of %(as_of)s in favor of %(in_favor_of)s and may be removed in 
%(remove_in)s.', {'in_favor_of': 'keystone.catalog.core.CatalogDriverV8', 
'as_of': 'Liberty', 'what': 'keystone.catalog.core.Driver', 'remove_in': 
'Newton'})
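The mismatch above is a literal-vs-expanded-value problem: the test expected the abbreviated 'N' while oslo.log now reports 'Newton'. A small self-contained illustration of the failure mode and the mock.ANY escape hatch (the names here are illustrative, not keystone's actual test code):

```python
from unittest import mock

reporter = mock.Mock()
details = {'remove_in': 'Newton'}  # what the new oslo.log actually sends

# Simulate the library making the deprecation call:
reporter('some-logger', '%(what)s is deprecated...', details)

# Passes once the expected value is updated to the expanded form:
reporter.assert_called_with(mock.ANY, mock.ANY, {'remove_in': 'Newton'})

# Fails exactly like the keystone test did: a stale expected value 'N'.
stale_failed = False
try:
    reporter.assert_called_with(mock.ANY, mock.ANY, {'remove_in': 'N'})
except AssertionError:
    stale_failed = True
assert stale_failed
```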

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1561121

Title:
  Keystone unit test failure with oslo.* from master

Status in OpenStack Identity (keystone):
  New

Bug description:
  from http://logs.openstack.org/periodic/periodic-keystone-py27-with-
  oslo-master/0665198/console.html#_2016-03-23_06_21_06_074

  2016-03-23 06:21:06.073 | ==
  2016-03-23 06:21:06.073 | Failed 1 tests - output below:
  2016-03-23 06:21:06.073 | ==
  2016-03-23 06:21:06.073 | 
  2016-03-23 06:21:06.073 | 
keystone.tests.unit.common.test_manager.TestCreateLegacyDriver.test_class_is_properly_deprecated
  2016-03-23 06:21:06.073 | 

  2016-03-23 06:21:06.074 | 
  2016-03-23 06:21:06.074 | Captured traceback:
  2016-03-23 06:21:06.074 | ~~~
  2016-03-23 06:21:06.074 | Traceback (most recent call last):
  2016-03-23 06:21:06.074 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  2016-03-23 06:21:06.074 | return func(*args, **keywargs)
  2016-03-23 06:21:06.074 |   File 
"keystone/tests/unit/common/test_manager.py", line 37, in 
test_class_is_properly_deprecated
  2016-03-23 06:21:06.075 | mock_reporter.assert_called_with(mock.ANY, 
mock.ANY, details)
  2016-03-23 06:21:06.075 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/mock/mock.py",
 line 937, in assert_called_with
  2016-03-23 06:21:06.075 | 
six.raise_from(AssertionError(_error_message(cause)), cause)
  2016-03-23 06:21:06.075 |   File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/local/lib/python2.7/site-packages/six.py",
 line 718, in raise_from
  2016-03-23 06:21:06.075 | raise value
  2016-03-23 06:21:06.075 | AssertionError: Expected call: 
report_deprecated_feature(, , {'in_favor_of': 
'keystone.catalog.core.CatalogDriverV8', 'as_of': 'Liberty', 'what': 
'keystone.catalog.core.Driver', 'remove_in': 'N'})
  2016-03-23 06:21:06.076 | Actual call: 
report_deprecated_feature(, u'%(what)s 
is deprecated as of %(as_of)s in favor of %(in_favor_of)s and may be removed in 
%(remove_in)s.', 

[Yahoo-eng-team] [Bug 1561118] [NEW] Agents: remove deprecated methods

2016-03-23 Thread Gary Kotton
Public bug reported:

Tracker for removing the deprecated methods in the agents directory. The
commits where these were added are:
 - pullup_route (commit 23b907bc6e87be153c13b1bf3e069467cdda0d27) -

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561118

Title:
  Agents: remove deprecated methods

Status in neutron:
  In Progress

Bug description:
  Tracker for removing the deprecated methods in the agents directory.
  The commits where these were added are:
   - pullup_route (commit 23b907bc6e87be153c13b1bf3e069467cdda0d27) -

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561107] [NEW] Horizon denies permissions when a user gets created with a non-existent domain

2016-03-23 Thread Rene Ochoa Dorado
Public bug reported:

While I was testing and playing around with the Identity v3 API, I
managed to break the permissions on the User page in the Horizon UI by
accidentally setting a newly created user's domain to '{}'.

Once the user is set with the incorrect domain, any navigation to the
user list in Horizon will log you out continuously unless you break the
cycle by removing the login redirect URL in the address bar.

During this time I did not lose the ability to administer users through
the Identity v3 API. Deleting the bugged user resulted in normal
operation. I imagine updating the user's domain would have yielded the
same result as deleting the user.

Sample JSON that caused the error when POSTing to /v3/users:
{"user":
{"name":"TestUser","password":"thisismypassword","domain":{},"domain_id":"default"}}

When I query the user list through Postman, you can see that the domain
is set incorrectly:
{
"domain": {},
  "name": "TestUser",
  "links": {
"self": "someurl"
  },
  "enabled": true,
  "id": "someid",
  "domain_id": "default"
}
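A sketch of the kind of server-side normalization that would prevent the broken state (the `normalize_user` helper is hypothetical; it is not part of keystone or horizon):

```python
def normalize_user(user):
    """Drop a meaningless empty 'domain' object so 'domain_id' wins."""
    domain = user.get('domain')
    if isinstance(domain, dict) and not domain:
        user = dict(user)  # don't mutate the caller's payload
        user.pop('domain')
    return user

payload = {"name": "TestUser", "domain": {}, "domain_id": "default"}
cleaned = normalize_user(payload)
assert 'domain' not in cleaned and cleaned['domain_id'] == 'default'
```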

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: list permissions user

** Description changed:

  While i was testing and playing around with the Identity v3 api i
  managed to break the permissions to the User page in the horizon UI by
  accidentally setting a newly created user's domain to '{}'.
  
  Once the user is set with the incorrect domain any navigation to the
  user list in horizon will log you out continuously unless you break the
  cycle by removing the login redirect URL in the address bar.
  
- During this time i didn't not loose by ability to administer users
- through the Identity api. Deleting the bugged user resulted in normal
+ During this time i didn't  loose by ability to administer users through
+ the Identity v3 api. Deleting the bugged user resulted in normal
  operation. I imagine updating the user's domain  would have yielded the
  same results as deleting the user.
  
  sample json that caused the error while posting to /v3/users
  {"user":
  
{"name":"TestUser","password":"thisismypassword","domain":{},"domain_id":"default"}}
  
- 
- when i query the user list through postman you can see that the domain in set 
incorrectly
+ when i query the user list through postman you can see that the domain is set 
incorrectly
  {
  "domain": {},
-   "name": "TestUser",
-   "links": {
- "self": "someurl"
-   },
-   "enabled": true,
-   "id": "someid",
-   "domain_id": "default"
+   "name": "TestUser",
+   "links": {
+ "self": "someurl"
+   },
+   "enabled": true,
+   "id": "someid",
+   "domain_id": "default"
  }

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561107

Title:
  Horizon denies permissions when a user gets created with a non-
  existent domain

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While I was testing and playing around with the Identity v3 API, I
  managed to break the permissions on the User page in the Horizon UI by
  accidentally setting a newly created user's domain to '{}'.

  Once the user is set with the incorrect domain, any navigation to the
  user list in Horizon will log you out continuously unless you break
  the cycle by removing the login redirect URL in the address bar.

  During this time I did not lose the ability to administer users
  through the Identity v3 API. Deleting the bugged user resulted in
  normal operation. I imagine updating the user's domain would have
  yielded the same result as deleting the user.

  Sample JSON that caused the error when POSTing to /v3/users:
  {"user":
  {"name":"TestUser","password":"thisismypassword","domain":{},"domain_id":"default"}}

  When I query the user list through Postman, you can see that the
  domain is set incorrectly:
  {
  "domain": {},
    "name": "TestUser",
    "links": {
  "self": "someurl"
    },
    "enabled": true,
    "id": "someid",
    "domain_id": "default"
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561099] [NEW] keystone-manage looks for default_config_files in the wrong place

2016-03-23 Thread Colleen Murphy
Public bug reported:

Summary:

The keystone-manage command searches for a default keystone.conf
relative to the installed executable [1]. The result is that it will
look in /../etc/keystone.conf. Failing to find it there, it
will search the standard oslo.cfg directories: ~/.keystone/, ~/,
/etc/keystone/, /etc/.

I can't find documentation stating keystone.conf should live at /../etc/keystone.conf. I can find documentation saying it should
live in the etc/ directory of the keystone source directory[2], and I
can find documentation saying it should live in one of the oslo.cfg
directories[3]. If keystone-manage searched for keystone.conf relative
to the python source file keystone/cmd/manage.py rather than the
installed binary, the instructions at [2] would work correctly and [3]
would still work as a fallback.
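The two lookup strategies can be contrasted in a few lines (illustrative helpers, not the actual keystone code; a standard source-checkout layout is assumed):

```python
import os
import sys

def etc_dir_relative_to_executable():
    # What keystone-manage does today: etc/ one level above the
    # installed binary's directory.
    exe_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
    return os.path.normpath(os.path.join(exe_dir, '..', 'etc'))

def etc_dir_relative_to_source(source_file):
    # The proposed behaviour: etc/ relative to keystone/cmd/manage.py,
    # i.e. the checkout's own etc/ directory.
    pkg_dir = os.path.dirname(os.path.abspath(source_file))
    return os.path.normpath(os.path.join(pkg_dir, '..', '..', 'etc'))

# For a source tree at /src/keystone, the source-relative lookup finds
# the checkout's etc/:
assert etc_dir_relative_to_source(
    '/src/keystone/keystone/cmd/manage.py') == '/src/keystone/etc'
```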

Steps to reproduce:

1) Follow the "Developing with Keystone" instructions
(http://docs.openstack.org/developer/keystone/developing.html), copying
etc/keystone.conf.sample to etc/keystone.conf.

2) Change the database connection string in etc/keystone.conf to
sqlite:///keystone2.db

3) Run a keystone-manage db_sync

Expected result:

A sqlite database is created in the current working directory called
keystone2.db

Actual result:

A sqlite database is created in the current working directory called
keystone.db.

[1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/cmd/manage.py#n23
[2] 
http://docs.openstack.org/developer/keystone/developing.html#configuring-keystone
[3] 
http://docs.openstack.org/developer/keystone/configuration.html#configuration-files

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1561099

Title:
  keystone-manage looks for default_config_files in the wrong place

Status in OpenStack Identity (keystone):
  New

Bug description:
  Summary:

  The keystone-manage command searches for a default keystone.conf
  relative to the installed executable [1]. The result is that it will
  look in /../etc/keystone.conf. Failing to find it there, it
  will search the standard oslo.cfg directories: ~/.keystone/, ~/,
  /etc/keystone/, /etc/.

  I can't find documentation stating keystone.conf should live at /../etc/keystone.conf. I can find documentation saying it should
  live in the etc/ directory of the keystone source directory[2], and I
  can find documentation saying it should live in one of the oslo.cfg
  directories[3]. If keystone-manage searched for keystone.conf relative
  to the python source file keystone/cmd/manage.py rather than the
  installed binary, the instructions at [2] would work correctly and [3]
  would still work as a fallback.

  Steps to reproduce:

  1) Follow the "Developing with Keystone" instructions
  (http://docs.openstack.org/developer/keystone/developing.html),
  copying etc/keystone.conf.sample to etc/keystone.conf.

  2) Change the database connection string in etc/keystone.conf to
  sqlite:///keystone2.db

  3) Run a keystone-manage db_sync

  Expected result:

  A sqlite database is created in the current working directory called
  keystone2.db

  Actual result:

  A sqlite database is created in the current working directory called
  keystone.db.

  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/cmd/manage.py#n23
  [2] 
http://docs.openstack.org/developer/keystone/developing.html#configuring-keystone
  [3] 
http://docs.openstack.org/developer/keystone/configuration.html#configuration-files

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1561099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1530294] Re: can't resize when use_cow_images is True

2016-03-23 Thread Markus Zoeller (markus_z)
@kaka:
As stated in comments 1-3, we need more information to solve this.
This bug report has had the status "Incomplete" for more than 30 days.
To keep the bug list sane, I am closing this bug as "Invalid".
If you have more information, please set the bug back to "New" and
use the report template found at [1].

References:
[1] https://wiki.openstack.org/wiki/Nova/BugsTeam/BugReportTemplate

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1530294

Title:
  can't resize when use_cow_images is True

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  nova.conf:
  #force_raw_images = true
  use_cow_images = True

  error log:
  ] u'qemu-img resize 
/var/lib/nova/instances/727fd979-d02e-4a9b-8b7c-9488ead6c18b/disk 42949672960' 
failed. Not Retrying. execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:308
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager 
[req-c7949bf1-d9a1-46bd-8dd6-5bc26c32585c 7f50e4ce47aa4d28b78b3b5937f3a382 
65a1edd1dad24b15a4f27bb0d7dcb4d6 - - -] [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] Setting instance vm_state to ERROR
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] Traceback (most recent call last):
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3934, in 
finish_resize
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] disk_info, image)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3900, in 
_finish_resize
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] old_instance_type)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] six.reraise(self.type_, self.value, 
self.tb)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3895, in 
_finish_resize
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] block_device_info, power_on)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6836, in 
finish_migration
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] self._disk_resize(image, size)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6815, in 
_disk_resize
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] disk.extend(image, size)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/disk/api.py", line 190, in extend
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] utils.execute('qemu-img', 'resize', 
image.path, size)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/utils.py", line 390, in execute
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] return processutils.execute(*cmd, 
**kwargs)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 275, 
in execute
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] cmd=sanitized_cmd)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] ProcessExecutionError: Unexpected error 
while running command.
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] Command: qemu-img resize 
/var/lib/nova/instances/727fd979-d02e-4a9b-8b7c-9488ead6c18b/disk 42949672960
  2015-12-31 16:33:16.696 18243 ERROR 

[Yahoo-eng-team] [Bug 1561054] [NEW] Make Fernet the default token provider

2016-03-23 Thread Lance Bragstad
Public bug reported:

The fernet token provider should be the default token provider in
Keystone. This will allow the keystone development team to deprecate all
other token providers in keystone and massively simplify the token
provider API.

** Affects: keystone
 Importance: Wishlist
 Status: New


** Tags: fernet

** Changed in: keystone
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1561054

Title:
  Make Fernet the default token provider

Status in OpenStack Identity (keystone):
  New

Bug description:
  The fernet token provider should be the default token provider in
  Keystone. This will allow the keystone development team to deprecate
  all other token providers in keystone and massively simplify the token
  provider API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1561054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561046] [NEW] If there is a /var/lib/neutron/ha_confs/.pid then l3 agent fails to spawn a keepalived process for that router

2016-03-23 Thread Hynek Mlnarik
Public bug reported:

If the .pid file for the previous keepalived process (located in
/var/lib/neutron/ha_confs/.pid) already exists then the L3
agent fails to spawn a keepalived process for that router.

For example, upon neutron node shutdown and restart the processes are
assigned new PIDs that can be same as those previously assigned to some
of the keepalived processes. The latter are captured in PID files and
once keepalived starts, it detects that there is a running process with
that PID and reports "daemon is already running".

Steps to reproduce:
1) Pick a router on which to reproduce this issue; record the router_id
2) Kill the two processes denoted in these two files:
/lib/neutron/ha_confs/.pid and /lib/neutron/ha_confs/.pid-vrrp
3) Make sure that no keepalived process comes back for this router
4) Now pick an existing process ID - anything that is really running - and
put that process ID into the PID files. For example, a background sleep
process running as PID 12345 can be put into the .pid and .pid-vrrp files.

Bug valid with keepalived version 1.2.13 and 1.2.19.
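A sketch of the liveness check the agent could apply before trusting a recorded PID; note that a live PID alone is not enough (the reuse case above), so a robust fix would also verify the process really is keepalived. Helper names are illustrative, not the actual neutron code:

```python
import errno
import os

def pid_is_running(pid):
    """True if some process with this PID exists (it may be a reused PID)."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission probe only
    except OSError as e:
        if e.errno == errno.ESRCH:  # no such process
            return False
        return True                 # e.g. EPERM: exists, owned by someone else
    return True

def pid_is_keepalived(pid):
    # On Linux, confirm the live process really is keepalived before
    # concluding "daemon is already running".
    try:
        with open('/proc/%d/cmdline' % pid, 'rb') as f:
            return b'keepalived' in f.read()
    except IOError:
        return False

assert pid_is_running(os.getpid())         # our own PID is alive...
assert not pid_is_keepalived(os.getpid())  # ...but it is not keepalived
```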

** Affects: neutron
 Importance: Undecided
 Assignee: Hynek Mlnarik (hmlnarik-s)
 Status: In Progress


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561046

Title:
  If there is a /var/lib/neutron/ha_confs/.pid then l3 agent
  fails to spawn a keepalived process for that router

Status in neutron:
  In Progress

Bug description:
  If the .pid file for the previous keepalived process (located in
  /var/lib/neutron/ha_confs/.pid) already exists then the L3
  agent fails to spawn a keepalived process for that router.

  For example, upon neutron node shutdown and restart the processes are
  assigned new PIDs that can be same as those previously assigned to
  some of the keepalived processes. The latter are captured in PID files
  and once keepalived starts, it detects that there is a running process
  with that PID and reports "daemon is already running".

  Steps to reproduce:
  1) Pick a router on which to reproduce this issue; record the router_id
  2) Kill the two processes denoted in these two files:
  /lib/neutron/ha_confs/.pid and /lib/neutron/ha_confs/.pid-vrrp
  3) Make sure that no keepalived process comes back for this router
  4) Now pick an existing process ID - anything that is really running -
  and put that process ID into the PID files. For example, a background
  sleep process running as PID 12345 can be put into the .pid and
  .pid-vrrp files.

  Bug valid with keepalived version 1.2.13 and 1.2.19.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560698] Re: Max char length for 'service type' field in dashboard is limited to 20 chars

2016-03-23 Thread Adnan Khan
This is invalid for upstream!

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1560698

Title:
  Max char length for 'service type' field in dashboard is limited to 20
  chars

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The 'service type' field in Project metadata is limited to 20 characters.
  Users cannot enter longer values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1560698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561040] [NEW] RuntimeError while deleting linux bridge by linux bridge agent

2016-03-23 Thread venkata anil
Public bug reported:

http://logs.openstack.org/14/275614/7/check/gate-neutron-dsvm-
fullstack/efae851/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VLANs_
/neutron-linuxbridge-agent--2016-03-23--04-07-30-395169.log.txt.gz

The Linux bridge agent does not handle the RuntimeError exception raised
when it tries to delete a network's bridge that has already been deleted
by nova in parallel. The fullstack test has a similar scenario: like
nova, it creates the network's bridge for the agent and deletes it after
the test. The Linux bridge agent should ignore the RuntimeError if the
bridge doesn't exist.
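The proposed behaviour can be sketched as a tolerant wrapper around the delete call (the helper names and the sysfs existence check are illustrative, not the agent's actual code):

```python
import os

def bridge_exists(name):
    # On Linux, a bridge device appears under /sys/class/net/<name>.
    return os.path.exists('/sys/class/net/%s' % name)

def delete_bridge_tolerant(delete_fn, name):
    try:
        delete_fn(name)
    except RuntimeError:
        if bridge_exists(name):
            raise  # real failure: the bridge is still present
        # Otherwise nova (or the fullstack test) removed it in parallel;
        # the agent's goal is already met, so swallow the error.

# Simulated concurrent deletion: the kernel call fails, the bridge is gone.
def failing_delete(name):
    raise RuntimeError('RTNETLINK answers: No such device')

delete_bridge_tolerant(failing_delete, 'brq-gone-in-parallel')  # no exception
```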

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561040

Title:
  RuntimeError while deleting linux bridge by linux bridge agent

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/14/275614/7/check/gate-neutron-dsvm-
  
fullstack/efae851/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VLANs_
  /neutron-linuxbridge-agent--2016-03-23--04-07-30-395169.log.txt.gz

  The Linux bridge agent does not handle the RuntimeError exception
  raised when it tries to delete a network's bridge that has already
  been deleted by nova in parallel. The fullstack test has a similar
  scenario: like nova, it creates the network's bridge for the agent and
  deletes it after the test. The Linux bridge agent should ignore the
  RuntimeError if the bridge doesn't exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561022] Re: Server group policies are not honored during live migration

2016-03-23 Thread John Garbutt
Seems like a nasty regression, adding to mitaka rc2

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
Milestone: None => mitaka-rc2

** Tags added: mitaka-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561022

Title:
  Server group policies are not honored during live migration

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  New

Bug description:
  Commit
  
https://github.com/openstack/nova/commit/111a852e79f0d9e54228d8e2724dc4183f737397
  introduced a regression that causes affinity/anti-affinity policies to
  be omitted while live migrating an instance.

  This is because we don't pass instance_group here:

  
https://github.com/openstack/nova/blob/111a852e79f0d9e54228d8e2724dc4183f737397/nova/conductor/tasks/live_migrate.py#L183

  However, filters are expecting this information:

  
https://github.com/openstack/nova/blob/111a852e79f0d9e54228d8e2724dc4183f737397/nova/scheduler/filters/affinity_filter.py#L86

  Basically we should pass instance group so that filters can read this
  information later.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1561022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561022] [NEW] Server group policies are not honored during live migration

2016-03-23 Thread Pawel Koniszewski
Public bug reported:

Commit
https://github.com/openstack/nova/commit/111a852e79f0d9e54228d8e2724dc4183f737397
introduced a regression that causes affinity/anti-affinity policies to be
ignored while live migrating an instance.

This is because we don't pass instance_group here:

https://github.com/openstack/nova/blob/111a852e79f0d9e54228d8e2724dc4183f737397/nova/conductor/tasks/live_migrate.py#L183

However, filters are expecting this information:

https://github.com/openstack/nova/blob/111a852e79f0d9e54228d8e2724dc4183f737397/nova/scheduler/filters/affinity_filter.py#L86

Basically, we should pass the instance group so that the filters can read
this information later.
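The missing plumbing can be pictured with a small, self-contained sketch. Everything below except the instance_group field name is a hypothetical, simplified stand-in for nova's RequestSpec and filter objects, not nova's actual code:

```python
# Sketch of the fix: thread the instance group through the request spec
# used for live migration so the affinity filters can see it.

class RequestSpec(object):
    def __init__(self, instance_group=None):
        self.instance_group = instance_group

def build_live_migrate_spec(instance_group):
    # Before the fix this was effectively RequestSpec(): the group was
    # dropped, so the anti-affinity filter silently passed all hosts.
    return RequestSpec(instance_group=instance_group)

def anti_affinity_passes(spec, candidate_host, group_hosts):
    # Mirrors the filter's behaviour: with no group info it cannot
    # enforce anything and accepts every candidate host.
    if spec.instance_group is None:
        return True
    return candidate_host not in group_hosts

spec = build_live_migrate_spec({'policy': 'anti-affinity'})
print(anti_affinity_passes(spec, 'host1', {'host1', 'host2'}))  # False
print(anti_affinity_passes(RequestSpec(), 'host1', {'host1'}))  # True
```

The second call shows the bug itself: a spec built without the group lets the migration land on a host that violates the policy.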

** Affects: nova
 Importance: Medium
 Status: Confirmed

** Affects: nova/mitaka
 Importance: Undecided
 Status: New


** Tags: live-migration mitaka-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561022

Title:
  Server group policies are not honored during live migration

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  New

Bug description:
  Commit
  
https://github.com/openstack/nova/commit/111a852e79f0d9e54228d8e2724dc4183f737397
  introduced a regression that causes affinity/anti-affinity policies to
  be ignored while live migrating an instance.

  This is because we don't pass instance_group here:

  
https://github.com/openstack/nova/blob/111a852e79f0d9e54228d8e2724dc4183f737397/nova/conductor/tasks/live_migrate.py#L183

  However, filters are expecting this information:

  
https://github.com/openstack/nova/blob/111a852e79f0d9e54228d8e2724dc4183f737397/nova/scheduler/filters/affinity_filter.py#L86

  Basically, we should pass the instance group so that the filters can
  read this information later.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1561022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560993] [NEW] keystone_service returns ignore_other_regions error in liberty

2016-03-23 Thread Simon Pierre Desrosiers
Public bug reported:

I am trying to port swiftacular from Havana to Liberty.

The following line to create the service endpoint using keystone_service 
returns an error :
- name: create keystone identity point
  keystone_service: insecure=yes name=keystone type=identity 
description="Keystone Identity Service" publicurl="https://{{ keystone_server 
}}:5000/v2.0" internalurl="https://{{ keystone_server }}:5000/v2.0" 
adminurl="https://{{ keystone_server }}:35357/v2.0" region={{ keystone_region 
}} token={{ keystone_admin_token }} endpoint="https://127.0.0.1:35357/v2.0"

returns the following error

TASK [authentication : create keystone identity point] *
fatal: [swift-keystone-01]: FAILED! => {"changed": false, "failed": true, 
"msg": "value of ignore_other_regions must be one of: 
yes,on,1,true,1,True,no,off,0,false,0,False, got: False"}
to retry, use: --limit @site.retry

The same task worked without a hitch with havana.
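One guess at the failure mechanism, reproduced in miniature: Ansible-style modules of that era validated boolean parameters against a fixed list of accepted *strings*, so a raw Python bool default (`False`) for ignore_other_regions fails the membership check. This is illustrative code, not the actual keystone_service module source:

```python
# Simplified reproduction of the "must be one of: yes,on,1,..." error.
BOOLEANS = ['yes', 'on', '1', 'true', 'True',
            'no', 'off', '0', 'false', 'False']

def check_bool_param(name, value):
    # Membership test is against strings only, so a Python bool fails.
    if value not in BOOLEANS:
        raise ValueError("value of %s must be one of: %s, got: %s"
                         % (name, ",".join(BOOLEANS), value))
    return value in ('yes', 'on', '1', 'true', 'True')

# A string value passes validation; the module's own bool default does not.
print(check_bool_param('ignore_other_regions', 'no'))   # False
try:
    check_bool_param('ignore_other_regions', False)
except ValueError as e:
    print('rejected: %s' % e)
```

If this is indeed the cause, the regression would be in how the module sets its default, not in the playbook task itself.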

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1560993

Title:
  keystone_service returns ignore_other_regions error in liberty

Status in OpenStack Identity (keystone):
  New

Bug description:
  I am trying to port swiftacular from Havana to Liberty.

  The following line to create the service endpoint using keystone_service 
returns an error :
  - name: create keystone identity point
keystone_service: insecure=yes name=keystone type=identity 
description="Keystone Identity Service" publicurl="https://{{ keystone_server 
}}:5000/v2.0" internalurl="https://{{ keystone_server }}:5000/v2.0" 
adminurl="https://{{ keystone_server }}:35357/v2.0" region={{ keystone_region 
}} token={{ keystone_admin_token }} endpoint="https://127.0.0.1:35357/v2.0"

  returns the following error

  TASK [authentication : create keystone identity point] 
*
  fatal: [swift-keystone-01]: FAILED! => {"changed": false, "failed": true, 
"msg": "value of ignore_other_regions must be one of: 
yes,on,1,true,1,True,no,off,0,false,0,False, got: False"}
to retry, use: --limit @site.retry

  The same task worked without a hitch with havana.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1560993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560965] [NEW] libvirt selects wrong root device name

2016-03-23 Thread ebl...@nde.ag
Public bug reported:

This refers to Liberty; Compute runs with the Xen hypervisor:

When trying to boot an instance from volume via Horizon, the VM fails to spawn 
because of an invalid block device mapping. I found a cause for that in a 
default initial "device_name=vda" in the file 
/srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py.
Log file nova-compute.log reports
"Ignoring supplied device name: /dev/vda",

but in the next step it uses it anyway and says

"Booting with blank volume at /dev/vda".

To test my assumption, I blanked the device_name and edited the array
dev_mapping_2 to only append device_name if it's not empty. That works
perfectly for booting from Horizon and could be one way to fix this.
But if you use nova boot command, you can still provide (multiple) device 
names, for example if you launch an instance and attach a blank volume.
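The workaround described above can be sketched in a few lines. The field names below follow the block-device-mapping API's conventions but are illustrative; this is not the actual Horizon create_instance.py code:

```python
# Only include device_name in the mapping when the user supplied one,
# so the hypervisor driver picks the root device itself (/dev/xvda on Xen).

def build_dev_mapping(volume_size, device_name=None):
    mapping = {
        'source_type': 'image',
        'destination_type': 'volume',
        'volume_size': volume_size,
        'boot_index': 0,
        'delete_on_termination': True,
    }
    # The key change: an empty or None device_name is omitted entirely
    # instead of defaulting to 'vda'.
    if device_name:
        mapping['device_name'] = device_name
    return mapping

print('device_name' in build_dev_mapping(1))       # False
print(build_dev_mapping(1, 'vdb')['device_name'])  # vdb
```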

It seems that libvirt is indeed ignoring the supplied device name, but
only if it's not the root device. If I understand correctly, a user-
supplied device_name should also be nulled out for root_device_name and
picked by libvirt if it's not valid. Also, the default value for
device_name in the Horizon dashboard should be None. If one is
supplied, it should be processed and probably validated by libvirt.

Steps to reproduce from Horizon:
Use Xen as hypervisor

1. Go to the Horizon dashboard and launch an instance
2. Select "Boot from image (creates a new volume)" as Instance Boot Source

Expected result:
Instance starts with /dev/xvda as root device.

Actual result:
Build of instance fails, nova-compute.log reports 
"BuildAbortException: Build of instance c15f3344-f9e3-4853-9c18-ea8741563205 
aborted: Block Device Mapping is Invalid"

Steps to reproduce from nova cli:

1. Launch an instance from command line via
nova boot --flavor 1 --block-device 
source=image,id=IMAGE_ID,dest=volume,size=1,shutdown=remove,bootindex=0,device=vda
  --block-device source=blank,dest=volume,size=1,shutdown=remove,device=vdb VM

Expected result:
Instance starts with /dev/xvda as root device.

Actual result:
Build of instance fails, device name for vdb is ignored and replaced correctly, 
but the root device is not.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560965

Title:
  libvirt selects wrong root device name

Status in OpenStack Compute (nova):
  New

Bug description:
  This refers to Liberty; Compute runs with the Xen hypervisor:

  When trying to boot an instance from volume via Horizon, the VM fails to 
spawn because of an invalid block device mapping. I found a cause for that in a 
default initial "device_name=vda" in the file 
/srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py.
  Log file nova-compute.log reports
  "Ignoring supplied device name: /dev/vda",

  but in the next step it uses it anyway and says

  "Booting with blank volume at /dev/vda".

  To test my assumption, I blanked the device_name and edited the array 
dev_mapping_2 to only append device_name if it's not empty. That works 
perfectly for Booting from Horizon and could be one way to fix this.
  But if you use nova boot command, you can still provide (multiple) device 
names, for example if you launch an instance and attach a blank volume.

  It seems that libvirt is indeed ignoring the supplied device name, but
  only if it's not the root device. If I understand correctly, a user-
  supplied device_name should also be nulled out for root_device_name
  and picked by libvirt, if it's not valid. And also the default value
  for device_name in Horizon dashboard should be None. If there is one
  supplied, it should be processed and probably validated by libvirt.

  Steps to reproduce from Horizon:
  Use Xen as hypervisor

  1. Go to the Horizon dashboard and launch an instance
  2. Select "Boot from image (creates a new volume)" as Instance Boot Source

  Expected result:
  Instance starts with /dev/xvda as root device.

  Actual result:
  Build of instance fails, nova-compute.log reports 
  "BuildAbortException: Build of instance c15f3344-f9e3-4853-9c18-ea8741563205 
aborted: Block Device Mapping is Invalid"

  Steps to reproduce from nova cli:

  1. Launch an instance from command line via
  nova boot --flavor 1 --block-device 
source=image,id=IMAGE_ID,dest=volume,size=1,shutdown=remove,bootindex=0,device=vda
  --block-device source=blank,dest=volume,size=1,shutdown=remove,device=vdb VM

  Expected result:
  Instance starts with /dev/xvda as root device.

  Actual result:
  Build of instance fails, device name for vdb is ignored and replaced 
correctly, but the root device is not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1560965/+subscriptions

-- 
Mailing 

[Yahoo-eng-team] [Bug 1560963] [NEW] [RFE] Minimum bandwidth support

2016-03-23 Thread Miguel Angel Ajo
Public bug reported:

Minimum bandwidth support (as opposed to bandwidth limiting) guarantees a
port a minimum bandwidth when its neighbours are consuming egress or
ingress traffic; those neighbours can be throttled in favor of the
guaranteed port.

Strict minimum bandwidth support requires cooperation from the scheduler
to avoid overcommitting physical interfaces. This RFE could probably be
split into two phases: strict and non-strict.

Use cases


NFV/telcos are interested in this type of rule (especially strict), to
make sure functions don't overcommit compute nodes and that any spawn of
the same architecture will perform exactly as expected.

CSP could make use of it to provide guaranteed bandwidth for streaming,
etc...


Notes
=

Technologies like SR-IOV support this, and OVS & Linux bridge can be
configured to support this type of service. In OVS, however, it requires
using veth ports between bridges instead of patch ports, which introduces
a performance overhead of roughly 20%. Supporting this kind of rule for
OVS agents must therefore be made optional, so administrators can choose
it only when they really need it.

SR-IOV seems not to incur any performance penalty.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560963

Title:
  [RFE] Minimum bandwidth support

Status in neutron:
  New

Bug description:
  Minimum bandwidth support (as opposed to bandwidth limiting) guarantees
  a port a minimum bandwidth when its neighbours are consuming egress or
  ingress traffic; those neighbours can be throttled in favor of the
  guaranteed port.

  Strict minimum bandwidth support requires cooperation from the
  scheduler to avoid overcommitting physical interfaces. This RFE could
  probably be split into two phases: strict and non-strict.

  Use cases
  

  NFV/telcos are interested in this type of rule (especially strict), to
  make sure functions don't overcommit compute nodes and that any spawn
  of the same architecture will perform exactly as expected.

  CSP could make use of it to provide guaranteed bandwidth for
  streaming, etc...

  
  Notes
  =

  Technologies like SR-IOV support this, and OVS & Linux bridge can be
  configured to support this type of service. In OVS, however, it
  requires using veth ports between bridges instead of patch ports,
  which introduces a performance overhead of roughly 20%. Supporting
  this kind of rule for OVS agents must therefore be made optional, so
  administrators can choose it only when they really need it.

  SR-IOV seems not to incur any performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560963/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560961] [NEW] [RFE] Allow instance-ingress bandwidth limiting

2016-03-23 Thread Miguel Angel Ajo
Public bug reported:

The current implementation of bandwidth limiting rules only supports egress 
bandwidth
limiting.

Use cases
=
There are cases where ingress bandwidth limiting is more important than
egress limiting, for example when the workload of the cloud is mostly a 
consumer of data (crawlers, datamining, etc), and administrators need to ensure 
other workloads won't be affected.

Another example is CSPs, which need to plan and allocate the bandwidth
provided to customers, or to provide different levels of network service.

API/Model impact
===
A direction field (egress/ingress) will be added to the BandwidthLimiting
rules; it will default to egress to match the current behaviour and
therefore be backward compatible.

Combining egress/ingress would be achieved by including an egress
bandwidth limit and an ingress bandwidth limit.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos rfe

** Description changed:

- 
  The current implementation of bandwidth limiting rules only supports egress 
bandwidth
  limiting.
  
  Use cases
  
  
- There are cases where ingress bandwidth limiting is more important than 
egress limiting,
- for example when the workload of the cloud is mostly a consumer of data 
(crawlers,
- datamining, etc), and administrators need to ensure other workloads won't be 
affected.
+ There are cases where ingress bandwidth limiting is more important than
+ egress limiting, for example when the workload of the cloud is mostly a 
consumer of data (crawlers, datamining, etc), and administrators need to ensure 
other workloads won't be affected.
  
- Other example are CSPs which need to plan & allocate the bandwidth provided 
to 
- customers.
+ Other example are CSPs which need to plan & allocate the bandwidth
+ provided to customers.
  
  API/Model impact
  ===
- The BandwidthLimiting rules will be added a direction field (egress/ingress),
- which by default will be egress to match the current behaviour and, therefore 
+ The BandwidthLimiting rules will be added a direction field (egress/ingress), 
which by default will be egress to match the current behaviour and, therefore
  be backward compatible.
  
- Combining egress/ingress would be achieved by including an egress bandwidth 
limit
- and an ingress bandwidth limit.
+ Combining egress/ingress would be achieved by including an egress
+ bandwidth limit and an ingress bandwidth limit.

** Description changed:

  The current implementation of bandwidth limiting rules only supports egress 
bandwidth
  limiting.
  
  Use cases
- 
- 
+ =
  There are cases where ingress bandwidth limiting is more important than
  egress limiting, for example when the workload of the cloud is mostly a 
consumer of data (crawlers, datamining, etc), and administrators need to ensure 
other workloads won't be affected.
  
  Other example are CSPs which need to plan & allocate the bandwidth
  provided to customers.
  
  API/Model impact
  ===
  The BandwidthLimiting rules will be added a direction field (egress/ingress), 
which by default will be egress to match the current behaviour and, therefore
  be backward compatible.
  
  Combining egress/ingress would be achieved by including an egress
  bandwidth limit and an ingress bandwidth limit.

** Description changed:

  The current implementation of bandwidth limiting rules only supports egress 
bandwidth
  limiting.
  
  Use cases
  =
  There are cases where ingress bandwidth limiting is more important than
  egress limiting, for example when the workload of the cloud is mostly a 
consumer of data (crawlers, datamining, etc), and administrators need to ensure 
other workloads won't be affected.
  
  Other example are CSPs which need to plan & allocate the bandwidth
- provided to customers.
+ provided to customers, or provide different levels of network service.
  
  API/Model impact
  ===
  The BandwidthLimiting rules will be added a direction field (egress/ingress), 
which by default will be egress to match the current behaviour and, therefore
  be backward compatible.
  
  Combining egress/ingress would be achieved by including an egress
  bandwidth limit and an ingress bandwidth limit.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560961

Title:
  [RFE] Allow instance-ingress bandwidth limiting

Status in neutron:
  New

Bug description:
  The current implementation of bandwidth limiting rules only supports egress 
bandwidth
  limiting.

  Use cases
  =
  There are cases where ingress bandwidth limiting is more important than
  egress limiting, for example when the workload of the cloud is mostly a 
consumer of data (crawlers, datamining, etc), and administrators need to ensure 
other workloads won't be affected.

  Other example are CSPs which need to plan & allocate the 

[Yahoo-eng-team] [Bug 1560957] [NEW] ovs mech_driver depends on neutron server firewall_driver option instead of the agent firewall_driver option to determine if hybrid plug can be used

2016-03-23 Thread Andreas Scheuring
Public bug reported:

The ovs mechanism driver determines whether hybrid plug should be used
based on the firewall_driver [1] setting that is made on the neutron
server [2].

IPTABLES_FW_DRIVER_FULL = 
("neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver")
hybrid_plug_required = (cfg.CONF.SECURITYGROUP.firewall_driver in 
(IPTABLES_FW_DRIVER_FULL, 'iptables_hybrid'))

--> Hybrid plug is enabled only if the
cfg.CONF.SECURITYGROUP.firewall_driver option is configured to a hybrid
driver.


Let's assume you have a cloud, with a few nodes running lb and some other 
running ovs l2 agent. 
- neutron server: firewall_driver = 
neutron.agent.linux.iptables_firewall.IptablesFirewallDriver  (for lb)
- cpu node1: neutron-lb-agt: firewall_driver = 
neutron.agent.linux.iptables_firewall.IptablesFirewallDriver  (for lb)
- cpu node 2: neutron -ovs-agt: firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver (for ovs)


Expected behavior
==
ovs agent uses hybrid plug, as it is configured in its configuration

Actual result
==

You'll never get hybrid plug, as the neutron server only considers its own
fw_driver option instead of the agent's option
--> No Security Groups

I see two approaches that can be discussed
=


#1 Allow listing multiple fw drivers in the neutron server configuration file

#2 Determine the hybrid_plug_required variable from the fw_driver
configured in the l2 agent (the agent can report this to the server as
part of its regular state report, and the mech_driver can use this
information to set the hybrid plug option correctly when port_binding is
requested)
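Approach #2 can be sketched briefly. The function and dict layout below are hypothetical, not the actual mech driver code; only the two hybrid driver strings come from the snippet quoted above:

```python
# Decide hybrid plug per agent, from the firewall_driver the agent
# includes in its state report, instead of from the server's own config.

HYBRID_DRIVERS = (
    'neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver',
    'iptables_hybrid',
)

def hybrid_plug_required(agent_state):
    # The l2 agent would add its firewall_driver to the 'configurations'
    # dict it already sends in its periodic state report.
    driver = agent_state.get('configurations', {}).get('firewall_driver')
    return driver in HYBRID_DRIVERS

ovs_agent = {'configurations': {'firewall_driver': 'iptables_hybrid'}}
lb_agent = {'configurations': {'firewall_driver':
            'neutron.agent.linux.iptables_firewall.IptablesFirewallDriver'}}
print(hybrid_plug_required(ovs_agent))  # True
print(hybrid_plug_required(lb_agent))   # False
```

This keeps the decision per node, so mixed lb/ovs clouds like the example above get the right vif details on each host.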


[1] 
http://docs.openstack.org/liberty/config-reference/content/networking-options-securitygroups.html
[2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/mech_driver/mech_openvswitch.py#L49

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovs sg-fw

** Summary changed:

- ovs mech driver depends on neutron server firewall_driver option instead of 
the agent firewall driver to determine if hybrid plug can be used
+ ovs mech_driver depends on neutron server firewall_driver option instead of 
the agent firewall_driver option to determine if hybrid plug can be used

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560957

Title:
  ovs mech_driver depends on neutron server firewall_driver option
  instead of the agent firewall_driver option to determine if hybrid
  plug can be used

Status in neutron:
  New

Bug description:
  The ovs mechanism driver determines whether hybrid plug should be used
  based on the firewall_driver [1] setting that is made on the neutron
  server [2].

  IPTABLES_FW_DRIVER_FULL = 
("neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver")
  hybrid_plug_required = (cfg.CONF.SECURITYGROUP.firewall_driver in 
(IPTABLES_FW_DRIVER_FULL, 'iptables_hybrid'))

  --> Hybrid plug is enabled only if the
  cfg.CONF.SECURITYGROUP.firewall_driver option is configured to a
  hybrid driver.

  
  Let's assume you have a cloud, with a few nodes running lb and some other 
running ovs l2 agent. 
  - neutron server: firewall_driver = 
neutron.agent.linux.iptables_firewall.IptablesFirewallDriver  (for lb)
  - cpu node1: neutron-lb-agt: firewall_driver = 
neutron.agent.linux.iptables_firewall.IptablesFirewallDriver  (for lb)
  - cpu node 2: neutron -ovs-agt: firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver (for ovs)

  
  Expected behavior
  ==
  ovs agent uses hybrid plug, as it is configured in its configuration

  Actual result
  ==

  You'll never get hybrid plug, as the neutron server only considers its
own fw_driver option instead of the agent's option
  --> No Security Groups

  I see two approaches that can be discussed
  =

  
  #1 Allow listing multiple fw drivers in the neutron server
configuration file

  #2 Determine the hybrid_plug_required variable from the fw_driver
  configured in the l2 agent (the agent can report this to the server as
  part of its regular state report, and the mech_driver can use this
  information to set the hybrid plug option correctly when port_binding
  is requested)


  
  [1] 
http://docs.openstack.org/liberty/config-reference/content/networking-options-securitygroups.html
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/mech_driver/mech_openvswitch.py#L49

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : 

[Yahoo-eng-team] [Bug 1560945] [NEW] Unable to create DVR+HA routers

2016-03-23 Thread John Schwarz
Public bug reported:

When creating a new DVR+HA router, the router is created (the API returns
successfully) but the l3 agent enters an endless loop:

2016-03-23 13:57:37.340 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router 'a04b3fd7-d46c-4520-82af-18d16835469d'
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 497, in 
_process_router_update
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 436, in 
_process_router_if_compatible
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
self._process_updated_router(router)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 450, in 
_process_updated_router
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent ri.process(self)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/dvr_edge_ha_router.py", line 92, in 
process
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent super(DvrEdgeHaRouter, 
self).process(agent)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/dvr_local_router.py", line 486, in 
process
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent super(DvrLocalRouter, 
self).process(agent)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/dvr_router_base.py", line 30, in 
process
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent super(DvrRouterBase, 
self).process(agent)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/ha_router.py", line 386, in process
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent super(HaRouter, 
self).process(agent)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/common/utils.py", line 377, in call
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent self.logger(e)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent self.force_reraise()
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/common/utils.py", line 374, in call
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent return func(*args, 
**kwargs)
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/router_info.py", line 963, in process
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
self.process_address_scope()
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/dvr_edge_router.py", line 235, in 
process_address_scope
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent with 
snat_iptables_manager.defer_apply():
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent AttributeError: 'NoneType' 
object has no attribute 'defer_apply'
2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 

This happens in upstream master.
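The traceback bottoms out in `with snat_iptables_manager.defer_apply()` on a None manager: in DVR+HA the SNAT namespace (and its iptables manager) may simply not be initialized on this node. A minimal, illustrative guard, not the actual neutron fix:

```python
import contextlib

class EdgeRouterSketch(object):
    def __init__(self, snat_iptables_manager=None):
        self.snat_iptables_manager = snat_iptables_manager

    def process_address_scope(self):
        if self.snat_iptables_manager is None:
            # No SNAT namespace hosted here (e.g. HA standby): skip.
            return 'skipped'
        with self.snat_iptables_manager.defer_apply():
            return 'applied'

class FakeIptablesManager(object):
    @contextlib.contextmanager
    def defer_apply(self):
        yield  # the real manager batches rule changes and applies on exit

print(EdgeRouterSketch().process_address_scope())                       # skipped
print(EdgeRouterSketch(FakeIptablesManager()).process_address_scope())  # applied
```

Guarding stops the retry loop; the real fix also needs to decide when the SNAT manager should have been created for a DVR+HA router.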

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-bgp l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560945

Title:
  Unable to create DVR+HA routers

Status in neutron:
  New

Bug description:
  When creating a new DVR+HA router, the router is created (the API
  returns successfully) but the l3 agent enters an endless loop:

  2016-03-23 13:57:37.340 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router 'a04b3fd7-d46c-4520-82af-18d16835469d'
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 497, in 
_process_router_update
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 436, in 
_process_router_if_compatible
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent 
self._process_updated_router(router)
  2016-03-23 13:57:37.340 TRACE neutron.agent.l3.agent   File 
"/opt/openstack/neutron/neutron/agent/l3/agent.py", line 450, in 

[Yahoo-eng-team] [Bug 1538932] Re: source.scss file does not take effect since it does not match source.html

2016-03-23 Thread Rob Cresswell
File in question was removed here:
https://review.openstack.org/#/c/285084

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Wang Bo (chestack) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1538932

Title:
  source.scss file does not take effect since it  does not match
  source.html

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  
openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/source/source.html:
  
...
  

  But currently the source.scss is:
  [ng-controller="LaunchInstanceSourceController"] {
td.hi-light {
  color: #0084d1;
}
...
  }

  Any change of scss file(such as:  color: red) does not take effect
  since the two files do not match.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1538932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539691] Re: Horizon should support a 'neutron.floatingip' custom choice when launching a Heat stack

2016-03-23 Thread Rob Cresswell
This seems to be a blueprint feature. Could you please register it here?
https://blueprints.launchpad.net/horizon

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
 Assignee: Paul Breaux (p-breaux) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1539691

Title:
  Horizon should support a 'neutron.floatingip' custom choice when
  launching a Heat stack

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When launching a Heat stack through Horizon, a user should be able to
  choose an existing floating IP address from a drop-down if the user
  has specified the custom constraint 'neutron.floating_ip'. If
  'neutron.floating_ip' is specified, the appropriate field data is
  pulled from neutron regarding the floating IP addresses that exist and
  are accessible by this tenant.

  * The proposed change would add a custom choice in
  dashboards/project/stacks/forms.py

  * A corresponding change would be made to retrieve/assemble the field
  data in dashboards/project/instances/utils.py

  * No changes would be necessary to openstack_dashboard/api/neutron.py
  since it already has methods to retrieve this data from the neutron
  client.

  As I understand it, this would mean that a custom constraint
  validation would need to be added to the Heat project, to ensure the
  floating IP address is validated as such.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1539691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461406] Re: libvirt: missing iotune parse for LibvirtConfigGuestDisk

2016-03-23 Thread OpenStack Infra
** Changed in: nova
   Status: Opinion => In Progress

** Changed in: nova
 Assignee: (unassigned) => ChangBo Guo(gcb) (glongwave)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461406

Title:
  libvirt: missing iotune parse for LibvirtConfigGuestDisk

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  We support instance disk IO control with iotune, like:

    <iotune>
      <total_bytes_sec>102400</total_bytes_sec>
    </iotune>

  We set iotune in the LibvirtConfigGuestDisk class in libvirt/config.py. The
  parse_dom method doesn't parse the iotune options yet. That needs to be
  fixed.
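The missing piece can be sketched with the standard library's XML parser. This is an illustrative example of parsing the children of a disk's iotune element, not the actual nova patch; the helper name and the sample values are assumptions:

```python
# Illustrative sketch, not the actual nova patch: parse the <iotune>
# children of a <disk> XML element, as parse_dom would need to do.
import xml.etree.ElementTree as ET

# Throttling elements libvirt supports under <iotune>.
IOTUNE_FIELDS = (
    'total_bytes_sec', 'read_bytes_sec', 'write_bytes_sec',
    'total_iops_sec', 'read_iops_sec', 'write_iops_sec',
)

def parse_disk_iotune(disk_xml):
    """Return a dict of the iotune values found on a <disk> element."""
    tune = {}
    iotune = ET.fromstring(disk_xml).find('iotune')
    if iotune is not None:
        for child in iotune:
            if child.tag in IOTUNE_FIELDS:
                tune[child.tag] = int(child.text)
    return tune

disk_xml = """<disk type='file' device='disk'>
  <iotune><total_bytes_sec>102400</total_bytes_sec></iotune>
</disk>"""
print(parse_disk_iotune(disk_xml))  # {'total_bytes_sec': 102400}
```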

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461406/+subscriptions



[Yahoo-eng-team] [Bug 1560756] Re: glance image-show can show deleted images. That is not reasonable

2016-03-23 Thread Stuart McLaren
As a regular user I don't see this behaviour:

 $ glance --os-image-api-version 1 image-show 5389c76a-2970-4666-9a86-1e9b24a854da
 No image with a name or ID of '5389c76a-2970-4666-9a86-1e9b24a854da' exists.

 $ glance --os-image-api-version 2 image-show 5389c76a-2970-4666-9a86-1e9b24a854da
 404 Not Found: No image found with ID 5389c76a-2970-4666-9a86-1e9b24a854da (HTTP 404)

As an admin, I can show the image in v1 only:

 $ glance --os-image-api-version 2 image-show 5389c76a-2970-4666-9a86-1e9b24a854da
 404 Not Found: No image found with ID 5389c76a-2970-4666-9a86-1e9b24a854da (HTTP 404)

 $ glance --os-image-api-version 1 image-show 5389c76a-2970-4666-9a86-1e9b24a854da
 +------------------+--------------------------------------+
 | Property         | Value                                |
 +------------------+--------------------------------------+
 | checksum         | 4215f4b77571603bee82ef427ea0ef84     |
 | container_format | bare                                 |
 | created_at       | 2016-03-23T10:35:21.00               |
 | deleted          | True                                 |
 | deleted_at       | 2016-03-23T10:35:39.00               |
 | disk_format      | raw                                  |
 | id               | 5389c76a-2970-4666-9a86-1e9b24a854da |
 | is_public        | False                                |
 | min_disk         | 0                                    |
 | min_ram          | 0                                    |
 | name             | deleted-image                        |
 | owner            | 5b0dd156a4b042d3afd720719b97b669     |
 | protected        | False                                |
 | size             | 37                                   |
 | status           | deleted                              |
 | updated_at       | 2016-03-23T10:35:39.00               |
 +------------------+--------------------------------------+

I'd forgotten about this v1 behaviour to be honest, but it seems to be
by design rather than a bug.


http://webcache.googleusercontent.com/search?q=cache:_H-rS1q8c0UJ:openstack.10931.n7.nabble.com/Should-deleted-glance-images-be-visible-still-td10958.html=1=en=us=1=0



** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1560756

Title:
  glance image-show can show deleted images. That is not reasonable

Status in Glance:
  Invalid

Bug description:
  version: 2015.1
  description:

  I can't see the image:

  [root@2C5_10_DELL05 ~(keystone_admin)]# glance image-list | grep 000be9f8-6463-484d-b136-7f8ea9c6785c
  [root@2C5_10_DELL05 ~(keystone_admin)]#

  but I can show the image:

  [root@2C5_10_DELL05 ~(keystone_admin)]# glance image-show 000be9f8-6463-484d-b136-7f8ea9c6785c
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | container_format | bare                                 |
  | created_at       | 2016-01-29T02:01:38.00               |
  | deleted          | True                                 |
  | deleted_at       | 2016-01-29T02:01:39.00               |
  | disk_format      | raw                                  |
  | id               | 000be9f8-6463-484d-b136-7f8ea9c6785c |
  | is_public        | False                                |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | image-603465735                      |
  | owner            | 3280818cc4ac47ed9505bcdadf8f8a0a     |
  | protected        | False                                |
  | status           | deleted                              |
  | updated_at       | 2016-01-29T02:01:39.00               |
  +------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1560756/+subscriptions



[Yahoo-eng-team] [Bug 1560892] [NEW] neutron-fwaas does not provide a config file to be loaded by neutron-server

2016-03-23 Thread Ihar Hrachyshka
Public bug reported:

Some options [like quotas] should be loaded by neutron-server, but there
is no file that could be loaded by the service to get access to those
options.

fwaas should have a file similar to the other *aas repos, like
neutron-lbaas.conf or neutron-vpnaas.conf. It should be loaded by calling
add_provider_configuration.
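A minimal sketch of what such a file could contain, mirroring the neutron-lbaas/vpnaas layout; the driver path shown is an assumption for illustration, not the actual neutron-fwaas content:

```ini
# Hypothetical neutron_fwaas.conf (the driver path is illustrative).
# neutron-server would load it via an extra --config-file argument, and
# add_provider_configuration would pick up the [service_providers]
# section, as it does for neutron_lbaas.conf / neutron_vpnaas.conf.
[service_providers]
service_provider = FIREWALL:Iptables:neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver:default
```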

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560892

Title:
  neutron-fwaas does not provide a config file to be loaded by neutron-
  server

Status in neutron:
  New

Bug description:
  Some options [like quotas] should be loaded by neutron-server, but
  there is no file that could be loaded by the service to get access to
  those options.

  fwaas should have a file similar to the other *aas repos, like
  neutron-lbaas.conf or neutron-vpnaas.conf. It should be loaded by calling
  add_provider_configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560892/+subscriptions



[Yahoo-eng-team] [Bug 1441903] Re: rootwrap.d ln doesn't work for non iSCSI volumes

2016-03-23 Thread Eli Qiao
fixed by I181b594a3119f7ad74c595fc7059d521079b1d74

** Changed in: nova
 Assignee: lvmxh (shaohef) => (unassigned)

** Changed in: nova
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441903

Title:
  rootwrap.d ln doesn't work for non iSCSI volumes

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The compute.filters line for ln doesn't allow for anything other than
  iSCSI volumes.

  It should allow for FC-based volumes as well.

  # nova/virt/libvirt/volume.py:
  sginfo: CommandFilter, sginfo, root
  sg_scan: CommandFilter, sg_scan, root
  ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip-.*-iscsi-iqn.*, /dev/disk/by-path/ip-.*-iscsi-iqn.*
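The restriction is visible by testing the filter's path pattern directly; a small sketch (both concrete device paths below are made-up examples):

```python
# Sketch: the RegExpFilter pattern above only matches iSCSI device
# paths, so an "ln" for a Fibre Channel device is rejected by rootwrap.
# The concrete paths here are illustrative examples.
import re

ISCSI_ONLY = r'/dev/disk/by-path/ip-.*-iscsi-iqn.*'

iscsi_path = ('/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-'
              'iqn.2010-10.org.openstack:volume-1-lun-1')
fc_path = '/dev/disk/by-path/pci-0000:05:00.1-fc-0x5006016090203181-lun-1'

print(bool(re.match(ISCSI_ONLY, iscsi_path)))  # True
print(bool(re.match(ISCSI_ONLY, fc_path)))     # False: FC paths never match
```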

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441903/+subscriptions



[Yahoo-eng-team] [Bug 1560860] [NEW] mellanox infiniband SR-IOV (ib_hostdev vif) detach port fails

2016-03-23 Thread Lenny
Public bug reported:

Detaching a direct SR-IOV port causes an exception.

# neutron port-create --binding:vnic_type=direct private
# nova boot --flavor m1.small --image cirros-mellanox-x86_64-disk-ib --nic port-id=a247d89e-dae5-4d65-b414-e7bf3a26bfd1 vm1
# nova suspend vm1

logs:
https://review.openstack.org/#/c/286668
http://144.76.193.39/ci-artifacts/286668/3/Neutron-Networking-MLNX-ML2/

Traceback message

2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] Traceback (most recent call last):
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]   File 
"/opt/stack/nova/nova/compute/manager.py", line 6515, in 
_error_out_instance_on_exception
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] yield
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4172, in suspend_instance
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] self.driver.suspend(context, instance)
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2638, in suspend
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] self._detach_sriov_ports(context, 
instance, guest)
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3425, in _detach_sriov_ports
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] if vif['vnic_type'] in 
network_model.VNIC_TYPES_SRIOV
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] AttributeError: 
'LibvirtConfigGuestHostdevPCI' object has no attribute 'source_dev'
2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]
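The traceback boils down to an unguarded attribute access: for an ib_hostdev VIF the guest device config is a plain PCI hostdev, which has no source_dev. A toy reproduction with a defensive getattr (class and attribute names mirror the traceback; the guard is illustrative, not the actual nova fix):

```python
# Toy reproduction of the failure mode. Class names mirror the
# traceback; the getattr guard is illustrative, not the nova fix.
class LibvirtConfigGuestInterface(object):
    def __init__(self, source_dev):
        self.source_dev = source_dev

class LibvirtConfigGuestHostdevPCI(object):
    # ib_hostdev VIFs map to a plain PCI <hostdev>: no source_dev here,
    # so a bare cfg.source_dev access raises AttributeError.
    pass

def matches_vif(cfg, dev):
    # getattr with a default makes the comparison safe for both kinds
    # of device config.
    return getattr(cfg, 'source_dev', None) == dev

print(matches_vif(LibvirtConfigGuestInterface('eth4'), 'eth4'))  # True
print(matches_vif(LibvirtConfigGuestHostdevPCI(), 'eth4'))       # False
```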

** Affects: nova
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New


** Tags: pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560860

Title:
  mellanox infiniband SR-IOV (ib_hostdev vif) detach port fails

Status in OpenStack Compute (nova):
  New

Bug description:
  Detaching a direct SR-IOV port causes an exception.

  # neutron port-create --binding:vnic_type=direct private
  # nova boot --flavor m1.small --image cirros-mellanox-x86_64-disk-ib --nic port-id=a247d89e-dae5-4d65-b414-e7bf3a26bfd1 vm1
  # nova suspend vm1

  logs:
  https://review.openstack.org/#/c/286668
  http://144.76.193.39/ci-artifacts/286668/3/Neutron-Networking-MLNX-ML2/

  Traceback message

  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] Traceback (most recent call last):
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]   File 
"/opt/stack/nova/nova/compute/manager.py", line 6515, in 
_error_out_instance_on_exception
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] yield
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4172, in suspend_instance
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] self.driver.suspend(context, instance)
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2638, in suspend
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] self._detach_sriov_ports(context, 
instance, guest)
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3425, in _detach_sriov_ports
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] if vif['vnic_type'] in 
network_model.VNIC_TYPES_SRIOV
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730] AttributeError: 
'LibvirtConfigGuestHostdevPCI' object has no attribute 'source_dev'
  2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: 
cdf2e34d-bc2e-4edb-aff7-516b97487730]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1560860/+subscriptions
