[Yahoo-eng-team] [Bug 1213877] Re: something bad left by Remove DHCP lease logic

2015-01-23 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1213877

Title:
  something bad left by Remove DHCP lease logic

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  1. The DHCP dir is still created here, even though it is configurable and
  is created in the driver later:
  dhcp_dir = os.path.dirname("/%s/dhcp/" % self.conf.state_path)
  if not os.path.isdir(dhcp_dir):
      os.makedirs(dhcp_dir, 0o755)
  https://review.openstack.org/#/c/37580/13/neutron/agent/dhcp_agent.py

  2. A leftover option remains in dhcp_agent.ini:

  # Location to DHCP lease relay UNIX domain socket
  # dhcp_lease_relay_socket = $state_path/dhcp/lease_relay

  3. https://review.openstack.org/#/c/37580/13/neutron/db/db_base_plugin_v2.py
  def update_fixed_ip_lease_expiration(self, context, network_id,
                                       ip_address, lease_remaining):
  which is no longer used, because update_lease_expiration was removed in
https://review.openstack.org/#/c/37580/13/neutron/db/dhcp_rpc_base.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1213877/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414253] [NEW] separate openswan special staff from general vpnaas framework

2015-01-23 Thread Hua Zhang
Public bug reported:

The initial VPNaaS effort put the generic VPN framework and the
OpenSwan-specific code into one file (device_drivers.ipsec.py), which
forces other VPN driver implementations to import this file and thus
pull in a bunch of OpenSwan code. We had better refactor the OpenSwan
parts out into their own files and give some symmetry to these modules.
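
A rough sketch of the kind of split being proposed (module and class
names below are illustrative only, not the actual neutron-vpnaas
layout): the generic framework stays vendor-neutral, and everything
OpenSwan-specific moves into its own module that other drivers never
import.

    # device_drivers/ipsec.py -- vendor-neutral framework only
    class BaseSwanProcess(object):
        """Generic lifecycle handling shared by *swan-style drivers."""

        def __init__(self, process_id):
            self.process_id = process_id

        def enable(self):
            raise NotImplementedError

        def disable(self):
            raise NotImplementedError


    # device_drivers/openswan_ipsec.py -- OpenSwan specifics live here
    class OpenSwanProcess(BaseSwanProcess):
        binary = 'ipsec'  # OpenSwan command-line entry point (illustrative)

        def enable(self):
            print('would run: %s pluto ...' % self.binary)

        def disable(self):
            print('would run: %s whack --shutdown' % self.binary)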

** Affects: neutron
 Importance: Undecided
 Assignee: Hua Zhang (zhhuabj)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414253

Title:
  separate openswan special staff from general vpnaas framework

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The initial VPNaaS effort put the generic VPN framework and the
  OpenSwan-specific code into one file (device_drivers.ipsec.py), which
  forces other VPN driver implementations to import this file and thus
  pull in a bunch of OpenSwan code. We had better refactor the OpenSwan
  parts out into their own files and give some symmetry to these modules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414252] [NEW] Horizon throws unauthorized 403 error for cloud admin in domain setup

2015-01-23 Thread Sumanth
Public bug reported:

I have a devstack running the following components:
1. keystone
2. heat
3. nova
4. horizon
5. cinder

For this OpenStack setup I wanted to enable the domain feature and define
admin boundaries. To enable domains, these changes were made:
1. Changed the token format from PKI to UUID
2. Added auth_version = v3.0 under the [auth_token:filter] section of the
api-paste.ini file of all the services
3. Updated the endpoints to point to v3
4. Restarted all the services
5. Replaced the default keystone policy.json with policy.v3sample.json and
set admin_domain_id to default

In Horizon's local_settings.py file:
1. Set OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT to True
2. Updated the endpoint to point to localhost:5000/v3

After all these changes, when I try to log in to the default domain with
admin credentials I get "Unable to retrieve domain list" and "Unable to
retrieve project list" errors in the Horizon dashboard.
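
For reference, a minimal Horizon local_settings.py fragment matching the
steps above might look like the following (the values are illustrative,
not the reporter's exact configuration):

    OPENSTACK_API_VERSIONS = {"identity": 3}             # talk to the Keystone v3 API
    OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True        # show the domain field at login
    OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"        # domain assumed when none is given
    OPENSTACK_KEYSTONE_URL = "http://localhost:5000/v3"  # v3 endpoint from step 2 above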

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1414252

Title:
  Horizon throws unauthorized 403 error for cloud admin in domain setup

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I have a devstack running the following components:
  1. keystone
  2. heat
  3. nova
  4. horizon
  5. cinder

  For this OpenStack setup I wanted to enable the domain feature and define
  admin boundaries. To enable domains, these changes were made:
  1. Changed the token format from PKI to UUID
  2. Added auth_version = v3.0 under the [auth_token:filter] section of the
  api-paste.ini file of all the services
  3. Updated the endpoints to point to v3
  4. Restarted all the services
  5. Replaced the default keystone policy.json with policy.v3sample.json and
  set admin_domain_id to default

  In Horizon's local_settings.py file:
  1. Set OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT to True
  2. Updated the endpoint to point to localhost:5000/v3

  After all these changes, when I try to log in to the default domain with
  admin credentials I get "Unable to retrieve domain list" and "Unable to
  retrieve project list" errors in the Horizon dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1414252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413876] Re: Horizon tests fail with latest django_openstack_auth 1.1.8

2015-01-23 Thread Lin Hua Cheng
** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: django-openstack-auth
 Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1413876

Title:
  Horizon tests fail with latest django_openstack_auth 1.1.8

Status in Django OpenStack Auth:
  In Progress
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. checkout horizon master
  2. ./run_tests.sh
  3. checkout django_openstack_auth 1.1.8 tag
  4. create a soft link from horizon .venv to django_openstack_auth

  result: the horizon tests fail

  There has been an issue with the release of django_openstack_auth 1.1.8
  on PyPI - that is why we haven't seen this failure yet.

  If I revert the last commit:
  
https://github.com/openstack/django_openstack_auth/commit/4ceb57d02b8bbed30578a8052a31b982a1339f41

  The Horizon tests work fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1413876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414232] [NEW] l3-agent restart fails to remove qrouter namespace

2015-01-23 Thread Mike Smith
Public bug reported:

When a router is removed while the l3-agent is stopped, and the agent is
then started again, the qrouter namespace fails to be destroyed because
the driver returns a 'Device or resource busy' error.  The reason for
the error is that the metadata proxy is still running in the namespace.

The metadata proxy code has recently been refactored and is no longer
called from the _destroy_router_namespace() method.  In the use case of
this bug there is no ri/router object, since the router has already been
removed; only the namespace remains.  The new before_router_removed()
method requires a router object.

Changes will be required in both the l3-agent code and metadata proxy
service code to resolve this bug.
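
A minimal sketch of the direction described above (the helper and the
pid-file layout are hypothetical, not the actual neutron code): stop the
metadata proxy keyed only by the namespace/router ID before tearing the
namespace down, so that no router object is needed.

    import os
    import signal


    def _kill_metadata_proxy(state_path, router_id):
        """Best-effort kill of the metadata proxy serving one router.

        The pid-file location is a hypothetical example; the point is
        that the proxy can be found from the router/namespace ID alone.
        """
        pid_file = os.path.join(state_path, 'external/pids',
                                '%s.pid' % router_id)
        try:
            with open(pid_file) as f:
                os.kill(int(f.read().strip()), signal.SIGKILL)
        except (IOError, OSError, ValueError):
            pass  # proxy already gone or pid file missing


    def destroy_router_namespace(state_path, ns_name, delete_namespace):
        """delete_namespace is whatever callable removes the namespace."""
        router_id = ns_name.replace('qrouter-', '')
        _kill_metadata_proxy(state_path, router_id)  # avoid 'resource busy'
        delete_namespace(ns_name)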

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Smith (michael-smith6)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Mike Smith (michael-smith6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414232

Title:
  l3-agent restart fails to remove qrouter namespace

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a router is removed while the l3-agent is stopped, and the agent
  is then started again, the qrouter namespace fails to be destroyed
  because the driver returns a 'Device or resource busy' error.  The
  reason for the error is that the metadata proxy is still running in
  the namespace.

  The metadata proxy code has recently been refactored and is no longer
  called from the _destroy_router_namespace() method.  In the use case
  of this bug there is no ri/router object, since the router has already
  been removed; only the namespace remains.  The new
  before_router_removed() method requires a router object.

  Changes will be required in both the l3-agent code and metadata proxy
  service code to resolve this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414218] [NEW] Remove extraneous trace in linux/dhcp.py

2015-01-23 Thread Billy Olsen
Public bug reported:

The debug tracepoint in Dnsmasq._output_hosts_file is extraneous and
causes unnecessary performance overhead due to string formatting when
creating a large number (> 1000) of ports at one time.

The trace point is unnecessary since the data is written to disk anyway
and the file can be examined in a worst-case scenario.  The added
overhead is an order of magnitude (~0.5 seconds versus ~0.05 seconds at
1500 ports).
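
A small, self-contained illustration of the overhead being described
(not the neutron code itself): with eager %-formatting the host-file
contents are rendered even when DEBUG logging is disabled, whereas
deferred formatting - or simply dropping the trace - avoids that cost.

    import logging
    import timeit

    LOG = logging.getLogger('demo')  # DEBUG not enabled, so debug() is a no-op

    # ~1500 fake dnsmasq host entries, roughly the scale in this report
    entries = ['fa:16:3e:%02x:%02x:%02x,host-%d'
               % (i >> 16, (i >> 8) & 0xff, i & 0xff, i)
               for i in range(1500)]

    eager = timeit.timeit(
        lambda: LOG.debug('hosts:\n%s' % '\n'.join(entries)), number=100)
    lazy = timeit.timeit(
        lambda: LOG.debug('hosts:\n%s', entries), number=100)

    print('eager: %.4fs, deferred: %.4fs per 100 calls' % (eager, lazy))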

** Affects: neutron
 Importance: Undecided
 Assignee: Billy Olsen (billy-olsen)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Billy Olsen (billy-olsen)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414218

Title:
  Remove extraneous trace in linux/dhcp.py

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The debug tracepoint in Dnsmasq._output_hosts_file is extraneous and
  causes unnecessary performance overhead due to string formatting when
  creating a large number (> 1000) of ports at one time.

  The trace point is unnecessary since the data is written to disk anyway
  and the file can be examined in a worst-case scenario.  The added
  overhead is an order of magnitude (~0.5 seconds versus ~0.05 seconds
  at 1500 ports).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413852] Re: VMware store: Attribute error while retrying image upload

2015-01-23 Thread nikhil komawar
** Changed in: glance
   Importance: Undecided => Medium

** Also affects: glance-store
   Importance: Undecided
   Status: New

** Changed in: glance-store
 Assignee: (unassigned) => Sabari Murugesan (smurugesan)

** Changed in: glance-store
   Status: New => In Progress

** Changed in: glance-store
   Importance: Undecided => High

** Changed in: glance
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1413852

Title:
  VMware store: Attribute error while retrying image upload

Status in OpenStack Image Registry and Delivery Service (Glance):
  Opinion
Status in OpenStack Glance backend store-drivers library (glance_store):
  In Progress

Bug description:
  When the session is unauthenticated, the VMware store retries the
  image data upload but fails when seeking to reposition the file.

  2015-01-22 19:12:31.883 20624 ERROR glance.api.v1.upload_utils 
[e5be8f01-4bbb-4a1c-9721-495615e5e5cb 65dc7965ee13438e9ae706d4d42d2f65 
087a197802df458dac65c1060a8e4cd4 - - -] Failed to upload image 
fa4a3308-917a-4899-8905-66333955cdfb
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils Traceback 
(most recent call last):
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils   File 
"/opt/stack/glance/glance/api/v1/upload_utils.py", line 107, in 
upload_data_to_store
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils 
context=req.context)
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils   File 
"/opt/stack/glance_store/glance_store/backend.py", line 318, in 
store_add_to_backend
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils 
context=context)
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils   File 
"/opt/stack/glance_store/glance_store/_drivers/vmware_datastore.py", line 359, 
in add
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils 
image_file.rewind()
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils   File 
"/opt/stack/glance_store/glance_store/_drivers/vmware_datastore.py", line 124, 
in rewind
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils 
self.data.seek(0)
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils 
AttributeError: 'CooperativeReader' object has no attribute 'seek'
  2015-01-22 19:12:31.883 20624 TRACE glance.api.v1.upload_utils

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1413852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414199] [NEW] Delete subnet can fail for SLAAC/DHCP_STATELESS with 409

2015-01-23 Thread Salvatore Orlando
Public bug reported:

The routine for deleting a subnet checks whether it's ok to remove a
subnet in the following way:

1) Query IP allocations on the subnet for ports that can be automatically 
deleted
2) Remove those allocations
3) Check the subnet allocations again. Any remaining allocation means that
there are some IPs that cannot be automatically deleted
4) Raise a conflict exception

In the case of SLAAC or DHCP_STATELESS IPv6 subnets, every IP address can be 
automatically deleted - and that's where the problem lies.
Indeed the query performed at step #3 ([1], and [2] for the ML2 plugin) might
cause a failure during subnet deletion if:
- The transaction isolation level is set to READ COMMITTED
- The subnet address mode is either SLAAC or DHCP STATELESS
- A port is created concurrently with the delete subnet procedure and an IP 
address is assigned to it.

These circumstances are quite unlikely to occur, but far from
impossible. They are indeed seen in gate tests [3].

It is advisable to provide a fix for this issue. To this aim it is
probably worth noting that check #3 is rather pointless for subnets
with an automatic address mode.


[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/db_base_plugin_v2.py#n1240
[2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/plugin.py#n810
[3] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5hYmxlIHRvIGNvbXBsZXRlIG9wZXJhdGlvbiBvbiBzdWJuZXRcIiBBTkQgbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIHRhZ3M6Y29uc29sZSBBTkQgYnVpbGRfc3RhdHVzOkZBSUxVUkUgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay1ncmVuYWRlLWRzdm0tbmV1dHJvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIyMDQ1MDA1OTE3LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9
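
A minimal sketch of that suggestion (the constants and function are
illustrative, not the actual neutron change): skip the re-check when the
subnet hands out addresses automatically.

    # Illustrative only -- mirrors the reasoning above, not neutron code.
    SLAAC = 'slaac'
    DHCPV6_STATELESS = 'dhcpv6-stateless'
    AUTO_ADDRESS_MODES = (SLAAC, DHCPV6_STATELESS)


    def remaining_allocations_block_delete(subnet, remaining_allocations):
        """Return True if leftover allocations should abort the delete."""
        if subnet.get('ipv6_address_mode') in AUTO_ADDRESS_MODES:
            # Every address on such subnets is assigned automatically, so
            # a concurrently created port must not turn the delete into a
            # 409 Conflict.
            return False
        return bool(remaining_allocations)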

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414199

Title:
  Delete subnet can fail for SLAAC/DHCP_STATELESS with 409

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The routine for deleting a subnet checks whether it's ok to remove a
  subnet in the following way:

  1) Query IP allocations on the subnet for ports that can be automatically 
deleted
  2) Remove those allocations
  3) Check the subnet allocations again. Any remaining allocation means that
there are some IPs that cannot be automatically deleted
  4) Raise a conflict exception

  In the case of SLAAC or DHCP_STATELESS IPv6 subnets, every IP address can be 
automatically deleted - and that's where the problem lies.
  Indeed the query performed at step #3 ([1], and [2] for the ML2 plugin) might
cause a failure during subnet deletion if:
  - The transaction isolation level is set to READ COMMITTED
  - The subnet address mode is either SLAAC or DHCP STATELESS
  - A port is created concurrently with the delete subnet procedure and an IP 
address is assigned to it.

  These circumstances are quite unlikely to occur, but far from
  impossible. They are indeed seen in gate tests [3].

  It is advisable to provide a fix for this issue. To this aim it is
  probably worth noting that check #3 is rather pointless for subnets
  with an automatic address mode.

  
  [1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/db_base_plugin_v2.py#n1240
  [2] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/plugin.py#n810
  [3] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5hYmxlIHRvIGNvbXBsZXRlIG9wZXJhdGlvbiBvbiBzdWJuZXRcIiBBTkQgbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIHRhZ3M6Y29uc29sZSBBTkQgYnVpbGRfc3RhdHVzOkZBSUxVUkUgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay1ncmVuYWRlLWRzdm0tbmV1dHJvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIyMDQ1MDA1OTE3LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414164] [NEW] Horizon: incorrect error message when VM deletion failed

2015-01-23 Thread Danny Choi
Public bug reported:

nova-manage version: 2014.2.2

Issue: incorrect error message when a VM deletion fails.  The message
should say "delete", not "launch".

e.g. ERROR: Failed to launch instance XYZ.  Please try again later.
[Error: delete_port_precommit failed.].

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1414164

Title:
  Horizon: incorrect error message when VM deletion failed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  nova-manage version: 2014.2.2

  Issue: incorrect error message when a VM deletion fails.  The message
  should say "delete", not "launch".

  e.g. ERROR: Failed to launch instance XYZ.  Please try again later.
  [Error: delete_port_precommit failed.].

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1414164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414138] [NEW] Integration tests - extend test flavors with update action

2015-01-23 Thread Martin Pavlásek
Public bug reported:

The current test case only creates a flavor, checks its existence and
deletes it.  The update action (Edit flavor) is missing.

** Affects: horizon
 Importance: Undecided
 Assignee: Martin Pavlásek (mpavlase)
 Status: New


** Tags: integration-tests

** Changed in: horizon
 Assignee: (unassigned) => Martin Pavlásek (mpavlase)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1414138

Title:
  Integration tests - extend test flavors with update action

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The current test case only creates a flavor, checks its existence and
  deletes it.  The update action (Edit flavor) is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1414138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414118] [NEW] Horizon shows stacktrace instead of 401: unauthorized

2015-01-23 Thread Nicolas Vila
Public bug reported:

In Icehouse, if a user is unauthorized to access volumes, Horizon shows
a popup upon login as well as when accessing volumes. In Juno, Horizon
shows no warning upon login, and when one accesses volumes it shows a
stack trace.

One way to reproduce is to remove cinder endpoint from keystone.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1414118

Title:
  Horizon shows stacktrace instead of 401: unauthorized

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Icehouse, if a user is unauthorized to access volumes, Horizon shows
  a popup upon login as well as when accessing volumes. In Juno, Horizon
  shows no warning upon login, and when one accesses volumes it shows a
  stack trace.

  One way to reproduce is to remove cinder endpoint from keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1414118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407592] Re: Snapshots fail to upload larger (~30G+) images, with error '500 Internal Server Error Failed to upload image'

2015-01-23 Thread Joe Gordon
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407592

Title:
  Snapshots fail to upload larger (~30G+) images, with error '500
  Internal Server Error Failed to upload image'

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  I use Ubuntu 14.04.1 and OpenStack Juno latest release.

  Taking a snapshot of a small virtual machine works.  Taking a snapshot
  of a big VM (more than 30G) fails after it starts saving the snapshot
  to the Glance storage.

  To summarize:
  - snapshots can be created for small VMs but not for big ones
  - there is enough space on the Glance storage and on the root partitions of
all involved physical machines

  The following errors could be found in logs:
  /var/log/nova/nova-compute.log
  2015-01-05 11:14:56.745 1767 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for node4:node4
  2015-01-05 11:15:32.157 1767 ERROR glanceclient.common.http 
[req-692621ba-7a6b-471a-9e16-d74ee2eb875d ] Request returned failure status 500.
  2015-01-05 11:15:32.158 1767 WARNING urllib3.connectionpool 
[req-692621ba-7a6b-471a-9e16-d74ee2eb875d ] HttpConnectionPool is full, 
discarding connection: cloud
  2015-01-05 11:15:34.479 1767 ERROR glanceclient.common.http 
[req-692621ba-7a6b-471a-9e16-d74ee2eb875d ] Request returned failure status 401.
  2015-01-05 11:15:34.480 1767 ERROR nova.compute.manager 
[req-692621ba-7a6b-471a-9e16-d74ee2eb875d None] [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] Error while trying to clean up image 
71cd0c84-fa54-4d35-9ad6-63e36969de1c
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] Traceback (most recent call last):
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 375, in 
decorated_function
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] self.image_api.delete(context, 
image_id)
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]   File 
"/usr/lib/python2.7/dist-packages/nova/image/api.py", line 137, in delete
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] return session.delete(context, 
image_id)
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 418, in delete
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] self._client.call(context, 1, 
'delete', image_id)
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 232, in call
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] return getattr(client.images, 
method)(*args, **kwargs)
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]   File 
"/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 255, in 
delete
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] resp, body = self.client.delete(url)
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 265, in 
delete
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] return self._request('DELETE', url, 
**kwargs)
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 221, in 
_request
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] raise exc.from_response(resp, 
resp.content)
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d] HTTPUnauthorized: 
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]  
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]   401 Unauthorized
  2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: 
f5a232a4-a7b2-4858-b793-38936b0f7a4d]  
  2015-01-05 11:

[Yahoo-eng-team] [Bug 1414067] [NEW] Table row updating doesn't always work with tabs

2015-01-23 Thread Jonas
Public bug reported:

Updating table rows with ajax doesn't work if the table is part of a
tabs.TabbedTableView and the tabs.TableTab is not the first tab. Please
see the attached example code. Note that loading the second page
directly with
http://127.0.0.1:8000/mydashboard/?tab=mypanel_tabs__second_tab works
fine. It also works if the table is part of the first tab.

Looking at the javascript code the problem seems to be that
horizon.datatables.update(); in horizon.tables.js gets called before the
code that switches to a different tab. Therefore the table doesn't exist
yet when it is trying to update the rows. An easy solution would be to
run horizon.datatables.update(); via horizon.tabs.addTabLoadFunction()
as this ensures that the tab code is executed before the table is
updated.

** Affects: horizon
 Importance: Undecided
 Status: New

** Patch added: "0001-ajax-bug-example.patch"
   
https://bugs.launchpad.net/bugs/1414067/+attachment/4304424/+files/0001-ajax-bug-example.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1414067

Title:
  Table row updating doesn't always work with tabs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Updating table rows with ajax doesn't work if the table is part of a
  tabs.TabbedTableView and the tabs.TableTab is not the first tab.
  Please see the attached example code. Note that loading the second
  page directly with
  http://127.0.0.1:8000/mydashboard/?tab=mypanel_tabs__second_tab works
  fine. It also works if the table is part of the first tab.

  Looking at the javascript code the problem seems to be that
  horizon.datatables.update(); in horizon.tables.js gets called before
  the code that switches to a different tab. Therefore the table doesn't
  exist yet when it is trying to update the rows. An easy solution would
  be to run horizon.datatables.update(); via
  horizon.tabs.addTabLoadFunction() as this ensures that the tab code is
  executed before the table is updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1414067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414065] [NEW] Nova can loose track of running VM if live migration raises an exception

2015-01-23 Thread Daniel Berrange
Public bug reported:

There is a fairly serious bug in VM state handling during live
migration, with the result that if libvirt raises an error *after* the VM
has successfully live migrated to the target host, Nova can end up
thinking the VM is shutoff everywhere, despite it still being active.
The consequences of this are quite dire as the user can then manually
start the VM again and corrupt any data in shared volumes and the like.

The fun starts in the _live_migration method in
nova.virt.libvirt.driver, if the 'migrateToURI2' method fails *after*
the guest has completed migration.

At start of migration, we see an event received by Nova for the new QEMU
process starting on target host

2015-01-23 15:39:57.743 DEBUG nova.compute.manager [-] [instance:
12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power state
after lifecycle event "Started"; current vm_state: active, current
task_state: migrating, current DB power_state: 1, VM power_state: 1 from
(pid=19494) handle_lifecycle_event
/home/berrange/src/cloud/nova/nova/compute/manager.py:1134


Upon migration completion we see CPUs start running on the target host

2015-01-23 15:40:02.794 DEBUG nova.compute.manager [-] [instance:
12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power state
after lifecycle event "Resumed"; current vm_state: active, current
task_state: migrating, current DB power_state: 1, VM power_state: 1 from
(pid=19494) handle_lifecycle_event
/home/berrange/src/cloud/nova/nova/compute/manager.py:1134

And finally an event saying that the QEMU on the source host has stopped

2015-01-23 15:40:03.629 DEBUG nova.compute.manager [-] [instance:
12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power state
after lifecycle event "Stopped"; current vm_state: active, current
task_state: migrating, current DB power_state: 1, VM power_state: 4 from
(pid=23081) handle_lifecycle_event
/home/berrange/src/cloud/nova/nova/compute/manager.py:1134


It is the last event that causes the trouble.  It causes Nova to mark the VM as 
shutoff at this point.

Normally the '_live_migrate' method would succeed and so Nova would then
immediately & explicitly mark the guest as running on the target host.
If an exception occurs though, this explicit update of VM state doesn't
happen so Nova considers the guest shutoff, even though it is still
running :-(


The lifecycle events from libvirt have an associated "reason", so we could see 
that the shutoff event from libvirt corresponds to a migration being completed, 
and so not mark the VM as shutoff in Nova.  We would also have to make sure the 
target host processes the 'resume' event upon migrate completion.

A safer approach though, might be to just mark the VM as being in an
ERROR state if any exception occurs during migration.
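
A sketch of that safer approach (names are illustrative; this is not
nova's actual _live_migration code):

    def run_live_migration(migrate_to_uri, instance, dest, set_vm_state):
        """Wrap the libvirt migration call so that a late failure cannot
        leave the instance looking SHUTOFF while it is still running."""
        try:
            migrate_to_uri(instance, dest)
        except Exception:
            # We can no longer tell which host really owns the running
            # guest, so flag the instance for operator attention instead
            # of trusting the 'Stopped' lifecycle event from the source.
            set_vm_state(instance, 'error')
            raise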

** Affects: nova
 Importance: High
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414065

Title:
  Nova can loose track of running VM if live migration raises an
  exception

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is a fairly serious bug in VM state handling during live
  migration, with the result that if libvirt raises an error *after* the
  VM has successfully live migrated to the target host, Nova can end up
  thinking the VM is shutoff everywhere, despite it still being active.
  The consequences of this are quite dire as the user can then manually
  start the VM again and corrupt any data in shared volumes and the
  like.

  The fun starts in the _live_migration method in
  nova.virt.libvirt.driver, if the 'migrateToURI2' method fails *after*
  the guest has completed migration.

  At start of migration, we see an event received by Nova for the new
  QEMU process starting on target host

  2015-01-23 15:39:57.743 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Started"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  Upon migration completion we see CPUs start running on the target host

  2015-01-23 15:40:02.794 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Resumed"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  And finally an event saying that the QEMU on the source host has
  stopped

  2015-01-23 15:40:03.629 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Stopped"; current 

[Yahoo-eng-team] [Bug 1414038] [NEW] libvirt driver creates wrong console device for system z

2015-01-23 Thread Markus Zoeller
Public bug reported:

The system z platform needs a console device for the log file of an
instance. A (manual[*]) regression test showed that this is not the case
with master from 2015-01-22: guest.cpu.arch is None, which leads to the
creation of the wrong device.

Wrong domain XML:
[XML snippet stripped by the mailing list archive]

Correct domain XML:
[XML snippet stripped by the mailing list archive]

[*] The CI environment for this platform is still under development.
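
Since the XML snippets did not survive in this archive, here is only a
rough sketch of the idea in the text above; the target type names and
the fallback to the image/host architecture are assumptions, not the
actual nova fix.

    S390_ARCHES = ('s390', 's390x')


    def pick_console_target_type(guest_cpu_arch, image_or_host_arch):
        # guest.cpu.arch can be None when no explicit CPU model is
        # configured, so fall back to an architecture that is always known.
        arch = guest_cpu_arch or image_or_host_arch
        if arch in S390_ARCHES:
            return 'sclplm'  # assumed log-console target type on system z
        return 'serial'      # assumed default for other platforms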

** Affects: nova
 Importance: Undecided
 Assignee: Markus Zoeller (mzoeller)
 Status: In Progress


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Markus Zoeller (mzoeller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414038

Title:
  libvirt driver creates wrong console device for system z

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The system z platform needs a console device for the log file of an
  instance. A (manual[*]) regression test showed that this is not the
  case with master from 2015-01-22: guest.cpu.arch is None, which leads
  to the creation of the wrong device.

  Wrong domain XML:
  [XML snippet stripped by the mailing list archive]

  Correct domain XML:
  [XML snippet stripped by the mailing list archive]

  [*] The CI environment for this platform is still under development.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413998] [NEW] Class rpc.Connection misses close() method

2015-01-23 Thread Ilya Shakhat
Public bug reported:

The stop() method of neutron.common.rpc.Service calls the non-existent
Connection.close() method. The issue is hidden by a broad try-except
block; however, oslo.messaging's rpc_server is not shut down gracefully.
Code fragment:
https://github.com/openstack/neutron/blob/master/neutron/common/rpc.py#L181
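
A minimal sketch of the missing method, assuming the Connection wrapper
keeps the oslo.messaging RPC servers it created in a list (as the linked
code suggests); stop()/wait() is the standard oslo.messaging shutdown
sequence.

    class Connection(object):
        """Illustrative subset of neutron.common.rpc.Connection."""

        def __init__(self):
            self.servers = []

        def consume_in_threads(self):
            for server in self.servers:
                server.start()
            return self.servers

        def close(self):
            # The method Service.stop() expects but which is currently
            # missing: shut each RPC server down gracefully instead of
            # relying on the broad try/except to hide an AttributeError.
            for server in self.servers:
                server.stop()   # stop accepting new messages
                server.wait()   # let in-flight messages finish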

** Affects: neutron
 Importance: Undecided
 Assignee: Ilya Shakhat (shakhat)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Ilya Shakhat (shakhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413998

Title:
  Class rpc.Connection misses close() method

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The stop() method of neutron.common.rpc.Service calls the non-existent
  Connection.close() method. The issue is hidden by a broad try-except
  block; however, oslo.messaging's rpc_server is not shut down
  gracefully. Code fragment:
  https://github.com/openstack/neutron/blob/master/neutron/common/rpc.py#L181

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413985] [NEW] BASEV2.__table_args__ is not properly overloaded

2015-01-23 Thread Cedric Brandily
Public bug reported:

neutron.db.model_base.BASEV2 defines a __table_args__ attribute.
Most of its subclasses that overload __table_args__ do not properly
inherit the __table_args__ from the parent class.
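
A short SQLAlchemy sketch of the pattern involved (the model names are
made up): assigning a new __table_args__ simply replaces the dict
defined on the base class, while re-including the parent's dict as the
last tuple element preserves it.

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class BASEV2(Base):
        __abstract__ = True
        __table_args__ = {'mysql_engine': 'InnoDB'}  # easily lost by subclasses


    class BadWidget(BASEV2):
        __tablename__ = 'bad_widgets'
        id = sa.Column(sa.String(36), primary_key=True)
        name = sa.Column(sa.String(255))
        # Overwrites the parent's dict: mysql_engine is silently dropped.
        __table_args__ = (sa.UniqueConstraint('name'),)


    class GoodWidget(BASEV2):
        __tablename__ = 'good_widgets'
        id = sa.Column(sa.String(36), primary_key=True)
        name = sa.Column(sa.String(255))
        # Constraints first, inherited keyword dict last -- both are kept.
        __table_args__ = (sa.UniqueConstraint('name'), BASEV2.__table_args__)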

** Affects: neutron
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: New


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413985

Title:
  BASEV2.__table_args__ is not properly overloaded

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron.db.model_base.BASEV2 defines a __table_args__ attribute.
  Most of its subclasses that overload __table_args__ do not properly
  inherit the __table_args__ from the parent class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396501] Re: Post-Creation option marked mandatory in Launch Instance window

2015-01-23 Thread Kieran Spear
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Importance: Undecided => Low

** Changed in: horizon/juno
   Status: New => In Progress

** Changed in: horizon/juno
 Assignee: (unassigned) => Kieran Spear (kspear)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1396501

Title:
  Post-Creation option marked mandatory in Launch Instance window

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  In Progress

Bug description:
  In the Project -> Instances -> Launch Instance window, the Post-Creation
  tab is marked with a *, making it look like a mandatory parameter,
  whereas it is not one.

  It represents --user-data, which is an optional argument. Also, if we
  do not specify anything in the Script Data (when Direct Input is the
  script source) or if we do not choose a file (when File is the script
  source), and launch an instance, no error is raised.

  Thus, the asterisk is misleading and should be removed to mark the
  parameter as optional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1396501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389108] Re: Horizon crashed when parsing volume list including a volume without name

2015-01-23 Thread Kieran Spear
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Importance: Undecided => Low

** Changed in: horizon/juno
   Status: New => In Progress

** Changed in: horizon/juno
 Assignee: (unassigned) => Kieran Spear (kspear)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1389108

Title:
  Horizon crashed when parsing volume list including a volume without
  name

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  In Progress

Bug description:
  This bug can be reproduced as follows:
   - 1. Use the Nova or Cinder CLI to create a volume without a name that can
be used to boot a new VM:

  nova boot some_name --flavor 1 --image some_img_id --nic net-
  id=some_nic_id  --block-device
  source=image,id=some_img_id,dest=volume,size=1,bootindex=1"

     Note that in the "--block-device" argument we don't specify a
  volume name

  
  cinder create 1 --image-id some_img_id

 Note that we don't specify a name for the volume to be created

  
   - 2. After that, if we try to retrieve the volume list by clicking the
"Volumes" tab in Horizon, Horizon always crashes. Note that the crash does
not occur when using the CLI "cinder list" to list all available volumes.

  Traceback:

  [Tue Nov 04 06:36:51.620653 2014] [:error] [pid 12412:tid
  140421861299968] Error while rendering table rows.

  [Tue Nov 04 06:36:51.620733 2014] [:error] [pid 12412:tid
  140421861299968] Traceback (most recent call last):

  [Tue Nov 04 06:36:51.620776 2014] [:error] [pid 12412:tid
  140421861299968]   File
  
"/opt/stack_stable_juno/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
  line 1731, in get_rows

  [Tue Nov 04 06:36:51.620817 2014] [:error] [pid 12412:tid
  140421861299968] row = self._meta.row_class(self, datum)

  [Tue Nov 04 06:36:51.620871 2014] [:error] [pid 12412:tid
  140421861299968]   File
  
"/opt/stack_stable_juno/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
  line 522, in __init__

  [Tue Nov 04 06:36:51.620911 2014] [:error] [pid 12412:tid
  140421861299968] self.load_cells()

  [Tue Nov 04 06:36:51.620975 2014] [:error] [pid 12412:tid
  140421861299968]   File
  
"/opt/stack_stable_juno/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
  line 548, in load_cells

  [Tue Nov 04 06:36:51.621040 2014] [:error] [pid 12412:tid
  140421861299968] cell = table._meta.cell_class(datum, column,
  self)

  [Tue Nov 04 06:36:51.621078 2014] [:error] [pid 12412:tid
  140421861299968]   File
  
"/opt/stack_stable_juno/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
  line 644, in __init__

  [Tue Nov 04 06:36:51.621118 2014] [:error] [pid 12412:tid
  140421861299968] self.data = self.get_data(datum, column, row)

  [Tue Nov 04 06:36:51.621159 2014] [:error] [pid 12412:tid
  140421861299968]   File
  
"/opt/stack_stable_juno/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
  line 682, in get_data

  [Tue Nov 04 06:36:51.621200 2014] [:error] [pid 12412:tid
  140421861299968] data = column.get_data(datum)

  [Tue Nov 04 06:36:51.621238 2014] [:error] [pid 12412:tid
  140421861299968]   File
  
"/opt/stack_stable_juno/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
  line 375, in get_data

  [Tue Nov 04 06:36:51.621331 2014] [:error] [pid 12412:tid
  140421861299968] data = self.get_raw_data(datum)

  [Tue Nov 04 06:36:51.621373 2014] [:error] [pid 12412:tid
  140421861299968]   File
  
"/opt/stack_stable_juno/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/volumes/volumes/tables.py",
  line 298, in get_raw_data

  [Tue Nov 04 06:36:51.621414 2014] [:error] [pid 12412:tid
  140421861299968] "dev": html.escape(attachment["device"])}

  [Tue Nov 04 06:36:51.621473 2014] [:error] [pid 12412:tid
  140421861299968] KeyError: 'device'

  Root cause: from the above traceback it is easy to see that Horizon
  crashes because it tries to get the value of a non-existent key.

  Proposal for fixing this bug: access the key "device" safely, as
follows:
     "dev": html.escape(attachment.get("device", ""))}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1389108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367442] Re: project > images fixed filter is missing icons

2015-01-23 Thread Kieran Spear
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/juno
   Importance: Undecided => Low

** Changed in: horizon/juno
   Status: New => In Progress

** Changed in: horizon/juno
 Assignee: (unassigned) => Kieran Spear (kspear)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367442

Title:
  project > images fixed filter is missing icons

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  In Progress

Bug description:
  Project | Shared with Me | Public ==> should have icons beside each
  filter choice, as they did before the Bootstrap 3 update.

  
https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_data_table_table_actions.html#L7

  code:
  {% if button.icon %}

  With Bootstrap 3 update, we should be using... class="glyphicon
  glyphicon-star" for icons instead of 

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413503] Re: ovs agent get exception when processing VIF ports

2015-01-23 Thread yalei wang
** No longer affects: devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413503

Title:
  ovs agent get exception when processing VIF ports

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Recording this error here; not sure whether it is a neutron bug

  =
  2015-01-22 15:46:12.460 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-bde320a0-61da-4f4b-8710-eef1e8c8819b None None] Error while processing VIF 
ports
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", 
line 1455, in rpc_loop
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", 
line 1221, in process_network_ports
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 342, in 
setup_port_filters
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 201, in 
decorated_function
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent *args, **kwargs)
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 226, in 
prepare_devices_filter
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent security_groups, 
security_group_member_ips)
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.gen.next()
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/firewall.py", line 106, in defer_apply
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.filter_defer_apply_off()
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 539, in 
filter_defer_apply_off
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.iptables.defer_apply_off()
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 397, in 
defer_apply_off
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self._apply()
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 412, in 
_apply
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent with 
lockutils.lock(lock_name, utils.SYNCHRONIZED_PREFIX, True):
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent return self.gen.next()
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
381, in lock
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ext_lock = 
external_lock(name, lock_file_prefix, lock_path)
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
318, in external_lock
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent lock_file_path = 
_get_lock_path(name, lock_file_prefix, lock_path)
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
309, in _get_lock_path
  2015-01-22 15:46:12.460 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent local_lock

[Yahoo-eng-team] [Bug 1413905] [NEW] some exception messages of trust manager are not formatted

2015-01-23 Thread wanghong
Public bug reported:

There are three unformatted exceptions in trust manager:
https://github.com/openstack/keystone/blob/master/keystone/trust/core.py#L56
https://github.com/openstack/keystone/blob/master/keystone/trust/core.py#L65
https://github.com/openstack/keystone/blob/master/keystone/trust/core.py#L140

The test code is as follows:
>>> from keystone import exception
>>> from keystone.i18n import _
>>> from keystone import config
>>> CONF = config.CONF
>>> CONF.debug=True
>>> raise exception.Forbidden(_('%(redelegation_depth)d  [0..%(max_count)d]'),
... redelegation_depth=1, max_count=2)
Traceback (most recent call last):
  File "", line 1, in 
keystone.exception.Forbidden: %(redelegation_depth)d  [0..%(max_count)d] 
(Disable debug mode to suppress these details.)
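
A self-contained illustration of what is described above (the toy
Forbidden class below only mimics the behaviour shown in the traceback;
it is not keystone's implementation): a caller-supplied message is used
verbatim, so the placeholders have to be interpolated before the
exception is raised.

    class Forbidden(Exception):
        message_format = 'You are not authorized to perform the requested action.'

        def __init__(self, message=None, **kwargs):
            # kwargs only ever feed the class-level message_format,
            # never a caller-supplied message (as the report describes).
            super(Forbidden, self).__init__(
                message or self.message_format % kwargs)


    # Broken: the raw placeholders end up in the error text, as above.
    print(Forbidden('%(redelegation_depth)d exceeds [0..%(max_count)d]',
                    redelegation_depth=1, max_count=2))

    # Fixed: interpolate first, then construct the exception.
    print(Forbidden('%(redelegation_depth)d exceeds [0..%(max_count)d]'
                    % {'redelegation_depth': 1, 'max_count': 2}))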

** Affects: keystone
 Importance: Undecided
 Assignee: wanghong (w-wanghong)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => wanghong (w-wanghong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1413905

Title:
  some exception messages of trust manager are not formatted

Status in OpenStack Identity (Keystone):
  New

Bug description:
  There are three unformatted exceptions in trust manager:
  https://github.com/openstack/keystone/blob/master/keystone/trust/core.py#L56
  https://github.com/openstack/keystone/blob/master/keystone/trust/core.py#L65
  https://github.com/openstack/keystone/blob/master/keystone/trust/core.py#L140

  The test code is as follows:
  >>> from keystone import exception
  >>> from keystone.i18n import _
  >>> from keystone import config
  >>> CONF = config.CONF
  >>> CONF.debug=True
  >>> raise exception.Forbidden(_('%(redelegation_depth)d  [0..%(max_count)d]'),
  ... redelegation_depth=1, max_count=2)
  Traceback (most recent call last):
File "", line 1, in 
  keystone.exception.Forbidden: %(redelegation_depth)d  [0..%(max_count)d] 
(Disable debug mode to suppress these details.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1413905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp