[Yahoo-eng-team] [Bug 1296148] [NEW] neutronclient should raise generic per-code exceptions

2014-03-22 Thread Akihiro Motoki
Public bug reported:

Neutronclient currently raises either specific known exceptions (such as
NetworkNotFound, PortNotFound, ...) or the generic NeutronClientException, so
library users (like Horizon) cannot catch a generic NotFound exception without
inspecting the details of NeutronClientException.
Obviously, neutronclient should raise individual exceptions based on the
response status code (400, 404, 409 and so on).

By doing so, Horizon exception handling will be much simpler, and I believe it
brings similar benefits to other library users.
In Horizon this is targeted at the Icehouse release.

It is related to bug 1284317, which is specific to the Router Not Found
exception; this bug is intended to address the issue in a more generic
way.
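
A minimal sketch of what this means for callers, assuming the proposed
per-status-code exception classes (e.g. NotFound) were added to
neutronclient.common.exceptions; the helper function is illustrative, not
existing Horizon code:

    from neutronclient.common import exceptions as neutron_exc

    def get_network_name(client, network_id):
        try:
            return client.show_network(network_id)['network']['name']
        except neutron_exc.NeutronClientException as e:
            # Today a caller has to inspect the status code by hand.
            if getattr(e, 'status_code', None) == 404:
                return None
            raise

    # With generic per-code exceptions the handler would collapse to:
    #     except neutron_exc.NotFound:
    #         return None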

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New

** Affects: python-neutronclient
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
Milestone: None => icehouse-rc1

** Changed in: horizon
   Importance: Undecided => Medium

** Changed in: horizon
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296148

Title:
  neutronclient should raise generic per-code exceptions

Status in OpenStack Dashboard (Horizon):
  New
Status in Python client library for Neutron:
  In Progress

Bug description:
  Neutronclient currently raises either specific known exceptions (such as
NetworkNotFound, PortNotFound, ...) or the generic NeutronClientException, so
library users (like Horizon) cannot catch a generic NotFound exception without
inspecting the details of NeutronClientException.
  Obviously, neutronclient should raise individual exceptions based on the
response status code (400, 404, 409 and so on).

  By doing so, Horizon exception handling will be much simpler, and I believe
it brings similar benefits to other library users.
  In Horizon this is targeted at the Icehouse release.

  It is related to bug 1284317, which is specific to the Router Not Found
  exception; this bug is intended to address the issue in a more generic
  way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296321] [NEW] run_tests.sh outputs

2014-03-23 Thread Akihiro Motoki
Public bug reported:

When running run_tests.sh, we see the following outputs:




According to my initial debugging, it comes from
openstack_dashboard.dashboards.project.instances.tests.InstanceTests.test_index_form_action_with_pagination

(What I did was simply print the test names by adding "print self.id()" to
TestCase.setUp in openstack_dashboard/test/helpers.py.)
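
A self-contained illustration of that debugging trick (not the actual
helpers.py change): printing the test id in setUp() shows which test
produces the stray output.

    import unittest

    class OutputLocatorTestCase(unittest.TestCase):
        """Hypothetical base class used only to show the technique."""

        def setUp(self):
            # Print the fully qualified test id before each test runs.
            print(self.id())
            super(OutputLocatorTestCase, self).setUp()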

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296321

Title:
  run_tests.sh outputs 

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When running run_tests.sh, we see the following outputs:

  
  

  According to my initial debugging, it comes from
  openstack_dashboard.dashboards.project.instances.tests.InstanceTests.test_index_form_action_with_pagination

  (What I did was simply print the test names by adding "print self.id()" to
  TestCase.setUp in openstack_dashboard/test/helpers.py.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296337] [NEW] openstack.common needs to be synced and unused module should be removed

2014-03-23 Thread Akihiro Motoki
Public bug reported:

Most modules in openstack_dashboard/openstack/common are out of date and need
to be synced with the latest oslo.
The following modules are out of date.

In addition, many modules are not used in OpenStack Dashboard. If I remember
correctly, they were introduced for third-party integrations, but if a
third-party implementation requires them, it should import oslo in its own
tree rather than in the upstream one.
Note that notifier and rpc have now graduated from oslo-incubator and are
maintained as a separate module (oslo.messaging).
Use of the oslo-incubator rpc and notifier modules is deprecated.
IMO they should be removed from the horizon tree, from a maintenance
perspective, before the Icehouse release.

- config/*
- context
- eventlet_backdoor
- loopingcall
- network_utils
- notifier/*
- rpc/*
- service
- threadgroup
- uuidutils

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296337

Title:
  openstack.common needs to be synced and unused module should be
  removed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Most modules in openstack_dashboard/openstack/common are out of date and
need to be synced with the latest oslo.
  The following modules are out of date.

  In addition, many modules are not used in OpenStack Dashboard. If I remember
correctly, they were introduced for third-party integrations, but if a
third-party implementation requires them, it should import oslo in its own
tree rather than in the upstream one.
  Note that notifier and rpc have now graduated from oslo-incubator and are
maintained as a separate module (oslo.messaging).
  Use of the oslo-incubator rpc and notifier modules is deprecated.
  IMO they should be removed from the horizon tree, from a maintenance
perspective, before the Icehouse release.

  - config/*
  - context
  - eventlet_backdoor
  - loopingcall
  - network_utils
  - notifier/*
  - rpc/*
  - service
  - threadgroup
  - uuidutils

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296356] [NEW] attached volume link in instance detail points to volume snapshot detail

2014-03-23 Thread Akihiro Motoki
Public bug reported:

When a volume is attached to an instance, the instance detail page has a
link to the volume detail, but the link points to the volume snapshot
detail and leads to an "Unable to retrieve snapshot detail" error.

The instance detail template _detail_overview.html needs to be updated to
point to the volume detail URL: horizon:project:volumes:volumes:detail
(two "volumes" levels are required).

Note that 'horizon:project:volumes:detail' is actually a reference to the
volume snapshot detail. It is confusing and should be fixed too.
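
A rough Python illustration of the two URL names involved, using Django's
reverse(); the volume_id value is a placeholder and the snippet assumes
Horizon's URLconf is loaded:

    from django.core.urlresolvers import reverse

    volume_id = "11111111-2222-3333-4444-555555555555"  # placeholder

    # Current link target -- despite the name, this resolves to the volume
    # *snapshot* detail view:
    snapshot_detail_url = reverse('horizon:project:volumes:detail',
                                  args=[volume_id])

    # Intended link target -- the volume detail view (note the two "volumes"):
    volume_detail_url = reverse('horizon:project:volumes:volumes:detail',
                                args=[volume_id])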

** Affects: horizon
 Importance: Low
 Status: New

** Changed in: horizon
Milestone: None => icehouse-rc1

** Changed in: horizon
   Importance: Undecided => Medium

** Changed in: horizon
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296356

Title:
  attached volume link in instance detail points to volume snapshot
  detail

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a volume is attached to an instance, the instance detail page has a
  link to the volume detail, but the link points to the volume snapshot
  detail and leads to an "Unable to retrieve snapshot detail" error.

  The instance detail template _detail_overview.html needs to be updated to
  point to the volume detail URL: horizon:project:volumes:volumes:detail
  (two "volumes" levels are required).

  Note that 'horizon:project:volumes:detail' is actually a reference to the
  volume snapshot detail. It is confusing and should be fixed too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296397] [NEW] It is confusing "horizon:project:volumes:detail" refers to volume snapshot

2014-03-23 Thread Akihiro Motoki
Public bug reported:

Related to bug 1296356, it is confusing that
"horizon:project:volumes:detail" refers to a volume snapshot. It would be
better if the volume snapshot code were placed in a snapshots directory.

** Affects: horizon
 Importance: Low
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296397

Title:
  It is confusing "horizon:project:volumes:detail" refers to volume
  snapshot

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Related to bug 1296356, it is confusing that
  "horizon:project:volumes:detail" refers to a volume snapshot. It would be
  better if the volume snapshot code were placed in a snapshots directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296398] [NEW] It is confusing "horizon:project:volumes:detail" refers to volume snapshot

2014-03-23 Thread Akihiro Motoki
Public bug reported:

Related to bug 1296356, it is confusing that
"horizon:project:volumes:detail" refers to a volume snapshot. It would be
better if the volume snapshot code were placed in a snapshots directory.

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296398

Title:
  It is confusing "horizon:project:volumes:detail" refers to volume
  snapshot

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Related to bug 1296356, it is confusing that
  "horizon:project:volumes:detail" refers to a volume snapshot. It would be
  better if the volume snapshot code were placed in a snapshots directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296398/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296419] [NEW] It is better to warn when trying to delete a pool with members

2014-03-23 Thread Akihiro Motoki
Public bug reported:

Regarding the discussion in bug 1242338, there is a suggestion that it is
better to check whether a VIP has members when it is being deleted.
From a UX perspective, it is better to warn the user when they try to delete
a VIP that still has members.

** Affects: horizon
 Importance: Wishlist
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296419

Title:
  It is better to warn when trying to delete a pool with members

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Regarding the discussion in bug 1242338, there is a suggestion that it is
  better to check whether a VIP has members when it is being deleted.
  From a UX perspective, it is better to warn the user when they try to delete
  a VIP that still has members.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293631] Re: when delete a dead VM the status of attached cinder volume is not updated

2014-03-23 Thread Akihiro Motoki
Horizon just fetches the volume status from Cinder. This should be addressed
on the Cinder side.

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1293631

Title:
  when delete a dead VM the status of attached cinder volume is not
  updated

Status in Cinder:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I terminate a VM which is in suspended status and has a cinder
  volume attached, the status of the cinder volume cannot be updated. The
  volume status is kept as attached and the attached-host in the
  database isn't updated. The cinder volume becomes an orphan on which
  you can't do delete/update/attach/detach. The only option is to go to the
  database and update the volumes table manually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1293631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242338] Re: Trying to remove a load balancer pool (which contains members) via horizon ends with error

2014-03-23 Thread Akihiro Motoki
The point related to the UX perspective (is it reasonable to allow deleting
a VIP with members?) is filed as bug 1296419.

I confirmed that Horizon does not return an internal server error when
deleting a VIP with members and that the VIP is successfully deleted, so I am
marking this bug Invalid for Horizon.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1242338

Title:
  Trying to remove a load balancer pool (which contains members) via
  horizon ends with error

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  I've tried to remove a pool that has 2 members and a health monitor; the
operation failed with the following popup:
  "Error: Unable to delete pool. 409-{u'NeutronError': {u'message': u'Pool 
f5004d04-4461-4a9a-aa7c-04a9bdfde974 is still in use', u'type': u'PoolInUse', 
u'detail': u''}}"

  I did expect this operation to fail; I just didn't expect it to be
  available in Horizon while the pool still has other objects associated
  with it, and I didn't expect it to leave the pool in "PENDING_DELETE"
  status.

  The exception from the log file:

  2013-10-20 16:12:13.564 22804 ERROR 
neutron.services.loadbalancer.drivers.haproxy.agent_manager [-] Unable to 
destroy device for pool: f5004d04-4461-4a9a-aa7c-04a9bdfde974
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager Traceback (most 
recent call last):
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/usr/lib/python2.6/site-packages/neutron/services/loadbalancer/drivers/haproxy/agent_manager.py",
 line 244, in destroy_device
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
self.driver.destroy(pool_id)
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/usr/lib/python2.6/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 92, in destroy
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
ns.garbage_collect_namespace()
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 141, in 
garbage_collect_namespace
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
self.netns.delete(self.namespace)
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 440, in 
delete
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
self._as_root('delete', name, use_root_namespace=True)
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 206, in 
_as_root
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
kwargs.get('use_root_namespace', False))
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 65, in 
_as_root
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager namespace)
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 76, in 
_execute
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
root_helper=root_helper)
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py", line 61, in 
execute
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager raise 
RuntimeError(m)
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager RuntimeError: 
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'delete', 
'qlbaas-f5004d04-4461-4a9a-aa7c-04a9bdfde974']
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager Exit code: 255
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager Stdout: ''
  2013-10-20 16:12:13.564 22804 TRACE 
neutron.services.loadbalancer.drivers.hapr

[Yahoo-eng-team] [Bug 1296429] [NEW] default of OPENSTACK_KEYSTONE_DEFAULT_ROLE still "Member"

2014-03-23 Thread Akihiro Motoki
Public bug reported:

In bug 1264228 and https://review.openstack.org/#/c/64137/, the default
value of OPENSTACK_KEYSTONE_DEFAULT_ROLE was changed from "Member" to
"_member_", which is the default config value in keystone.

However, OPENSTACK_KEYSTONE_DEFAULT_ROLE in openstack_dashboard/settings.py,
which provides the default value, was not changed.
It should be updated too.

It would be better to fix this before the Icehouse release for consistency.
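
A sketch of the change being proposed for openstack_dashboard/settings.py
(the "before" value is shown as a comment for context):

    # Before: OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
    # After, matching the keystone default role name:
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"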

** Affects: horizon
 Importance: Low
     Assignee: Akihiro Motoki (amotoki)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296429

Title:
  default of OPENSTACK_KEYSTONE_DEFAULT_ROLE still "Member"

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In bug 1264228 and https://review.openstack.org/#/c/64137/, the
  default value of OPENSTACK_KEYSTONE_DEFAULT_ROLE was changed from
  "Member" to "_member_", which is the default config value in keystone.

  However, OPENSTACK_KEYSTONE_DEFAULT_ROLE in openstack_dashboard/settings.py,
  which provides the default value, was not changed.
  It should be updated too.

  It would be better to fix this before the Icehouse release for consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292366] Re: UnboundLocalError: local variable 'network_name' in neutronv2/api.py, line 964

2014-03-23 Thread Akihiro Motoki
** Changed in: neutron
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1292366

Title:
  UnboundLocalError: local variable 'network_name' in
  neutronv2/api.py,line 964

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When using 'nova host-evacuate compute5 --target_host compute3
  --on-shared-storage' to evacuate, the following error occurs:

  2014-03-14 13:19:38.573 28576 ERROR nova.compute.manager 
[req-1e0bf5e9-1b46-4666-b3e0-177ec3fe2e05 6965226966304bd5a3ae07587d5ef958 
d2390e6dd4ce4b48866be0d3d1417c01] [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] Setting instance vm_state to ERROR
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] Traceback (most recent call last):
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4990, in 
_error_out_instance_on_exception
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] yield
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2029, in 
rebuild_instance
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] network_info = 
self._get_instance_nw_info(context, instance)
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 876, in 
_get_instance_nw_info
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] instance)
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 49, in wrapper
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] res = f(self, context, *args, 
**kwargs)
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]   File 
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 456, in 
get_instance_nw_info
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] result = 
self._get_instance_nw_info(context, instance, networks)
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]   File 
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 465, in 
_get_instance_nw_info
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] nw_info = 
self._build_network_info_model(context, instance, networks)
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]   File 
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 1011, in 
_build_network_info_model
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] subnets)
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]   File 
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 964, in 
_nw_info_build_network
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] label=network_name,
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736] UnboundLocalError: local variable 
'network_name' referenced before assignment
  2014-03-14 13:19:38.573 28576 TRACE nova.compute.manager [instance: 
cb7d4246-4570-47fe-a3e6-79a742d63736]
  2014-03-14 13:19:39.008 28576 ERROR nova.openstack.common.rpc.amqp 
[req-1e0bf5e9-1b46-4666-b3e0-177ec3fe2e05 6965226966304bd5a3ae07587d5ef958 
d2390e6dd4ce4b48866be0d3d1417c01] Exception during message handling
  2014-03-14 13:19:39.008 28576 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2014-03-14 13:19:39.008 28576 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
  2014-03-14 13:19:39.008 28576 TRACE nova.openstack.common.rpc.amqp **args)
  2014-03-14 13:19:39.008 28576 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
  2014-03-14 13:19:39.008 28576 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt,

[Yahoo-eng-team] [Bug 1298456] [NEW] db migration: quotas table is not created for NSX plugin

2014-03-27 Thread Akihiro Motoki
Public bug reported:

The folsom_initial db migration does not create the quotas table for the NSX
plugin.

** Affects: neutron
 Importance: Low
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298456

Title:
  db migration: quotas table is not created for NSX plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The folsom_initial db migration does not create the quotas table for the
  NSX plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298456/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298459] [NEW] db migration: some tables are not created for bigswitch plugin and bigswitch mech driver

2014-03-27 Thread Akihiro Motoki
Public bug reported:

For the Big Switch plugin, the networkdhcpagentbindings table is not created
by db migration.
http://openstack-ci-gw.bigswitch.com/logs/refs-changes-96-40296-13/BSN_PLUGIN/logs/screen/screen-q-svc.log.gz

For the Big Switch ML2 mechanism driver, the neutron_ml2.consistencyhashes
table is not created by db migration.
http://openstack-ci-gw.bigswitch.com/logs/refs-changes-96-40296-13/BSN_ML2/logs/screen/screen-q-svc.log.gz

** Affects: neutron
 Importance: Low
 Status: New


** Tags: bigswitch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298459

Title:
  db migration: some tables are not created for bigswitch plugin and
  bigswitch mech driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  For the Big Switch plugin, the networkdhcpagentbindings table is not created
  by db migration.
  http://openstack-ci-gw.bigswitch.com/logs/refs-changes-96-40296-13/BSN_PLUGIN/logs/screen/screen-q-svc.log.gz

  For the Big Switch ML2 mechanism driver, the neutron_ml2.consistencyhashes
  table is not created by db migration.
  http://openstack-ci-gw.bigswitch.com/logs/refs-changes-96-40296-13/BSN_ML2/logs/screen/screen-q-svc.log.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298461] [NEW] embrane plugin lacks db migration

2014-03-27 Thread Akihiro Motoki
Public bug reported:

A db migration for the Embrane plugin needs to be added.

It was found during the review: https://review.openstack.org/#/c/40296/

** Affects: neutron
 Importance: Low
 Status: New


** Tags: embrane

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298461

Title:
  embrane plugin lacks db migration

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  A db migration for the Embrane plugin needs to be added.

  It was found during the review:
  https://review.openstack.org/#/c/40296/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267636] Re: Horizon will not authenticate against keystone v3

2014-03-27 Thread Akihiro Motoki
For clarification, I added django-openstack-auth to the affected
projects. Thanks for the fix.

** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: django-openstack-auth
   Importance: Undecided => High

** Changed in: django-openstack-auth
 Assignee: (unassigned) => David Lyle (david-lyle)

** Changed in: django-openstack-auth
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1267636

Title:
  Horizon will not authenticate against keystone v3

Status in Django OpenStack Auth:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:

  Using devstack.
  In /opt/stack/horizon/horizon/openstack_dashboard/local/local_settings.py

  Setting the following

  OPENSTACK_API_VERSIONS = {
  "identity": 3
  }

  results in an authentication failure in keystone.
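
  For reference, a hedged example of the local_settings.py combination
  typically used with the v3 identity API (host and port are placeholders);
  note that the failing log below shows the v3-style /auth/tokens request
  being sent to the v2.0 endpoint:

      OPENSTACK_HOST = "127.0.0.1"  # placeholder
      OPENSTACK_API_VERSIONS = {
          "identity": 3,
      }
      OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST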

  A keystone v3 endpoint is available.

  What follows are the keystone logs for the failure case:

  (eventlet.wsgi.server): 2014-01-09 14:07:04,229 INFO log write (28305)
  accepted ('127.0.0.1', 4)

  (routes.middleware): 2014-01-09 14:07:04,231 DEBUG middleware __call__ 
Matched POST /auth/tokens
  (routes.middleware): 2014-01-09 14:07:04,231 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/auth/tokens'}
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ 
Matched POST /auth/tokens
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/auth/tokens'}
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ No 
route matched for POST /auth/tokens
  (access): 2014-01-09 14:07:04,233 INFO core __call__ 127.0.0.1 - - 
[09/Jan/2014:22:07:04 +] "POST http://127.0.0.1:5000/v2.0/auth/tokens 
HTTP/1.0" 404 93
  (eventlet.wsgi.server): 2014-01-09 14:07:04,233 INFO log write 127.0.0.1 - - 
[09/Jan/2014 14:07:04] "POST /v2.0/auth/tokens HTTP/1.1" 404 228 0.002791

  
  When using the default (v2.0) keystone (having the above code commented out), 
authentication succeeds:

  What follows are the corresponding partial  keystone logs for the
  success case:

  (eventlet.wsgi.server): 2014-01-09 14:08:41,806 INFO log write (28305)
  accepted ('127.0.0.1', 41112)

  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/tokens'}
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/tokens'}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Route 
path: '/tokens', defaults: {'action': u'authenticate', 'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Match 
dict: {'action': u'authenticate', 'controller': 
}
  (keystone.common.wsgi): 2014-01-09 14:08:41,808 DEBUG wsgi __call__ arg_dict: 
{}
  (keystone.openstack.common.versionutils): 2014-01-09 14:08:41,809 WARNING log 
deprecated Deprecated: v2 API is deprecated as of Icehouse in favor of v3 API 
and may be removed in K.
  (dogpile.core.dogpile): 2014-01-09 14:08:41,809 DEBUG dogpile _enter 
NeedRegenerationException

  Using (eventlet.wsgi.server): 2014-01-09 14:08:41,806 INFO log write
  (28305) accepted ('127.0.0.1', 41112)

  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/tokens'}
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/tokens'}
  (routes.middleware): 2014-01-09 14:08:41,808

[Yahoo-eng-team] [Bug 1298934] [NEW] Cannot understand "Gigabytes" in default quotas table

2014-03-28 Thread Akihiro Motoki
Public bug reported:

In the default quotas table, we see the entry "Gigabytes".
It comes from Cinder and is reasonable as output from Cinder.
However, Horizon displays quota values from various projects, and "Gigabytes"
alone is hard to understand.
I would suggest changing it to "Volume Gigabytes".

Similarly, "Snapshots" means "Volume Snapshots"; it is better to rename it
to "Volume Snapshots".

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1298934

Title:
  Cannot understand "Gigabytes" in default quotas table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the default quotas table, we see the entry "Gigabytes".
  It comes from Cinder and is reasonable as output from Cinder.
  However, Horizon displays quota values from various projects, and
  "Gigabytes" alone is hard to understand.
  I would suggest changing it to "Volume Gigabytes".

  Similarly, "Snapshots" means "Volume Snapshots"; it is better to rename it
  to "Volume Snapshots".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1298934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293627] Re: When a VM is in suspended status, and the host(a dedicated compute node) get rebooted, the VM can not be resumed after the host is back to active.

2014-03-28 Thread Akihiro Motoki
Horizon does not support rebooting a compute node, so this is apparently NOT
related to Horizon.
Horizon just retrieves the status from Nova. If you still have an issue,
please report it to nova (or cinder if it only happens with an attached
volume).

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1293627

Title:
  When a VM is in suspended status, and the host(a dedicated compute
  node) get rebooted, the VM can not be resumed after the host is back
  to active.

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The VM is an Ubuntu Server 12.04 LTS with one cinder volume attached.
  After I suspended the VM and rebooted the host (which is a dedicated
  compute node), the VM can't be resumed when the host is back.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1293627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288328] Re: "View details" of containers is not accurate

2014-03-28 Thread Akihiro Motoki
Multiple folks including me confirmed it works as expected, so I am closing
this issue by marking it "Invalid". If it still occurs, feel free to
reopen.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1288328

Title:
  "View details" of containers is not accurate

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  From the Horizon Portal we can view the container and the objects in
  them. When attempting to view the details of the container, it always
  reports the container has "Object Count" as None and "Size" as 0
  bytes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1288328/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299188] [NEW] qunit and jasmine urlpatterns should be defined only for test

2014-03-28 Thread Akihiro Motoki
Public bug reported:

qunit and jasmine urlpatterns are defined in horizon/site_urls.py if
settings.DEBUG is True.
DEBUG=True is a valid situation outside a dev environment (it is reasonable
when testing).

It is better to use a flag other than DEBUG and define the "jasmine" and
"qunit" urlpatterns only if that new flag is True.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1299188

Title:
  qunit and jasmine urlpatterns should be defined only for test

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  qunit and jasmine urlpatterns are defined in horizon/site_urls.py if
  settings.DEBUG is True.
  DEBUG=True is a valid situation outside a dev environment (it is reasonable
  when testing).

  It is better to use a flag other than DEBUG and define the "jasmine" and
  "qunit" urlpatterns only if that new flag is True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1299188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1207598] Re: Security groups with underscore in name can't be added to running instances

2014-03-29 Thread Akihiro Motoki
The issue no longer exists in Havana and later.
Grizzly release is now marked as EOL and no longer accepts new changes.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1207598

Title:
  Security groups with underscore in name can't be added to running
  instances

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) grizzly series:
  Confirmed

Bug description:
  This was discovered on Grizzly, it's currently blocked on master by this:
  https://bugs.launchpad.net/horizon/+bug/1207184

  There's an issue with the construction of the form for adding security
  groups. The project membership widget links the javascript generated
  html with the form by storing values like "id_user_".

  
https://github.com/openstack/horizon/blob/a81bfc251b33b7f796cc72327159d1e547689523/horizon/static/horizon/js/horizon.projects.js#L193-L202

  In the security group case this becomes
  "id_user_" or e.g.,
  "id_user_sg_with_underscores". This is a problem when we try to
  separate the id_user_ part from the security group name, since it
  currently splits on the last underscore:

  
https://github.com/openstack/horizon/blob/a81bfc251b33b7f796cc72327159d1e547689523/horizon/static/horizon/js/horizon.projects.js#L14-L16

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1207598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1224374] Re: Horizon English po files should be updated automatically

2014-03-29 Thread Akihiro Motoki
horizon-upstream-translation-update.sh now runs as a post-merge job.

http://git.openstack.org/cgit/openstack-infra/config/tree/modules/jenkins/files/slave_scripts/upstream_translation_horizon.sh

https://jenkins.openstack.org/view/All/job/horizon-upstream-translation-update/

** Changed in: horizon
Milestone: None => icehouse-rc1

** Changed in: horizon
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1224374

Title:
  Horizon English po files should be updated automatically

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  English PO files in the Horizon repo are updated manually.
  It is better to update them automatically with an infra job.

  The required tasks are:
  - Fetch the latest horizon repo
  - Run ./run_tests.sh --makemessages
  - Propose a patch if there is a change in the PO files

  Note that we only need to update the English PO files (English is the source
  language of translations) and don't need to update other language PO files.
  This keeps the patch simple.

  The infra job does not need to upload PO files to Transifex.
  Transifex now watches the PO files in the Horizon repo and fetches them when
  updated.

  Regarding importing translations from Transifex, the I18N team needs more
  discussion about when they should be imported.
  Thus the translation import is out of scope for this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1224374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299828] [NEW] Cannot know which container is focused

2014-03-30 Thread Akihiro Motoki
Public bug reported:

In "Containers" panel, we cannot distinguish which container is focused.
Previously focused container has different background color and we can know a 
focused folder.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1299828

Title:
  Cannot know which container is focused

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In "Containers" panel, we cannot distinguish which container is focused.
  Previously focused container has different background color and we can know a 
focused folder.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1299828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300290] [NEW] Import translations for Icehouse release

2014-03-31 Thread Akihiro Motoki
Public bug reported:

We need to import translations for the Icehouse release.


[Mar 31] Under the current plan, the translation import is scheduled for next
Monday (UTC). Most translations are completed, but we had some string updates
at the last moment of RC1, and translators need to catch up with them.

** Affects: horizon
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: icehouse-rc-potential

** Changed in: horizon
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Changed in: horizon
   Importance: Undecided => High

** Changed in: horizon
Milestone: None => icehouse-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1300290

Title:
  Import translations for Icehouse release

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We need to import translations for the Icehouse release.


  [Mar 31] Under the current plan, the translation import is scheduled for
  next Monday (UTC). Most translations are completed, but we had some string
  updates at the last moment of RC1, and translators need to catch up with
  them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1300290/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300868] [NEW] action title can overflow pulldown menu when action title is long

2014-04-01 Thread Akihiro Motoki
Public bug reported:

When an action title in a pulldown menu is long, the title can overflow the
pulldown menu.
This was found while checking the Japanese translation.
https://cloud.githubusercontent.com/assets/6553985/2580835/d08465a4-b9b1-11e3-9262-8509ab3589ed.png

It would be nice if the width of a pulldown menu fit its longest action
title.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1300868

Title:
  action title can overflow pulldown menu when action title is long

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When an action title in a pulldown menu is long, the title can overflow the
  pulldown menu.
  This was found while checking the Japanese translation.
  https://cloud.githubusercontent.com/assets/6553985/2580835/d08465a4-b9b1-11e3-9262-8509ab3589ed.png

  It would be nice if the width of a pulldown menu fit its longest action
  title.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1300868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301105] [NEW] Second firewall creation returns 500

2014-04-01 Thread Akihiro Motoki
Public bug reported:

Creating a second firewall returns 500.
The failure itself is expected behavior of the firewall reference
implementation, but an internal server error should not be returned.
It is some kind of quota error, and 409 looks appropriate.
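
A rough sketch of the kind of change implied here, mapping the failure to a
409 by subclassing neutron's Conflict exception; the class name and message
are illustrative, not the actual fix:

    from neutron.common import exceptions as n_exc

    class FirewallCountExceeded(n_exc.Conflict):
        """Raised when a tenant tries to create a second firewall while the
        reference implementation supports only one firewall per tenant."""
        message = ("Exceeded allowed count of firewalls for tenant "
                   "%(tenant_id)s. Only one firewall is supported per "
                   "tenant.")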

** Affects: neutron
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress


** Tags: fwaas icehouse-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301105

Title:
  Second firewall creation returns 500

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Creating a second firewall returns 500.
  The failure itself is expected behavior of the firewall reference
  implementation, but an internal server error should not be returned.
  It is some kind of quota error, and 409 looks appropriate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301337] [NEW] We have three duplicated tests for check_ovs_vxlan_version

2014-04-02 Thread Akihiro Motoki
Public bug reported:

test_ovs_lib, test_ovs_neutron_agent and test_ofa_neutron_agent duplicate the
same unit tests for check_ovs_vxlan_version. The only difference is
SystemError (from ovs_lib) versus SystemExit (from the agents).
The tested logic is 99% the same, and the unit tests in the ovs/ofa agents
look unnecessary.

** Affects: neutron
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress


** Tags: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301337

Title:
  We have three duplicated tests for check_ovs_vxlan_version

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  test_ovs_lib, test_ovs_neutron_agent and test_ofa_neutron_agent duplicate
  the same unit tests for check_ovs_vxlan_version. The only difference is
  SystemError (from ovs_lib) versus SystemExit (from the agents).
  The tested logic is 99% the same, and the unit tests in the ovs/ofa agents
  look unnecessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301350] [NEW] Maximum length of string fields in API should be validated

2014-04-02 Thread Akihiro Motoki
Public bug reported:

Right now we have no validation of maximum string length in the API.
All string fields call the string validator with a None argument.
As a result, some strings will be truncated at the database layer.

It is better to validate the maximum string length.
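
A minimal sketch of the kind of check being proposed, in the style of an
attribute validator; the function below is illustrative, not the actual
neutron validator:

    def validate_string(data, max_len=None):
        """Return an error message, or None when the value is acceptable."""
        if not isinstance(data, str):
            return "'%s' is not a valid string" % (data,)
        if max_len is not None and len(data) > max_len:
            return "'%s' exceeds maximum length of %s" % (data, max_len)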

** Affects: neutron
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301350

Title:
  Maximum length of string fields in API should be validated

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Right now we have no validation of maximum string length in the API.
  All string fields call the string validator with a None argument.
  As a result, some strings will be truncated at the database layer.

  It is better to validate the maximum string length.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301534] [NEW] keep selected panel when switching tenant

2014-04-02 Thread Akihiro Motoki
Public bug reported:

When we switch tenants in the drop-down menu, the selected panel is not kept
and the overview panel is displayed.
It would be nice to keep the selected panel even after switching tenants.

** Affects: horizon
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1301534

Title:
  keep selected panel when switching tenant

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When we switch tenants in the drop-down menu, the selected panel is not kept
  and the overview panel is displayed.
  It would be nice to keep the selected panel even after switching tenants.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301534/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302390] [NEW] whitelist_externals = bash in tox.ini should be pep8 section

2014-04-04 Thread Akihiro Motoki
Public bug reported:

Commit 085a35d657cf0fa41a402f2af66c4beaa0f60db2 fixes the translation jobs,
but "whitelist_externals = bash" was added to the wrong section. It belongs
in the pep8 section of tox.ini, not the functional section.
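
A trimmed tox.ini fragment showing only the directive in question (all other
options in the real sections are omitted here):

    [testenv:pep8]
    whitelist_externals = bash

    [testenv:functional]
    # whitelist_externals = bash  <- not needed in this section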

** Affects: neutron
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302390

Title:
  whitelist_externals = bash in tox.ini should be pep8 section

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Commit 085a35d657cf0fa41a402f2af66c4beaa0f60db2 fixes the translation
  jobs, but "whitelist_externals = bash" was added to the wrong section.
  It belongs in the pep8 section of tox.ini, not the functional section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302390/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302532] Re: docs build for django-based projects should run with DJANGO_SETTINGS_MODULE envvar

2014-04-04 Thread Akihiro Motoki
My plan is:
- to add a "docs" env to tox.ini in Horizon and Django OpenStack Auth which
  defines DJANGO_SETTINGS_MODULE and runs build_sphinx (see the sketch below)
- to fix run_docs.sh in slave_scripts to use "tox -edocs" rather than
  "tox -evenv "


** Changed in: openstack-ci
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Also affects: horizon
   Importance: Undecided
   Status: New

** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: horizon
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Changed in: django-openstack-auth
     Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Changed in: horizon
   Importance: Undecided => Medium

** Changed in: django-openstack-auth
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1302532

Title:
  docs build for django-based projects should run with
  DJANGO_SETTINGS_MODULE envvar

Status in Django OpenStack Auth:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Core Infrastructure:
  New

Bug description:
  When running docs build in the gate and post jobs for django-based
  projects (e.g., horizon, django_openstack_auth,  ...), we need to set
  DJANGO_SETTINGS_MODULE envvar.

  We can see the following errors in console logs:

  horizon-docs, gate-horizon-docs:
  ImproperlyConfigured: The SECRET_KEY setting must not be empty.
  https://jenkins.openstack.org/view/All/job/horizon-docs/
  https://jenkins.openstack.org/view/All/job/gate-horizon-docs/

  django-openstack-auth:
  error: invalid command 'build_sphinx'
  ERROR: InvocationError: 
'/home/jenkins/workspace/gate-django_openstack_auth-docs/.tox/venv/bin/python 
setup.py build_sphinx'
  It also comes from the lack of DJANGO_SETTINGS_MODULE.
  https://jenkins.openstack.org/view/All/job/gate-django_openstack_auth-docs/
  https://jenkins.openstack.org/view/All/job/django_openstack_auth-docs/

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1302532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303091] [NEW] workflow contributes() should not return None

2014-04-05 Thread Akihiro Motoki
Public bug reported:

Several workflow classes use the following pattern:

def contribute(self, data, context):
context = super(AddIPSecPolicyStep, self).contribute(data, context)
context.update({'lifetime': {'units': data['lifetime_units'],
 'value': data['lifetime_value']}})
context.pop('lifetime_units')
context.pop('lifetime_value')
if data:
return context

A return value of contribute() is passed to contribute() of subsequent steps,
so the context should always be returned from contribute().
Otherwise it leads to resetting self.context in the workflow class.

https://github.com/openstack/horizon/blob/master/horizon/workflows/base.py#L656

Fortunately, the wrong style only appears in workflows with a single step.
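
A sketch of the corrected pattern (based on the step above), returning the
context unconditionally:

    def contribute(self, data, context):
        context = super(AddIPSecPolicyStep, self).contribute(data, context)
        if data:
            context.update({'lifetime': {'units': data['lifetime_units'],
                                         'value': data['lifetime_value']}})
            context.pop('lifetime_units')
            context.pop('lifetime_value')
        # always return the context so that subsequent steps and
        # Workflow.context are not reset to None
        return context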

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1303091

Title:
  workflow contributes() should not return None

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Several workflow classes use the following pattern:

  def contribute(self, data, context):
  context = super(AddIPSecPolicyStep, self).contribute(data, context)
  context.update({'lifetime': {'units': data['lifetime_units'],
   'value': data['lifetime_value']}})
  context.pop('lifetime_units')
  context.pop('lifetime_value')
  if data:
  return context

  A return value of contribute() is passed to contribute() of subsequent steps,
  so the context should always be returned from contribute().
  Otherwise it leads to resetting self.context in the workflow class.

  
https://github.com/openstack/horizon/blob/master/horizon/workflows/base.py#L656

  Fortunately, the wrong style only appears in workflows with a single step.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1303091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301626] Re: Same Keypairs accessible in multiple projects assigned to same user

2014-04-05 Thread Akihiro Motoki
In Nova, each keypair belongs to a user, not a project; this is Nova
behavior. In my understanding the behavior is reasonable, because sharing a
keypair would mean everyone in a project shares one private key, which is not
a good idea from a security perspective. If you need to share a VM, you can
add additional ssh public keys to the VM's authorized_keys file instead.
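
For example (a sketch only; the user name, VM address and key file name are
illustrative), an existing user of the VM can append a teammate's public key:

    $ ssh ubuntu@<vm-address> 'cat >> ~/.ssh/authorized_keys' < teammate_id_rsa.pub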

I am marking this bug "Won't Fix" for Horizon.
If you need to discuss this behavior further, please raise it on the ML or add
Nova to the "affected projects".

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1301626

Title:
  Same Keypairs accessible in multiple projects assigned to same user

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  I have two projects assigned to the same user in the Horizon Dashboard.
  Each project has a separate ID, a separate set of VMs, different floating
  IPs and different security rules assigned.

  But the keypairs are shared: both projects see the same ones.

  This is causing an issue, since it implies that I can access VMs belonging
  to different projects using the same key pair. Also, I cannot add a new
  keypair for a particular project, as it becomes visible in both projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258440] Re: environment variable DJANGO_SETTINGS_MODULE or call settings.configure() is undefined

2014-04-05 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1257885 ***
https://bugs.launchpad.net/bugs/1257885

This is duplicated with bug 1257885 and bug 1257809.

** This bug has been marked a duplicate of bug 1257885
   horizon does not work with django 1.6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1258440

Title:
   environment variable DJANGO_SETTINGS_MODULE or call
  settings.configure() is undefined

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Having installed OpenStack with devstack, we can't log in to the dashboard
  and get an error:

  The server encountered an internal error or misconfiguration and was unable 
to complete your request.
  Please contact the server administrator, [no address given] and inform them 
of the time the error occurred, and anything you might have done that may have 
caused the error.
  More information about this error may be available in the server error log.
  Apache/2.2.22 (Ubuntu) Server at 192.168.227.128 Port 80

  Checking Horizon_error.log, we see (partial):
  [Fri Dec 06 16:39:29 2013] [error] [client 192.168.227.128]   File 
"/usr/local/lib/python2.7/dist-packages/django/dispatch/dispatcher.py", line 
88, in connect
  [Fri Dec 06 16:39:29 2013] [error] [client 192.168.227.128] if 
settings.DEBUG:
  [Fri Dec 06 16:39:29 2013] [error] [client 192.168.227.128]File 
"/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 54, in 
__getattr__
  [Fri Dec 06 16:39:29 2013] [error] [client 192.168.227.128] 
self._setup(name)
  [Fri Dec 06 16:39:29 2013] [error] [client 192.168.227.128]   File 
"/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 47, in 
_setup
  [Fri Dec 06 16:39:29 2013] [error] [client 192.168.227.128] % (desc, 
ENVIRONMENT_VARIABLE))
  [Fri Dec 06 16:39:29 2013][error] [client 192.168.227.128] 
ImproperlyConfigured: Requested setting DEBUG, but settings are not configured. 
You must either define the environment variable DJANGO_SETTINGS_MODULE or call 
settings.configure() before accessing settings.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1258440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265626] Re: Resource usage, global object store usage fails when project is deleted but data is available

2014-04-07 Thread Akihiro Motoki
"Global Object Store Usage" existed in the initial Havana release but it
was troublesome and was removed in a subsequent Havana stable update. I
will mark this bug report as Invalid.

On the other hand, we need to handle a case where a project is deleted
as a whole.

Thanks for the report.

** Tags added: ceilometer

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1265626

Title:
  Resource usage, global object store usage fails when project is
  deleted but data is available

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When requesting "Global Object Store Usage" under "Resource Usage" in the
  Dashboard after a project has been deleted, the request fails with a 404 in
  error_log:

  NotFound: Could not find project, b1849da6f313414793b53fdbc6871177.
  (HTTP 404)

  and "Error: Unable to retrieve statistics." as error via the ajax-
  popup.

  Expected behaviour would be that this project's line is omitted from the
  overview, but instead the whole page hard-fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1265626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276251] Re: Minimum disk details should be updated when image is selected considering the image size.

2014-04-07 Thread Akihiro Motoki
As Ana commented, we cannot know the minimum disk requirement on the form in
OpenStack Dashboard.
To know the min_disk requirement we would need to analyze the image to be
created, and even if we could, there is no guarantee the estimated value would
be correct. I think it is not something Horizon should care about.


** Changed in: horizon
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1276251

Title:
  Minimum disk details should be updated when image is selected
  considering the image size.

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  Steps to reproduce the problem:
  1. Log in to Horizon.
  2. Select "Create an Image".
  3. Input the required parameters, such as image source and type.
  4. After selecting the image, "Minimum Disk" should be updated with the
     image size instead of remaining at "no minimum".

  Expected behavior:
  The disk value should not be zero by default; it should reflect the size of
  the selected image source.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1276251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304105] [NEW] Two duplicated config section: securitygroup and security_group

2014-04-07 Thread Akihiro Motoki
Public bug reported:

There are two duplicated configuration sections: security_group and 
securitygroup.
Reference dev ML thread: 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032086.html

[securitygroup] firewall_driver
[security_group] enable_security_group

The [securitygroup] section exists in Havana and previous releases and it is
the right section name.
When we introduced the enable_security_group option, we seem to have added a
new section accidentally. We did not intend to introduce a new section name.

Both firewall_driver and enable_security_group should be placed in
[securitygroup].
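
For example (a sketch; the firewall_driver value shown is just the common
iptables-based driver and is illustrative):

    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True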

It should be fixed before the release.

** Affects: neutron
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: icehouse-rc-potential

** Tags removed: icehouse-rcpo
** Tags added: icehouse-rc-potential

** Changed in: neutron
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304105

Title:
  Two duplicated config section: securitygroup and security_group

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There are two duplicated configuration sections: security_group and 
securitygroup.
  Reference dev ML thread: 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032086.html

  [securitygroup] firewall_driver
  [security_group] enable_security_group

  The [securitygroup] section exists in Havana and previous releases and it is
  the right section name.
  When we introduced the enable_security_group option, we seem to have added a
  new section accidentally. We did not intend to introduce a new section name.

  Both firewall_driver and enable_security_group should be placed in
  [securitygroup].

  It should be fixed before the release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1304105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583757] [NEW] multiline blocktrans in django templates should use 'trimmed' option

2016-05-19 Thread Akihiro Motoki
Public bug reported:

Django templates support the 'trimmed' option [1]: it removes newline
characters from the beginning and the end of the content of the
{% blocktrans %} tag, replaces any whitespace at the beginning and end of a
line, and merges all lines into one using a space character to separate them.

Without the 'trimmed' option, translators get source strings with meaningless
newlines and whitespace like below. Zanata (the translation check site) checks
the number of newlines, so translators need to insert newlines just to silence
Zanata validations. It is really annoying and meaningless.

  #: 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/snapshots/_update_status.html:10
  msgid ""
  "\n"
  "The status of a volume snapshot is normally managed automatically.  "
  "In some circumstances\n"
  "an administrator may need to explicitly update the status value. This"
  " is equivalent to\n"
  "the cinder snapshot-reset-state command.\n"
  ""
  msgstr ""

By using the 'trimmed' option in Django templates, we can get rid of
meaningless newlines in extracted message strings.
The recently released version of django-babel supports the 'trimmed' option [2]
and now we can move the situation forward.
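
As an illustration (a sketch based on the snapshot status help text above;
HTML markup omitted), the template change is simply adding the option to the
tag:

    {% blocktrans trimmed %}
      The status of a volume snapshot is normally managed automatically.
      In some circumstances an administrator may need to explicitly update
      the status value. This is equivalent to the cinder snapshot-reset-state
      command.
    {% endblocktrans %}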

[1] 
https://docs.djangoproject.com/ja/1.9/topics/i18n/translation/#blocktrans-template-tag
[2] 
https://github.com/python-babel/django-babel/commit/88b389381c0e269605311ae07029555b65a86bc5

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583757

Title:
  multiline blocktrans in django templates should use 'trimmed' option

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Django templates support the 'trimmed' option [1]: it removes newline
  characters from the beginning and the end of the content of the
  {% blocktrans %} tag, replaces any whitespace at the beginning and end of
  a line, and merges all lines into one using a space character to
  separate them.

  Without the 'trimmed' option, translators get source strings with
  meaningless newlines and whitespace like below. Zanata (the translation
  check site) checks the number of newlines, so translators need to insert
  newlines just to silence Zanata validations. It is really annoying and
  meaningless.

#: 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/snapshots/_update_status.html:10
msgid ""
"\n"
"The status of a volume snapshot is normally managed automatically.  "
"In some circumstances\n"
"an administrator may need to explicitly update the status value. This"
" is equivalent to\n"
"the cinder snapshot-reset-state command.\n"
""
msgstr ""

  By using the 'trimmed' option in Django templates, we can get rid of
  meaningless newlines in extracted message strings.
  The recently released version of django-babel supports the 'trimmed' option
  [2] and now we can move the situation forward.

  [1] 
https://docs.djangoproject.com/ja/1.9/topics/i18n/translation/#blocktrans-template-tag
  [2] 
https://github.com/python-babel/django-babel/commit/88b389381c0e269605311ae07029555b65a86bc5

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592965] [NEW] i18n: babel_extract_angular should trim whitespaces in AngularJS templates

2016-06-15 Thread Akihiro Motoki
Public bug reported:

This is the "AngularJS templates" version of bug 1583757.

At the moment, translators get source strings with meaningless newlines
and whitespace like below. Zanata (the translation check site) checks the
number of newlines, so translators need to insert newlines just to silence
Zanata validations. It is really annoying and meaningless.

For Django templates, Django provides the 'trimmed' option to trim
whitespace in extracted messages. It would be nice to have behavior
similar to Django's 'trimmed' option for AngularJS template message
extraction. In the HTML case we don't need to care about consecutive
whitespace, so we can simply trim whitespace in AngularJS HTML
templates.
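
A minimal sketch of the kind of normalization meant here (not the actual
babel_extract_angular code):

    import re

    def trim_whitespace(text):
        # collapse newlines and runs of whitespace into single spaces and
        # strip both ends, mirroring Django's 'trimmed' blocktrans behaviour
        return re.sub(r'\s+', ' ', text).strip()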

#: 
openstack_dashboard/dashboards/project/static/dashboard/project/containers/create-container-modal.html:40
msgid ""
"A container is a storage compartment for your data and provides a way\n"
"  for you to organize your data. You can think of a container as "
"a\n"
"  folder in Windows® or a directory in UNIX®. The primary "
"difference\n"
"  between a container and these other file system concepts is "
"that\n"
"  containers cannot be nested. You can, however, create an "
"unlimited\n"
"  number of containers within your account. Data must be stored "
"in a\n"
"  container so you must have at least one container defined in "
"your\n"
"  account prior to uploading data."
msgstr ""

We would like to have a string like:

#: 
openstack_dashboard/dashboards/project/static/dashboard/project/containers/create-container-modal.html:40
msgid ""
"A container is a storage compartment for your data and provides a way for"
" you to organize your data. You can think of a container as a folder in "
"Windows® or a directory in UNIX®. The primary difference between a "
"container and these other file system concepts is that containers cannot "
"be nested. You can, however, create an unlimited number of containers "
"within your account. Data must be stored in a container so you must have "
"at least one container defined in your account prior to uploading data."
msgstr ""

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1592965

Title:
  i18n: babel_extract_angular should trim whitespaces in AngularJS
  templates

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is the "AngularJS templates" version of bug 1583757.

  At the moment, translators get source strings with meaningless newlines
  and whitespace like below. Zanata (the translation check site) checks the
  number of newlines, so translators need to insert newlines just to silence
  Zanata validations. It is really annoying and meaningless.

  For Django templates, Django provides the 'trimmed' option to trim
  whitespace in extracted messages. It would be nice to have behavior
  similar to Django's 'trimmed' option for AngularJS template message
  extraction. In the HTML case we don't need to care about consecutive
  whitespace, so we can simply trim whitespace in AngularJS HTML
  templates.

  #: 
openstack_dashboard/dashboards/project/static/dashboard/project/containers/create-container-modal.html:40
  msgid ""
  "A container is a storage compartment for your data and provides a way\n"
  "  for you to organize your data. You can think of a container as "
  "a\n"
  "  folder in Windows® or a directory in UNIX®. The primary "
  "difference\n"
  "  between a container and these other file system concepts is "
  "that\n"
  "  containers cannot be nested. You can, however, create an "
  "unlimited\n"
  "  number of containers within your account. Data must be stored "
  "in a\n"
  "  container so you must have at least one container defined in "
  "your\n"
  "  account prior to uploading data."
  msgstr ""

  We would like to have a string like:

  #: 
openstack_dashboard/dashboards/project/static/dashboard/project/containers/create-container-modal.html:40
  msgid ""
  "A container is a storage compartment for your data and provides a way for"
  " you to organize your data. You can think of a container as a folder in "
  "Windows® or a directory in UNIX®. The primary difference between a "
  "container and these other file system concepts is that containers cannot "
  "be nested. You can, however, create an unlimited number of containers "
  "within your account. Data must be stored in a container so you must have "
  "at least one container defined in your account prior to uploading data."
  msgstr ""

[Yahoo-eng-team] [Bug 1497272] Re: L3 HA: Unstable rescheduling time for keepalived v1.2.7

2016-06-30 Thread Akihiro Motoki
Per bug discussion, it is better to add a note on appropriate keepalived
versions to the networking guide.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Tags added: networking-guide

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497272

Title:
  L3 HA: Unstable rescheduling time for keepalived v1.2.7

Status in neutron:
  Triaged
Status in openstack-manuals:
  New

Bug description:
  I have tested L3 HA on an environment with 3 controllers and 1 compute node
(Kilo) with this simple scenario:
  1) ping the VM by floating IP
  2) disable the master l3-agent (the one whose ha_state is active)
  3) wait for pings to continue and another agent to become active
  4) check the number of packets that were lost

  My results are the following:
  1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
  2) When max_l3_agents_per_router=3 or 0 (meaning the router will be scheduled
     on every agent), 10 to 70 packets were lost.

  I should mention that in both cases there was only one HA router.

  It is expected that fewer packets would be lost when
  max_l3_agents_per_router=3 (or 0).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323769] [NEW] nec plugin: AttributeError: No such RPC function 'update_floatingip_statuses'

2014-05-27 Thread Akihiro Motoki
Public bug reported:

In the NEC plugin with the l3-agent (Icehouse), "AttributeError: No such RPC
function 'update_floatingip_statuses'" occurs.

update_floatingip_statuses was implemented in Icehouse and the RPC callback
version related to L3RpcCallbackMixin was bumped to 1.1, but the version of
L3RpcCallback in the NEC plugin was not bumped to 1.1 yet.
The update_floatingip_statuses RPC call from the l3-agent expects RPC version 1.1.
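
The fix is essentially a one-line version bump; a sketch (class and module
names follow the Icehouse-era code and are shown for illustration):

    from neutron.db import l3_rpc_base

    class L3RpcCallback(l3_rpc_base.L3RpcCallbackMixin):
        # 1.1 adds update_floatingip_statuses, which the Icehouse l3-agent
        # calls; leaving this at 1.0 produces the AttributeError above
        RPC_API_VERSION = '1.1'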

** Affects: neutron
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: icehouse-backport-potential nec

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323769

Title:
  nec plugin: AttributeError: No such RPC function
  'update_floatingip_statuses'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the NEC plugin with the l3-agent (Icehouse), "AttributeError: No such RPC
  function 'update_floatingip_statuses'" occurs.

  update_floatingip_statuses was implemented in Icehouse and the RPC callback
  version related to L3RpcCallbackMixin was bumped to 1.1, but the version of
  L3RpcCallback in the NEC plugin was not bumped to 1.1 yet.
  The update_floatingip_statuses RPC call from the l3-agent expects RPC version 1.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1344755] [NEW] libvirt OVS hybrid VIF driver does not honor network_device_mtu config

2014-07-19 Thread Akihiro Motoki
Public bug reported:

The plug_ovs_hybrid VIF driver in libvirt/vif.py does not honor the
network_device_mtu configuration variable.
This prevents operators from using jumbo frames.
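
A sketch of the expected behaviour (illustrative only, not the actual nova
code), applying the configured MTU to a newly created device:

    from oslo.config import cfg
    from nova import utils

    CONF = cfg.CONF

    def set_device_mtu(dev):
        # assumes the existing network_device_mtu option is registered
        # elsewhere in nova; do nothing when it is unset
        if CONF.network_device_mtu:
            utils.execute('ip', 'link', 'set', dev, 'mtu',
                          CONF.network_device_mtu, run_as_root=True)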

** Affects: nova
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: icehouse-backport-potential network

** Changed in: nova
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Tags added: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1344755

Title:
  libvirt OVS hybrid VIF driver does not honor network_device_mtu config

Status in OpenStack Compute (Nova):
  New

Bug description:
  The plug_ovs_hybrid VIF driver in libvirt/vif.py does not honor the
  network_device_mtu configuration variable.
  This prevents operators from using jumbo frames.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1344755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1344755] Re: libvirt OVS hybrid VIF driver does not honor network_device_mtu config

2014-07-21 Thread Akihiro Motoki
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1344755

Title:
  libvirt OVS hybrid VIF driver does not honor network_device_mtu config

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The plug_ovs_hybrid VIF driver in libvirt/vif.py does not honor the
  network_device_mtu configuration variable.
  This prevents operators from using jumbo frames.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1344755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341040] Re: neutron CLI should not allow user to create /32 subnet

2014-07-21 Thread Akihiro Motoki
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: python-neutronclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341040

Title:
  neutron CLI should not allow user to create /32 subnet

Status in Python client library for Neutron:
  New

Bug description:
  I'm using devstack stable/icehouse, and my neutron version is
  1409da70959496375f1ac45457663a918ec8

  I created an internal network not connected to the router. If I
  mis-configure the subnet, Horizon will catch the problem, but the neutron
  CLI will not.
  Subsequently a VM cannot be created on this misconfigured subnet, as it
  runs out of IPs to offer to the VM.
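
  A sketch of the kind of client-side check implied (illustrative; the actual
  neutronclient validation may differ), assuming the netaddr library is
  available:

      import netaddr

      def check_subnet_cidr(cidr):
          net = netaddr.IPNetwork(cidr)
          max_prefix = 32 if net.version == 4 else 128
          # a host-sized prefix leaves no addresses to allocate to ports
          if net.prefixlen >= max_prefix:
              raise ValueError("The subnet in the Network Address is too "
                               "small (/%d)." % net.prefixlen)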

  > neutron net-create test-net
  Created a new network:
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | admin_state_up | True                                 |
  | id             | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a |
  | name           | test-net                             |
  | shared         | False                                |
  | status         | ACTIVE                               |
  | subnets        |                                      |
  | tenant_id      | 8092813be8fd4122a20ee3a6bfe91162     |
  +----------------+--------------------------------------+

  If I use Horizon, go to "Networks", "test-net", "Create Subnet", then use 
parameters,
Subnet Name: subnet-1
Network Address: 10.10.150.0/32
IP Version: IPv4
  Horizon returns the error message "The subnet in the Network Address is too 
small (/32)."

  If I use neutron CLI,

  > neutron subnet-create --name subnet-1 test-net 10.10.150.0/32
  Created a new subnet:
  +------------------+--------------------------------------+
  | Field            | Value                                |
  +------------------+--------------------------------------+
  | allocation_pools |                                      |
  | cidr             | 10.10.150.0/32                       |
  | dns_nameservers  |                                      |
  | enable_dhcp      | True                                 |
  | gateway_ip       | 10.10.150.1                          |
  | host_routes      |                                      |
  | id               | 4142ff1d-28de-4e77-b82b-89ae604190ae |
  | ip_version       | 4                                    |
  | name             | subnet-1                             |
  | network_id       | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a |
  | tenant_id        | 8092813be8fd4122a20ee3a6bfe91162     |
  +------------------+--------------------------------------+

  > neutron net-list
  
  +--------------------------------------+----------+-----------------------------------------------------+
  | id                                   | name     | subnets                                             |
  +--------------------------------------+----------+-----------------------------------------------------+
  | 0dd5722d-f535-42ec-9257-437c05e4de25 | private  | 81859ee5-4ea5-4e60-ab2a-ba74146d39ba 10.0.0.0/24    |
  | 27c1649d-f6fc-4893-837d-dbc293fc4b80 | public   | 6c1836a1-eb7d-4acb-ad6f-6c394cedced5                |
  | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a | test-net | 4142ff1d-28de-4e77-b82b-89ae604190ae 10.10.150.0/32 |
  +--------------------------------------+----------+-----------------------------------------------------+

  > nova boot --image cirros-0.3.1-x86_64-uec --flavor m1.tiny --nic net-id=b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a vm2
  :
  :

  > nova list
  
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | d98511f7-452c-4ab6-8af9-d73576714c87 | vm1  | ACTIVE | -          | Running     | private=10.0.0.2 |
  | b12b6a6d-4ab9-43b2-825c-ae656a7aafc4 | vm2  | ERROR  | -          | NOSTATE     |                  |
  +--------------------------------------+------+--------+------------+-------------+------------------+

  I get this output from screen:

  2014-07-11 18:37:32.327 DEBUG neutronclient.client [-] RESP:409
  CaseInsensitiveDict({'date': 'Sat, 12 Jul 2014 01:37:32 GMT',
  'content-length': '164', 'content-type': 'application/json;
  charset=UTF-8', 'x-openstack-request-id': 'req-35a49577-5a3d-
  4a98-a790-52694f09d59a'}) {"NeutronError": {"message": "No more IP
  addresses available on network b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a.",
  "type": "IpAddressGenerationFailure", "detail": ""}}

  2014-07-11 18:37:32.327 DEBUG neutr
