[Yahoo-eng-team] [Bug 1279210] Re: Wrong arguments position in "neutron firewall-update" command

2014-02-11 Thread Eugene Nikanorov
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279210

Title:
  Wrong arguments position in "neutron firewall-update" command

Status in Python client library for Neutron:
  New

Bug description:
  Havana on rhel6.5

  Description
  ===
  The FIREWALL argument of "neutron firewall-update" must be the first
  argument for the command to succeed, although the help page says that the
  FIREWALL argument should be the last argument.

  
  Scenario 1
  ==
  # neutron firewall-show firewall_admin
  +--------------------+--------------------------------------+
  | Field              | Value                                |
  +--------------------+--------------------------------------+
  | admin_state_up     | True                                 |
  | description        |                                      |
  | firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
  | id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
  | name               | firewall_admin                       |
  | status             | ACTIVE                               |
  | tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
  +--------------------+--------------------------------------+

  # neutron firewall-update
  usage: neutron firewall-update [-h] [--request-format {json,xml}] FIREWALL
  neutron firewall-update: error: too few arguments

  # neutron firewall-update --admin_state_up False 72c782b2-2838-4773-b736-aa79564ac2ef
  Unable to find firewall with name 'False'

  # neutron firewall-update 72c782b2-2838-4773-b736-aa79564ac2ef --admin_state_up False
  Updated firewall: 72c782b2-2838-4773-b736-aa79564ac2ef

  [root@puma10 ~(keystone_admin)]# neutron firewall-show firewall_admin
  +--------------------+--------------------------------------+
  | Field              | Value                                |
  +--------------------+--------------------------------------+
  | admin_state_up     | False                                |
  | description        |                                      |
  | firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
  | id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
  | name               | firewall_admin                       |
  | status             | ACTIVE                               |
  | tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
  +--------------------+--------------------------------------+

  
  Scenario 2
  ==
  # neutron firewall-show 72c782b2-2838-4773-b736-aa79564ac2ef
  +--------------------+--------------------------------------+
  | Field              | Value                                |
  +--------------------+--------------------------------------+
  | admin_state_up     | False                                |
  | description        |                                      |
  | firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
  | id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
  | name               | firewall_admin                       |
  | status             | ACTIVE                               |
  | tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
  +--------------------+--------------------------------------+

  # neutron firewall-update --admin_state_up True 72c782b2-2838-4773-b736-aa79564ac2ef
  Unable to find firewall with name 'True'

  # neutron firewall-update 72c782b2-2838-4773-b736-aa79564ac2ef --admin_state_up True
  Updated firewall: 72c782b2-2838-4773-b736-aa79564ac2ef

  # neutron firewall-show 72c782b2-2838-4773-b736-aa79564ac2ef
  +--------------------+--------------------------------------+
  | Field              | Value                                |
  +--------------------+--------------------------------------+
  | admin_state_up     | True                                 |
  | description        |                                      |
  | firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
  | id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
  | name               | firewall_admin                       |
  | status             | ACTIVE                               |
  | tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
  +--------------------+--------------------------------------+

  
  Note that the "--admin_state_down" option is not available in "neutron 
firewall-update" (although it does exist in "neutron firewall-create"):

  # neutron firewall-update --admin_state_down 8f4ac76b-6786-4bc4-a4c8-9e23731f2675
  Unrecognized attribute(s) 'admin_state_down'

  # neutron firewall-update 8f4ac76b-6786-4bc4-a4c8-9e23731f2675 --admin_state_down
  Unrecognized attribute(s) 'admin_state_down'
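  The 'False' lookup in scenario 1 is consistent with how Python's argparse
  handles an unknown option placed before a positional. The following is a
  minimal sketch, not the actual neutronclient parser (which collects unknown
  --option value pairs as attribute updates via parse_known_args): since the
  parser does not know the unknown option takes a value, the value is consumed
  as the FIREWALL positional.

  ```python
  import argparse

  # Hypothetical minimal parser: one FIREWALL positional, unknown
  # --option/value pairs are returned as "extras".
  parser = argparse.ArgumentParser(prog="firewall-update")
  parser.add_argument("firewall")

  # Option first: argparse cannot know the unknown --admin_state_up takes a
  # value, so "False" is matched as the FIREWALL positional.
  args, extras = parser.parse_known_args(
      ["--admin_state_up", "False", "72c782b2-2838-4773-b736-aa79564ac2ef"])
  print(args.firewall)  # -> False

  # Positional first: the id binds correctly and the option pair stays intact.
  args, extras = parser.parse_known_args(
      ["72c782b2-2838-4773-b736-aa79564ac2ef", "--admin_state_up", "False"])
  print(args.firewall)  # -> 72c782b2-2838-4773-b736-aa79564ac2ef
  print(extras)         # -> ['--admin_state_up', 'False']
  ```

  This is why only the FIREWALL-first ordering succeeds, despite the usage
  string showing FIREWALL at the end.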

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1279210/+subscriptions

[Yahoo-eng-team] [Bug 1279208] Re: Firewall rules can not be updated in a firewall policy after firewall policy creation

2014-02-11 Thread Eugene Nikanorov
** Changed in: neutron
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279208

Title:
  Firewall rules can not be updated in a firewall policy after firewall
  policy creation

Status in Python client library for Neutron:
  New

Bug description:
  Havana on RHEL6.5

  Description
  ===
  Firewall rules cannot be updated in a firewall policy after the firewall
  policy has been created (at least when the policy was already created with
  a rule).
  It looks like firewall-policy-update looks only at the first character of
  the rule id and hence reports that the rule was not found.

  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-show f1224bee-740e-4aab-bdbe-829d76aeb647
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | audited        | True                                 |
  | description    |                                      |
  | firewall_rules | 2f381389-3137-48f0-a7ff-86744a63c0cb |
  | id             | f1224bee-740e-4aab-bdbe-829d76aeb647 |
  | name           | tcp_90_policy                        |
  | shared         | True                                 |
  | tenant_id      | 699ae084c9df430d83dbb9a547bab2e3     |
  +----------------+--------------------------------------+
  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules 4e57336a-4f91-46b8-af00-b5312fa7e175
  Firewall Rule 4 could not be found.
  [root@puma10 ~(keystone_admin)]# neutron firewall-rule-show 4e57336a-4f91-46b8-af00-b5312fa7e175
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | action                 | deny                                 |
  | description            |                                      |
  | destination_ip_address | 10.35.211.3                          |
  | destination_port       | 100                                  |
  | enabled                | True                                 |
  | firewall_policy_id     |                                      |
  | id                     | 4e57336a-4f91-46b8-af00-b5312fa7e175 |
  | ip_version             | 4                                    |
  | name                   |                                      |
  | position               |                                      |
  | protocol               | tcp                                  |
  | shared                 | False                                |
  | source_ip_address      | 10.35.115.14                         |
  | source_port            |                                      |
  | tenant_id              | 699ae084c9df430d83dbb9a547bab2e3     |
  +------------------------+--------------------------------------+
  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules 5e57336a-4f91-46b8-af00-b5312fa7e175
  Firewall Rule 5 could not be found.
  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules rami
  Firewall Rule r could not be found.

  From the server.log
  ===

  2013-10-02 13:24:11.404 26705 ERROR neutron.api.v2.resource [-] update failed
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in resource
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 486, in update
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     obj = obj_updater(request.context, id, **kwargs)
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/services/firewall/fwaas_plugin.py", line 247, in update_firewall_policy
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     self).update_firewall_policy(context, id, firewall_policy)
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/firewall/firewall_db.py", line 302, in update_firewall_policy
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     fwp['firewall_rules'])
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/firewall/firewall_db.py", line 185, in _set_rules_for_policy
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     fwrule_id)
  2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource FirewallRuleNotFound: Firewall Rule 4 could not be found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1279208/+subscriptions

[Yahoo-eng-team] [Bug 1279212] Re: The "firewall-policy-insert-rule" returns json output instead of Field-Value table

2014-02-11 Thread Eugene Nikanorov
** Changed in: neutron
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279212

Title:
  The "firewall-policy-insert-rule" returns json output instead of
  Field-Value table

Status in Python client library for Neutron:
  New

Bug description:
  Havana on RHEL6.5

  Description
  ===

  The "firewall-policy-insert-rule" command returns JSON output (the
  equivalent of "firewall-policy-show" for the same policy) instead of an
  ascii Field-Value table like the one every other show command returns.

  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-insert-rule f1224bee-740e-4aab-bdbe-829d76aeb647 f71439f6-9934-46b1-90b7-55fdfff5d7fd
  {"name": "tcp_90_policy", "firewall_rules": ["f71439f6-9934-46b1-90b7-55fdfff5d7fd", "4e57336a-4f91-46b8-af00-b5312fa7e175", "2f381389-3137-48f0-a7ff-86744a63c0cb"], "tenant_id": "699ae084c9df430d83dbb9a547bab2e3", "firewall_list": ["8a251de8-4962-4d1a-b6cd-a384f4cbb15c"], "audited": false, "shared": true, "id": "f1224bee-740e-4aab-bdbe-829d76aeb647", "description": ""}

  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-show f1224bee-740e-4aab-bdbe-829d76aeb647
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | audited        | False                                |
  | description    |                                      |
  | firewall_rules | f71439f6-9934-46b1-90b7-55fdfff5d7fd |
  |                | 4e57336a-4f91-46b8-af00-b5312fa7e175 |
  |                | 2f381389-3137-48f0-a7ff-86744a63c0cb |
  | id             | f1224bee-740e-4aab-bdbe-829d76aeb647 |
  | name           | tcp_90_policy                        |
  | shared         | True                                 |
  | tenant_id      | 699ae084c9df430d83dbb9a547bab2e3     |
  +----------------+--------------------------------------+

  
  Note that the same happens also with "neutron firewall-policy-remove-rule":

  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-remove-rule 3d723253-d7f6-4481-8f06-254007fd5f79 625f7937-2f52-470a-bae5-28181d6610b3
  {"name": "policy_admin", "firewall_rules": ["edf70317-fdba-4bdb-85d8-c254ea46f619"], "tenant_id": "3384a1b666ac473b98dabcc385161a20", "firewall_list": ["72c782b2-2838-4773-b736-aa79564ac2ef"], "audited": false, "shared": true, "id": "3d723253-d7f6-4481-8f06-254007fd5f79", "description": ""}

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1279212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279216] [NEW] No longer any need to pass admin context to aggregate DB API methods

2014-02-11 Thread Mark McLoughlin
Public bug reported:

Since https://review.openstack.org/67026 the aggregate DB APIs no longer
require an admin context.

One implication is that methods like get_host_availability_zone() and
get_availability_zones() no longer need to require an admin context, so
their callers, e.g.

    def _describe_availability_zones(self, context, **kwargs):
        ctxt = context.elevated()
        available_zones, not_available_zones = \
            availability_zones.get_availability_zones(ctxt)

no longer need to pass an elevated context.

Also, in some of our scheduler filters, we do:

    def _get_cpu_allocation_ratio(self, host_state, filter_properties):
        context = filter_properties['context'].elevated()
        metadata = db.aggregate_metadata_get_by_host(
            context, host_state.host, key='cpu_allocation_ratio')

and those too should no longer require an elevated context.
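A toy illustration of the calling-convention change (all names below are
simplified stand-ins, not the real nova API): once the aggregate accessor
stops checking is_admin, callers can pass the plain request context and drop
the elevated() call.

```python
class Context:
    """Minimal stand-in for nova's RequestContext."""
    def __init__(self, is_admin=False):
        self.is_admin = is_admin

    def elevated(self):
        # previously needed before calling the aggregate DB methods
        return Context(is_admin=True)

# hypothetical in-memory aggregate metadata, keyed by host
_AGGREGATE_METADATA = {"compute1": {"cpu_allocation_ratio": "16.0"}}

def aggregate_metadata_get_by_host(context, host, key=None):
    # post-change behaviour: no admin check, any context is accepted
    metadata = _AGGREGATE_METADATA.get(host, {})
    if key is not None:
        return {key: metadata[key]} if key in metadata else {}
    return metadata

# a caller no longer needs ctx.elevated(); the plain context works
ctx = Context(is_admin=False)
ratio = aggregate_metadata_get_by_host(ctx, "compute1", key="cpu_allocation_ratio")
print(ratio)  # -> {'cpu_allocation_ratio': '16.0'}
```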

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: aggregates

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279216

Title:
  No longer any need to pass admin context to aggregate DB API methods

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Since https://review.openstack.org/67026 the aggregate DB APIs no
  longer require an admin context.

  One implication is that methods like get_host_availability_zone() and
  get_availability_zones() no longer need to require an admin context, so
  their callers, e.g.

      def _describe_availability_zones(self, context, **kwargs):
          ctxt = context.elevated()
          available_zones, not_available_zones = \
              availability_zones.get_availability_zones(ctxt)

  no longer need to pass an elevated context.

  Also, in some of our scheduler filters, we do:

      def _get_cpu_allocation_ratio(self, host_state, filter_properties):
          context = filter_properties['context'].elevated()
          metadata = db.aggregate_metadata_get_by_host(
              context, host_state.host, key='cpu_allocation_ratio')

  and those too should no longer require an elevated context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279212] [NEW] The "firewall-policy-insert-rule" returns json output instead of Field-Value table

2014-02-11 Thread Yair Fried
Public bug reported:

Havana on RHEL6.5

Description
===

The "firewall-policy-insert-rule" command returns JSON output (the
equivalent of "firewall-policy-show" for the same policy) instead of an
ascii Field-Value table like the one every other show command returns.

[root@puma10 ~(keystone_admin)]# neutron firewall-policy-insert-rule f1224bee-740e-4aab-bdbe-829d76aeb647 f71439f6-9934-46b1-90b7-55fdfff5d7fd
{"name": "tcp_90_policy", "firewall_rules": ["f71439f6-9934-46b1-90b7-55fdfff5d7fd", "4e57336a-4f91-46b8-af00-b5312fa7e175", "2f381389-3137-48f0-a7ff-86744a63c0cb"], "tenant_id": "699ae084c9df430d83dbb9a547bab2e3", "firewall_list": ["8a251de8-4962-4d1a-b6cd-a384f4cbb15c"], "audited": false, "shared": true, "id": "f1224bee-740e-4aab-bdbe-829d76aeb647", "description": ""}

[root@puma10 ~(keystone_admin)]# neutron firewall-policy-show f1224bee-740e-4aab-bdbe-829d76aeb647
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| audited        | False                                |
| description    |                                      |
| firewall_rules | f71439f6-9934-46b1-90b7-55fdfff5d7fd |
|                | 4e57336a-4f91-46b8-af00-b5312fa7e175 |
|                | 2f381389-3137-48f0-a7ff-86744a63c0cb |
| id             | f1224bee-740e-4aab-bdbe-829d76aeb647 |
| name           | tcp_90_policy                        |
| shared         | True                                 |
| tenant_id      | 699ae084c9df430d83dbb9a547bab2e3     |
+----------------+--------------------------------------+


Note that the same happens also with "neutron firewall-policy-remove-rule":

[root@puma10 ~(keystone_admin)]# neutron firewall-policy-remove-rule 3d723253-d7f6-4481-8f06-254007fd5f79 625f7937-2f52-470a-bae5-28181d6610b3
{"name": "policy_admin", "firewall_rules": ["edf70317-fdba-4bdb-85d8-c254ea46f619"], "tenant_id": "3384a1b666ac473b98dabcc385161a20", "firewall_list": ["72c782b2-2838-4773-b736-aa79564ac2ef"], "audited": false, "shared": true, "id": "3d723253-d7f6-4481-8f06-254007fd5f79", "description": ""}
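For comparison, producing the expected Field-Value rendering from the
returned dict is a small amount of formatting. The following is a rough
sketch, not the actual neutronclient/cliff formatter; the function name is
hypothetical.

```python
def format_show_table(resource):
    """Render a resource dict as an ascii Field|Value table (sketch)."""
    # multi-valued fields (e.g. firewall_rules) get one row per value
    rows = []
    for field in sorted(resource):
        value = resource[field]
        values = value if isinstance(value, list) else [value]
        values = [str(v) for v in values] or [""]
        rows.append((field, values))

    # column widths sized to the longest field name and value
    fw = max(len("Field"), max(len(f) for f, _ in rows))
    vw = max(len("Value"), max(len(v) for _, vs in rows for v in vs))
    sep = "+" + "-" * (fw + 2) + "+" + "-" * (vw + 2) + "+"

    lines = [sep, "| %-*s | %-*s |" % (fw, "Field", vw, "Value"), sep]
    for field, values in rows:
        for i, v in enumerate(values):
            lines.append("| %-*s | %-*s |" % (fw, field if i == 0 else "", vw, v))
    lines.append(sep)
    return "\n".join(lines)

print(format_show_table({"name": "tcp_90_policy", "shared": True,
                         "firewall_rules": ["f71439f6", "4e57336a"]}))
```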

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279212

Title:
  The "firewall-policy-insert-rule" returns json output instead of
  Field-Value table

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Havana on RHEL6.5

  Description
  ===

  The "firewall-policy-insert-rule" command returns JSON output (the
  equivalent of "firewall-policy-show" for the same policy) instead of an
  ascii Field-Value table like the one every other show command returns.

  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-insert-rule f1224bee-740e-4aab-bdbe-829d76aeb647 f71439f6-9934-46b1-90b7-55fdfff5d7fd
  {"name": "tcp_90_policy", "firewall_rules": ["f71439f6-9934-46b1-90b7-55fdfff5d7fd", "4e57336a-4f91-46b8-af00-b5312fa7e175", "2f381389-3137-48f0-a7ff-86744a63c0cb"], "tenant_id": "699ae084c9df430d83dbb9a547bab2e3", "firewall_list": ["8a251de8-4962-4d1a-b6cd-a384f4cbb15c"], "audited": false, "shared": true, "id": "f1224bee-740e-4aab-bdbe-829d76aeb647", "description": ""}

  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-show f1224bee-740e-4aab-bdbe-829d76aeb647
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | audited        | False                                |
  | description    |                                      |
  | firewall_rules | f71439f6-9934-46b1-90b7-55fdfff5d7fd |
  |                | 4e57336a-4f91-46b8-af00-b5312fa7e175 |
  |                | 2f381389-3137-48f0-a7ff-86744a63c0cb |
  | id             | f1224bee-740e-4aab-bdbe-829d76aeb647 |
  | name           | tcp_90_policy                        |
  | shared         | True                                 |
  | tenant_id      | 699ae084c9df430d83dbb9a547bab2e3     |
  +----------------+--------------------------------------+

  
  Note that the same happens also with "neutron firewall-policy-remove-rule":

  [root@puma10 ~(keystone_admin)]# neutron firewall-policy-remove-rule 3d723253-d7f6-4481-8f06-254007fd5f79 625f7937-2f52-470a-bae5-28181d6610b3
  {"name": "policy_admin", "firewall_rules": ["edf70317-fdba-4bdb-85d8-c254ea46f619"], "tenant_id": "3384a1b666ac473b98dabcc385161a20", "firewall_list": ["72c782b2-2838-4773-b736-aa79564ac2ef"], "audited": false, "shared": true, "id": "3d723253-d7f6-4481-8f06-254007fd5f79", "description": ""}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1279213] [NEW] Firewall in admin state down remains in status "ACTIVE" and keeps enforcing its policy rules

2014-02-11 Thread Yair Fried
Public bug reported:

Havana on RHEL6.5

Description
===
I set my only firewall to admin_state_up=False, but it still enforces the
policy rules; only when I delete it does it stop enforcing them.
I also expected the status to change to INACTIVE as soon as I changed
admin_state_up to False, but the firewall remains in status "ACTIVE".

# neutron firewall-show 72c782b2-2838-4773-b736-aa79564ac2ef
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | False                                |
| description        |                                      |
| firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
| id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
| name               | firewall_admin                       |
| status             | ACTIVE                               |
| tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
+--------------------+--------------------------------------+

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279213

Title:
   Firewall in admin state down remains in status "ACTIVE" and keeps
  enforcing its policy rules

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Havana on RHEL6.5

  Description
  ===
  I set my only firewall to admin_state_up=False, but it still enforces the
  policy rules; only when I delete it does it stop enforcing them.
  I also expected the status to change to INACTIVE as soon as I changed
  admin_state_up to False, but the firewall remains in status "ACTIVE".

  # neutron firewall-show 72c782b2-2838-4773-b736-aa79564ac2ef
  +--------------------+--------------------------------------+
  | Field              | Value                                |
  +--------------------+--------------------------------------+
  | admin_state_up     | False                                |
  | description        |                                      |
  | firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
  | id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
  | name               | firewall_admin                       |
  | status             | ACTIVE                               |
  | tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
  +--------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279210] [NEW] Wrong arguments position in "neutron firewall-update" command

2014-02-11 Thread Yair Fried
Public bug reported:

Havana on rhel6.5

Description
===
The FIREWALL argument of "neutron firewall-update" must be the first argument
for the command to succeed, although the help page says that the FIREWALL
argument should be the last argument.


Scenario 1
==
# neutron firewall-show firewall_admin
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| description        |                                      |
| firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
| id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
| name               | firewall_admin                       |
| status             | ACTIVE                               |
| tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
+--------------------+--------------------------------------+

# neutron firewall-update
usage: neutron firewall-update [-h] [--request-format {json,xml}] FIREWALL
neutron firewall-update: error: too few arguments

# neutron firewall-update --admin_state_up False 72c782b2-2838-4773-b736-aa79564ac2ef
Unable to find firewall with name 'False'

# neutron firewall-update 72c782b2-2838-4773-b736-aa79564ac2ef --admin_state_up False
Updated firewall: 72c782b2-2838-4773-b736-aa79564ac2ef

[root@puma10 ~(keystone_admin)]# neutron firewall-show firewall_admin
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | False                                |
| description        |                                      |
| firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
| id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
| name               | firewall_admin                       |
| status             | ACTIVE                               |
| tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
+--------------------+--------------------------------------+


Scenario 2
==
# neutron firewall-show 72c782b2-2838-4773-b736-aa79564ac2ef
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | False                                |
| description        |                                      |
| firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
| id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
| name               | firewall_admin                       |
| status             | ACTIVE                               |
| tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
+--------------------+--------------------------------------+

# neutron firewall-update --admin_state_up True 72c782b2-2838-4773-b736-aa79564ac2ef
Unable to find firewall with name 'True'

# neutron firewall-update 72c782b2-2838-4773-b736-aa79564ac2ef --admin_state_up True
Updated firewall: 72c782b2-2838-4773-b736-aa79564ac2ef

# neutron firewall-show 72c782b2-2838-4773-b736-aa79564ac2ef
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| description        |                                      |
| firewall_policy_id | 3d723253-d7f6-4481-8f06-254007fd5f79 |
| id                 | 72c782b2-2838-4773-b736-aa79564ac2ef |
| name               | firewall_admin                       |
| status             | ACTIVE                               |
| tenant_id          | 3384a1b666ac473b98dabcc385161a20     |
+--------------------+--------------------------------------+


Note that the "--admin_state_down" option is not available in "neutron 
firewall-update" (although it does exist in "neutron firewall-create"):

# neutron firewall-update --admin_state_down 8f4ac76b-6786-4bc4-a4c8-9e23731f2675
Unrecognized attribute(s) 'admin_state_down'

# neutron firewall-update 8f4ac76b-6786-4bc4-a4c8-9e23731f2675 --admin_state_down
Unrecognized attribute(s) 'admin_state_down'

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279210

Title:
  Wrong arguments position in "neutron firewall-update" command

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Havana on rhel6.5

  Description
  ===
  The FIREWALL argument of "neutron firewall-update" must be the first
  argument for the command to succeed, although the help page says that the
  FIREWALL argument should be the last argument.

[Yahoo-eng-team] [Bug 1279208] [NEW] Firewall rules can not be updated in a firewall policy after firewall policy creation

2014-02-11 Thread Yair Fried
Public bug reported:

Havana on RHEL6.5

Description
===
Firewall rules cannot be updated in a firewall policy after the firewall
policy has been created (at least when the policy was already created with
a rule).
It looks like firewall-policy-update looks only at the first character of
the rule id and hence reports that the rule was not found.

[root@puma10 ~(keystone_admin)]# neutron firewall-policy-show f1224bee-740e-4aab-bdbe-829d76aeb647
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| audited        | True                                 |
| description    |                                      |
| firewall_rules | 2f381389-3137-48f0-a7ff-86744a63c0cb |
| id             | f1224bee-740e-4aab-bdbe-829d76aeb647 |
| name           | tcp_90_policy                        |
| shared         | True                                 |
| tenant_id      | 699ae084c9df430d83dbb9a547bab2e3     |
+----------------+--------------------------------------+
[root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules 4e57336a-4f91-46b8-af00-b5312fa7e175
Firewall Rule 4 could not be found.
[root@puma10 ~(keystone_admin)]# neutron firewall-rule-show 4e57336a-4f91-46b8-af00-b5312fa7e175
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| action                 | deny                                 |
| description            |                                      |
| destination_ip_address | 10.35.211.3                          |
| destination_port       | 100                                  |
| enabled                | True                                 |
| firewall_policy_id     |                                      |
| id                     | 4e57336a-4f91-46b8-af00-b5312fa7e175 |
| ip_version             | 4                                    |
| name                   |                                      |
| position               |                                      |
| protocol               | tcp                                  |
| shared                 | False                                |
| source_ip_address      | 10.35.115.14                         |
| source_port            |                                      |
| tenant_id              | 699ae084c9df430d83dbb9a547bab2e3     |
+------------------------+--------------------------------------+
[root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules 5e57336a-4f91-46b8-af00-b5312fa7e175
Firewall Rule 5 could not be found.
[root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules rami
Firewall Rule r could not be found.

From the server.log
===

2013-10-02 13:24:11.404 26705 ERROR neutron.api.v2.resource [-] update failed
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource Traceback (most recent call last):
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in resource
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 486, in update
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     obj = obj_updater(request.context, id, **kwargs)
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/services/firewall/fwaas_plugin.py", line 247, in update_firewall_policy
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     self).update_firewall_policy(context, id, firewall_policy)
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/firewall/firewall_db.py", line 302, in update_firewall_policy
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     fwp['firewall_rules'])
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/firewall/firewall_db.py", line 185, in _set_rules_for_policy
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     fwrule_id)
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource FirewallRuleNotFound: Firewall Rule 4 could not be found.
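The "Firewall Rule 4" message matches what happens when a single rule id
reaches _set_rules_for_policy as a bare string instead of a list: iterating
a string yields its characters, so the first lookup fails on the id's first
character. A toy sketch of that failure mode (simplified names, not the
actual neutron code):

```python
# hypothetical set of known rule ids standing in for the DB lookup
known_rules = {"4e57336a-4f91-46b8-af00-b5312fa7e175"}

def set_rules_for_policy(firewall_rules):
    """Server-side loop (simplified): expects a list of rule ids."""
    for fwrule_id in firewall_rules:
        if fwrule_id not in known_rules:
            raise LookupError("Firewall Rule %s could not be found." % fwrule_id)

# correct: the client sends a list of ids
set_rules_for_policy(["4e57336a-4f91-46b8-af00-b5312fa7e175"])  # no error

# buggy: a bare string is iterated character by character,
# so the very first lookup fails on '4'
try:
    set_rules_for_policy("4e57336a-4f91-46b8-af00-b5312fa7e175")
except LookupError as exc:
    print(exc)  # -> Firewall Rule 4 could not be found.
```

This also explains "Firewall Rule 5" and "Firewall Rule r" in the transcript
above: each error names the first character of whatever string was passed.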

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: barbican => neutron

** Description changed:

- RDO havana on RHEL6.4
- openstack-neutron-2013.2-0.4.b3.el6
- 
+ Havana on RHEL6.5
  
  Description
  ===
  Firewall rules can not be updated in a firewall policy after the firewall policy creation (at least when the policy already created with a rule).

[Yahoo-eng-team] [Bug 1279208] [NEW] Firewall rules can not be updated in a firewall policy after firewall policy creation

2014-02-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

RDO havana on RHEL6.4
openstack-neutron-2013.2-0.4.b3.el6


Description
===
Firewall rules cannot be updated in a firewall policy after the firewall
policy has been created (at least when the policy was already created with
a rule).
It looks like firewall-policy-update looks only at the first character of
the rule id and hence reports that the rule was not found.

[root@puma10 ~(keystone_admin)]# neutron firewall-policy-show f1224bee-740e-4aab-bdbe-829d76aeb647
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| audited        | True                                 |
| description    |                                      |
| firewall_rules | 2f381389-3137-48f0-a7ff-86744a63c0cb |
| id             | f1224bee-740e-4aab-bdbe-829d76aeb647 |
| name           | tcp_90_policy                        |
| shared         | True                                 |
| tenant_id      | 699ae084c9df430d83dbb9a547bab2e3     |
+----------------+--------------------------------------+
[root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules 4e57336a-4f91-46b8-af00-b5312fa7e175
Firewall Rule 4 could not be found.
[root@puma10 ~(keystone_admin)]# neutron firewall-rule-show 4e57336a-4f91-46b8-af00-b5312fa7e175
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| action                 | deny                                 |
| description            |                                      |
| destination_ip_address | 10.35.211.3                          |
| destination_port       | 100                                  |
| enabled                | True                                 |
| firewall_policy_id     |                                      |
| id                     | 4e57336a-4f91-46b8-af00-b5312fa7e175 |
| ip_version             | 4                                    |
| name                   |                                      |
| position               |                                      |
| protocol               | tcp                                  |
| shared                 | False                                |
| source_ip_address      | 10.35.115.14                         |
| source_port            |                                      |
| tenant_id              | 699ae084c9df430d83dbb9a547bab2e3     |
+------------------------+--------------------------------------+
[root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules 5e57336a-4f91-46b8-af00-b5312fa7e175
Firewall Rule 5 could not be found.
[root@puma10 ~(keystone_admin)]# neutron firewall-policy-update f1224bee-740e-4aab-bdbe-829d76aeb647 --firewall-rules rami
Firewall Rule r could not be found.


From the server.log
===

2013-10-02 13:24:11.404 26705 ERROR neutron.api.v2.resource [-] update failed
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource Traceback (most recent call last):
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in resource
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 486, in update
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     obj = obj_updater(request.context, id, **kwargs)
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/services/firewall/fwaas_plugin.py", line 247, in update_firewall_policy
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     self).update_firewall_policy(context, id, firewall_policy)
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/firewall/firewall_db.py", line 302, in update_firewall_policy
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     fwp['firewall_rules'])
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/firewall/firewall_db.py", line 185, in _set_rules_for_policy
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource     fwrule_id)
2013-10-02 13:24:11.404 26705 TRACE neutron.api.v2.resource FirewallRuleNotFound: Firewall Rule 4 could not be found.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Firewall rules can not be updated in a firewall policy after firewall policy 
creation
https://bugs.launchpad.net/bugs/1279208
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 1279199] [NEW] unrescue a RESCUE status instance with vcenter driver failed

2014-02-11 Thread dingxy
Public bug reported:

I am using an OpenStack Icehouse build. When unrescuing a rescued instance
on the vCenter driver, the instance goes to ERROR status, and "nova show"
reports the error below:
{u'message': u'The attempted operation cannot be performed in the current state (Powered on).', u'code': 500, u'details': u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 258, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2797, in unrescue_instance
    network_info)
  File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 675, in unrescue
    _vmops.unrescue(instance)
  File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 1132, in unrescue
    self._volumeops.detach_disk_from_vm(vm_rescue_ref, r_instance, device)
  File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/volumeops.py", line 129, in detach_disk_from_vm
    self._session._wait_for_task(instance_uuid, reconfig_task)
  File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 890, in _wait_for_task
    ret_val = done.wait()
  File "/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in wait
    return hubs.get_hub().switch()
  File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 187, in switch
    return self.greenlet.switch()
** Affects: nova
 Importance: Undecided
 Status: New


** Tags: testing vmwareapi

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279199

Title:
  unrescue a RESCUE status instance with vcenter driver failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  I use an OpenStack Icehouse build. When I unrescue a rescued instance on the
  vCenter driver, the instance goes to ERROR status, and "nova show" reports the
  error below:
  {u'message': u'The attempted operation cannot be performed in the current state (Powered on).', u'code': 500, u'details': u'  File "/usr/lib/python2.6/site

[Yahoo-eng-team] [Bug 1279195] [NEW] get_spice_console should catch the NotImplementedError exception and handle it

2014-02-11 Thread jichencom
Public bug reported:

In the v2/v3 API layer,
def get_spice_console(self, req, id, body):

does not catch the NotImplementedError exception and handle it.

** Affects: nova
 Importance: Undecided
 Assignee: jichencom (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichencom (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279195

Title:
  get_spice_console should catch the NotImplementedError exception and handle it

Status in OpenStack Compute (Nova):
  New

Bug description:
  in v2/v3 API layer
  def get_spice_console(self, req, id, body):

  didn't catch NotImplement exception and handle it

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279189] [NEW] deleting or querying a nonexistent snapshot should raise a more readable exception

2014-02-11 Thread jichencom
Public bug reported:

In the V2 API layer code, it would be better to raise the exception
instead of returning it:

try:
    self.volume_api.delete_snapshot(context, id)
except exception.NotFound:
    return exc.HTTPNotFound()  # <-- use raise instead of return

** Affects: nova
 Importance: Undecided
 Assignee: jichencom (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichencom (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279189

Title:
  deleting or querying a nonexistent snapshot should raise a more readable exception

Status in OpenStack Compute (Nova):
  New

Bug description:
  In V2 API layer code, it would be better if we raise exception instead
  of return it

  try:
  self.volume_api.delete_snapshot(context, id)
  except exception.NotFound:
  return exc.HTTPNotFound()  > use raise instead of return

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279189/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279103] Re: CAP_PORT_FILTER should be VIF_DETAILS

2014-02-11 Thread Robert Kukura
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279103

Title:
  CAP_PORT_FILTER should be VIF_DETAILS

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  neutron/plugins/ml2/drivers/mech_hyperv.py

  neutron/plugins/ml2/drivers/mech_linuxbridge.py

  neutron/plugins/ml2/drivers/mech_openvswitch.py

  
  {portbindings.CAP_PORT_FILTER: True})

  should be

  {portbindings.VIF_DETAILS: True})

  
  Blueprint:
  Implement the binding:profile port attribute in ML2
  https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178103] Re: can't disable file injection for bare metal

2014-02-11 Thread Devananda van der Veen
Marking as Invalid for Ironic -- file injection is not implemented in
the PXE driver and thus doesn't need to be disabled.

** Changed in: ironic
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178103

Title:
  can't disable file injection for bare metal

Status in Ironic (Bare Metal Provisioning):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  For two reasons : a) until we have quantum-pxe done, it won't work,
  and b) file injection always happens.

  One of the reasons to want to disable file injection is to work with
  hardware that gets an ethernet interface other than 'eth0' - e.g. if
  only eth1 is plugged in on the hardware, file injection with its
  hardcoded parameters interferes with network bringup.

  A workaround for homogeneous environments is to change the template to
  hardcode the interface name (s/iface.name/eth2/)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1178103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279172] [NEW] Unicode encoding error exists in extended Nova API, when the data contain unicode

2014-02-11 Thread David Jia
Public bug reported:

We have developed an extended Nova API that first queries disks and then adds
a disk to an instance. After the query, if a disk has a non-English name, the
unicode value is converted to str at nova/api/openstack/wsgi.py line 451
("node = doc.createTextNode(str(data))"), which raises a unicode encoding error.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279172

Title:
  Unicode encoding error exists in extended Nova API, when the data
  contain unicode

Status in OpenStack Compute (Nova):
  New

Bug description:
  We have developed an extended Nova API that first queries disks and then adds
  a disk to an instance. After the query, if a disk has a non-English name, the
  unicode value is converted to str at nova/api/openstack/wsgi.py line 451
  ("node = doc.createTextNode(str(data))"), which raises a unicode encoding error.
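A sketch of the failure mode (the helper names below are illustrative, not actual Nova code). On Python 2, str(data) implicitly encodes unicode with the ASCII codec; an explicit ASCII encode reproduces the same UnicodeEncodeError on Python 3:

```python
def create_text_node_value_buggy(data):
    # Mirrors "str(data)" on Python 2: an implicit ASCII encode that
    # fails on any non-ASCII character in the disk name.
    return data.encode('ascii')


def create_text_node_value_fixed(data):
    # The usual fix is to keep the value as text (e.g. six.text_type(data)
    # on Python 2) instead of forcing a byte conversion.
    return u'%s' % data


disk_name = u'磁盘一'  # a non-English disk name, as in the report
try:
    create_text_node_value_buggy(disk_name)
except UnicodeEncodeError:
    print('UnicodeEncodeError, as in the bug')
print(create_text_node_value_fixed(disk_name) == disk_name)
```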

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279163] [NEW] log-related tracebacks in nsx plugin

2014-02-11 Thread Armando Migliaccio
Public bug reported:

A number of VMware Mine Sweeper runs have shown this stack trace:

2014-02-11 17:31:06.067 31651 DEBUG neutron.plugins.nicira.api_client.request [-] Setting X-Nvp-Wait-For-Config-Generation request header: '4403' _issue_request /opt/stack/neutron/neutron/plugins/nicira/api_client/request.py:124
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/__init__.py", line 846, in emit
    msg = self.format(record)
  File "/opt/stack/neutron/neutron/openstack/common/log.py", line 619, in format
    return logging.StreamHandler.format(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
    return fmt.format(record)
  File "/opt/stack/neutron/neutron/openstack/common/log.py", line 583, in format
    return logging.Formatter.format(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
    msg = msg % self.args
  File "/opt/stack/neutron/neutron/openstack/common/gettextutils.py", line 197, in __mod__
    unicode_mod = super(Message, self).__mod__(params)
KeyError: u'sec'
Logged from file request.py, line 141

This pollutes the log and may break the normal control flow.

** Affects: neutron
 Importance: Undecided
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: New


** Tags: nicira

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279163

Title:
  log-related tracebacks in nsx plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  A number of VMware Mine Sweeper runs have shown this stack trace:

  2014-02-11 17:31:06.067 31651 DEBUG neutron.plugins.nicira.api_client.request [-] Setting X-Nvp-Wait-For-Config-Generation request header: '4403' _issue_request /opt/stack/neutron/neutron/plugins/nicira/api_client/request.py:124
  Traceback (most recent call last):
    File "/usr/lib/python2.7/logging/__init__.py", line 846, in emit
      msg = self.format(record)
    File "/opt/stack/neutron/neutron/openstack/common/log.py", line 619, in format
      return logging.StreamHandler.format(self, record)
    File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
      return fmt.format(record)
    File "/opt/stack/neutron/neutron/openstack/common/log.py", line 583, in format
      return logging.Formatter.format(self, record)
    File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
      record.message = record.getMessage()
    File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
      msg = msg % self.args
    File "/opt/stack/neutron/neutron/openstack/common/gettextutils.py", line 197, in __mod__
      unicode_mod = super(Message, self).__mod__(params)
  KeyError: u'sec'
  Logged from file request.py, line 141

  This pollutes the log and may break the normal control flow.
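The failure is reproducible with the standard library alone: a log message whose named placeholder is missing from the supplied arguments raises KeyError inside getMessage(), exactly as in the trace (the message text below is illustrative, not the actual request.py line 141 string).

```python
import logging

# A LogRecord whose message template expects the key 'sec', but whose
# arguments dict supplies a different key ('seconds').  The "%" formatting
# inside getMessage() then raises KeyError, as seen in the stack trace.
record = logging.LogRecord(
    'neutron.demo', logging.DEBUG, 'request.py', 141,
    'Waited %(sec)s seconds for the request',  # expects key 'sec'
    ({'seconds': 10},),                        # supplied under 'seconds'
    None)

try:
    record.getMessage()
except KeyError as err:
    print(err)  # the missing key name
```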

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1227698] Re: nova should distinguish between "private_ip" "accessible_private_ip" and "public_ip" (or "private_ip", "public_ip", "internal_ip")

2014-02-11 Thread Michael H Wilson
Thanks for the suggestion. If you'd like to propose some code for this
or if you can convince someone else to get on board I would suggest
filing a blueprint. It might also be beneficial to start a mailing list
thread on this and get some additional eyes. A bug report is not really
appropriate for what you are wanting which is a change in intended
behavior.

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1227698

Title:
  nova should distinguish between "private_ip" "accessible_private_ip"
  and "public_ip" (or "private_ip", "public_ip", "internal_ip")

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  As described in https://bugs.launchpad.net/bugs/1227694 I think it
  would be useful to have three categories of IPs in nova.

  While using salt-cloud https://github.com/saltstack/salt-cloud this
  distinction would also be useful in the sense that by default it tries
  to connect to the public IP by ssh to deploy salt. If
  "accessible_private_ip" existed, then it could try that out too if no
  public IP is available. (It also uses some recognized private IP
  ranges to "test" if they are private).

  The other reason for this request is that I find it confusing to use
  the label public_ip on a 172.x.x.x range on a private intranet-only
  cloud infrastructure (for example).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1227698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279135] [NEW] resize instance - flavors not sorted

2014-02-11 Thread Cindy Lu
Public bug reported:

The New Flavor dropdown menu should be sorted, e.g. by RAM.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279135

Title:
  resize instance - flavors not sorted

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The New Flavor dropdown menu should be sorted, e.g. by RAM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1279135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279100] Re: keystone failed to start "No module named passlib.hash"

2014-02-11 Thread Dolph Mathews
*** This bug is a duplicate of bug 1277507 ***
https://bugs.launchpad.net/bugs/1277507

** This bug has been marked a duplicate of bug 1277507
   "ImportError: No module named passlib.hash"; HTTP error 403 while getting 
ipaddr from googledrive.com

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1279100

Title:
  keystone failed to start "No module named passlib.hash"

Status in OpenStack Identity (Keystone):
  New
Status in Tempest:
  New

Bug description:
  Got this error from a gate verify, don't see any existing bugs
  describing the problem:

  https://review.openstack.org/#/c/71929/

  http://logs.openstack.org/29/71929/4/check/check-tempest-dsvm-
  postgres-full/859ca89/console.html

  2014-02-11 20:41:33.757 | 2014-02-11 20:41:33 [Call Trace]
  2014-02-11 20:41:33.759 | 2014-02-11 20:41:33 ./stack.sh:902:start_keystone
  2014-02-11 20:41:33.761 | 2014-02-11 20:41:33 
/opt/stack/new/devstack/lib/keystone:419:die
  2014-02-11 20:41:33.782 | 2014-02-11 20:41:33 [ERROR] 
/opt/stack/new/devstack/lib/keystone:419 keystone did not start
  2014-02-11 20:41:36.751 | Build step 'Execute shell' marked build as failure
  2014-02-11 20:41:38.318 | [SCP] Connecting to static.openstack.org

  http://logs.openstack.org/29/71929/4/check/check-tempest-dsvm-
  postgres-full/859ca89/logs/screen-key.txt.gz

  + ln -sf /opt/stack/new/screen-logs/screen-key.2014-02-11-203430.log 
/opt/stack/new/screen-logs/screen-key.log
  + export PYTHONUNBUFFERED=1
  + PYTHONUNBUFFERED=1
  + exec /bin/bash -c 'cd /opt/stack/new/keystone && 
/opt/stack/new/keystone/bin/keystone-all --config-file 
/etc/keystone/keystone.conf --debug'
  Traceback (most recent call last):
File "/opt/stack/new/keystone/bin/keystone-all", line 47, in 
  from keystone.common import sql
File "/opt/stack/new/keystone/keystone/common/sql/__init__.py", line 18, in 

  from keystone.common.sql.core import *
File "/opt/stack/new/keystone/keystone/common/sql/core.py", line 32, in 

  from keystone.common import utils
File "/opt/stack/new/keystone/keystone/common/utils.py", line 28, in 

  import passlib.hash
  ImportError: No module named passlib.hash

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1279100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278920] Re: Bad performance when editing project members

2014-02-11 Thread Dolph Mathews
On the keystone side, this is specifically addressed by
https://blueprints.launchpad.net/keystone/+spec/list-limiting

On the horizon side, horizon should issue queries to keystone with as
many filters as reasonably possible to avoid hitting the truncation
behavior implemented above.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1278920

Title:
  Bad performance when editing project members

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  In an environment where there are more than 1000 users, the Edit
  Project Members dialog performs a query for each user to get its
  roles, which results in bad performance. There are some alternatives
  to fix this on the UI side, but we could also fix it in keystone with
  an extension, for example. Please reassign to keystone if needed.

  In the best scenario, Horizon takes 8 secs to render the UI with 1000 users and 2 assigned users.
  On the other hand, it takes 1 min to show the UI with 1000 users and 1000 assigned users.
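The N+1 pattern behind the slowdown can be sketched with a hypothetical round-trip counter (FakeIdentityAPI and its method names are illustrative only, not the real keystoneclient API): one filtered role-assignment query replaces a thousand per-user queries.

```python
class FakeIdentityAPI:
    """Hypothetical identity backend that counts round-trips."""

    def __init__(self, assignments):
        self.calls = 0
        self._assignments = assignments  # list of (user, project, role)

    def roles_for_user(self, user, project):
        # One round-trip per call: the pattern Horizon currently uses.
        self.calls += 1
        return [r for (u, p, r) in self._assignments
                if u == user and p == project]

    def list_role_assignments(self, project):
        # One round-trip for the whole project: the filtered pattern.
        self.calls += 1
        return [(u, r) for (u, p, r) in self._assignments if p == project]


api = FakeIdentityAPI([('user%d' % i, 'proj', 'member') for i in range(1000)])

# N+1 pattern: one query per user.
for i in range(1000):
    api.roles_for_user('user%d' % i, 'proj')
per_user_calls = api.calls

api.calls = 0
assignments = api.list_role_assignments('proj')
filtered_calls = api.calls

print(per_user_calls, filtered_calls, len(assignments))
```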

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279103] [NEW] CAP_PORT_FILTER should be VIF_DETAILS

2014-02-11 Thread punal patel
Public bug reported:

neutron/plugins/ml2/drivers/mech_hyperv.py

neutron/plugins/ml2/drivers/mech_linuxbridge.py

neutron/plugins/ml2/drivers/mech_openvswitch.py


{portbindings.CAP_PORT_FILTER: True})

should be

{portbindings.VIF_DETAILS: True})


Blueprint:
Implement the binding:profile port attribute in ML2
https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile

** Affects: neutron
 Importance: Medium
 Assignee: Robert Kukura (rkukura)
 Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Project changed: nova => neutron

** Changed in: neutron
Milestone: icehouse-3 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279103

Title:
  CAP_PORT_FILTER should be VIF_DETAILS

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron/plugins/ml2/drivers/mech_hyperv.py

  neutron/plugins/ml2/drivers/mech_linuxbridge.py

  neutron/plugins/ml2/drivers/mech_openvswitch.py

  
  {portbindings.CAP_PORT_FILTER: True})

  should be

  {portbindings.VIF_DETAILS: True})

  
  Blueprint:
  Implement the binding:profile port attribute in ML2
  https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279100] [NEW] keystone failed to start "No module named passlib.hash"

2014-02-11 Thread Steven Hardy
Public bug reported:

Got this error from a gate verify, don't see any existing bugs
describing the problem:

https://review.openstack.org/#/c/71929/

http://logs.openstack.org/29/71929/4/check/check-tempest-dsvm-postgres-
full/859ca89/console.html

2014-02-11 20:41:33.757 | 2014-02-11 20:41:33 [Call Trace]
2014-02-11 20:41:33.759 | 2014-02-11 20:41:33 ./stack.sh:902:start_keystone
2014-02-11 20:41:33.761 | 2014-02-11 20:41:33 
/opt/stack/new/devstack/lib/keystone:419:die
2014-02-11 20:41:33.782 | 2014-02-11 20:41:33 [ERROR] 
/opt/stack/new/devstack/lib/keystone:419 keystone did not start
2014-02-11 20:41:36.751 | Build step 'Execute shell' marked build as failure
2014-02-11 20:41:38.318 | [SCP] Connecting to static.openstack.org

http://logs.openstack.org/29/71929/4/check/check-tempest-dsvm-postgres-
full/859ca89/logs/screen-key.txt.gz

+ ln -sf /opt/stack/new/screen-logs/screen-key.2014-02-11-203430.log 
/opt/stack/new/screen-logs/screen-key.log
+ export PYTHONUNBUFFERED=1
+ PYTHONUNBUFFERED=1
+ exec /bin/bash -c 'cd /opt/stack/new/keystone && 
/opt/stack/new/keystone/bin/keystone-all --config-file 
/etc/keystone/keystone.conf --debug'
Traceback (most recent call last):
  File "/opt/stack/new/keystone/bin/keystone-all", line 47, in 
from keystone.common import sql
  File "/opt/stack/new/keystone/keystone/common/sql/__init__.py", line 18, in 

from keystone.common.sql.core import *
  File "/opt/stack/new/keystone/keystone/common/sql/core.py", line 32, in 

from keystone.common import utils
  File "/opt/stack/new/keystone/keystone/common/utils.py", line 28, in 
import passlib.hash
ImportError: No module named passlib.hash

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1279100

Title:
  keystone failed to start "No module named passlib.hash"

Status in OpenStack Identity (Keystone):
  New
Status in Tempest:
  New

Bug description:
  Got this error from a gate verify, don't see any existing bugs
  describing the problem:

  https://review.openstack.org/#/c/71929/

  http://logs.openstack.org/29/71929/4/check/check-tempest-dsvm-
  postgres-full/859ca89/console.html

  2014-02-11 20:41:33.757 | 2014-02-11 20:41:33 [Call Trace]
  2014-02-11 20:41:33.759 | 2014-02-11 20:41:33 ./stack.sh:902:start_keystone
  2014-02-11 20:41:33.761 | 2014-02-11 20:41:33 
/opt/stack/new/devstack/lib/keystone:419:die
  2014-02-11 20:41:33.782 | 2014-02-11 20:41:33 [ERROR] 
/opt/stack/new/devstack/lib/keystone:419 keystone did not start
  2014-02-11 20:41:36.751 | Build step 'Execute shell' marked build as failure
  2014-02-11 20:41:38.318 | [SCP] Connecting to static.openstack.org

  http://logs.openstack.org/29/71929/4/check/check-tempest-dsvm-
  postgres-full/859ca89/logs/screen-key.txt.gz

  + ln -sf /opt/stack/new/screen-logs/screen-key.2014-02-11-203430.log 
/opt/stack/new/screen-logs/screen-key.log
  + export PYTHONUNBUFFERED=1
  + PYTHONUNBUFFERED=1
  + exec /bin/bash -c 'cd /opt/stack/new/keystone && 
/opt/stack/new/keystone/bin/keystone-all --config-file 
/etc/keystone/keystone.conf --debug'
  Traceback (most recent call last):
File "/opt/stack/new/keystone/bin/keystone-all", line 47, in 
  from keystone.common import sql
File "/opt/stack/new/keystone/keystone/common/sql/__init__.py", line 18, in 

  from keystone.common.sql.core import *
File "/opt/stack/new/keystone/keystone/common/sql/core.py", line 32, in 

  from keystone.common import utils
File "/opt/stack/new/keystone/keystone/common/utils.py", line 28, in 

  import passlib.hash
  ImportError: No module named passlib.hash

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1279100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278853] Re: when i run "git review" for a bug, "Permission denied (publickey)."

2014-02-11 Thread Michael Still
Hi, the documentation on how to setup your development environment is
here --
https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer

Specifically, we no longer use SSH keys from Launchpad; you need to
upload your key to review.openstack.org.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278853

Title:
  when i run "git review" for a bug,"Permission denied (publickey)."

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I run "git review" for a bug, the log shows:
  Problems encountered installing commit-msg hook
  Permission denied (publickey).
  ---
  I have uploaded my SSH key id_rsa.pub to https://launchpad.net/~wangxh-smart,
  and my git config is:
  root@ubuntu:/opt/stack/glance# git config --list
  user.name=wangxiaohuastack
  user.email=wangxiaohuast...@gmail.com
  core.repositoryformatversion=0
  core.filemode=true
  core.bare=false
  core.logallrefupdates=true
  remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
  remote.origin.url=git://git.openstack.org/openstack/glance.git
  branch.master.remote=origin
  branch.master.merge=refs/heads/master
  
remote.gerrit.url=ssh://wangxiaohuast...@review.openstack.org:29418/openstack/glance.git
  remote.gerrit.fetch=+refs/heads/*:refs/remotes/gerrit/*
  root@ubuntu:/opt/stack/glance# 
  
---
  I want to know why. Thanks for responding as soon as possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 879526] Re: Instance fails to allocate the correct IP address after receiving two

2014-02-11 Thread Michael Still
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/879526

Title:
  Instance fails to allocate the correct IP address after receiving two

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  One of my users created an instance which failed to allocate the
  correct IP address.

  Digging a little further I have noticed that two IP addresses were
  passed to the instance. The first IP in the list was assigned to the
  instance in the database and dnsmasq's dhcp server allocated the
  second IP address.

  The instance was created using euca2ools.

  Message on nova-compute (pretty printed):

  2011-10-21 12:03:16,040 DEBUG nova.compute.manager [-] instance network_info: 
|
  [   [   {   u'bridge': u'br100',
  u'bridge_interface': u'eth0',
  u'cidr': u'172.16.60.0/24',
  u'cidr_v6': None,
  u'id': 1,
  u'injected': False,
  u'multi_host': False,
  u'vlan': None},
  {   u'broadcast': u'172.16.60.255',
  u'dhcp_server': u'172.16.60.1',
  u'dns': [u'8.8.8.8'],
  u'gateway': u'172.16.60.1',
  u'ips': [   {   u'enabled': u'1',
  u'ip': u'172.16.60.3',
  u'netmask': u'255.255.255.0'},
  {   u'enabled': u'1',
  u'ip': u'172.16.60.34',
  u'netmask': u'255.255.255.0'}],
  u'label': u'nova',
  u'mac': u'02:16:3e:30:f0:58',
  u'rxtx_cap': 0,
  u'should_create_bridge': True,
  u'should_create_vlan': False,
  u'vif_uuid': u'5a69-e155-4927-9f8f-52010cbbb0c4'}]]
  | from (pid=28045) _run_instance 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:394

  /var/lib/nova/networks/nova-br100.conf:

  02:16:3e:30:f0:58,server-2502.novalocal,172.16.60.34

  $ euca-describe-instances i-09c6
  RESERVATION r-2f81m0s5  user_project   default
  INSTANCEi-09c6  ami-0002172.16.60.3  172.16.60.3  
running user (user_project, ankaa)0  m1.large 
2011-10-20T15:55:36Znovaaki-0001ami-

  mysql> select count(*) from fixed_ips where address = "172.16.60.3" or 
address = "172.16.60.34" and instance_id <> NULL;
  +--+
  | count(*) |
  +--+
  |1 |
  +--+
  1 row in set (0.00 sec)

  mysql> select instance_id from fixed_ips where address = "172.16.60.3" or 
address = "172.16.60.34" and instance_id <> NULL;
  +-+
  | instance_id |
  +-+
  |2502 |
  +-+
  1 row in set (0.00 sec)

  mysql> select count(*) from fixed_ips where instance_id = 2502;
  +--+
  | count(*) |
  +--+
  |1 |
  +--+
  1 row in set (0.00 sec)

  mysql> select address from fixed_ips where instance_id = 2502;
  +-+
  |  address|
  +-+
  | 172.16.60.3 |
  +-+
  1 row in set (0.00 sec)

  /var/log/syslog:

  Oct 21 12:35:51 nova dnsmasq-dhcp[31614]: DHCPDISCOVER(br100) 172.16.60.34 
02:16:3e:30:f0:58 
  Oct 21 12:35:51 nova dnsmasq-dhcp[31614]: DHCPOFFER(br100) 172.16.60.34 
02:16:3e:30:f0:58 
  Oct 21 12:35:51 nova dnsmasq-dhcp[31614]: DHCPREQUEST(br100) 172.16.60.34 
02:16:3e:30:f0:58 
  Oct 21 12:35:51 nova dnsmasq-dhcp[31614]: DHCPACK(br100) 172.16.60.34 
02:16:3e:30:f0:58 server-2502
  Oct 21 12:37:52 nova dnsmasq-dhcp[31614]: DHCPDISCOVER(br100) 172.16.60.34 
02:16:3e:30:f0:58 
  Oct 21 12:37:52 nova dnsmasq-dhcp[31614]: DHCPOFFER(br100) 172.16.60.34 
02:16:3e:30:f0:58 
  Oct 21 12:37:52 nova dnsmasq-dhcp[31614]: DHCPREQUEST(br100) 172.16.60.34 
02:16:3e:30:f0:58 
  Oct 21 12:37:52 nova dnsmasq-dhcp[31614]: DHCPACK(br100) 172.16.60.34 
02:16:3e:30:f0:58 server-2502
  Oct 21 12:39:34 nova dnsmasq-dhcp[31614]: DHCPRELEASE(br100) 172.16.60.3 
02:16:3e:30:f0:58 unknown lease
  Oct 21 12:39:35 nova dnsmasq-dhcp[31614]: DHCPRELEASE(br100) 172.16.60.34 
02:16:3e:30:f0:58

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/879526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1012618] Re: GET console output of server in Rebuild status is throwing 500 error

2014-02-11 Thread Michael Still
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1012618

Title:
  GET console output of server in Rebuild status is throwing 500 error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  GET console output of server in Rebuild status is throwing 500 error.

  Actual Result:
  Is throwing 500 error code.

  Expected Result:
  Should return the console output else should shown an appropriate error 
message.

  LOG:
  

  [muralik@openstack27 ~]$nova list
  
  +--------------------------------------+--------+--------+---------------------+
  |                  ID                  |  Name  | Status |       Networks      |
  +--------------------------------------+--------+--------+---------------------+
  | 942296a6-74a1-496b-818d-491cb5084b3b | test-1 | ACTIVE | private_1=10.0.0.36 |
  +--------------------------------------+--------+--------+---------------------+
  [muralik@openstack27 ~]$nova rebuild 942296a6-74a1-496b-818d-491cb5084b3b 
bc1b9cb1-81e1-4b5c-8307-59bc1969c260
  
  +-------------------+----------------------------------------------------------+
  |      Property     |                           Value                          |
  +-------------------+----------------------------------------------------------+
  | OS-DCF:diskConfig | MANUAL                                                   |
  | accessIPv4        |                                                          |
  | accessIPv6        |                                                          |
  | adminPass         | FUFQxs4tQLce                                             |
  | config_drive      |                                                          |
  | created           | 2012-06-13T11:57:51Z                                     |
  | flavor            | m1.tiny                                                  |
  | hostId            | 3bfc2f6b4e561b4edffb8502d1a3c97c4c836963344a770520006e22 |
  | id                | 942296a6-74a1-496b-818d-491cb5084b3b                     |
  | image             | cirros-0.3-x86_64                                        |
  | key_name          |                                                          |
  | metadata          | {}                                                       |
  | name              | test-1                                                   |
  | private_1 network | 10.0.0.36                                                |
  | progress          | 0                                                        |
  | status            | REBUILD                                                  |
  | tenant_id         | admin                                                    |
  | updated           | 2012-06-13T12:03:33Z                                     |
  | user_id           | admin                                                    |
  +-------------------+----------------------------------------------------------+
  [muralik@openstack27 ~]$nova --debug console-log 
942296a6-74a1-496b-818d-491cb5084b3b --length 10
  connect: (10.233.52.236, 8774)
  send: 'GET /v1.1 HTTP/1.1\r\nHost: 10.233.52.236:8774\r\nx-auth-project-id: 
admin\r\naccept-encoding: gzip, deflate\r\nx-auth-user: admin\r\nuser-agent: 
python-novaclient\r\nx-auth-key: testuser\r\naccept: application/json\r\n\r\n'
  reply: 'HTTP/1.1 204 No Content\r\n'
  header: Content-Length: 0
  header: X-Auth-Token: admin:admin
  header: X-Server-Management-Url: http://10.233.52.236:8774/v1.1/admin
  header: Content-Type: text/plain; charset=UTF-8
  header: Date: Wed, 13 Jun 2012 12:03:37 GMT
  send: 'GET /v1.1/admin/servers/942296a6-74a1-496b-818d-491cb5084b3b 
HTTP/1.1\r\nHost: 10.233.52.236:8774\r\nx-auth-project-id: 
admin\r\nx-auth-token: admin:admin\r\naccept-encoding: gzip, deflate\r\naccept: 
application/json\r\nuser-agent: python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: req-8eaa7fad-668c-471e-afd2-6e6294c1e168
  header: Content-Type: application/json
  header: Content-Length: 1230
  header: Date: Wed, 13 Jun 2012 12:03:37 GMT
  send: u'GET /v1.1/admin/servers/942296a6-74a1-496b-818d-491cb5084b3b 
HTTP/1.1\r\nHost: 10.233.52.236:8774\r\nx-auth-project-id: 
admin\r\nx-auth-token: admin:admin\r\naccept-encoding: gzip, deflate\r\naccept: 
application/json\r\nuser-agent: python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: req-b6e527d3-c20c-4e18-aa9b-79f62e88199c
  header: Content-Type: application/json
  header: Content-Length: 1230
  header: Date: Wed, 13 Jun 2012 12:03:38 GMT
  send: u'POST /v1.1/admin/servers/942296a6-74a1-496b-818d-491cb5084b3b/action 
HTTP/1.1\r\nHost: 10.233.52.236:8774\r\nContent-Length: 
41\r\nx-auth-project-id: admin\r\naccept-encoding: gzip, deflate\r\naccept: 
application/json\r\nx-auth-t

[Yahoo-eng-team] [Bug 1231339] Re: keystone s3_token middleware not usable

2014-02-11 Thread Jamie Lennox
@Dolph: This would appear to me to be a keystone issue or possibly swift
should be doing gettext before importing the middleware. There are no
gettext translations in the client.

The exception here is coming from keystone - Swift should update to use
s3token from keystoneclient now that it has been released.

Setting to keystone; send it back if I'm wrong.
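As Jamie notes, one workaround is for the consumer to install gettext before importing the middleware, so the `_` builtin exists when keystone's module-level translations run. A minimal sketch of that workaround (not the merged fix; the domain name is an assumption):

```python
import gettext

# Install `_` into builtins before importing anything from keystone, so
# that module-level `_("...")` calls in keystone.exception can resolve.
# (A sketch of the workaround, not the merged fix.)
gettext.install('keystone')

# With no translation catalog present, `_` falls back to the identity:
assert _('hello') == 'hello'

# Only after this would it be safe to do, e.g.:
# from keystone.middleware import s3_token
```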

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: python-keystoneclient
   Status: New => Invalid

** Changed in: keystone
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1231339

Title:
  keystone s3_token middleware not usable

Status in OpenStack Identity (Keystone):
  New
Status in Python client library for Keystone:
  Invalid
Status in “swift” package in Ubuntu:
  Confirmed

Bug description:
  s3 middleware authentication is causing swift-proxy server to fail to
  start:

  proxy-server (21880) appears to have stopped
  Starting proxy-server...(/etc/swift/proxy-server.conf)
  Traceback (most recent call last):
File "/usr/bin/swift-proxy-server", line 22, in 
  run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)
File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 256, in 
run_wsgi
  loadapp(conf_path, global_conf={'log_name': log_name})
File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 107, in 
wrapper
  return f(conf_uri, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, 
in loadapp
  return loadobj(APP, uri, name=name, **kw)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 271, 
in loadobj
  global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, 
in loadcontext
  global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, 
in _loadconfig
  return loader.get_context(object_type, name, global_conf)
File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 55, in 
get_context
  object_type, name=name, global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 450, 
in get_context
  global_additions=global_additions)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 562, 
in _pipeline_app_context
  for name in pipeline[:-1]]
File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 55, in 
get_context
  object_type, name=name, global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 458, 
in get_context
  section)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 517, 
in _context_from_explicit
  value = import_string(found_expr)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 22, 
in import_string
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2015, in load
  entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/usr/lib/python2.7/dist-packages/keystone/middleware/__init__.py", 
line 18, in 
  from keystone.middleware.core import *
File "/usr/lib/python2.7/dist-packages/keystone/middleware/core.py", line 
21, in 
  from keystone.common import utils
File "/usr/lib/python2.7/dist-packages/keystone/common/utils.py", line 32, 
in 
  from keystone import exception
File "/usr/lib/python2.7/dist-packages/keystone/exception.py", line 63, in 

  class ValidationError(Error):
File "/usr/lib/python2.7/dist-packages/keystone/exception.py", line 64, in 
ValidationError
  message_format = _("Expecting to find %(attribute)s in %(target)s."
  NameError: name '_' is not defined

  ProblemType: Bug
  DistroRelease: Ubuntu 13.10
  Package: swift-proxy 1.9.1-0ubuntu2
  ProcVersionSignature: Ubuntu 3.11.0-4.9-generic 3.11.0-rc7
  Uname: Linux 3.11.0-4-generic x86_64
  ApportVersion: 2.12.4-0ubuntu1
  Architecture: amd64
  Date: Thu Sep 26 09:14:50 2013
  Ec2AMI: ami-0092
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  MarkForUpload: True
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: swift
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1231339/+subscriptions



[Yahoo-eng-team] [Bug 1279055] [NEW] vmware: performance issue on restart of nova compute when there are a large number of vms

2014-02-11 Thread Sreeram Yerrapragada
Public bug reported:

When there is a large number of VMs in the cloud, there is a performance
issue on restart of nova-compute. The compute server tries to sync the
states of all the VMs and does not respond until the sync is complete.
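A generic mitigation for this pattern (a sketch only, not the nova fix; all names here are hypothetical) is to run the expensive state sync in a background thread so the service can start answering requests while it catches up:

```python
import threading

def sync_power_states(instances):
    """Stand-in for the per-VM state reconciliation (hypothetical)."""
    return {vm: 'synced' for vm in instances}

class Compute(object):
    def __init__(self, instances):
        self.instances = instances
        self.synced = {}
        self._sync_done = threading.Event()

    def init_host(self):
        # Kick off the sync without blocking service startup.
        t = threading.Thread(target=self._background_sync)
        t.daemon = True
        t.start()
        return t

    def _background_sync(self):
        self.synced = sync_power_states(self.instances)
        self._sync_done.set()

svc = Compute(['vm-%d' % i for i in range(1000)])
thread = svc.init_host()   # returns immediately; sync runs in background
thread.join()
assert len(svc.synced) == 1000
```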

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279055

Title:
  vmware: performance issue on restart of nova compute when there are  a
  large number of vms

Status in OpenStack Compute (Nova):
  New

Bug description:
  When there are large number of VMs in the could there is a performance
  issue on restarting of nova compute. Compute server tries to sync
  states of all the vm and does not respond until sync is complete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279055/+subscriptions



[Yahoo-eng-team] [Bug 1279052] [NEW] vmware: Performance issue with Delete VM when there are 100s of networks and ports in the cloud

2014-02-11 Thread Sreeram Yerrapragada
Public bug reported:

When there are 100s of networks and ports in the system, concurrent
delete VM actions use 100% CPU and bring the system to a standstill.
None of the OpenStack services respond until the VM is deleted.
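One generic way to keep a burst of deletes from monopolising the CPU (a sketch only; the limit and all names are assumptions, not the nova fix) is to bound how many backend-intensive teardowns run at once:

```python
import threading

# Hypothetical cap on concurrent network-cleanup work during VM delete,
# so a burst of deletes cannot monopolise the CPU.
_DELETE_SEMAPHORE = threading.Semaphore(2)

def teardown_vm_networking(vm, ports):
    with _DELETE_SEMAPHORE:  # at most 2 teardowns in flight at a time
        return ['%s/%s removed' % (vm, port) for port in ports]

result = teardown_vm_networking('vm-1', ['port-a', 'port-b'])
assert result == ['vm-1/port-a removed', 'vm-1/port-b removed']
```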

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279052

Title:
  vmware: Performance issue with Delete VM when there are 100s of
  networks and ports in the cloud

Status in OpenStack Compute (Nova):
  New

Bug description:
  When there are 100s of networks and ports in the system, concurrent
  delete VM actions use 100% CPU and bring the system to a standstill.
  None of the OpenStack services respond until the VM is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279052/+subscriptions



[Yahoo-eng-team] [Bug 1279032] Re: openstack-dashboard not fully updated for Django 1.5

2014-02-11 Thread Chuck Short
** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279032

Title:
  openstack-dashboard not fully updated for Django 1.5

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in “horizon” package in Ubuntu:
  New

Bug description:
  Installing Grizzly on Ubuntu 12.04 and using the Cloud Archive, when
  accessing Horizon I get an Internal Server Error caused by using
  "from django.utils.translation import force_unicode" in
  /usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/users/forms.py

  Full traceback: http://pastebin.ubuntu.com/6916728/

  It seems to be fixed upstream:
  
https://github.com/openstack/horizon/commit/5d32caf3af3b11fcf496ebb04ccfc44f49cbe0b9

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1279032/+subscriptions



[Yahoo-eng-team] [Bug 1279032] [NEW] openstack-dashboard not fully updated for Django 1.5

2014-02-11 Thread Ramon Acedo
Public bug reported:

Installing Grizzly on Ubuntu 12.04 and using the Cloud Archive, when
accessing Horizon I get an Internal Server Error caused by using
"from django.utils.translation import force_unicode" in
/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/users/forms.py

Full traceback: http://pastebin.ubuntu.com/6916728/

It seems to be fixed upstream:
https://github.com/openstack/horizon/commit/5d32caf3af3b11fcf496ebb04ccfc44f49cbe0b9
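The usual shape of this kind of fix is a compatibility import that prefers the old name and falls back to the new one (a sketch only; the exact change is in the linked commit, and the final identity fallback is just so the snippet runs without Django installed):

```python
# force_unicode was removed from newer Django in favour of force_text,
# so fall back at import time and keep one name for the rest of the code.
try:
    from django.utils.encoding import force_unicode  # older Django
except ImportError:
    try:
        from django.utils.encoding import force_text as force_unicode
    except ImportError:
        # No usable Django available: identity fallback for illustration.
        force_unicode = lambda s: u'%s' % s

assert force_unicode('abc') == u'abc'
```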

** Affects: horizon
 Importance: Undecided
 Status: Fix Released

** Affects: horizon (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279032

Title:
  openstack-dashboard not fully updated for Django 1.5

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in “horizon” package in Ubuntu:
  New

Bug description:
  Installing Grizzly on Ubuntu 12.04 and using the Cloud Archive, when
  accessing Horizon I get an Internal Server Error caused by using
  "from django.utils.translation import force_unicode" in
  /usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/users/forms.py

  Full traceback: http://pastebin.ubuntu.com/6916728/

  It seems to be fixed upstream:
  
https://github.com/openstack/horizon/commit/5d32caf3af3b11fcf496ebb04ccfc44f49cbe0b9

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1279032/+subscriptions



[Yahoo-eng-team] [Bug 1278920] Re: Bad performance when editing project members

2014-02-11 Thread Guilherme Lazzari
** Also affects: keystone
   Importance: Undecided
   Status: New

** Description changed:

  In an environment where there are more than 1000 users, the Edit
  Project Members dialog performs a query for each user to get its roles,
  which results in bad performance. There are some alternatives to fix
  this on the UI side, but we could also fix it in keystone with an
  extension, for example. Please reassign to keystone if needed.
+ 
+ In the best scenario, Horizon takes 8 secs to render the UI with 1000 users 
and 2 assigned users.
+ On the other hand, it takes 1 min to show the UI with 1000 users and 1000 
assigned users.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1278920

Title:
  Bad performance when editing project members

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Identity (Keystone):
  New

Bug description:
  In an environment where there are more than 1000 users, the Edit
  Project Members dialog performs a query for each user to get its roles,
  which results in bad performance. There are some alternatives to fix
  this on the UI side, but we could also fix it in keystone with an
  extension, for example. Please reassign to keystone if needed.

  In the best scenario, Horizon takes 8 secs to render the UI with 1000 users 
and 2 assigned users.
  On the other hand, it takes 1 min to show the UI with 1000 users and 1000 
assigned users.
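The N-queries pattern here collapses to one round-trip if role assignments are fetched in bulk and grouped client-side (an illustrative sketch; the fetch function is hypothetical, though keystone v3's role assignments API offers this kind of bulk listing server-side):

```python
def fetch_role_assignments(project_id):
    """Hypothetical single bulk call (one round-trip) returning
    (user_id, role_id) pairs for the whole project."""
    return [('u1', 'member'), ('u2', 'admin'), ('u1', 'admin')]

def roles_by_user(project_id):
    # Group the one bulk result per user instead of querying per user.
    roles = {}
    for user_id, role_id in fetch_role_assignments(project_id):
        roles.setdefault(user_id, []).append(role_id)
    return roles

assert roles_by_user('p1') == {'u1': ['member', 'admin'], 'u2': ['admin']}
```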

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278920/+subscriptions



[Yahoo-eng-team] [Bug 1279007] [NEW] check-tempest-dsvm-cells-full fails hard in check queue

2014-02-11 Thread Matt Riedemann
Public bug reported:

check-tempest-dsvm-cells-full failed with a ton of errors in this patch,
but I don't think it's related to this patch:

check-tempest-dsvm-cells-full

Looking at the first error in the parent cell log here:

http://logs.openstack.org/59/72559/2/check/check-tempest-dsvm-cells-
full/67d158b/logs/screen-n-cell-region.txt.gz?level=TRACE

Looks like some issue with messaging.  I don't know if this is a
regression due to the recent oslo.messaging changes in nova or not.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cells testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279007

Title:
  check-tempest-dsvm-cells-full fails hard in check queue

Status in OpenStack Compute (Nova):
  New

Bug description:
  check-tempest-dsvm-cells-full failed with a ton of errors in this
  patch, but I don't think it's related to this patch:

  check-tempest-dsvm-cells-full

  Looking at the first error in the parent cell log here:

  http://logs.openstack.org/59/72559/2/check/check-tempest-dsvm-cells-
  full/67d158b/logs/screen-n-cell-region.txt.gz?level=TRACE

  Looks like some issue with messaging.  I don't know if this is a
  regression due to the recent oslo.messaging changes in nova or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279007/+subscriptions



[Yahoo-eng-team] [Bug 1279000] [NEW] db migrate script to set charset=utf8 for all tables

2014-02-11 Thread Bhuvan Arumugam
Public bug reported:

2014-02-11 17:42:15.495 26564 CRITICAL glance [-] ValueError: Tables 
"image_locations,image_members,image_properties,image_tags,images,migrate_version,task_info,tasks"
 have non utf8 collation, please make sure all tables are CHARSET=utf8
2014-02-11 17:42:15.495 26564 TRACE glance Traceback (most recent call last):
2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv//bin/glance-manage", line 13, in 
2014-02-11 17:42:15.495 26564 TRACE glance sys.exit(main())
2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/cmd/manage.py",
 line 220, in main
2014-02-11 17:42:15.495 26564 TRACE glance return CONF.command.action_fn()
2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/cmd/manage.py",
 line 121, in sync
2014-02-11 17:42:15.495 26564 TRACE glance CONF.command.current_version)
2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/cmd/manage.py",
 line 98, in sync
2014-02-11 17:42:15.495 26564 TRACE glance 
migration.db_sync(db_migration.MIGRATE_REPO_PATH, version)
2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/openstack/common/db/sqlalchemy/migration.py",
 line 195, in db_sync
2014-02-11 17:42:15.495 26564 TRACE glance _db_schema_sanity_check()
2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/openstack/common/db/sqlalchemy/migration.py",
 line 216, in _db_schema_sanity_check
2014-02-11 17:42:15.495 26564 TRACE glance ) % ','.join(table_names))
2014-02-11 17:42:15.495 26564 TRACE glance ValueError: Tables 
"image_locations,image_members,image_properties,image_tags,images,migrate_version,task_info,tasks"
 have non utf8 collation, please make sure all tables are CHARSET=utf8
2014-02-11 17:42:15.495 26564 TRACE glance

glance-manage fails to start with the above error. It's likely due to the
following commit in oslo, wherein we enforce charset=utf8 for all tables:
7aa94df Add a db check for CHARSET=utf8

I think we should have a migration script to change the charset for all
tables.
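Such a migration script might look like the following sketch (the table list comes from the error above; the engine handling is an assumption, MySQL-only since charset/collation is a MySQL concern):

```python
# Sketch of a migrate-repo script converting the listed tables to utf8.
TABLES = ('image_locations', 'image_members', 'image_properties',
          'image_tags', 'images', 'migrate_version', 'task_info', 'tasks')

def upgrade(migrate_engine):
    if migrate_engine.name != 'mysql':
        return  # other backends don't need the conversion
    for table in TABLES:
        migrate_engine.execute(
            'ALTER TABLE %s CONVERT TO CHARACTER SET utf8' % table)

# Dry run against a stand-in engine that just records the SQL issued:
class FakeEngine(object):
    name = 'mysql'
    def __init__(self):
        self.statements = []
    def execute(self, sql):
        self.statements.append(sql)

engine = FakeEngine()
upgrade(engine)
assert len(engine.statements) == 8
```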

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1279000

Title:
  db migrate script to set charset=utf8 for all tables

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  2014-02-11 17:42:15.495 26564 CRITICAL glance [-] ValueError: Tables 
"image_locations,image_members,image_properties,image_tags,images,migrate_version,task_info,tasks"
 have non utf8 collation, please make sure all tables are CHARSET=utf8
  2014-02-11 17:42:15.495 26564 TRACE glance Traceback (most recent call last):
  2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv//bin/glance-manage", line 13, in 
  2014-02-11 17:42:15.495 26564 TRACE glance sys.exit(main())
  2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/cmd/manage.py",
 line 220, in main
  2014-02-11 17:42:15.495 26564 TRACE glance return CONF.command.action_fn()
  2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/cmd/manage.py",
 line 121, in sync
  2014-02-11 17:42:15.495 26564 TRACE glance CONF.command.current_version)
  2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/cmd/manage.py",
 line 98, in sync
  2014-02-11 17:42:15.495 26564 TRACE glance 
migration.db_sync(db_migration.MIGRATE_REPO_PATH, version)
  2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/openstack/common/db/sqlalchemy/migration.py",
 line 195, in db_sync
  2014-02-11 17:42:15.495 26564 TRACE glance _db_schema_sanity_check()
  2014-02-11 17:42:15.495 26564 TRACE glance   File 
"/usr/local/csi/share/csi-glance.venv/lib/python2.6/site-packages/glance/openstack/common/db/sqlalchemy/migration.py",
 line 216, in _db_schema_sanity_check
  2014-02-11 17:42:15.495 26564 TRACE glance ) % ','.join(table_names))
  2014-02-11 17:42:15.495 26564 TRACE glance ValueError: Tables 
"image_locations,image_members,image_properties,image_tags,images,migrate_version,task_info,tasks"
 have non utf8 collation, please make sure all tables are CHARSET=utf8
  2014-02-11 17:42:15.495 26564 TRACE glance

  glance-manage fails to start with the above error. It's likely due to the
following commit in oslo, wherein we enforce charset=utf8 for all tables:
  7a

[Yahoo-eng-team] [Bug 1278991] [NEW] NSX: Deadlock can occur in sync code

2014-02-11 Thread Aaron Rosen
Public bug reported:

The sync code that syncs the operational status to the db calls out to
nsx within a db transaction which can cause deadlock if another request
comes in and eventlet context switches it.
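The usual remedy for this pattern is to do the slow backend call first and keep the transaction write-only (a sketch with hypothetical names, not the merged patch):

```python
import contextlib

@contextlib.contextmanager
def db_transaction(log):
    """Stand-in for a DB session transaction; records begin/commit."""
    log.append('begin')
    yield
    log.append('commit')

def get_nsx_status(switch_id):
    """Hypothetical blocking call to the NSX backend."""
    return 'ACTIVE'

def sync_status(switch_id, log):
    # Do the slow network call first, outside any transaction ...
    status = get_nsx_status(switch_id)
    # ... then keep the transaction short: write only.
    with db_transaction(log):
        log.append('update status=%s' % status)

log = []
sync_status('ls-1', log)
assert log == ['begin', 'update status=ACTIVE', 'commit']
```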

** Affects: neutron
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: New


** Tags: nicira

** Changed in: neutron
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Changed in: neutron
   Importance: Undecided => High

** Tags added: nicira

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1278991

Title:
  NSX: Deadlock can occur in sync code

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The sync code that syncs the operational status to the db calls out to
  nsx within a db transaction which can cause deadlock if another
  request comes in and eventlet context switches it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1278991/+subscriptions



[Yahoo-eng-team] [Bug 1278956] [NEW] Table rendering broken for certain objects

2014-02-11 Thread Brian DeHamer
Public bug reported:

The patch that was submitted for Bug #1203391
(https://bugs.launchpad.net/horizon/+bug/1203391) introduced a slight
refactoring of the Column.get_raw_data method
(https://github.com/openstack/horizon/commit/abb76704d014eab68dc0aabe6c8d1d0d762e56e9#diff-0).
Specifically, the ordering of the Dict lookup and basic object lookup
was reversed.

Unfortunately, this results in errors for certain objects that may want
to be rendered in a table. Any object which is iterable will be treated
as a dictionary. However, an error will be raised if that object doesn't
also implement a 'get' method (called on line 312). I have a few pages
which use tables to display data from mongoengine-based models -- these
models are iterable, but do NOT implement a get method and are now
resulting in errors when rendered in a table.

  File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
1553, in get_rows
row = self._meta.row_class(self, datum)
  File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
477, in __init__
self.load_cells()
  File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
503, in load_cells
cell = table._meta.cell_class(datum, column, self)
  File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
592, in __init__
self.data = self.get_data(datum, column, row)
  File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
628, in get_data
data = column.get_data(datum)
  File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
339, in get_data
data = self.get_raw_data(datum)
  File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
308, in get_raw_data
if callable(self.transform):
AttributeError: 'Application' object has no attribute 'get'
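A defensive lookup of the kind the report implies would only take the mapping path when the object really supports it (a simplified sketch of the safer ordering, not the merged patch):

```python
def get_raw_data(datum, column_name):
    # Prefer attribute access; fall back to mapping access only when the
    # object actually has a get() method.
    if hasattr(datum, column_name):
        return getattr(datum, column_name)
    if hasattr(datum, 'get'):
        return datum.get(column_name)
    raise AttributeError(column_name)

class Model(object):
    # Iterable but without .get(), like the mongoengine documents above.
    name = 'demo'
    def __iter__(self):
        return iter(['name'])

assert get_raw_data(Model(), 'name') == 'demo'
assert get_raw_data({'name': 'd2'}, 'name') == 'd2'
```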

** Affects: horizon
 Importance: Undecided
 Assignee: Brian DeHamer (brian-dehamer)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Brian DeHamer (brian-dehamer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278956

Title:
  Table rendering broken for certain objects

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The patch that was submitted for Bug #1203391
  (https://bugs.launchpad.net/horizon/+bug/1203391) introduced a slight
  refactoring of the Column.get_raw_data method
  
(https://github.com/openstack/horizon/commit/abb76704d014eab68dc0aabe6c8d1d0d762e56e9#diff-0).
  Specifically, the ordering of the Dict lookup and basic object lookup
  was reversed.

  Unfortunately, this results in errors for certain objects that may
  want to be rendered in a table. Any object which is iterable will be
  treated as a dictionary. However, an error will be raised if that
  object doesn't also implement a 'get' method (called on line 312). I
  have a few pages which use tables to display data from mongoengine-
  based models -- these models are iterable, but do NOT implement a get
  method and are now resulting in errors when rendered in a table.

File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
1553, in get_rows
  row = self._meta.row_class(self, datum)
File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
477, in __init__
  self.load_cells()
File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
503, in load_cells
  cell = table._meta.cell_class(datum, column, self)
File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
592, in __init__
  self.data = self.get_data(datum, column, row)
File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
628, in get_data
  data = column.get_data(datum)
File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
339, in get_data
  data = self.get_raw_data(datum)
File 
"/adminui/.venv/local/lib/python2.7/site-packages/horizon/tables/base.py", line 
308, in get_raw_data
  if callable(self.transform):
  AttributeError: 'Application' object has no attribute 'get'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278956/+subscriptions



[Yahoo-eng-team] [Bug 1278939] [NEW] Insufficient upper bounds checking performed on flavor-create API

2014-02-11 Thread John Warren
Public bug reported:

The fix for bug #1243195 added upper-bound checking for flavor fields
based on sys.maxint.  The problem is that the value of maxint varies by
platform (either 32- or 64-bit signed integers), while the INTEGER type
in databases is limited to 32-bit signed integers.  This means that the
upper-bound checking does not work on platforms where sys.maxint returns
the maximum value of a 64-bit signed integer.  The original reported
problems (as shown in http://paste.openstack.org/show/48988/) persist on
64-bit platforms.
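The platform-independent check validates against the 32-bit signed maximum explicitly rather than sys.maxint (a sketch; the field name and error wording are illustrative):

```python
# Database INTEGER is 32-bit signed regardless of platform, so validate
# against an explicit constant instead of sys.maxint.
DB_MAX_INT = 2 ** 31 - 1   # 2147483647

def validate_flavor_field(name, value):
    if not (0 <= value <= DB_MAX_INT):
        raise ValueError('%s must be between 0 and %d' % (name, DB_MAX_INT))
    return value

assert validate_flavor_field('memory_mb', 512) == 512
try:
    validate_flavor_field('memory_mb', 2 ** 31)   # overflows on any platform
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError')
```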

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278939

Title:
  Insufficient upper bounds checking performed on flavor-create API

Status in OpenStack Compute (Nova):
  New

Bug description:
  The fix for bug #1243195 added upper-bound checking for flavor fields
  based on sys.maxint.  The problem is that the value of maxint varies
  by platform (either 32- or 64-bit signed integers), while the INTEGER
  type in databases is limited to 32-bit signed integers.  This means
  that the upper-bound checking does not work on platforms where
  sys.maxint returns the maximum value of a 64-bit signed integer.  The
  original reported problems (as shown in
  http://paste.openstack.org/show/48988/) persist on 64-bit platforms.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278939/+subscriptions



[Yahoo-eng-team] [Bug 1278920] [NEW] Bad performance when editing project members

2014-02-11 Thread Guilherme Lazzari
Public bug reported:

In an environment where there are more than 1000 users, the Edit Project
Members dialog performs a query for each user to get its roles, which
results in bad performance. There are some alternatives to fix this on
the UI side, but we could also fix it in keystone with an extension, for
example. Please reassign to keystone if needed.

** Affects: horizon
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278920

Title:
  Bad performance when editing project members

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  In an environment where there are more than 1000 users, the Edit
  Project Members dialog performs a query for each user to get its
  roles, which results in bad performance. There are some alternatives
  to fix this on the UI side, but we could also fix it in keystone with
  an extension, for example. Please reassign to keystone if needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278920/+subscriptions



[Yahoo-eng-team] [Bug 1271311] Re: Neutron should disallow a CIDR of /32

2014-02-11 Thread Paul Ward
It's been decided to abandon this fix because it's preventing something
that, while not useful, is technically "ok".  The better place for this
fix would be in the UI as a hint to the user.

For more info: http://lists.openstack.org/pipermail/openstack-
dev/2014-January/025385.html

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271311

Title:
  Neutron should disallow a CIDR of /32

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  NeutronDbPluginV2._validate_allocation_pools() currently does basic
  checks to be sure you don't have an invalid subnet specified.
  However, one missing check is for a CIDR of /32. Such a subnet would
  have only one valid IP in it, which would be consumed by the gateway,
  leaving the network dead since no IPs are left over to allocate to
  VMs.

  I propose a change to disallow start_ip == end_ip in
  NeutronDbPluginV2._validate_allocation_pools() to cover the CIDR of
  /32 case.
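
  The proposed guard can be sketched as follows (a hypothetical
  illustration, not the actual NeutronDbPluginV2 code):

```python
import ipaddress

def validate_allocation_pool(start_ip, end_ip):
    """Reject an allocation pool that cannot serve any VM.

    Sketch of the proposed extra check: a /32 subnet collapses to a
    single address, which the gateway consumes, so no IPs remain.
    """
    start = ipaddress.ip_address(start_ip)
    end = ipaddress.ip_address(end_ip)
    if start > end:
        raise ValueError("allocation pool start is after its end")
    if start == end:
        # start_ip == end_ip covers the CIDR /32 case described above.
        raise ValueError("allocation pool must contain more than one address")
```

  A valid multi-address pool passes silently; a single-address pool
  raises before the subnet is ever created.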

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233188] Re: Cant create VM with rbd backend enabled

2014-02-11 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Importance: Undecided => Medium

** Changed in: nova/havana
   Status: New => In Progress

** Changed in: nova/havana
 Assignee: (unassigned) => Xavier Queralt (xqueralt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233188

Title:
  Cant create VM with rbd backend enabled

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  nova-compute.log:

  2013-09-30 15:52:18.897 12884 ERROR nova.compute.manager 
[req-d112a8fd-89c4-4b5b-b6c2-1896dcd0e4ab f70773b792354571a10d44260397fde1 
b9e4ccd38a794fee82dfb06a52ec3cfd] [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] Error: libvirt_info() takes exactly 6 
arguments (7 given)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] Traceback (most recent call last):
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1037, in 
_build_instance
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] set_access_ip=set_access_ip)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1410, in _spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1407, in _spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] block_device_info)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2069, in 
spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] write_to_disk=True)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3042, in 
to_xml
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] disk_info, rescue, block_device_info)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2922, in 
get_guest_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] inst_type):
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2699, in 
get_guest_storage_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] inst_type)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2662, in 
get_guest_disk_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] self.get_hypervisor_version())
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] TypeError: libvirt_info() takes exactly 6 
arguments (7 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1233188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1158807] Re: Qpid SSL protocol

2014-02-11 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
 Assignee: (unassigned) => Xavier Queralt (xqueralt)

** Changed in: nova/grizzly
   Importance: Undecided => High

** Changed in: nova/grizzly
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1158807

Title:
  Qpid SSL protocol

Status in Cinder:
  Invalid
Status in Cinder grizzly series:
  In Progress
Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  By default, TCP is used as the transport for QPID connections. If you
  would like to enable SSL, there is a flag 'qpid_protocol = ssl'
  available in nova.conf. However, the python-qpid client expects a
  transport type instead of a protocol. It seems to be a bug:

  Solution:
  
(https://github.com/openstack/nova/blob/master/nova/openstack/common/rpc/impl_qpid.py#L323)

  WRONG:self.connection.protocol = self.conf.qpid_protocol
  CORRECT:self.connection.transport = self.conf.qpid_protocol

  Regards,
  JuanFra.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1158807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276510] Re: MySQL 2013 lost connection is being raised

2014-02-11 Thread Flavio Percoco
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276510

Title:
  MySQL 2013 lost connection is being raised

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed

Bug description:
  MySQL's error code 2013 is not in the list of lost-connection errors.
  This causes the reconnect loop to raise this error and stop retrying.

  [database]
  max_retries = -1
  retry_interval = 1

  mysql down:

  ==> scheduler.log <==
  2014-02-03 16:51:50.956 16184 CRITICAL cinder [-] (OperationalError) (2013, 
"Lost connection to MySQL server at 'reading initial communication packet', 
system error: 0") None None
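
  A sketch of how disconnect detection by error code might work (the
  code set and message parsing here are illustrative, not the actual
  oslo implementation):

```python
import re

# Illustrative set of MySQL client error codes treated as "connection
# lost" -- the point of the bug is that 2013 was missing from such a list.
DISCONNECT_CODES = {2002, 2006, 2013}

def is_disconnect_error(message):
    """Return True when the error message carries a known disconnect code.

    The reconnect loop would keep retrying for these codes instead of
    raising and giving up.
    """
    match = re.search(r"\((\d{4}),", message)
    return bool(match) and int(match.group(1)) in DISCONNECT_CODES
```

  With 2013 in the set, the scheduler log message above would be
  classified as a disconnect and retried under max_retries = -1.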

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278853] [NEW] when i run "git review" for a bug, "Permission denied (publickey)."

2014-02-11 Thread wangxh.openstack
Public bug reported:

When I run "git review" for a bug, the log shows:
Problems encountered installing commit-msg hook
Permission denied (publickey).
---
and I've uploaded my SSH key id_rsa.pub to https://launchpad.net/~wangxh-smart

https://launchpad.net/~wangxh-smart
and my git config is:
root@ubuntu:/opt/stack/glance# git config --list
user.name=wangxiaohuastack
user.email=wangxiaohuast...@gmail.com
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
remote.origin.url=git://git.openstack.org/openstack/glance.git
branch.master.remote=origin
branch.master.merge=refs/heads/master
remote.gerrit.url=ssh://wangxiaohuast...@review.openstack.org:29418/openstack/glance.git
remote.gerrit.fetch=+refs/heads/*:refs/remotes/gerrit/*
root@ubuntu:/opt/stack/glance# 
---
I want to know why. Thanks for responding as soon as possible.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278853

Title:
  when i run "git review" for a bug,"Permission denied (publickey)."

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I run "git review" for a bug, the log shows:
  Problems encountered installing commit-msg hook
  Permission denied (publickey).
  
---
  and I've uploaded my SSH key id_rsa.pub to https://launchpad.net/~wangxh-smart

  https://launchpad.net/~wangxh-smart
  and my git config is:
  root@ubuntu:/opt/stack/glance# git config --list
  user.name=wangxiaohuastack
  user.email=wangxiaohuast...@gmail.com
  core.repositoryformatversion=0
  core.filemode=true
  core.bare=false
  core.logallrefupdates=true
  remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
  remote.origin.url=git://git.openstack.org/openstack/glance.git
  branch.master.remote=origin
  branch.master.merge=refs/heads/master
  
remote.gerrit.url=ssh://wangxiaohuast...@review.openstack.org:29418/openstack/glance.git
  remote.gerrit.fetch=+refs/heads/*:refs/remotes/gerrit/*
  root@ubuntu:/opt/stack/glance# 
  
---
  I want to know why. Thanks for responding as soon as possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278849] [NEW] Need more log info for "Instance could not be found" error.

2014-02-11 Thread Venkatesh Sampath
Public bug reported:

Rarely, when looking up the details of an 'instance_id', we see the
following error even though the instance pertaining to that instance_id
is still available and active.

call: GET /${project_id}/servers/${instance_id}
HTTP exception thrown: Instance could not be found (404)

Apart from the above, there is not enough information to trace back
what exactly caused the 404. A log message with a traceback would help
us identify the root cause.

This bug is to add more log messages with traceback for more
information.
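
A minimal sketch of the kind of logging the report asks for, assuming a
hypothetical lookup helper (not actual nova code):

```python
import logging

LOG = logging.getLogger("nova.api.sketch")  # hypothetical logger name

def show_server(instance_id, lookup):
    """Look up an instance, logging the full traceback before the 404.

    LOG.exception records both the message and the traceback, so the
    root cause of the 404 is preserved in the logs even when the HTTP
    response only says "Instance could not be found".
    """
    try:
        return lookup(instance_id)
    except KeyError:
        LOG.exception("Instance %s could not be found", instance_id)
        raise
```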

** Affects: nova
 Importance: Undecided
 Assignee: Venkatesh Sampath (venkateshsampath)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Venkatesh Sampath (venkateshsampath)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278849

Title:
  Need more log info for "Instance could not be found" error.

Status in OpenStack Compute (Nova):
  New

Bug description:
  Rarely, when looking up the details of an 'instance_id', we see the
  following error even though the instance pertaining to that
  instance_id is still available and active.

  call: GET /${project_id}/servers/${instance_id}
  HTTP exception thrown: Instance could not be found (404)

  Apart from the above, there is not enough information to trace back
  what exactly caused the 404. A log message with a traceback would
  help us identify the root cause.

  This bug is to add more log messages with traceback for more
  information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278845] [NEW] Disassociate floating ip from instance remove them from ip allocation list

2014-02-11 Thread William Novais
Public bug reported:

Hi,

I'm using Havana, nova (qemu), and neutron.

root@controller:~# neutron --version
2.3.0
root@controller:~# nova --version
2.15.0

If I disassociate a floating IP from an instance using Instances > More
> Disassociate Floating IP, the IP is removed from the floating IPs
list, but if I disassociate it using Access & Security > Floating IPs,
the IP is not released.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278845

Title:
  Disassociate floating ip from instance remove them from ip allocation
  list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi,

  I'm using Havana, nova (qemu), and neutron.

  root@controller:~# neutron --version
  2.3.0
  root@controller:~# nova --version
  2.15.0

  If I disassociate a floating IP from an instance using Instances >
  More > Disassociate Floating IP, the IP is removed from the floating
  IPs list, but if I disassociate it using Access & Security > Floating
  IPs, the IP is not released.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278843] [NEW] Neutron doesn't report using a stale CA certificate

2014-02-11 Thread Piotr Wachowicz
Public bug reported:

It seems that when the CA certificate cached locally by Neutron in
/var/lib/neutron/keystone-signing/ is stale (does not match the current
CA cert used by keystone), it is not possible to start a new instance.
This is understandable. However, the stack-trace error you get while
trying to start an instance in such a situation is hugely misleading:

"ERROR: Error: unsupported operand type(s) for +: 'NoneType' and 'str'"

It's rather tricky to debug the issue.

To reproduce, just redo the pki-setup for keystone on a deployed and
otherwise healthy OpenStack cluster. This will create a new CA cert for
keystone, but neutron-server will be completely oblivious to this fact
and will still insist on using its local copy of the CA cert.

I'm using Havana.
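
A defensive sketch of the failing concatenation (token_url is a
hypothetical helper; the traceback below shows the real client building
self.auth_url + "/tokens" without checking for None):

```python
def token_url(auth_url):
    """Build the keystone token URL, failing loudly when auth_url is unset.

    A None auth_url currently surfaces as the cryptic
    "unsupported operand type(s) for +: 'NoneType' and 'str'".
    """
    if not auth_url:
        raise ValueError(
            "auth_url is not set; check the neutron auth configuration "
            "and the cached keystone signing certificates")
    return auth_url.rstrip("/") + "/tokens"
```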

--
/var/log/nova/compute.log  on the compute node when trying to start a vm

OpenStack (nova:4668) ERROR: Instance failed network setup after 1 attempt(s)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager Traceback (most recent 
call last):
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1238, in 
_allocate_network_async
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 49, in wrapper
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager res = f(self, 
context, *args, **kwargs)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 358, in 
allocate_for_instance
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager LOG.exception(msg, 
port_id)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 323, in 
allocate_for_instance
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager port_req_body)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 392, in 
_populate_neutron_extension_values
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager 
self._refresh_neutron_extensions_cache()
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 376, in 
_refresh_neutron_extensions_cache
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager extensions_list = 
neutron.list_extensions()['extensions']
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 108, in 
with_params
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager ret = 
self.function(instance, *args, **kwargs)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 286, in 
list_extensions
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager return 
self.get(self.extensions_path, params=_params)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1183, in 
get
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager headers=headers, 
params=params)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1168, in 
retry_request
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager headers=headers, 
params=params)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1103, in 
do_request
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager resp, replybody = 
self.httpclient.do_request(action, method, body=body)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/neutronclient/client.py", line 188, in 
do_request
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager self.authenticate()
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File 
"/usr/lib/python2.6/site-packages/neutronclient/client.py", line 224, in 
authenticate
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager token_url = 
self.auth_url + "/tokens"
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager TypeError: unsupported 
operand type(s) for +: 'NoneType' and 'str'
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager 



/var/log/neutron/server.log on the controller when trying to start a vm

OpenStack (neutron:29274) DEBUG: Removing headers from request environment: 
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-
Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role

[Yahoo-eng-team] [Bug 1278828] [NEW] Branch info is shown incorrectly while downloading

2014-02-11 Thread Alexander Gorodnev
Public bug reported:

If the branch name is missing from the config file, the branch name is
shown as 'None' instead of 'master'.

Output:

INFO: @anvil.components.base_install : |-- 
git://github.com/openstack/oslo.config.git
INFO: @anvil.downloader : Downloading 
git://github.com/openstack/oslo.config.git (None) to 
/root/openstack/oslo-config/app.
INFO: @anvil.downloader : Adjusting to tag 1.2.1.
INFO: @anvil.downloader : Removing tags: 1.3.0a0
INFO: @anvil.actions.prepare : Performed 1 downloads.
INFO: @anvil.actions.prepare : Downloading keystone.
INFO: @anvil.components.base_install : Downloading from 1 uris:
INFO: @anvil.components.base_install : |-- 
git://github.com/openstack/keystone.git
INFO: @anvil.downloader : Downloading git://github.com/openstack/keystone.git 
(None) to /root/openstack/keystone/app.
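
A minimal sketch of the suggested fallback (resolve_branch is a
hypothetical helper, not actual anvil code):

```python
def resolve_branch(component_cfg):
    """Return the branch to report and check out.

    When the config omits 'branch' (or sets it to an empty value),
    fall back to 'master' instead of propagating None into log output.
    """
    return component_cfg.get("branch") or "master"
```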

** Affects: anvil
 Importance: Undecided
 Assignee: Alexander Gorodnev (a-gorodnev)
 Status: New

** Changed in: anvil
 Assignee: (unassigned) => Alexander Gorodnev (a-gorodnev)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1278828

Title:
  Branch info is shown incorrectly while downloading

Status in ANVIL for forging OpenStack.:
  New

Bug description:
  If the branch name is missing from the config file, the branch name
  is shown as 'None' instead of 'master'.

  Output:

  INFO: @anvil.components.base_install : |-- 
git://github.com/openstack/oslo.config.git
  INFO: @anvil.downloader : Downloading 
git://github.com/openstack/oslo.config.git (None) to 
/root/openstack/oslo-config/app.
  INFO: @anvil.downloader : Adjusting to tag 1.2.1.
  INFO: @anvil.downloader : Removing tags: 1.3.0a0
  INFO: @anvil.actions.prepare : Performed 1 downloads.
  INFO: @anvil.actions.prepare : Downloading keystone.
  INFO: @anvil.components.base_install : Downloading from 1 uris:
  INFO: @anvil.components.base_install : |-- 
git://github.com/openstack/keystone.git
  INFO: @anvil.downloader : Downloading git://github.com/openstack/keystone.git 
(None) to /root/openstack/keystone/app.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1278828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1207576] Re: python package amqp, should not run in debug mode when nova is in debug mode

2014-02-11 Thread Doug Hellmann
** Changed in: oslo
   Status: Fix Committed => In Progress

** Changed in: oslo
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1207576

Title:
  python package amqp, should not run in debug mode when nova is in
  debug mode

Status in OpenStack Compute (Nova):
  Fix Committed
Status in Oslo - a Library of Common OpenStack Code:
  Won't Fix

Bug description:
  Currently the python package amqp runs in debug mode when nova is in
  debug mode. When specifying debug-level logs in nova, we are
  generally trying to debug nova, not its dependencies. So by default
  amqp and other python dependencies shouldn't run in debug mode when
  nova does.
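
  One way to achieve this is to pin third-party loggers at a higher
  level than the root logger; a sketch (logger names and the helper are
  illustrative, not the actual nova logging setup):

```python
import logging

def setup_logging(debug=False, quiet_libs=("amqp", "qpid.messaging")):
    """Enable debug logging for our own code while keeping chatty
    third-party libraries at INFO, even when debug is on."""
    logging.getLogger().setLevel(logging.DEBUG if debug else logging.INFO)
    for name in quiet_libs:
        # Per-logger levels override the root level for these packages,
        # so their DEBUG records are dropped.
        logging.getLogger(name).setLevel(logging.INFO)
```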

  http://logs.openstack.org/47/36947/5/check/gate-tempest-devstack-vm-
  full/37958/logs/screen-n-sch.txt.gz

  2013-07-31 20:52:43.207 28626 DEBUG amqp [-] Start from server, version: 0.9, 
properties: {u'information': u'Licensed under the MPL.  See 
http://www.rabbitmq.com/', u'product': u'RabbitMQ', u'copyright': u'Copyright 
(C) 2007-2011 VMware, Inc.', u'capabilities': {u'exchange_exchange_bindings': 
True, u'consumer_cancel_notify': True, u'publisher_confirms': True, 
u'basic.nack': True}, u'platform': u'Erlang/OTP', u'version': u'2.7.1'}, 
mechanisms: [u'PLAIN', u'AMQPLAIN'], locales: [u'en_US'] _start 
/usr/local/lib/python2.7/dist-packages/amqp/connection.py:706
  2013-07-31 20:52:43.208 28626 DEBUG amqp [-] Open OK! _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/connection.py:592
  2013-07-31 20:52:43.208 28626 DEBUG amqp [-] using channel_id: 1 __init__ 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:70
  2013-07-31 20:52:43.208 28626 DEBUG amqp [-] Channel open _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:420

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1207576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278825] [NEW] Raising NotImplementedError makes it impossible to use super() properly

2014-02-11 Thread Radomir Dopieralski
Public bug reported:

There is an antipattern in Python, where to define a "virtual" method
people would raise a NotImplementedError exception in the body of the
method. This makes it impossible to use super() properly with that
method and, because of that, to use multiple inheritance, such as
mixins.

Horizon has several places where this is used, including some methods,
such as get_context_data, that are very likely to be touched by various
mixins. Those methods should return some default, empty value, instead
of raising an exception.

A typical implementation of get_context_data() in Django should look
something like this:

def get_context_data(self, **kwargs):
context = super(OverviewTab, self).get_context_data(**kwargs)
context.update({
'overcloud': 1,
'roles': [1, 2, 3],
})
return context

This will break horribly if the superclass' get_context_data raises
NotImplementedError.
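
A base class that returns an empty value instead of raising keeps the
cooperative super() chain intact; a minimal sketch (class names are
illustrative):

```python
class Base(object):
    def get_context_data(self, **kwargs):
        # Returning an empty dict instead of raising NotImplementedError
        # gives mixins a safe anchor for their super() calls.
        return {}

class CountMixin(object):
    def get_context_data(self, **kwargs):
        context = super(CountMixin, self).get_context_data(**kwargs)
        context["count"] = 3
        return context

class OverviewTab(CountMixin, Base):
    def get_context_data(self, **kwargs):
        context = super(OverviewTab, self).get_context_data(**kwargs)
        context["roles"] = [1, 2, 3]
        return context
```

Because Base returns {}, the MRO walk OverviewTab -> CountMixin -> Base
completes and each class contributes its keys; a raising Base would
abort the chain as described above.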

** Affects: horizon
 Importance: Undecided
 Assignee: Radomir Dopieralski (thesheep)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Radomir Dopieralski (thesheep)

** Description changed:

  There is an antipattern in Python, where to define a "virtual" method
  people would raise a NotImplementedError exception in the body of the
  method. This makes it impossible to use super() properly with that
  method and, because of that, to use multiple inheritance, such as
  mixins.
  
  Horizon has several places where this is used, including some methods,
  such as get_context_data, that are very likely to be touched by various
  mixins. Those methods should return some default, empty value, instead
  of raising an exception.
  
  A typical implementation of get_context_data() in Django should look
  something like this:
  
- def get_context_data(self, request, **kwargs):
- context = super(OverviewTab, self).get_context_data(request, **kwargs)
- context.update({
- 'overcloud': 1,
- 'roles': [1, 2, 3],
- })
- return context
+ def get_context_data(self, **kwargs):
+ context = super(OverviewTab, self).get_context_data(**kwargs)
+ context.update({
+ 'overcloud': 1,
+ 'roles': [1, 2, 3],
+ })
+ return context
  
  This will break horribly if the superclass' get_context_data raises
  NotImplementedError.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278825

Title:
  Raising NotImplementedError makes it impossible to use super()
  properly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is an antipattern in Python, where to define a "virtual" method
  people would raise a NotImplementedError exception in the body of the
  method. This makes it impossible to use super() properly with that
  method and, because of that, to use multiple inheritance, such as
  mixins.

  Horizon has several places where this is used, including some methods,
  such as get_context_data, that are very likely to be touched by
  various mixins. Those methods should return some default, empty value,
  instead of raising an exception.

  A typical implementation of get_context_data() in Django should look
  something like this:

  def get_context_data(self, **kwargs):
  context = super(OverviewTab, self).get_context_data(**kwargs)
  context.update({
  'overcloud': 1,
  'roles': [1, 2, 3],
  })
  return context

  This will break horribly if the superclass' get_context_data raises
  NotImplementedError.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278823] [NEW] VPNaaS tests refactoring

2014-02-11 Thread Tatiana Mazur
Public bug reported:

Some VPNaaS tests have redundant code and need to be refactored

** Affects: horizon
 Importance: Wishlist
 Assignee: Tatiana Mazur (tmazur)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Tatiana Mazur (tmazur)

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278823

Title:
  VPNaaS tests refactoring

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Some VPNaaS tests have redundant code and need to be refactored

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278819] [NEW] Incorrect instance count shown in overview page

2014-02-11 Thread Erasmo Isotton
Public bug reported:

Incorrect instance count shown in overview page

Steps to Reproduce the problem
1. Perform provisioning and deletion of the VMs
2. Check the Horizon home page for values
3. Horizon shows 95 active instances
4. VCenter console shows only 20 instances
5. The values are not updated even after more than an hour.


Incorrect behavior
Incorrect data displayed in Horizon

Expected behavior
Data should be accurate to reflect the actual active instances

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278819

Title:
  Incorrect instance count shown in overview page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Incorrect instance count shown in overview page

  Steps to Reproduce the problem
  1. Perform provisioning and deletion of the VMs
  2. Check the Horizon home page for values
  3. Horizon shows 95 active instances
  4. VCenter console shows only 20 instances
  5. The values are not updated even after more than an hour.

  
  Incorrect behavior
  Incorrect data displayed in Horizon

  Expected behavior
  Data should be accurate to reflect the actual active instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278674] Re: Vm doesn't get IP due to deleting another subnet

2014-02-11 Thread shihanzhang
This bug has already been fixed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1278674

Title:
  Vm doesn't get IP due to deleting another subnet

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I found that when a network has two subnets, a VM can't get an IP if I
delete one subnet. You can reproduce this problem with these steps:
  1. create a new network
  2. create subnetA in this network
  3. create a port in subnetA
  4. create subnetB in this network
  5. delete subnetA
  6. create a VM in subnetB; the VM can't get an IP

  I analyse the problem, the reason is that:
  in my environment, the network id is 2914fe0b-721e-4738-b018-71ac4883e6f8, 
then you can see there are two TAP in the namespace 
qdhcp-2914fe0b-721e-4738-b018-71ac4883e6f8

  root@ubuntu-242:/var/log/neutron# ip netns exec qdhcp-2914fe0b-721e-4738-b018-71ac4883e6f8 ip addr show
  10: tapfd90aa02-d9:  mtu 1500 qdisc noqueue state UNKNOWN
      link/ether fa:16:3e:6a:37:8f brd ff:ff:ff:ff:ff:ff
      inet 60.60.60.3/24 brd 60.60.60.255 scope global tapfd90aa02-d9
      inet 169.254.169.254/16 brd 169.254.255.255 scope global tapfd90aa02-d9
      inet 70.70.70.2/24 brd 70.70.70.255 scope global tapfd90aa02-d9
      inet6 fe80::f816:3eff:fe6a:378f/64 scope link
         valid_lft forever preferred_lft forever
  15: lo:  mtu 16436 qdisc noqueue state UNKNOWN
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  37: tapbed116e5-4f:  mtu 1500 qdisc noqueue state UNKNOWN
      link/ether fa:16:3e:96:c7:ec brd ff:ff:ff:ff:ff:ff
      inet 70.70.70.2/24 brd 70.70.70.255 scope global tapbed116e5-4f
      inet 169.254.169.254/16 brd 169.254.255.255 scope global tapbed116e5-4f
      inet6 fe80::f816:3eff:fe96:c7ec/64 scope link
         valid_lft forever preferred_lft forever
  You can see that the IP 70.70.70.2 is set on two taps.
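  The duplicate assignment described above can be detected mechanically.
  A minimal Python sketch (illustrative only, not Neutron code; the sample
  is a condensed, one-line-per-address form of the output above):

```python
# Condensed `ip addr` sample from the report: device name in column 2,
# CIDR address in column 4.
SAMPLE = """\
10: tapfd90aa02-d9    inet 60.60.60.3/24 brd 60.60.60.255 scope global tapfd90aa02-d9
10: tapfd90aa02-d9    inet 70.70.70.2/24 brd 70.70.70.255 scope global tapfd90aa02-d9
37: tapbed116e5-4f    inet 70.70.70.2/24 brd 70.70.70.255 scope global tapbed116e5-4f
"""

def duplicate_addrs(one_line_output):
    """Return addresses that are assigned to more than one interface."""
    seen = {}
    for line in one_line_output.splitlines():
        fields = line.split()
        dev, addr = fields[1], fields[3]  # device name, CIDR address
        seen.setdefault(addr, set()).add(dev)
    return sorted(a for a, devs in seen.items() if len(devs) > 1)

print(duplicate_addrs(SAMPLE))  # → ['70.70.70.2/24']
```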

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1278674/+subscriptions



[Yahoo-eng-team] [Bug 1275062] Re: image location is logged when authentication to store fails

2014-02-11 Thread Thierry Carrez
+1 for impact description

** No longer affects: glance/grizzly

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1275062

Title:
  image location is logged when authentication to store fails

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance havana series:
  In Progress
Status in OpenStack Security Advisories:
  Triaged

Bug description:
  WARNING glance.store [-] Get image  data from {'url':
  u'swift+https://X@my_auth_url.com/v2.0/my-images/,
  'metadata': {}} failed: Auth GET failed: https://my_auth_url.com
  RESP_CODE

  19:13:05.027  ERROR glance.store [-] Glance tried all locations to get
  data for image  but all have failed.
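  The underlying issue is that the store location URL, which can embed
  auth credentials, reaches the log verbatim. A minimal sketch of
  scrubbing the userinfo portion before logging (illustrative only, not
  Glance's actual fix):

```python
from urllib.parse import urlsplit, urlunsplit

def scrub_location(url):
    """Mask the userinfo (credentials) part of a store URL before logging it."""
    parts = urlsplit(url)
    netloc = parts.netloc
    if "@" in netloc:
        # Everything before the last '@' is userinfo and may hold secrets.
        netloc = "***@" + netloc.rsplit("@", 1)[1]
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```

  URLs without embedded credentials pass through unchanged.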

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1275062/+subscriptions



[Yahoo-eng-team] [Bug 980037] Re: Service managers starting keystone-all don't know when its ready

2014-02-11 Thread Jakub Libosvar
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jakub Libosvar (libosvar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/980037

Title:
  Service managers starting keystone-all don't know when its ready

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If starting keystone-all with a service manager (systemd, for
  example), keystone has no way of reporting back to systemd that it is
  ready to serve HTTP requests, so it is possible for systemd to return
  before keystone is ready.

  For example, on Fedora, where the systemd process start-up type is set
  to simple (i.e. just start the process and return):

  > /bin/systemctl stop openstack-keystone.service ; /bin/systemctl start openstack-keystone.service ; /usr/bin/keystone --token keystone_admin_token --endpoint http://127.0.0.1:35357/v2.0/ service-list
  Unable to communicate with identity service: 'NoneType' object has no attribute 'makefile'. (HTTP 400)
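  For reference, the usual mechanism for closing this gap is systemd's
  Type=notify: once it can accept requests, the service sends READY=1 on
  the datagram socket named by NOTIFY_SOCKET. A minimal pure-Python
  sketch of that notification (illustrative only, not the actual
  keystone fix):

```python
import os
import socket

def sd_notify(state="READY=1"):
    """Send a readiness notification to systemd, if running under Type=notify."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not launched by systemd with Type=notify
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract socket namespace
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(state.encode("ascii"), addr)
    finally:
        sock.close()
    return True
```

  The service would call sd_notify() only after its HTTP listeners are
  bound, and the unit file would use Type=notify so that systemctl start
  blocks until then.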

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/980037/+subscriptions



[Yahoo-eng-team] [Bug 1278796] [NEW] Horizon Ceilometer hard-coded availability zone

2014-02-11 Thread David Taylor
Public bug reported:

I spent the last couple of hours trying to figure out why nothing was
showing up under the 'Compute' menu in the 'Resource Usage' panel in
Horizon. I am using a custom availability zone name for my Nova compute
nodes. I am using 'openstack-dashboard' version 2013.2.1-1 on a CentOS
6.5 server. If you look in:

/usr/share/openstack-
dashboard/openstack_dashboard/dashboards/admin/metering/tabs.py

at around line 40, you will see a 'query' object that looks for
instances by availability zone:

query = [{"field": "metadata.OS-EXT-AZ:availability_zone",
  "op": "eq",
  "value": "nova"}]

The ceilometer panel in Horizon should account for the fact that users
may have custom (and possibly multiple) availability zones. You could
add a drop-down menu to the panel that selects from the availability
zones currently in the database, and replace the hard-coded value of
'nova' with the zone chosen there.
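A minimal sketch of the change being suggested: derive the query from a
list of zones instead of pinning the value to "nova". `build_az_queries`
is a hypothetical helper, not existing dashboard code; in Horizon the
zone list would come from the Nova availability-zone API and feed the
proposed drop-down.

```python
def build_az_queries(zones):
    """Return one Ceilometer-style sample query per availability zone."""
    return [
        [{"field": "metadata.OS-EXT-AZ:availability_zone",
          "op": "eq",
          "value": zone}]
        for zone in zones
    ]

# The drop-down selection decides which of these queries the panel runs.
queries = build_az_queries(["nova", "zone-east"])
```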

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278796

Title:
  Horizon Ceilometer hard-coded availability zone

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I spent the last couple of hours trying to figure out why nothing was
  showing up under the 'Compute' menu in the 'Resource Usage' panel in
  Horizon. I am using a custom availability zone name for my Nova
  compute nodes. I am using 'openstack-dashboard' version 2013.2.1-1 on
  a CentOS 6.5 server. If you look in:

  /usr/share/openstack-
  dashboard/openstack_dashboard/dashboards/admin/metering/tabs.py

  at around line 40, you will see a 'query' object that looks for
  instances by availability zone:

  query = [{"field": "metadata.OS-EXT-AZ:availability_zone",
"op": "eq",
"value": "nova"}]

  The ceilometer panel in Horizon should account for the fact that users
  may have custom (and possibly multiple) availability zones. You could
  add a drop-down menu to the panel that selects from the availability
  zones currently in the database, and replace the hard-coded value of
  'nova' with the zone chosen there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278796/+subscriptions



[Yahoo-eng-team] [Bug 1272200] Re: image-create failed, but not return correct messages

2014-02-11 Thread ugvddm
** Project changed: glance => python-glanceclient

** Changed in: python-glanceclient
 Assignee: (unassigned) => ugvddm (271025598-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1272200

Title:
  image-create failed, but not return correct messages

Status in Python client library for Glance:
  New

Bug description:
  When using "glance image-create" with --location pointing at a local
  image path, the command just returns "data".

  Users will be confused about what "data" means; the returned message
  should therefore make clear that --location expects a URL, not a file
  path.
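  A sketch of the kind of client-side check the report asks for
  (`check_location` is a hypothetical helper, not actual glanceclient
  code): reject a --location value that looks like a local path and say
  so explicitly.

```python
from urllib.parse import urlparse

def check_location(location):
    """Accept --location only when it is a URL; local paths get a clear error."""
    if urlparse(location).scheme in ("http", "https", "swift",
                                     "swift+http", "swift+https"):
        return location
    raise ValueError("--location expects a URL, not a local file path; "
                     "use --file to upload local image data")
```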

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1272200/+subscriptions



[Yahoo-eng-team] [Bug 1278739] [NEW] trusts in keystone fail in backend when impersonation is not provided

2014-02-11 Thread Lance Bragstad
Public bug reported:

When creating trusts in Keystone, if 'impersonation' is not provided,
Keystone fails in the backend code. This should probably be handled
at the controller level to be consistent across all backends.

lbragstad@precise64:~/curl-examples$ cat create_trust.json
{
    "trust": {
        "expires_at": "2014-02-27T18:30:59.99Z",
        "project_id": "c7e2b98178e64418bb884929d3611b89",
        "impersonation": true,
        "roles": [
            {
                "name": "admin"
            }
        ],
        "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998",
        "trustor_user_id": "406e6d96a30449069bf4241a00308b23"
    }
}

lbragstad@precise64:~/curl-examples$ cat create_trust_bad.json
{
    "trust": {
        "expires_at": "2014-02-27T18:30:59.99Z",
        "project_id": "c7e2b98178e64418bb884929d3611b89",
        "roles": [
            {
                "name": "admin"
            }
        ],
        "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998",
        "trustor_user_id": "406e6d96a30449069bf4241a00308b23"
    }
}

Using impersonation in  the create_trust.json file returns a trust
successfully:

lbragstad@precise64:~/curl-examples$ curl -si -H "X-Auth-Token:$TOKEN" -H 
"Content-type:application/json" -d @create_trust.json 
http://localhost:5000/v3/OS-TRUST/trusts
HTTP/1.1 201 Created
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 675
Date: Sun, 09 Feb 2014 04:36:56 GMT

{"trust": {"impersonation": true, "roles_links": {"self":
"http://10.0.2.15:5000/v3/OS-
TRUST/trusts/12ce9f7214f04c018384f654f5ea9aa5/roles", "previous": null,
"next": null}, "trustor_user_id": "406e6d96a30449069bf4241a00308b23",
"links": {"self": "http://10.0.2.15:5000/v3/OS-
TRUST/trusts/12ce9f7214f04c018384f654f5ea9aa5"}, "roles": [{"id":
"937488fff5444edb9da1e93d20596d4b", "links": {"self":
"http://10.0.2.15:5000/v3/roles/937488fff5444edb9da1e93d20596d4b"},
"name": "admin"}], "expires_at": "2014-02-27T18:30:59.99Z",
"trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998", "project_id":
"c7e2b98178e64418bb884929d3611b89", "id":
"12ce9f7214f04c018384f654f5ea9aa5"}}

When using the request without impersonation defined I get:

lbragstad@precise64:~/curl-examples$ curl -si -H "X-Auth-Token:$TOKEN" -H "Content-type:application/json" -d @create_trust_bad.json http://localhost:5000/v3/OS-TRUST/trusts
HTTP/1.1 500 Internal Server Error
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 618
Date: Sun, 09 Feb 2014 04:33:08 GMT

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request. (OperationalError) (1048, \"Column
'impersonation' cannot be null\") 'INSERT INTO trust (id,
trustor_user_id, trustee_user_id, project_id, impersonation,
deleted_at, expires_at, extra) VALUES (%s, %s, %s, %s, %s, %s, %s,
%s)' ('b49ac0c7558a4450949c22c840db9794',
'406e6d96a30449069bf4241a00308b23', 'bf3a4c9ef46d44fa9ce57349462b1998',
'c7e2b98178e64418bb884929d3611b89', None, None,
datetime.datetime(2014, 2, 27, 18, 30, 59, 99), '{\"roles\":
[{\"name\": \"admin\"}]}')", "code": 500, "title": "Internal Server
Error"}}


According to the Identity V3 API, 'impersonation' is a requirement when 
creating a trust. 
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md#trusts
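A minimal sketch of the controller-level check being suggested:
validate 'impersonation' before the request reaches any backend, so
every driver returns the same client error instead of a driver-specific
500 (`validate_trust` is hypothetical code, not Keystone's actual
controller):

```python
def validate_trust(trust):
    """Fail fast with a clear error instead of letting the NOT NULL
    'impersonation' column reject the row deep in the SQL backend."""
    if "impersonation" not in trust:
        raise ValueError("'impersonation' is required when creating a trust")
    if not isinstance(trust["impersonation"], bool):
        raise ValueError("'impersonation' must be a boolean")
    return trust
```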

** Affects: keystone
 Importance: Undecided
 Assignee: Lance Bragstad (ldbragst)
 Status: In Progress


** Tags: trusts v3

** Summary changed:

- trusts in keystone fail in driver when impersonation is not provided
+ trusts in keystone fail in backend when impersonation is not provided

** Description changed:


[Yahoo-eng-team] [Bug 1278741] [NEW] resource tracker fails after migration if instance is already tracked on new node

2014-02-11 Thread Pavel Kirpichyov
Public bug reported:

 {u'message': u'\'list\' object has no attribute \'iteritems\'

 Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
     **args)
   File "/usr/lib/python2.7/dist-packages/nova/openstack/com', u'code': 500, u'details': u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 255, in decorated_function
     return function(self, context, *args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2949, in prep_resize
     filter_properties)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2943, in prep_resize
     node)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2911, in _prep_resize
     limits=limits) as claim:
   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 246, in inner
     return f(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 173, in resize_claim
     self._update(elevated, self.compute_node)
   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 428, in _update
     context, self.compute_node, values, prune_stats)
   File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 263, in compute_node_update
     prune_stats)
   File "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 386, in compute_node_update
     prune_stats=prune_stats)
   File "/usr/lib/python2.7/dist-packages/nova/rpcclient.py", line 85, in call
     return self._invoke(self.proxy.call, ctxt, method, **kwargs)

[Yahoo-eng-team] [Bug 1278738] [NEW] trusts in keystone fail in driver when impersonation is not provided

2014-02-11 Thread Lance Bragstad
Public bug reported:

When creating trusts in Keystone, if 'impersonation' is not provided,
Keystone fails in the backend code. This should probably be handled
at the controller level to be consistent across all backends.

lbragstad@precise64:~/curl-examples$ cat create_trust.json
{
    "trust": {
        "expires_at": "2014-02-27T18:30:59.99Z",
        "project_id": "c7e2b98178e64418bb884929d3611b89",
        "impersonation": true,
        "roles": [
            {
                "name": "admin"
            }
        ],
        "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998",
        "trustor_user_id": "406e6d96a30449069bf4241a00308b23"
    }
}

lbragstad@precise64:~/curl-examples$ cat create_trust_bad.json
{
    "trust": {
        "expires_at": "2014-02-27T18:30:59.99Z",
        "project_id": "c7e2b98178e64418bb884929d3611b89",
        "roles": [
            {
                "name": "admin"
            }
        ],
        "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998",
        "trustor_user_id": "406e6d96a30449069bf4241a00308b23"
    }
}

Using impersonation in  the create_trust.json file returns a trust
successfully:

lbragstad@precise64:~/curl-examples$ curl -si -H "X-Auth-Token:$TOKEN" -H 
"Content-type:application/json" -d @create_trust.json 
http://localhost:5000/v3/OS-TRUST/trusts
HTTP/1.1 201 Created
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 675
Date: Sun, 09 Feb 2014 04:36:56 GMT

{"trust": {"impersonation": true, "roles_links": {"self":
"http://10.0.2.15:5000/v3/OS-
TRUST/trusts/12ce9f7214f04c018384f654f5ea9aa5/roles", "previous": null,
"next": null}, "trustor_user_id": "406e6d96a30449069bf4241a00308b23",
"links": {"self": "http://10.0.2.15:5000/v3/OS-
TRUST/trusts/12ce9f7214f04c018384f654f5ea9aa5"}, "roles": [{"id":
"937488fff5444edb9da1e93d20596d4b", "links": {"self":
"http://10.0.2.15:5000/v3/roles/937488fff5444edb9da1e93d20596d4b"},
"name": "admin"}], "expires_at": "2014-02-27T18:30:59.99Z",
"trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998", "project_id":
"c7e2b98178e64418bb884929d3611b89", "id":
"12ce9f7214f04c018384f654f5ea9aa5"}}

When using the request without impersonation defined I get:

lbragstad@precise64:~/curl-examples$ curl -si -H "X-Auth-Token:$TOKEN" -H "Content-type:application/json" -d @create_trust_bad.json http://localhost:5000/v3/OS-TRUST/trusts
HTTP/1.1 500 Internal Server Error
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 618
Date: Sun, 09 Feb 2014 04:33:08 GMT

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request. (OperationalError) (1048, \"Column
'impersonation' cannot be null\") 'INSERT INTO trust (id,
trustor_user_id, trustee_user_id, project_id, impersonation,
deleted_at, expires_at, extra) VALUES (%s, %s, %s, %s, %s, %s, %s,
%s)' ('b49ac0c7558a4450949c22c840db9794',
'406e6d96a30449069bf4241a00308b23', 'bf3a4c9ef46d44fa9ce57349462b1998',
'c7e2b98178e64418bb884929d3611b89', None, None,
datetime.datetime(2014, 2, 27, 18, 30, 59, 99), '{\"roles\":
[{\"name\": \"admin\"}]}')", "code": 500, "title": "Internal Server
Error"}}

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: v3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1278738

Title:
  trusts in keystone fail in driver when impersonation is not provided

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When creating trusts in Keystone, if 'impersonation' is not provided,
  Keystone fails in the backend code. This should probably be handled
  at the controller level to be consistent across all backends.

  lbragstad@precise64:~/curl-examples$ cat create_trust.json
  {
      "trust": {
          "expires_at": "2014-02-27T18:30:59.99Z",
          "project_id": "c7e2b98178e64418bb884929d3611b89",
          "impersonation": true,
          "roles": [
              {
                  "name": "admin"
              }
          ],
          "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998",
          "trustor_user_id": "406e6d96a30449069bf4241a00308b23"
      }
  }

  lbragstad@precise64:~/curl-examples$ cat create_trust_bad.json
  {
      "trust": {
          "expires_at": "2014-02-27T18:30:59.99Z",
          "project_id": "c7e2b98178e64418bb884929d3611b89",
          "roles": [
              {
                  "name": "admin"
              }
          ],
          "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998",
          "trustor_user_id": "406e6d96a30449069bf4241a00308b23"
      }
  }

  Using impersonation in  the create_trust.json file returns a trust
  successfully:

  lbragstad@precise64:~/curl-examples$ curl -si -H "X-Auth-Token:$TOKEN" -H 
"Content-type:application/json" -d @create_trust.json 
http://localhost:5000/v3/OS-TRUST/trusts
  HTTP/1.1 201 Created
  Vary: X-Auth-Token
  Content-Type: application/json